
Re: Flask vs paging mempool - Was: [xen-unstable test] 174809: regressions - trouble: broken/fail/pass


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  • Date: Mon, 21 Nov 2022 07:14:23 -0500
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Henry Wang <Henry.Wang@xxxxxxx>, Anthony Perard <anthony.perard@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jason Andryuk <jandryuk@xxxxxxxxx>, Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Delivery-date: Mon, 21 Nov 2022 12:14:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 11/21/22 03:04, Jan Beulich wrote:
> On 20.11.2022 12:08, Daniel P. Smith wrote:
>> On 11/18/22 16:10, Jason Andryuk wrote:
>>> On Fri, Nov 18, 2022 at 12:22 PM Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx> wrote:
>>>> For Flask, we need new access vectors because this is a common
>>>> hypercall, but I'm unsure how to interlink it with x86's shadow
>>>> control.  This will require a bit of pondering, but it is probably
>>>> easier to just leave them unlinked.

>>> It sort of seems like it could go under domain2 since domain/domain2
>>> have most of the memory stuff, but it is non-PV.  shadow has its own
>>> set of hooks.  It could go in hvm which already has some memory stuff.

>> Since the new hypercall is for managing a memory pool for any domain,
>> though HVM is the only one supported today, imho it belongs under
>> domain/domain2.

>> Something to consider is that there is another guest memory pool that
>> is managed, the PoD pool, which has a dedicated privilege for it. This
>> leads me to the question of whether managing the PoD pool and the
>> paging pool size should be separate accesses or fall under the same
>> access. IMHO it should be the latter, as I can see no benefit in
>> disaggregating access to the PoD pool and the paging pool. In fact, I
>> find myself thinking in terms of whether the managing domain should
>> have control over the size of any backing memory pool for the target
>> domain. I am not seeing any benefit to discriminating between which
>> backing memory pool a managing domain is able to manage. With that
>> said, I am open to being convinced otherwise.

> Yet the two pools are of quite different nature: The PoD pool is memory
> the domain itself gets to use (more precisely it is memory temporarily
> "stolen" from the domain). The paging pool, otoh, is memory we need to
> make the domain actually function, without the guest having access to
> that memory.

The question is not necessarily what each pool's exact purpose is, but
who will need control over its size. If one takes a coarser view and
says these memory pools relate to how a domain consumes memory, then it
follows that the only entity needing access is the entity granted
control/management over the domain's memory usage. In the end there
will still be an access check for both calls; the question is whether
it makes any sense to differentiate between them in the security model.
As I just outlined, IMHO there is not, but I am open to hearing why
they would need to be differentiated.
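
To make that concrete, here is a minimal sketch of what sharing the
existing PoD accesses could look like in flask_domctl()
(xen/xsm/flask/hooks.c). The subop names
XEN_DOMCTL_{get,set}_paging_mempool_size are taken from Andrew's
series; mapping them onto the existing getpodtarget/setpodtarget
vectors is only my assumption of how the sharing would be wired, not
what any posted patch does:

    case XEN_DOMCTL_get_paging_mempool_size:
        /* Assumption: same access as reading the PoD target. */
        return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__GETPODTARGET);

    case XEN_DOMCTL_set_paging_mempool_size:
        /* Assumption: same access as setting the PoD target. */
        return current_has_perm(d, SECCLASS_DOMAIN, DOMAIN__SETPODTARGET);

Under that wiring no new vectors are needed: class domain already
carries getpodtarget/setpodtarget in
xen/xsm/flask/policy/access_vectors, so any policy that grants control
of the PoD pool would transparently cover the paging pool as well.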

v/r,
dps
