
Re: Flask vs paging mempool - Was: [xen-unstable test] 174809: regressions - trouble: broken/fail/pass


  • To: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 21 Nov 2022 09:04:12 +0100
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Henry Wang <Henry.Wang@xxxxxxx>, Anthony Perard <anthony.perard@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jason Andryuk <jandryuk@xxxxxxxxx>, Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Delivery-date: Mon, 21 Nov 2022 08:04:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 20.11.2022 12:08, Daniel P. Smith wrote:
> On 11/18/22 16:10, Jason Andryuk wrote:
>> On Fri, Nov 18, 2022 at 12:22 PM Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx> 
>> wrote:
>>> For Flask, we need new access vectors because this is a common
>>> hypercall, but I'm unsure how to interlink it with x86's shadow
>>> control.  This will require a bit of pondering, but it is probably
>>> easier to just leave them unlinked.
>>
>> It sort of seems like it could go under domain2 since domain/domain2
>> have most of the memory stuff, but it is non-PV.  shadow has its own
>> set of hooks.  It could go in hvm which already has some memory stuff.
> 
> Since the new hypercall is for managing a memory pool for any domain, 
> even though HVM is the only type supported today, IMHO it belongs under 
> domain/domain2.
> 
> Something to consider is that there is another managed guest memory 
> pool, the PoD pool, which already has a dedicated privilege. This leads 
> me to the question of whether access to manage the PoD pool and access 
> to manage the paging pool size should be separate, or whether they 
> should fall under the same access. IMHO it should be the latter, as I 
> can see no benefit in disaggregating access to the PoD pool from access 
> to the paging pool. In fact, the question I really find myself asking 
> is whether the managing domain should have control over the size of any 
> backing memory pool for the target domain; I see no benefit in 
> discriminating between which backing memory pools a managing domain may 
> manage. With that said, I am open to being convinced otherwise.
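
Picking up the placement question from the quoted discussion: purely for
illustration, wiring the new domctls (as named in the series under
discussion) under domain2 might look like the sketch below in
xen/xsm/flask/hooks.c. The vector name, and hence the generated
DOMAIN2__SET_PAGING_MEMPOOL constant, is made up here and would need a
matching entry in policy/access_vectors, alongside where the PoD pool's
existing setpodtarget/getpodtarget vectors sit in class domain.

/* Sketch only, not the actual patch: assumes a new "set_paging_mempool"
 * vector in class domain2 of xen/xsm/flask/policy/access_vectors, from
 * which DOMAIN2__SET_PAGING_MEMPOOL would be generated. */
static int flask_domctl(struct domain *d, int cmd)
{
    switch ( cmd )
    {
    /* ... existing cases ... */

    case XEN_DOMCTL_get_paging_mempool_size:
    case XEN_DOMCTL_set_paging_mempool_size:
        return current_has_perm(d, SECCLASS_DOMAIN2,
                                DOMAIN2__SET_PAGING_MEMPOOL);

    default:
        /* ... remaining cases unchanged ... */
        return avc_unknown_permission("domctl", cmd);
    }
}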

Yet the two pools are quite different in nature: The PoD pool is memory
the domain itself gets to use (more precisely, it is memory temporarily
"stolen" from the domain). The paging pool, on the other hand, is memory
we need in order to make the domain actually function, without the guest
having access to that memory.
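
For concreteness, the two pools are also sized through entirely separate
interfaces, which mirrors that distinction. Below is a minimal
toolstack-side sketch: xc_domain_set_pod_target() is the existing
libxenctrl call behind XENMEM_set_pod_target, while the
xc_set_paging_mempool_size() wrapper name for the new domctl is only an
assumption made for this sketch.

/* Illustration only, not part of the series under discussion. */
#include <xenctrl.h>

static int size_domain_pools(xc_interface *xch, uint32_t domid,
                             uint64_t pod_target_pages,
                             uint64_t paging_pool_bytes)
{
    /* PoD pool: guest-usable memory temporarily withheld from the
     * domain, adjusted via the XENMEM_set_pod_target memory op. */
    int rc = xc_domain_set_pod_target(xch, domid, pod_target_pages,
                                      NULL, NULL, NULL);
    if ( rc )
        return rc;

    /* Paging pool: hypervisor-side memory backing the domain's
     * shadow/HAP (and IOMMU) page tables, never visible to the guest;
     * sized via the new domctl (wrapper name assumed). */
    return xc_set_paging_mempool_size(xch, domid, paging_pool_bytes);
}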

Jan



 

