
Re: [PATCH 1/4] xen: Introduce non-broken hypercalls for the p2m pool size


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 16 Nov 2022 09:26:58 +0100
  • Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Henry Wang <Henry.Wang@xxxxxxx>, Anthony Perard <anthony.perard@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, George Dunlap <dunlapg@xxxxxxxxx>
  • Delivery-date: Wed, 16 Nov 2022 08:27:37 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.11.2022 02:19, Stefano Stabellini wrote:
> On Fri, 28 Oct 2022, George Dunlap wrote:
>> On Thu, Oct 27, 2022 at 8:12 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>       On 26.10.2022 21:22, Andrew Cooper wrote:
>>       > On 26/10/2022 14:42, Jan Beulich wrote:
>>
>>       > paging isn't a great name.  While it's what we call the
>>       > infrastructure in x86, it has nothing to do with paging things out
>>       > to disk (the thing everyone associates the name with), nor the
>>       > xenpaging infrastructure (Xen's version of what OS paging supposedly
>>       > means).
>>
>>       Okay, "paging" can be somewhat misleading. But "p2m" also doesn't fit
>>       the use(s) on x86. Yet we'd like to use a name clearly better than the
>>       previous (and yet more wrong/misleading) "shadow". I have to admit that
>>       I can't think of any other sensible name, and among the ones discussed
>>       I still think "paging" is the one coming closest despite the
>>       generally different meaning of the word elsewhere.
>>
>>
>> Inside the world of operating systems / hypervisors, "paging" has always 
>> meant "things related to a pagetable"; this includes "paging out
>> to disk".  In fact, the latter already has a perfectly good name -- "swap" 
>> (e.g., swap file, swappiness, hypervisor swap).
>>
>> Grep for "paging" inside of Xen.  We have the paging lock, paging modes, 
>> nested paging, and so on.  There's absolutely no reason to start
>> thinking of "paging" as exclusively meaning "hypervisor swap".
>>  
>> [ A bunch of stuff about using bytes as a unit size]
>>
>>       > This is going to be a recurring theme through fixing the ABIs.  It's
>>       > one of several areas where there is objectively one right answer,
>>       > both in terms of ease of use and compatibility with future
>>       > circumstances.
>>
>>       Well, I wouldn't say using whatever base granularity as a unit is
>>       "objectively" less right.
>>
>>
>> Personally I don't think either bytes or pages has a particular advantage:
>>
>> * Using bytes
>>  - Advantage: Can always use the same number regardless of the underlying
>>    page size
>>  - Disadvantage: "Trap" where, if you forget to check the page size, you
>>    might accidentally pass an invalid input.  Or, to put it differently,
>>    most "reasonable-looking" numbers are actually invalid (since most
>>    numbers aren't page-aligned).
>> * Using pages
>>  - Advantage: No need to check page alignment in the hypervisor, no
>>    accidentally invalid input
>>  - Disadvantage: Caller must check the page size and do a shift on every
>>    call
>>
>> What would personally tip me one way or the other is consistency with other 
>> hypercalls.  If most of our hypercalls (or even most of our MM
>> hypercalls) use bytes, then I'd lean towards bytes.  Whereas if most of our 
>> hypercalls use pages, I'd lean towards pages.
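
[Editorial sketch: to make the trade-off above concrete, here is a minimal
C illustration of the validation each convention forces onto the hypervisor
or the caller. All names are hypothetical and a 4K page size is assumed;
this is not the interface proposed in this series.]

    #include <stdbool.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12                   /* assumed 4K pages */
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* Stand-in for the real pool resize logic. */
    static bool resize_pool(uint64_t frames)
    {
        return frames != 0;
    }

    /* Bytes as the unit: the hypervisor must reject unaligned values. */
    static bool set_pool_size_bytes(uint64_t bytes)
    {
        if ( bytes & (PAGE_SIZE - 1) )
            return false;                   /* most byte values are invalid */
        return resize_pool(bytes >> PAGE_SHIFT);
    }

    /* Pages as the unit: every value is well formed, but the caller must
     * know the hypervisor's PAGE_SHIFT and do the shift before calling. */
    static bool set_pool_size_pages(uint64_t pages)
    {
        return resize_pool(pages);
    }

    int main(void)
    {
        bool a = set_pool_size_bytes(8UL << PAGE_SHIFT); /* ok: aligned */
        bool b = set_pool_size_bytes(12345);             /* rejected */
        bool c = set_pool_size_pages(8);                 /* always valid */
        return (a && !b && c) ? 0 : 1;
    }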
> 
> 
> Joining the discussion late to try to move things forward.
> 
> Let me preface this by saying that I don't have a strong feeling either
> way, but I think it would be clearer to use "bytes" instead of "pages"
> as the argument. The reason is that with pages you are never sure of
> the actual granularity. Is it 4K? 16K? 64K? Especially considering that
> hypervisor pages can be a different size than guest pages. In theory
> you could have a situation where Xen uses 4K, Dom0 uses 16K and domU
> uses 64K, or any combination of the three. With bytes, at least you
> know the actual size.
> 
> If we use "bytes" as the argument, then it also makes sense not to use
> the word "pages" in the hypercall name.
> 
> That said, any name would work and both bytes and pages would work, so
> I would leave it to the contributor who is doing the work to choose.
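
[Editorial sketch of the granularity point above, from the caller's side:
with bytes as the unit, a toolstack built for a 16K-page dom0 and a
hypervisor using 4K pages internally agree on the request without
consulting each other's page size. The wrapper name is hypothetical.]

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed 16K guest page size, while Xen may use 4K internally.  A
     * byte count means the same thing to both sides; a page count would
     * first need a "whose pages?" agreement. */
    #define GUEST_PAGE_SIZE (16UL << 10)

    int main(void)
    {
        uint64_t pool_bytes = 256 * GUEST_PAGE_SIZE;    /* 4 MiB exactly */

        /* Hypothetical hypercall wrapper taking bytes:
         *   set_paging_pool_size(domid, pool_bytes);
         */
        printf("requesting %" PRIu64 " bytes\n", pool_bytes);
        return 0;
    }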

FAOD: There was no suggestion to use "pages" in the name; it was "paging"
which was suggested.

Jan