
Re: [Xen-devel] [PATCH v8 15/15] xen: add new Xen cpuid node for max address width info



On 20/09/17 17:42, Jan Beulich wrote:
>>>> On 20.09.17 at 14:58, <jgross@xxxxxxxx> wrote:
>> On 20/09/17 14:18, Jan Beulich wrote:
>>>>>> On 20.09.17 at 08:34, <jgross@xxxxxxxx> wrote:
>>>> On very large hosts a guest needs to know whether it will have to
>>>
>>> ... a PV guest ...
>>
>> What about a HVM guest with (potentially) more than 16TB?
> 
> Such a guest knows how much memory it has without querying Xen.
> 
>>>> handle frame numbers larger than 32 bits in order to select the
>>>> appropriate grant interface version.
>>>>
>>>> Add a new Xen-specific CPUID node to contain the maximum guest
>>>> address width
>>>
>>> "guest address width" is ambiguous here, the more when looking at
>>> what you actually return. We should no longer allow ourselves to
>>> mix up the different address spaces.
>>
>> I've chosen "guest address width" by analogy with the "guest frame
>> number" we already have: it is MFN-based for PV and PFN-based for HVM
>> (and ARM). I'm open to a better name.
> 
> If the interface is needed for more than just PV, then the term is
> likely fine. But for a PV-only interface I'd prefer it to be "machine
> address".
> 
>>> The limit you want to report
>>> here is that in MFN space, which ought to be of no relevance to
>>> HVM guests. Therefore I'm against uniformly exposing this (as much
>>> as almost no other host property should have any relevance for
>>> HVM guests), and would instead like to see a PV-only leaf just like
>>> we already have a HVM-only one.
>>
>> As said above: a HVM guest needs to know whether it will have to deal
>> with frame numbers >32 bits, too.
>>
>> For HVM guests this would just be a hint that the host might be large
>> enough for this to happen; even today a HVM guest could in theory
>> reorganize its memory map to place parts of its memory above the 16TB
>> boundary even with only a rather small amount of memory. But that
>> would then be the guest's own problem.
> 
> A HVM guest booted with less than 16TB and then being pushed
> up beyond that boundary would still know in advance that this
> could happen - the SRAT table would tell it what hotplug regions
> there are.

Okay, let's go that route then.

HVM guests need to query the current memory map and/or the SRAT table in
order to decide which grant interface to use, while PV guests have to use
the new CPUID leaf, which will be a PV-specific one. In both cases the
question is the same: can frame numbers exceed 32 bits, i.e. (with 4kB
pages) can memory sit above the 16TB boundary?
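
For illustration, here is a minimal sketch (in Linux-kernel style C) of
what that selection could look like. The PV leaf number (base + 5), the
choice of EBX, the 0xff width mask and the helper's name are assumptions
made up for this sketch, not the ABI the patch defines; xen_pv_domain(),
xen_cpuid_base(), cpuid_eax()/cpuid_ebx() and max_possible_pfn are
existing Linux helpers (header paths approximate).

#include <linux/types.h>
#include <linux/memblock.h>      /* max_possible_pfn (e820 + SRAT hotplug) */
#include <xen/xen.h>             /* xen_pv_domain() */
#include <asm/xen/hypervisor.h>  /* xen_cpuid_base() */
#include <asm/processor.h>       /* cpuid_eax(), cpuid_ebx() */

/*
 * Sketch only: decide whether grant table interface v2 is needed.
 * Assumes a hypothetical PV-only CPUID leaf at <xen base> + 5 that
 * reports the maximum machine address width in the low 8 bits of EBX.
 */
static bool gnttab_want_v2(void)
{
	if (xen_pv_domain()) {
		uint32_t base = xen_cpuid_base(); /* 0x40000000 + 0x100 * n */

		/* EAX of the base leaf is the largest supported leaf. */
		if (cpuid_eax(base) < base + 5)
			return false;	/* Leaf absent: old host, v1 suffices. */

		/* MFNs exceed 32 bits iff addresses exceed 32 + PAGE_SHIFT bits. */
		return (cpuid_ebx(base + 5) & 0xff) > 32 + PAGE_SHIFT;
	}

	/*
	 * HVM: the highest PFN reachable via the memory map plus the
	 * SRAT hotplug regions decides instead, as discussed above.
	 */
	return !!(max_possible_pfn >> 32);
}

With 4kB pages 32 + PAGE_SHIFT is 44, so the PV check fires exactly when
machine addresses can cross the 16TB (2^44) boundary.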


Juergen
