
Re: [Xen-devel] [PATCH] include/public: add new elf note for support of huge physical addresses



>>> On 14.08.17 at 12:35, <jgross@xxxxxxxx> wrote:
> On 14/08/17 12:29, Jan Beulich wrote:
>>>>> On 14.08.17 at 12:21, <jgross@xxxxxxxx> wrote:
>>> Current pv guests will only see physical addresses up to 46 bits wide.
>>> In order to be able to run on a host supporting 5 level paging and to
>>> make use of any possible memory page there, physical addresses with up
>>> to 52 bits have to be supported.
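For scale, the two address widths at issue here work out as follows
(simple arithmetic, not part of the quoted patch):

    2^46 bytes = 64 TiB   (the current PV limit; matches the "low 64TB"
                           figure mentioned later in this thread)
    2^52 bytes =  4 PiB   (the full physical address space reachable
                           with 5-level paging)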
>> 
>> Is this a Xen shortcoming or a Linux one (I assume the latter)?
> 
> It is a shortcoming of the Xen pv interface.

Please be more precise: Where in the interface do we have a
restriction to 46 bits?

>>> --- a/xen/include/public/elfnote.h
>>> +++ b/xen/include/public/elfnote.h
>>> @@ -212,9 +212,18 @@
>>>  #define XEN_ELFNOTE_PHYS32_ENTRY 18
>>>  
>>>  /*
>>> + * Maximum physical address size the kernel can handle.
>>> + *
>>> + * All memory of the PV guest must be allocated below this boundary,
>>> + * as the guest kernel can't handle page table entries with MFNs referring
>>> + * to memory above this value.
>>> + */
>>> +#define XEN_ELFNOTE_MAXPHYS_SIZE 19
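To illustrate what a guest kernel would have to do to advertise the
note, here is a minimal, self-contained C sketch using the standard
Elf_Note layout with the "Xen" owner string and a 64-bit descriptor.
The structure name and the 52-bit value are illustrative assumptions,
not taken from the patch:

    #include <stdint.h>

    #define XEN_ELFNOTE_MAXPHYS_SIZE 19

    /* Standard Elf_Note layout: name and descriptor are 4-byte
     * aligned; "Xen\0" is already 4 bytes, so no padding is needed. */
    struct xen_elfnote_maxphys {
        uint32_t namesz;   /* sizeof("Xen") incl. NUL = 4 */
        uint32_t descsz;   /* sizeof(uint64_t) = 8 */
        uint32_t type;     /* XEN_ELFNOTE_MAXPHYS_SIZE */
        char     name[4];  /* "Xen" */
        uint64_t desc;     /* supported physical address width, in bits */
    };

    static const struct xen_elfnote_maxphys maxphys_note
        __attribute__((used, section(".note.Xen"), aligned(4))) = {
        .namesz = 4,
        .descsz = 8,
        .type   = XEN_ELFNOTE_MAXPHYS_SIZE,
        .name   = "Xen",
        .desc   = 52,  /* illustrative: kernel can handle 52-bit MFNs */
    };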
>> 
>> Without use in the hypervisor or tools I don't see what good
>> introducing this note will do.
> 
> The Linux kernel could make use of it from e.g. kernel 4.14 on. So in
> case Xen supports 5-level paging hosts, let's say in Xen 4.12, it
> could run Linux pv guests with kernel 4.14 making use of high memory
> addresses.
> 
> In case we don't define the note (or do it rather late) pv guests would
> have to be restricted to the low 64TB of host memory.
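To make the intended consumer side concrete, a hypothetical sketch of
how the hypervisor or toolstack might apply the note's value when
building a domain; the helper name and the fallback default are
assumptions for illustration, not existing Xen code:

    #include <stdint.h>

    /* Hypothetical: given the note's value (0 if the note was absent),
     * return the highest MFN the guest's page table entries can refer
     * to, so memory allocation can be capped accordingly. */
    static uint64_t pv_max_mfn(uint64_t maxphys_bits)
    {
        if ( maxphys_bits == 0 )
            maxphys_bits = 46;  /* note absent: today's 64TB limit */
        return (1ULL << (maxphys_bits - 12)) - 1;  /* 4KiB frames */
    }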

No matter what you say here - I can't see how defining the note
alone will help.

Jan

