
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address



On 20/09/17 19:15, Julien Grall wrote:
> Hi Juergen,
> 
> On 20/09/17 15:33, Juergen Gross wrote:
>> On 20/09/17 16:24, Julien Grall wrote:
>>> On 20/09/17 14:08, Juergen Gross wrote:
>>>> On 20/09/17 14:51, Julien Grall wrote:
>>>>> Hi Juergen,
>>>>>
>>>>> Sorry for the late comment.
>>>>>
>>>>> On 20/09/17 07:34, Juergen Gross wrote:
>>>>>> Add a function for obtaining the highest possible physical memory
>>>>>> address of the system. This value is influenced by:
>>>>>>
>>>>>> - hypervisor configuration (CONFIG_BIGMEM)
>>>>>> - processor capability (max. addressable physical memory)
>>>>>> - memory map at boot time
>>>>>> - memory hotplug capability
>>>>>>
>>>>>> The value is especially needed for dom0 to size the grant frame
>>>>>> limits of guests, and for PV domains to select the grant interface
>>>>>
>>>>> Why limit this to PV domains? Arm domains may also need to switch
>>>>> to another interface, because v1 only supports 32-bit GFNs.
>>>>
>>>> Right. And I just used that reasoning for an answer to Jan. :-)
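
To make the v1 limitation concrete: a guest would end up doing
something like the sketch below, where xen_get_max_gfn() is just a
placeholder name for the new interface, not an existing call.

#include <stdint.h>

/*
 * Sketch only: pick the grant table interface version from the
 * highest possible guest frame number.  grant_entry_v1_t stores the
 * frame in a uint32_t, so any GFN above 32 bits forces v2.
 */
static unsigned int select_gnttab_version(void)
{
    uint64_t max_gfn = xen_get_max_gfn();   /* placeholder helper */

    return (max_gfn > UINT32_MAX) ? 2 : 1;
}
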
>>>>
>>>>>
>>>>>> version to use.
>>>>>>
>>>>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>>>>
>>>>> [...]
>>>>>
>>>>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>>>>>> index cd6dfb54b9..6aa8cba5e0 100644
>>>>>> --- a/xen/include/asm-arm/mm.h
>>>>>> +++ b/xen/include/asm-arm/mm.h
>>>>>> @@ -376,6 +376,11 @@ static inline void put_page_and_type(struct page_info *page)
>>>>>>  
>>>>>>  void clear_and_clean_page(struct page_info *page);
>>>>>>  
>>>>>> +static inline unsigned long arch_get_upper_mfn_bound(void)
>>>>>> +{
>>>>>> +    return 0;
>>>>>> +}
>>>>>
>>>>> I am not sure I understand the Arm implementation, given the
>>>>> description in the commit message.
>>>>>
>>>>> The guest layout is completely separate from the host layout. It might
>>>>> be that all the memory is below 40 bits on the host, but this does not
>>>>> preclude the guest from having memory above 40 bits (the hardware
>>>>> might support, for instance, up to 48 bits).
>>>>
>>>> Who is setting up the memory map for the guest then?
>>>
>>> The memory map is static at the moment and described in
>>> public/arch-arm.h. The guest is not allowed to rely on that layout and
>>> should discover it through ACPI/DT.
>>
>> Is there any memory hotplug possible (host level, guest level)?
> 
> It is not implemented at the moment.
> 
>>
>>> There are two banks of memory for the guest (whether both are used
>>> depends on the amount of memory requested by the user):
>>>      - 3GB @ 1GB
>>>      - 1016GB @ 8GB
>>>
>>> But the guest would be free to use the populate physmap hypercall
>>> (XENMEM_populate_physmap) to allocate memory anywhere in the guest
>>> physical address space.
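
For reference, the static layout in public/arch-arm.h spells those
banks out roughly like this (paraphrased, not a verbatim copy of the
header):

/* Guest RAM banks, per xen/include/public/arch-arm.h */
#define GUEST_RAM0_BASE   0x0040000000ULL   /* 3GB of RAM @ 1GB */
#define GUEST_RAM0_SIZE   0x00c0000000ULL
#define GUEST_RAM1_BASE   0x0200000000ULL   /* 1016GB of RAM @ 8GB */
#define GUEST_RAM1_SIZE   0xfe00000000ULL
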
>>
>> Okay, so this is similar to x86 HVM then.
> 
> You could compare an Arm guest to PVH.
> 
>>
>>> For Arm32, the maximum IPA (Intermediate Physical Address aka guest
>>> physical address on Xen) we currently support is always 40 bits.
>>>
>>> For Arm64, this ranges from 32 bits to 48 bits. New hardware can support
>>> up to 52 bits.
>>
>> I guess this information is included in some tables like ACPI or DT?
> 
> No. On Arm64, you can deduce the maximum size from ID_AA64MMFR0_EL1.
> But the hypervisor would be free to limit the number of guest physical
> address bits, although it could never be higher than the physical
> address range supported by the hardware.
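
For completeness, deducing it looks roughly like the sketch below
(only usable at EL1 or above; PARange encodings per the Arm ARM):

#include <stdint.h>

/*
 * Sketch: translate ID_AA64MMFR0_EL1.PARange (bits [3:0]) into the
 * number of supported physical address bits:
 * 0 -> 32, 1 -> 36, 2 -> 40, 3 -> 42, 4 -> 44, 5 -> 48, 6 -> 52.
 */
static unsigned int pa_range_bits(void)
{
    static const unsigned int bits[] = { 32, 36, 40, 42, 44, 48, 52 };
    uint64_t mmfr0;
    unsigned int fld;

    asm volatile("mrs %0, ID_AA64MMFR0_EL1" : "=r" (mmfr0));
    fld = mmfr0 & 0xf;

    return fld < sizeof(bits) / sizeof(bits[0]) ? bits[fld] : 52;
}
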

Okay, so we have no need for an additional interface on ARM, right?
It can all be handled via the existing interfaces.

Juergen

