
Re: [Xen-devel] Design doc of adding ACPI support for arm64 on Xen - version 2



On 11/08/15 16:19, Ian Campbell wrote:
> On Tue, 2015-08-11 at 16:11 +0100, Julien Grall wrote:
>> On 11/08/15 15:59, Ian Campbell wrote:
>>> On Tue, 2015-08-11 at 15:51 +0100, David Vrabel wrote:
>>>> On 11/08/15 15:12, Ian Campbell wrote:
>>>>> On Fri, 2015-08-07 at 10:11 +0800, Shannon Zhao wrote:
>>>>>>
>>>>> [...]
>>>>>> 3. Dom0 gets grant table and event channel irq information
>>>>>> -----------------------------------------------------------
>>>>>> As said above, we assign the hypervisor_id to be "XenVMM" to tell
>>>>>> Dom0 that it runs on the Xen hypervisor.
>>>>>>
>>>>>> For the grant table, add two new HVM_PARAMs:
>>>>>> HVM_PARAM_GNTTAB_START_ADDRESS and HVM_PARAM_GNTTAB_SIZE.
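
(For reference, a minimal sketch of how Dom0 could consume those two
parameters once they exist. The HVM_PARAM_GNTTAB_* constants are only the
names proposed in the design doc and have no values assigned yet, and
xen_get_gnttab_region() is just an illustrative name; hvm_get_parameter()
is the existing Linux wrapper around HVMOP_get_param.)

#include <linux/types.h>
#include <xen/hvm.h>                      /* hvm_get_parameter() */
#include <xen/interface/hvm/params.h>     /* would gain the two new params */

static int xen_get_gnttab_region(uint64_t *start, uint64_t *size)
{
    int rc;

    /* Proposed parameter: guest physical address of the unused region
     * reserved for mapping grant table frames. */
    rc = hvm_get_parameter(HVM_PARAM_GNTTAB_START_ADDRESS, start);
    if (rc)
        return rc;

    /* Proposed parameter: size of that region in bytes. */
    return hvm_get_parameter(HVM_PARAM_GNTTAB_SIZE, size);
}
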
>>>>>
>>>>> The reason we expose this range is essentially to allow OS authors
>>>>> to take a short cut by telling them about an IPA range which is
>>>>> unused, so it is available for remapping the grant table into. On
>>>>> x86 there is a BAR on the Xen platform PCI device which serves a
>>>>> similar purpose.
>>>>>
>>>>> IIRC somebody (perhaps David V, CCd) had proposed at some point to
>>>>> make it so that Linux was able to pick such an IPA itself by
>>>>> examining the memory map or by some other scheme.
>>>>
>>>> PVH in Linux uses ballooned pages which are vmap()'d into a virtually
>>>> contiguous region.
>>>>
>>>> See xlated_setup_gnttab_pages().
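
For the benefit of the ARM side of the thread, a rough sketch of what that
function does (paraphrased from memory rather than copied verbatim, error
handling abbreviated; the third argument to alloc_xenballooned_pages() is
the highmem flag, if I remember the prototype right):

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <xen/balloon.h>
#include <xen/grant_table.h>

static int __init xlated_setup_gnttab_pages(void)
{
    unsigned int i, nr = gnttab_max_grant_frames();
    struct page **pages;
    xen_pfn_t *pfns;
    void *vaddr;
    int rc;

    pages = kcalloc(nr, sizeof(pages[0]), GFP_KERNEL);
    pfns = kcalloc(nr, sizeof(pfns[0]), GFP_KERNEL);
    if (!pages || !pfns) {
        kfree(pages);
        kfree(pfns);
        return -ENOMEM;
    }

    /* Take nr frames out of the balloon: their GFNs become unused slots
     * in the guest physmap which Xen can later back with grant frames. */
    rc = alloc_xenballooned_pages(nr, pages, false /* lowmem */);
    if (rc)
        return rc;

    for (i = 0; i < nr; i++)
        pfns[i] = page_to_pfn(pages[i]);

    /* The frames are scattered; vmap() them so the core grant table
     * code sees one virtually contiguous region. */
    vaddr = vmap(pages, nr, 0, PAGE_KERNEL);
    if (!vaddr) {
        free_xenballooned_pages(nr, pages);
        return -ENOMEM;
    }

    xen_auto_xlat_grant_frames.vaddr = vaddr;
    xen_auto_xlat_grant_frames.pfn = pfns;
    xen_auto_xlat_grant_frames.count = nr;

    return 0;
}
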
>>>
>>> So somewhat more concrete than a proposal then ;-)
>>>
>>> I don't see anything there which would be a problem on ARM, so we
>>> should probably go that route there too (at least for ACPI, if not
>>> globally for all ARM guests).
>>
>> IIRC we talked about this a few months ago and you said that using
>> ballooned pages would split the 1G/2M mappings in the stage-2 p2m into
>> 4K mappings.
> 
> Did I? Odd because I'm also of the opinion that alloc_ballooned_pages
> should operate in chunks of 2M at the hypercall layer and keep any
> resulting spare 4K pages on a free list to use for future such allocations.
> 
> IOW it should avoid such shattering where it can.
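
Purely a sketch of that idea (nothing like this exists today;
balloon_out_2m_chunk() is a made-up helper that would issue a single
XENMEM_decrease_reservation over one aligned 2M extent, and the function
name itself is only illustrative):

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/sizes.h>
#include <linux/slab.h>

#define BALLOON_CHUNK_PAGES (SZ_2M >> PAGE_SHIFT)   /* 512 x 4K */

static LIST_HEAD(spare_pages);   /* 4K left-overs from earlier 2M chunks */
static unsigned int nr_spare;

/* Made-up helper: balloon out one 2M-aligned extent and return its
 * BALLOON_CHUNK_PAGES struct pages.  This is where the hypercall
 * granularity (and hence the p2m mapping size) lives. */
int balloon_out_2m_chunk(struct page **chunk);

static int alloc_ballooned_pages_2m(unsigned int nr, struct page **pages)
{
    unsigned int got = 0, i;
    int rc;

    /* Serve from the spare list first, so repeated small requests do
     * not trigger more hypercalls (or more p2m shattering). */
    while (got < nr && nr_spare) {
        struct page *pg = list_first_entry(&spare_pages, struct page, lru);

        list_del(&pg->lru);
        nr_spare--;
        pages[got++] = pg;
    }

    while (got < nr) {
        struct page **chunk = kcalloc(BALLOON_CHUNK_PAGES, sizeof(*chunk),
                                      GFP_KERNEL);

        if (!chunk)
            return -ENOMEM;

        rc = balloon_out_2m_chunk(chunk);
        if (rc) {
            kfree(chunk);
            return rc;
        }

        for (i = 0; i < BALLOON_CHUNK_PAGES; i++) {
            if (got < nr) {
                pages[got++] = chunk[i];
            } else {
                /* Keep the excess 4K pages for the next caller. */
                list_add(&chunk[i]->lru, &spare_pages);
                nr_spare++;
            }
        }

        kfree(chunk);
    }

    return 0;
}
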

You can also (soon) enable memory hotplug and get to hotplug new (empty)
memory sections to avoid having to release any frames back to Xen.
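
(Hand-waving sketch of that idea rather than the actual balloon-driver
code; balloon_next_hotplug_region() and balloon_append_new_frames() are
placeholders for however the driver would pick and record the new section.
The point is that add_memory() creates struct pages for a brand-new region
past the end of RAM, so no populated frame ever has to be handed back.)

#include <linux/memory_hotplug.h>
#include <linux/mm.h>
#include <linux/pfn.h>

/* Placeholders for however the driver picks/records the region. */
u64 balloon_next_hotplug_region(u64 size);
void balloon_append_new_frames(unsigned long pfn, unsigned long nr_pages);

static int xen_balloon_hotplug_section(unsigned long nr_pages)
{
    /* Must be memory-section aligned/sized for add_memory(). */
    u64 size = (u64)nr_pages << PAGE_SHIFT;
    u64 start = balloon_next_hotplug_region(size);
    int rc;

    /* Creates struct pages for a new (empty) section past the end of
     * RAM; nothing is onlined and no existing frame goes back to Xen. */
    rc = add_memory(0 /* nid */, start, size);
    if (rc)
        return rc;

    /* These never-populated frames can now back ballooned-page
     * allocations (grant table mappings, foreign mappings, ...). */
    balloon_append_new_frames(PFN_DOWN(start), nr_pages);

    return 0;
}
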

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

