
Re: [Xen-devel] [RFC 1/2] xen/mm: Clarify the granularity for each Frame Number



On 12/08/15 11:33, Jan Beulich wrote:
>>>> On 12.08.15 at 11:57, <julien.grall@xxxxxxxxxx> wrote:
>> On 12/08/2015 08:16, Jan Beulich wrote:
>>>>>> On 05.08.15 at 15:18, <julien.grall@xxxxxxxxxx> wrote:
>>>> On 05/08/15 13:46, Andrew Cooper wrote:
>>>>> On 05/08/15 13:36, Julien Grall wrote:
>>>>>> So we need to introduce the concept of granularity in each
>>>>>> definition. This patch makes it clear that MFN and GFN are always
>>>>>> 4KB while the PFN granularity may vary.
>>>>>
>>>>> Is (or rather will) a 4K dom0 able to make 4K mappings of a 64K domU?
>>>>> How is a 64K dom0 expected to make mappings of a 4K domU?
>>>>
>>>> The Xen interface will stay 4K even with a 64K guest. We have to
>>>> support 64K guests/dom0 on the current Xen because some distros may
>>>> choose to ship only 64K kernels.
>>>
>>> Interesting. Does Linux on ARM not require any atomic page table
>>> entry updates? I ask because I can't see how you would emulate
>>> such when you need to deal with 16 of them at a time.
>>
>> I'm not sure I understand this question.
>>
>> ARM64 is able to support different page granularities (4KB, 16KB and 
>> 64KB). You have to set up the page table registers during boot in 
>> order to specify the granularity used for the whole page table.
> 
> But you said you use 4k pages in Xen nevertheless. I.e. page tables
> would still be at 4k granularity, i.e. you'd need to update 16 entries
> for a single 64k page. Or can you have 64k pages in L1 and 4k pages
> in L2?

The page tables for each stage are completely dissociated, so Xen and
Linux can use different page granularities.

>>>> In my current implementation of Linux 64K support (see [1]), there are
>>>> no changes in Xen (hypervisor and tools). Linux is breaking the 64K
>>>> page into 4K chunks.
>>>>
>>>> When the backend is 64K, it will map the foreign 4K page at the top of
>>>> a 64K page. It's a waste of memory, but it's easier to implement and
>>>> it's still an improvement compared to having Linux crash at boot.
>>>
>>> Waste of memory? You're only mapping an existing chunk of memory.
>>> DYM waste of address space?
>>
>> No, I really meant waste of memory. The current grant API in Linux 
>> allocates one Linux page per grant. The grant is always 4K, so we 
>> won't be able to use the remaining 60K for anything as long as that 
>> page is used for a grant.
>>
>> So if the grants are pre-allocated (such as for PV block), we won't be 
>> able to use nr_grants * 60KB of memory.
> 
> I still don't follow - grant mappings ought to be done into ballooned
> (i.e. empty) pages, i.e. no memory would get wasted unless there
> are too few balloon pages available.

Everything ballooned out is memory that Linux can no longer use. If we
are only using 1/16 of each ballooned-out page, that is a huge waste of
memory to me.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
