Re: [Xen-devel] [PATCH 1/1] Xen PV support for hugepages
> Dave McCracken wrote:
>> On Thursday 06 November 2008, Jan Beulich wrote:
>>>> This is the point of having hugepages be a command line option.  It
>>>> should only be turned on if you intend to run guests that enforce
>>>> the alignment rules.
>>>
>>> You mean 'if you intend to run *only* guests ...', including dom0.
>>> Any guest unaware of the connection between X86_FEATURE_PSE and the
>>> need to create contiguous 2M chunks would fail, and any guest not
>>> having I/O memory assigned would never manage to create such chunks.
>>
>> Hmm...  Sounds like a per-domain flag would be a good solution here.
>> I'll look into it.
>
> I think a guest should explicitly enable large page support via
> something like vmassist.  In other words:
>  * keep PSE clear - you are not implementing something similar to PSE
>  * export the ability to map large pages via a feature flag or something
>  * require guests to explicitly enable large page support

I'm sceptical about the value of this patch as it stands.  We already
have a vast number of operating modes and combinations, and I think we
need to have a pretty high bar before adding any more.

If this work arranged things so that we could just set PSE - and
thereby make supporting it on the guest side much simpler - then I
think the decision to include it would be easier.  In practical terms,
I think this means that we need to modify Xen and the domain builder to
*always* allocate memory to guests in 2M contiguous chunks, and then
sort out all the tricky details ;)

In practice, however, we would lose a lot of the benefit of doing this
that we'd get in a native kernel, because all the RO pagetable mappings
would chop up the linear and kernel maps so much that we'd not get much
chance to use the 2M mappings.  It would, however, make hugetlbfs work
without much drama.

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
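[Editor's note: as a rough sketch of how the explicit opt-in quoted
above could look from the guest side, the snippet below uses the
existing HYPERVISOR_vm_assist() hypercall and VMASST_CMD_enable, both
of which are real.  Everything else is an assumption for illustration:
VMASST_TYPE_large_pages (and its value 8) is an invented assist type
that Xen does not define, and xen_init_large_pages() is a hypothetical
early-boot hook.]

    /*
     * Hypothetical sketch only: VMASST_TYPE_large_pages does not exist
     * in xen/interface/xen.h; it is invented here to illustrate the
     * explicit opt-in model argued for above (compare the real
     * VMASST_TYPE_writable_pagetables assist).
     */
    #include <linux/init.h>
    #include <linux/types.h>
    #include <xen/interface/xen.h>      /* VMASST_CMD_enable */
    #include <asm/xen/hypercall.h>      /* HYPERVISOR_vm_assist() */

    #define VMASST_TYPE_large_pages 8   /* invented value, illustration only */

    static bool xen_large_pages_enabled;

    /* Hypothetical hook in a PV guest's early Xen setup code. */
    static void __init xen_init_large_pages(void)
    {
            /*
             * Explicitly ask the hypervisor for large-page support
             * rather than keying off X86_FEATURE_PSE.  If this domain's
             * memory was not allocated in 2M-aligned contiguous chunks,
             * or the hypervisor doesn't know this assist, the call
             * fails and the guest simply keeps using 4K mappings.
             */
            if (HYPERVISOR_vm_assist(VMASST_CMD_enable,
                                     VMASST_TYPE_large_pages) == 0)
                    xen_large_pages_enabled = true;
    }

[The failure path is the point of the design: a guest in a domain whose
memory isn't 2M-contiguous, or whose per-domain flag is off, sees the
hypercall fail and falls back to 4K mappings, instead of misreading an
unconditionally advertised PSE bit.]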