Re: [Xen-devel] [PATCH] x86: add feature flags to shared_info page
On 03/02/2015 02:56 PM, Jan Beulich wrote:
> On 02.03.15 at 14:44, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 02/03/15 13:15, Jan Beulich wrote:
>>> On 02.03.15 at 13:59, <jgross@xxxxxxxx> wrote:
>>>> In order to indicate the Xen tools' capability to support the
>>>> virtual mapped linear p2m list instead of the 3-level mfn tree,
>>>> add feature flags to the shared_info page.
>>> But why in the shared info page? They'd belong in start info, or
>>> should be obtainable via XENVER_get_features.
>> Furthermore, in this case the virtual linear p2m is purely a
>> guest->toolstack feature/ABI. Xen deliberately has no knowledge of
>> PV guest p2ms of either the 3-level or linear variety. Is there
>> genuinely no better interface than the hypervisor feature flags to
>> indicate a piece of toolstack support?

As Ian indicated in his reply, much depends on whether any other mechanism would allow the information to be retrieved early enough in the guest. Regarding the time the information is needed, options that would work are:

- XENVER_get_features
- start info
- shared info
- any (other) hypercall

All other interfaces are available much too late. I think start info is the best option, as it is built by the tools for domUs and, as Jan already mentioned, would be a better place for the information than shared info.

The last remaining question: what to do regarding dom0? Here I see the following alternatives:

- do nothing: the 3-level mfn tree is built even if it is not needed, which wastes up to 2MB of memory and might slow down dom0 (I doubt the slowdown would be detectable)
- set the flag based on a hypervisor boot parameter (not very nice)
- throw the 3-level mfn tree away during boot as soon as the tools can tell dom0 to do so (requires a new interface)

Any thoughts?

Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel