
Re: [Xen-devel] A couple of HVMlite loose ends



On 13/01/16 15:49, Roger Pau Monné wrote:
> Hello,
>
> While working on an HVMlite Dom0 implementation I've found a couple of
> loose ends in the design that I would like to comment on, because it's
> not clear to me what the best direction to take is.
>
> 1. HVM CPUID and Dom0.
>
> Sadly, the way CPUID is handled inside Xen varies between PV and HVM.
> For PV guests, AFAICT we mostly do black-listing (I think this is the
> right term), which means we take the native CPUID result and then
> perform a series of filter operations to remove features which should
> not be exposed to a PV guest. On the other hand, for HVM guests we
> pre-populate an array (d->arch.cpuids) at domain build time, and the
> contents of that array are what is returned to the guest when a CPUID
> instruction is executed.
>
> This is a problem for an HVMlite Dom0, since the code that populates
> this array resides in libxc, and when an HVMlite Dom0 is created by Xen
> itself (using a suitably arranged Dom0 builder) the array doesn't get
> populated at all, leading to wrong CPUID information being returned to
> the guest. I can see two solutions to this problem:
>
>  a) Switch CPUID handling for HVMlite Dom0 to the PV one, like it's done
> for PVH Dom0.
>
>  b) Duplicate the code in libxc into the Xen HVMlite Dom0 builder and
> populate d->arch.cpuids.
>
> I'm leaning towards option "b)", because I would like HVMlite to behave
> as much like an HVM guest as possible, but I would like to hear
> opinions from others before taking either route.
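
To make the split described above concrete, here is a minimal sketch of
the HVM-style handling: a per-domain table of pre-computed leaves that
is consulted whenever a guest CPUID is intercepted. The structure and
helper names are invented for illustration (this is not the actual Xen
code), but an unpopulated table degenerating to all-zero leaves is
exactly the HVMlite Dom0 symptom described above.

    /*
     * Illustrative only: a per-domain table of pre-computed CPUID leaves,
     * in the spirit of d->arch.cpuids, consulted on a CPUID intercept.
     * Names and layout are invented for this sketch.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct cpuid_leaf_policy {
        uint32_t leaf, subleaf;          /* input EAX / ECX */
        uint32_t eax, ebx, ecx, edx;     /* values handed back to the guest */
    };

    #define MAX_CPUID_LEAVES 64
    static struct cpuid_leaf_policy dom_cpuids[MAX_CPUID_LEAVES];
    static size_t dom_nr_cpuids;         /* filled in at domain build time */

    /* Return the pre-populated leaf if present; an empty table (the
     * un-built HVMlite Dom0 case) degenerates to all zeroes. */
    static bool guest_cpuid(uint32_t leaf, uint32_t subleaf, uint32_t regs[4])
    {
        for ( size_t i = 0; i < dom_nr_cpuids; i++ )
        {
            const struct cpuid_leaf_policy *p = &dom_cpuids[i];

            if ( p->leaf == leaf && p->subleaf == subleaf )
            {
                regs[0] = p->eax; regs[1] = p->ebx;
                regs[2] = p->ecx; regs[3] = p->edx;
                return true;
            }
        }

        regs[0] = regs[1] = regs[2] = regs[3] = 0;
        return false;
    }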

My phase 2 plans for cpuid involve having Xen generate a complete
maximum cpuid policy for each type of guest, having libxc copy it and
trim it down to suit the domain, then provide it back to Xen as the
final policy for the domain.

The current situation of having libxc make a guesstimate based on what
dom0 can see is wrong, and I will be removing it in the long term.  I
will also be removing any distinction between the hardware domain and
other domains when it comes to cpuid handling.

In the short term, this isn't very helpful.  I would suggest an
intermediate "b)", which will end up being removed again when CPUID
phase 2 is complete.
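
For what it's worth, a rough sketch of how that flow could look from the
toolstack side: Xen offers a maximum policy per guest type, libxc copies
and trims it, and the result is handed back as the domain's policy. The
interfaces below (xc_get_max_policy, xc_set_domain_policy, the policy
structure) are made-up names with stubbed-out hypercall wrappers, not
the eventual implementation.

    /*
     * Sketch of the phase-2 flow: hypothetical interfaces, with the
     * hypercall wrappers stubbed out.  Not the eventual implementation.
     */
    #include <stdint.h>
    #include <string.h>

    #define NR_FEATURE_WORDS 16

    struct cpuid_policy_sketch {
        /* One bit per feature, grouped into 32-bit featureset words. */
        uint32_t featureset[NR_FEATURE_WORDS];
        /* ... plus the non-feature leaves (cache, topology, etc.). */
    };

    /* Stub standing in for "ask Xen for the maximum policy of this
     * guest type". */
    static int xc_get_max_policy(unsigned int guest_type,
                                 struct cpuid_policy_sketch *out)
    {
        (void)guest_type;
        memset(out, 0xff, sizeof(*out));   /* pretend everything is offered */
        return 0;
    }

    /* Stub standing in for "hand the trimmed policy back to Xen". */
    static int xc_set_domain_policy(uint32_t domid,
                                    const struct cpuid_policy_sketch *in)
    {
        (void)domid; (void)in;
        return 0;
    }

    static int build_domain_policy(uint32_t domid, unsigned int guest_type,
                                   const uint32_t disable[NR_FEATURE_WORDS])
    {
        struct cpuid_policy_sketch pol;
        int rc = xc_get_max_policy(guest_type, &pol);

        if ( rc )
            return rc;

        /* Trim: a domain's policy can only ever be a subset of the maximum. */
        for ( unsigned int i = 0; i < NR_FEATURE_WORDS; i++ )
            pol.featureset[i] &= ~disable[i];

        return xc_set_domain_policy(domid, &pol);
    }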

>
> 2. HVM MTRR and Dom0.
>
> MTRR ranges are initialised by hvmloader, which means that although we
> expose the MTRR functionality to HVMlite guests (and AFAICT the
> functionality is fully complete/usable), the initial state in which a
> guest finds the MTRRs is not what it expects, leading to errors. Again,
> I see three ways to solve this:

What errors?  OSes already need to deal with any quantity of crazy setup
from firmware.

>
>  a) Mask the MTRR functionality from CPUID for HVMlite guests. This
> requires adding a XEN_X86_EMU_MTRR flag to the emulation bitmap
> introduced in the arch domain structure.

MTRRs are x86 architectural features.

>
>  b) Set up the initial MTRR state from libxl/libxc for HVMlite DomUs
> and from the Xen domain builder for an HVMlite Dom0. This again implies
> some functional duplication of MTRR-related code, since we would then
> have three different places where the MTRR state is set up: one inside
> hvmloader for classic HVM guests, another inside libxl/libxc for
> HVMlite DomUs, and yet another in the Dom0 builder for an HVMlite Dom0.
>
>  c) Be aware of this fact and change the OS code so that it sets up the
> initial MTRR ranges correctly itself.
>
> Again, I'm leaning towards "b)", because that's the one closest to
> native behaviour and to what we do for classic HVM guests, but I would
> like to hear opinions.
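
For context, the "initial MTRR state" being discussed boils down to a
handful of MSR values. The sketch below shows what they look like for a
write-back default with one variable range marking a below-4G PCI hole
as uncacheable; the MSR numbers and bit layouts are architectural, but
the hole placement and MAXPHYADDR are example values, and this is not
hvmloader's actual code.

    /* Standalone sketch: compute the MSR values that make up a "sane"
     * initial MTRR state (write-back default, MTRRs enabled, one
     * uncacheable variable range over an example PCI hole). */
    #include <stdint.h>
    #include <stdio.h>

    #define MSR_MTRRdefType        0x2ff
    #define MSR_MTRRphysBase(n)    (0x200 + 2 * (n))
    #define MSR_MTRRphysMask(n)    (0x201 + 2 * (n))

    #define MTRR_TYPE_UC           0x00
    #define MTRR_TYPE_WB           0x06
    #define MTRRdefType_E          (1u << 11)      /* MTRRs enabled */
    #define MTRRphysMask_V         (1ull << 11)    /* range valid */

    #define MAXPHYSADDR_BITS       36              /* example value */
    #define ADDR_MASK              ((1ull << MAXPHYSADDR_BITS) - 1)

    int main(void)
    {
        /* Example PCI hole: 0xf0000000 - 0xffffffff (256MiB), uncacheable. */
        uint64_t hole_base = 0xf0000000ull, hole_size = 0x10000000ull;

        uint64_t deftype  = MTRRdefType_E | MTRR_TYPE_WB;
        uint64_t physbase = (hole_base & ADDR_MASK) | MTRR_TYPE_UC;
        uint64_t physmask = (~(hole_size - 1) & ADDR_MASK) | MTRRphysMask_V;

        printf("wrmsr 0x%x <- 0x%llx\n", MSR_MTRRdefType,
               (unsigned long long)deftype);
        printf("wrmsr 0x%x <- 0x%llx\n", MSR_MTRRphysBase(0),
               (unsigned long long)physbase);
        printf("wrmsr 0x%x <- 0x%llx\n", MSR_MTRRphysMask(0),
               (unsigned long long)physmask);
        return 0;
    }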

What is wrong with having them disabled by default?  They are not
generally useful to guests, given PAT and no SMM/firmware to get in the
way.  If a guest really wants to use them, it can always turn them on
and reconfigure them.
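
Concretely, turning them on is just a few MSR accesses from ring 0,
along the lines of the sketch below. The MSR numbers and bits are
architectural; the rdmsr/wrmsr wrappers are ad-hoc GCC/Clang x86-64
inline asm, and the cache-disable/WBINVD sequence the SDM prescribes
around MTRR updates is elided for brevity.

    /* Guest-side sketch: enable MTRRs with a write-back default when
     * they are found present but disabled.  Ring-0 only. */
    #include <stdint.h>

    #define MSR_MTRRcap            0x0fe
    #define MSR_MTRRdefType        0x2ff
    #define MTRRcap_VCNT(v)        ((v) & 0xff)    /* # of variable ranges */
    #define MTRRdefType_E          (1u << 11)
    #define MTRR_TYPE_WB           0x06

    static inline uint64_t rdmsr(uint32_t msr)
    {
        uint32_t lo, hi;
        asm volatile ( "rdmsr" : "=a" (lo), "=d" (hi) : "c" (msr) );
        return ((uint64_t)hi << 32) | lo;
    }

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile ( "wrmsr" ::
                       "c" (msr), "a" ((uint32_t)val),
                       "d" ((uint32_t)(val >> 32)) );
    }

    void enable_mtrrs_wb_default(void)
    {
        uint64_t cap = rdmsr(MSR_MTRRcap);
        unsigned int vcnt = MTRRcap_VCNT(cap);

        (void)vcnt;  /* variable ranges could be programmed here if wanted */

        /* Enable MTRRs with write-back as the default memory type. */
        wrmsr(MSR_MTRRdefType, MTRRdefType_E | MTRR_TYPE_WB);
    }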

~Andrew


 

