
Re: [Xen-devel] [PATCH v2 05/30] xen/public: Export cpu featureset information in the public API

On 02/20/2016 07:17 PM, Andrew Cooper wrote:
> On 20/02/16 17:39, Joao Martins wrote:
>>>>>>  and given that this
>>>>>> is exposed on both sysctl and libxl (through libxl_hwcap) shouldn't its
>>>>>> size match the real one (boot_cpu_data.x86_capability), i.e. NCAPINTS?
>>>>>> Additionally I see that libxl_hwcap is also hardcoded to 8 alongside
>>>>>> struct xen_sysctl_physinfo when it should be 10?
>>>>> Hardcoding of the size in sysctl can be worked around. Fixing libxl is
>>>>> harder.
>>>>> The synthetic leaves are internal and should not be exposed.
>>>>>> libxl users could potentially make use of this hwcap field to see
>>>>>> what features the host CPU supports.
>>>>> The purpose of the new featureset interface is to have a stable object
>>>>> which can be used by higher level toolstacks.
>>>>> This is done by pretending that hw_caps never existed, and replacing it
>>>>> wholesale with a bitmap, (specified as variable length and safe to
>>>>> zero-extend), with an ABI in the public header files detailing what each
>>>>> bit means.
>>>> Given that you introduce a new API for libxc (xc_get_cpu_featureset()),
>>>> perhaps an equivalent for libxl could also be added? That way users of
>>>> libxl could also query the host's and guests' supported features. I
>>>> would be happy to produce patches towards that.
>>> In principle, this is fine.  Part of this is covered by the xen-cpuid
>>> utility in a later patch.
>> OK.
>>> Despite my plans to further rework guest cpuid handling, the principle
>>> of the {raw,host,pv,hvm}_featuresets is expected to stay, and be usable
>>> in their current form.
>> That's great to hear. The reason I brought this up is that libvirt has the
>> idea of a cpu model with features associated with it (similar to the qemu
>> -cpu XXX,+feature,-feature stuff, but in a hypervisor-agnostic manner that
>> other architectures can also use). libvirt could do mostly everything on
>> its own, but it still needs to know what the host supports. Based on that
>> it then calculates the lowest common denominator of cpu features to be
>> enabled or masked out for guests when comparing against an older family in
>> a pool of servers. Though PV/HVM (with{,out} hap/shadow) have different
>> feature sets, as you mention, so libvirt might be thrown into error since
>> a certain feature isn't guaranteed to be set/masked for a certain type of
>> guest. So knowing the {pv,hvm,...}_featuresets in advance lets libxl users
>> make more reliable use of the libxl cpuid policies and more correctly
>> normalize the cpuid for each type of guest.
> Does libvirt currently use hw_caps (and my series will inadvertently
> break it), or are you looking to do some new work for future benefit?
Yeah, but only one bit, i.e. PAE on word 0 (which is the only word that was
kept in the same place by your series). And yes, I am looking at this for
future work and trying to understand what's missing there. I do have a patch
for libvirt to parse your hw_caps, but given that it's not a stable format it
might not make sense to upstream it anymore.
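
For what it's worth, that single check amounts to something like the sketch
below (illustrative Python, not the actual libvirt patch; the assumption,
which matches x86, is that hwcap word 0 mirrors CPUID leaf 1 EDX, where PAE
is bit 6, and the function name is made up for illustration):

```python
# Sketch of the one hw_cap check libvirt does today: testing the PAE
# bit.  Assumes word 0 of the hwcap array mirrors CPUID leaf 1 EDX,
# where PAE is bit 6.  Purely illustrative.
PAE_BIT = 6  # CPUID.01H:EDX[6] on x86

def host_has_pae(hwcap_words):
    """Return True if the PAE bit is set in hwcap word 0."""
    return bool(hwcap_words[0] & (1 << PAE_BIT))
```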

> Sadly, cpuid levelling is a quagmire and not as simple as just choosing
> the common subset of bits.  When I started this project I was expecting
> it to be bad, but nothing like as bad as it has turned out to be.
Indeed. Perhaps I overstated things a bit before when saying "libvirt could do
mostly everything on its own". It certainly doesn't deal with the issues you
mention below; I guess that would be the hypervisor-specific part of it (the
qemu/xen/vmware driver in libvirt). I expand a bit below on what libvirt
deals with.

> As an example, the "deprecates fcs/fds" bit which is the subject of the
> "inverted" mask.  The meaning of the bit is "hardware no longer supports
> x87 fcs/fds, and they are hardwired to zero".
> Originally, the point of the inverted mask was to make a "featureset"
> which could be levelled sensibly without specific knowledge of the
> meaning of each bit.  This property is important for forwards
> compatibility, and avoiding unnecessary complexity in higher level
> toolstack components.
> However, with hindsight, attempting to level this bit is pointless.  It
> is a statement about a change in pre-existing behaviour of an element of
> the cpu pipeline, and the pipeline behaviour will not change depending
> on how the bit is advertised to the guest.  Another bit, "fdp exception
> only" is in a similar bucket.
> Other issues, which I haven't even tried to tackle in this series, are
> items such as the MXCSR mask.  The real value cannot be levelled, is
> expected to remain constant after boot, and is liable to induce #GP faults
> on fxrstor if it changes.  Alternatively, there is EFER.LMSLE (long mode
> segment limit enable) which doesn't even have a feature bit to indicate
> availability (not that I can plausibly see an OS actually turning that
> feature on).
Whoa, I wasn't aware of these levelling issues.
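
If I understand the inverted mask correctly, its original point can be
sketched like this: store the "anti-feature" bits pre-flipped, so that a
plain AND levels every bit without the toolstack knowing what any bit means
(Python, purely illustrative; `inv` marks the inverted bits of one word):

```python
def level_word(a, b, inv):
    """Level one featureset word between two hosts.

    Normal bits (feature present) level with AND.  Inverted bits,
    e.g. "deprecates fcs/fds", mean an old behaviour was *removed*,
    so the conservative common value is the OR.  Flipping those bits
    before and after lets a single AND handle both cases uniformly.
    """
    return ((a ^ inv) & (b ^ inv)) ^ inv
```

For normal bits this reduces to `a & b`; for inverted bits it reduces to
`a | b`, i.e. the guest is told the legacy behaviour is gone if any host in
the pool lacks it.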

> A toolstack needs to handle all of:
> * The maximum "configuration" available to a guest on the available servers.
> * Which bits of that can be controlled, and which will simply leak through.
> * What the guest actually saw when it booted.
> (I use configuration here to include items such as max leaf, max phys
> addr, etc which are important to be levelled, but not included in the
> plain feature bits in cpuid).
> My longterm plans involve:
> * Having Xen construct a full "maximum" cpuid policy, rather than just a
> featureset.
> * Per-domain cpuid policy, seeded from maximum on domain_create, and
> modified where appropriate (e.g. hap vs shadow, PV guest switching
> between native and compat mode).
> * All validity checking for updates in the set_cpuid hypercall rather
> than being deferred to the cpuid intercept point.
> * A get_cpuid hypercall so a toolstack can actually retrieve the policy
> a guest will see.
> Even further work involves:
> * Put all this information into the migration stream, rather than having
> it regenerated by the destination toolstack.
> * MSR levelling.
> But that is a huge quantity more work, which is why this series focuses
> just on the featureset alone, in the hope that the featureset is still a
> useful discrete item outside the context of a full cpuid policy.
> I guess my question at the end of all this is what libvirt currently
> handles of all of this? 

Hm, libvirt is a high level toolstack (meaning higher than libxl) and doesn't
deal with these things at this level of detail, at least AFAICT. It has the
notion of a cpu model with a set of features, an idea originally borrowed from
qemu as a way of representing the features of each type of host. Each
supported hypervisor in libvirt deals with it in its own way.

It has a cpu map per architecture[0] (x86/ppc only) describing, for example,
what each family of CPUs (Penryn, Broadwell, Opteron, etc.) looks like. It
also describes how the features can be checked: on x86, features are described
by CPUID leaf, subleaf and register output, as you might imagine. Note that
the admin can change these, define custom models, and exclude features from
them too. With these defined, the common features plus a model make up the
*guest* CPU definition. Upon bootstrapping the hypervisor driver, libvirt
looks for the most similar model and appends any unmatched features on top of
the host cpu model. The same algorithm is used when comparing a newer family
to an older one in a pool of servers, i.e. comparing cpu definitions.

[This could be viewed the same as the items you included above:
* The maximum "configuration" available to a guest on the available servers.
* Which bits of that can be controlled, and which will simply leak through.]

Though it wouldn't deal with the configuration, as you say, but just with the
features, deferring the rest to the underlying hypervisor libraries in use?
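
The "most similar model plus unmatched features" step could be sketched as
follows (Python; this is my reading of the algorithm described above, and the
model names and scoring function are illustrative, not libvirt code):

```python
def closest_model(host_features, models):
    """Pick the named model best matching the host's feature set.

    models maps a model name to its set of feature names.  The host
    is then expressed as that model plus features to add and features
    to remove, mirroring how a guest CPU definition is built.
    """
    def score(feats):
        # Reward shared features, penalise features the model claims
        # but the host lacks.
        return len(host_features & feats) - len(feats - host_features)

    best = max(models, key=lambda name: score(models[name]))
    return best, host_features - models[best], models[best] - host_features
```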

In addition there are policies attached to each feature: "force", "require",
"disable", "optional" and "forbid". There are also policies describing how you
want to match the cpu model, such as a *minimum* set of features or an *exact*
match of the features described. When booting the guest it then checks whether
all the features are actually there and whether everything is according to the
feature policies.
[This could be viewed the same as the items you included above:
* What the guest actually saw when it booted.]
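
A minimal sketch of that boot-time check (Python; the policy names come from
the discussion above, but the exact semantics here are my simplification of
them, e.g. "force" also implies changing the cpuid, which this ignores):

```python
def check_feature_policies(policies, guest_features):
    """Validate the features the guest actually got against policies.

    "force"/"require" features must be present, "disable"/"forbid"
    features must be absent, and "optional" features may go either
    way.  Returns a list of violations (empty means all is well).
    """
    errors = []
    for feature, policy in policies.items():
        present = feature in guest_features
        if policy in ("force", "require") and not present:
            errors.append("%s: required but missing" % feature)
        elif policy in ("disable", "forbid") and present:
            errors.append("%s: must be absent but is present" % feature)
    return errors
```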


> We certainly can wire the featureset
> information through libxl, but it is insufficient in the general case
> for making migration safe.
Right; given the info and plans you just described, I guess some of those
things aren't there yet and it would involve a lot of "guesswork".


Xen-devel mailing list