
Re: [Xen-devel] [PATCH v2 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus



On 11/15/2016 03:34 AM, Jan Beulich wrote:
>>>> On 09.11.16 at 15:39, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>> 'xl vcpu-set'). While it is currently only needed by PVH guests, we
>> will call this domctl for all (x86) guests for consistency.
> The discussion on the actual change seems to have pointed out all
> the needed changes, but what I haven't yet been able to understand is
> why this is needed in the first place. From the hypervisor's pov, so
> far it has been up to the guest which CPUs get onlined/offlined, and
> the interface to request offlining (not an issue for onlining) was -
> afaict - a purely voluntary one. Why does this change with PVH? Any
> such rationale should be put in the commit message.


If the question is why we need a hypervisor interface for PVH guests,
it's because someone has to send an SCI and set the GPE registers, and
for PVH there is no one but the hypervisor to do that (I will add this
to the commit message).
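To make that concrete, the hypervisor-side sequence would be roughly
the sketch below: record the new count, latch a GPE0 status bit, and
inject an SCI if the guest has the event unmasked, much as ACPI-aware
chipset hardware would. This is only an illustration, not the patch
itself; all of the names here (struct acpi_state, GPE0_CPU_HOTPLUG_BIT,
inject_sci) are made up.

    #include <stdint.h>

    /* Hypothetical GPE0 bit reserved for VCPU hotplug events. */
    #define GPE0_CPU_HOTPLUG_BIT  (1u << 2)

    /* Hypothetical per-domain ACPI state kept by Xen for a PVH guest. */
    struct acpi_state {
        unsigned int avail_vcpus;  /* count last set via the domctl */
        uint16_t gpe0_sts;         /* GPE0 status reg (guest-readable) */
        uint16_t gpe0_en;          /* GPE0 enable reg (guest-writable) */
    };

    /* Stub standing in for SCI injection into the guest. */
    static void inject_sci(void)
    {
        /* A real hypervisor would assert the ACPI SCI interrupt here. */
    }

    /* What XEN_DOMCTL_set_avail_vcpus might boil down to for PVH. */
    static void set_avail_vcpus(struct acpi_state *acpi, unsigned int num)
    {
        acpi->avail_vcpus = num;

        /* Latch the hotplug event in GPE0 status. */
        acpi->gpe0_sts |= GPE0_CPU_HOTPLUG_BIT;

        /* Notify the guest only if it unmasked the event; the guest's
         * ACPI GPE handler then re-evaluates its processor objects and
         * onlines/offlines VCPUs accordingly. */
        if ( acpi->gpe0_en & GPE0_CPU_HOTPLUG_BIT )
            inject_sci();
    }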

As for whether we want to enforce the available VCPU count --- I think
we decided that we can't do this for PV, so the question is whether
it's worth doing only for some types of guests. And, as you pointed
out, the second question (or maybe the first) is whether enforcing it
is the right thing in the first place.
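For concreteness, "enforcing" would amount to a check like the one
below when the guest asks to bring a VCPU online (e.g. via VCPUOP_up);
this is a sketch with invented names, and it is the kind of check we
apparently have no good way to apply to PV:

    #include <errno.h>

    /* Sketch: vet a guest's request to online a VCPU against the
     * toolstack-set limit. "avail_vcpus" is the hypothetical per-domain
     * count recorded by the domctl. */
    static int vcpu_online_allowed(unsigned int vcpu_id,
                                   unsigned int avail_vcpus)
    {
        /* Only VCPUs below the advertised count may come online. */
        return vcpu_id < avail_vcpus ? 0 : -EPERM;
    }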

(BTW, I am thinking of moving the domctl from x86-specific to common
code: if we are no longer saying that it's PVH-only, then ARM should
have it available too.)
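A sketch of what that common-code variant might look like: the common
handler validates and stores the count, and a per-arch hook does the
guest notification (the SCI path on x86, whatever is appropriate on
ARM). The field and hook names below are invented for illustration:

    #include <errno.h>

    /* Minimal stand-in for struct domain, just enough for the sketch. */
    struct domain {
        unsigned int max_vcpus;
        unsigned int avail_vcpus;   /* hypothetical common field */
    };

    /* Hypothetical per-arch hook: raises an SCI on x86, does an
     * ARM-appropriate notification there. */
    void arch_notify_avail_vcpus_changed(struct domain *d);

    /* What the XEN_DOMCTL_set_avail_vcpus handler could look like if
     * it lived in common code (e.g. xen/common/domctl.c). */
    static long set_avail_vcpus(struct domain *d, unsigned int num)
    {
        if ( num > d->max_vcpus )
            return -EINVAL;

        d->avail_vcpus = num;
        arch_notify_avail_vcpus_changed(d);
        return 0;
    }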

-boris

