
Re: [Xen-devel] [PATCH v6 12/12] docs: Describe PVHv2's VCPU hotplug procedure

>>> On 03.01.17 at 20:33, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 01/03/2017 11:58 AM, Jan Beulich wrote:
>>>>> On 03.01.17 at 15:04, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> --- a/docs/misc/hvmlite.markdown
>>> +++ b/docs/misc/hvmlite.markdown
>>> @@ -75,3 +75,14 @@ info structure that's passed at boot time (field rsdp_paddr).
>>>  Description of paravirtualized devices will come from XenStore, just as it's
>>>  done for HVM guests.
>>> +
>>> +## VCPU hotplug ##
>>> +
>>> +VCPU hotplug (e.g. 'xl vcpu-set <domain> <num_vcpus>') for PVHv2 guests
>>> +follows the ACPI model, where a change in the domain's number of VCPUs
>>> +(stored in domain.avail_vcpus) results in an SCI being sent to the guest.
>>> +The guest then executes the DSDT's PRSC method, updating the MADT enable
>>> +status for the affected VCPU.
>>> +
>>> +Updating VCPU number is achieved by having the toolstack issue a write to
>> Is any of this valid anymore in the context of the recent discussion?
>> Perhaps even wider - how much of this series is applicable if pCPU
>> hotplug is to use the normal ACPI code path? 
> pCPU hotplug is not going to use this path because it would not be
> executing the PRSC method that we (the Xen toolstack) provide.
>> I hope the plan is not
>> to have different vCPU hotplug paths for DomU and Dom0?
> That was not the plan. But I hadn't thought about Dom0 not being able
> to execute PRSC.

Well - bottom line to me then is: This series needs to be deferred
until there is a plan for acceptable Dom0 behavior. In particular it
may well be that PVH needs to go the PV vCPU hotplug route instead,
in which case we'd need to evaluate which of the already committed
patches make no sense anymore.

