
Re: [Xen-devel] [PATCH 00/10] PVH VCPU hotplug support



On 07/11/16 14:19, Boris Ostrovsky wrote:
> On 11/07/2016 06:41 AM, Andrew Cooper wrote:
>> On 06/11/16 21:42, Boris Ostrovsky wrote:
>>> This series adds support for ACPI-based VCPU hotplug for unprivileged
>>> PVH guests.
>>>
>>> New XEN_DOMCTL_set_avail_vcpus is introduced and is called during
>>> guest creation and in response to 'xl vcpu-set' command. This domctl
>>> updates GPE0's status and enable registers and sends an SCI to the
>>> guest using (newly added) VIRQ_SCI.
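
The GPE0 handshake described above amounts to roughly the following (an
illustration only, using made-up variables rather than Xen's real state;
bit 2 is chosen to match the _E02 method added later in the series):

    /* Illustration of "update GPE0's status and enable registers and
     * send an SCI"; not code from the series. */
    #include <stdint.h>
    #include <stdio.h>

    #define GPE0_CPUHP_BIT (1u << 2)     /* bit serviced by the _E02 method */

    static uint16_t gpe0_sts, gpe0_en;

    static void vcpu_hotplug_event(void)
    {
        gpe0_sts |= GPE0_CPUHP_BIT;      /* latch the event in GPE0.STS */
        if ( gpe0_en & GPE0_CPUHP_BIT )  /* interrupt only if enabled   */
            printf("raise VIRQ_SCI to the guest\n");
    }

    int main(void)
    {
        gpe0_en |= GPE0_CPUHP_BIT;  /* guest enables the event via GPE0.EN */
        vcpu_hotplug_event();       /* XEN_DOMCTL_set_avail_vcpus fires it */
        return 0;
    }

The guest's SCI handler then runs _E02, which (in the usual CPU hotplug
AML pattern) notifies the processor objects so the OS rescans them.
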
>> Thank you for doing this.  Getting ACPI hotplug working has been a low
>> item on my TODO list for a while now.
>>
>> Some queries and comments however.
>>
>> This series is currently very PVH centric, to the point of making it
>> unusable for plain HVM guests.  While I won't insist on you implementing
>> this for HVM (there are some particularly awkward migration problems to
>> be considered), I do insist that its implementation isn't tied
>> implicitly to being PVH.
>>
>> The first part of this will be controlling the hypervisor emulation of
>> the PM1* blocks with an XEN_X86_EMU_* flag just like all other emulation.
> Something like XEN_X86_EMU_ACPI?

Sounds good.
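
By analogy with the existing XEN_X86_EMU_* bits, the gating could look
roughly like this (the bit value and the has_vacpi() helper are
assumptions for illustration, not code from the series):

    /* Sketch: a new emulation flag deciding whether Xen itself handles
     * the ACPI register block for this domain. */
    #define XEN_X86_EMU_ACPI (1u << 12)               /* hypothetical bit */
    #define has_vacpi(d) (!!((d)->emulation_flags & XEN_X86_EMU_ACPI))

    struct domain { unsigned int emulation_flags; };  /* stand-in struct  */

    static int acpi_ioport_handler(struct domain *d /* , port, dir, ... */)
    {
        if ( !has_vacpi(d) )
            return 0;  /* not ours: leave it to an ioreq server (qemu) */

        /* ... emulate GPE0/PM1a accesses in the hypervisor ... */
        return 1;
    }

That way the toolstack states the intent explicitly at domain creation
instead of it being inferred from the absence of ioreq server pages.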

>
> That would also eliminate the need for explicitly setting
> HVM_PARAM_NR_IOREQ_SERVER_PAGES to zero, which I used as an indication
> that we should have the IO handler in the hypervisor. Paul (copied)
> didn't like that.

Definitely an improvement.

>>> Boris Ostrovsky (10):
>>>   x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
>> Why is this necessary?  Given that a paravirtual hotplug mechanism
>> already exists, why isn't its equivalent mechanism suitable?
> PV guests register a xenstore watch and the toolstack updates the cpu's
> "available" entry. The ACPI codepath (at least for Linux guests) is not
> involved at all.
>
> I don't think we can use anything like that in the hypervisor.

There must be something in the hypervisor; what currently prevents a PV
guest from ignoring xenstore and onlining CPUs itself?

Or do we currently have nothing... ?
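
For reference, the guest side of the existing PV flow looks roughly like
this (a sketch from memory of Linux's drivers/xen/cpu_hotplug.c; the
node names and values may not be exact):

    /* Sketch of the PV hotplug handshake: the toolstack writes a per-vcpu
     * availability node, the guest's xenstore watch fires, and the guest
     * onlines/offlines the vcpu entirely on its own. */
    #include <stdio.h>
    #include <string.h>

    static void handle_vcpu_watch(unsigned int cpu, const char *avail)
    {
        if ( strcmp(avail, "online") == 0 )
            printf("guest onlines vcpu%u itself\n", cpu);
        else
            printf("guest offlines vcpu%u\n", cpu);
    }

    int main(void)
    {
        /* toolstack writes cpu/1/availability = "online"; the watch fires */
        handle_vcpu_watch(1, "online");
        return 0;
    }

Nothing in that path involves the hypervisor, hence the question above.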

>
>
>>>   acpi: Define ACPI IO registers for PVH guests
>> Can Xen use pm1b, or does there have to be a pm1a available to the guest?
> pm1a is a required block (unlike pm1b). ACPICA, for example, always
> first checks pm1a when handling an SCI.
>
> (And how would having only pm1b have helped?)

For the HVM case, I think we are going to need one pm1 block belonging
to qemu, and one belonging to Xen.
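
The FADT can advertise both event blocks, so the split could look
roughly like this (field names follow the ACPI spec; the addresses are
invented for illustration):

    /* Sketch of an FADT fragment with two PM1 event blocks: PM1a staying
     * with qemu's emulation and PM1b handled by Xen, or the other way
     * around. */
    #include <stdint.h>

    struct fadt_pm1_fragment {
        uint32_t pm1a_evt_blk;  /* required block (address assumed)       */
        uint32_t pm1b_evt_blk;  /* optional block the other side emulates */
        uint8_t  pm1_evt_len;   /* per block: 2 bytes STS + 2 bytes EN    */
    };

    static const struct fadt_pm1_fragment example = {
        .pm1a_evt_blk = 0xb000,  /* hypothetical */
        .pm1b_evt_blk = 0xb008,  /* hypothetical */
        .pm1_evt_len  = 4,
    };

Given that an OS checks pm1a first when handling an SCI (as noted
above), which block goes to qemu and which to Xen is the interesting
part.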

>
>>>   pvh: Set online VCPU map to avail_vcpus
>>>   acpi: Power and Sleep ACPI buttons are not emulated
>> PVH might not want power/sleep, but you cannot assume that HVM guests
>> have a paravirtual mechanism for shutting down.
> AFAIK they don't rely on a button-initiated codepath. At least Linux
> doesn't.
>
> I don't know Windows path though. I can add ACPI_HAS_BUTTONS.

Windows very definitely does respond to button presses (although not in
a helpful way).  Please keep them enabled by default for HVM guests,
even if we disallow their use with PVH.

>
>>>   acpi: Make pmtimer optional in FADT
>>>   acpi: PVH guests need _E02 method
>> Patch 6 Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>
>>>   pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
>> Do not make any assumptions about PVHness based on IOREQ servers.  It
>> will not be true for usecases such as vGPU.
> Is this comment related to the last patch or is it a general one?  If
> it's the latter and we use XEN_X86_EMU_ACPI then I think this will not
> be an issue.

It was about that patch specifically, but XEN_X86_EMU_ACPI is definitely
the better way to go.

The only question is whether there might be other ACPI things we wish
to emulate in the future (PCI hotplug, by any chance?), in which case
should we be slightly more specific than just ACPI in the name?

~Andrew
