
Re: [Xen-devel] [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus





On 11/22/2016 08:59 AM, Jan Beulich wrote:
> On 22.11.16 at 13:34, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> On 11/22/2016 05:39 AM, Jan Beulich wrote:
>>> On 22.11.16 at 11:31, <JBeulich@xxxxxxxx> wrote:
>>>> On 21.11.16 at 22:00, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>>> This domctl is called when a VCPU is hot-(un)plugged to a guest (via
>>>>> 'xl vcpu-set').
>>>>>
>>>>> The primary reason for adding this call is that for PVH guests
>>>>> the hypervisor needs to send an SCI and set GPE registers. This is
>>>>> unlike HVM guests, which have qemu to perform these tasks.
>>>>
>>>> And the tool stack can't do this?
>>>
>>> For the avoidance of further misunderstandings: of course likely
>>> not completely on its own, but by using a (to be introduced) more
>>> low-level hypervisor interface (setting arbitrary GPE bits, with the
>>> SCI raised as needed, or the SCI raising being another hypercall).
>>
>> So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into
>>
>> XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
>> XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
>> XEN_DOMCTL_send_virq(virq)
>>
>> (with perhaps set_avail_vcpus folded into set_acpi_reg)?
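For concreteness, the split being discussed might look roughly like the following at the public-interface level. All struct and field names here are illustrative guesses, not taken from the actual patch series or from Xen's real domctl ABI.

```c
/* Illustrative sketch of the three proposed domctls; these struct
 * names and field layouts are assumptions, not the real Xen ABI. */
#include <stdint.h>

struct xen_domctl_acpi_access {      /* XEN_DOMCTL_set_acpi_reg */
    uint8_t  io_offset;              /* offset into the guest's GPE block */
    uint8_t  length;                 /* access width in bytes */
    uint32_t val;                    /* bits to write (status/enable) */
};

struct xen_domctl_avail_vcpus {      /* XEN_DOMCTL_set_avail_vcpus */
    uint32_t num;                    /* number of valid bits in the map */
    uint8_t  bitmap[16];             /* bit N set => vCPU N available */
};

struct xen_domctl_send_virq {        /* XEN_DOMCTL_send_virq */
    uint32_t virq;                   /* e.g. an SCI-like virtual IRQ */
};
```

Folding set_avail_vcpus into set_acpi_reg would amount to dropping the middle struct and expressing availability changes as GPE-block writes.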

> Well, I don't see what set_avail_vcpus would be good for,
> considering that during the v2 review you said that you need it
> just for the GPE modification and SCI sending.


Someone needs to provide the hypervisor with the new number of available (i.e. hot-plugged/unplugged) VCPUs, hence the name of the domctl. The GPE/SCI manipulation is part of that update.

(I didn't say this during the v2 review, and I should have.)
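Concretely, after 'xl vcpu-set' the toolstack would hand the hypervisor an availability bitmap of this kind. A minimal sketch of building one; the helper name and layout are mine, not from the patch:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: mark the first 'avail' of 'max' vCPUs as
 * available, one bit per vCPU, least-significant bit first. */
void build_avail_bitmap(uint8_t *map, unsigned int max, unsigned int avail)
{
    memset(map, 0, (max + 7) / 8);            /* clear the whole map */
    for (unsigned int i = 0; i < avail && i < max; i++)
        map[i / 8] |= (uint8_t)(1u << (i % 8));  /* set bit for vCPU i */
}
```

After hot-plugging to 10 of 16 vCPUs, for example, the first byte would be 0xff and the second 0x03.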



> And whether to split the other two, or simply send an SCI whenever
> GPE bits have been modified, depends on the specific requirements
> the ACPI spec puts on us. Or maybe it could always be folded,
> with (if necessary) SCI sending being controlled by a flag.

True. The spec says that all ACPI events generate an SCI.
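Folding it that way might look like the following sketch, assuming the GPE status register is modeled as a plain bitmask; the flag name and helper are hypothetical, not part of any existing interface:

```c
#include <stdint.h>

#define ACPI_REG_RAISE_SCI (1u << 0)   /* hypothetical control flag */

/* Set GPE status bits; return nonzero if the caller should inject
 * an SCI into the guest as a result of this write. */
int gpe_write(uint32_t *gpe_sts, uint32_t bits, unsigned int flags)
{
    *gpe_sts |= bits;                          /* latch the event bits */
    return (flags & ACPI_REG_RAISE_SCI) != 0;  /* SCI wanted? */
}
```

If, per the spec, every GPE event must raise an SCI anyway, the flag collapses to always-on and the separate send_virq call becomes unnecessary.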

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

