
Re: [Xen-devel] [PATCH v3 01/11] x86/domctl: Add XEN_DOMCTL_set_avail_vcpus

On 11/22/2016 11:25 AM, Boris Ostrovsky wrote:


On 11/22/2016 11:01 AM, Jan Beulich wrote:
On 22.11.16 at 16:43, <boris.ostrovsky@xxxxxxxxxx> wrote:


On 11/22/2016 10:07 AM, Jan Beulich wrote:
On 22.11.16 at 15:37, <boris.ostrovsky@xxxxxxxxxx> wrote:


On 11/22/2016 08:59 AM, Jan Beulich wrote:
On 22.11.16 at 13:34, <boris.ostrovsky@xxxxxxxxxx> wrote:


On 11/22/2016 05:39 AM, Jan Beulich wrote:
On 22.11.16 at 11:31, <JBeulich@xxxxxxxx> wrote:
On 21.11.16 at 22:00, <boris.ostrovsky@xxxxxxxxxx> wrote:
This domctl is called when a VCPU is hot-(un)plugged to a guest (via
'xl vcpu-set').

The primary reason for adding this call is that for PVH guests the
hypervisor needs to send an SCI and set GPE registers. This is unlike
HVM guests, which have qemu to perform these tasks.
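
For illustration, a minimal self-contained sketch of the GPE/SCI sequence
described above; the toy types, the bit position and the helper names are
assumptions for this sketch, not Xen code:

#include <stdint.h>
#include <stdio.h>

/* Assumed GPE0 event bit for CPU hotplug; the real layout is defined
 * by the guest's ACPI tables and may differ. */
#define GPE0_CPUHP_BIT  (1u << 2)

/* Toy stand-in for hypervisor per-domain state -- not Xen's struct domain. */
struct toy_domain {
    uint32_t avail_vcpus;   /* guest-visible VCPU availability bitmap */
    uint8_t  gpe0_sts;      /* emulated GPE0 status block */
    uint8_t  gpe0_en;       /* emulated GPE0 enable block */
};

/* Stand-in for injecting the ACPI SCI into the guest. */
static void send_sci(struct toy_domain *d)
{
    (void)d;
    printf("SCI raised\n");
}

/* On a VCPU hot-(un)plug: update the availability map, latch the
 * CPU-hotplug event in GPE0_STS, and raise an SCI if it is enabled. */
static void vcpu_hotplug_notify(struct toy_domain *d, uint32_t new_map)
{
    d->avail_vcpus = new_map;
    d->gpe0_sts |= GPE0_CPUHP_BIT;
    if (d->gpe0_en & GPE0_CPUHP_BIT)
        send_sci(d);
}

int main(void)
{
    struct toy_domain d = { .avail_vcpus = 0x1, .gpe0_en = GPE0_CPUHP_BIT };
    vcpu_hotplug_notify(&d, 0x3);   /* "hotplug" VCPU 1 */
    return 0;
}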

And the tool stack can't do this?

For the avoidance of further misunderstandings: Of course likely not
completely on its own, but by using a (to be introduced) more low-level
hypervisor interface (setting arbitrary GPE bits, with the SCI raised as
needed, or the SCI raising being another hypercall).

So you are suggesting breaking up XEN_DOMCTL_set_avail_vcpus into

XEN_DOMCTL_set_acpi_reg(io_offset, length, val)
XEN_DOMCTL_set_avail_vcpus(avail_vcpus_bitmap)
XEN_DOMCTL_send_virq(virq)

(with perhaps set_avail_vcpus folded into set_acpi_reg)?
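
For concreteness, purely illustrative argument layouts for the three calls
above; the field names, widths and the 128-VCPU bound are assumptions, not
taken from any posted patch:

#include <stdint.h>

struct acpi_reg_write_sketch {      /* XEN_DOMCTL_set_acpi_reg */
    uint16_t io_offset;             /* offset into the emulated ACPI I/O block */
    uint8_t  length;                /* access width, in bytes */
    uint32_t val;                   /* value to write */
};

struct avail_vcpus_sketch {         /* XEN_DOMCTL_set_avail_vcpus */
    uint32_t avail_vcpus[4];        /* availability bitmap (128 VCPUs here) */
};

struct send_virq_sketch {           /* XEN_DOMCTL_send_virq */
    uint32_t virq;                  /* e.g. the SCI's virtual IRQ */
};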

Well, I don't see what set_avail_vcpus would be good for
considering that during v2 review you've said that you need it
just for the GPE modification and SCI sending.


Someone needs to provide the hypervisor with the new number of available
(i.e. hot-plugged/unplugged) VCPUs, hence the name of the domctl. GPE/SCI
manipulation is part of that update.

(I didn't say this during the v2 review, but I should have.)

And I've just found that need while looking over patch 8. With that I'm
not sure the splitting would make sense, although we may find it
necessary to fiddle with other GPE bits down the road.

Just to make sure we are talking about the same thing:
XEN_DOMCTL_set_acpi_reg is sufficient for both the GPE registers and the
CPU map (or any other ACPI register, should the need arise).

Well, my point is that as long as we continue to need
set_avail_vcpus (which I hear you say we do need), I'm not
sure the splitting would be helpful (minus the "albeit" part
above).


So the downside of having set_avail is that if we ever find the need to
touch other ACPI registers, we will be left with a useless (or at least
redundant) domctl.

Let me try implementing set_acpi_reg and see if it looks good enough. If
people don't like it, I'll go back to set_avail_vcpus.

(Apparently I replied to Jan only; resending to everyone.)

I have a prototype that replaces XEN_DOMCTL_set_avail_vcpus with XEN_DOMCTL_acpi_access, and it seems to work OK. The toolstack needs to perform two (or more, for guests with more than 32 VCPUs) hypercalls, and the logic on the hypervisor side is almost the same as the ioreq handling that this series added in patch 8.
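
A minimal sketch of the toolstack side, assuming a hypothetical
xc_acpi_access_write() wrapper and an assumed CPU-map register offset,
just to show why guests with more than 32 VCPUs need more than one
hypercall:

#include <stdint.h>

/* Stand-in for the real libxc wrapper around the new domctl; the name,
 * signature and behaviour are assumptions for this sketch. */
int xc_acpi_access_write(int domid, unsigned int offset,
                         unsigned int len, const void *buf)
{
    (void)domid; (void)offset; (void)len; (void)buf;
    return 0;   /* would issue XEN_DOMCTL_acpi_access here */
}

#define ACPI_CPU_MAP_OFFSET  0x00   /* assumed offset of the CPU availability map */

/* Push the VCPU availability bitmap to the hypervisor in 4-byte chunks;
 * guests with more than 32 VCPUs therefore need more than one hypercall. */
int set_avail_vcpus(int domid, const uint8_t *bitmap, unsigned int max_vcpus)
{
    unsigned int bytes = (max_vcpus + 7) / 8;
    unsigned int off;

    for (off = 0; off < bytes; off += 4) {
        unsigned int len = (bytes - off < 4) ? bytes - off : 4;
        int rc = xc_acpi_access_write(domid, ACPI_CPU_MAP_OFFSET + off,
                                      len, bitmap + off);
        if (rc)
            return rc;
    }
    return 0;
}

int main(void)
{
    uint8_t map[8] = { 0x0f };          /* 64 possible VCPUs, 4 available */
    return set_avail_vcpus(1, map, 64); /* two chunks -> two "hypercalls" */
}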

However, I have now realized that this interface will not be available to PV guests (and it will only become available to HVM guests when we move hotplug from qemu to the hypervisor). And it's x86-specific.

This means that PV guests will not know the number of available VCPUs, and therefore we will not be able to enforce it. OTOH, we don't know how to do that anyway, since PV guests bring up all VCPUs and then offline them.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

