
Re: [Xen-devel] [PATCH 00/10] PVH VCPU hotplug support



>>>> Boris Ostrovsky (10):
>>>>   x86/domctl: Add XEN_DOMCTL_set_avail_vcpus
>>> Why is this necessary?  Given that a paravirtual hotplug mechanism
>>> already exists, why isn't its equivalent mechanism suitable?
>> PV guests register a xenstore watch and the toolstack updates the
>> CPU's "availability" entry. The ACPI codepath (at least for Linux
>> guests) is not involved at all.
>>
>> I don't think we can use anything like that in the hypervisor.
> There must be something in the hypervisor; what currently prevents PV
> guests from ignoring xenstore and onlining CPUs themselves?
>
> Or do we currently have nothing... ?


I don't think we have anything. libxl__set_vcpuonline_xenstore() is the
only thing that the toolstack does.

HVM is *possibly* more strict in that onlining involves qemu, but I am
not even sure about that (especially with qemu-trad, which also
triggers hotplug via a xenstore watch).
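
For reference, here is roughly all the "enforcement" that exists today
on the PV side -- a minimal sketch using libxenstore directly, with the
path layout following what libxl__set_vcpuonline_xenstore() writes and
what the Linux guest's drivers/xen/cpu_hotplug.c watch fires on (error
handling trimmed for brevity):

    #include <stdio.h>
    #include <string.h>
    #include <xenstore.h>

    static int set_vcpu_online(struct xs_handle *xs, unsigned int domid,
                               unsigned int vcpu, int online)
    {
        char path[64];
        const char *val = online ? "online" : "offline";

        /* The node the guest kernel watches; flipping it is the whole
         * PV hotplug protocol. */
        snprintf(path, sizeof(path),
                 "/local/domain/%u/cpu/%u/availability", domid, vcpu);

        return xs_write(xs, XBT_NULL, path, val, strlen(val)) ? 0 : -1;
    }

Nothing on the Xen side checks any of this, so a guest that simply
ignores the watch can online the VCPU anyway -- which is the gap the
new domctl is meant to close.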


>
>>
>>>>   acpi: Define ACPI IO registers for PVH guests
>>> Can Xen use pm1b, or does there have to be a pm1a available to the guest?
>> pm1a is a required block (unlike pm1b); ACPICA, for example, always
>> checks pm1a first when handling an SCI.
>>
>> (And how would having only pm1b have helped?)
> For the HVM case, I think we are going to need one pm1 block
> belonging to qemu and one belonging to Xen.

The only place we use the pm1 block in the hypervisor is for the
pmtimer (and I am actually not sure I see how qemu uses it for Xen
guests).
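
To spell out the pmtimer part: all it amounts to is the guest reading
the fixed-frequency ACPI PM timer counter. A guest-side sketch, with
the caveats that 0xb008 is only the port Xen has traditionally
advertised for HVM guests (a real guest must take it from the FADT's
X_PM_TMR_BLK), and that the wrap arithmetic assumes the FADT advertises
a 32-bit counter (TMR_VAL_EXT):

    #include <stdint.h>
    #include <sys/io.h>    /* inl(); x86 Linux, requires iopl(3) */

    #define PMTMR_PORT 0xb008    /* assumption: take it from the FADT */
    #define PMTMR_HZ   3579545u  /* fixed ACPI PM timer frequency */

    /* Reading the counter traps to Xen's pmtimer emulation
     * (xen/arch/x86/hvm/pmtimer.c). */
    static uint32_t pmtmr_read(void)
    {
        return inl(PMTMR_PORT);
    }

    /* With a 32-bit counter, unsigned subtraction handles a single
     * wrap for free. */
    static uint64_t pmtmr_delta_ns(uint32_t start, uint32_t end)
    {
        return (uint64_t)(end - start) * 1000000000ull / PMTMR_HZ;
    }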

>
>>>>   acpi: Make pmtimer optional in FADT
>>>>   acpi: PVH guests need _E02 method
>>> Patch 6 Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>
>>>>   pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
>>> Do not make any assumptions about PVHness based on IOREQ servers.
>>> That assumption will not hold for use cases such as vGPU.
>> Is this comment related to the last patch or is it a general one?  If
>> it's the latter and we use XEN_X86_EMU_ACPI then I think this will not
>> be an issue.
> It was about that patch specifically, but XEN_X86_EMU_ACPI is definitely
> the better way to go.
>
> The only question is whether there might be other ACPI things we
> wish to emulate in the future (PCI hotplug by any chance?), in which
> case, should we be slightly more specific than just ACPI in the name?

The flag would be meant to say that no ACPI accesses are emulated by
qemu, and that would be true for any accesses by PVH guests --- whether
CPU- or PCI-related.

But the name is somewhat misleading, even without considering other
ACPI-related things: we do emulate ACPI, but in the hypervisor and not
in qemu. (As ridiculous as it sounds, that was actually one of the
reasons why I didn't use a flag.) EMU_NO_DM? But that's the whole PVH
thing.
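
For concreteness, a hypothetical sketch of what I have in mind,
modelled on the existing XEN_X86_EMU_* bits in
xen/include/public/arch-x86/xen.h and the has_vlapic()-style helpers --
the name, bit position and helper below are all made up, not a
committed interface:

    /* Hypothetical flag, following the XEN_X86_EMU_* pattern. */
    #define _XEN_X86_EMU_ACPI      10
    #define XEN_X86_EMU_ACPI       (1U << _XEN_X86_EMU_ACPI)

    /* Hypothetical helper, following has_vlapic()/has_vpit() in
     * asm-x86/domain.h. */
    #define has_acpi(d) \
        (!!((d)->arch.emulation_flags & XEN_X86_EMU_ACPI))

The IO handler installation would then key off has_acpi(d) rather than
off "is this domain PVH" or off the presence of an IOREQ server.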


-boris

