
Re: [Xen-devel] PVH CPU hotplug design document



On 01/17/2017 12:45 PM, Roger Pau Monné wrote:
> On Tue, Jan 17, 2017 at 10:50:44AM -0500, Boris Ostrovsky wrote:
>> On 01/17/2017 10:33 AM, Jan Beulich wrote:
>>>>>> On 17.01.17 at 16:27, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>> On 01/17/2017 09:44 AM, Jan Beulich wrote:
>>>>>>>> On 17.01.17 at 15:13, <roger.pau@xxxxxxxxxx> wrote:
>>>>>> There's only one kind of PVHv2 guest that doesn't require ACPI, and
>>>>>> that guest type also doesn't have emulated local APICs. We agreed
>>>>>> that this model was interesting for things like unikernel DomUs, but
>>>>>> that's the only reason why we are providing it. Not that full OSes
>>>>>> couldn't use it, but it seems pointless.
>>>>> Your writing things this way makes me notice another possible design
>>>>> issue here: requiring ACPI is a bad thing imo, with even bare hardware
>>>>> going in different directions for at least some use cases (SFI being
>>>>> one example). Hence I think ACPI should - like on bare hardware -
>>>>> remain an optional thing. Which in turn requires _all_ information
>>>>> obtained from ACPI (if available) to also be available another way.
>>>>> And this other way might be hypercalls in our case.
>>>> At the risk of derailing this thread: why do we need vCPU hotplug for
>>>> dom0 in the first place? What do we gain over "echo {1|0} >
>>>> /sys/devices/system/cpu/cpuX/online" ?
>>>>
>>>> I can see why this may be needed for domUs, where Xen can enforce the
>>>> number of vCPUs that are allowed to run (which we don't enforce now
>>>> anyway), but why for dom0?
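
[As an aside, the sysfs mechanism quoted above can also be driven
programmatically. A minimal sketch, assuming only the standard Linux sysfs
CPU hotplug path and no Xen-specific interface; `cpu_online_path()` and
`set_cpu_online()` are hypothetical helper names, not kernel or Xen API:]

```c
/*
 * Minimal sketch of PV-style vCPU on/off-lining from userspace, i.e.
 * the programmatic equivalent of
 * "echo {1|0} > /sys/devices/system/cpu/cpuX/online".
 * Not actual Xen or Linux code; helper names are illustrative.
 */
#include <stdio.h>

/* Build the sysfs control path for a given CPU; returns 0 on success. */
int cpu_online_path(unsigned int cpu, char *buf, size_t len)
{
    int n = snprintf(buf, len, "/sys/devices/system/cpu/cpu%u/online", cpu);
    return (n < 0 || (size_t)n >= len) ? -1 : 0;
}

/* Write "1" or "0" to the control file; requires root on a real system. */
int set_cpu_online(unsigned int cpu, int online)
{
    char path[64];
    FILE *f;
    int rc;

    if (cpu_online_path(cpu, path, sizeof(path)))
        return -1;
    f = fopen(path, "w");
    if (!f)
        return -1;
    rc = (fputs(online ? "1" : "0", f) >= 0) ? 0 : -1;
    fclose(f);
    return rc;
}
```

[Note that cpu0 typically has no `online` file on x86 Linux, since the boot
CPU can't be offlined there.]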
>>> Good that you now ask this too - that's the PV hotplug mechanism,
>>> and I've been saying all the time that this should be just fine for PVH
>>> (Dom0 and DomU).
>> I think domU hotplug has some value in that we can change the number of
>> VCPUs that the guest sees, and ACPI-based hotplug allows us to do that
>> in a "standard" manner.
>>
>> For dom0 this doesn't seem to be necessary, as it's a special domain
>> available only to the platform administrator.
>>
>> Part of the confusion, I think, is that PV hotplug is not really
>> hotplug as far as the Linux kernel is concerned.
> Hm, I'm not really sure I'm following, but I think that we could
> translate this Dom0 PV hotplug mechanism to PVH as:
>
>  - Dom0 is provided with up to HVM_MAX_VCPUS local APIC entries in the
>    MADT, and the entries > dom0_max_vcpus are marked as disabled.
>  - Dom0 has HVM_MAX_VCPUS vCPUs ready to be started, either by using
>    the local APIC or a hypercall.
>
> Would that match what's done for classic PV Dom0?
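
[The MADT layout Roger proposes above could be sketched roughly as follows.
This is illustrative, not Xen's actual MADT-building code; the structure
layout and the Enabled bit follow the ACPI spec's Processor Local APIC
entry (MADT type 0), and the HVM_MAX_VCPUS value here is a placeholder:]

```c
/*
 * Sketch of an MADT populated with HVM_MAX_VCPUS local APIC entries,
 * where entries past dom0_max_vcpus are flagged as disabled.
 * Illustrative only; constants and helper names are not Xen's.
 */
#include <stdint.h>

#define HVM_MAX_VCPUS     128u  /* placeholder ceiling */
#define ACPI_MADT_ENABLED 0x1u  /* bit 0 of the Local APIC Flags field */

struct madt_local_apic {
    uint8_t  type;              /* 0 = Processor Local APIC */
    uint8_t  length;            /* always 8 */
    uint8_t  acpi_processor_id;
    uint8_t  apic_id;
    uint32_t flags;             /* bit 0: Enabled */
};

/* Fill all HVM_MAX_VCPUS entries, enabling only the first dom0_max_vcpus. */
void fill_madt_lapics(struct madt_local_apic *lapic,
                      unsigned int dom0_max_vcpus)
{
    unsigned int i;

    for (i = 0; i < HVM_MAX_VCPUS; i++) {
        lapic[i].type = 0;
        lapic[i].length = sizeof(lapic[i]);
        lapic[i].acpi_processor_id = (uint8_t)i;
        lapic[i].apic_id = (uint8_t)i;
        lapic[i].flags = (i < dom0_max_vcpus) ? ACPI_MADT_ENABLED : 0;
    }
}
```

[A guest scanning this MADT would count HVM_MAX_VCPUS possible processors
but only bring up the enabled ones at boot.]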

To match what we have for PV dom0, I believe you'd provide the MADT with
opt_dom0_max_vcpus_max entries and mark all of them enabled.

dom0 brings up all opt_dom0_max_vcpus_max VCPUs, and then offlines
(opt_dom0_max_vcpus_min+1)..opt_dom0_max_vcpus_max. See
drivers/xen/cpu_hotplug.c:setup_cpu_watcher(). That's why I said it's
not a hotplug but rather on/off-lining.
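
[The bring-up-then-offline behaviour described above could be sketched like
this. It is a simplification of what drivers/xen/cpu_hotplug.c does via its
xenstore watch, not the actual kernel logic; `settle_vcpus()` and the
`online[]` array are hypothetical stand-ins for the kernel's per-CPU state:]

```c
/*
 * Simplified model of PV dom0 vCPU "hotplug": all "max" vCPUs come up
 * first, then everything beyond "min" is offlined again - so it is
 * on/off-lining rather than true hotplug. Illustrative only; see
 * drivers/xen/cpu_hotplug.c:setup_cpu_watcher() for the real logic.
 */

/* online[cpu] ends up 1 for running vCPUs, 0 for offlined ones. */
void settle_vcpus(int *online, unsigned int min, unsigned int max)
{
    unsigned int cpu;

    /* Bring all "max" vCPUs up first (CPUs numbered from 0)... */
    for (cpu = 0; cpu < max; cpu++)
        online[cpu] = 1;

    /* ...then offline the excess, leaving "min" of them running. */
    for (cpu = min; cpu < max; cpu++)
        online[cpu] = 0;
}
```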

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

