Re: [Xen-devel] PVH CPU hotplug design document
>>> On 18.01.17 at 11:34, <roger.pau@xxxxxxxxxx> wrote:
> On Tue, Jan 17, 2017 at 01:50:14PM -0500, Boris Ostrovsky wrote:
>> On 01/17/2017 12:45 PM, Roger Pau Monné wrote:
>> > On Tue, Jan 17, 2017 at 10:50:44AM -0500, Boris Ostrovsky wrote:
>> Part of the confusion, I think, is that PV hotplug is not really
>> hotplug as far as the Linux kernel is concerned.
>> > Hm, I'm not really sure I'm following, but I think that we could
>> > translate this Dom0 PV hotplug mechanism to PVH as:
>> >
>> > - Dom0 is provided with up to HVM_MAX_VCPUS local APIC entries in
>> >   the MADT, and the entries > dom0_max_vcpus are marked as disabled.
>> > - Dom0 has HVM_MAX_VCPUS vCPUs ready to be started, either by using
>> >   the local APIC or a hypercall.
>> >
>> > Would that match what's done for classic PV Dom0?
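As an aside, here is a minimal sketch of what building such an MADT
could look like, using the ACPICA structure and flag names (struct
acpi_madt_local_apic, ACPI_MADT_ENABLED); the builder function and the
1:1 APIC ID assignment are assumptions for illustration, not actual
Xen code:

    #include <acpi/actbl.h>   /* struct acpi_madt_local_apic et al. */

    /* Sketch: emit hvm_max_vcpus local APIC entries; entries at or
     * beyond dom0_max_vcpus are present but marked disabled. */
    static void fill_dom0_madt_lapics(struct acpi_madt_local_apic *lapic,
                                      unsigned int dom0_max_vcpus,
                                      unsigned int hvm_max_vcpus)
    {
        unsigned int i;

        for ( i = 0; i < hvm_max_vcpus; i++ )
        {
            lapic[i].header.type = ACPI_MADT_TYPE_LOCAL_APIC;
            lapic[i].header.length = sizeof(lapic[i]);
            lapic[i].processor_id = i;
            lapic[i].id = i;            /* assumed 1:1 APIC ID mapping */
            lapic[i].lapic_flags = i < dom0_max_vcpus ? ACPI_MADT_ENABLED
                                                      : 0;
        }
    }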
>>
>> To match what we have for PV dom0, I believe you'd provide an MADT with
>> opt_dom0_max_vcpus_max entries and mark all of them enabled.
>>
>> dom0 brings up all opt_dom0_max_vcpus_max vCPUs and then offlines
>> (opt_dom0_max_vcpus_min+1)..opt_dom0_max_vcpus_max. See
>> drivers/xen/cpu_hotplug.c:setup_cpu_watcher(). That's why I said it's
>> not hotplug but rather on/off-lining.
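The on/off-lining Boris refers to is driven from xenstore. A hedged
sketch of how a toolstack could flip the node that cpu_hotplug.c
watches (the cpu/<N>/availability layout matches the code quoted
below; set_vcpu_availability() itself is a hypothetical helper, not
the actual xl/libxl code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <xenstore.h>

    /* Hypothetical helper: mark a domain's vCPU online/offline by
     * writing the xenstore node watched by drivers/xen/cpu_hotplug.c. */
    static int set_vcpu_availability(int domid, unsigned int cpu,
                                     bool online)
    {
        struct xs_handle *xs = xs_open(0);
        const char *val = online ? "online" : "offline";
        char path[64];
        bool ok;

        if (!xs)
            return -1;
        snprintf(path, sizeof(path),
                 "/local/domain/%d/cpu/%u/availability", domid, cpu);
        ok = xs_write(xs, XBT_NULL, path, val, strlen(val));
        xs_close(xs);
        return ok ? 0 : -1;
    }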
>
> But how does Dom0 get the value of opt_dom0_max_vcpus_min? It doesn't seem to
> be propagated anywhere from domain_build.
I'm afraid Boris has given that (Xen) command line option a meaning
it doesn't have; please see the option's description in
xen-command-line.markdown. How many vCPUs should be offlined is,
IIRC, established by a boot-time setting inside Dom0.
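Per xen-command-line.markdown, the option only bounds how many vCPUs
Dom0 is built with; illustrative values:

    dom0_max_vcpus=8      (exactly 8 vCPUs)
    dom0_max_vcpus=4-8    (one vCPU per pCPU, clamped to the range 4-8)

Neither form tells the Dom0 kernel to offline anything;
opt_dom0_max_vcpus_min and opt_dom0_max_vcpus_max are only consumed
when sizing the domain at build time.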
> Also the logic in cpu_hotplug.c is weird IMHO:
>
> static int vcpu_online(unsigned int cpu)
> {
>         int err;
>         char dir[16], state[16];
>
>         sprintf(dir, "cpu/%u", cpu);
>         err = xenbus_scanf(XBT_NIL, dir, "availability", "%15s", state);
>         if (err != 1) {
>                 if (!xen_initial_domain())
>                         pr_err("Unable to read cpu state\n");
>                 return err;
>         }
>
>         if (strcmp(state, "online") == 0)
>                 return 1;
>         else if (strcmp(state, "offline") == 0)
>                 return 0;
>
>         pr_err("unknown state(%s) on CPU%d\n", state, cpu);
>         return -EINVAL;
> }
> [...]
> static int setup_cpu_watcher(struct notifier_block *notifier,
>                              unsigned long event, void *data)
> {
>         int cpu;
>         static struct xenbus_watch cpu_watch = {
>                 .node = "cpu",
>                 .callback = handle_vcpu_hotplug_event};
>
>         (void)register_xenbus_watch(&cpu_watch);
>
>         for_each_possible_cpu(cpu) {
>                 if (vcpu_online(cpu) == 0) {
>                         (void)cpu_down(cpu);
>                         set_cpu_present(cpu, false);
>                 }
>         }
>
>         return NOTIFY_DONE;
> }
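For context on the paths involved: relative xenstore paths such as the
"cpu/%u" above are resolved by xenstored against the connecting
domain's home directory, so the nodes actually consulted are of the
form (values illustrative):

    /local/domain/<domid>/cpu/0/availability = "online"
    /local/domain/<domid>/cpu/1/availability = "offline"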
>
> xenbus_scanf should return -ENOENT for Dom0, because those paths don't
> exist, and then all vCPUs are going to be left enabled? I'm quite sure
> I'm missing something here...
Well, the watch ought to trigger once the paths appear, at which
point the offlining should be happening. The explicit check in
setup_cpu_watcher() is indeed useful only for DomU.
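For completeness, the watch callback named above, roughly as it
appears in drivers/xen/cpu_hotplug.c of that era (a sketch, not an
authoritative quote): once a cpu/<N>/availability node appears or
changes, it re-reads the state and brings the vCPU up or down.

    static void handle_vcpu_hotplug_event(struct xenbus_watch *watch,
                                          const char **vec,
                                          unsigned int len)
    {
        unsigned int cpu;
        char *cpustr;
        const char *node = vec[XS_WATCH_PATH];

        /* Extract the CPU number from the path that fired... */
        cpustr = strstr(node, "cpu/");
        if (cpustr != NULL) {
            sscanf(cpustr, "cpu/%u", &cpu);
            /* ...then vcpu_hotplug() re-evaluates vcpu_online() and
             * calls cpu_up() or cpu_down() accordingly. */
            vcpu_hotplug(cpu);
        }
    }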
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel