RE: [Xen-devel] [Question] vcpu-set before or after xen_pause_requested
Ian Jackson wrote:
> Liu, Jinsong writes ("RE: [Xen-devel] [Question] vcpu-set before or
> after xen_pause_requested"):
> ...
>> We keep same (HVM) xm command --> xend server --> xenstore path as
>> PV domain. At /local/domain/(domid)/cpu, we setup vcpu watch and
>> handle at qemu side.
>
> I've looked at your patch 2 and it's not correct because there is no
> acknowledgement back to the utility which changes xenstore. You have
> to close the loop, if for no other reason than if there are two
> xenstore changes in a row which the receiving qemu-dm only gets around
> to dealing with after the second, it will see only the second value.

I think there is no problem here. In our test of the patch, 16 xenstore
changes in a row (vcpu0~15) triggered 16 events.

> How does the PV vcpu protocol deal with this ? Doesn't a PV guest
> find out about VCPU changes from Xen ?

The 'xm vcpu-set' command already works for PV. Nixon and Campbell
implemented the vcpu-set PV driver in drivers/xen/cpu_hotplug.c. CC'ing
them :)

Ian,

I noticed that qemu watches xenstore nodes and handles events in a
closed-loop way. For example, usb-add/usb-del watch the
'/local/domain/0/device-model/domid/command' node and respond to
xenstore with 'usb-added' / 'usb-deleted'. That is one way for qemu and
xenstore to communicate. However, is it the only way to communicate
between qemu and xenstore, or between PV and xenstore?

I checked the 'xm vcpu-set' command path; it currently works for a PV
domain in an open-loop way:

1). The PV guest registers a xenbus_watch:

    static struct xenbus_watch cpu_watch = {
        .node     = "cpu",
        .callback = handle_vcpu_hotplug_event,
    };
    (void)register_xenbus_watch(&cpu_watch);

2). When xend writes the xenstore cpu node or its sub-level nodes, the
callback handle_vcpu_hotplug_event() is triggered, which then calls
xenbus_scanf() / xenbus_read() ...

Before implementing our current patch, we in fact had 2 choices:

A).
keep the same HVM 'xm command --> xend --> xenstore' path as PV: qemu
watches '/local/domain/domid/cpu' and triggers its callback in an
open-loop way (similar to PV). The advantage is a unified 'xm command
--> xend --> xenstore' path for both PV and HVM.

B). qemu works in a closed-loop way, like the 'usb-add' and 'usb-del'
commands. The disadvantage is that we would need to add an HVM-specific
path to 'xm command --> xend --> xenstore', so there would be 2 paths
for the 'xm vcpu-set' command, 1 for PV and 1 for HVM. It's not
beautiful. BTW, since vcpu-set needs to watch a different xenstore node
from 'usb-add'/'usb-del', we cannot re-use the 'usb' path or functions
like signalDeviceModel(); it would need more code in 'xm command -->
xend --> xenstore'.

Our current patch uses scheme A). However, I think scheme B) could also
work fine. It depends on you; let me know your decision :)

Thanks,
Jinsong

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel