
RE: [Xen-devel] [Question] vcpu-set before or after xen_pause_requested



Liu, Jinsong writes ("RE: [Xen-devel] [Question] vcpu-set before or after 
xen_pause_requested"):
> I think there is no problem here.  In our test of the patch, 16
> xenstore changes in a row (vcpu0~15) triggered 16 events.

Firstly, testing is no way to eliminate the possibility of races.
That must be done by analysis.

Secondly, yes, you will in the current implementation get 16 watch
triggers for 16 changes (although that's not guaranteed to remain the
case).  But if you don't do xs_read in time for one of them you will
miss one of the 16 different values.

> I noticed that qemu watches xenstore nodes and handles events in a
> closed-loop way: for example, usb-add/usb-del watch the
> '/local/domain/0/device-model/domid/command' node and respond via
> xenstore with 'usb-added' / 'usb-deleted'.  It's one way to
> communicate between qemu and xenstore.

Yes, that is how it should be done.

> However, is it the only way to communicate between qemu and xenstore, 
> or between a PV guest and xenstore?
> I checked the 'xm vcpu-set' command path; it currently works for PV 
> domains in an open-loop way:
> 1) The PV guest registers a xenbus_watch:
>         static struct xenbus_watch cpu_watch = {
>                 .node = "cpu",
>                 .callback = handle_vcpu_hotplug_event};
> 
>         (void)register_xenbus_watch(&cpu_watch);
> 2) When xend writes the xenstore cpu node or its sub-level nodes, this 
> triggers the callback handle_vcpu_hotplug_event(), which then calls 
> xenbus_scanf() / xenbus_read() ...

This is broken.  If for any reason multiple vcpu-set actions happen in
quick succession, before the PV guest is scheduled, the
xenbus_scanf/read will see only the last one.

The protocol should be fixed before we implement any more of it.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

