Re: [Xen-devel] [PATCH 4 of 5 V3] tools/libxl: Control network buffering in remus callbacks [and 1 more messages] [and 1 more messages]
On Tue, Nov 12, 2013 at 9:38 AM, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx> wrote:
Shriram Rajagopalan writes ("Re: [PATCH 4 of 5 V3] tools/libxl: Control network buffering in remus callbacks [and 1 more messages] [and 1 more messages]"):

The nested-ao patch makes sense for Remus even without fixing this timeout
issue. I can modify my stuff accordingly, probably by creating a nested-ao per
iteration and dropping it at the start of the next iteration (a rough sketch of
what I have in mind is below).
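This is only a sketch, and it assumes the nested-ao helpers from your series
end up looking roughly like libxl__nested_ao_create()/libxl__nested_ao_free();
the checkpoint_ao field and the function name are made up for illustration:

    /* Sketch only: give each Remus checkpoint iteration its own nested AO,
     * so that timeouts/events registered during that iteration are torn
     * down when the iteration ends.  dss->checkpoint_ao is an illustrative
     * field, not an existing member of libxl__domain_suspend_state. */
    static void remus_start_new_checkpoint(libxl__domain_suspend_state *dss)
    {
        /* Drop the previous iteration's nested AO, if any. */
        if (dss->checkpoint_ao) {
            libxl__nested_ao_free(dss->checkpoint_ao);
            dss->checkpoint_ao = NULL;
        }

        /* Fresh nested AO for this iteration, hanging off the long-lived
         * save AO (assumed helper from the nested-ao patch). */
        dss->checkpoint_ao = libxl__nested_ao_create(dss->ao);

        /* ... register the network-buffer release timeout etc. against
         *     dss->checkpoint_ao instead of dss->ao ... */
    }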
However, the timeout part is not convincing enough. For example,
libxl__domain_suspend_common_callback [the version before your patch]
has two 6-second wait loops in the worst case:

    LOG(DEBUG, "issuing %s suspend request via XenBus control node",
        dss->hvm ? "PVHVM" : "PV");

    libxl__domain_pvcontrol_write(gc, XBT_NULL, domid, "suspend");

    LOG(DEBUG, "wait for the guest to acknowledge suspend request");
    watchdog = 60;
    while (!strcmp(state, "suspend") && watchdog > 0) {
        usleep(100000);

        state = libxl__domain_pvcontrol_read(gc, XBT_NULL, domid);
        if (!state) state = "";

        watchdog--;
    }

and then once again:

    LOG(DEBUG, "wait for the guest to suspend");
    watchdog = 60;
    while (watchdog > 0) {
        xc_domaininfo_t info;

        usleep(100000);
        ret = xc_domain_getinfolist(CTX->xch, domid, 1, &info);
Now I know where the 200ms overhead per checkpoint comes from: each of these
loops sleeps 100ms before it re-checks, so in practice every checkpoint pays
roughly 200ms even when the guest suspends promptly.

Shouldn't this also be made into an event loop, irrespective of whether it is
invoked in Remus' context or in the normal suspend/resume/save/restore/migrate
path? Even if the polling stays, note that the Remus checkpoint interval is
much shorter than these delays, typically 25-100ms.
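For example, the ack wait could be driven off a xenstore watch plus a single
timeout. A rough sketch using the existing libxl event helpers; the helper
names (wait_for_suspend_ack, suspend_ack_watch, suspend_ack_timeout) and the
dss fields (guest_watch, guest_timeout) are made up for illustration, and the
ev_* signatures are quoted from memory rather than from libxl_internal.h:

    /* Sketch only (not a drop-in patch): replace the 100ms polling loop with
     * a xenstore watch on the guest's control/shutdown node plus one 6s
     * timeout. */
    static void suspend_ack_watch(libxl__egc *egc, libxl__ev_xswatch *xsw,
                                  const char *watch_path,
                                  const char *event_path);
    static void suspend_ack_timeout(libxl__egc *egc, libxl__ev_time *ev,
                                    const struct timeval *requested_abs);

    static int wait_for_suspend_ack(libxl__gc *gc,
                                    libxl__domain_suspend_state *dss)
    {
        int rc;

        /* Callback fires as soon as the guest acks, instead of after the
         * next 100ms usleep(). */
        rc = libxl__ev_xswatch_register(gc, &dss->guest_watch,
                                        suspend_ack_watch,
                                        GCSPRINTF("%s/control/shutdown",
                                            libxl__xs_get_dompath(gc, dss->domid)));
        if (rc) goto out;

        /* A single 6000ms timeout replaces the 60 * 100ms watchdog. */
        rc = libxl__ev_time_register_rel(gc, &dss->guest_timeout,
                                         suspend_ack_timeout, 6000);
        if (rc) goto out;

        return 0;   /* completion is signalled from the callbacks */

     out:
        libxl__ev_xswatch_deregister(gc, &dss->guest_watch);
        return rc;
    }

With something like this the ack is noticed as soon as the guest writes it,
rather than on the next 100ms tick, which matters when the whole checkpoint
budget is 25-100ms.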
The only reason it might get committed to staging without the other Remus
patches would be to fix the issue I cited above.

cheers
shriram