Re: [Xen-devel] [PATCH] pvops: fix "xm save -c" issue
On Fri, Dec 17, 2010 at 09:43:16AM +0000, Ian Campbell wrote:
> On Thu, 2010-12-16 at 05:49 +0000, Kenji Wakamiya wrote:
> > Hi Konrad,
> >
> > Konrad Rzeszutek Wilk wrote:
> > >> When the guest is unpaused, stop_machine() in manage.c returns as
> > >> the suspend is cancelled. In that case, I think dpm_resume_end()
> > >> should not be called after stop_machine().
> > >>
> > >> I tested vbd and net, and the guest did not hang.
> > >
> > > OK. Does 'xm save' (so no -c) and then resume work with this patch?
> > > (I would think so, but I am curious whether you tested this.)
> > >
> > >> But I'm not sure if this is the right way...
> > >
> > > Looks OK to me; I just want to make sure that the normal 'xm save'
> > > still works after this.
> >
> > I tested normal 'xm save' with this patch and made sure it works well.
> >
> > When a normal save/restore is performed, the domain is suspended
> > without cancel, unlike 'save -c', so dpm_resume_end() is called. This
> > scenario hasn't changed even after applying this patch.
>
> With this change, how is the effect of dpm_suspend_start() undone in
> the suspend-cancelled case?
>
> Currently we have:
>     dpm_suspend_start(PMSG_SUSPEND)
>     xs_suspend
>     dpm_suspend_noirq(PMSG_SUSPEND)
>     SUSPEND
>     dpm_resume_noirq(PMSG_RESUME)
>     xs_resume or xs_suspend_cancel
>     dpm_resume_end(PMSG_RESUME)
>
> which seems nicely nested and logical, but by only calling
> dpm_resume_end() in the non-cancelled case we seem to be unbalancing
> things.
>
> Do we need some sort of dpm_resume_cancel, or some way of pushing the
> cancelled flag down into the individual xenbus_device.resume handlers?
>
> Should we maybe simply be using a different PMSG_XXX in the cancelled
> case? Is this what one of PMSG_RESTORE or PMSG_RECOVER means?
>
> It looks like, to propagate the PMSG_* to the actual device resume
> functions, we would need to provide a pm_ops for the xenbus_frontend
> bus instead of relying on the legacy handlers. This is probably an
> independently good idea anyway.

ping? Kenji, any ideas or patches to address Ian's comments?
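For readers without the source at hand, the sequence Ian lists corresponds roughly to do_suspend() in drivers/xen/manage.c. Below is a minimal sketch of that flow with the change under discussion folded in; error handling is trimmed, the header choices are approximate, and the exact shape of the cancelled-flag plumbing is an assumption for illustration, not the actual patch.

/*
 * Minimal sketch of do_suspend() from drivers/xen/manage.c as described
 * in this thread.  Illustrative only: error handling is omitted and the
 * details of the 'cancelled' plumbing are assumed, not copied from the
 * patch.
 */
#include <linux/stop_machine.h>
#include <linux/suspend.h>
#include <xen/xenbus.h>		/* xs_suspend() and friends (approximate) */

static int xen_suspend(void *data);	/* issues the suspend hypercall;
					 * defined elsewhere in manage.c */

static void do_suspend(void)
{
	int cancelled = 1;

	/* Error handling omitted throughout for brevity. */
	dpm_suspend_start(PMSG_SUSPEND);
	xs_suspend();
	dpm_suspend_noirq(PMSG_SUSPEND);

	/*
	 * xen_suspend() clears 'cancelled' only if the domain really
	 * suspended; with "xm save -c" the guest is unpaused in place
	 * and 'cancelled' stays set.
	 */
	stop_machine(xen_suspend, &cancelled, cpumask_of(0));

	dpm_resume_noirq(PMSG_RESUME);

	if (cancelled)
		xs_suspend_cancel();
	else
		xs_resume();

	/*
	 * The change under discussion: skip the full device resume when
	 * the suspend was cancelled.  Ian's objection is that this leaves
	 * the dpm_suspend_start() above without a matching
	 * dpm_resume_end().
	 */
	if (!cancelled)
		dpm_resume_end(PMSG_RESUME);
}

As the sketch makes visible, the cancelled path still runs dpm_suspend_start() and the paired noirq calls, but never dpm_resume_end(), which is exactly the imbalance Ian is asking about.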