
Re: [Xen-devel] [PATCH] Use freeze/thaw/restore PM events for Guest suspend/resume/checkpoint

On Mon, Feb 14, 2011 at 11:06 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Mon, 2011-02-14 at 16:21 +0000, Shriram Rajagopalan wrote:
>> Use PM_FREEZE, PM_THAW and PM_RESTORE power events for
>> suspend/resume functionality, instead of PM_SUSPEND and
>> PM_RESUME. Use of these pm events also fixes the Xen Guest
>> hangup when taking checkpoints. When a suspend event is cancelled
>> (i.e. while taking checkpoints once/continuously), we use PM_THAW
>> instead of PM_RESUME. PM_RESTORE is used when suspend is not
>> cancelled. See Documentation/power/devices.txt and linux/pm.h
>> for more info about freeze, thaw and restore. The sequence of
>> pm events in a suspend-resume scenario is shown below.
>>       dpm_suspend_start(PMSG_FREEZE);
>>               dpm_suspend_noirq(PMSG_FREEZE);
>>                        sysdev_suspend(PMSG_FREEZE);
>>                        cancelled = suspend_hypercall()
>>                        sysdev_resume();
>>                dpm_resume_noirq(cancelled ? PMSG_THAW : PMSG_RESTORE);
>>        dpm_resume_end(cancelled ? PMSG_THAW : PMSG_RESTORE);
> Thanks.
> Which tree/branch is this against?
Oh, I thought that info would be present in the git index line
"index 5413248..ff32ffb 100644".
It's against my local git branch (which tracks xen/stable-2.6.32.x and
is up to date).
Sorry, I am still a bit of a git newbie.
> Can you please at least do the dev_pm_ops as a separate patch to allow
> bisectability etc. (generally each patch should be a single logical
> change, so if the remainder can be sensibly split too it is worth doing
> so).
> Did you test regular save/restore as well as cancelled migrations? What
> about PVHVM guests?
>> +static struct dev_pm_ops xenbus_pm_ops = {
>> +     .suspend = xenbus_dev_suspend,
>> +     .resume  = xenbus_dev_resume,
>> +     .freeze  = xenbus_dev_suspend,
>> +     .thaw    = xenbus_dev_cancel,
>> +     .restore = xenbus_dev_resume,
>> +};
> Perhaps xenbus_dev_thaw?
> Are suspend/freeze and resume/restore really the same?
Semantically they are not. From the documentation in linux/pm.h:

suspend/resume events deal with changes in sleep states. Devices
are quiesced on suspend and may be put into a lower-power state.
(==> xm suspend/resume?)

freeze/thaw/restore are used for hibernation.
 freeze: save state to memory before hibernation; quiesce the device but
  do not change its power state.
 thaw: undo the changes made by freeze, if hibernation fails.
 restore: restore device state from the hibernation image.
 (==> xm save/restore/checkpoint)

 I looked at the Xen frontend drivers (blkfront, netfront, etc.). They only use
 the resume handler to tear down and re-establish contact with the backend.

So, in our case, suspend/freeze and resume/restore are basically the same,
and a suspend-cancel is a thaw event.
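As a standalone illustration of that reasoning (not kernel code; the enum values just mirror the PMSG_* names in linux/pm.h), the resume-side event choice from the sequence quoted earlier boils down to:

```c
#include <assert.h>

/* Standalone model of the resume-side event selection: a cancelled
 * suspend (i.e. a checkpoint) is a thaw, while a completed
 * suspend/restore cycle is a restore. Illustration only. */
enum pm_event { PMSG_FREEZE, PMSG_THAW, PMSG_RESTORE };

static enum pm_event resume_event(int cancelled)
{
    return cancelled ? PMSG_THAW : PMSG_RESTORE;
}
```

This is exactly the `cancelled ? PMSG_THAW : PMSG_RESTORE` expression passed to dpm_resume_noirq()/dpm_resume_end() in the sequence above.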
> Once we've transitioned to the PMSG_FREEZE way of doing things do we
> still need to keep the other hooks around? If not then the other ones
> could be renamed as well?
If your question is whether we can change

static struct dev_pm_ops xenbus_pm_ops = {
     .suspend = xenbus_dev_suspend,
     .resume  = xenbus_dev_resume,
     .freeze  = xenbus_dev_suspend,
     .thaw    = xenbus_dev_cancel,
     .restore = xenbus_dev_resume,
};

to just

static struct dev_pm_ops xenbus_pm_ops = {
     .freeze  = xenbus_dev_freeze,
     .thaw    = xenbus_dev_thaw,
     .restore = xenbus_dev_restore,
};

then the answer is no, AFAICS, from the code in drivers/base/power/main.c
(pm_op function).
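For readers who have not looked at that file: pm_op() essentially maps the PM event type to the matching dev_pm_ops callback. A much-simplified standalone sketch of that dispatch (hypothetical trimmed types and dummy callbacks; the real kernel function takes a pm_message_t, handles more events, and invokes the callback itself):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of drivers/base/power/main.c:pm_op(). */
enum pm_event { PM_EVENT_SUSPEND, PM_EVENT_RESUME,
                PM_EVENT_FREEZE, PM_EVENT_THAW, PM_EVENT_RESTORE };

typedef int (*pm_cb)(void);

struct dev_pm_ops {
    pm_cb suspend, resume, freeze, thaw, restore;
};

/* Pick the callback that corresponds to the event; if the driver
 * did not fill in that slot, the result is NULL and the core
 * simply skips the device for that phase. */
static pm_cb pm_op(const struct dev_pm_ops *ops, enum pm_event ev)
{
    switch (ev) {
    case PM_EVENT_SUSPEND: return ops->suspend;
    case PM_EVENT_RESUME:  return ops->resume;
    case PM_EVENT_FREEZE:  return ops->freeze;
    case PM_EVENT_THAW:    return ops->thaw;
    case PM_EVENT_RESTORE: return ops->restore;
    }
    return NULL;
}

/* Dummy callbacks standing in for the xenbus handlers. */
static int dummy_freeze(void) { return 0; }
static int dummy_thaw(void)   { return 0; }
```

Since each event looks up its own slot, dropping the .suspend/.resume entries would leave those events with no callback at all, which is why the answer above is no.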
> On Mon, 2011-02-14 at 16:24 +0000, Shriram Rajagopalan wrote:
>> parts of this patch were based on Kazuhiro Suzuki's initial patch to
>> fix the same issue. Refer to
>> http://lists.xensource.com/archives/html/xen-devel/2011-02/msg00371.html
>> for further details.
> It's worth mentioning this in the commit message, also please CC
> Kazuhiro since he has been working on this too, has a repro scenario
> etc.
will do.
> Ian.

Xen-devel mailing list


