
Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend mode




>>
>>
>>>>> +
>>>>> +static int xen_setup_pm_notifier(void)
>>>>> +{
>>>>> +     if (!xen_hvm_domain() || xen_initial_domain())
>>>>> +             return -ENODEV;
>>>>
>>>> I don't think this works anymore.
>>> What do you mean?
>>> The first check is for the Xen domain type and the other is for
>>> architecture support.
>>> The reason I put this check here is because I wanted to segregate the two.
>>> I do not want to register this notifier at all for a !hvm guest, and also
>>> if it's an initial control domain.
>>> The ARM check only lands in the notifier because once the hibernate() API
>>> is called, it calls pm_notifier_call_chain() for PM_HIBERNATION_PREPARE,
>>> and this must fail for aarch64.
>>> Once we have support for aarch64 this notifier can go away altogether.
>>>
>>> Is there any other reason I may be missing why we should move this check
>>> to the notifier?
>>
>>
>> Not registering this notifier is equivalent to having it return NOTIFY_OK.
>>
> How is that different from the current behavior?
>>
>> In your earlier versions, just returning NOTIFY_OK was not sufficient for
>> hibernation to proceed, since the notifier would also need to set
>> suspend_mode appropriately. But now your notifier essentially filters
>> out unsupported configurations. And so if it is not called, your
>> configuration (e.g. a PV domain) will be considered supported.
>>
> I am sorry, I am having a bit of a hard time understanding this.
> How will it be considered supported when it's not even registered? My
> understanding is that if it's not registered, it will not land in the
> notifier call chain, which is invoked in pm_notifier_call_chain().


Returning an error from xen_setup_pm_notifier() doesn't have any effect
on whether hibernation will start. It's the notifier that can stop it.
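
For context, the PM core turns the chain's verdict into an errno before
hibernate() checks it; roughly (paraphrasing notifier_to_errno() and the
NOTIFY_* values from include/linux/notifier.h, which may differ slightly
between kernel versions):

#define NOTIFY_DONE       0x0000                       /* don't care */
#define NOTIFY_OK         0x0001                       /* suits me */
#define NOTIFY_STOP_MASK  0x8000                       /* stop further callbacks */
#define NOTIFY_BAD        (NOTIFY_STOP_MASK | 0x0002)  /* veto the event */

/* NOTIFY_BAD maps to a negative errno (-EPERM), which is what sends
 * hibernate() down its error path. A notifier that was never
 * registered contributes nothing, so the chain result stays 0 and
 * hibernation proceeds. */
static inline int notifier_to_errno(int ret)
{
        ret &= ~NOTIFY_STOP_MASK;
        return ret > NOTIFY_OK ? NOTIFY_OK - ret : 0;
}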

> 
> As Roger mentioned in the last series, none of this should be a part of PVH
> dom0 hibernation as it's not tested, but this series should also not break
> anything. If I register this notifier for PVH dom0 and return an error
> later, that will alter the current behavior, right?
> 
> If a pm_notifier for PVH dom0 is not registered, then it will not land in
> the notifier call chain and the system will work as it did before this
> series. If I look for unsupported configurations, then a !hvm domain is
> also one, but we filter that out at the beginning and don't even bother
> about it.
> 
> Unless you mean the guest running VMs itself? [Trying to read between the
> lines; that may not be the case though]



In hibernate():

        error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1,
                                         &nr_calls);
        if (error) {
                nr_calls--;
                goto Exit;
        }


If you don't have the notifier registered (as will be the case with PV
domains and dom0), you won't get an error and will proceed with hibernation.
(And now I actually suspect it didn't work even with your previous patches.)


But something like this I think will do what you want:


static int xen_pm_notifier(struct notifier_block *notifier,
                           unsigned long pm_event, void *unused)
{
        /* Reject every PM event on unsupported configurations, and
         * reject suspend everywhere; NOTIFY_BAD aborts the operation. */
        if (IS_ENABLED(CONFIG_ARM64) ||
            !xen_hvm_domain() || xen_initial_domain() ||
            (pm_event == PM_SUSPEND_PREPARE)) {
                if ((pm_event == PM_SUSPEND_PREPARE) ||
                    (pm_event == PM_HIBERNATION_PREPARE))
                        pr_warn("%s is not supported for this guest\n",
                                (pm_event == PM_SUSPEND_PREPARE) ?
                                "Suspend" : "Hibernation");
                return NOTIFY_BAD;
        }

        return NOTIFY_OK;
}

static struct notifier_block xen_pm_notifier_block = {
        .notifier_call = xen_pm_notifier,
};

static int xen_setup_pm_notifier(void)
{
        return register_pm_notifier(&xen_pm_notifier_block);
}
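
Registering unconditionally and doing the filtering at event time is the
point here: a PV domain or dom0 then gets an explicit NOTIFY_BAD instead of
silently sailing past a notifier that was never registered. And since
NOTIFY_BAD includes NOTIFY_STOP_MASK, no later PM notifier will see the
event either.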


I tried to see if there is a way to prevent hibernation without using
notifiers (like having a global flag or something) but didn't find
anything obvious. Perhaps others can point to a simpler way of doing this.


-boris




 

