Re: [Xen-devel] [PATCH 3/6] xen: add new cpu notifier action CPU_RESUME_FAILED
On Mon, 2019-03-18 at 14:11 +0100, Juergen Gross wrote:
> --- a/xen/include/xen/cpu.h
> +++ b/xen/include/xen/cpu.h
> @@ -32,23 +32,25 @@ void register_cpu_notifier(struct notifier_block *nb);
>   * (a) A CPU is going down; or (b) CPU_UP_CANCELED
>   */
>  /* CPU_UP_PREPARE: Preparing to bring CPU online. */
> -#define CPU_UP_PREPARE (0x0001 | NOTIFY_FORWARD)
> +#define CPU_UP_PREPARE (0x0001 | NOTIFY_FORWARD)
>
In the comment block before these definitions, there's this:

 * Possible event sequences for a given CPU:
 *  CPU_UP_PREPARE -> CPU_UP_CANCELLED           -- failed CPU up
 *  CPU_UP_PREPARE -> CPU_STARTING -> CPU_ONLINE -- successful CPU up
 *  CPU_DOWN_PREPARE -> CPU_DOWN_FAILED          -- failed CPU down
 *  CPU_DOWN_PREPARE -> CPU_DYING -> CPU_DEAD    -- successful CPU down

Shouldn't we add a line for this new hook? Something, IIUIC, like:

 *  CPU_UP_PREPARE -> CPU_UP_CANCELLED -> CPU_RESUME_FAILED -- CPU not resuming

With this, FWIW,

Reviewed-by: Dario Faggioli <dfaggioli@xxxxxxxx>

One more (minor) thing...

>  /* CPU_REMOVE: CPU was removed. */
> -#define CPU_REMOVE (0x0009 | NOTIFY_REVERSE)
> +#define CPU_REMOVE (0x0009 | NOTIFY_REVERSE)
> +/* CPU_RESUME_FAILED: CPU failed to come up in resume, all other CPUs up. */
> +#define CPU_RESUME_FAILED (0x000a | NOTIFY_REVERSE)
>
... technically, when we're dealing with CPU_RESUME_FAILED on one CPU, we don't know if _all_ the others really went up, so I think I'd remove what follows the ','.
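FWIW, to make the new action concrete for anyone following along, here is a rough, untested sketch of how a consumer of the CPU notifier chain could react to CPU_RESUME_FAILED. The example_* names and the "cleanup" shown are made up purely for illustration, they are not part of this series; only register_cpu_notifier(), the notifier callback shape and the action names come from the tree / the hunks above:

/* Illustrative only -- not from the patch under review. */
#include <xen/cpu.h>
#include <xen/init.h>
#include <xen/lib.h>
#include <xen/notifier.h>

static int example_cpu_callback(
    struct notifier_block *nfb, unsigned long action, void *hcpu)
{
    unsigned int cpu = (unsigned long)hcpu;

    switch ( action )
    {
    case CPU_UP_PREPARE:
        /* Set up whatever per-CPU state 'cpu' will need (omitted). */
        break;

    case CPU_UP_CANCELED:
    case CPU_DEAD:
        /* Normal failure/offline paths: tear that state down again (omitted). */
        break;

    case CPU_RESUME_FAILED:
        /*
         * New with this patch: 'cpu' was online before suspend but did not
         * come back during resume, so its state can be reclaimed here too.
         * The printk is only a placeholder for real cleanup.
         */
        printk(XENLOG_WARNING "CPU %u did not come back on resume\n", cpu);
        break;

    default:
        break;
    }

    return NOTIFY_DONE;
}

static struct notifier_block example_cpu_nfb = {
    .notifier_call = example_cpu_callback
};

static int __init example_cpu_notifier_init(void)
{
    register_cpu_notifier(&example_cpu_nfb);
    return 0;
}
__initcall(example_cpu_notifier_init);

And since the hunk above gives CPU_RESUME_FAILED the NOTIFY_REVERSE direction (like CPU_REMOVE), callbacks should see it in reverse registration order, i.e. the same way as the other tear-down notifications.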
Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE  https://www.suse.com/

Attachment: signature.asc