
Re: [Xen-devel] [PATCH] libxl: invert xc and domain model resume calls in xc_domain_resume()



On Tue, 29 Nov 2016, Juergen Gross wrote:
> On 29/11/16 08:34, Wei Liu wrote:
> > On Mon, Nov 28, 2016 at 02:53:57PM +0100, Cédric Bosdonnat wrote:
> >> Resume is sometimes silently failing for HVM guests. Getting the
> >> xc_domain_resume() and libxl__domain_resume_device_model() in the
> >> reverse order than what is in the suspend code fixes the problem.
> >>
> >> Signed-off-by: Cédric Bosdonnat <cbosdonnat@xxxxxxxx>
> >  
> > I think it would be nice to explain why reversing the order fixes the
> > problem for you. My guess is that the device model needs to be ready
> > when the guest runs, but I'm not fully convinced by this explanation --
> > guests should just be trapped in the hypervisor waiting for the device
> > model to come up.
> 
> I'm not completely sure this is true. qemu is in the "stopped" state, so
> emulation requests might just be silently dropped. In any case it is
> weird to stop qemu in the suspend case only after suspending the
> domain, but to let it continue _after_ resuming the domain. So I'd rather
> expect an explanation (not from Cedric) of why this should be okay in
> case the patch isn't accepted.

Calling xc_domain_resume before libxl__domain_resume_device_model seems
wrong to me. For example in libxl_domain_unpause we call
libxl__domain_resume_device_model, then xc_domain_unpause. We should get
the DM ready before resuming the VM, right?
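For reference, here is a rough sketch of the ordering the patch proposes
(not the actual in-tree libxl code -- the function name, arguments and
error handling below are simplified approximations):

    /* Rough sketch of the proposed resume ordering; names, arguments and
     * error handling are approximations, not the in-tree libxl code. */
    static int resume_sketch(libxl__gc *gc, libxl_ctx *ctx,
                             uint32_t domid, int suspend_cancel)
    {
        int rc;

        /* Wake the device model first, so QEMU has left its "stopped"
         * state before any vCPU can issue an ioreq. */
        rc = libxl__domain_resume_device_model(gc, domid);
        if (rc) return rc;

        /* Only then resume the domain itself in the hypervisor,
         * mirroring the suspend path in reverse. */
        if (xc_domain_resume(ctx->xch, domid, suspend_cancel))
            return ERROR_FAIL;

        return 0;
    }

That would mirror what libxl_domain_unpause already does: device model
first, then the unpause/resume call into the hypervisor.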

TBH I don't know exactly what would happen if an ioreq arrives in QEMU
before we send the QMP "cont" command. It could be silently dropped,
causing the issue described above, but it would be nice if somebody
instrumented QEMU with a debug printf to be sure.
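
If someone wants to try, something along these lines might do. The helper
below is hypothetical; the idea would be to call it at the top of QEMU's
Xen ioreq dispatch (e.g. handle_ioreq() in xen-hvm.c), but the exact
insertion point and the runstate check are assumptions, not verified
against a particular QEMU tree:

    /* Hypothetical debug helper: log any ioreq that reaches QEMU while
     * it is still in the "stopped" runstate, i.e. before QMP "cont" has
     * taken effect. Meant to be called from the Xen ioreq dispatch path. */
    static void trace_ioreq_while_stopped(const ioreq_t *req)
    {
        if (!runstate_is_running()) {
            fprintf(stderr,
                    "xen: ioreq type %u dir %u addr %" PRIx64
                    " arrived before \"cont\"\n",
                    req->type, req->dir, (uint64_t)req->addr);
        }
    }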