
RE: handle_pio looping during domain shutdown, with qemu 4.2.0 in stubdom



> -----Original Message-----
> From: 'Marek Marczykowski-Górecki' <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
> Sent: 05 June 2020 14:04
> To: paul@xxxxxxx
> Cc: 'Jan Beulich' <jbeulich@xxxxxxxx>; 'Andrew Cooper' <andrew.cooper3@xxxxxxxxxx>;
> 'xen-devel' <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: Re: handle_pio looping during domain shutdown, with qemu 4.2.0 in 
> stubdom
> 
> On Fri, Jun 05, 2020 at 01:39:31PM +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
> > > Sent: 05 June 2020 13:02
> > > To: Jan Beulich <jbeulich@xxxxxxxx>
> > > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Paul Durrant <paul@xxxxxxx>;
> > > xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> > > Subject: Re: handle_pio looping during domain shutdown, with qemu 4.2.0 
> > > in stubdom
> > >
> > > On Fri, Jun 05, 2020 at 11:22:46AM +0200, Jan Beulich wrote:
> > > > On 05.06.2020 11:09, Jan Beulich wrote:
> > > > > On 04.06.2020 16:25, Marek Marczykowski-Górecki wrote:
> > > > >> (XEN) hvm.c:1620:d6v0 All CPUs offline -- powering off.
> > > > >> (XEN) d3v0 handle_pio port 0xb004 read 0x0000
> > > > >> (XEN) d3v0 handle_pio port 0xb004 read 0x0000
> > > > >> (XEN) d3v0 handle_pio port 0xb004 write 0x0001
> > > > >> (XEN) d3v0 handle_pio port 0xb004 write 0x2001
> > > > >> (XEN) d4v0 XEN_DMOP_remote_shutdown domain 3 reason 0
> > > > >> (XEN) d4v0 domain 3 domain_shutdown vcpu_id 0 defer_shutdown 1
> > > > >> (XEN) d4v0 XEN_DMOP_remote_shutdown domain 3 done
> > > > >> (XEN) hvm.c:1620:d5v0 All CPUs offline -- powering off.
> > > > >> (XEN) d1v0 handle_pio port 0xb004 read 0x0000
> > > > >> (XEN) d1v0 handle_pio port 0xb004 read 0x0000
> > > > >> (XEN) d1v0 handle_pio port 0xb004 write 0x0001
> > > > >> (XEN) d1v0 handle_pio port 0xb004 write 0x2001
> > > > >> (XEN) d2v0 XEN_DMOP_remote_shutdown domain 1 reason 0
> > > > >> (XEN) d2v0 domain 1 domain_shutdown vcpu_id 0 defer_shutdown 1
> > > > >> (XEN) d2v0 XEN_DMOP_remote_shutdown domain 1 done
> > > > >> (XEN) grant_table.c:3702:d0v0 Grant release 0x3 ref 0x11d flags 0x2 d6
> > > > >> (XEN) grant_table.c:3702:d0v0 Grant release 0x4 ref 0x11e flags 0x2 d6
> > > > >> (XEN) d3v0 handle_pio port 0xb004 read 0x0000
> > > > >
> > > > > Perhaps in this message could you also log
> > > > > v->domain->is_shutting_down, v->defer_shutdown, and
> > > > > v->paused_for_shutdown?
> > > >
> > > > And v->domain->is_shut_down please.
> > >
> > > Here it is:
> > >
> > > (XEN) hvm.c:1620:d6v0 All CPUs offline -- powering off.
> > > (XEN) d3v0 handle_pio port 0xb004 read 0x0000 is_shutting_down 0 defer_shutdown 0 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d3v0 handle_pio port 0xb004 read 0x0000 is_shutting_down 0 defer_shutdown 0 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d3v0 handle_pio port 0xb004 write 0x0001 is_shutting_down 0 defer_shutdown 0 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d3v0 handle_pio port 0xb004 write 0x2001 is_shutting_down 0 defer_shutdown 0 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d4v0 XEN_DMOP_remote_shutdown domain 3 reason 0
> > > (XEN) d4v0 domain 3 domain_shutdown vcpu_id 0 defer_shutdown 1
> > > (XEN) d4v0 XEN_DMOP_remote_shutdown domain 3 done
> > > (XEN) hvm.c:1620:d5v0 All CPUs offline -- powering off.
> > > (XEN) d1v0 handle_pio port 0xb004 read 0x0000 is_shutting_down 0 defer_shutdown 0 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d1v0 handle_pio port 0xb004 read 0x0000 is_shutting_down 0 defer_shutdown 0 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d1v0 handle_pio port 0xb004 write 0x0001 is_shutting_down 0 defer_shutdown 0 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d1v0 handle_pio port 0xb004 write 0x2001 is_shutting_down 0 defer_shutdown 0 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d2v0 XEN_DMOP_remote_shutdown domain 1 reason 0
> > > (XEN) d2v0 domain 1 domain_shutdown vcpu_id 0 defer_shutdown 1
> > > (XEN) d2v0 XEN_DMOP_remote_shutdown domain 1 done
> > > (XEN) grant_table.c:3702:d0v1 Grant release 0x3 ref 0x125 flags 0x2 d6
> > > (XEN) grant_table.c:3702:d0v1 Grant release 0x4 ref 0x126 flags 0x2 d6
> > > (XEN) d1v0 handle_pio port 0xb004 read 0x0000 is_shutting_down 1 defer_shutdown 1 paused_for_shutdown 0 is_shut_down 0
> > > (XEN) d1v0 Unexpected PIO status 1, port 0xb004 read 0xffff
> > >
> > > (and then the stacktrace saying it's from vmexit handler)
> > >
> > > Regarding BUG/WARN - do you think I could get any more info then? I
> > > really don't mind crashing that system, it's a virtual machine
> > > currently used only for debugging this issue.
> >
> > In your logging, is that handle_pio with is_shutting_down == 1 the very last one,
> > or is the 'Unexpected PIO' coming from another one issued afterwards?
> 
> That's the same function call - handle_pio message is before 
> hvmemul_do_pio_buffer() and Unexpected
> PIO is after.
> 
> Here is the current debugging patch: 
> https://gist.github.com/marmarek/da37da3722179057a6e7add4fb361e06
> 

Ok.

> > The reason I ask is that hvmemul_do_io() can call hvm_send_ioreq() to start an
> > I/O when is_shutting_down is set, but will write the local io_req.state back to
> > NONE even when X86EMUL_RETRY is returned. Thus another call to handle_pio() will
> > try to start a new I/O but will fail with X86EMUL_UNHANDLEABLE in
> > hvm_send_ioreq() because the ioreq state in the shared page will not be NONE.
> 
> Isn't it a problem that hvm_send_ioreq() can be called if is_shutting_down is set?
> 

I don't think so... as long as it is not called again a second time.

  Paul

> --
> Best Regards,
> Marek Marczykowski-Górecki
> Invisible Things Lab
> A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?




 

