
handle_pio looping during domain shutdown, with qemu 4.2.0 in stubdom



Hi,

(continuation of a thread from #xendevel)

During system shutdown I quite often hit an infinite stream of errors
like this:

    (XEN) d3v0 Weird PIO status 1, port 0xb004 read 0xffff
    (XEN) domain_crash called from io.c:178

This is all running on Xen 4.13.0 (I think I've seen it with 4.13.1
too), nested within KVM. The KVM part means everything is very slow, so
various race conditions are much more likely to happen.

It started happening not long ago, and I'm pretty sure it's related to
updating to qemu 4.2.0 (in the Linux stubdom); the previous version was
3.0.0.

Thanks to Andrew and Roger, I've managed to collect more info.

Context:
    dom0: pv
    dom1: hvm
    dom2: stubdom for dom1
    dom3: hvm
    dom4: stubdom for dom3
    dom5: pvh
    dom6: pvh

It starts off okay, I think:

    (XEN) hvm.c:1620:d6v0 All CPUs offline -- powering off.
    (XEN) d3v0 handle_pio port 0xb004 read 0x0000
    (XEN) d3v0 handle_pio port 0xb004 read 0x0000
    (XEN) d3v0 handle_pio port 0xb004 write 0x0001
    (XEN) d3v0 handle_pio port 0xb004 write 0x2001
    (XEN) d4v0 XEN_DMOP_remote_shutdown domain 3 reason 0
    (XEN) hvm.c:1620:d5v0 All CPUs offline -- powering off.
    (XEN) d1v0 handle_pio port 0xb004 read 0x0000
    (XEN) d1v0 handle_pio port 0xb004 read 0x0000
    (XEN) d1v0 handle_pio port 0xb004 write 0x0001
    (XEN) d1v0 handle_pio port 0xb004 write 0x2001
    (XEN) d2v0 XEN_DMOP_remote_shutdown domain 1 reason 0

But then (after a second or so) when the toolstack tries to clean it up,
things go sideways:

    (XEN) d0v0 XEN_DOMCTL_destroydomain domain 6
    (XEN) d0v0 XEN_DOMCTL_destroydomain domain 6 got domain_lock
    (XEN) d0v0 XEN_DOMCTL_destroydomain domain 6 ret -85
    (XEN) d0v0 XEN_DOMCTL_destroydomain domain 6
    (XEN) d0v0 XEN_DOMCTL_destroydomain domain 6 got domain_lock
    (XEN) d0v0 XEN_DOMCTL_destroydomain domain 6 ret -85
    (... long stream of domain destroy that can't really finish ...)
    
And then, similar also for dom1:

    (XEN) d0v1 XEN_DOMCTL_destroydomain domain 1
    (XEN) d0v1 XEN_DOMCTL_destroydomain domain 1 got domain_lock
    (XEN) d0v1 XEN_DOMCTL_destroydomain domain 1 ret -85
    (... now a stream of this for dom1 and dom6 interleaved ...)

At some point, domain 2 (the stubdom for domain 1) and domain 5 join in
too.

Then, we get the main issue:

    (XEN) d3v0 handle_pio port 0xb004 read 0x0000
    (XEN) d3v0 Weird PIO status 1, port 0xb004 read 0xffff
    (XEN) domain_crash called from io.c:178

Note that there was no XEN_DOMCTL_destroydomain for domain 3 or its
stubdom yet, but XEN_DMOP_remote_shutdown for domain 3 had already been
called.

Full log of the shutdown:
https://gist.github.com/marmarek/fbfe1b5d8f4c7b47df5a5e28bd95ea66

And the patch adding those extra messages:
https://gist.github.com/marmarek/dc739a820928e641a1ed6b4759cdf6f3

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
