
Re: [Xen-users] [Xen-devel] substantial shutdown delay for PV guests with PCI -passthrough

Hi Konrad,
thanks for your quick reply. I have re-added the other recipients who were on the list prior to my reply of 2 April - I just noticed that I somehow managed to drop them, which might also explain their silence. All: sorry for dropping you from my earlier reply. For your convenience I have appended my reply from 2 April at the end of this mail.

On 19.04.14 02:12, Konrad Rzeszutek Wilk wrote:
On Fri, Apr 18, 2014 at 11:47:46PM +0200, Atom2 wrote:
This is just a (very) gentle ping ... or have I missed out on a reply?

I ran a PV guest with PCI passthrough this week and it had no trouble -
I didn't see a delay of 10 seconds or so. But I did the shutdown from
within the guest (poweroff).
For me it makes no difference timewise whether I issue a
        xl shutdown guest
from dom0 or whether I issue
        shutdown -h now
from a connection (i.e. ssh, screen or console) to the guest. The main difference is that with the latter the delay is clearly visible, whereas with the former it is less obvious: due to its asynchronous nature, 'xl shutdown guest' returns immediately in dom0 even while the guest is still alive.

One difference I have noticed, however, is the guest's state in 'xl list': for a shutdown from _within_ the guest (i.e. shutdown -h now), the state remains 's' from the time the "system halted" message appears on screen until the prompt returns in dom0, whereas for a shutdown from dom0 with 'xl shutdown guest' the state changes from 's' to 'ps' for a number of seconds before the guest finally disappears.
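The state column can be watched from dom0 while the shutdown runs. A minimal sketch (the domain name 'guest' and the helper 'domain_state' are illustrative, not from this thread; 'xl shutdown -w' blocks until the domain is gone, which makes the delay easy to time):

```shell
# Illustrative helper: extract a domain's state column from `xl list`
# style output fed on stdin (field 5 is the State column).
domain_state() {
  awk -v d="$1" '$1 == d { print $5 }'
}

# On a real dom0 one could time the shutdown and watch the state, e.g.:
#   time xl shutdown -w guest     # -w waits until the domain is gone
#   watch -n1 'xl list guest'     # observe the s -> ps transition
#
# Exercising the helper with canned `xl list` output:
printf 'Name ID Mem VCPUs State Time\nguest 4 512 1 --ps-- 10.0\n' \
  | domain_state guest    # -> --ps--
```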

What is the kernel you are running as your dom0? Is it the same as
Frontend and backend (i.e. domU and dom0) are both running the same kernel version, albeit obviously with different configurations. The current version of both kernels is 3.13.2-r3, built from the gentoo-hardened sources. The same thing also happened with my previous kernel version, 3.11.7-r2 (also gentoo-hardened sources).

Thanks Atom2

===== On 02.04.14 17:17, Atom2 wrote: =======
On 02.04.14 16:44, Ian Jackson wrote:
> Atom2 writes ("Re: [Xen-devel] [Xen-users] substantial shutdown delay for PV guests with PCI -passthrough"):
>> Am 21.03.14 19:11, schrieb Ian Jackson:
>>> Can you run it again with this, on top of the previous patch, please ?
>> Sure, the new output of xl -vvv create -F domain is again attached to
>> this e-Mail.
> Sorry for the delay replying.  I have been ill.
Sorry to hear that. Though I noticed your absence from the list I simply assumed that you were off on vacation. In any case good to see you back.
>> <NOTE: at this point a 10s pause happens>
>> libxl: error: libxl_device.c:1134:libxl__wait_for_backend_deprecated: Backend /local/domain/0/backend/pci/4/0 not ready (state 7)
>> libxl: error: libxl_device.c:1138:libxl__wait_for_backend_deprecated: FE /local/domain/4/device/pci/0 state 6
>> libxl: debug: libxl_pci.c:204:libxl__device_pci_remove_xenstore: pci backend at /local/domain/0/backend/pci/4/0 is not ready
>> libxl: error: libxl_pci.c:1250:do_pci_remove: xc_domain_irq_permission irq=16
> So the backend here is in state 7 (Reconfiguring), but the frontend is
> in state 6 (Closed).  I think this is a bug in pciback.
> I looked at drivers/xen/xen-pciback/xenbus.c in Linux 3.13 and found
> xen_pcibk_frontend_changed which seems to do roughly what I would
> expect.
> Has this changed at some point ?
> Atom, what kernel are you using ?
All the error messages stem from kernel 3.11.7. In the meantime 3.13.2 became stable for gentoo and I installed it a few days ago. I have not yet re-run the debug output or timed the shutdown process, but there is still a delay with that kernel and it feels as long as before. If you want, I can certainly provide new debug output or timing information.
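For reference, the numeric states in the libxl messages above are XenbusState values; a small lookup (values taken from Xen's public io/xenbus.h header) decodes them:

```shell
# Decode XenbusState numbers as they appear in libxl/xenstore messages
# (values from xen/include/public/io/xenbus.h).
xenbus_state() {
  case "$1" in
    1) echo Initialising ;;
    2) echo InitWait ;;
    3) echo Initialised ;;
    4) echo Connected ;;
    5) echo Closing ;;
    6) echo Closed ;;
    7) echo Reconfiguring ;;
    8) echo Reconfigured ;;
    *) echo "Unknown ($1)" ;;
  esac
}

xenbus_state 7   # backend state in the error above -> Reconfiguring
xenbus_state 6   # frontend state -> Closed
```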

Thanks Atom2
> Thanks,
> Ian.

Xen-users mailing list


