
Re: [win-pv-devel] Windows 10 domU hangs on boot after PV driver install if dom0 has an old kernel (or a kernel-3.2-specific problem)



On 09/11/2015 14:19, Paul Durrant wrote:
-----Original Message-----
From: Fabio Fantoni [mailto:fabio.fantoni@xxxxxxx]
Sent: 09 November 2015 12:43
To: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
Cc: Paul Durrant
Subject: Windows 10 domU hangs on boot after PV driver install if dom0 has an
old kernel (or a kernel-3.2-specific problem)

In initial tests on the test servers it seemed that a newer dom0 kernel was
needed, and I have always used kernel >=3.14 since I started using the new
winpv drivers.
I am now using a 4.1 kernel on the test servers, but I have had occasional
problems whose exact cause I do not yet know, so for now I use that kernel
only for testing.
I have started updating a few production servers as well, to newer software
that I have tested and that seems stable: xen 4.6.0, qemu 2.4, spice 0.12.6
and seabios 1.8.2.
I also tried the new winpv drivers: today I installed the latest build on a
clean Windows 10 Pro 64-bit domU with the same (or very similar)
configuration and software as the test server, except for the dom0 kernel
(I have 3.2 from the official wheezy repo, 3.2.68-1+deb7u5).
After the winpv install the domU hangs at Windows boot (more precisely on
the second reboot; the first reboot works but still uses the emulated disk
and network, as I reported some time ago). The full qemu log with a trace
is attached.

I have known for a long time that winpv requires xen>=4.5.0 and upstream
qemu>=1.6.1.
Backports of the following xen patches are probably also needed if xen<4.6
(based on critical problems I had a long time ago):
- x86/hvm: add per-vcpu evtchn upcalls
There was a bug in XENBUS which would cause a boot-time hang if you were not 
running a Xen with this patch, but that was fixed by:

commit 021d1f91ff9c1c10fa59e6d4200628b9d0d37eab
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Thu Jul 2 10:23:26 2015 +0100

     Fix fall-back to two-level EVTCHN ABI

     When the EVTCHN code attempts to acquire the FIFO ABI it may fail to do
     so because the version of Xen may not support it. In this case the code
     was issuing an EventChannelReset() which has the unfortunate side effect of
     killing any toolstack-created channels, such as the xenstored channel.

     This patch moves the existent EvtchnFifoReset function into the base
     evtchn source module (since it's not ABI specific) and uses that function
     as the only mechanism of issuing an EventChannelReset() since it contains
     code to preserve event channel bindings. (Prior to the move it only
     preserved the xenstore channel but this patch adds code to preserve the
     console event channel too, if it exists).

     Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

...which is in the staging-8.1 branch and hence will be in the 8.1 release.
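
For illustration, a minimal C sketch of the preservation logic the commit
above describes, written against Xen's public event-channel interface. This
is not the actual XENBUS code: hypercall_event_channel_op(), xenstore_port
and console_port are hypothetical placeholders assumed to be provided by
the surrounding driver, and the real function handles more cases.

#include <stdint.h>
#include <xen/xen.h>            /* DOMID_SELF, domid_t (Xen public headers) */
#include <xen/event_channel.h>  /* EVTCHNOP_*, struct evtchn_* */

extern int hypercall_event_channel_op(unsigned int cmd, void *arg);

extern evtchn_port_t xenstore_port; /* e.g. from HVM_PARAM_STORE_EVTCHN */
extern evtchn_port_t console_port;  /* e.g. from HVM_PARAM_CONSOLE_EVTCHN, or 0 */

/* Record the remote end of an interdomain channel via EVTCHNOP_status. */
static int save_binding(evtchn_port_t local,
                        struct evtchn_bind_interdomain *bind)
{
    struct evtchn_status status = { .dom = DOMID_SELF, .port = local };

    if (hypercall_event_channel_op(EVTCHNOP_status, &status) != 0 ||
        status.status != EVTCHNSTAT_interdomain)
        return -1;

    bind->remote_dom  = status.u.interdomain.dom;
    bind->remote_port = status.u.interdomain.port;
    return 0;
}

/* Reset all event channels but keep the toolstack-created bindings. */
static void evtchn_reset_preserving_bindings(void)
{
    struct evtchn_bind_interdomain store, console;
    struct evtchn_reset reset = { .dom = DOMID_SELF };
    int have_store   = save_binding(xenstore_port, &store) == 0;
    int have_console = console_port != 0 &&
                       save_binding(console_port, &console) == 0;

    hypercall_event_channel_op(EVTCHNOP_reset, &reset);

    /* Re-establish the channels; the new local port comes back in
     * store.local_port / console.local_port. */
    if (have_store)
        hypercall_event_channel_op(EVTCHNOP_bind_interdomain, &store);
    if (have_console)
        hypercall_event_channel_op(EVTCHNOP_bind_interdomain, &console);
}

The important detail is the ordering: the remote ends must be captured with
EVTCHNOP_status before the reset, because EVTCHNOP_reset closes every
channel the domain holds, including the toolstack-created xenstore and
console channels.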

- x86/hvm: extend HVM cpuid leaf with vcpu id

This is not relied upon so you should be ok without it.

   Paul
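
For reference, a sketch of what the cpuid-leaf patch provides and how a
guest could consume it together with the per-vcpu upcall hypercall. It
assumes the Xen leaves start at base 0x40000000 (the base can be offset on
some configurations) and that hypercall_hvm_op() is supplied by the
platform; the constants follow Xen's public headers (arch-x86/cpuid.h and
hvm/hvm_op.h).

#include <stdint.h>
#include <cpuid.h>               /* __cpuid() builtin (GCC/clang) */
#include <xen/xen.h>
#include <xen/hvm/hvm_op.h>      /* HVMOP_set_evtchn_upcall_vector */

#define XEN_CPUID_BASE                 0x40000000u
#define XEN_HVM_CPUID_LEAF             (XEN_CPUID_BASE + 4)
#define XEN_HVM_CPUID_VCPU_ID_PRESENT  (1u << 3)  /* arch-x86/cpuid.h */

extern int hypercall_hvm_op(unsigned int cmd, void *arg);

/* Return the calling vCPU's Xen vcpu id, or -1 on an older Xen where
 * the leaf has not been extended by the patch named above. */
static int64_t xen_vcpu_id(void)
{
    uint32_t eax, ebx, ecx, edx;

    __cpuid(XEN_HVM_CPUID_LEAF, eax, ebx, ecx, edx);
    if (!(eax & XEN_HVM_CPUID_VCPU_ID_PRESENT))
        return -1;
    return ebx;                  /* ebx carries the vcpu id */
}

/* Ask Xen to deliver event-channel upcalls for 'vcpu' via interrupt
 * 'vector' on that vCPU (the per-vcpu evtchn upcalls patch). */
static int set_upcall_vector(uint32_t vcpu, uint8_t vector)
{
    struct xen_hvm_evtchn_upcall_vector upcall = {
        .vcpu   = vcpu,
        .vector = vector,
    };

    return hypercall_hvm_op(HVMOP_set_evtchn_upcall_vector, &upcall);
}

The vcpu id read from leaf base+4 is what a guest would pass as the 'vcpu'
argument when registering a per-vcpu upcall vector, which is why the two
patches listed above are related.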

Thanks for your reply.
Based on it, recent winpv builds do not require backports of these xen
patches; they only require xen>=4.5.0 and upstream qemu>=1.6.1.

What can you tell me about the problem reported in this mail? It seems
related to the kernel, which is the only thing that differs from the test
server.

Thanks for any reply, and sorry for my bad English.


I am not sure whether some minimum dom0 kernel version (kernel>=N) is also
needed to use the winpv drivers, or whether problems like this are specific
to kernel 3.2 (or to the Debian patches).
Does anyone know anything certain about this?


If you need more tests or data, tell me and I will post them.




 

