
Re: [Xen-devel] Paravirtualised drivers for fully virtualised domains, rev9



On Fri, 2006-08-11 at 11:17 +0100, Steven Smith wrote:
> > Here is what I have found so far in trying to chase down the cause of the
> > slowdown.
> > The qemu-dm process is running 99.9% of the CPU on dom0.
> That seems very wrong.  When I try this, the device model is almost
> completely idle.  Could you see what strace says, please, or if there
> are any strange messages in the /var/log/qemu-dm. file?
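
(For anyone else chasing this: attaching strace to the busy device model and
getting a per-syscall time summary is roughly the following; the pidof lookup
assumes a single qemu-dm instance is running.)

    # attach to the running device model, follow forks, and
    # print a per-syscall count/time summary on Ctrl-C
    strace -f -c -p $(pidof qemu-dm)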

I haven't tried the patches being discussed in this thread, but I'm
seeing similar problems with qemu-dm anyway...

I've been looking into Bugzilla 725 and I'm also seeing 100% CPU usage
by qemu-dm.  xm-test sets the nographic flag, and I find that if it is
not set then qemu-dm's CPU usage drops to normal levels and the test passes.
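
For reference, the only difference between the failing and passing runs here
is the graphics setup in the HVM guest config file, something like the
following (the vnc option is how my config names it, so treat this as a
sketch rather than the exact file):

    # failing case: no graphical console at all
    nographic=1

    # passing case: route the guest console through VNC instead
    nographic=0
    vnc=1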

> 
> > It appears that a lot of time is spent running timers and getting the
> > current time.

Yes, this is what I was seeing with the nographic flag set.

> > Not being familiar with the code, I am now crawling through
> > it to see how timers are handled and how the xen-vnif PV driver uses them.
> Timer handling isn't really changed by any of these patches.  Patch
> 02.ioemu_xen_evtchns.diff is in vaguely the same area, but I can't see
> how it could cause the problems you're seeing, assuming your
> hypervisor and libxc are up to date.
> 
> What changeset of xen-unstable did you apply the patches to?

I've been seeing the problem on recent unstable changesets without the
patches applied; changesets 10992 and 10949, for example.

> 
> > P.S.  This just in from a test running while I typed the above.  I noticed
> > that qemu will start a "gui_timer" when VNC is not used.  I normally run
> > without graphics (nographic=1 in the domain config file).  I changed the
> > config file to use VNC. The qemu-dm CPU utilization in dom0 dropped to
> > below 10%.

Yep, that's what I see without the patches.

> > The network performance improved from 0.19 Mb/s to 9.75 Mb/s
> > (still less than the 23.07 Mb/s for a fully virtualized domain).
> When I try this, I see about 1600Mb/s between dom0 and a
> paravirtualised domU, about 30Mb/s between dom0 and an ioemu domU, and
> about 1200Mb/s between dom0 and an HVM domU running these drivers, all
> collected using netpipe-tcp.  That is a regression, but much smaller
> than you're seeing.
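
(In case it helps anyone reproduce the comparison: the stock NPtcp pair from
the netpipe-tcp package is enough for numbers like those above, run roughly as
follows; hostnames are just examples.)

    # on the receiving end, e.g. in dom0
    NPtcp
    # on the transmitting end, e.g. in the guest, pointing at the receiver
    NPtcp -h <dom0 address>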
> 
> There are a couple of obvious things to check:
> 
> 1) Do the statistics reported by ifconfig show any errors?
> 2) How often is the event channel interrupt firing according to
>       /proc/interrupts?  I see about 50k-150k/second.
> 3) Is there any packet loss when you ping a domain?  Start your test
>       and run a ping in parallel.
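
For 1)-3), the quick versions of those checks are roughly the following
(eth0 is just an example name for the guest's interface):

    # 1) error/drop counters on the interface
    ifconfig eth0

    # 2) how fast the interface's event channel interrupt count is growing
    grep eth0 /proc/interrupts; sleep 10; grep eth0 /proc/interrupts

    # 3) packet loss while the throughput test is running
    ping -c 30 <guest address>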
> 
> The other thing is that these drivers seem to be very sensitive to
> kernel debugging options in the domU.  If you've got anything enabled
> in the kernel hacking menu it might be worth trying again with that
> switched off.
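
(A quick way to check that in the domU, assuming the running kernel's config
was installed alongside it, is something like this; the path is an example.)

    # list debug options enabled in the domU kernel
    grep '^CONFIG_DEBUG' /boot/config-$(uname -r)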
> 
> > It appears there is some interaction between using the xen-vnif
> > driver and the qemu timer code.  I'm still exploring.
> I'd be happier if I could reproduce this problem here.  Are you
> running SMP?  PAE?  64 bit?  What kernel are you running in the domU?
> 
> Steven.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

