
[Xen-users] Different issues with Xen PV drivers and GPLPV drivers



Hello,

I'm hoping that I've simply missed something in my configurations, but I'm running into two different issues while trying out the two flavors of PV drivers.  The first is that the Xen PV drivers constantly eat up CPU while the machine sits idle; the second is that the GPLPV drivers produce a 100% reproducible (on my setup, at least) bluescreen if you try to boot a VM with an empty CD-ROM drive.

The Xen dom0 is a freshly installed Debian Squeeze, running Xen 4.0.1.

I've installed two 32-bit Windows 7 VMs from scratch, both using the same configuration file with only the MAC and disk info changed:

kernel = "hvmloader"
builder='hvm'
memory = 1024
name = "win7-1"
vcpus=1
vif = [ 'bridge=xenbr0, mac=0A:0B:09:14:07:1C' ]
disk = [ 'phy:/dev/mapper/vg_VMdisks-win7--1,hda,w', 'file:/ISOs/en_windows_7_ultimate_x86_dvd_x15-65921.iso,hdc:cdrom,r' ]
device_model = 'qemu-dm'
boot="cda"
vnc=1
vncpasswd=''
usbdevice='tablet'
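
(For reference, I start the domains the usual way with xm; the config path here is just where I happen to keep mine:)
-------------------------------
xm create /etc/xen/win7-1.cfg
-------------------------------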


The installations complete perfectly fine.  When the machines are back up, I install the Xen PV drivers on Win7-1.  On Win7-2, I enable testsigning, reboot, install GPLPV, and reboot.
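
(For anyone following along, enabling test signing is the standard Windows 7 command, run from an elevated command prompt before the reboot:)
-------------------------------
bcdedit /set testsigning on
-------------------------------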

At this point, the machine running the Xen PV drivers can't boot, but that's a known issue with a workaround.  You have to create a script ( /usr/lib64/xen-4.0/bin/qemu-dm-citrixpv ) that contains the following:
-------------------------------
#!/bin/bash

# xend invokes the device model as: qemu-dm -d <domid> ..., so $2 is
# the domain ID.  Remove the vfb and console backend nodes that trip
# up the Citrix PV drivers, then remove them again 15 seconds later
# in the background in case they get re-created while the domain boots.
/usr/sbin/xenstore-rm /local/domain/0/backend/vfb/"$2"
/usr/sbin/xenstore-rm /local/domain/0/backend/console/"$2"

sh -c "sleep 15 ; /usr/sbin/xenstore-rm /local/domain/0/backend/console/$2 ; /usr/sbin/xenstore-rm /local/domain/0/backend/vfb/$2" &

# Hand off to the real device model, preserving all arguments as passed.
exec /usr/lib/xen-4.0/bin/qemu-dm "$@"
--------------------------------

... with that script in place (and marked executable), you edit the Xen PV machine's configuration and change device_model from qemu-dm to qemu-dm-citrixpv.  Now the machine boots without issue.
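
(Concretely, that amounts to:)
-------------------------------
chmod +x /usr/lib64/xen-4.0/bin/qemu-dm-citrixpv
-------------------------------
... and, in the domain's configuration file:

device_model = 'qemu-dm-citrixpv'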

Both machines want one more reboot after initially coming up on the PV drivers, so I bounce them one last time.

At this point, I can log onto both and start Task Manager to watch the Performance tab.  After they've settled down from booting, the machine running GPLPV pretty much flatlines around 0% CPU utilization, with only the occasional blip.  That is exactly what should happen (both machines behave this way if monitored prior to the PV driver installs).  The machine running the Xen drivers, however, has a CPU graph that looks like a sawtooth: constant spikes from 0% up to 20-25% and back again.  These spikes never stop, even if the machine is left idling for hours or overnight.  Microsoft's "TrustedInstaller.exe" process shows up as the culprit, but again, this never occurs unless the Xen drivers are installed, and uninstalling them completely fixes the issue.
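
(If anyone wants to cross-check without relying on the in-guest Task Manager, the domains' CPU use can also be watched from dom0, e.g.:)
-------------------------------
xentop -d 2    # refresh every 2 seconds; watch the CPU(%) column
-------------------------------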

So, that's the issue I'm hitting with the Xen PV drivers.  With the GPLPV drivers, the issue is easier to describe.  I've built the machine using the configuration above and installed the drivers, and it's running great.  However, I don't want to always have that ISO associated with the machine, so I'd like to have the CD-ROM drive empty.  I change its disk line accordingly:

disk = [ 'phy:/dev/mapper/vg_VMdisks-win7--1,hda,w', ',hdc:cdrom,r' ]

And with only that change and the GPLPV drivers installed, the machine bluescreens 100% of the time when trying to start.  The Xen PV machine has no problem at all with an empty CD-ROM drive, and neither machine has a problem with an empty drive prior to the PV driver installation.

If I remove the cdrom drive from the configuration completely, then the machine boots right up and runs great.
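
(For completeness, "removed completely" means a disk line with no CD-ROM entry at all:)

disk = [ 'phy:/dev/mapper/vg_VMdisks-win7--1,hda,w' ]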

Has anyone else experienced one or both of these issues, and have I missed an easy solution in my hours of wiki reading and googling?

Thanks,
Mark


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

