
Re: [Xen-users] ATI VGA Passthrough / Xen 4.2 / Linux 3.8.6


  • To: xen-users@xxxxxxxxxxxxx
  • From: Gordan Bobic <gordan@xxxxxxxxxx>
  • Date: Sun, 21 Apr 2013 18:07:53 +0100
  • Delivery-date: Sun, 21 Apr 2013 17:09:49 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

OK, that last error seems to have come from a duff hypervisor upgrade (4.2.1-7 has issues; 4.2.1-6 doesn't).

I got all this working again, but not without issues. In general, the first time I add the physical VGA card to the VM, it works: it initializes correctly and produces output. Reboot, and it never works again until the next clean install, at least with XP x64.

I tried with Windows 7, and that seems to behave better (the configuration at least survives multiple reboots). But as soon as power management kicks in and the display goes to sleep, it all goes wrong: Win7 crashes in the VM, and keeps crashing on subsequent VM reboots, saying that the PCI device reset has failed. I tried unbinding the GPU from the stub driver, loading the radeon framebuffer driver, then unbinding it again and removing the radeon kernel module, just to try to reset the card that way, re-bound it to the stub driver, rebooted the VM, and on the next VM boot attempt the whole _host_ locked up (or at least dom0 did). That's pretty poor...

Is there a way to force a PCI device reset? I'm going to try disabling all power management in the guest OS as a workaround, but a proper fix would be nice. I also have a sneaking suspicion that it is dodginess around device resets that is breaking XP x64. Are there any known workarounds for this issue?
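To clarify what I mean by forcing a reset: something along the lines of poking the per-function "reset" attribute in sysfs, if the kernel exposes one for the device (not every card supports FLR, and the PCI address below is just a placeholder — substitute the GPU's BDF from lspci on your host):

```shell
#!/bin/sh
# Sketch: ask the kernel to reset a single PCI function via its sysfs
# "reset" attribute. The kernel picks whatever reset method the device
# advertises (FLR, D3hot transition, etc.); if none is available the
# attribute simply isn't there.
reset_pci_function() {
    dev="/sys/bus/pci/devices/$1"
    if [ -w "$dev/reset" ]; then
        echo 1 > "$dev/reset" && echo "reset issued for $1"
    else
        echo "no reset method exposed for $1" >&2
        return 1
    fi
}

# Placeholder BDF -- replace with the GPU's actual address.
reset_pci_function "${1:-0000:01:00.0}" || true
```

Whether that actually recovers a wedged GPU is another matter, but it at least avoids the unbind/rebind dance through the radeon driver.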

On 04/20/2013 06:06 PM, Gordan Bobic wrote:
On 04/15/2013 02:19 PM, Peter Maloney wrote:
On 2013-04-15 14:24, Gordan Bobic wrote:
On 04/15/2013 01:02 PM, Peter Maloney wrote:
On 2013-04-15 10:32, Aurélien MILLIAT wrote:

I'm trying to get VGA passthrough to work to an XP x64 guest, and
I'm seeing "interesting" things happening.

I'm using the kernel and userspace tools from here:
http://xen.crc.id.au/support/guides/install/#
on Scientific Linux 6.

I gave up on trying to get an Nvidia card to work in the guest
having read about the extra patches required to get a non-Quadro
card to work.
So I switched to using an ATI 6450/7450 card. This works fine -
almost.
ATI cards have a secondary audio output device function on them for
outputting audio over HDMI outputs. When I pass both the VGA and
the HDMI audio devices from the host to the guest, the guest
cannot use the VGA card. It always shows up as unusable in the
guest (yellow exclamation mark in XP x64).
I had this problem (Windows XP only) and fixed it by setting:

stdvga=1
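In a classic xm/xend config file that option sits at the top level, next to the other HVM display settings. A minimal hypothetical fragment (only display-related lines shown, values are placeholders):

```
# Hypothetical excerpt of an HVM guest config for xm/xend.
stdvga = 1     # emulated standard VGA instead of the default Cirrus adapter
vnc    = 1     # keep a VNC console alongside the passed-through card
```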

That's another thing I've been meaning to ask - where are the VM
configs stored? I am using xend with VMs configured using virt-manager
on EL6, and I cannot figure out where it put the configuration. It
doesn't appear to be in /etc/xen. So where is it and how do I get to
it? I ask because virt-manager doesn't actually show a Video adapter
in the configuration after the VM is created (a bug no doubt).
That possibly depends on your distro... with openSUSE, I created my VMs
with virt-manager, then looked in /etc/xen and found a text version and
an XML one... so I deleted the XML ones and hand-edited the text ones.

Here is my very old working windows xp config:

http://pastebin.com/WYawYpRM

You can just copy that wherever you want, and use it from the command
line and forget your old config.

vim /path/to/file
xm create /path/to/file
xm list
xm destroy nameofvm
xm destroy vmid

I just tried that, and now I'm getting this.

# xm create /etc/xen/edi
Using config file "/etc/xen/edi".
Error: (22, 'Invalid argument')

Interestingly, I'm also now getting that error message when I am using
virt-manager to start the same domain. It seems to be related to having
PCI devices passed through. If I comment out the pci= line, the domain
gets created fine.

Digging a little further, it seems to be specifically related to
actually passing the ATI card through. If I remove that and leave the
PCI network card passed through, that works fine.
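In other words, the difference between the two cases comes down to which BDFs are listed on the pci= line. A hypothetical illustration (the addresses are placeholders, not my actual lspci output):

```
# Works: only the PCI NIC passed through
pci = [ '03:00.0' ]

# Fails with Error: (22, 'Invalid argument'): add the ATI card
# and its HDMI audio function
pci = [ '01:00.0', '01:00.1', '03:00.0' ]
```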

This is where things appear to go wrong in xend.log:

[2013-04-20 18:05:36 10570] ERROR (XendDomainInfo:2933)
XendDomainInfo.initDomain: exception occurred
Traceback (most recent call last):
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 2920, in _initDomain
     self._createDevices()
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 2396, in _createDevices
     self.pci_device_configure_boot()
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 627, in pci_device_configure_boot
     self.pci_device_configure(dev_sxp, first_dev = first)
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 970, in pci_device_configure
     devid = self._createDevice('pci', existing_pci_conf)
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 2327, in _createDevice
     return self.getDeviceController(deviceClass).createDevice(devConfig)
   File
"/usr/lib64/python2.6/site-packages/xen/xend/server/DevController.py",
line 67, in createDevice
     self.setupDevice(config)
   File "/usr/lib64/python2.6/site-packages/xen/xend/server/pciif.py",
line 453, in setupDevice
     self.setupOneDevice(d)
   File "/usr/lib64/python2.6/site-packages/xen/xend/server/pciif.py",
line 353, in setupOneDevice
     allow_access = True)
Error: (22, 'Invalid argument')
[2013-04-20 18:05:36 10570] ERROR (XendDomainInfo:488) VM start failed
Traceback (most recent call last):
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 474, in start
     XendTask.log_progress(31, 60, self._initDomain)
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendTask.py", line
209, in log_progress
     retval = func(*args, **kwds)
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 2936, in _initDomain
     raise VmError(str(exn))
VmError: (22, 'Invalid argument')
[2013-04-20 18:05:36 10570] DEBUG (XendDomainInfo:3077)
XendDomainInfo.destroy: domid=18
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:2402) Destroying
device model
[2013-04-20 18:05:37 10570] INFO (image:619) edi device model terminated
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:2409) Releasing devices
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:2415) Removing vbd/768
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:1276)
XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/768
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:2415) Removing vfb/0
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:1276)
XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:2407) No device model
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:2409) Releasing devices
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:2415) Removing vbd/768
[2013-04-20 18:05:37 10570] DEBUG (XendDomainInfo:1276)
XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/768
[2013-04-20 18:05:37 10570] ERROR (XendDomainInfo:108) Domain
construction failed
Traceback (most recent call last):
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 106, in create
     vm.start()
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 474, in start
     XendTask.log_progress(31, 60, self._initDomain)
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendTask.py", line
209, in log_progress
     retval = func(*args, **kwds)
   File "/usr/lib64/python2.6/site-packages/xen/xend/XendDomainInfo.py",
line 2936, in _initDomain
     raise VmError(str(exn))
VmError: (22, 'Invalid argument')



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users



 

