
[Xen-users] Some questions about PCI-passthrough for HVM(Non-IOMMU)


  • To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "陈诚" <concretechen@xxxxxxxxx>
  • Date: Mon, 1 Oct 2007 13:15:13 +0800
  • Delivery-date: Mon, 01 Oct 2007 10:05:50 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hello,
        I saw some patches for PCI passthrough to HVM domains (non-IOMMU) and I am
interested in them. I want to assign my graphics card (nVidia GeForce 7900GS) to an
HVM domain running Vista in order to run 3D-intensive workloads (for example, 3D games).
What I want to ask is: is it really possible to pass a graphics card through to a Vista
HVM domain today? That is, has anyone ever successfully passed a modern graphics card
to a Vista HVM? Since a graphics card is a rather complicated device, are there any
technical problems with passing one through?
        I have tried the direct-io.hg subtree, but I just can't boot the Vista HVM
domain with the nativedom=1 option. Xen itself boots without any problem with
nativedom=1 and nativedom_mem=1024M (here the 1024M of memory is reserved for the HVM
domain) plus dom0_mem=800M (I have 2G of RAM in total). But when I type "xm create
vista.hvm" to create the HVM domain (note that it is bootable without the nativedom=1
option), sometimes a disk read error occurs (as displayed on the HVM screen), sometimes
it just appears to be deadlocked, and sometimes it says: "Error: Device 768 (vbd) could
not be connected". After such a failure to start the HVM domain as nativedom, I can't
even start it as a normal HVM domain; the symptom is again a deadlock or "Error: Device
768 (vbd) could not be connected". If a disk read error occurs during the boot of the
HVM domain, the host's filesystem ends up inconsistent.
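        In case it is relevant, the corresponding lines of my GRUB entry look roughly
like this (the kernel, initrd and root device names are just placeholders from my
setup; as I understand it, the nativedom options belong on the hypervisor line):

    title Xen (direct-io.hg, nativedom)
        root (hd0,0)
        # hypervisor command line: dom0 memory plus the nativedom reservation
        kernel /boot/xen.gz dom0_mem=800M nativedom=1 nativedom_mem=1024M
        module /boot/vmlinuz-2.6.18-xen root=/dev/sda1 ro console=tty0
        module /boot/initrd-2.6.18-xen.img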
        So there are clearly some problems with the nativedom (Neo-1to1) part of the
current direct-io.hg subtree. The author has mentioned that on x86-32 the memory range
0-12M is mapped to 16-28M, and that crashes may occur if any device DMAs into 0-12M.
Is it possible that the deadlock and the "Device 768 (vbd) could not be connected"
problems during booting are caused by this mapping?
        That is just one part of the problem. Another problem is that currently no
driver is bound to my graphics card (nVidia GeForce 7900GS) in dom0. Using the lspci
command I can see it as PCI device 06:00.0, but there is no "driver" directory under
/sys/bus/pci/devices/06:00.0/, and when I use the pci = ['06:00.0'] option in the HVM
config file, the error message says it can't find /sys/bus/pci/devices/06:00.0/driver.
Is it still possible to use pciback to hide this device and pass it through to the HVM
domain? I typed "modprobe pciback" to load the pciback module, but I still can't find
any pciback directory under /sys/bus or any of its subdirectories. Do I need to compile
it directly into the kernel instead?
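        For clarity, this is the sequence I expected to be able to run once a pciback
directory shows up under /sys/bus/pci/drivers (the paths are from my reading of the
PCI backend's sysfs interface, so please correct me if I have them wrong):

    # load the backend (or boot with it compiled in and pciback.hide=(0000:06:00.0))
    modprobe pciback
    # no driver is bound to 06:00.0, so no unbind step should be needed; otherwise:
    #   echo -n 0000:06:00.0 > /sys/bus/pci/devices/0000:06:00.0/driver/unbind
    echo -n 0000:06:00.0 > /sys/bus/pci/drivers/pciback/new_slot
    echo -n 0000:06:00.0 > /sys/bus/pci/drivers/pciback/bind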
        Finally, I want to ask another question: can I assign two USB devices to two
different HVM domains at the moment? If not, is there any planned support for this?
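        If that is supported, something like the following in each HVM config file is
what I had in mind (the host:vendor:product syntax is only my guess from what I have
read about qemu's -usbdevice option, and the IDs below are placeholders):

    # in the first domain's config file
    usb = 1
    usbdevice = 'host:1234:5678'   # vendor:product of the first USB device

    # in the second domain's config file
    usb = 1
    usbdevice = 'host:abcd:ef01'   # vendor:product of the second USB device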
        I am not familiar with the code of the HVM part. I will take a look at that
part and at the nativedom patches, but first I want to make sure I am on the right
track.
        Thank you.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

