
Re: [Xen-users] VGA PASSTHROUGH not working :(



My Dom0 is Fedora 23 Server and my "/etc/default/grub" is:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/root00 rd.lvm.lv=fedora/swap rhgb quiet xen-pciback.hide=(01:00.0)"
GRUB_DISABLE_RECOVERY="true"

and I added the lines below to my Windows VM config file:

acpi=1
pci=['01:00.0']

But:

[root@localhost ~]# xl create /etc/xen/windows.cfg
Parsing config from /etc/xen/windows.cfg
libxl: error: libxl_pci.c:1089:libxl__device_pci_add: PCI device 0000:01:00.0 cannot be assigned - no IOMMU?
libxl: error: libxl_create.c:1405:domcreate_attach_pci: libxl_device_pci_add failed: -1
libxl: info: libxl.c:1698:devices_destroy_cb: forked pid 2282 for destroy of domain 1

As you can see, I got an error.

# lspci
01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 210] (rev a2)





On Tuesday, May 24, 2016 6:07 PM, "sm8ax1@xxxxxxxxxxx" <sm8ax1@xxxxxxxxxxx> wrote:


It sounds to me like xen-pciback is compiled into the kernel, then. Edit GRUB_CMDLINE_LINUX in /etc/default/grub and add xen-pciback.hide=(00:00.0) (replacing the number with the BDF of your graphics card), then run `grub-mkconfig -o /boot/grub/grub.cfg` and reboot. There is also an unlikely chance that xen-pciback is not included with your kernel at all (neither compiled in nor built as a module), in which case you'll have to track down the kernel build configuration for your distro, and/or build your own custom kernel. But if your distro supports CONFIG_XEN_DOM0, there's no reason it shouldn't include xen-pciback.
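For reference, Fedora prefixes its GRUB2 tooling with "grub2-" and writes the generated file to a different path, so on your Dom0 the steps would look roughly like this (a sketch assuming a BIOS install; EFI installs write to /boot/efi/EFI/fedora/grub.cfg instead):

# append to GRUB_CMDLINE_LINUX in /etc/default/grub, e.g.:
#   xen-pciback.hide=(01:00.0)
# then regenerate the config and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot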

Quoting Jason Long <hack3rcon@xxxxxxxxx>:
I did:
 
[root@localhost ~]# modprobe -v xen-pciback 
[root@localhost ~]# lsmod | grep xen-pciback
[root@localhost ~]# 
 
but there was no output :(

 
On Tuesday, May 17, 2016 7:30 PM, "sm8ax1@xxxxxxxxxxx" <sm8ax1@xxxxxxxxxxx> wrote:


 
So the iommu /should/ be working at this point. Now we just need to hide the graphics card from the Dom0 operating system.

Check whether xen-pciback is a module using `modprobe -v xen-pciback` or `lsmod | grep pciback` (note that lsmod prints the name with an underscore, xen_pciback). If xen-pciback is built as a module on your system, you need to make sure that your graphics driver a) is a module and not compiled in, and b) is never loaded before xen-pciback (the article tells you how to achieve this), even in early userspace! I would recommend blacklisting the graphics driver too, just to be safe.
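A quick sketch of those checks plus the blacklist step (nouveau is the Dom0 driver in this thread; the dracut call assumes Fedora's initramfs tooling):

modprobe -v xen-pciback
lsmod | grep pciback
# keep the Dom0 graphics driver from grabbing the card first:
echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
# rebuild the initramfs so the blacklist holds in early userspace too:
dracut -f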

See this article for the various methods of hiding PCI devices and the syntax to do so.
http://wiki.xen.org/wiki/Xen_PCI_Passthrough#Preparing_a_device_for_passthrough
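For a quick, non-persistent test you can also hand the device to pciback at runtime with xl (this assumes xen-pciback is loaded and nothing else currently has the card claimed):

xl pci-assignable-add 01:00.0
xl pci-assignable-list    # the BDF should now show up here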

As I said before, you'll want your DomU utilizing passthrough to be automatically started up on boot, because you probably won't be able to see anything on the screen until the DomU starts.
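One common way to auto-start a DomU (assuming your distro ships the xendomains service and it scans its default /etc/xen/auto directory):

mkdir -p /etc/xen/auto
ln -s /etc/xen/windows.cfg /etc/xen/auto/windows.cfg
systemctl enable xendomains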

Quoting Jason Long <hack3rcon@xxxxxxxxx>:
I added it, but:
 
[root@localhost ~]# xl dmesg | grep iommu
(XEN) Command line: intel_iommu=on placeholder
[root@localhost ~]# xl pci-
pci-assignable-add      pci-assignable-remove   pci-detach 
pci-assignable-list     pci-attach              pci-list 
[root@localhost ~]# xl pci-assignable-list 
[root@localhost ~]# 
 

 
On Monday, May 16, 2016 12:56 PM, De Coro Guillaume <guillaumedecoro@xxxxxxxxx> wrote:


Hi,

I'm not an expert Xen user, but I know a few things. sm8ax1 is right: it seems you are missing the IOMMU. Don't forget to add "intel_iommu=on" to your grub defaults. If it works, you will see this in your dmesg:
[    0.000000] DMAR: IOMMU enabled
[    0.078793] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.865169] iommu: Adding device 0000:00:00.0 to group 0
[    0.865179] iommu: Adding device 0000:00:02.0 to group 1
[    0.865194] iommu: Adding device 0000:00:14.0 to group 2
[    0.865204] iommu: Adding device 0000:00:14.2 to group 2
[    0.865215] iommu: Adding device 0000:00:16.0 to group 3
[    0.865225] iommu: Adding device 0000:00:17.0 to group 4
[    0.865243] iommu: Adding device 0000:00:1c.0 to group 5
[    0.865257] iommu: Adding device 0000:00:1c.2 to group 5
[    0.865280] iommu: Adding device 0000:00:1f.0 to group 6
[    0.865294] iommu: Adding device 0000:00:1f.2 to group 6
[    0.865303] iommu: Adding device 0000:00:1f.3 to group 6
[    0.865312] iommu: Adding device 0000:00:1f.4 to group 6
[    0.865321] iommu: Adding device 0000:00:1f.6 to group 6
[    0.865329] iommu: Adding device 0000:01:00.0 to group 5
[    0.865338] iommu: Adding device 0000:02:00.0 to group 5


and about DMAR:

[    0.000000] ACPI: DMAR 0x00000000A52B3000 0000A8 (v01 INTEL  SKL      00000001 INTL 00000001)
[    0.000000] DMAR: IOMMU enabled
[    0.078747] DMAR: Host address width 39
[    0.078751] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.078769] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 1c0000c40660462 ecap 7e3ff0505e
[    0.078775] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.078781] DMAR: dmar1: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
[    0.078786] DMAR: RMRR base: 0x000000a4eff000 end: 0x000000a4f1efff
[    0.078789] DMAR: RMRR base: 0x000000a5800000 end: 0x000000a7ffffff
[    0.078793] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.078796] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.078799] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[    0.078801] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[    0.080255] DMAR-IR: Enabled IRQ remapping in xapic mode
[    0.864755] DMAR: No ATSR found
[    0.864868] DMAR: dmar0: Using Queued invalidation
[    0.864992] DMAR: dmar1: Using Queued invalidation
[    0.865000] DMAR: Setting RMRR:
[    0.865022] DMAR: Setting identity map for device 0000:00:02.0 [0xa5800000 - 0xa7ffffff]
[    0.865037] DMAR: Setting identity map for device 0000:00:14.0 [0xa4eff000 - 0xa4f1efff]
[    0.865047] DMAR: Prepare 0-16MiB unity mapping for LPC
[    0.865093] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[    0.865112] DMAR: Intel(R) Virtualization Technology for Directed I/O


It's important that Dom0 does not claim the hardware you want to pass through. Just use the "pci-stub" parameters in your grub defaults. After that, `xl pci-assignable-list` will show your video card as available.
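A sketch of the pci-stub route; VVVV:DDDD below is a placeholder, so read the real vendor:device pair off lspci first:

lspci -n -s 01:00.0    # prints e.g. "01:00.0 0300: VVVV:DDDD (rev a2)"
# add to the kernel command line in /etc/default/grub:
#   pci-stub.ids=VVVV:DDDD
# regenerate grub.cfg, reboot, then verify:
xl pci-assignable-list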

About IGD passthrough, I've been working on it without success for months. Maybe I'm doing something wrong, but the XenGT build fails for me. I'm using an Intel i3 6100T with Intel HD 530 graphics. I saw that Xen 4.7 will handle some IGD passthrough parameters. So wait and see... and try it :)

Ciao.

On 14/05/2016 14:56, sm8ax1@xxxxxxxxxxx wrote:
 
 
Well, it looks to me like you don't have an IOMMU. You can look up your Intel processor at http://ark.intel.com/ and check for "VT-d" support. There is probably a similar site for AMD, but they call it "IOMMU" support; same thing, just a different name. There might also be some way to check through `lshw` or `/proc/cpuinfo` or the like, but I don't know for sure.
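Under Xen there is a more direct check than cpuinfo: the hypervisor's own boot log says whether it found and enabled the IOMMU (note Xen spells it "virtualisation"):

xl dmesg | grep -i 'i/o virt'
# expect "(XEN) I/O virtualisation enabled" on a working VT-d/AMD-Vi setup
# on bare-metal Linux, the ACPI tables are another hint:
dmesg | grep -e DMAR -e IVRS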

The wiki mentions that generic PCI passthrough might still work on some graphics cards, even without an IOMMU, but I imagine your chances are pretty slim. Something to try, perhaps, is setting up your HVM to start automatically when the system boots, with generic PCI passthrough enabled, then blacklisting the graphics module in the Dom0 and rebooting. In theory this prevents the Dom0 driver from interfering with the HVM's configuration of the graphics card, but once again, it might work or it might not.

http://wiki.xen.org/wiki/Xen_VGA_Passthrough
http://wiki.xen.org/wiki/VTdHowTo

If that doesn't work, your options are to buy a new PC/processor with an IOMMU, or to use VNC, Spice, SDL, GTK, etc. with userspace frontends in the Dom0. Spice with the QXL video driver is likely to give you the best performance, but even that won't compete with VGA passthrough.
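As a sketch, a Spice display is switched on from the guest config; these option names follow the xl.cfg documentation for HVM guests, and the port/listen address here are arbitrary examples:

spice=1
spicehost='127.0.0.1'
spiceport=6000
spicedisable_ticketing=1    # no password, so keep it on localhost
vga='qxl'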

http://wiki.xen.org/wiki/SPICE_support_in_Xen

If you go the route of upgrading your hardware, XenGT (now called "GVT-g for Xen") is something else to look into. The idea behind it is to allow multiple VMs to use VGA passthrough simultaneously, in a safe and performant manner, by creating multiple virtual graphics cards at the hardware level. At least as of Jan 2015, XenGT was being developed out-of-tree, but I haven't followed up on whether it's since been merged (or abandoned). If I recall correctly, this is supported on Intel 6th generation and newer processors with Intel HD 6000+ graphics, but you should definitely double-check that.

http://wiki.xen.org/wiki/XenGT
https://blog.xenproject.org/2014/03/11/xen-graphics-virtualization-xengt/
http://events.linuxfoundation.org/sites/events/files/slides/XenGT-LinuxCollaborationSummit-final_1.pdf

There used to be something called "Paravirtualized DRM", which probably worked like the paravirtualized framebuffer, only using the newer and faster Linux DRM API. This, I guess, would have allowed multiple rendering clients across multiple VMs to directly render their window contents just as they would on baremetal (with the PV DRM driver acting as a shim), without any kind of VGA/PCI passthrough. Unfortunately this effort has been abandoned, and I've been unable to track down the author or even the original code.

http://wiki.xen.org/wiki/Paravirtualized_DRM


Quoting Jason Long <hack3rcon@xxxxxxxxx>:
Hello.
I want to use my VGA card in a VM running Windows 7. My VGA card's information is:
 
01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 210] (rev a2)
Subsystem: ASUSTeK Computer Inc. Device 8354
Kernel driver in use: nouveau
Kernel modules: nouveau
 
And:
 
[root@localhost ~]# xl pci-assignable-list 
[root@localhost ~]# 
 
And I added the lines below to my VM config file:
 
gfx_passthru=0
acpi=1
pci=['01:00.0 ']
 
but when I try to fire up my VM, it shows me the error below:
 
libxl: error: libxl_pci.c:1089:libxl__device_pci_add: PCI device 0000:01:00.0 cannot be assigned - no IOMMU?
libxl: error: libxl_create.c:1405:domcreate_attach_pci: libxl_device_pci_add failed: -1
libxl: info: libxl.c:1698:devices_destroy_cb: forked pid 3365 for destroy of domain 3
 
How can I solve it?

 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

