
Re: [Xen-devel] Xen PCI passthru supported reset methods (d3d0, FLR, bus reset, link reset)



On 2012-09-18 13:39, Pasi Kärkkäinen wrote:
On Tue, Sep 18, 2012 at 01:19:31PM +0200, Robin Axelsson wrote:
With a pvops dom0, Xen resets devices by writing to the device's "reset" node in
sysfs, so it will reset the device using whatever method the dom0 kernel
supports for that device.
And if you use Xen PCI-back it has this enabled so you don't even
need the 'reset' functionality.
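
(As an illustration, that sysfs reset is just a write of "1" to the device's reset node. Here is a minimal userspace sketch; the BDF 0000:01:00.0 is only a placeholder for whatever device is actually being passed through:)

    /* Minimal sketch of the sysfs "reset" write described above.
     * The BDF 0000:01:00.0 is only an example; substitute the device
     * actually being passed through. Needs root, and the node only
     * exists if the kernel knows some reset method for the device. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *node = "/sys/bus/pci/devices/0000:01:00.0/reset";
        int fd = open(node, O_WRONLY);

        if (fd < 0) {
            perror("open");  /* missing node => no reset method available */
            return 1;
        }
        if (write(fd, "1", 1) != 1)
            perror("write");
        close(fd);
        return 0;
    }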
The version of Linux I have to hand has, in __pci_dev_reset, calls to
the following in this order and stops after the first one which
succeeds:
       * pci_dev_specific_reset (AKA per device quirks)
       * pcie_flr
       * pci_af_flr
       * pci_pm_reset
       * pci_parent_bus_reset

See drivers/pci/pci.c in the kernel for more info.
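
(Roughly, the shape of that fallback logic is as follows; this is a paraphrase from memory of the order described above, not the literal source, so check drivers/pci/pci.c for the real thing:)

    /* Paraphrased sketch of the fallback order described above;
     * not the literal code from drivers/pci/pci.c. Each helper is
     * expected to return -ENOTTY when it cannot handle the device,
     * so the first method that applies wins. */
    static int __pci_dev_reset(struct pci_dev *dev, int probe)
    {
        int rc;

        rc = pci_dev_specific_reset(dev, probe);  /* per-device quirks */
        if (rc != -ENOTTY)
            goto done;

        rc = pcie_flr(dev, probe);                /* PCIe Function Level Reset */
        if (rc != -ENOTTY)
            goto done;

        rc = pci_af_flr(dev, probe);              /* Advanced Features (AF) FLR */
        if (rc != -ENOTTY)
            goto done;

        rc = pci_pm_reset(dev, probe);            /* D3hot -> D0 power cycle */
        if (rc != -ENOTTY)
            goto done;

        rc = pci_parent_bus_reset(dev, probe);    /* secondary bus reset */
    done:
        return rc;
    }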

IIRC classic Xen kernels had similar code in pciback, although I don't
know which specific sets of actions or in which order they were tried.

Ian.

That sounds like great news; it means that FLR is not a
requirement for successfully passing through hardware without errors,
contrary to what is stated on the VTdHowTo page. So it seems that the
VTdHowTo page needs to be updated with this information.

Do you want to update the wiki page? :)

-- Pasi





I wouldn't mind updating the Wiki but I don't have the authority and perhaps not the knowledge.

Before announcing "official" support for non-FLR hardware, I think some testing or testimonials would be in order. When I was experimenting with IOMMU on a server a little over half a year ago, right before sending it out for regular use, the Xen pciback drivers were not properly rolled out in the paravirt ops kernel. That has probably improved since then, and I do happen to have IOMMU-capable hardware available for testing once again.

* Reading through the pci.c code, I think I understand the "pcie_flr" and "pci_pm_reset" methods/functions, but I don't fully understand the other functions listed above for resetting hardware. I can guess that "pci_parent_bus_reset" triggers a bus reset, but I can't tell the difference between "pci_dev_specific_reset" and "pcie_flr", or what 'af' means in "pci_af_flr".

* Another thing I've always wondered about is whether d3d0 triggers a reset in hardware that takes power from an auxiliary input and doesn't rely on the power provided by the PCI bus (such as GPUs and some sound cards like the Asus Xonar Essence STX). I read about someone who managed to pass through a non-FLR Radeon card to a Windows guest by using the "Safe Remove" feature in the Windows domU, which perhaps triggered a d3d0 reset through the ACPI framework somehow. Here's what he wrote (a rough sketch of what a d3d0 reset amounts to follows after the quote):

"I've never had Xen throw errors at me regarding FLR for the 5850, but some recent experience suggests that FLR doesn't quite behave correctly when it's initiated by pciback... really screwy stuff. There have been a lot of threads between me and a few other people on the Xen-Users mailing list regarding the issue of PCI passthrough for Radeon 5XXX and 6XXX series cards to Windows HVM guests.

Radeon cards seem to work properly when they're being used after they're first used by a DomU following a restart of Dom0, then suffer performance problems if DomU is rebooted and Dom0 is not. The solution is to use the Windows "Safe Remove" feature on the video card. That forces a proper FLR (for some reason) and makes it work right.

Aside from that, I can't get the 5850 to coexist with the GPLPV package.... and I wish I knew why. It just BSODs every time. :("

He wrote this about 4 months ago so some things may have changed since then.
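
(For what it's worth, my understanding is that a "d3d0" reset just means bouncing the function's power state through the PCI power-management capability. Below is a rough, from-memory sketch of the idea behind pci_pm_reset, not the literal kernel code; whether a board running off auxiliary power really loses its state across that transition is exactly what I'm unsure about:)

    /* Rough sketch of the idea behind pci_pm_reset(), i.e. the "d3d0"
     * method: drop the function to D3hot and bring it back to D0 via
     * the PCI power-management capability. Paraphrased, not the
     * literal drivers/pci/pci.c code. */
    static int pm_reset_sketch(struct pci_dev *dev)
    {
        u16 csr;

        if (!dev->pm_cap)
            return -ENOTTY;     /* no PM capability, no d3d0 reset */

        pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &csr);
        csr &= ~PCI_PM_CTRL_STATE_MASK;

        /* put the function into D3hot ... */
        pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, csr | PCI_D3hot);
        msleep(10);

        /* ... then back to D0; the PM spec says internal state is reset,
         * but whether aux-powered boards really clear everything is
         * another question. */
        pci_write_config_word(dev, dev->pm_cap + PCI_PM_CTRL, csr | PCI_D0);
        msleep(10);

        return 0;
    }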

So unless more information comes to light on this matter, the Wiki could at least be updated to mention these other reset methods instead of flatly saying that non-FLR hardware is a no-go for PCI passthrough.

Robin.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

