Re: [Xen-devel] Issue With Patch Compilation Fails (xen/arm: Introduce a generic way to describe device) with HAS_PCI and HAS_PASSTHROUGH
On Wednesday 08 April 2015 11:05 AM, Manish Jaggi wrote:
> On Tuesday 07 April 2015 10:13 PM, Stefano Stabellini wrote:
>
> The problem with the patch is that it introduces two different device
> structures for x86 and ARM: on x86 the device is a pci_dev, while on ARM
> there is a proper device structure, distinct from pci_dev.
>
>> On Tue, 7 Apr 2015, Jaggi, Manish wrote:
>>> Hi Julien,
>>>
>>> The following patch generates a compiler error when HAS_PCI and
>>> HAS_PASSTHROUGH are enabled. Please advise how to fix this issue, or
>>> you can revert this patch. Should I add a device structure in pci_dev,
>>> or is there another way?
>>
>> Hello Manish,
>> we have never really built Xen on ARM with HAS_PCI=y, so it is normal
>> that it won't compile out of the box; it is not just a problem caused
>> by the commit below.
>
> So the compilation failure is by design.
>
>> I imagine that you'll need to do more than set HAS_PCI to y in order to
>> get PCI and PCI passthrough working properly with Xen on ARM. Feel free
>> to go ahead and propose any changes necessary.
>
> ok

The source of the problem is reusing the code of two functions:
- reassign_device
- assign_device

The earlier code had dt_ variants of these two functions. The question is
simple: should there be some redundancy between the two functions, OR should
we change a lot of code in the common file drivers/passthrough/pci.c and add
pci_to_dev macros in all platform_ops calls?

There are several issues with the pci_to_dev approach:
a) the iommu_ops callbacks take a pci_dev parameter on x86 but a device
   parameter on ARM (smmu.c);
b) hacking device into pci_dev is not a good way of doing it.

I prefer having minimal/some redundancy between the two functions rather
than changing a lot of code. So IMHO revert this patch.

> Cheers,
> Stefano

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel