
Re: [PATCH v5 08/11] xen/arm: Enable the existing x86 virtual PCI support for ARM.



On Mon, 11 Oct 2021, Oleksandr Andrushchenko wrote:
> On 11.10.21 20:15, Bertrand Marquis wrote:
> > Hi Roger,
> >
> >> On 11 Oct 2021, at 17:43, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >>
> >> On Mon, Oct 11, 2021 at 04:20:14PM +0000, Oleksandr Andrushchenko wrote:
> >>>
> >>> On 11.10.21 19:12, Bertrand Marquis wrote:
> >>>> Hi Roger,
> >>>>
> >>>>> On 11 Oct 2021, at 11:51, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
> >>>>>
> >>>>> On Wed, Oct 06, 2021 at 06:40:34PM +0100, Rahul Singh wrote:
> >>>>>> The existing vPCI support available for x86 is adapted for Arm.
> >>>>>> When a device is added to Xen via the hypercall
> >>>>>> “PHYSDEVOP_pci_device_add”, a vPCI handler for config space
> >>>>>> access is added to Xen to emulate the PCI device's config space.
> >>>>>>
> >>>>>> An MMIO trap handler for the PCI ECAM space is registered in Xen
> >>>>>> so that when a guest tries to access the PCI config space, Xen
> >>>>>> will trap the access and emulate the read/write using vPCI rather
> >>>>>> than the real PCI hardware.
> >>>>>>
> >>>>>> For Dom0less systems, scan_pci_devices() is used to discover the
> >>>>>> PCI devices in Xen, and the vPCI handlers are added while Xen boots.
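
As a quick illustration of the trap-and-emulate flow described above, the
handlers boil down to something like the following. This is only a sketch,
not the code from the patch: GUEST_VPCI_ECAM_BASE/SIZE stand for the guest
ECAM window constants this series adds to arch-arm.h, and vpci_ecam_read()/
vpci_ecam_write() are assumed to be the common vPCI config space accessors.

    #include <xen/sched.h>
    #include <xen/vpci.h>
    #include <asm/mmio.h>

    static int vpci_mmio_read(struct vcpu *v, mmio_info_t *info,
                              register_t *r, void *p)
    {
        pci_sbdf_t sbdf;
        /* Offset of the access within the guest's virtual ECAM window. */
        const unsigned long off = info->gpa - GUEST_VPCI_ECAM_BASE;
        unsigned long data;

        /* In ECAM the BDF is encoded in bits [27:12] of the window offset. */
        sbdf.sbdf = off >> 12;

        /* Forward to the vPCI emulation instead of the real hardware. */
        if ( vpci_ecam_read(sbdf, off & 0xfff, 1U << info->dabt.size, &data) )
        {
            *r = data;
            return 1;
        }

        /* No emulated device behind this address: read back all ones. */
        *r = ~0UL;
        return 0;
    }

    static int vpci_mmio_write(struct vcpu *v, mmio_info_t *info,
                               register_t r, void *p)
    {
        pci_sbdf_t sbdf;
        const unsigned long off = info->gpa - GUEST_VPCI_ECAM_BASE;

        sbdf.sbdf = off >> 12;

        return vpci_ecam_write(sbdf, off & 0xfff, 1U << info->dabt.size, r);
    }

    static const struct mmio_handler_ops vpci_mmio_handler = {
        .read  = vpci_mmio_read,
        .write = vpci_mmio_write,
    };

The pair is then registered over the ECAM region with register_mmio_handler(),
which is where the dom0 vs. domU layout question below comes in.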
> >>>>>>
> >>>>>> Signed-off-by: Rahul Singh <rahul.singh@xxxxxxx>
> >>>>>> Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> >>>>>> ---
> >>>>>> Change in v5:
> >>>>>> - Add pci_cleanup_msi(pdev) in cleanup part.
> >>>>>> - Added Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> >>>>>> Change in v4:
> >>>>>> - Move addition of XEN_DOMCTL_CDF_vpci flag to separate patch
> >>>>>> Change in v3:
> >>>>>> - Use is_pci_passthrough_enabled() in place of pci_passthrough_enabled 
> >>>>>> variable
> >>>>>> - Reject XEN_DOMCTL_CDF_vpci for x86 in arch_sanitise_domain_config()
> >>>>>> - Remove IS_ENABLED(CONFIG_HAS_VPCI) from has_vpci()
> >>>>>> Change in v2:
> >>>>>> - Add new XEN_DOMCTL_CDF_vpci flag
> >>>>>> - modify has_vpci() to include XEN_DOMCTL_CDF_vpci
> >>>>>> - enable vpci support when pci-passthough option is enabled.
> >>>>>> ---
> >>>>>> ---
> >>>>>> xen/arch/arm/Makefile         |   1 +
> >>>>>> xen/arch/arm/domain.c         |   4 ++
> >>>>>> xen/arch/arm/domain_build.c   |   3 +
> >>>>>> xen/arch/arm/vpci.c           | 102 ++++++++++++++++++++++++++++++++++
> >>>>>> xen/arch/arm/vpci.h           |  36 ++++++++++++
> >>>>>> xen/drivers/passthrough/pci.c |  18 ++++++
> >>>>>> xen/include/asm-arm/domain.h  |   7 ++-
> >>>>>> xen/include/asm-x86/pci.h     |   2 -
> >>>>>> xen/include/public/arch-arm.h |   7 +++
> >>>>>> xen/include/xen/pci.h         |   2 +
> >>>>>> 10 files changed, 179 insertions(+), 3 deletions(-)
> >>>>>> create mode 100644 xen/arch/arm/vpci.c
> >>>>>> create mode 100644 xen/arch/arm/vpci.h
> >>>>>>
> >>>>>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> >>>>>> index 44d7cc81fa..fb9c976ea2 100644
> >>>>>> --- a/xen/arch/arm/Makefile
> >>>>>> +++ b/xen/arch/arm/Makefile
> >>>>>> @@ -7,6 +7,7 @@ ifneq ($(CONFIG_NO_PLAT),y)
> >>>>>> obj-y += platforms/
> >>>>>> endif
> >>>>>> obj-$(CONFIG_TEE) += tee/
> >>>>>> +obj-$(CONFIG_HAS_VPCI) += vpci.o
> >>>>>>
> >>>>>> obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
> >>>>>> obj-y += bootfdt.init.o
> >>>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> >>>>>> index 36138c1b2e..fbb52f78f1 100644
> >>>>>> --- a/xen/arch/arm/domain.c
> >>>>>> +++ b/xen/arch/arm/domain.c
> >>>>>> @@ -39,6 +39,7 @@
> >>>>>> #include <asm/vgic.h>
> >>>>>> #include <asm/vtimer.h>
> >>>>>>
> >>>>>> +#include "vpci.h"
> >>>>>> #include "vuart.h"
> >>>>>>
> >>>>>> DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
> >>>>>> @@ -767,6 +768,9 @@ int arch_domain_create(struct domain *d,
> >>>>>>      if ( is_hardware_domain(d) && (rc = domain_vuart_init(d)) )
> >>>>>>          goto fail;
> >>>>>>
> >>>>>> +    if ( (rc = domain_vpci_init(d)) != 0 )
> >>>>>> +        goto fail;
> >>>>>> +
> >>>>>>      return 0;
> >>>>>>
> >>>>>> fail:
> >>>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> >>>>>> index c5afbe2e05..f4c89bde8c 100644
> >>>>>> --- a/xen/arch/arm/domain_build.c
> >>>>>> +++ b/xen/arch/arm/domain_build.c
> >>>>>> @@ -3053,6 +3053,9 @@ void __init create_dom0(void)
> >>>>>>      if ( iommu_enabled )
> >>>>>>          dom0_cfg.flags |= XEN_DOMCTL_CDF_iommu;
> >>>>>>
> >>>>>> +    if ( is_pci_passthrough_enabled() )
> >>>>>> +        dom0_cfg.flags |= XEN_DOMCTL_CDF_vpci;
> >>>>> I think I'm confused with this. You seem to enable vPCI for dom0, but
> >>>>> then domain_vpci_init will setup traps for the guest virtual ECAM
> >>>>> layout, not the native one that dom0 will be using.
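
To make the layout point concrete: the handler registered by this patch only
covers the fixed guest ECAM window, so a dom0 using the host's native ECAM
layout would not be trapped by it. A rough sketch of the split being discussed,
where vpci_register_hwdom_ecam() is a purely hypothetical placeholder for the
dom0 support added by the follow-up series:

    int domain_vpci_init(struct domain *d)
    {
        if ( !has_vpci(d) )
            return 0;

        if ( is_hardware_domain(d) )
            /*
             * Dom0 uses the host's native ECAM layout, so its config space
             * ranges have to be registered separately (hypothetical helper,
             * standing in for the follow-up dom0 series).
             */
            return vpci_register_hwdom_ecam(d);

        /* DomUs trap the fixed virtual ECAM window from the guest memory map. */
        register_mmio_handler(d, &vpci_mmio_handler,
                              GUEST_VPCI_ECAM_BASE, GUEST_VPCI_ECAM_SIZE,
                              NULL);

        return 0;
    }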
> >>>> I think after the last discussions, it was decided to also install the
> >>>> vpci handler for dom0. I will have to look into this and come back to you.
> >>>> @Oleksandr: Could you comment on this.
> >>> Yes, we do trap Dom0 as well. The Dom0 traps are not in this series, but
> >>> are in mine, as they need more preparatory work. Please see [1]
> >> Then I don't think we should set XEN_DOMCTL_CDF_vpci for dom0 here, it
> >> should instead be done in the patch where dom0 support is introduced.
> > Ok I will check to remove this and include the change in v6.
> Just to make it clear: do we want to remove this piece from this patch
> and instead have a dedicated patch on top of my series, so it is enabled
> right after we have the code that sets up the trap handlers for Dom0?
> If so, then do we want that patch to be chained in my series or sent as
> a follow up right after it separately?

I think we want to remove the XEN_DOMCTL_CDF_vpci chunk from this patch.

Where exactly it should be introduced, I am not sure. I think it would
be OK as a separate single patch at the end. It doesn't have to be part
of the outstanding series, considering that we are also missing the
patch to add "select HAS_PCI" for ARM.
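
For reference, the missing enablement would presumably be a Kconfig change
along these lines (illustrative only; the final option name, prompt and
dependencies are up to that patch):

    config PCI_PASSTHROUGH
        bool "PCI passthrough support"
        depends on ARM_64
        select HAS_PCI
        select HAS_VPCI
        help
          Assign physical PCI devices to guests, including the vPCI
          config space emulation enabled by this series.

With something like that in place, XEN_DOMCTL_CDF_vpci could then be set for
dom0 in the patch that introduces the dom0 ECAM traps, as discussed above.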

 

