
Re: [Xen-devel] [PATCH v3 07/10] ioreq: allow decoding accesses to MMCFG regions



> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Roger 
> Pau Monne
> Sent: 30 September 2019 14:33
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant <paul@xxxxxxx>; 
> Wei Liu <wl@xxxxxxx>; Jan
> Beulich <jbeulich@xxxxxxxx>; Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Subject: [Xen-devel] [PATCH v3 07/10] ioreq: allow decoding accesses to MMCFG 
> regions
> 
> Pick up on the infrastructure already added for vPCI and allow ioreq
> to decode accesses to MMCFG regions registered for a domain. This
> infrastructure is still only accessible from internal callers, so
> MMCFG regions can only be registered from the internal domain builder
> used by PVH dom0.
> 
> Note that the vPCI infrastructure to decode and handle accesses to
> MMCFG regions will be removed in following patches when vPCI is
> switched to become an internal ioreq server.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Reviewed-by: Paul Durrant <paul@xxxxxxx>

...with one nit below...

> ---
> Changes since v2:
>  - Don't prevent mapping MCFG ranges by ioreq servers.
> 
> Changes since v1:
>  - Remove prototype for destroy_vpci_mmcfg.
>  - Keep the code in io.c so PCI accesses to MMCFG regions can be
>    decoded before ioreq processing.
> ---
>  xen/arch/x86/hvm/dom0_build.c       |  8 +--
>  xen/arch/x86/hvm/hvm.c              |  2 +-
>  xen/arch/x86/hvm/io.c               | 79 ++++++++++++-----------------
>  xen/arch/x86/hvm/ioreq.c            | 18 +++++--
>  xen/arch/x86/physdev.c              |  5 +-
>  xen/drivers/passthrough/x86/iommu.c |  2 +-
>  xen/include/asm-x86/hvm/io.h        | 29 ++++++++---
>  7 files changed, 75 insertions(+), 68 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
> index 831325150b..b30042d8f3 100644
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -1108,10 +1108,10 @@ static void __hwdom_init pvh_setup_mmcfg(struct domain *d)
> 
>      for ( i = 0; i < pci_mmcfg_config_num; i++ )
>      {
> -        rc = register_vpci_mmcfg_handler(d, pci_mmcfg_config[i].address,
> -                                         pci_mmcfg_config[i].start_bus_number,
> -                                         pci_mmcfg_config[i].end_bus_number,
> -                                         pci_mmcfg_config[i].pci_segment);
> +        rc = hvm_register_mmcfg(d, pci_mmcfg_config[i].address,
> +                                pci_mmcfg_config[i].start_bus_number,
> +                                pci_mmcfg_config[i].end_bus_number,
> +                                pci_mmcfg_config[i].pci_segment);
>          if ( rc )
>              printk("Unable to setup MMCFG handler at %#lx for segment %u\n",
>                     pci_mmcfg_config[i].address,
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index c22cb39cf3..5348186c0c 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -753,7 +753,7 @@ void hvm_domain_destroy(struct domain *d)
>          xfree(ioport);
>      }
> 
> -    destroy_vpci_mmcfg(d);
> +    hvm_free_mmcfg(d);
>  }
> 
>  static int hvm_save_tsc_adjust(struct vcpu *v, hvm_domain_context_t *h)
> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> index a5b0a23f06..3334888136 100644
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -279,6 +279,18 @@ unsigned int hvm_pci_decode_addr(unsigned int cf8, unsigned int addr,
>      return CF8_ADDR_LO(cf8) | (addr & 3);
>  }
> 
> +unsigned int hvm_mmcfg_decode_addr(const struct hvm_mmcfg *mmcfg,
> +                                   paddr_t addr, pci_sbdf_t *sbdf)
> +{
> +    addr -= mmcfg->addr;
> +    sbdf->bdf = MMCFG_BDF(addr);
> +    sbdf->bus += mmcfg->start_bus;
> +    sbdf->seg = mmcfg->segment;
> +
> +    return addr & (PCI_CFG_SPACE_EXP_SIZE - 1);
> +}
> +
> +

Extraneous blank line here.
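
(For anyone following along: the decode above is the standard ECAM layout, i.e. register offset in bits 0-11, function in bits 12-14, device in bits 15-19 and bus in bits 20-27. Below is a minimal standalone sketch of the same arithmetic; the *_DEMO names are hypothetical stand-ins for MMCFG_BDF and PCI_CFG_SPACE_EXP_SIZE, the addresses are invented, and it assumes start_bus and segment are both 0, whereas the real helper also folds in mmcfg->start_bus and mmcfg->segment.)

/*
 * Minimal standalone sketch of the ECAM decode performed by
 * hvm_mmcfg_decode_addr() above. Not the Xen code: the *_DEMO
 * names and the addresses are invented for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

/* Each function gets a 4K extended config space window in ECAM. */
#define CFG_SPACE_EXP_SIZE_DEMO 4096U

/* ECAM layout: bits 12-14 function, 15-19 device, 20-27 bus. */
#define BDF_FROM_ECAM_DEMO(off) (((off) >> 12) & 0xffffU)

int main(void)
{
    uint64_t mmcfg_base = 0xe0000000UL; /* registered MMCFG window base */
    uint64_t access = 0xe00a8010UL;     /* guest access inside the window */
    uint64_t off = access - mmcfg_base; /* mirrors addr -= mmcfg->addr */

    unsigned int bdf = BDF_FROM_ECAM_DEMO(off);
    unsigned int reg = off & (CFG_SPACE_EXP_SIZE_DEMO - 1);

    printf("bus %u dev %u fn %u reg %#x\n",
           (bdf >> 8) & 0xff, (bdf >> 3) & 0x1f, bdf & 0x7, reg);
    return 0;
}

Compiled and run, this prints "bus 0 dev 21 fn 0 reg 0x10".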

  Paul
