
Re: [RFC PATCH 09/10] [RFC only] xen: iommu: remove last pcidevs_lock() calls in iommu



On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
> There are a number of cases where pcidevs_lock() is used to protect
> something that is not related to PCI devices per se.
> 
> Probably pcidevs_lock() in these places should be replaced with some
> other lock.
> 
> This patch is not intended to be merged and is present only to discuss
> this use of pcidevs_lock().
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>

I wonder if we are being too ambitious: is it really necessary to get rid
of pcidevs_lock completely, rather than replacing each of its occurrences
with something else?

Replacing pcidevs_lock with pdev->lock, i.e. turning the global lock into
a per-device lock, would be a lot easier. That alone would be a major
improvement, and its correctness would be far easier to verify.

This patch and the previous one, by contrast, remove all occurrences of
pcidevs_lock without adding pdev->lock, and that is difficult to prove
correct.
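
To make the suggestion concrete, below is a minimal sketch of the
locking-granularity change, i.e. global lock vs per-device lock. It is a
self-contained userspace illustration only: struct pci_dev, its lock
member and the two helpers are made-up stand-ins for this example, not
the actual Xen structures or API.

    /*
     * Illustrative sketch: replace one global lock with a per-device lock.
     * Everything here is hypothetical example code, not Xen code.
     */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    struct pci_dev {
        uint16_t seg;
        uint8_t bus, devfn;
        pthread_mutex_t lock;                      /* the hypothetical pdev->lock */
    };

    /* Today: every device operation serializes on one global lock. */
    static pthread_mutex_t global_pcidevs_lock = PTHREAD_MUTEX_INITIALIZER;

    static void op_with_global_lock(struct pci_dev *pdev)
    {
        pthread_mutex_lock(&global_pcidevs_lock);  /* stalls work on *all* devices */
        /* ... modify pdev state ... */
        pthread_mutex_unlock(&global_pcidevs_lock);
    }

    /* Proposed: operations on different devices can run concurrently. */
    static void op_with_per_device_lock(struct pci_dev *pdev)
    {
        pthread_mutex_lock(&pdev->lock);           /* stalls work on this device only */
        /* ... modify pdev state ... */
        pthread_mutex_unlock(&pdev->lock);
    }

    int main(void)
    {
        struct pci_dev d = {
            .seg = 0, .bus = 3, .devfn = 0,
            .lock = PTHREAD_MUTEX_INITIALIZER,
        };

        op_with_global_lock(&d);
        op_with_per_device_lock(&d);
        printf("exercised both schemes for %04x:%02x:%02x.%d\n",
               d.seg, d.bus, d.devfn >> 3, d.devfn & 7);
        return 0;
    }

The point is only that work on distinct devices stops serializing on a
single hypervisor-wide lock, while all state changes of any one device
remain serialized, so the correctness argument stays local to that device.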


> ---
>  xen/drivers/passthrough/vtd/intremap.c | 2 --
>  xen/drivers/passthrough/vtd/iommu.c    | 5 -----
>  xen/drivers/passthrough/x86/iommu.c    | 5 -----
>  3 files changed, 12 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
> index 1512e4866b..44e3b72f91 100644
> --- a/xen/drivers/passthrough/vtd/intremap.c
> +++ b/xen/drivers/passthrough/vtd/intremap.c
> @@ -893,8 +893,6 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
>  
>      spin_unlock_irq(&desc->lock);
>  
> -    ASSERT(pcidevs_locked());
> -
>      return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
>  
>   unlock_out:
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 87868188b7..9d258d154d 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -127,8 +127,6 @@ static int context_set_domain_id(struct context_entry *context,
>  {
>      unsigned int i;
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( domid_mapping(iommu) )
>      {
>          unsigned int nr_dom = cap_ndoms(iommu->cap);
> @@ -1882,7 +1880,6 @@ int domain_context_unmap_one(
>      int iommu_domid, rc, ret;
>      bool_t flush_dev_iotlb;
>  
> -    ASSERT(pcidevs_locked());
>      spin_lock(&iommu->lock);
>  
>      maddr = bus_to_context_maddr(iommu, bus);
> @@ -2601,7 +2598,6 @@ static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
>      u16 bdf;
>      int ret, i;
>  
> -    pcidevs_lock();
>      for_each_rmrr_device ( rmrr, bdf, i )
>      {
>          /*
> @@ -2616,7 +2612,6 @@ static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
>              dprintk(XENLOG_ERR VTDPREFIX,
>                       "IOMMU: mapping reserved region failed\n");
>      }
> -    pcidevs_unlock();
>  }
>  
>  static struct iommu_state {
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index f671b0f2bb..4e94ad15df 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -207,7 +207,6 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
>      struct identity_map *map;
>      struct domain_iommu *hd = dom_iommu(d);
>  
> -    ASSERT(pcidevs_locked());
>      ASSERT(base < end);
>  
>      /*
> @@ -479,8 +478,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
>      static unsigned int start;
>      unsigned int idx = find_next_zero_bit(map, UINT16_MAX - DOMID_MASK, start);
>  
> -    ASSERT(pcidevs_locked());
> -
>      if ( idx >= UINT16_MAX - DOMID_MASK )
>          idx = find_first_zero_bit(map, UINT16_MAX - DOMID_MASK);
>      if ( idx >= UINT16_MAX - DOMID_MASK )
> @@ -495,8 +492,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
>  
>  void iommu_free_domid(domid_t domid, unsigned long *map)
>  {
> -    ASSERT(pcidevs_locked());
> -
>      if ( domid == DOMID_INVALID )
>          return;
>  
> -- 
> 2.36.1
> 



 

