
Re: [PATCH v3 2/6] xen: pci: introduce reference counting for pdev


  • To: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 13 Apr 2023 17:00:33 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>
  • Delivery-date: Thu, 13 Apr 2023 15:00:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Apr 12, 2023 at 09:54:12PM +0000, Volodymyr Babchuk wrote:
> 
> Hi Roger,
> 
> First of all, I want to provide link [1] to the RFC series where I tried
> a total PCI locking rework. After discussing with Jan, it became clear to
> me that the task is much harder than I anticipated. So, it was decided to
> move in smaller steps. The first step is to make the vPCI code independent
> of the global PCI lock. Actually, this is not the first try.
> Oleksandr Andrushchenko tried to use an r/w lock for this: [2]. But
> Jan suggested using refcounting instead of r/w locks, and I liked the
> idea. So, this is why you are seeing this patch series.

Thanks, I've been on leave for long periods recently and I've missed
some of the series.

> 
> 
> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
> 
> > On Tue, Apr 11, 2023 at 11:41:04PM +0000, Volodymyr Babchuk wrote:
> >> 
> >> Hi Roger,
> >> 
> >> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
> >> 
> >> > On Tue, Mar 14, 2023 at 08:56:29PM +0000, Volodymyr Babchuk wrote:
> >> >> Prior to this change, lifetime of pci_dev objects was protected by 
> >> >> global
> >> >> pcidevs_lock(). Long-term plan is to remove this log, so we need some
> >> >                                                    ^ lock
> >> >
> >> > I wouldn't say remove, as one way or another we need a lock to protect
> >> > concurrent accesses.
> >> >
> >> 
> >> I'll write "replace this global lock with couple of more granular
> >> locking devices"
> >> if this is okay for you.
> >> 
> >> >> other mechanism to ensure that those objects will not disappear under
> >> >> the feet of code that accesses them. Reference counting is a good choice
> >> >> as it provides an easy-to-comprehend way to control object lifetime.
> >> >> 
> >> >> This patch adds two new helper functions: pcidev_get() and
> >> >> pcidev_put(). pcidev_get() will increase reference counter, while
> >> >> pcidev_put() will decrease it, destroying object when counter reaches
> >> >> zero.
> >> >> 
> >> >> pcidev_get() should be used only when you already have a valid pointer
> >> >> to the object or you are holding a lock that protects one of the
> >> >> lists (domain, pseg or ats) that store pci_dev structs.
> >> >> 
> >> >> pcidev_get() is rarely used directly, because there already are
> >> >> functions that will provide a valid pointer to a pci_dev struct:
> >> >> pci_get_pdev() and pci_get_real_pdev(). They will lock the appropriate
> >> >> list, find the needed object and increase its reference counter before
> >> >> returning it to the caller.
> >> >> 
> >> >> Naturally, pcidev_put() should be called after finishing working with
> >> >> the received object. This is the reason why this patch has so many
> >> >> pcidev_put()s and so few pcidev_get()s: existing calls to
> >> >> pci_get_*() functions will now increase the reference counter
> >> >> automatically; we just need to decrease it back when we are finished.
> >> >
> >> > After looking a bit into this, I would like to ask whether the need to
> >> > increase the refcount for each use of a pdev has been considered.
> >> >
> >> 
> >> This is how Linux uses reference counting. It decreases the cognitive
> >> load and the chance of error, as there is a simple set of rules to
> >> follow.
> >> 
> >> > For example I would consider the initial alloc_pdev() to take a
> >> > refcount, and then pci_remove_device() _must_ be the function that
> >> > removes the last refcount, so that it can return -EBUSY otherwise (see
> >> > my comment below).
> >> 
> >> I tend to disagree there, as this ruins the very idea of reference
> >> counting. We can't know who else holds a reference right now. Okay, we
> >> might know, but this requires an additional lock to serialize
> >> accesses. Which, in turn, makes the refcount unneeded.
> >
> > In principle pci_remove_device() must report whether the device is
> > ready to be physically removed from the system, so it must return
> > -EBUSY if there are still users accessing the device.
> >
> > A user would use PHYSDEVOP_manage_pci_remove to signal Xen it's trying
> > to physically remove a PCI device from a system, so we must ensure
> > that when the hypervisor returns success the device is ready to be
> > physically removed.
> >
> > Or at least that's my understanding of how this should work.
> >
> 
> As far as I can see, this is not how it is implemented right
> now. pci_remove_device() does not check whether the device is assigned to
> a domain. It does not check if there are still users accessing the
> device. It just relies on the global PCI lock to ensure that the device
> is removed in an orderly manner.

Right, the expectation is that any path inside of the hypervisor using
the device will hold the pcidevs lock, and thus by holding it while
removing we assert that no users (inside the hypervisor) are left.

I don't think we have been very consistent about the usage of the
pcidevs lock, and hence most of this is likely broken.  Hopefully
removing a PCI device from a system is a very uncommon operation.
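
To make the model being described concrete, a minimal sketch of the
pre-series situation (do_something() is a made-up placeholder; the other
helpers are the existing ones):

    pcidevs_lock();
    pdev = pci_get_pdev(NULL, sbdf);
    if ( pdev )
        /* Safe: removal also runs under pcidevs_lock(). */
        do_something(pdev);
    pcidevs_unlock();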

> My patch series has no intention to change this behavior. All I want to
> achieve is to allow the vPCI code to access struct pdev objects
> without holding the global PCI lock.

That's all fine, but we need to make sure it doesn't make things worse
than they currently are, and ideally it should make things easier.

That's why I would like to understand exactly what the purpose of
the refcount is, and how it should be used.  The usage of the refcount
should be compatible with the intended behaviour of
pci_remove_device(), regardless of whether the current implementation
is correct.  We don't want to be piling up more broken stuff on
top of an already broken implementation.
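
For reference, this is the usage model I understand the series to
introduce (illustrative sketch only, based on the hunks below):
pci_get_pdev() now returns with a reference held, which the caller drops
once done with the device:

    pdev = pci_get_pdev(d, sbdf);   /* takes a reference on success */
    if ( pdev )
    {
        /* pdev cannot be freed while we hold the reference. */
        pcidev_put(pdev);           /* drop it once we are done */
    }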

> >> >
> >> > That makes me wonder if for example callers of pci_get_pdev(d, sbdf)
> >> > do need to take an extra refcount, because such access is already
> >> > protected from the pdev going away by the fact that the device is
> >> > assigned to a guest.  But maybe it's too much work to separate users
> >> > of pci_get_pdev(d, ...); vs pci_get_pdev(NULL, ...);.
> >> >
> >> > There's also a window when the refcount is dropped to 0, and the
> >> > destruction function is called, but at the same time a concurrent
> >> > thread could attempt to take a reference to the pdev still?
> >> 
> >> The last pcidev_put() would be called by pci_remove_device(), after
> >> removing the device from all lists. This should prevent other threads
> >> from obtaining a valid reference to the pdev.
> >
> > What if a concurrent user has taken a reference to the object before
> > pci_remove_device() has removed the device from the lists, and still
> > holds it when pci_remove_device() performs the supposedly last
> > pcidev_put() call?
> 
> Well, let's consider the vPCI code as this concurrent user, for
> example. First, it will try to take vpci->lock. Depending on where
> pci_remove_device() is at that point, there are three cases:
> 
> 1. The lock is taken before vpci_remove_device() takes it. In this
> case the vPCI code works as always.
> 
> 2. It tries to take the lock while vpci_remove_device() is already
> holding it. In this case we fall through to the next case:
> 
> 3. The lock is taken after vpci_remove_device() has finished its work. In
> this case the vPCI code sees that it was called for a device in an invalid
> state and exits.

For 2) and 3) you will hit a dereference of freed memory, as the lock
(vpci->lock) would have been freed by vpci_remove_device() while a
concurrent user is waiting on pci_remove_device() to release the lock.

I'm not sure how the user sees the device is in an invalid state,
because it was waiting on a lock (vpci->lock) that has been removed
under its feet.

This is an existing issue not made worse by the refcounting, but it's
not a great example.
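
Spelled out, the interleaving I'm worried about looks roughly like this
(illustrative only):

    /*
     * CPU A (e.g. vPCI handler)        CPU B (pci_remove_device())
     *
     * pdev = pci_get_pdev(...);
     *                                   vpci_remove_device(pdev);
     *                                     -> frees pdev->vpci (and its lock)
     * spin_lock(&pdev->vpci->lock);    <- dereferences freed memory
     */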

> 
> As you can see, there is no case where the vPCI code is running on a
> device which was removed.
> 
> After the vPCI code drops the refcount, the pdev object will be freed once
> and for all. Please note that I am talking about the pdev object here, not
> about the PCI device, because the PCI device (as a high-level entity) was
> destroyed by pci_remove_device(). The refcount is needed just for the last
> clean-up operations.

Right, but pci_remove_device() will return success even when there are
some users holding a refcount to the device, which is IMO undesirable.

As I understand it the purpose of pci_remove_device() is that once it
returns success the device can be physically removed from the system.
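
To make the semantics I'm after concrete, here is a minimal
self-contained sketch (simplified types and made-up names, not the
actual Xen code) of a put helper that reports whether it dropped the
last reference, so a removal path could return -EBUSY instead of
succeeding while users remain:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdlib.h>

    struct pdev {
        atomic_uint refcnt;
    };

    static struct pdev *pdev_alloc(void)
    {
        struct pdev *p = calloc(1, sizeof(*p));

        if ( p )
            atomic_init(&p->refcnt, 1);  /* creation-time reference */
        return p;
    }

    static void pdev_get(struct pdev *p)
    {
        atomic_fetch_add(&p->refcnt, 1);
    }

    static void pdev_put(struct pdev *p)
    {
        /* Free the object only when the last reference is dropped. */
        if ( atomic_fetch_sub(&p->refcnt, 1) == 1 )
            free(p);
    }

    /*
     * Drop the creation-time reference, but only if it is the last one;
     * otherwise report that users remain so the caller can fail with
     * -EBUSY.
     */
    static bool pdev_put_final(struct pdev *p)
    {
        unsigned int expected = 1;

        if ( !atomic_compare_exchange_strong(&p->refcnt, &expected, 0) )
            return false;
        free(p);
        return true;
    }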

> >
> >> >
> >> >>          sbdf.devfn &= ~stride;
> >> >>          pdev = pci_get_pdev(NULL, sbdf);
> >> >>          if ( pdev && stride != pdev->phantom_stride )
> >> >> +        {
> >> >> +            pcidev_put(pdev);
> >> >>              pdev = NULL;
> >> >> +        }
> >> >>      }
> >> >>  
> >> >>      return pdev;
> >> >> @@ -548,13 +526,18 @@ struct pci_dev *pci_get_pdev(const struct domain 
> >> >> *d, pci_sbdf_t sbdf)
> >> >>          list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
> >> >>              if ( pdev->sbdf.bdf == sbdf.bdf &&
> >> >>                   (!d || pdev->domain == d) )
> >> >> +            {
> >> >> +                pcidev_get(pdev);
> >> >>                  return pdev;
> >> >> +            }
> >> >>      }
> >> >>      else
> >> >>          list_for_each_entry ( pdev, &d->pdev_list, domain_list )
> >> >>              if ( pdev->sbdf.bdf == sbdf.bdf )
> >> >> +            {
> >> >> +                pcidev_get(pdev);
> >> >>                  return pdev;
> >> >> -
> >> >> +            }
> >> >>      return NULL;
> >> >>  }
> >> >>  
> >> >> @@ -663,7 +646,10 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
> >> >>                              PCI_SBDF(seg, info->physfn.bus,
> >> >>                                       info->physfn.devfn));
> >> >>          if ( pdev )
> >> >> +        {
> >> >>              pf_is_extfn = pdev->info.is_extfn;
> >> >> +            pcidev_put(pdev);
> >> >> +        }
> >> >>          pcidevs_unlock();
> >> >>          if ( !pdev )
> >> >>              pci_add_device(seg, info->physfn.bus, info->physfn.devfn,
> >> >> @@ -818,7 +804,9 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
> >> >>              if ( pdev->domain )
> >> >>                  list_del(&pdev->domain_list);
> >> >>              printk(XENLOG_DEBUG "PCI remove device %pp\n", 
> >> >> &pdev->sbdf);
> >> >> -            free_pdev(pseg, pdev);
> >> >> +            list_del(&pdev->alldevs_list);
> >> >> +            pdev_msi_deinit(pdev);
> >> >> +            pcidev_put(pdev);
> >> >
> >> > Hm, I think here we want to make sure that the device has been freed,
> >> > or else you would have to return -EBUSY to the caller to notify that
> >> > the device is still in use.
> >> 
> >> Why? As far as I can see, the pdev object may still potentially be
> >> accessed by some other CPU right now. So the pdev object will be freed
> >> after the last reference is dropped. As it is already removed from all
> >> the lists, pci_get_pdev() will not find it anymore.
> >> 
> >> Actually, I can't see how this can happen in reality, as vPCI, MSI and
> >> IOMMU are already deactivated for this device. So, no one would touch it.
> >
> > Wouldn't it be possible for a concurrent user to hold a reference from
> > before the device has been 'deactivated'?
> >
> 
> Yes, it can hold a reference. This is why we need additional locking to
> ensure that, say, pci_cleanup_msi() does not race with the rest of the MSI
> code. Right now this is ensured by the global PCI lock.
> 
> >> >
> >> > I think we need an extra pcidev_put_final() or similar that can be
> >> > used in pci_remove_device() to assert that the device has been
> >> > actually removed.
> >> 
> >> Will something break if we don't do this? I can't see how this can
> >> happen.
> >
> > As mentioned above, once pci_remove_device() returns 0 the admin
> > should be capable of physically removing the device from the system.
> >
> 
> This patch series does not alter this requirement. The admin is still
> capable of physically removing the device from the system after a
> successful call to pci_remove_device().

Indeed, but there might be users in the hypervisor still holding a
reference to the pdev.

> >> >> -static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, 
> >> >> u32 flag)
> >> >> +static int assign_device(struct domain *d, struct pci_dev *pdev, u32 
> >> >> flag)
> >> >>  {
> >> >>      const struct domain_iommu *hd = dom_iommu(d);
> >> >> -    struct pci_dev *pdev;
> >> >> +    uint8_t devfn;
> >> >>      int rc = 0;
> >> >>  
> >> >>      if ( !is_iommu_enabled(d) )
> >> >> @@ -1422,10 +1412,11 @@ static int assign_device(struct domain *d, u16 
> >> >> seg, u8 bus, u8 devfn, u32 flag)
> >> >>  
> >> >>      /* device_assigned() should already have cleared the device for 
> >> >> assignment */
> >> >>      ASSERT(pcidevs_locked());
> >> >> -    pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >> >>      ASSERT(pdev && (pdev->domain == hardware_domain ||
> >> >>                      pdev->domain == dom_io));
> >> >>  
> >> >> +    devfn = pdev->devfn;
> >> >> +
> >> >>      /* Do not allow broken devices to be assigned to guests. */
> >> >>      rc = -EBADF;
> >> >>      if ( pdev->broken && d != hardware_domain && d != dom_io )
> >> >> @@ -1460,7 +1451,7 @@ static int assign_device(struct domain *d, u16 
> >> >> seg, u8 bus, u8 devfn, u32 flag)
> >> >>   done:
> >> >>      if ( rc )
> >> >>          printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
> >> >> -               d, &PCI_SBDF(seg, bus, devfn), rc);
> >> >> +               d, &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
> >> >>      /* The device is assigned to dom_io so mark it as quarantined */
> >> >>      else if ( d == dom_io )
> >> >>          pdev->quarantine = true;
> >> >> @@ -1595,6 +1586,9 @@ int iommu_do_pci_domctl(
> >> >>          ASSERT(d);
> >> >>          /* fall through */
> >> >>      case XEN_DOMCTL_test_assign_device:
> >> >> +    {
> >> >> +        struct pci_dev *pdev;
> >> >> +
> >> >>          /* Don't support self-assignment of devices. */
> >> >>          if ( d == current->domain )
> >> >>          {
> >> >> @@ -1622,26 +1616,36 @@ int iommu_do_pci_domctl(
> >> >>          seg = machine_sbdf >> 16;
> >> >>          bus = PCI_BUS(machine_sbdf);
> >> >>          devfn = PCI_DEVFN(machine_sbdf);
> >> >> -
> >> >>          pcidevs_lock();
> >> >> -        ret = device_assigned(seg, bus, devfn);
> >> >> +        pdev = pci_get_pdev(NULL, PCI_SBDF(seg, bus, devfn));
> >> >> +        if ( !pdev )
> >> >> +        {
> >> >> +            printk(XENLOG_G_INFO "%pp non-existent\n",
> >> >> +                   &PCI_SBDF(seg, bus, devfn));
> >> >> +            ret = -EINVAL;
> >> >> +            break;
> >> >> +        }
> >> >> +
> >> >> +        ret = device_assigned(pdev);
> >> >>          if ( domctl->cmd == XEN_DOMCTL_test_assign_device )
> >> >>          {
> >> >>              if ( ret )
> >> >>              {
> >> >> -                printk(XENLOG_G_INFO "%pp already assigned, or 
> >> >> non-existent\n",
> >> >> +                printk(XENLOG_G_INFO "%pp already assigned\n",
> >> >>                         &PCI_SBDF(seg, bus, devfn));
> >> >>                  ret = -EINVAL;
> >> >>              }
> >> >>          }
> >> >>          else if ( !ret )
> >> >> -            ret = assign_device(d, seg, bus, devfn, flags);
> >> >> +            ret = assign_device(d, pdev, flags);
> >> >> +
> >> >> +        pcidev_put(pdev);
> >> >
> >> > I would think you need to keep the refcount here if ret == 0, so that
> >> > the device cannot be removed while assigned to a domain?
> >> 
> >> It looks like we are perceiving the function of the refcnt in different
> >> ways. For me, this is a mechanism to guarantee that if we have a valid
> >> pointer to an object, this object will not disappear under our
> >> feet. This is the main function of krefs in the Linux kernel: if your
> >> code holds a reference to an object, you can be sure that this object
> >> exists in memory.
> >> 
> >> On the other hand, it seems that you are considering this refcnt as a
> >> usage counter for the actual PCI device, not the "struct pdev" that
> >> represents it. Those are two related things, but not the same. So, I can
> >> see why you are suggesting taking an additional reference there. But for
> >> me, this looks unnecessary: the very first reference is obtained in
> >> pci_add_device() and there is the corresponding function
> >> pci_remove_device() that will drop it. So, for me, if the admin wants to
> >> remove a PCI device which is assigned to a domain, they can do so just
> >> as they were able to prior to these patches.
> >
> > This is all fine, but needs to be stated in the commit message.
> >
> 
> Sure, I will add this.
> 
> >> The main value of introducing the refcnt is being able to access pdev
> >> objects without holding the global pcidevs_lock(). This does not mean
> >> that you don't need locking at all. But this allows you to use
> >> pdev->lock (which does not exist in this series, but was introduced in
> >> an RFC earlier), or vpci->lock, or any other subsystem->lock.
> >
> > I guess I was missing this other bit about introducing a
> > per-device lock; would it be possible to bundle all this together into
> > a single patch series?
> 
> As I said at the top of this email, it was tried. You can check the RFC at [1].
> 
> >
> > It would be good to place this change together with any other locking
> > related change that you have pending.
> 
> Honestly, my main goal is to fix the current issues with vPCI, so ARM
> can move forward on adding PCI support for the platform. So, I am
> focusing on this right now.

Thanks, we need to be careful however not to accumulate more band-aids
on top just to work around the fact that the locking we have around the
pci devices is not suitable.

I think it's important to keep all the usages of the pci_dev struct in
mind when designing a solution.

Overall it seems like it might help vPCI on Arm.  I think the only major
request I have is the one related to pci_remove_device() only
returning success when there are no refcounts left.

Thanks, Roger.
