
Re: [Xen-devel] [PATCH v5 5/7] VT-d: Refactor iommu_ops .map_page() and unmap_page()



On February 26, 2016 6:11pm, <JBeulich@xxxxxxxx> wrote:
> >>> On 26.02.16 at 10:24, <quan.xu@xxxxxxxxx> wrote:
> > On February 26, 2016 4:15pm, <JBeulich@xxxxxxxx> wrote:
> >> >>> On 26.02.16 at 08:37, <quan.xu@xxxxxxxxx> wrote:
> >> > On February 25, 2016 8:24pm, <JBeulich@xxxxxxxx> wrote:
> >> >> >>> On 25.02.16 at 13:14, <quan.xu@xxxxxxxxx> wrote:
> >> >> > On February 25, 2016 4:59pm, <JBeulich@xxxxxxxx> wrote:
> >> >
> >> >> >> However, the same effect could be achieved by making the lock a
> >> >> >> recursive one, which would then seem the more conventional
> >> >> >> approach (but would require as much code to be touched).
> >> >
> >> > IMO, for v6, I should:
> >> >   (1) replace all of 'spin_lock(&pcidevs_lock)' with
> >> > 'spin_lock_recursive(&pcidevs_lock)',
> >> >   (2) replace all of 'spin_unlock(&pcidevs_lock)' with
> >> > 'spin_unlock_recursive(&pcidevs_lock)', and
> >> >   (3) _not_ touch 'spin_is_locked(&pcidevs_lock)'.
> >> > Then drop the *_locked()-related code from v5 [patch4/7], [patch5/7]
> >> > and [patch6/7], and add a new patch for (1)-(3) above. Right?
> >>
> >> Yes.
> >>
> >> > BTW, e.g. 'd->page_alloc_lock': I found that 'd->page_alloc_lock'
> >> > is acquired via both spin_lock_recursive() and spin_lock().
> >> > _If_ both spin_lock_recursive(&d->page_alloc_lock) and
> >> > spin_lock(&d->page_alloc_lock) were called recursively in the same
> >> > call tree, as below, it might be a bug. Right?
> >> >
> >> >
> >> > {
> >> > ...
> >> >     spin_lock(&d->page_alloc_lock)
> >> >     ...
> >> >        spin_lock_recursive(&d->page_alloc_lock)
> >> >        ...
> >> >        spin_unlock_recursive(&d->page_alloc_lock)  <--- (the lock might now be free)
> >> >     ...
> >> >     spin_unlock(&d->page_alloc_lock)
> >> > ...
> >> > }
> >>
> >> Well, such a use of course would be a bug. But using plain
> >> spin_lock() on a path where recursive acquire can't occur is fine.
> >>
> >> Nevertheless I'd recommend not mixing things for the pcidevs one, as
> >> its uses are scattered around too much for it to be reasonably
> >> possible to prove correctness if done that way.
> >
> >
> > But I do _not_ mix things for the pcidevs one, as I would:
> >   (1) replace ___all___ of 'spin_lock(&pcidevs_lock)' with
> > 'spin_lock_recursive(&pcidevs_lock)', and
> >   (2) replace ___all___ of 'spin_unlock(&pcidevs_lock)' with
> > 'spin_unlock_recursive(&pcidevs_lock)'.
> 
> I am not saying you mix things; I'm just saying there is a risk you do,
> perhaps not even knowingly,

I actually don't.

> e.g. when another patch goes in at about the same time as yours and adds
> another instance. Plus doing what you state above is going to be close to
> unreviewable, since the reviewer would have to check that you indeed
> caught all instances. By removing global visibility of the lock variable,
> both issues are easily avoided, since without catching all cases a build
> error would result.



Let me summarize, to make sure we are on the same page (a code sketch of
the result follows the list below):

 1) Make pcidevs_lock visible only inside xen/drivers/passthrough/pci.c,
    turning the definition of the variable into a static one, and getting
    rid of its declaration in xen/include/xen/pci.h.

 2) Define 3 helpers in xen/include/xen/pci.h:
     *pcidevs_lock()
     *pcidevs_unlock()
     *pcidevs_is_locked()
    Then implement these helpers in xen/drivers/passthrough/pci.c.
    Of course, the helpers will use spin_lock_recursive() and
    spin_unlock_recursive(). This is quite similar to what is done for
    domain_lock().

 3) Use these helpers, replacing
     *spin_lock(&pcidevs_lock) with pcidevs_lock(),
     *spin_unlock(&pcidevs_lock) with pcidevs_unlock(),
     *spin_is_locked(&pcidevs_lock) with pcidevs_is_locked().
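
For concreteness, here is a minimal sketch of what 1)-3) could look like.
The helper bodies and the return type of pcidevs_is_locked() are my
assumption, mirroring how domain_lock() is wrapped; the variable also needs
renaming (e.g. to _pcidevs_lock), since the function pcidevs_lock() would
otherwise clash with it:

    /* xen/include/xen/pci.h: the extern spinlock declaration goes away. */
    void pcidevs_lock(void);
    void pcidevs_unlock(void);
    int pcidevs_is_locked(void);

    /* xen/drivers/passthrough/pci.c */
    static DEFINE_SPINLOCK(_pcidevs_lock);  /* no longer globally visible */

    void pcidevs_lock(void)
    {
        spin_lock_recursive(&_pcidevs_lock);
    }

    void pcidevs_unlock(void)
    {
        spin_unlock_recursive(&_pcidevs_lock);
    }

    int pcidevs_is_locked(void)
    {
        return spin_is_locked(&_pcidevs_lock);
    }

With the variable static, any call site still doing
spin_lock(&pcidevs_lock) directly would fail to build, so no instance can
be missed.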

Right?

Quan


 

