Re: [Xen-devel] [PATCH v5 5/7] VT-d: Refactor iommu_ops .map_page() and unmap_page()
On February 25, 2016 8:24pm, <JBeulich@xxxxxxxx> wrote:
> >>> On 25.02.16 at 13:14, <quan.xu@xxxxxxxxx> wrote:
> > On February 25, 2016 4:59pm, <JBeulich@xxxxxxxx> wrote:
> >> However, the same effect could be achieved by making the lock a
> >> recursive one, which would then seem to be the more conventional
> >> approach (but would require as much code to be touched).
IMO, for v6, I should:
(1). replace all uses of 'spin_lock(&pcidevs_lock)' with
'spin_lock_recursive(&pcidevs_lock)',
(2). replace all uses of 'spin_unlock(&pcidevs_lock)' with
'spin_unlock_recursive(&pcidevs_lock)',
(3). _not_ touch 'spin_is_locked(&pcidevs_lock)'.
Then drop the *_locked()-related code from v5 [patch4/7], [patch5/7] and
[patch6/7], and add a new patch implementing (1)-(3) above. Right?
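The mechanical part of (1)/(2) would look something like the hunk below at each affected call site (illustrative only, not taken from an actual patch):

```diff
-    spin_lock(&pcidevs_lock);
+    spin_lock_recursive(&pcidevs_lock);
     ...
-    spin_unlock(&pcidevs_lock);
+    spin_unlock_recursive(&pcidevs_lock);
```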
BTW, take 'd->page_alloc_lock' as an example: I found that it is taken with
both spin_lock_recursive() and plain 'spin_lock()'.
_If_ both spin_lock_recursive(&d->page_alloc_lock) and
'spin_lock(&d->page_alloc_lock)' are nested in the same call tree, as
below, it might be a bug. Right?
{
...
spin_lock()
...
spin_lock_recursive()
...
spin_unlock_recursive() <--- (lock might be now free)
...
spin_unlock()
...
}
-Quan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel