Re: [Xen-devel] [PATCHv9 3/4] gnttab: make the grant table lock a read-write lock
On 21/05/15 14:36, David Vrabel wrote:
> On 21/05/15 11:32, Jan Beulich wrote:
>>>>> On 20.05.15 at 17:54, <david.vrabel@xxxxxxxxxx> wrote:
>>> @@ -842,8 +845,6 @@ __gnttab_map_grant_ref(
>>>      mt->ref = op->ref;
>>>      mt->flags = op->flags;
>>>
>>> -    double_gt_unlock(lgt, rgt);
>>
>> Don't the mt-> updates above need some kind of protection?
>
> It depends:
>
> If not gnttab_need_iommu_mapping(), we only need a write barrier
> before the mt->flags store.
>
> If gnttab_need_iommu_mapping() then a lock is required to prevent racing
> with concurrent mapcount() calls.  This is not a path I care about, so
> it's easiest to just keep the double lock around this.

Like so:

@@ -806,12 +806,13 @@ __gnttab_map_grant_ref(
         goto undo_out;
     }

-    double_gt_lock(lgt, rgt);
-
     if ( gnttab_need_iommu_mapping(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
+
+        double_gt_lock(lgt, rgt);
+
         /* We're not translated, so we know that gmfns and mfns are the same
            things, so the IOMMU entry is always 1-to-1. */
         mapcount(lgt, rd, frame, &wrc, &rdc);
@@ -837,12 +838,22 @@ __gnttab_map_grant_ref(

     TRACE_1D(TRC_MEM_PAGE_GRANT_MAP, op->dom);

+    /*
+     * All maptrack entry users check mt->flags first before using the
+     * other fields so just ensure the flags field is stored last.
+     *
+     * However, if gnttab_need_iommu_mapping() then this would race
+     * with a concurrent mapcount() call (on an unmap, for example)
+     * and a lock is required.
+     */
     mt = &maptrack_entry(lgt, handle);
     mt->domid = op->dom;
     mt->ref = op->ref;
-    mt->flags = op->flags;
+    wmb();
+    write_atomic(&mt->flags, op->flags);

-    double_gt_unlock(lgt, rgt);
+    if ( gnttab_need_iommu_mapping(ld) )
+        double_gt_unlock(lgt, rgt);

     op->dev_bus_addr = (u64)frame << PAGE_SHIFT;
     op->handle = handle;

David
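[For anyone unfamiliar with the "store flags last" publication idiom the new
comment describes, below is a minimal stand-alone C sketch of the same
pattern.  It substitutes C11 release/acquire atomics for Xen's
wmb()/write_atomic() and the matching read side; the struct layout, the
GNTMAP_ACTIVE flag value, and the publish_entry()/read_entry() helpers are
illustrative assumptions, not Xen's actual definitions.]

/*
 * Illustrative sketch only -- not Xen code.  The writer fills in the
 * other fields, then publishes the entry by storing flags last; readers
 * check flags first and only then trust the other fields.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define GNTMAP_ACTIVE 0x1        /* hypothetical "entry in use" flag */

struct maptrack_entry {
    uint32_t domid;
    uint32_t ref;
    _Atomic uint32_t flags;      /* readers check this field first */
};

static void publish_entry(struct maptrack_entry *mt,
                          uint32_t domid, uint32_t ref, uint32_t flags)
{
    mt->domid = domid;
    mt->ref = ref;
    /*
     * Release ordering makes the domid/ref stores visible before the
     * flags store -- the C11 analogue of the patch's
     * "wmb(); write_atomic(&mt->flags, op->flags);".
     */
    atomic_store_explicit(&mt->flags, flags, memory_order_release);
}

static int read_entry(struct maptrack_entry *mt,
                      uint32_t *domid, uint32_t *ref)
{
    uint32_t flags = atomic_load_explicit(&mt->flags,
                                          memory_order_acquire);

    if ( !(flags & GNTMAP_ACTIVE) )
        return 0;                /* entry not (yet) published */

    /* The acquire load above pairs with the writer's release store. */
    *domid = mt->domid;
    *ref = mt->ref;
    return 1;
}

int main(void)
{
    struct maptrack_entry mt = { 0 };
    uint32_t d, r;

    publish_entry(&mt, 7, 42, GNTMAP_ACTIVE);
    if ( read_entry(&mt, &d, &r) )
        printf("domid=%u ref=%u\n", d, r);
    return 0;
}

[Note this ordering only protects readers from seeing a half-initialised
entry; as the thread points out, the gnttab_need_iommu_mapping() path still
needs the double lock because a concurrent mapcount() must see a consistent
view across entries, which a per-field barrier cannot provide.]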