
[Xen-changelog] [xen-unstable] amd-iommu: obtain page_alloc_lock before traversing a domain's page list



# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1233313986 0
# Node ID 2d70ad9c3bc7546e8bd53f55c5f0d05c5852a8a1
# Parent  162cdb596b9a7e49994b9305f34fadf92cfb3933
amd-iommu: obtain page_alloc_lock before traversing a domain's page list

From all I can tell, this doesn't violate lock ordering, as other
places call heap allocation functions from inside hd->mapping_lock.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
---
 xen/drivers/passthrough/amd/iommu_map.c |    5 +++++
 1 files changed, 5 insertions(+)

diff -r 162cdb596b9a -r 2d70ad9c3bc7 xen/drivers/passthrough/amd/iommu_map.c
--- a/xen/drivers/passthrough/amd/iommu_map.c   Fri Jan 30 11:10:43 2009 +0000
+++ b/xen/drivers/passthrough/amd/iommu_map.c   Fri Jan 30 11:13:06 2009 +0000
@@ -567,6 +567,8 @@ int amd_iommu_sync_p2m(struct domain *d)
     if ( hd->p2m_synchronized )
         goto out;
 
+    spin_lock(&d->page_alloc_lock);
+
     page_list_for_each ( page, &d->page_list )
     {
         mfn = page_to_mfn(page);
@@ -579,6 +581,7 @@ int amd_iommu_sync_p2m(struct domain *d)
 
         if ( iommu_l2e == 0 )
         {
+            spin_unlock(&d->page_alloc_lock);
             amd_iov_error("Invalid IO pagetable entry gfn = %lx\n", gfn);
             spin_unlock_irqrestore(&hd->mapping_lock, flags);
             return -EFAULT;
@@ -586,6 +589,8 @@ int amd_iommu_sync_p2m(struct domain *d)
 
         set_iommu_l1e_present(iommu_l2e, gfn, (u64)mfn << PAGE_SHIFT, iw, ir);
     }
+
+    spin_unlock(&d->page_alloc_lock);
 
     hd->p2m_synchronized = 1;
 

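For context, here is a compact sketch of the locking structure that
amd_iommu_sync_p2m() ends up with after this patch: hd->mapping_lock is
taken first (IRQ-safe), d->page_alloc_lock is nested inside it for the
duration of the page-list walk, and the error path drops both locks in
reverse order before returning.  This is only an illustration of the
pattern under stated assumptions, not the exact Xen source:
lookup_iommu_l2e() and set_one_iommu_l1e() are hypothetical stand-ins
for the real mapping helpers, and the accessors used to obtain hd and
the gfn are assumed.

/* Sketch of the post-patch locking pattern in amd_iommu_sync_p2m().
 * Xen types and iterators (struct domain, struct page_info,
 * page_list_for_each) are assumed to be in scope; lookup_iommu_l2e()
 * and set_one_iommu_l1e() are hypothetical placeholders. */
static int sync_p2m_sketch(struct domain *d)
{
    struct hvm_iommu *hd = domain_hvm_iommu(d);  /* assumed accessor */
    struct page_info *page;
    unsigned long flags, mfn, gfn;
    u64 iommu_l2e;

    spin_lock_irqsave(&hd->mapping_lock, flags);

    if ( hd->p2m_synchronized )
        goto out;

    /* The heap allocator may add or remove pages on d->page_list,
     * so the traversal must hold d->page_alloc_lock. */
    spin_lock(&d->page_alloc_lock);

    page_list_for_each ( page, &d->page_list )
    {
        mfn = page_to_mfn(page);
        gfn = get_gpfn_from_mfn(mfn);            /* assumed accessor */
        iommu_l2e = lookup_iommu_l2e(hd, gfn);   /* hypothetical */

        if ( iommu_l2e == 0 )
        {
            /* Release in reverse order of acquisition on the error path. */
            spin_unlock(&d->page_alloc_lock);
            amd_iov_error("Invalid IO pagetable entry gfn = %lx\n", gfn);
            spin_unlock_irqrestore(&hd->mapping_lock, flags);
            return -EFAULT;
        }

        set_one_iommu_l1e(iommu_l2e, gfn, mfn);  /* hypothetical */
    }

    spin_unlock(&d->page_alloc_lock);

    hd->p2m_synchronized = 1;

 out:
    spin_unlock_irqrestore(&hd->mapping_lock, flags);
    return 0;
}

The key point is the nesting order: d->page_alloc_lock is always taken
inside hd->mapping_lock, which matches the commit message's observation
that heap allocation (which itself takes d->page_alloc_lock) already
happens under hd->mapping_lock elsewhere.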