[for-4.15][PATCH v3 3/3] xen/iommu: x86: Harden the IOMMU page-table allocator
From: Julien Grall <jgrall@xxxxxxxxxx>

At the moment, we are assuming that only iommu_map() can allocate
IOMMU page-tables.

Given the complexity of the IOMMU framework, it would be sensible to
have a check closer to the IOMMU allocator. This would avoid leaking
IOMMU page-tables again in the future.

iommu_alloc_pgtable() is now checking if the domain is dying before
adding the page to the list. We are relying on &hd->arch.pgtables.lock
to synchronize d->is_dying.

Take the opportunity to add an ASSERT() in arch_iommu_domain_destroy()
to check if we freed all the IOMMU page tables.

Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>

---
    Changes in v3:
        - Rename the patch. This was originally "xen/iommu: x86: Don't
          leak the IOMMU page-tables"
        - Rework the commit message
        - Move the patch towards the end of the series

    Changes in v2:
        - Rework the approach
        - Move the patch earlier in the series
---
 xen/drivers/passthrough/x86/iommu.c | 33 ++++++++++++++++++++++++++++-
 1 file changed, 32 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index faa0078db595..a67075f0045d 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -149,6 +149,13 @@ int arch_iommu_domain_init(struct domain *d)
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    /*
+     * There should be no page-tables left allocated by the time the
+     * domain is destroyed. Note that arch_iommu_domain_destroy() is
+     * called unconditionally, so pgtables may be uninitialized.
+     */
+    ASSERT(dom_iommu(d)->platform_ops == NULL ||
+           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -279,6 +286,9 @@ int iommu_free_pgtables(struct domain *d)
      */
     hd->platform_ops->clear_root_pgtable(d);
 
+    /* After this barrier no new page allocations can occur. */
+    spin_barrier(&hd->arch.pgtables.lock);
+
     while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
     {
         free_domheap_page(pg);
@@ -296,6 +306,7 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     unsigned int memflags = 0;
     struct page_info *pg;
     void *p;
+    bool alive = false;
 
 #ifdef CONFIG_NUMA
     if ( hd->node != NUMA_NO_NODE )
@@ -315,9 +326,29 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     unmap_domain_page(p);
 
     spin_lock(&hd->arch.pgtables.lock);
-    page_list_add(pg, &hd->arch.pgtables.list);
+    /*
+     * The IOMMU page-tables are freed when relinquishing the domain, but
+     * nothing prevents allocations from happening afterwards. There is
+     * no valid reason to continue updating the IOMMU page-tables while
+     * the domain is dying.
+     *
+     * So prevent page-table allocation when the domain is dying.
+     *
+     * We rely on &hd->arch.pgtables.lock to synchronize d->is_dying.
+     */
+    if ( likely(!d->is_dying) )
+    {
+        alive = true;
+        page_list_add(pg, &hd->arch.pgtables.list);
+    }
     spin_unlock(&hd->arch.pgtables.lock);
 
+    if ( unlikely(!alive) )
+    {
+        free_domheap_page(pg);
+        pg = NULL;
+    }
+
     return pg;
 }
-- 
2.17.1
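
As a reading aid, here is a small self-contained sketch of the pattern the
patch relies on. It is not Xen code: a pthread mutex stands in for
hd->arch.pgtables.lock, a plain lock/unlock pair stands in for
spin_barrier(), and all names (alloc_pgtable, free_pgtables, pgtables,
is_dying) are illustrative. It is driven single-threaded from main() just
to show the state machine; in Xen the ordering argument comes from checking
d->is_dying only while holding the lock and issuing spin_barrier() after
the domain has been marked dying.

/*
 * Standalone model (not Xen code) of the pattern used by the patch:
 * the allocator only publishes a page while holding the lock and the
 * domain is not dying; teardown runs after the dying flag is set and
 * performs a lock/unlock pair standing in for spin_barrier(), so every
 * racing allocation has either queued its page (and it gets freed by
 * the list walk) or observes the flag and backs off.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct page { struct page *next; };

static pthread_mutex_t pgtables_lock = PTHREAD_MUTEX_INITIALIZER;
static struct page *pgtables;   /* stand-in for hd->arch.pgtables.list */
static bool is_dying;           /* stand-in for d->is_dying */

/* Model of iommu_alloc_pgtable(): refuse to track pages for a dying domain. */
static struct page *alloc_pgtable(void)
{
    struct page *pg = calloc(1, sizeof(*pg));
    bool alive = false;

    if ( !pg )
        return NULL;

    pthread_mutex_lock(&pgtables_lock);
    if ( !is_dying )
    {
        /* Domain still alive: publish the page on the tracking list. */
        alive = true;
        pg->next = pgtables;
        pgtables = pg;
    }
    pthread_mutex_unlock(&pgtables_lock);

    if ( !alive )
    {
        /* Domain is dying: don't hand out (and thus don't leak) the page. */
        free(pg);
        pg = NULL;
    }

    return pg;
}

/* Model of iommu_free_pgtables(): after the barrier, no allocation can
 * still be publishing a page, so walking the list frees everything. */
static void free_pgtables(void)
{
    struct page *pg;

    is_dying = true;            /* in Xen this is set before teardown runs */

    /*
     * Stand-in for spin_barrier(&hd->arch.pgtables.lock): wait out any
     * allocator currently inside its critical section.
     */
    pthread_mutex_lock(&pgtables_lock);
    pthread_mutex_unlock(&pgtables_lock);

    while ( (pg = pgtables) != NULL )
    {
        pgtables = pg->next;
        free(pg);
    }
}

int main(void)
{
    struct page *pg = alloc_pgtable();  /* succeeds: domain still alive */

    (void)pg;
    free_pgtables();                    /* frees the page queued above */

    /* Any later attempt is refused, so nothing can be leaked. */
    return alloc_pgtable() == NULL ? 0 : 1;
}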