[RFC PATCH v6 18/43] arm/p2m: Invalidate root page table entries and flush TLB in p2m_flush_table
From: Rose Spangler <Rose.Spangler@xxxxxxxxxxxxxx>
This commit invalidates the root page table entries and flushes the TLB
when the table is flushed. The TLB flush ensures that no stale translations
remain usable after an altp2m view is reset or torn down. Previously, the
code in p2m_flush_table was only used to free p2m pages during domain
teardown. This function will later be used to tear down or reset altp2m
views of a still-running domain, so the page table entries must be properly
invalidated.
Additionally, the p2m_invalidate_root function is split into
p2m_invalidate_root and p2m_invalidate_root_locked. The p2m_flush_table
function already holds the lock, so it calls p2m_invalidate_root_locked
directly, as opposed to the existing callers which don't already hold the
lock.
This is commit 7/12 of the altp2m_init/altp2m_teardown routines phase.
Signed-off-by: Rose Spangler <Rose.Spangler@xxxxxxxxxxxxxx>
Signed-off-by: Sergej Proskurin <proskurin@xxxxxxxxxxxxx>
---
v3: Added a "p2m_flush_tlb" call in "p2m_flush_table". On altp2m reset
in function "altp2m_reset", it is important to flush the TLBs after
clearing the root table pages and before clearing the intermediate
altp2m page tables to prevent illegal access to stale TLB entries on
currently active VCPUs.
v4: Replaced the former use of clear_and_clean_page in p2m_flush_table
by a routine that invalidates every p2m entry atomically. This
avoids inconsistencies on CPUs that continue to use the views that
are to be flushed (e.g., see altp2m_reset).
v6: Introduced this patch. While the code in this patch is mostly new, it
is the same in spirit as the p2m_flush_table additions in the original
patch series, so the relevant comments have been reproduced above.
In the v4/v5 versions of this patch series, this patch was a part of
the previous patch. It has been split out to minimize the number of
functionality changes in the previous patch.
Additionally, the original patch series used a routine here which was
nearly identical to p2m_invalidate_root, which was implemented a few
years after the patch series. Therefore, the existing
p2m_invalidate_root implementation is used here instead.
Also, since the original patch series was posted, p2m_teardown (and by
extension p2m_flush_table, which was extracted from p2m_teardown) has
been made preemptible. As a consequence, introducing a call to
p2m_invalidate_root here means that p2m_invalidate_root and
p2m_tlb_flush_sync are called each time p2m_flush_table is called, even
if a previous call to p2m_flush_table was preempted. This might cause
some additional overhead, as p2m_flush_table will iterate over the root
page tables and flush the TLB before it can return to freeing p2m
pages. I'm not sure if there's a better way of handling this, or if
the overhead here is negligible/acceptable.
I'm not sure how IOMMU interacts with altp2m here. I haven't looked
into it extensively, so I would appreciate some feedback here. I've
just copied over the iommu_use_hap_pt conditional from
p2m_domain_creation_finished, but this is probably not the right
behavior since we probably still need to invalidate the altp2m view
page tables on flush somehow. Is the issue with invalidating root page
tables when using IOMMU only relevant for the hostp2m, or is it also
relevant for the altp2m views?
---
xen/arch/arm/mmu/p2m.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c
index 1d598c66450b..51753bb2c34d 100644
--- a/xen/arch/arm/mmu/p2m.c
+++ b/xen/arch/arm/mmu/p2m.c
@@ -1271,17 +1271,20 @@ void p2m_clear_root_pages(struct p2m_domain *p2m)
  * p2m_invalid_root() should not be called when the P2M is shared with
  * the IOMMU because it will cause IOMMU fault.
  */
-static void p2m_invalidate_root(struct p2m_domain *p2m)
+static void p2m_invalidate_root_locked(struct p2m_domain *p2m)
 {
     unsigned int i;
 
     ASSERT(!iommu_use_hap_pt(p2m->domain));
 
-    p2m_write_lock(p2m);
-
     for ( i = 0; i < P2M_ROOT_PAGES; i++ )
         p2m_invalidate_table(p2m, page_to_mfn(p2m->root + i));
+}
+static void p2m_invalidate_root(struct p2m_domain *p2m)
+{
+    p2m_write_lock(p2m);
+    p2m_invalidate_root_locked(p2m);
 
     p2m_write_unlock(p2m);
 }
@@ -1449,6 +1452,13 @@ int p2m_flush_table(struct p2m_domain *p2m)
     unsigned long count = 0;
     struct page_info *pg;
 
+    /* TODO: How does IOMMU interact with altp2m? */
+    if ( !iommu_use_hap_pt(p2m->domain) )
+    {
+        p2m_invalidate_root_locked(p2m);
+        p2m_tlb_flush_sync(p2m);
+    }
+
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
         p2m_free_page(p2m->domain, pg);
--
2.34.1