
Re: [Xen-devel] [PATCH 1/2] x86/mm: factor out the code for shattering an l3 PTE





On 07/12/2019 19:37, Andrew Cooper wrote:
On 07/12/2019 19:04, Julien Grall wrote:
Hi Hongyan,

On 06/12/2019 15:53, Hongyan Xia wrote:
map_pages_to_xen and modify_xen_mappings are performing almost exactly
the same operations when shattering an l3 PTE, the only difference
being whether we want to flush.

Signed-off-by: Hongyan Xia <hongyxia@xxxxxxxxxx>
---
   xen/arch/x86/mm.c | 85 ++++++++++++++++++++++-------------------------
   1 file changed, 40 insertions(+), 45 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7d4dd80a85..42aaaa1083 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5151,6 +5151,43 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
                            flush_area_local((const void *)v, f) : \
                            flush_area_all((const void *)v, f))
+/* Shatter an l3 entry and populate l2. If virt is passed in, also do flush. */
+static void shatter_l3e(l3_pgentry_t *pl3e, l2_pgentry_t *l2t,
+        unsigned long virt, bool locking)
+{
+    unsigned int i;
+    l3_pgentry_t ol3e = *pl3e;
+
+    for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
+        l2e_write(l2t + i,
+                  l2e_from_pfn(l3e_get_pfn(ol3e) +
+                               (i << PAGETABLE_ORDER),
+                               l3e_get_flags(ol3e)));

I understand this is just a factor out of the current code, but the
code feels wrong (at least in theory) and wasteful.

You would allocate the L2 table, prepare it and then free it if the L3
entry has _PAGE_PRESENT or _PAGE_PSE cleared.

Also, in theory, there is nothing preventing the l3 flags from being
modified behind your back. So you could end up generating the l2
entries with stale l3 flags.

In a nutshell, it feels to me that the shattering/allocation should
happen with the lock taken. This would also allow you to avoid allocating
the l2 table when you are removing the page.

The existing behaviour is deliberate and most likely wants to remain.

It makes adjustments safe to concurrent modifications, while reducing
the critical section of the lock to only the alteration of the live PTEs.

We don't expect concurrent conflicting updates to these pagetables at
all, but extending the locked region around the memory allocation and
the writing of the new pagetable is a bottleneck to parallel updates of
independent addresses.

I am quite interested to know a bit more about the potential bottlenecks. When I rewrote the Xen PT helpers for Arm, I didn't see many accesses to the Xen PT.

To make a comparison, I see many more updates to the P2M. So I would expect such an optimization to be more of a concern there. Yet we take the lock for the full update. Maybe this was an oversight when the P2M was created?

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

