
Re: [Xen-devel] [PATCH v4] boot allocator: Use arch helper for virt_to_mfn on DIRECTMAP



Hi Vijay,

On 28/03/17 13:35, vijay.kilari@xxxxxxxxx wrote:
From: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxx>

On ARM64, virt_to_mfn uses the hardware for address
translation. So if the virtual address is not mapped translation
fault is raised. On ARM64, DIRECTMAP_VIRT region is direct mapped.

You are stating the obvious: a DIRECTMAP_VIRT region is, as the name says, direct mapped. What matters is that all the RAM is mapped in Xen on ARM64.


On ARM platforms with NUMA, While initializing second memory node,

s/While/while/

panic is triggered from init_node_heap() when virt_to_mfn()
is called for DIRECTMAP_VIRT region address.
Here the check is made to ensure that MFN less than max MFN mapped.

"The check is here to know whether the MFN is part of the direct mapping".

The max MFN is found by calling virt_to_mfn of DIRECTMAP_VIRT_END
region.

DIRECTMAP_VIRT_END is the end of the region, not a region.

Since DIRECMAP_VIRT region is not mapped to any virtual address

s/DIRECMAP_VIRT/DIRECTMAP_VIRT/

on ARM, it fails.

In this patch, instead of calling virt_to_mfn(), arch helper
arch_mfn_in_directmap() is introduced. On ARM64 this arch helper
will return true, whereas on ARM DIRECTMAP_VIRT region is not directly mapped
only xenheap region is directly mapped.

As said before, there is no DIRECTMAP_VIRT region on ARM. Not all of the RAM is mapped in Xen; only the xenheap is.

So on ARM return false always.

I am OK if you always return false on ARM, but you need to explain why you don't simply return is_xen_heap_mfn(...);
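For reference, the alternative hinted at here would presumably be along these lines (a sketch only, not what the patch currently does):

    /* Sketch of the alternative ARM helper mentioned above: treat an MFN as
     * part of the direct mapping only when it belongs to the xenheap, since
     * the xenheap is the only RAM that is direct mapped on ARM32. */
    static inline bool arch_mfn_in_directmap(unsigned long mfn)
    {
        return is_xen_heap_mfn(mfn);
    }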

For x86 this helper does virt_to_mfn.

Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@xxxxxxxxxx>
---
 xen/common/page_alloc.c        |  7 ++-----
 xen/include/asm-arm/arm32/mm.h | 20 ++++++++++++++++++++
 xen/include/asm-arm/arm64/mm.h | 20 ++++++++++++++++++++
 xen/include/asm-arm/mm.h       |  8 ++++++++
 xen/include/asm-x86/mm.h       | 11 +++++++++++
 5 files changed, 61 insertions(+), 5 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 42c20cb..c4ffb31 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -520,9 +520,6 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
     unsigned long needed = (sizeof(**_heap) +
                             sizeof(**avail) * NR_ZONES +
                             PAGE_SIZE - 1) >> PAGE_SHIFT;
-#ifdef DIRECTMAP_VIRT_END
-    unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);
-#endif
     int i, j;

     if ( !first_node_initialised )
@@ -534,7 +531,7 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
     }
 #ifdef DIRECTMAP_VIRT_END

Sorry I didn't spot that before. Why do we keep the #ifdef here given that the check is arch specific now?
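If it helps to visualise the question: with arch_mfn_in_directmap() now provided by every architecture and DIRECTMAP_VIRT_END no longer referenced here, the simplification would presumably look roughly like this (a rough sketch built only from the quoted hunks, assuming the matching #endif sits right after the second branch):

    if ( !first_node_initialised )
    {
        /* ... unchanged ... */
    }
    else if ( *use_tail && nr >= needed &&
              arch_mfn_in_directmap(mfn + nr) &&
              (!xenheap_bits ||
               !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
    {
        /* ... unchanged ... */
    }
    else if ( nr >= needed &&
              arch_mfn_in_directmap(mfn + needed) &&
              (!xenheap_bits ||
               !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
    {
        /* ... unchanged ... */
    }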

     else if ( *use_tail && nr >= needed &&
-              (mfn + nr) <= (virt_to_mfn(eva - 1) + 1) &&
+              arch_mfn_in_directmap(mfn + nr) &&
               (!xenheap_bits ||
                !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
@@ -543,7 +540,7 @@ static unsigned long init_node_heap(int node, unsigned long mfn,
                       PAGE_SIZE - sizeof(**avail) * NR_ZONES;
     }
     else if ( nr >= needed &&
-              (mfn + needed) <= (virt_to_mfn(eva - 1) + 1) &&
+              arch_mfn_in_directmap(mfn + needed) &&
               (!xenheap_bits ||
                !((mfn + needed - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
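(The asm-x86/mm.h hunk is not quoted in this excerpt, but judging from the code removed from init_node_heap() above, i.e. the eva computation and the virt_to_mfn() comparison, the x86 helper presumably looks roughly like the sketch below. This is a reconstruction, not the actual hunk.)

    /* Likely shape of the x86 helper, reconstructed from the removed common
     * code: an MFN is in the direct map if it does not exceed the MFN backing
     * the last byte of the direct-mapped virtual range. */
    static inline bool arch_mfn_in_directmap(unsigned long mfn)
    {
        unsigned long eva = min(DIRECTMAP_VIRT_END, HYPERVISOR_VIRT_END);

        return mfn <= (virt_to_mfn(eva - 1) + 1);
    }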
diff --git a/xen/include/asm-arm/arm32/mm.h b/xen/include/asm-arm/arm32/mm.h
new file mode 100644
index 0000000..e93d9df
--- /dev/null
+++ b/xen/include/asm-arm/arm32/mm.h
@@ -0,0 +1,20 @@
+#ifndef __ARM_ARM32_MM_H__
+#define __ARM_ARM32_MM_H__
+
+/* On ARM only xenheap memory is directly mapped. Hence return false. */

Reading this comment, some people will wonder why you don't check whether the MFN is in the xenheap then. As mentioned above, I am OK if you always return false here, but you need to explain why.

+static inline bool arch_mfn_in_directmap(unsigned long mfn)
+{
+    return false;
+}
+
+#endif /* __ARM_ARM32_MM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/arm64/mm.h b/xen/include/asm-arm/arm64/mm.h
new file mode 100644
index 0000000..36ee9c8
--- /dev/null
+++ b/xen/include/asm-arm/arm64/mm.h
@@ -0,0 +1,20 @@
+#ifndef __ARM_ARM64_MM_H__
+#define __ARM_ARM64_MM_H__
+
+/* On ARM64 DIRECTMAP_VIRT region is directly mapped. Hence return true */

This comment is not correct. The directmap region is always direct mapped, which is quite obvious from the name. What matters is that all the RAM is mapped in Xen.

So I would reword the commit message with:

"On ARM64, all the RAM is currently direct mapped in Xen."

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

