
Re: [Xen-devel] [PATCH] xen/arm64: Correctly compute the virtual address in maddr_to_virt()



Hi Stefano,

On 24/07/2019 01:17, Stefano Stabellini wrote:
On Thu, 18 Jul 2019, Julien Grall wrote:
With that, the patch 1191156361 "xen/arm: fix mask calculation in
pdx_init_mask" could be re-instated.
---
  xen/arch/arm/mm.c        | 2 ++
  xen/include/asm-arm/mm.h | 6 ++++--
  2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 44258ad89c..e1cdeaaf2f 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -165,6 +165,7 @@ mfn_t xenheap_mfn_end __read_mostly;
  vaddr_t xenheap_virt_end __read_mostly;
  #ifdef CONFIG_ARM_64
  vaddr_t xenheap_virt_start __read_mostly;
+unsigned long xenheap_base_pdx __read_mostly;
  #endif
unsigned long frametable_base_pdx __read_mostly;
@@ -796,6 +797,7 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
      if ( mfn_eq(xenheap_mfn_start, INVALID_MFN) )
      {
          xenheap_mfn_start = _mfn(base_mfn);
+        xenheap_base_pdx = mfn_to_pdx(_mfn(base_mfn));
          xenheap_virt_start = DIRECTMAP_VIRT_START +
              (base_mfn - mfn) * PAGE_SIZE;
      }

I can see that this would work, but wouldn't it be a better fit to set
xenheap_base_pdx in xen/arch/arm/setup.c:setup_mm, where we already set:


     xenheap_virt_end = XENHEAP_VIRT_START + ram_end - ram_start;
     xenheap_mfn_start = maddr_to_mfn(ram_start);
     xenheap_mfn_end = maddr_to_mfn(ram_end);

Or is it too late by then?

Yes, setup_xenheap_mappings() uses __mfn_to_virt(), which will call maddr_to_virt(). So we need to set up xenheap_base_pdx earlier.

TBH, these 3 variables should be set within setup_xenheap_mappings(), as that would make it clearer how they are computed. Actually, xenheap_mfn_start will be overridden; thankfully the new and old values are exactly the same...

I have plans to rewrite the xenheap code, as there are a few problems with it:
 1) Chicken-and-egg problem with alloc_boot_pages(...): we may need to allocate memory while creating the xenheap mapping, but pages are not handed over to the allocator until late. Conversely, if you give a page to the allocator before it is mapped, using it would result in a data abort.
 2) We are mapping all the RAM, including reserved-memory regions marked no-map. This may result in caching problems later on.
 3) We are using 1GB mappings; however, if a RAM bank is smaller than 1GB, the mapping will end up covering non-RAM. With bad luck, this may cover device memory, leading to interesting results. AFAIK, the RPi4 has this exact use case.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

