
Re: [Xen-devel] [PATCH] xen: arm: invalidate caches after map_domain_page done

On 01/08/14 11:50, Andrii Tseglytskyi wrote:
On Fri, Aug 1, 2014 at 1:28 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:

On 01/08/14 11:01, Andrii Tseglytskyi wrote:

On Fri, Aug 1, 2014 at 12:23 PM, Julien Grall <julien.grall@xxxxxxxxxx>

Hi Andrii,

On 01/08/14 08:25, Andrii Tseglytskyi wrote:

In some cases, a memory page returned by map_domain_page() contains
invalid data. The issue is observed when map_domain_page() is used
immediately after the p2m_lookup() function, when a random page of
guest domain memory needs to be mapped into Xen. Data on this
freshly mapped page may not be valid. The issue is fixed by
invalidating the caches after the mapping is done.

Signed-off-by: Andrii Tseglytskyi <andrii.tseglytskyi@xxxxxxxxxxxxxxx>
    xen/arch/arm/mm.c |    2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 0a243b0..085780a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -304,7 +304,7 @@ void *map_domain_page(unsigned long mfn)
      * We may not have flushed this specific subpage at map time,
      * since we only flush the 4k page not the superpage
      */
-    flush_xen_data_tlb_range_va_local(va, PAGE_SIZE);

Why did you remove the TLB flush? It's required to make sure the VA
will point to the right PA.

+    clean_and_invalidate_xen_dcache_va_range((void *)va, PAGE_SIZE);

This is not the right approach; map_domain_page is heavily used to map
hypercall data pages. Those pages must reside in normal inner-cacheable

What if map_domain_page() is used outside a hypercall?

In general we require the cache to be enabled for any communication with
Xen. Until now, we didn't have any case where map_domain_page is used
outside this
memory. So cleaning the cache is useless and time consuming.

I see that without cache invalidation the page contains invalid data.
Let me explain in more detail:
As you know, I resumed my work on the remoteproc MMU driver and
started fixing review comments. One of your comments was that the
ioremap_nocache() API should not be used for mapping domain pages,
so I tried using map_domain_page(), and this works fine for me
only if I invalidate the caches of the already mapped va.

I compared page dumps - the first page was mapped with the
ioremap_nocache() function, the second page was mapped with the
map_domain_page() function, and I got the following output:

(XEN) SGX_L2_MMU: pte_table[0] 0x9d428019 tmp[0] 0x9d428019
(XEN) SGX_L2_MMU: pte_table[1] 0x9d429019 tmp[1] 0x9d429019
(XEN) SGX_L2_MMU: pte_table[2] 0x9d42e019 tmp[2] 0x9d42e019
(XEN) SGX_L2_MMU: pte_table[3] 0x9d42f019 tmp[3] 0x00000000  <-- data is not valid here

The pte_table pointer is mapped using ioremap_nocache(), and tmp is
mapped using map_domain_page().
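The comparison being described can be sketched as follows (SGX_L2_MMU, pte_table and tmp are names taken from the log above; how maddr/mfn are obtained, presumably via p2m_lookup(), is assumed from the thread and not shown):

```c
/* Sketch of the debug comparison, not code from the patch.
 * 'maddr' and 'mfn' refer to the same guest page; their derivation
 * (p2m_lookup()) is assumed from the thread. */
uint32_t *pte_table = ioremap_nocache(maddr, PAGE_SIZE);  /* uncached view */
uint32_t *tmp = map_domain_page(mfn);                     /* cached view */
int i;

for ( i = 0; i < 4; i++ )
    printk("SGX_L2_MMU: pte_table[%d] 0x%08x tmp[%d] 0x%08x\n",
           i, pte_table[i], i, tmp[i]);
```

With the cached view, tmp[3] reads back 0x00000000 while the uncached view shows 0x9d42f019, which is the mismatch reported above.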

If you want to clean and invalidate the cache, even though I don't
think this is right judging by the commit message, you have to
introduce a new

Taking into account your previous comment - that map_domain_page() is
used for mapping hypercall data pages - it looks like I can't use this
API as is. In my code I solved this by calling
clean_and_invalidate_xen_dcache_va_range() immediately after the page
is mapped. This works fine for me - no invalid data is observed.
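Sketched out, the workaround described here amounts to the following (a sketch of the approach, not a proposed patch; mfn is assumed to come from p2m_lookup() as earlier in the thread):

```c
/* Workaround from this thread, applied at the call site rather than
 * inside map_domain_page() itself. */
void *page = map_domain_page(mfn);   /* mfn from p2m_lookup() */
clean_and_invalidate_xen_dcache_va_range(page, PAGE_SIZE);
/* ... access the guest page through 'page' ... */
unmap_domain_page(page);
```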

What is the attribute of this page in the guest? Non-cacheable?

It is domain heap memory. I expect it should be cacheable. How can I
verify this?

I'm lost...

The page you are trying to map belongs to a guest, right? When the guest writes data to this page, does it map it with a cacheable attribute or not? I suspect not.

I think your current problem is that the cache contains stale data. In this case you only have to invalidate the cache.
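A minimal sketch of what an invalidate-only range operation could look like on ARMv7. This helper is hypothetical - at this point the tree only provides clean_and_invalidate_xen_dcache_va_range() - and cacheline_bytes is assumed to be known (Xen derives it from the CTR register):

```c
/* Hypothetical sketch, not an existing Xen helper. DCIMVAC
 * (invalidate D-cache line by MVA to point of coherency) drops stale
 * lines without writing them back over the guest's data, unlike the
 * clean-and-invalidate variant. */
static void invalidate_xen_dcache_va_range(const void *p, unsigned long size)
{
    const void *end = (const char *)p + size;

    asm volatile ("dsb" : : : "memory");          /* order prior accesses */
    for ( ; p < end; p = (const char *)p + cacheline_bytes )
        asm volatile ("mcr p15, 0, %0, c7, c6, 1" /* DCIMVAC */
                      : : "r" (p) : "memory");
    asm volatile ("dsb" : : : "memory");          /* complete maintenance */
}
```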

Julien Grall

Xen-devel mailing list
