
Re: [Xen-devel] xen/arm and swiotlb-xen: possible data corruption



Hi Stefano,

On 03/03/17 00:53, Stefano Stabellini wrote:
On Thu, 2 Mar 2017, Julien Grall wrote:
On 02/03/2017 22:39, Stefano Stabellini wrote:
On Thu, 2 Mar 2017, Julien Grall wrote:
Hi Stefano,

On 02/03/17 19:12, Stefano Stabellini wrote:
On Thu, 2 Mar 2017, Julien Grall wrote:
On 02/03/17 08:53, Edgar E. Iglesias wrote:
On Thu, Mar 02, 2017 at 09:38:37AM +0100, Edgar E. Iglesias wrote:
On Wed, Mar 01, 2017 at 05:05:21PM -0800, Stefano Stabellini wrote:
Julien, from looking at the two diffs, this is simpler and nicer, but if
you look at xen/include/asm-arm/page.h, my patch made
clean_dcache_va_range consistent with invalidate_dcache_va_range. For
consistency, I would prefer to deal with the two functions the same way.
Although it is not a spec requirement, I also think that it is a good
idea to issue cache flushes from cacheline-aligned addresses, like
invalidate_dcache_va_range and Linux do, to make it more obvious what is
going on.
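
For illustration only, a minimal sketch of a clean_dcache_va_range that
issues every flush from a cacheline-aligned address (this is not the actual
patch under discussion; it assumes the cacheline_bytes variable and the
__clean_dcache_one() helper that xen/include/asm-arm/page.h already
provides):

    /* Sketch only, not the patch being discussed. */
    static inline int clean_dcache_va_range(const void *p, unsigned long size)
    {
        const void *end = p + size;

        dsb(sy);           /* So the CPU issues all writes to the range */
        /* Round the start down to a cache-line boundary. */
        p = (void *)((uintptr_t)p & ~((uintptr_t)cacheline_bytes - 1));
        for ( ; p < end; p += cacheline_bytes )
            asm volatile (__clean_dcache_one(0) : : "r" (p));
        dsb(sy);           /* So we know the flushes happen before continuing */

        return 0;
    }

Rounding the start down rather than up means the first, partially covered
cache line is still cleaned.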

invalidate_dcache_va_range is split because the cache instruction differs
for the start and the end when they are unaligned: for those you want to
use clean & invalidate rather than just invalidate.
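
As a rough sketch of that split (again assuming the cacheline_bytes,
__invalidate_dcache_one() and __clean_and_invalidate_dcache_one() helpers
from xen/include/asm-arm/page.h; this is not the exact code in the tree): a
partially covered head or tail line shares its cache line with data outside
the range, so it gets clean & invalidate, while the fully covered middle
only needs invalidate.

    static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
    {
        const void *end = p + size;
        uintptr_t mask = cacheline_bytes - 1;

        dsb(sy);           /* So the CPU issues all writes to the range */

        if ( (uintptr_t)p & mask )          /* partially covered head line */
        {
            p = (void *)((uintptr_t)p & ~mask);
            asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));
            p += cacheline_bytes;
        }

        for ( ; p + cacheline_bytes <= end; p += cacheline_bytes )
            asm volatile (__invalidate_dcache_one(0) : : "r" (p));

        if ( p < end )                      /* partially covered tail line */
            asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));

        dsb(sy);           /* So we know the flushes happen before continuing */

        return 0;
    }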

If you look at the implementation of other cache helpers in Linux (see
dcache_by_line_op in arch/arm64/include/asm/assembler.h), they will only
align start & end.

I don't think so, unless I am reading dcache_by_line_op wrong.

        .macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
        dcache_line_size \tmp1, \tmp2
        add     \size, \kaddr, \size
        sub     \tmp2, \tmp1, #1
        bic     \kaddr, \kaddr, \tmp2
9998:
        .if     (\op == cvau || \op == cvac)
alternative_if_not ARM64_WORKAROUND_CLEAN_CACHE
        dc      \op, \kaddr
alternative_else
        dc      civac, \kaddr
alternative_endif
        .else
        dc      \op, \kaddr
        .endif
        add     \kaddr, \kaddr, \tmp1
        cmp     \kaddr, \size
        b.lo    9998b
        dsb     \domain
        .endm

It only has one cache instruction in the resulting assembly because of the
.if/.else assembly directives.

Yes, but it does not only align start and end: all cache instructions
are called on aligned addresses, right?

I don't think so. The instruction "bic \kaddr, \kaddr, \tmp2" makes sure
the start address is rounded down to a cache-line boundary.

The C version of that assembly is effectively what you wrote in the
previous e-mail, with the start address rounded down to a cache-line
boundary:

   end = p + size;
   p = (void *)((uintptr_t)p & ~((uintptr_t)cacheline_bytes - 1));
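
To make the address arithmetic concrete, here is a small stand-alone
walk-through of what the macro computes (the names and values below are
invented for illustration; only the arithmetic mirrors dcache_by_line_op):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        const uintptr_t line = 64;              /* assumed dcache_line_size result */
        uintptr_t kaddr = 0x1030, size = 0x20;  /* an arbitrary unaligned request */
        unsigned int ops = 0;

        uintptr_t end = kaddr + size;           /* add  \size, \kaddr, \size */
        kaddr &= ~(line - 1);                   /* bic  \kaddr, \kaddr, \tmp2 */

        do {                                    /* 9998: ... b.lo 9998b */
            assert((kaddr & (line - 1)) == 0);  /* every dc op sees an aligned address */
            ops++;
            kaddr += line;                      /* add  \kaddr, \kaddr, \tmp1 */
        } while (kaddr < end);

        assert(ops == 2);   /* lines 0x1000 and 0x1040 cover [0x1030, 0x1050) */
        return 0;
    }

Every dc operation lands on a line-aligned address simply because the start
is rounded down once and the loop then steps by the line size.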

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

