
Re: [Xen-devel] [PATCH ARM v6 07/14] mini-os: arm: boot code



On 30 July 2014 13:54, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Wed, 2014-07-30 at 13:20 +0100, Thomas Leonard wrote:
>
>> > The processor can still reorder writes and things like that, so I think
>> > it is still needed.
>>
>> Ah, I'd assumed it would default everything to Strongly Ordered with
>> the MMU off, but I see it depends on how Xen sets the "Default
>> cacheable" bit.
>
> Even if memory accesses are strongly ordered, that doesn't include
> coprocessor register accesses, so they can still be reordered wrt
> loads/stores.
>
>> >> How does that happen? Presumably the isb below will block that.
>> >
>> > I don't think it will, it's effectively just a pipeline flush but that
>> > doesn't necessarily mean there isn't a write from below already in that
>> > pipeline.
>> >
>> > (I think, I haven't actually gone back to the spec on this one...)
>> >
>> > In Xen we do dsb+isb before and after the TLB flush.
>>
>> I've added these in various places, but no improvement:
>>
>> https://github.com/talex5/xen/commit/39199da8493ad6e235849d7ecd51a1415d7d60a7
>
> Ah, you aren't setting the cacheability of the PT walks, I bet that's
> it. If you don't do that then you do need cache maintenance when writing
> PTEs (or to have caches disabled).
>
> Xen uses LPAE mode and therefore the extended format of TTBCR (called
> HTCR for Xen), which is where the bits controlling the cacheability of
> page table walks are held for LPAE.
>
> In non-LPAE mode it looks like the cacheability bits are in TTBR[01] in
> bits 0..6.
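
(For anyone finding this in the archives: the barrier sequence Ian
describes, modelled on what Xen does around a TLB flush, looks roughly
like the ARMv7 sketch below. TLBIALL is the unified-TLB invalidate;
whether you need the full sequence in your boot path is worth checking
against the ARM ARM rather than taking this as definitive.)

```asm
    mov   r0, #0
    dsb                           @ drain outstanding writes (e.g. PTE stores)
    isb                           @ flush the pipeline
    mcr   p15, 0, r0, c8, c7, 0   @ TLBIALL: invalidate entire unified TLB
    dsb                           @ wait for the invalidate to complete
    isb                           @ refetch using the new translations
```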

You're right - that was it - thanks!
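
For the record, the fix amounted to setting the page-table-walk
cacheability bits in the low bits of TTBR0 before loading it (non-LPAE,
short-descriptor format). A rough sketch follows; `PT_WALK_ATTRS` is a
placeholder of mine, since the exact encoding (IRGN split across bits 6
and 0, RGN in bits 4:3, S in bit 1) is best taken straight from the
TTBR0 description in the ARMv7 ARM:

```asm
    @ Hypothetical sketch: mark PT walks as cacheable before enabling the MMU.
    @ PT_WALK_ATTRS stands in for the IRGN/RGN/S encoding you want (e.g.
    @ inner/outer write-back write-allocate); check the ARMv7 ARM for the
    @ exact bit values.
    ldr   r0, =page_dir           @ physical address of the first-level table
    orr   r0, r0, #PT_WALK_ATTRS  @ walk-cacheability bits live in bits 6:0
    mcr   p15, 0, r0, c2, c0, 0   @ write TTBR0
    dsb
    isb
```

Without those bits set (and with caches on), the hardware walker was
reading stale PTEs, which matches the symptoms above.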


-- 
Dr Thomas Leonard        http://0install.net/
GPG: 9242 9807 C985 3C07 44A6  8B9A AE07 8280 59A5 3CC1
GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

