
Re: [Xen-devel] [RFC] xen/arm: Handling cache maintenance instructions by set/way



On 12/07/2017 04:58 PM, Marc Zyngier wrote:
> On 07/12/17 16:44, George Dunlap wrote:
>> On 12/07/2017 04:04 PM, Julien Grall wrote:
>>> Hi Jan,
>>>
>>> On 07/12/17 15:45, Jan Beulich wrote:
>>>>>>> On 07.12.17 at 15:53, <marc.zyngier@xxxxxxx> wrote:
>>>>> On 07/12/17 13:52, Julien Grall wrote:
>>>>> There is exactly one case where set/way makes sense, and that's when
>>>>> you're the only CPU left in the system, your MMU is off, and you're
>>>>> about to go down.
>>>>
>>>> With this and ...
>>>>
>>>>> On top of bypassing the coherency, S/W CMOs do not prevent lines from
>>>>> migrating from one CPU to another. So you could happily be flushing by
>>>>> S/W, and still end up with dirty lines in your cache. Success!
>>>>
>>>> ... this I wonder what value emulating those insns then has in the first
>>>> place. Can't you as well simply skip and ignore them, with the same
>>>> (bad) result?
>>>
>>> The result will be much, much worse. Here is a concrete example with
>>> 32-bit Linux on Arm:
>>>
>>>     1) Cache enabled
>>>     2) Decompress
>>>     3) Nuke cache (S/W)
>>>     4) Cache off
>>>     5) Access new kernel
>>>
>>> If you skip #3, the decompressed data may not have reached memory, so
>>> you would access stale data.
>>>
>>> This would effectively mean we don't support Linux Arm 32-bit.
>>
>> So Marc said that #3 "doesn't make sense", since although it might be
>> the only CPU left in the system, you're not "about to go down"; but
>> 32-bit Linux is doing that anyway.
> 
> "Doesn't make sense" on an ARMv7+ with SMP. That code dates back to
> ARMv4, and has been left untouched ever since. "If it ain't broke..."
> 
>> It sounds like from the slides the purpose of #3 might be to get stuff
>> out of the D-cache into the I-cache.  But why is the cache turned off?
> 
> Linux mandates that the kernel is entered with the MMU off. Which has
> the effect of disabling the caches too (VIVT caches and all that jazz).
> 
>> And why doesn't Linux use the VA-based flushes rather than the S/W flushes?
> 
> Linux/arm64 does. Changing the 32bit port to use VA CMOs would probably
> break stuff from the late 90s, so that's not going to happen. These
> days, I tend to pick my battles... ;-)

OK, so let me try to state this "forwards" for those of us not familiar
with the situation:

1. Linux expects to start in 'linear' mode, with the MMU disabled.

2. On ARM, disabling the MMU disables caching (!).  But disabling
caching doesn't flush the cache; it just means the cache is bypassed (!).

3. Which means for Linux on ARM, after unzipping the kernel image, you
need to flush the cache before disabling the MMU and starting Linux
proper.

4. For historical reasons, 32-bit ARM Linux uses the S/W instructions to
flush the cache.  This still works on 32-bit hardware, and so the Linux
maintainers are loath to change it, even though more reliable VA-based
instructions are available (?).

5. For 64-bit hardware, the S/W instructions don't affect the L3 cache
[1] (?!).  So for a 32-bit guest on a 64-bit host, the above is entirely
broken.

6. Rather than fix this in Linux, KVM has added a work-around in which
the *hypervisor* flushes the caches at certain points (!!!).  Julien is
looking into doing the same with Xen.

Is that about right?

Given the variety of hardware that Linux has to run on, it's hard to
understand why 1) 32-bit ARM Linux couldn't detect whether it would be
appropriate to use VA-based instructions rather than S/W instructions,
or 2) there couldn't at least be a Kconfig option to use VA instructions
instead of S/W instructions.
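As an aside, the hazard Marc describes (a dirty line migrating to
another CPU, so a set/way walk on the flushing CPU never sees it, while
VA-based maintenance is broadcast) can be sketched as a tiny toy model.
This is purely illustrative: dicts stand in for caches and RAM, and
none of the names (cpu0, cpu1, flush_* helpers) correspond to real
hardware or kernel interfaces.

```python
# Toy model: S/W maintenance only walks the flushing CPU's own cache,
# so a dirty line that migrated to another CPU escapes it; VA-based
# maintenance reaches every cache in the system.

def flush_by_set_way(cache, memory):
    """Clean+invalidate: write back *this* CPU's dirty lines, then drop them."""
    memory.update(cache)   # write back whatever dirty data this CPU holds
    cache.clear()

def flush_by_va(all_caches, memory, addr):
    """VA-based CMOs are broadcast: every cache writes back and drops the line."""
    for cache in all_caches:
        if addr in cache:
            memory[addr] = cache.pop(addr)

memory = {0x1000: "compressed"}    # stale copy still in RAM
cpu0 = {0x1000: "decompressed"}    # dirty line: new data lives only in cache
cpu1 = {}

# The dirty line migrates to CPU1 (e.g. snooped) before the flush runs.
cpu1[0x1000] = cpu0.pop(0x1000)

flush_by_set_way(cpu0, memory)     # CPU0 flushes by set/way...
after_sw = memory[0x1000]          # ...but RAM is still stale ("compressed")

flush_by_va([cpu0, cpu1], memory, 0x1000)
after_va = memory[0x1000]          # now RAM holds "decompressed"

print(after_sw, after_va)
```

The point of the toy is only step ordering: the set/way walk happens on
one CPU's snapshot of its own cache, so coherency traffic can move the
dirty data out from under it, exactly the failure mode quoted above.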

 -George

[1]
https://events.linuxfoundation.org/sites/events/files/slides/slides_10.pdf,
slide 9

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

