
Re: [PATCH] x86/AMD: also determine L3 cache size



On 16.04.2021 16:21, Andrew Cooper wrote:
> On 16/04/2021 14:20, Jan Beulich wrote:
>> For Intel CPUs we record the L3 cache size, hence we should also do so
>> for AMD and the like.
>>
>> While making these additions, also make sure (throughout the function)
>> that we don't needlessly overwrite prior values when the new value to be
>> stored is zero.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> I have to admit though that I'm not convinced the sole real use of the
>> field (in flush_area_local()) is a good one - flushing an entire L3's
>> worth of lines via CLFLUSH may not be more efficient than using WBINVD.
>> But I didn't measure it (yet).
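
[For reference, a minimal user-space sketch of the kind of lookup the
commit message describes, assuming GCC's <cpuid.h> helper and the field
layout documented in the AMD APM (Fn8000_0006, EDX[31:18] = L3 size in
512-KiB units). This is only an illustration, not the actual Xen code
being patched.]

#include <cpuid.h>
#include <stdio.h>

/* Return the L3 cache size in KiB as reported by CPUID.80000006, or 0. */
static unsigned int amd_l3_size_kb(void)
{
    unsigned int eax, ebx, ecx, edx;

    if ( !__get_cpuid(0x80000006, &eax, &ebx, &ecx, &edx) )
        return 0;                       /* leaf not available */

    return ((edx >> 18) & 0x3fff) * 512;
}

int main(void)
{
    unsigned int l3_kb = amd_l3_size_kb();

    /* Mirror the patch's intent: only record a non-zero value, so a
     * previously stored size is not needlessly overwritten with 0. */
    if ( l3_kb )
        printf("L3 cache: %u KiB\n", l3_kb);
    else
        printf("CPUID.80000006 reports no L3 size\n");

    return 0;
}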
> 
> WBINVD always needs a broadcast IPI to work correctly.
> 
> CLFLUSH and friends let you do this from a single CPU, using cache
> coherency to DTRT with the line, wherever it is.
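
[A minimal sketch of that CLFLUSH-based approach, assuming the SSE2
intrinsics from <immintrin.h> and an assumed 64-byte line size (real code
would read the CLFLUSH line size from CPUID). It just illustrates
flushing a region from one CPU and letting coherency deal with lines
cached elsewhere.]

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64   /* assumed; CPUID leaf 1 EBX[15:8] * 8 gives the real value */

/* Write back and invalidate every cache line backing [addr, addr + size). */
static void flush_region(const void *addr, size_t size)
{
    uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)addr + size;

    _mm_mfence();                        /* order earlier stores before the flushes */
    for ( ; p < end; p += CACHE_LINE )
        _mm_clflush((const void *)p);
    _mm_mfence();                        /* wait until the flushes are done */
}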
> 
> 
> Looking at that logic in flush_area_local(), I don't see how it can be
> correct.  The WBINVD path is a decomposition inside the IPI, but in the
> higher level helpers, I don't see how the "area too big, convert to
> WBINVD" can be safe.

Would you mind giving an example? I'm struggling to understand what
exactly you mean to point out.

Jan

> All users of FLUSH_CACHE are flush_all(), except two PCI
> Passthrough-restricted cases. MMUEXT_FLUSH_CACHE_GLOBAL looks to be
> safe, while vmx_do_resume() has very dubious reasoning, and is dead code,
> I think, because I'm not aware of a VT-x-capable CPU without WBINVD-exiting.
> 
> ~Andrew
> 




 

