
Re: [Xen-devel] [RFC PATCH 0/7] Intel Cache Monitoring: Current Status and Future Opportunities



On 04/07/2015 11:27 AM, Andrew Cooper wrote:
> On 04/04/2015 03:14, Dario Faggioli wrote:
>> Hi Everyone,
>>
>> This RFC series is the outcome of an investigation I've been doing about
>> whether we can take better advantage of features like Intel CMT (and of PSR
>> features in general). By "take better advantage of" them I mean, for example,
>> use the data obtained from monitoring within the scheduler and/or within
>> libxl's automatic NUMA placement algorithm, or similar.
>>
>> I'm putting here in the cover letter a markdown document I wrote to better
>> describe my findings and ideas (sorry if it's a bit long! :-D). You can also
>> fetch it at the following links:
>>
>>  * http://xenbits.xen.org/people/dariof/CMT-in-scheduling.pdf
>>  * http://xenbits.xen.org/people/dariof/CMT-in-scheduling.markdown
>>
>> See the document itself and the changelog of the various patches for details.
>>
>> The series includes one of Chao's patches on top, as I found it
>> convenient to build on top of it. The series itself is available here:
>>
>>   git://xenbits.xen.org/people/dariof/xen.git  wip/sched/icachemon
>>   
>> http://xenbits.xen.org/gitweb/?p=people/dariof/xen.git;a=shortlog;h=refs/heads/wip/sched/icachemon
>>
>> Thanks a lot to everyone that will read and reply! :-)
>>
>> Regards,
>> Dario
>> ---
> 
> There seem to be several areas of confusion indicated in your document. 
> I am unsure whether this is a side effect of the way you have written
> it, but here are (hopefully) some words of clarification.  To the best
> of my knowledge:
> 
> PSR CMT works by tagging cache lines with the currently-active RMID. 
> The cache utilisation is a count of the number of lines which are tagged
> with a specific RMID.  MBM on the other hand counts the number of cache
> line fills and cache line evictions tagged with a specific RMID.

For an actual counter, like MBM, we don't really need different RMIDs*
to implement a per-vcpu counter: we could just read the value on every
context switch, compare it to the last value, and store the accumulated
difference in the vcpu struct.  Having extra RMIDs just makes it easier
-- is that right?

I haven't thought about it in detail, but it seems like an LRU
algorithm for allocating MBM RMIDs might work for that.

* Are they called RMIDs for MBM?  If not, replace "RMID" in this
paragraph with the appropriate term.
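
To make that concrete, here is a very rough, untested sketch of what I
mean (the MSR numbers and the indirect EVTSEL/CTR read sequence are what
I remember from the SDM; the per-vcpu fields and helper are made up for
illustration):

/* MSR numbers and event ID as documented in the Intel SDM. */
#define MSR_IA32_QM_EVTSEL   0x00000c8d
#define MSR_IA32_QM_CTR      0x00000c8e
#define QM_EVT_L3_TOTAL_MBM  0x2   /* total memory bandwidth event */

static uint64_t read_mbm_counter(unsigned int rmid)
{
    uint64_t val;

    /* Indirect read: select (RMID, event), then read the data MSR. */
    wrmsrl(MSR_IA32_QM_EVTSEL, ((uint64_t)rmid << 32) | QM_EVT_L3_TOTAL_MBM);
    rdmsrl(MSR_IA32_QM_CTR, val);

    return val;   /* Error/Unavailable bits ignored in this sketch */
}

/* Called at context switch; mbm_last/mbm_total are hypothetical fields. */
static void vcpu_account_mbm(struct vcpu *v)
{
    uint64_t now = read_mbm_counter(v->arch.psr_rmid);

    v->arch.mbm_total += now - v->arch.mbm_last;
    v->arch.mbm_last = now;
}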

For CMT, we could think of setting the RMID as giving the pcpu a
paintbrush with a specific color of paint, with which it paints that
color on the wall (the wall representing the L3 cache).  If we give Red
to Andy and Blue to Dario, then after a while we can look at the red and
blue portions of the wall and know which belongs to which.  But if we
then give the red one to Konrad, we'll never be *really* sure how much
of the red on the wall was put there by Konrad and how much was put
there by Andy.  If Dario is a mad painter just painting over everything,
then within a relatively short period of time we can assume that
whatever red there is belongs to Konrad; but if Dario is more
constrained, Andy's paint may stay there indefinitely.

But what we *can* say, I suppose, is that Konrad's "footprint" is
certainly *less than* the amount of red paint on the wall; and that any
*increase* in the amount of red paint since we gave the brush to Konrad
certainly belongs to him.

So we could probably "bracket" the usage by any given vcpu: if the
original RMID occupancy was O, and the current RMID occupancy is N, then
the actual occupancy is between [N-O] and N.

Hmm, although I guess that's not true either -- a vcpu may still have
occupancy from all previous RMIDs that it's used.

Which makes me wonder -- if we were to use an RMID "recycling" scheme,
one of the best algorithms would probably be to recycle the RMID which
was 1) not in use on another core at the time, and 2) had the lowest
occupancy count.  With 71 RMIDs, it seems fairly likely to me that in
practice at least one of those will be nearly zero at any given time.
Reassigning only low-occupancy RMIDs also minimizes the effect mentioned
above, where a vcpu gets unaccounted occupancy from previously-used RMIDs.
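
As a very rough sketch of what I have in mind (the helper names and the
occupancy-reading function are made up, and a real version would need
locking and would want to avoid re-reading every RMID's occupancy each
time):

/*
 * Pick an RMID to recycle: skip RMIDs currently in use on some pcpu,
 * and among the rest choose the one with the lowest L3 occupancy.
 */
static unsigned int pick_rmid_to_recycle(void)
{
    unsigned int rmid, victim = 0;
    uint64_t victim_occ = ~0ULL;

    for ( rmid = 1; rmid <= max_rmid; rmid++ )
    {
        uint64_t occ;

        if ( rmid_in_use_on_some_cpu(rmid) )    /* condition 1 */
            continue;

        occ = read_l3_occupancy(rmid);          /* condition 2 */
        if ( occ < victim_occ )
        {
            victim_occ = occ;
            victim = rmid;
        }
    }

    return victim;   /* 0 (the reserved "default" RMID) if none found */
}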

What do you think?

> As far as MSRs themselves go, an extra MSR write in the context switch
> path is likely to pale into the noise.  However, querying the data is an
> indirect MSR read (write to the event select MSR, read from  the data
> MSR).  Furthermore there is no way to atomically read all data at once
> which means that activity on other cores can interleave with
> back-to-back reads in the scheduler.

I don't think it's a given that an MSR write will be cheap.  Back when I
was doing my thesis (10 years ago now), logging some performance
counters on context switch (which was just an MSR read) added about 7%
overhead to a kernel build, IIRC.

Processors have changed quite a bit in that time, and we can hope that
Intel has tried to make writing the IDs pretty fast.  But before we
enable anything by default I think we'd want to take a careful look at
the overhead first.
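
Something like the following (a crude, hypothetical microbenchmark; the
PQR_ASSOC layout is what I remember from the SDM, and rdtsc()/wrmsrl()
are assumed to behave the usual way) would at least give a first-order
number for the cost of switching the active RMID:

#define MSR_IA32_PQR_ASSOC  0x00000c8f

/*
 * Roughly time one RMID switch, as would happen on every context
 * switch.  Proper serialisation and averaging over many iterations
 * are glossed over here.
 */
static uint64_t time_rmid_switch(unsigned int rmid)
{
    uint64_t start = rdtsc();

    /* RMID lives in bits 9:0; the CLOS field in the upper bits stays 0. */
    wrmsrl(MSR_IA32_PQR_ASSOC, rmid);

    return rdtsc() - start;
}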

 -George


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel