Re: [Xen-devel] [RFC PATCH 0/7] Intel Cache Monitoring: Current Status and Future Opportunities



On 04/04/2015 03:14, Dario Faggioli wrote:
> Hi Everyone,
>
> This RFC series is the outcome of an investigation I've been doing about
> whether we can take better advantage of features like Intel CMT (and of PSR
> features in general). By "take better advantage of" them I mean, for example,
> use the data obtained from monitoring within the scheduler and/or within
> libxl's automatic NUMA placement algorithm, or similar.
>
> I'm putting here in the cover letter a markdown document I wrote to better
> describe my findings and ideas (sorry if it's a bit long! :-D). You can also
> fetch it at the following links:
>
>  * http://xenbits.xen.org/people/dariof/CMT-in-scheduling.pdf
>  * http://xenbits.xen.org/people/dariof/CMT-in-scheduling.markdown
>
> See the document itself and the changelog of the various patches for details.
>
> The series includes one Chao's patch on top, as I found it convenient to build
> on top of it. The series itself is available here:
>
>   git://xenbits.xen.org/people/dariof/xen.git  wip/sched/icachemon
>   
> http://xenbits.xen.org/gitweb/?p=people/dariof/xen.git;a=shortlog;h=refs/heads/wip/sched/icachemon
>
> Thanks a lot to everyone that will read and reply! :-)
>
> Regards,
> Dario
> ---

There seem to be several areas of confusion in your document.  I am
unsure whether this is a side effect of the way you have written it, but
here are (hopefully) some words of clarification.  To the best of my
knowledge:

PSR CMT works by tagging cache lines with the currently-active RMID. 
The cache utilisation is a count of the number of lines which are tagged
with a specific RMID.  MBM on the other hand counts the number of cache
line fills and cache line evictions tagged with a specific RMID.

By its nature, the information will never reveal the exact state of
play.  e.g. a core with RMID A which gets a cache line hit against a
line currently tagged with RMID B will not alter any accounting. 
Furthermore, as alterations of the RMID only occur in
__context_switch(), Xen actions such as handling an interrupt will be
accounted against the currently active domain (or other future
granularity of RMID).

"max_rmid" is a per-socket property.  There is no requirement for it to
be the same for each socket in a system, although it is likely, given a
homogeneous system.  The limit on RMID is based on the size of the
accounting table.

As far as MSRs themselves go, an extra MSR write in the context switch
path is likely to pale into the noise.  However, querying the data is an
indirect MSR read (write to the event select MSR, read from the data
MSR).  Furthermore, there is no way to atomically read all data at once,
which means that activity on other cores can interleave with
back-to-back reads in the scheduler.


As far as the plans here go, I have some concerns.  PSR is only
available on server platforms, which will be 2/4 socket systems with
large numbers of cores.  As you have discovered, there are insufficient
RMIDs for the pcpus on a redbrick, and on a system that size XenServer
typically runs around 7x as many vcpus as pcpus.

I think it is unrealistic to expect to use any scheduler scheme which is
per-pcpu or per-vcpu while the RMID limit is as small as it is. 
Depending on workload, even a per-domain scheme might be problematic. 
One of our tests involves running 500xWin7 VMs on that particular box.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

