
Re: [Xen-devel] [RFC PATCH 3/7] xen: psr: reserve an RMID for each core



On Sat, Apr 04, 2015 at 04:14:41AM +0200, Dario Faggioli wrote:
> This allows for a new item to be passed as part of the psr=
> boot option: "percpu_cmt". If that is specified, Xen tries,
> at boot time, to associate an RMID to each core.
> 
> XXX This all looks rather straightforward, if it weren't
>     for the fact that it is, apparently, more common than
>     I thought to run out of RMIDs. For example, on a dev box
>     we have in Cambridge, there are 144 pCPUs and only 71
>     RMIDs.
> 
>     In this preliminary version, nothing particularly smart
>     happens if we run out of RMIDs, we just fail attaching
>     the remaining cores and that's it. In future, I'd
>     probably like to:
>      + check whether the operation has any chance to
>        succeed up front (by comparing the number of pCPUs
>        with the available RMIDs)
>      + on unexpected failure, rollback everything... it
>        seems to make more sense to me than just leaving
>        the system half configured for per-cpu CMT
> 
>     Thoughts?
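The up-front check and rollback described above could be sketched roughly as below. All names here (alloc_rmid, assign_percpu_rmids, the counts) are made up for illustration and are not Xen's actual PSR code or interfaces:

```c
/* Hedged sketch: up-front feasibility check for per-cpu RMID
 * assignment, plus rollback on mid-way failure.  Hypothetical
 * names and a toy allocator, not actual Xen code. */
#include <assert.h>
#include <stdbool.h>

#define NR_CPUS   144
#define NR_RMIDS   71   /* e.g. the Cambridge box from the mail */

static int cpu_rmid[NR_CPUS];   /* per-cpu RMID, if attached */
static int rmids_in_use;

/* Toy allocator: returns an RMID, or -1 if exhausted. */
static int alloc_rmid(void)
{
    return (rmids_in_use < NR_RMIDS) ? rmids_in_use++ : -1;
}

static void free_rmid(int rmid)
{
    (void)rmid;
    rmids_in_use--;
}

/* Fail early if there can't be one RMID per pCPU; on unexpected
 * failure part-way through, roll everything back rather than
 * leaving the system half configured for per-cpu CMT. */
static bool assign_percpu_rmids(int nr_cpus)
{
    int cpu;

    if ( nr_cpus > NR_RMIDS )
        return false;                    /* up-front check */

    for ( cpu = 0; cpu < nr_cpus; cpu++ )
    {
        cpu_rmid[cpu] = alloc_rmid();
        if ( cpu_rmid[cpu] < 0 )
        {
            while ( cpu-- > 0 )          /* rollback */
                free_rmid(cpu_rmid[cpu]);
            return false;
        }
    }
    return true;
}
```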
> 
> XXX Another idea I just have is to allow the user to
>     somehow specify a different 'granularity'. Something
>     like allowing 'percpu_cmt'|'percore_cmt'|'persocket_cmt'
>     with the following meaning:
>      + 'percpu_cmt': as in this patch
>      + 'percore_cmt': same RMID to hthreads of the same core
>      + 'persocket_cmt': same RMID to all cores of the same
>         socket.
> 
>     'percore_cmt' would only allow gathering info on a
>     per-core basis... still better than nothing if we
>     do not have enough RMIDs for each pCPU.
> 
>     'persocket_cmt' would basically only allow tracking the
>     amount of free L3 on each socket (by subtracting the
>     monitored value from the total). Again, still better
>     than nothing, would use very few RMIDs, and I could
>     think of ways of using this information in a few
>     places in the scheduler...
> 
>     Again, thoughts?
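To make the granularity idea concrete, one could map each cpu to a "monitoring domain" and hand out one RMID per domain, so coarser granularities need far fewer RMIDs. The topology constants and helper names below are invented for illustration and are not Xen's actual topology API:

```c
/* Hedged sketch of the proposed granularity option: cpus that
 * should share an RMID map to the same domain id.  Topology
 * numbers (2 hthreads/core, 18 cores/socket) are made up. */
#include <assert.h>

enum cmt_gran { CMT_PERCPU, CMT_PERCORE, CMT_PERSOCKET };

#define THREADS_PER_CORE   2
#define CORES_PER_SOCKET  18

static int cmt_domain(int cpu, enum cmt_gran gran)
{
    switch ( gran )
    {
    case CMT_PERCORE:
        return cpu / THREADS_PER_CORE;
    case CMT_PERSOCKET:
        return cpu / (THREADS_PER_CORE * CORES_PER_SOCKET);
    case CMT_PERCPU:
    default:
        return cpu;
    }
}

/* RMIDs needed for nr_cpus at a given granularity: one per domain. */
static int rmids_needed(int nr_cpus, enum cmt_gran gran)
{
    return cmt_domain(nr_cpus - 1, gran) + 1;
}
```

With these made-up numbers, a 144-pCPU box would need 144 RMIDs for 'percpu_cmt' but only 72 for 'percore_cmt' and 4 for 'persocket_cmt', which is why the coarser modes could fit within the 71 available.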

This can even be extended to the concept of a 'cache monitoring group',
which can hold an arbitrary set of cpus in one group. The Linux
implementation actually does this, using the cgroup mechanism to
allocate an RMID to a group of threads. Such a design can mitigate the
RMID shortage to some extent.
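A minimal sketch of that group idea, with purely illustrative structures (this mirrors the spirit of what is described above, not the Linux cgroup-based implementation itself): one RMID is consumed per group, however many cpus join it.

```c
/* Hedged sketch of a 'cache monitoring group': an RMID is
 * allocated per group, and arbitrary cpus can be attached to a
 * group without consuming further RMIDs.  Illustrative only. */
#include <assert.h>

#define NR_CPUS    144
#define MAX_GROUPS  71   /* bounded by available RMIDs */

struct cmt_group {
    int rmid;            /* one RMID serves the whole group */
};

static struct cmt_group groups[MAX_GROUPS];
static int nr_groups;
static int cpu_group[NR_CPUS];   /* cpu -> group index */

/* Create a group, consuming one RMID (toy allocation). */
static int cmt_group_create(void)
{
    groups[nr_groups].rmid = nr_groups;
    return nr_groups++;
}

/* Attach a cpu to an existing group: no new RMID consumed. */
static void cmt_group_add_cpu(int group, int cpu)
{
    cpu_group[cpu] = group;
}
```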

Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

