
Re: [Xen-devel] [RFC PATCH 3/7] xen: psr: reserve an RMID for each core



On Mon, 2015-04-06 at 09:59 -0400, Konrad Rzeszutek Wilk wrote:
> On Sat, Apr 04, 2015 at 04:14:41AM +0200, Dario Faggioli wrote:

> > XXX Another idea I just have is to allow the user to
> >     somehow specify a different 'granularity'. Something
> >     like allowing 'percpu_cmt'|'percore_cmt'|'persocket_cmt'
> >     with the following meaning:
> >      + 'percpu_cmt': as in this patch
> >      + 'percore_cmt': same RMID to hthreads of the same core
> >      + 'persocket_cmt': same RMID to all cores of the same
> >         socket.
> > 
> >     'percore_cmt' would only allow gathering info on a
> >     per-core basis... still better than nothing if we
> >     do not have enough RMIDs for each pCPUs.
> 
> Could we allocate nr_online_cpus() / nr_rmids() and have
> some CPUs share the same RMIDs?
> 
Mmm... I hope we can (see the reply to Chao about the per-socketness
nature of the RMIDs).

I'm not sure what you mean here, though. On the box I have at hand there
are 144 CPUs and 71 RMIDs, so 144/71=2... maybe I'm missing part of
what you mean: how should I use these 2 RMIDs?

If RMIDs actually are per-socket, extending the existing Xen support to
reflect that, and taking advantage of it, would already help a lot. On
such a box, it would mean I could use RMIDs 1-36 on each socket for
per-CPU monitoring, and still have 35 RMIDs free per socket (which
could mean 35x4=140 in total, depending on *how* we extend the support
to match the per-socket nature of RMIDs).

Let's see if that is confirmed... Of course, I can book the box again
here and test it myself (and will do that, if necessary :-D).

Thanks and Regards,
Dario


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

