
Re: [Xen-devel] [RFC 0/5] xen/arm: support big.little SoC



Hi Stefano,

On 22/09/2016 18:31, Stefano Stabellini wrote:
On Thu, 22 Sep 2016, Julien Grall wrote:
Hello Peng,

On 22/09/16 10:27, Peng Fan wrote:
On Thu, Sep 22, 2016 at 10:50:23AM +0200, Dario Faggioli wrote:
On Thu, 2016-09-22 at 14:49 +0800, Peng Fan wrote:
On Wed, Sep 21, 2016 at 08:11:43PM +0100, Julien Grall wrote:
A feature like `xl cpupool-biglittle-split' can still be interesting,

"cpupool-cluster-split" maybe a better name?

You seem to assume that a cluster, from the MPIDR point of view, can only contain CPUs of the same kind. I don't think this is part of the architecture, so it may not be true in the future.

Interesting. I also understood that a cluster can only have one kind of CPU. Honestly it would be a little insane for it to be otherwise :-)

I don't think this is insane (or maybe I am insane :)). Clusters usually don't share their L2 cache (assuming L1 is local to each core) and an L3 cache may not be present, so if you move a task from one cluster to another you add latency because the new cluster's L2 cache has to be refilled.
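
To be clear, a "cluster" here is only what the MPIDR affinity fields describe; it says nothing about what kind of core sits at a given position in the topology. A rough sketch (just an illustration assuming the standard AArch64 MPIDR_EL1 layout, not the actual Xen code):

#include <stdint.h>

/* Standard AArch64 MPIDR_EL1 affinity fields:
 * Aff0 bits [7:0], Aff1 [15:8], Aff2 [23:16], Aff3 [39:32]. */
#define MPIDR_AFF0(mpidr)  ((uint8_t)((mpidr) >>  0))
#define MPIDR_AFF1(mpidr)  ((uint8_t)((mpidr) >>  8))
#define MPIDR_AFF2(mpidr)  ((uint8_t)((mpidr) >> 16))
#define MPIDR_AFF3(mpidr)  ((uint8_t)((mpidr) >> 32))

/* On most current SoCs Aff1 is the cluster and Aff0 the core within it
 * (multi-threaded cores shift the levels up by one; ignored here). */
static inline uint32_t cluster_id(uint64_t mpidr)
{
    return ((uint32_t)MPIDR_AFF2(mpidr) << 8) | MPIDR_AFF1(mpidr);
}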

The use case of big.LITTLE is that the big cores are used for short bursts of work and the little cores are used for the rest (e.g. listening to audio, fetching mail...). If you want to reduce latency when switching between big and little CPUs, you may want to put them within the same cluster.

Also, as mentioned in another thread, you may have a platform with the same micro-architecture (e.g. Cortex-A53) but different silicon implementations (e.g. with different frequency or power efficiency). Here the concept of big.LITTLE is more blurred.
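
To illustrate why it is blurred: the micro-architecture is identified by MIDR_EL1 (implementer in bits [31:24], part number in bits [15:4]), which is separate from the topology described by MPIDR_EL1. Two Cortex-A53 implementations with different frequency or power targets report the same part number, so the ID registers alone cannot tell you which set is "big" and which is "little". A rough sketch (hypothetical helper, not Xen code):

#include <stdbool.h>
#include <stdint.h>

/* MIDR_EL1 fields: implementer [31:24], part number [15:4]. */
#define MIDR_IMPLEMENTER(midr)  (((midr) >> 24) & 0xff)
#define MIDR_PARTNUM(midr)      (((midr) >>  4) & 0xfff)

#define ARM_IMPLEMENTER_ARM     0x41   /* 'A' */
#define ARM_PART_CORTEX_A53     0xd03

/* Both a "fast" and a "slow" Cortex-A53 return true here. */
static inline bool is_cortex_a53(uint32_t midr)
{
    return MIDR_IMPLEMENTER(midr) == ARM_IMPLEMENTER_ARM &&
           MIDR_PARTNUM(midr) == ARM_PART_CORTEX_A53;
}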

That's why I am quite reluctant to name the different CPU sets "big" and "little", even if it may be handier for the user.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
