
Re: [Xen-devel] [RFC 0/5] xen/arm: support big.little SoC

On Mon, 2016-09-19 at 14:03 -0700, Stefano Stabellini wrote:
> On Mon, 19 Sep 2016, Dario Faggioli wrote:
> > Setting things up like this, even automatically, either in the
> > hypervisor or the toolstack, is basically already possible (with all
> > the good and bad aspects of pinning, of course).
> > Then, sure (as I said when replying to George), we may want things
> > to be more flexible, and we also probably want to be on the safe
> > side --if ever some component manages to undo our automatic
> > pinning-- wrt the scheduler not picking up work for the wrong
> > architecture... But still I'm a bit surprised this did not come
> > up... Julien, Peng, is that because you think this is not doable
> > for any reason I'm missing?
> Let's suppose that Xen detects big.LITTLE and pins dom0 vcpus to big
> cores automatically. How can the user know that she really needs to
> be careful in the way she pins the vcpus of new VMs? Xen would also
> need to automatically pin vcpus of new VMs to either big or LITTLE
> cores, or xl would have to do it.
Actually, doing things with what we currently have for pinning is only
something I brought up as an example, (potentially) useful for a
proof of concept, or for very early-stage support.
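
Just to make the "already possible" point concrete: the `cpus=` hard
affinity option in a guest config, and `xl vcpu-pin` at runtime, exist
today. The pcpu range below is made up for illustration, assuming a
host where pcpus 8-15 are the big cores:

```
# guest config fragment: restrict all vcpus to the (hypothetical)
# big pcpus 8-15 using existing hard affinity support
vcpus = 4
cpus = "8-15"
```

The same effect can be had post-creation with `xl vcpu-pin <domain>
all 8-15`. All the usual caveats of plain pinning apply, of course.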

In the long run, when thinking about the scheduler-based solution, I
see things happening the other way round: you specify in the xl config
file (and with boot parameters, for dom0) how many big and how many
LITTLE vcpus you want, and the scheduler will know that it can only
schedule the big ones on big physical cores, and the LITTLE ones on
LITTLE physical cores.
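
To illustrate (and to be clear: no such syntax exists in xl today,
everything below is a made-up sketch of what the config could look
like):

```
# hypothetical xl config syntax for per-class vcpu counts
vcpus = 6
vcpus.big = 2        # schedulable only on big pcpus
vcpus.little = 4     # schedulable only on LITTLE pcpus
```

The scheduler would then enforce the class restriction itself, with no
user-visible pinning involved.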

Note that we're saying 'pinning' (yeah, I know, I did it myself in the
first place :-/), but that would not be an actual 1-to-1 pinning. For
instance, if domain X has 4 big vcpus, say dXv1, dXv2, dXv3 and dXv4,
and the host has 8 big pcpus, say 8-15, then those four vcpus will
only ever be run by the scheduler on pcpus 8-15 --any of them, with
migration and load balancing possible within the set. This is what I'm
talking about.
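
The behaviour I mean can be sketched as follows. This is a toy model,
not Xen code: the host layout (pcpus 0-7 LITTLE, 8-15 big), the vcpu
names and the least-loaded heuristic are all just illustrative
assumptions.

```python
# Toy model (NOT Xen code) of class-restricted scheduling: each vcpu is
# tagged "big" or "LITTLE", and the balancer may place it on any pcpu of
# the matching class -- free migration within the set, never outside it.

BIG, LITTLE = "big", "LITTLE"

# Hypothetical host layout: pcpus 0-7 are LITTLE, 8-15 are big.
pcpu_class = {c: (BIG if c >= 8 else LITTLE) for c in range(16)}

def eligible_pcpus(vcpu_class):
    """The pcpus the scheduler may ever pick for a vcpu of this class."""
    return {c for c, k in pcpu_class.items() if k == vcpu_class}

def place(vcpus, load):
    """Least-loaded placement, restricted to each vcpu's class."""
    placement = {}
    for name, klass in sorted(vcpus.items()):
        best = min(eligible_pcpus(klass), key=lambda c: (load[c], c))
        placement[name] = best
        load[best] += 1
    return placement

# The example from the text: domain X with 4 big vcpus, big pcpus 8-15.
vcpus = {"dXv1": BIG, "dXv2": BIG, "dXv3": BIG, "dXv4": BIG}
load = {c: 0 for c in pcpu_class}
placement = place(vcpus, load)
assert all(8 <= c <= 15 for c in placement.values())
```

Note there is no fixed vcpu-to-pcpu mapping anywhere: rerunning the
balancer with different loads would legitimately move the vcpus around
within pcpus 8-15.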

And this would work even if/when there is only one cpupool, or in
general for domains that are in a pool that has both big and LITTLE
pcpus. Furthermore, big.LITTLE support and cpupools would be
orthogonal, just like pinning and cpupools are orthogonal right now.
I.e., once we have what I described above, nothing prevents us from
implementing per-vcpu cpupool membership, and either creating the two
(or more!) big and LITTLE pools, or mixing things even more, for more
complex and specific use cases. :-)
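
For reference, the per-domain cpupool mechanics referred to here exist
already; the pool name and pcpu range below are made up:

```
# big-pool.cfg -- hypothetical pool of the big pcpus 8-15
name = "big-pool"
cpus = ["8-15"]
```

One would free those pcpus from the default pool with `xl
cpupool-cpu-remove Pool-0 8-15`, create the pool with `xl
cpupool-create big-pool.cfg`, and move a whole domain into it with `xl
cpupool-migrate`. The key limitation, as said, is that this is all
per-domain, never per-vcpu.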

Actually, with the cpupool solution, if you want a guest (or dom0) to
actually have both big and LITTLE vcpus, you necessarily have to
implement per-vcpu (rather than per-domain, as it is now) cpupool
membership. I said myself it's not impossible, but certainly it's some
work... with the scheduler solution you basically get that for free!

So, basically, if we use cpupools for the basics of big.LITTLE support,
there's no way out of it (apart from implementing scheduling support
afterwards, but that looks backwards to me, especially when thinking
about it with the code in mind).

> The whole process would be more explicit and obvious if we used
> cpupools. It would be easier for users to know what it is going on --
> they just need to issue an `xl cpupool-list' command and they would
> see
> two clearly named pools (something like big-pool and LITTLE-pool). 
Well, I guess that, as part of big.LITTLE support, there will be a way
to tell which pcpus are big and which are LITTLE anyway, probably both
from `xl info' and from `xl cpupool-list -c' (and most likely in other
ways too).

> We
> wouldn't have to pin vcpus to cpus automatically in Xen or xl, which
> doesn't sound like fun.
As I tried to say above, it will _look_ like some kind of automatic
pinning, but that does not mean it has to be implemented by means of
pinning, or dealt with by the user in the same way.

<<This happens because I choose it to happen!>> (Raistlin Majere)
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

