
Re: [Xen-devel] [RFC 0/5] xen/arm: support big.little SoC



Hi Dario,
On Tue, Sep 20, 2016 at 02:54:06AM +0200, Dario Faggioli wrote:
>On Mon, 2016-09-19 at 17:01 -0700, Stefano Stabellini wrote:
>> On Tue, 20 Sep 2016, Dario Faggioli wrote:
>> > And this would work even if/when there is only one cpupool, or in
>> > general for domains that are in a pool that has both big and LITTLE
>> > pcpus. Furthermore, big.LITTLE support and cpupools will be
>> > orthogonal,
>> > just like pinning and cpupools are orthogonal right now. I.e., once
>> > we
>> > will have what I described above, nothing prevents us from
>> > implementing
>> > per-vcpu cpupool membership, and either create the two (or more!)
>> > big
>> > and LITTLE pools, or from mixing things even more, for more complex
>> > and
>> > specific use cases. :-)
>> 
>> I think that everybody agrees that this is the best long term
>> solution.
>> 
>Well, no, that wasn't obvious to me. If that's the case, it's already
>something! :-)
>
>> > 
>> > Actually, with the cpupool solution, if you want a guest (or dom0)
>> > to
>> > actually have both big and LITTLE vcpus, you necessarily have to
>> > implement per-vcpu (rather than per-domain, as it is now) cpupool
>> > membership. I said myself it's not impossible, but certainly it's
>> > some
>> > work... with the scheduler solution you basically get that for
>> > free!
>> > 
>> > So, basically, if we use cpupools for the basics of big.LITTLE
>> > support,
>> > there's no way out of it (apart from going implementing scheduling
>> > support afterwords, but that looks backwards to me, especially when
>> > thinking at it with the code in mind).
>> 
>> The question is: what is the best short-term solution we can ask Peng
>> to
>> implement that allows Xen to run on big.LITTLE systems today?
>> Possibly
>> getting us closer to the long term solution, or at least not farther
>> from it?
>> 
>So, I still have to look closely at the patches in these series. But,
>with Credit2 in mind, if one:
>
>  - take advantage of the knowledge of what arch a pcpu belongs to inside
>    the code that arranges the pcpus into runqueues, which means we'll
>    end up with big runqueues and LITTLE runqueues. I re-wrote that code,
>    I can provide pointers and help, if necessary;
>  - tweak the one or two instances of for_each_runqueue() [*] that there
>    are in the code into a for_each_runqueue_of_same_class(), i.e.:

Do you have a plan to add this support for big.LITTLE?

I admit that this is the first time I have looked into the scheduler part.
If I have understood it wrongly, please correct me.

There is a runqueue for each physical cpu, and there are several vcpus in
each runqueue. The scheduler picks a vcpu from the runqueue to run on the
physical cpu.

A vcpu is bound to a physical cpu at alloc_vcpu() time, but the vcpu can
later be scheduled or migrated to a different physical cpu.

Setting cpu soft affinity and hard affinity restricts which cpus the vcpus
can be scheduled on. Is there then a need to introduce more runqueues?

This seems more complicated than cpupool (:
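
If affinity alone were enough (i.e. without new runqueues), I guess the
binding could be almost a one-liner, something like this rough sketch
(cpumask_big is a hypothetical mask of big pcpus built at boot):

    /* restrict a vcpu to the big cluster via hard affinity */
    vcpu_set_hard_affinity(v, &cpumask_big);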

>
>  if (is_big(this_cpu))
>  {
>      for_each_big_runqueue()
>      {
>          ..
>      }
>  }
>  else
>  {
>      for_each_LITTLE_runqueue()
>      {
>          ..
>      }
>  }
>
>then big.LITTLE support in Credit2 would be done already, and all it
>would be left is support for the syntax of new config switches in xl,
>and a way of telling, from xl/libxl down to Xen, what arch a vcpu
>belongs to, so that it can be associated with one runqueue of the
>proper class.
>
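On the xl side, I guess the config syntax could end up looking something
like this (purely hypothetical, no such option exists yet):

    # guest config sketch: two big vcpus and two LITTLE vcpus
    vcpus = 4
    vcpus_big = [0, 1]    # hypothetical option naming the big vcpus
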
>Thinking of Credit1, we need to make sure that, in load_balance() and
>runq_steal(), a LITTLE cpu *only* ever try to steal work from another
>LITTLE cpu, and __never__ from a big cpu (and vice versa). And also
>that when a vcpu wakes up, and what it has in its v->processor is a
>LITTLE pcpu, that only LITTLE processors are considered for being
>tickled (I'm less certain of this last part, but it should be more or
>less like this).
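
So in Credit1 the class check could be as simple as something like this
rough sketch (is_big_cpu() is a made-up helper):

    /* in the work-stealing loop: never steal across classes */
    if ( is_big_cpu(peer_cpu) != is_big_cpu(cpu) )
        continue;
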
>
>Then, of course, the same glue and vcpu classification code is needed.
>
>However, in Credit1, it's possible that a trick like that would affect
>the accounting and credit algorithm, and hence provide unfair, or in
>general, unexpected results. Credit2 should, OTOH, be a lot more
>resilient wrt that.
>
>> > > The whole process would be more explicit and obvious if we used
>> > > cpupools. It would be easier for users to know what it is going
>> > > on --
>> > > they just need to issue an `xl cpupool-list' command and they
>> > > would
>> > > see
>> > > two clearly named pools (something like big-pool and LITTLE-
>> > > pool).
>> > > 
>> > Well, I guess that, as part of big.LITTLE support, there will be a
>> > way
>> > to tell what pcpus are big and which are LITTLE anyway, probably
>> > both
>> > from `xl info' and from `xl cpupool-list -c' (and most likely in
>> > other
>> > ways too).
>> 
>> Sure, but it needs to be very clear. We cannot ask people to spot
>> architecture specific flags among the output of `xl info' to be able
>> to
>> appropriately start a guest. 
>>
>As mentioned in previous mail, and as drafted when replying to Peng,
>the only thing that the user should know is how many big and how many
>LITTLE vcpus she wants (and, potentially, which ones would be which). :-)

Yeah. A new question comes to mind.

For big.LITTLE, how do we decide whether a physical cpu is a big cpu or a
LITTLE cpu?

I'd like to add a computing-capability table in xen/arm, like this:

struct compute_capability
{
    const char *core_name;
    uint32_t rank;
    uint32_t cpu_partnum;   /* MIDR part number */
};

struct compute_capability cc[] =
{
    {"A72", 4, 0xd08},
    {"A57", 3, 0xd07},
    {"A53", 2, 0xd03},
    {"A35", 1, 0xd04},
};

Then, when identifying a cpu, we decide which cpus are big and which are
LITTLE according to the computing rank.
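
Something like this rough lookup (get_cpu_rank() is made up) could then be
used at cpu identification time:

static uint32_t get_cpu_rank(uint32_t partnum)
{
    unsigned int i;

    /* map a MIDR part number to its computing rank; 0 = unknown */
    for ( i = 0; i < ARRAY_SIZE(cc); i++ )
        if ( cc[i].cpu_partnum == partnum )
            return cc[i].rank;

    return 0;
}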

Any comments?

Thanks,
Peng.
>
>Regards,
>Dario
>-- 
><<This happens because I choose it to happen!>> (Raistlin Majere)
>-----------------------------------------------------------------
>Dario Faggioli, Ph.D, http://about.me/dario.faggioli
>Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



-- 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

