
Re: [Xen-devel] [RFC 0/5] xen/arm: support big.little SoC

On 19/09/2016 11:38, Peng Fan wrote:
On Mon, Sep 19, 2016 at 10:53:56AM +0200, Julien Grall wrote:

On 19/09/2016 10:36, Peng Fan wrote:
On Mon, Sep 19, 2016 at 10:09:06AM +0200, Julien Grall wrote:
Hello Peng,

On 19/09/2016 04:08, van.freenix@xxxxxxxxx wrote:
From: Peng Fan <peng.fan@xxxxxxx>

This patchset is to support Xen running on big.LITTLE SoCs.
The idea of the patch is from

There are some changes to cpupool, plus x86 stub functions to avoid build
breakage. This RFC patchset is being sent out to request comments on whether
the implementation is acceptable. The patchset has been tested based on
xen-4.8 unstable on an NXP i.MX8.

I use big/little CPUs and cpupools to explain the idea.
A pool that contains big CPUs is called a Big Pool.
A pool that contains little CPUs is called a Little Pool.
If a pool does not contain any physical CPUs, either little CPUs or big CPUs
can be added to it. But a cpupool cannot contain both little and big CPUs:
the CPUs in a cpupool must have the same CPU type (the same MIDR value), so
a CPU cannot be added to a cpupool that contains CPUs of a different type.
Little CPUs cannot be moved to a Big Pool if there are big CPUs in it, and
vice versa. A domain in a Big Pool cannot be migrated to a Little Pool, and
vice versa.
When Xen brings up all the CPUs, only CPUs with the same CPU type (the same
MIDR value) as the boot CPU are added into cpupool0.
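The "same MIDR" rule described above can be sketched as follows. This is an illustrative model, not the patchset's actual code: the struct and function names (cpupool_info, pool_accepts_cpu) are hypothetical, and the MIDR constants are representative values for Cortex-A53 and Cortex-A72.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the rule above: a cpupool remembers the MIDR of
 * its first CPU, and further CPUs must match it exactly.  These names are
 * illustrative, not the actual Xen code. */
struct cpupool_info {
    uint32_t midr;       /* MIDR of the CPUs already in the pool */
    unsigned int n_cpus; /* number of CPUs currently assigned */
};

static bool pool_accepts_cpu(struct cpupool_info *p, uint32_t cpu_midr)
{
    if (p->n_cpus == 0) {        /* empty pool: any CPU type may join */
        p->midr = cpu_midr;
        p->n_cpus = 1;
        return true;
    }
    if (p->midr != cpu_midr)     /* mixed big/little pools are rejected */
        return false;
    p->n_cpus++;
    return true;
}
```

An empty pool accepts any CPU type and takes on that type; afterwards only matching CPUs may join, which is the behaviour `xl cpupool-cpu-add` exhibits later in this thread.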

As mentioned in the mail you pointed to above, this series is not enough to
make big.LITTLE work on Xen. Xen always uses the boot CPU to detect the list
of features, and with big.LITTLE the features may not be the same across CPUs.

And I would prefer to see Xen supporting big.LITTLE correctly before we begin
to think about exposing big.LITTLE to userspace (via cpupools).

Do you mean vCPUs being scheduled freely between big and little CPUs?

By supporting big.LITTLE correctly I meant that, today, Xen thinks all the
cores have the same set of features, so feature detection is only done on the
boot CPU. See processor_setup for instance...

Moving vCPUs between big and little cores would be a hard task (cache line
issues, and possibly feature differences) and I don't expect us to ever cross
that bridge in Xen. However, I am expecting to see big.LITTLE exposed to the
guest (i.e. having big and little vCPUs).

Big vCPUs scheduled on big physical CPUs and little vCPUs scheduled on little
physical CPUs, right?
If so, is there still a need to let Xen think all the cores have the same set
of features?

I think you missed my point. The feature registers on big and little cores may be different. Currently, Xen reads the feature registers of the boot CPU and wrongly assumes that those features exist on all CPUs. This is not the case and should be fixed before we get into trouble.
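One way to picture the problem: AArch64 ID registers are carved into 4-bit fields, and a safe system-wide view on big.LITTLE would have to take, field by field, the weakest value seen across all cores, rather than just the boot CPU's value. A minimal sketch, assuming higher field values mean "more capable" (which holds for most but not all fields); this is not Xen code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: combine two AArch64 ID register values by taking,
 * for each 4-bit field, the lower of the two values, yielding the
 * lowest-common feature set across a big and a little core. */
static uint64_t id_reg_lowest_common(uint64_t a, uint64_t b)
{
    uint64_t out = 0;
    for (int shift = 0; shift < 64; shift += 4) {
        uint64_t fa = (a >> shift) & 0xf;
        uint64_t fb = (b >> shift) & 0xf;
        out |= (fa < fb ? fa : fb) << shift;
    }
    return out;
}
```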

Developing big.LITTLE guest support, I am not sure how much effort is needed.
Is this really needed?

This is not necessary at the moment, although I have seen some interest in it. Running a guest only on little cores is a nice beginning, but a guest may want to take advantage of big.LITTLE (running a hungry app on a big core and a light one on a little core).

This patchset uses cpupools to prevent vCPUs from being scheduled between big
and little CPUs.

See for instance v->arch.actlr = READ_SYSREG32(ACTLR_EL1).

Thanks for this. I only exposed the CPUID to the guest and missed ACTLR. I'll
check the A53 and A72 TRMs for the AArch64 implementation defined registers.
The ACTLR can be added to cpupool_arch_info like the MIDR.

Reading "vcpu_initialise", it seems only MIDR and ACTLR need to be handled.
Please advise if I missed anything else.

Have you checked the register emulation?

I checked MIDR. I have not checked the others.
I think I missed some registers in ctxt_switch_to.

Consider an SoC with 4 A53s (cpu[0-3]) + 2 A72s (cpu[4-5]), where cpu0 is the
first to boot. When Xen brings up the secondary CPUs, it adds cpu[0-3] to
cpupool0 and leaves cpu[4-5] outside any cpupool. Then when Dom0 boots up,
`xl cpupool-list -c` shows cpu[0-3] in Pool-0.

Then use the following commands to create a new cpupool and add cpu[4-5] to
it:
# xl cpupool-create name="Pool-A72" sched="credit2"
# xl cpupool-cpu-add Pool-A72 4
# xl cpupool-cpu-add Pool-A72 5
# xl create -d /root/xen/domu-test pool="Pool-A72"

I am a bit confused by these runes. They mean that only the first kind of CPU
gets a pool assigned. Why don't you directly create all the pools at boot
time?

If we need to create all the pools at boot, we need to decide how many pools
to create. I thought about this, but I have not come up with a good idea.

cpupool0 is defined in xen/common/cpupool.c; to create many pools, we would
need to allocate cpupools dynamically at boot. I would not like to change the
common code a lot.

Why? We should avoid choosing a specific design just because the common code
does not allow you to do it without heavy changes.

We never came across the big.LITTLE problem on x86, so it is normal to have
to modify the code.

I think the implementation in this patchset is an easy way to let the big and
little CPUs all run.

I care about having a design that allows easy use of big.LITTLE on Xen. Your
solution requires the administrator to know the underlying platform and
create the pools.

I suppose big.LITTLE is mainly used in embedded SoCs :). So the user
(developer?) needs to know the hardware platform.

The user will always be happy if Xen can save him a bit of time by creating the cpupools. ;)

In the solution I suggested, the pools would be created by Xen (and the info
exposed to userspace for the admin).

I think the reason to create cpupools to support big.LITTLE SoCs is to
avoid vCPUs being scheduled between big and little physical CPUs.

If we need to support big.LITTLE guests, I think there is no need to create
more cpupools except cpupool0. We need to make sure vCPUs are not scheduled
between big and little physical CPUs, and all the CPUs need to be in one
cpupool.
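That "one pool, restricted scheduling" idea can be sketched with toy bitmasks (bit n = pCPU n). The helper and cluster masks below are hypothetical; real Xen uses cpumask_t and per-vCPU hard affinity:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: keep all pCPUs in one pool, but clamp each vCPU's allowed
 * pCPU set to a single cluster so it never migrates across the
 * big/little boundary.  Masks mirror the example SoC in this thread. */
#define LITTLE_CLUSTER 0x0fu  /* pCPUs 0-3, e.g. the A53s */
#define BIG_CLUSTER    0x30u  /* pCPUs 4-5, e.g. the A72s */

static uint32_t clamp_affinity(uint32_t requested, uint32_t cluster)
{
    uint32_t eff = requested & cluster;
    return eff ? eff : cluster;  /* never leave the vCPU unschedulable */
}
```

A request that names CPUs outside the vCPU's cluster is silently clamped back to the cluster, so the scheduler can never place a big vCPU on a little core or vice versa.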

Also, in which pool will a domain be created if none is specified?

Now `xl cpupool-list -c` shows:
Name            CPU list
Pool-0          0,1,2,3
Pool-A72        4,5

`xl cpupool-list` shows:
Name               CPUs   Sched     Active   Domain count
Pool-0               4    credit       y          1
Pool-A72             2   credit2       y          1

`xl cpupool-cpu-remove Pool-A72 4` followed by `xl cpupool-cpu-add Pool-0 4`
does not succeed, because Pool-0 contains A53 CPUs, but CPU4 is an A72.

`xl cpupool-migrate DomU Pool-0` will also fail, because DomU was created
in Pool-A72 with A72 vCPUs, while Pool-0 has A53 physical CPUs.

Patch 1/5:
use "cpumask_weight(cpupool0->cpu_valid);" to replace "num_online_cpus()",
because num_online_cpus() counts all the online CPUs, but now we only
need the big or the little CPUs.
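The effect of that replacement can be illustrated with a toy bitmask standing in for a cpumask (a sketch; real Xen uses cpumask_t and cpumask_weight()). With six CPUs online but only cpu[0-3] in cpupool0, counting online CPUs gives 6, while weighing cpupool0's valid mask gives 4:

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for cpumask_weight(): population count of a bitmask
 * where bit n represents pCPU n.  Illustrative only. */
static unsigned int mask_weight(uint32_t mask)
{
    unsigned int n = 0;
    for (; mask; mask &= mask - 1)  /* Kernighan: clear lowest set bit */
        n++;
    return n;
}
```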

So if I understand correctly, if the boot CPU is a little CPU, Dom0 will
only ever be able to use little ones. Is that right?

Yeah. Dom0 only uses the little ones.

This is really bad: dom0 in the normal case will have all the backends. It
may not be possible to select the boot CPU, and therefore dom0 would always
get a little core.
Dom0 runs in cpupool0. cpupool0 only contains the cpu[0-3] in my case.

So the performance of dom0 will be impacted because it will only use little cores.

Creating the pools at boot time would have avoided such an issue because,
unless we expose big.LITTLE to dom0 (I would need the input of George and
Dario for this bit), we could have a parameter to specify which set of CPUs
(e.g. a pool) to allocate dom0 vCPUs from.

dom0 is the control domain; I think there is no need to expose big.LITTLE to
dom0. Pinning vCPUs to specific physical CPUs may help support big.LITTLE
guests.

Note that I am not asking you to implement everything. But I think we need a
coherent view of big.LITTLE support in Xen today to go forward.

Yeah. So you prefer supporting big.LITTLE guests?

I have seen some interest on it.

Please advise if you have any plans or ideas, or on what I can do about this.

I already gave some ideas on what could be done for big.LITTLE support. But, I admit, I haven't thought much about it yet, so I may miss some parts.


Julien Grall
