
Re: [Xen-devel] [RFC 0/5] xen/arm: support big.little SoC



On Wed, 21 Sep 2016, Julien Grall wrote:
> > > > And in my suggestion, we allow a richer set of labels, so that the user
> > > > could also be more specific -- e.g., asking for "A15" specifically, and
> > > > failing to build if there are no A15 cores present, while allowing users
> > > > to simply write "big" or "little" if they want simplicity / things which
> > > > work across different platforms.
> > > 
> > > Well, before trying to do something clever like that (i.e. naming "big"
> > > and "little"), we need to have upstreamed bindings available to
> > > acknowledge the difference. AFAICT, it is not yet upstreamed for Device
> > > Tree (see [1]) and I don't know of any static ACPI tables providing
> > > similar information.
> > 
> > I like George's idea that "big" and "little" could be just convenience
> > aliases. Of course they are predicated on the necessary device tree
> > bindings being upstream. We don't need [1] to be upstream in Linux, just
> > the binding:
> > 
> > http://marc.info/?l=linux-arm-kernel&m=147308556729426&w=2
> > 
> > which has already been acked by the relevant maintainers.
> 
> This is device tree only. What about ACPI?

ACPI will come along with similar information at some point. When we
have it, we'll use it.


> > > I have had a few discussions and more thought about big.LITTLE support
> > > in Xen. The main goal of big.LITTLE is power efficiency by moving tasks
> > > around and being able to idle one cluster. All the solutions suggested
> > > (including mine) so far can be replicated by hand (except the VPIDR), so
> > > they are mostly an automatic way of doing it. This will also remove the
> > > real benefits of big.LITTLE because Xen will not be able to migrate
> > > vCPUs across clusters for power efficiency.
> > 
> > The goal of the architects of big.LITTLE might have been power
> > efficiency, but of course we are free to use any features that the
> > hardware provides in the best way for Xen and the Xen community.
> 
> This is very dependent on how big.LITTLE has been implemented by the
> hardware. Some platforms cannot run both big and LITTLE cores at the same
> time. You need a proper switch in the firmware/hypervisor.
 
Fair enough, that hardware wouldn't benefit from this work.


> > > If we care about power efficiency, we would have to handle big.LITTLE
> > > seamlessly in Xen (i.e. a guest would only see one kind of CPU). This
> > > raises quite a few problems, nothing insurmountable, similar to
> > > migration across two platforms with different micro-architectures
> > > (e.g. different processors): errata, features supported... The guest
> > > would have to know the union of all the errata (this is done so far via
> > > the MIDR, so we would need a PV way to do it), and only the intersection
> > > of features would be exposed to the guest. This also means the scheduler
> > > would have to be modified to handle power efficiency (not strictly
> > > necessary at the beginning).
> > >
> > > I agree that such a solution would require some work to implement,
> > > although Xen would have better control of the energy consumption of the
> > > platform.
> > >
> > > So the question here is: what do we want to achieve with big.LITTLE?
> > 
> > I don't think that handling big.LITTLE seamlessly in Xen is the best way
> > to do it in the scenarios where Xen on ARM is being used today. I
> > understand the principles behind it, but I don't think that it will lead
> > to good results in a virtualized environment, where there is more
> > activity and more vcpus than pcpus.
> 
> Can you detail why you don't think it will give good results?

I think big.LITTLE works well for cases where you have short, clear bursts
of activity while most of the time the system is quasi-idle (but not
completely idle). Basically like a smartphone. For other scenarios with
more uniform activity patterns, like a server or an infotainment system,
big.LITTLE is too big a hammer to be used for dynamic power saving.
In those cases it is more flexible to expose all cores to VMs, so that
they can exploit all resources when necessary and idle them when they
can (with wfi or a deeper sleep state if possible).
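
Just to be concrete about the guest-visible side of that: a vcpu with
nothing to do simply executes wfi, which Xen can trap to run something
else or idle the pcpu. A minimal sketch (plain C with inline assembly,
not actual Xen or guest code):

static inline void cpu_idle_wait(void)
{
    /* Wait For Interrupt: stall the core until an interrupt or other
     * wake-up event is pending.  Deeper sleep states need a PSCI call
     * into firmware instead (see further below). */
    asm volatile("wfi" ::: "memory");
}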


> > What we discussed in this thread so far is actionable, and gives us
> > big.LITTLE support in a short time frame. It is a good fit for Xen on
> > ARM use cases and still leads to lower power consumption with a wise
> > allocation of big and LITTLE vcpus and pcpus to guests.
> 
> How would this lead to lower power consumption? If there is nothing
> running on the processor we would have a wfi loop, which will never put
> the physical CPU in deep sleep.

I expect that by assigning appropriate tasks to big and LITTLE cores,
some big cores will be left idle, which will lead to some power
saving, especially if idle cores are put in deep sleep (maybe using PSCI?).
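
To make "maybe using PSCI?" a bit more concrete, here is a rough sketch
of what powering an idle core down through firmware looks like. The
function IDs are the ones published in the PSCI 0.2 spec; the helper
names and everything else are illustration only, not Xen code:

#include <stdint.h>

/* PSCI 0.2 function IDs, as published in the ARM PSCI specification. */
#define PSCI_0_2_FN_CPU_OFF        0x84000002U
#define PSCI_0_2_FN64_CPU_SUSPEND  0xC4000001U

/* SMC into the PSCI implementation in firmware: function ID in x0,
 * arguments in x1..x3, result back in x0. */
static int64_t psci_smc(uint64_t fid, uint64_t a1, uint64_t a2, uint64_t a3)
{
    register uint64_t x0 asm("x0") = fid;
    register uint64_t x1 asm("x1") = a1;
    register uint64_t x2 asm("x2") = a2;
    register uint64_t x3 asm("x3") = a3;

    asm volatile("smc #0"
                 : "+r" (x0)
                 : "r" (x1), "r" (x2), "r" (x3)
                 : "memory");
    return (int64_t)x0;
}

/* Power the calling core off completely; it only comes back via CPU_ON.
 * A lighter option would be CPU_SUSPEND with a platform-specific
 * power state. */
static void idle_core_off(void)
{
    psci_smc(PSCI_0_2_FN_CPU_OFF, 0, 0, 0);
    /* Does not return on success. */
}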


> The main advantage of big.LITTLE is to be able to switch off a
> cluster/cpu when it is not used.

To me the main advantage is having double the number of cores, each of
them better suited for different kinds of tasks :-)
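
And on the identification side, a core already tells us which kind it
is via its MIDR, so "big"/"little" convenience labels don't need
anything exotic. A hypothetical sketch (the part numbers are the
published ARM values; the classification helper is made up for
illustration):

#include <stdint.h>

/* MIDR_EL1 bits [15:4] hold the primary part number of the core. */
#define MIDR_PARTNUM(midr)  (((midr) >> 4) & 0xfff)

#define PART_CORTEX_A7   0xc07   /* LITTLE */
#define PART_CORTEX_A53  0xd03   /* LITTLE */
#define PART_CORTEX_A15  0xc0f   /* big */
#define PART_CORTEX_A57  0xd07   /* big */

enum cpu_class { CPU_CLASS_LITTLE, CPU_CLASS_BIG, CPU_CLASS_UNKNOWN };

static inline uint64_t read_midr(void)
{
    uint64_t midr;
    /* Read the identification register of the current core. */
    asm volatile("mrs %0, midr_el1" : "=r" (midr));
    return midr;
}

static enum cpu_class classify_core(uint64_t midr)
{
    switch (MIDR_PARTNUM(midr)) {
    case PART_CORTEX_A7:
    case PART_CORTEX_A53:
        return CPU_CLASS_LITTLE;
    case PART_CORTEX_A15:
    case PART_CORTEX_A57:
        return CPU_CLASS_BIG;
    default:
        return CPU_CLASS_UNKNOWN;
    }
}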


> Without any knowledge in Xen (such as CPU freq), I am afraid that the
> power consumption will still be the same.
>
> > I would start from this approach; then, if somebody comes along with a
> > plan to implement a big.LITTLE switcher in Xen, I would welcome her to
> > do it and be happy to accept the code in Xen. We'll just make it
> > optional.
> 
> I think we are discussing a simple design for big.LITTLE here. I never asked
> Peng to do all the work. I am worried that if we start to expose big.LITTLE
> to userspace it will be hard in the future to step back from it.

I don't think so: I think we can make both approaches work without
issue, but you seem to have come to the same conclusion in the
follow-up emails.
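
Put differently, the per-vcpu restriction is just an affinity mask over
the pcpus of one class, so a future "seamless" scheduler could simply
widen that mask; nothing user-visible has to change. A toy sketch with
plain bitmasks, purely illustrative and not the actual Xen cpumask API:

#include <stdint.h>

/* Toy pcpu masks for an 8-core system: pcpus 0-3 LITTLE, 4-7 big.
 * The names and the whole scheme are illustrative only. */
static const uint32_t little_pcpus = 0x0f;
static const uint32_t big_pcpus    = 0xf0;

/* Approach discussed in this thread: a vcpu labelled "big" is only
 * ever scheduled on big pcpus. */
static uint32_t candidates_class_pinned(uint32_t hard_affinity, int is_big)
{
    return hard_affinity & (is_big ? big_pcpus : little_pcpus);
}

/* A later "seamless" scheduler could just widen the candidate mask and
 * migrate vcpus across clusters when that saves power. */
static uint32_t candidates_seamless(uint32_t hard_affinity)
{
    return hard_affinity & (big_pcpus | little_pcpus);
}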

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

