Re: [Xen-devel] [RFC 0/5] xen/arm: support big.little SoC
On Wed, Sep 21, 2016 at 11:15:35AM +0100, Julien Grall wrote:
>Hello Peng,
>
>On 21/09/16 09:38, Peng Fan wrote:
>>On Tue, Sep 20, 2016 at 01:17:04PM -0700, Stefano Stabellini wrote:
>>>On Tue, 20 Sep 2016, Julien Grall wrote:
>>>>On 20/09/2016 20:09, Stefano Stabellini wrote:
>>>>>On Tue, 20 Sep 2016, Julien Grall wrote:
>>>>>>On 20/09/2016 12:27, George Dunlap wrote:
>>>>>>>On Tue, Sep 20, 2016 at 11:03 AM, Peng Fan <van.freenix@xxxxxxxxx>
>>>>>>>wrote:
>>>>>>>>On Tue, Sep 20, 2016 at 02:54:06AM +0200, Dario Faggioli wrote:
>>>>>>>>>On Mon, 2016-09-19 at 17:01 -0700, Stefano Stabellini wrote:
>>>>>>>>>>On Tue, 20 Sep 2016, Dario Faggioli wrote:
>>>>>It is harder to figure out which one is supposed to be
>>>>>big and which one LITTLE. Regardless, we could default to using the
>>>>>first cluster (usually big), which is also the cluster of the boot
>>>>>cpu, and utilize the second cluster only when the user demands it.
>>>>
>>>>Why do you think the boot CPU will usually be a big one? In the case
>>>>of the Juno platform it is configurable, and the boot CPU is a little
>>>>core on r2 by default.
>>>>
>>>>In any case, what we care about is differentiating between two sets
>>>>of CPUs. I don't think Xen should care about migrating a guest vCPU
>>>>between big and LITTLE cpus. So I am not sure why we would want to
>>>>know that.
>>>
>>>No, it is not about migrating (at least not yet). It is about giving
>>>useful information to the user. It would be nice if the user had to
>>>choose between "big" and "LITTLE" rather than "class 0x1" and
>>>"class 0x100", or even "A7" or "A15".
>>
>>As Dario mentioned in a previous email, for dom0 we could provide
>>something like this:
>>
>>dom0_vcpus_big = 4
>>dom0_vcpus_little = 2
>>
>>If these two are not provided, we could let dom0 run on the big pcpus
>>or on big.LITTLE. Anyway, whether dom0 is big-only or big.LITTLE is
>>not the important point here.
>>
>>For domU, provide "vcpus.big" and "vcpus.little" in the xl
>>configuration file.
>>Such as:
>>
>>vcpus.big = 2
>>vcpus.little = 4
>>
>>Following George's comments, I think we could then use affinity to
>>restrict little vcpus to be scheduled on little pcpus, and big vcpus
>>on big pcpus. There seems to be no need for soft affinity; hard
>>affinity is enough to handle this.
>>
>>We may need to provide some interface so that xl can get information
>>such as whether the platform is big.LITTLE or SMP, and if it is
>>big.LITTLE, which pcpus are big and which are little.
>>
>>For how to differentiate cpus, I am looking at the linaro EAS cpu
>>topology code. The code has not been upstreamed :(, but it has been
>>merged into the google android kernel. I only plan to take the
>>necessary pieces, such as the device tree parsing and cpu topology
>>build, because we only need to know the computing capacity of each
>>pcpu.
>>
>>Some doc about the EAS piece, including dts node examples:
>>https://git.linaro.org/arm/eas/kernel.git/blob/refs/heads/lsk-v4.4-eas-v5.2:/Documentation/devicetree/bindings/scheduler/sched-energy-costs.txt
>
>I am reluctant to take any non-upstreamed bindings in Xen. There is a
>similar series going on on lkml [1].

For differentiating cpu classes, how about directly using the compatible
property of each cpu node?

A57_0: cpu@0 {
        compatible = "arm,cortex-a57", "arm,armv8";
        reg = <0x0 0x0>;
        ...
};

A53_0: cpu@100 {
        compatible = "arm,cortex-a53", "arm,armv8";
        reg = <0x0 0x100>;
        ...
};

Thanks,
Peng.

>But it sounds like it is a lot of work for little benefit (i.e. giving
>a better name to the set of CPUs). The naming will also not fit if
>future hardware has more than 2 kinds of CPUs.
>
>[...]
>
>>I am not sure, but we may also need to handle the MPIDR for ARM,
>>because big and little vcpus are supported.
>
>I am not sure I understand what you mean here.
>
>Regards,
>
>[1] https://lwn.net/Articles/699569/
>
>--
>Julien Grall

--
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
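[Editorial illustration, not part of the thread] The compatible-property
idea above amounts to grouping pcpus into classes by the most specific
compatible string of each /cpus/cpu@N device-tree node. A minimal sketch
of that grouping logic, assuming the device tree has already been parsed
into a plain dict (the dict and all names here are hypothetical, standing
in for real device-tree parsing in the toolstack):

```python
# Hypothetical sketch: group pcpus into big.LITTLE classes by the first
# "compatible" string of each cpu node. The input dict stands in for
# real device-tree parsing; reg values mirror the A57/A53 nodes quoted
# in the mail.

def classify_cpus(cpu_nodes):
    """Return {compatible: [reg values]} so a toolstack can tell the
    CPU classes apart (e.g. "arm,cortex-a57" vs "arm,cortex-a53")."""
    classes = {}
    for reg, compatible in sorted(cpu_nodes.items()):
        # The first entry of the compatible list is the most specific
        # one, e.g. "arm,cortex-a57" before the generic "arm,armv8".
        classes.setdefault(compatible[0], []).append(reg)
    return classes

# Example input: two A57 cores at reg 0x0/0x1 and two A53 cores at
# reg 0x100/0x101 (the exact reg values are assumptions).
nodes = {
    0x000: ["arm,cortex-a57", "arm,armv8"],
    0x001: ["arm,cortex-a57", "arm,armv8"],
    0x100: ["arm,cortex-a53", "arm,armv8"],
    0x101: ["arm,cortex-a53", "arm,armv8"],
}

print(classify_cpus(nodes))
```

With such a mapping, xl could label the classes for the user and derive
the hard-affinity masks that pin big vcpus to big pcpus and little vcpus
to little pcpus, as discussed above.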