
Re: [PATCH 4/5] xen/cpupool: Create different cpupools at boot time



On Wed, 16 Feb 2022, Luca Fancellu wrote:
> > On 16 Feb 2022, at 02:45, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
> > 
> > On Tue, 15 Feb 2022, Luca Fancellu wrote:
> >> Introduce an architecture-specific way to create different cpupools
> >> at boot time. This is particularly useful on ARM big.LITTLE systems,
> >> where there might be the need to have different cpupools for each type
> >> of core, but also systems using NUMA can have different cpupools for
> >> each node.
> >> 
> >> The feature on arm relies on a specification of the cpupools from the
> >> device tree to build pools and assign cpus to them.
> >> 
> >> Documentation is created to explain the feature.
> >> 
> >> Signed-off-by: Luca Fancellu <luca.fancellu@xxxxxxx>
> >> ---
> >> docs/misc/arm/device-tree/cpupools.txt | 118 +++++++++++++++++++++++++
> >> xen/arch/arm/Kconfig                   |   9 ++
> >> xen/arch/arm/Makefile                  |   1 +
> >> xen/arch/arm/cpupool.c                 | 118 +++++++++++++++++++++++++
> >> xen/common/sched/cpupool.c             |   4 +-
> >> xen/include/xen/sched.h                |  11 +++
> >> 6 files changed, 260 insertions(+), 1 deletion(-)
> >> create mode 100644 docs/misc/arm/device-tree/cpupools.txt
> >> create mode 100644 xen/arch/arm/cpupool.c
> >> 
> >> diff --git a/docs/misc/arm/device-tree/cpupools.txt b/docs/misc/arm/device-tree/cpupools.txt
> >> new file mode 100644
> >> index 000000000000..7298b6394332
> >> --- /dev/null
> >> +++ b/docs/misc/arm/device-tree/cpupools.txt
> >> @@ -0,0 +1,118 @@
> >> +Boot time cpupools
> >> +==================
> >> +
> >> +On arm, when BOOT_TIME_CPUPOOLS is enabled in the Xen configuration, it is
> >> +possible to create cpupools during the boot phase by specifying them in
> >> +the device tree.
> >> +
> >> +Cpupool specification nodes shall be direct children of the /chosen node.
> >> +Each cpupool node contains the following properties:
> >> +
> >> +- compatible (mandatory)
> >> +
> >> +    Must always include the compatibility string: "xen,cpupool".
> >> +
> >> +- cpupool-id (mandatory)
> >> +
> >> +    Must be a positive integer number.
> > 
> 
> Hi Stefano,
> 
> Thank you for your review,
> 
> > Why is cpupool-id mandatory? It looks like it could be generated by Xen.
> > Or is it actually better to have the user specify it anyway?
> > 
> 
> Yes, at first I thought to automatically generate that; however, I needed
> a structure to map the id to the cpupool DT node. Here my doubt was about
> the size of the structure, because the user could even specify a cpupool
> for each cpu. I could allocate it dynamically and free it after domU
> creation in setup_xen.
> What do you think could be the right way?

Maybe we can achieve the goal with the structure we already have:
pool_cpu_map. pool_cpu_map returns a struct cpupool*, which has an
unsigned int cpupool_id field. As a pCPU can only be in one cpupool, we
could, for each dom0less domain:

- get the xen,cpupool phandle from "domain-cpupool"
- get the first CPU phandle from "cpupool-cpus" in xen,cpupool
- from the CPU node phandle get the CPU number from "reg"
- pool_cpu_map[cpu_reg]->cpupool_id is the id that we need

It should be fast as they are all direct accesses (no walking of long
lists or binary trees).
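
Just to make the idea concrete, here is a small standalone toy model of
that lookup chain. Everything except struct cpupool, cpupool_id and
pool_cpu_map is made up for the example, and the DT nodes are plain C
structs instead of real device tree accesses:

#include <stdio.h>

struct cpupool { unsigned int cpupool_id; };

/* pool_cpu_map[cpu] -> the single cpupool that pCPU belongs to */
static struct cpupool pool_a = { .cpupool_id = 0 };
static struct cpupool pool_b = { .cpupool_id = 1 };
static struct cpupool *pool_cpu_map[] = { &pool_a, &pool_a, &pool_b, &pool_b };

/* toy stand-ins for the device tree nodes */
struct cpu_node { unsigned int reg; };              /* "reg" of a cpu node */
struct cpupool_node { struct cpu_node *cpus; };     /* "cpupool-cpus"      */
struct domain_node { struct cpupool_node *pool; };  /* "domain-cpupool"    */

static unsigned int domain_boot_cpupool_id(const struct domain_node *d)
{
    unsigned int cpu = d->pool->cpus[0].reg; /* first CPU's "reg"         */
    return pool_cpu_map[cpu]->cpupool_id;    /* direct lookup, no walking */
}

int main(void)
{
    struct cpu_node cpus[] = { { .reg = 2 }, { .reg = 3 } };
    struct cpupool_node pool = { .cpus = cpus };
    struct domain_node dom = { .pool = &pool };

    printf("cpupool id: %u\n", domain_boot_cpupool_id(&dom)); /* -> 1 */
    return 0;
}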


> Or the dom0less guest could specify the id, but I like it more when using
> a phandle to the xen,cpupool node.

No, I think the ID is something Xen should generate.


> >> +- cpupool-cpus (mandatory)
> >> +
> >> +    Must be a list of device tree phandles to nodes describing cpus (e.g.
> >> +    having device_type = "cpu"); it can't be empty.
> >> +
> >> +- cpupool-sched (optional)
> >> +
> >> +    Must be a string with the name of a Xen scheduler. It has no effect
> >> +    when used in conjunction with a cpupool-id equal to zero; in that case
> >> +    the default Xen scheduler is selected (sched=<...> boot argument).
> > 
> > I don't get why cpupool-id == 0 should trigger a special cpupool-sched
> > behavior.
> 
> The cpupool with id 0 is embedded in Xen; it has its own special case
> handling in cpupool_create that gives it the default scheduler. I thought
> it was better to leave it as it was; however, the cpupool0 scheduler can
> still be modified using the sched= boot argument, as before.
> 
> > 
> > 
> >> +Constraints
> >> +===========
> >> +
> >> +The cpupool with id zero is implicitly created even if not specified; that
> >> +pool must have at least one cpu assigned, otherwise Xen will stop.
> >> +
> >> +Every cpu brought up by Xen will be assigned to the cpupool with id zero
> >> +if it's not assigned to any other cpupool.
> >> +
> >> +If a cpu is assigned to a cpupool, but it's not brought up correctly, Xen
> >> +will stop.
> > 
> > Thank you for documenting the constraints, but why do we have them?
> > Imagine a user specifying 3 cpu pools and imagine the cpupool-id is
> > optional and missing. We could take care of the cpupool-id generation in
> > Xen and we could also assign the default scheduler everywhere
> > cpupool-sched is not specified. Maybe I am missing something?
> 
> Yes, we could make the cpupool-id optional; my doubts are in the first
> comment above.
> Whenever cpupool-sched is not specified, the current behaviour is to use
> the default scheduler.
> 
> > 
> > Does cpupool0 have to exist? I guess the answer could be yes, but if it
> > is specified as id of one of the pools we are fine, otherwise it could
> > be automatically generated by Xen.
> 
> Yes, cpupool0 needs to exist; however, it is still generated by Xen
> regardless of the DT specification. In fact, you could avoid specifying
> in the DT any xen,cpupool compatible node with cpupool-id == 0 and Xen
> will still generate cpupool0 anyway (Xen internals are tied to the
> existence of a cpupool0).
> 
> > 
> > In any case, I don't think that cpupool0 has to have the default
> > scheduler?
> 
> Ok, I think I can create a function to assign a scheduler to cpupool0
> after its creation; I would need to test it to be sure I don't find
> something strange.
> 
> > 
> > My suggestion would be:
> > 
> > - make cpupool-id optional
> > - assign automatically cpupool-ids starting from 0
> >    - respect cpupool-ids chosen by the user
> 
> Ok, it would start from 1 because cpupool0 always exists
> 
> > - if some CPUs are left out (not specified in any pool) add an extra cpupool
> >    - the extra cpupool doesn't have to be cpupool-id == 0, it could be
> >      cpupool-id == n
> >    - the extra cpupool uses the default scheduler
> 
> I gave all the unassigned cpus to cpupool0 to reflect the current
> behaviour, so that a user that doesn't specify any xen,cpupool node ends
> up with a system reflecting the current behaviour, as if the feature were
> not enabled.
> However, I can say: if no xen,cpupool nodes are found then assign cpus to
> cpupool0, else assign them to a new cpupool and...
> 
> > 
> > If the user created cpupools in device tree covering all CPUs and also
> > specified all cpupool-ids everywhere, and none of them are 0 (no cpupool
> > in the system is cpupool0) then panic. (Assuming that cpupool0 is
> > required.)
> 
> … panic if cpupool0 has no cpus.

That could be a good plan.

However, if cpupool0 has to have CPU0 (as Juergen wrote) then we could
automatically assign cpupool-id == 0 to whatever xen,cpupool node has
CPU0:

- if CPU0 is unassigned, cpupool0 is the one with all the unassigned CPUs
- if CPU0 is assigned to one of the xen,cpupool nodes, then that cpupool
  gets id == 0

Alternatively, we could fix the Xen limitation that cpupool0 has to have
CPU0.

In any case the good thing is that from a device interface perspective,
it doesn't matter. The device tree description doesn't have to change.
The user doesn't need to care how Xen comes up with the IDs.
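
To make the id-assignment rule concrete, here is a small standalone toy
model of it. The names and structure are made up for the example and are
not the actual patch code:

#include <stdbool.h>
#include <stdio.h>

#define N_DT_POOLS 3

struct dt_pool {
    bool has_cpu0;      /* does this xen,cpupool node list CPU0? */
    unsigned int id;
};

int main(void)
{
    /* example: CPU0 listed in the second xen,cpupool node */
    struct dt_pool pools[N_DT_POOLS] = { { false }, { true }, { false } };
    unsigned int unassigned_pool_id, next_id = 1, i;
    bool cpu0_in_dt_pool = false;

    for ( i = 0; i < N_DT_POOLS; i++ )
    {
        if ( pools[i].has_cpu0 )
        {
            pools[i].id = 0;            /* pool with CPU0 becomes cpupool0 */
            cpu0_in_dt_pool = true;
        }
        else
            pools[i].id = next_id++;
    }

    /* pool of unassigned CPUs is cpupool0 only if CPU0 is unassigned */
    unassigned_pool_id = cpu0_in_dt_pool ? next_id : 0;

    for ( i = 0; i < N_DT_POOLS; i++ )
        printf("dt pool %u -> cpupool%u\n", i, pools[i].id);
    printf("unassigned CPUs -> cpupool%u\n", unassigned_pool_id);

    return 0;
}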

 

