Re: [PATCH v2 00/17] xen: support per-cpupool scheduling granularity



On 01/12/2020 08:21, Juergen Gross wrote:
> Support scheduling granularity per cpupool. Setting the granularity is
> done via hypfs, which needed to gain dynamic entries for that
> purpose.
>
> Apart from the additional hypfs-related functionality, the main change
> for cpupools was the support for moving a domain to a new granularity,
> as this requires modifying the scheduling unit/vcpu relationship.
>
> I have tried to do the hypfs modifications in a rather generic way in
> order to be able to use the same infrastructure in other cases, too
> (e.g. for per-domain entries).
>
> The complete series has been tested by creating cpupools with different
> granularities and moving busy and idle domains between those.
>
> Changes in V2:
> - Added several new patches, especially for some further cleanups in
>   cpupool.c.
> - Completely reworked the locking scheme with dynamic directories:
>   locking of resources (cpupools in this series) is now done via new
>   callbacks which are called when traversing the hypfs tree. This
>   removes the need to add locking to each hypfs-related cpupool
>   function and it ensures data integrity across multiple callbacks.
> - Reordered the first few patches in order to have the already acked
>   pure cleanup patches first.
> - Addressed several comments.
>
> Juergen Gross (17):
>   xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
>   xen/cpupool: add missing bits for per-cpupool scheduling granularity
>   xen/cpupool: sort included headers in cpupool.c
>   xen/cpupool: switch cpupool id to unsigned
>   xen/cpupool: switch cpupool list to normal list interface
>   xen/cpupool: use ERR_PTR() for returning error cause from
>     cpupool_create()
>   xen/cpupool: support moving domain between cpupools with different
>     granularity
>   docs: fix hypfs path documentation
>   xen/hypfs: move per-node function pointers into a dedicated struct
>   xen/hypfs: pass real failure reason up from hypfs_get_entry()
>   xen/hypfs: add getsize() and findentry() callbacks to hypfs_funcs
>   xen/hypfs: add new enter() and exit() per node callbacks
>   xen/hypfs: support dynamic hypfs nodes
>   xen/hypfs: add support for id-based dynamic directories
>   xen/cpupool: add cpupool directories
>   xen/cpupool: add scheduling granularity entry to cpupool entries
>   xen/cpupool: make per-cpupool sched-gran hypfs node writable

Gitlab CI is fairly (but not completely) reliably hitting a failure in
the ARM randconfig build against this series only.

https://gitlab.com/xen-project/patchew/xen/-/pipelines/225445864 is one
example.

Error is:

cpupool.c:102:12: error: 'sched_gran_get' defined but not used
[-Werror=unused-function]
  102 | static int sched_gran_get(const char *str, enum sched_gran *mode)
      |            ^~~~~~~~~~~~~~
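
For anyone wondering why only randconfig trips this, here is a minimal
sketch of the usual shape of the problem.  The names and layout are
illustrative only (not the code from the series), and it assumes the
helper's only remaining caller sits behind CONFIG_HYPFS:

/* Illustrative sketch only -- not the cpupool.c from this series. */
#include <errno.h>
#include <string.h>

enum sched_gran { SCHED_GRAN_cpu, SCHED_GRAN_core, SCHED_GRAN_socket };

/* Static helper with no unconditional callers. */
static int sched_gran_get(const char *str, enum sched_gran *mode)
{
    if ( !strcmp(str, "cpu") )
        *mode = SCHED_GRAN_cpu;
    else if ( !strcmp(str, "core") )
        *mode = SCHED_GRAN_core;
    else if ( !strcmp(str, "socket") )
        *mode = SCHED_GRAN_socket;
    else
        return -EINVAL;

    return 0;
}

#ifdef CONFIG_HYPFS
/* Only caller: compiled out when the randconfig disables CONFIG_HYPFS,
 * leaving sched_gran_get() unused, and -Werror=unused-function makes
 * the resulting warning fatal. */
int cpupool_gran_write(const char *str)
{
    enum sched_gran gran;

    return sched_gran_get(str, &gran);
}
#endif

The usual fixes are to move the helper under the same #ifdef as its
callers, or to mark it as deliberately unused for configurations which
compile all of its callers out.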



Weirdly, there is a second diagnostic showing up which appears to be
unrelated and non-fatal, but is concerning nonetheless:

mem_access.c: In function 'p2m_mem_access_check':
mem_access.c:227:6: note: parameter passing for argument of type 'const
struct npfec' changed in GCC 9.1
  227 | bool p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct
npfec npfec)
      |      ^~~~~~~~~~~~~~~~~~~~

It appears to be related to
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88469 and is letting us
know that the ABI changed.  However, Xen is an embedded project with no
external linkage, so we can probably compile with -Wno-psabi and be done
with it.
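
For reference, the class of type the GCC 9 release notes describe for
that Arm ABI fix looks roughly like the sketch below (illustrative only,
not Xen's struct npfec): a struct containing a bit-field based on a
64-bit type, with no other member forcing 64-bit alignment, passed by
value.

/* Illustrative only -- not Xen's struct npfec.  With arm GCC >= 9.1 and
 * -Wpsabi enabled (the default), passing such a struct by value draws
 * the "parameter passing ... changed in GCC 9.1" note; building with
 * -Wno-psabi silences it. */
struct fault_info {
    unsigned long long kind:2; /* bit-field based on a 64-bit type */
    unsigned int valid:1;      /* nothing else needs 64-bit alignment */
};

static unsigned int classify(struct fault_info fi)
{
    return fi.valid ? (unsigned int)fi.kind : 0;
}

unsigned int example(void)
{
    struct fault_info fi = { .kind = 1, .valid = 1 };

    return classify(fi);
}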

~Andrew, in lieu of a real CI robot.



 

