
Re: [ImageBuilder v2 2/2] Add support for Xen boot-time cpupools



On Wed, 7 Sep 2022, Michal Orzel wrote:
> Introduce support for creating boot-time cpupools in the device tree and
> assigning them to dom0less domUs. Add the following options:
>  - CPUPOOL[number]="cpu@1,...,cpu@N scheduler" to specify the list of
>    cpu node names and the scheduler to be used to create the cpupool
>  - NUM_CPUPOOLS to specify the number of cpupools to create
>  - DOMU_CPUPOOL[number]="<id>" to specify the id of the cpupool to
>    assign to the domU
> 
> Example usage:
> CPUPOOL[0]="cpu@1,cpu@2 null"
> DOMU_CPUPOOL[0]=0
> NUM_CPUPOOLS=1
> 
> The above example will create a boot-time cpupool (id=0) containing two
> cpus (cpu@1 and cpu@2) and using the null scheduler. It will then assign
> the cpupool with id=0 to domU0.
> 
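For readers of the archive: with the example configuration above, the
script ends up generating device tree nodes roughly like the following (a
sketch; the actual phandle values are assigned at generation time, and the
&cpu1/&cpu2 labels stand in for the phandles given to /cpus/cpu@1 and
/cpus/cpu@2):

    chosen {
        cpupool_0 {
            compatible = "xen,cpupool";
            cpupool-cpus = <&cpu1 &cpu2>;
            cpupool-sched = "null";
        };
        domU0 {
            domain-cpupool = <&cpupool_0>;
            /* ... the rest of the usual dom0less domU properties ... */
        };
    };
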
> Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>

Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>


> ---
> Changes in v2:
> - make use of get_next_phandle
> - pass cpus' node names instead of paths to CPUPOOL
> - do not pass NUM_CPUPOOLS as an argument to add_device_tree_cpupools
> ---
>  README.md                | 10 +++++
>  scripts/uboot-script-gen | 79 ++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 89 insertions(+)
> 
> diff --git a/README.md b/README.md
> index bd9dac924b44..041818349954 100644
> --- a/README.md
> +++ b/README.md
> @@ -181,6 +181,9 @@ Where:
>    present. If set to 1, the VM can use PV drivers. Older Linux kernels
>    might break.
>  
> +- DOMU_CPUPOOL[number] specifies the id of the cpupool (created using the
> +  CPUPOOL[number] option, where number == id) that is assigned to the domU.
> +
>  - LINUX is optional but specifies the Linux kernel for when Xen is NOT
>    used.  To enable this set any LINUX\_\* variables and do NOT set the
>    XEN variable.
> @@ -223,6 +226,13 @@ Where:
>    include the public key in.  This can only be used with
>    FIT_ENC_KEY_DIR.  See the -u option below for more information.
>  
> +- CPUPOOL[number]="cpu@1,...,cpu@N scheduler"
> +  specifies the comma-separated list of cpu node names and the scheduler
> +  to be used to create a boot-time cpupool. If no scheduler is set, the
> +  Xen default one will be used.
> +
> +- NUM_CPUPOOLS specifies the number of boot-time cpupools to create.
> +
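
A side note for readers: omitting the scheduler simply means leaving it
out of the value, e.g.:

    CPUPOOL[0]="cpu@1,cpu@2"

in which case the script emits no cpupool-sched property and Xen falls
back to its default scheduler for that pool.
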
>  Then you can invoke uboot-script-gen as follows:
>  
>  ```
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index 18c0ce10afb4..1f8ab5ffd193 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -176,6 +176,80 @@ function add_device_tree_static_mem()
>      dt_set "$path" "xen,static-mem" "hex" "${cells[*]}"
>  }
>  
> +function add_device_tree_cpupools()
> +{
> +    local cpu
> +    local cpus
> +    local scheduler
> +    local cpu_list
> +    local phandle
> +    local cpu_phandles
> +    local i
> +    local j
> +
> +    i=0
> +    while test $i -lt $NUM_CPUPOOLS
> +    do
> +        cpus=$(echo ${CPUPOOL[$i]} | awk '{print $1}')
> +        scheduler=$(echo ${CPUPOOL[$i]} | awk '{print $NF}')
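
Worth spelling out for anyone reading the parsing above: when no scheduler
is given, ${CPUPOOL[$i]} holds a single whitespace-separated field, so
awk's $1 and $NF both return the cpu list. The "$scheduler" != "$cpus"
comparison further down relies on exactly that to skip emitting
cpupool-sched. A quick check in a shell:

    $ echo "cpu@1,cpu@2" | awk '{print $1}'
    cpu@1,cpu@2
    $ echo "cpu@1,cpu@2" | awk '{print $NF}'
    cpu@1,cpu@2
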
> +        cpu_phandles=
> +
> +        for cpu in ${cpus//,/ }
> +        do
> +            cpu="/cpus/$cpu"
> +
> +            # check if cpu exists
> +            if ! fdtget "${DEVICE_TREE}" "$cpu" "reg" &> /dev/null
> +            then
> +                echo "$cpu does not exist"
> +                cleanup_and_return_err
> +            fi
> +
> +            # check if cpu is already assigned
> +            if [[ "$cpu_list" == *"$cpu"* ]]
> +            then
> +                echo "$cpu already assigned to another cpupool"
> +                cleanup_and_return_err
> +            fi
> +
> +            # set phandle for a cpu if there is none
> +            if ! phandle=$(fdtget -t x "${DEVICE_TREE}" "$cpu" "phandle" 2> /dev/null)
> +            then
> +                get_next_phandle phandle
> +            fi
> +
> +            dt_set "$cpu" "phandle" "hex" "$phandle"
> +            cpu_phandles="$cpu_phandles $phandle"
> +            cpu_list="$cpu_list $cpu"
> +        done
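
For anyone reusing the phandle handling above elsewhere, the
reuse-or-allocate idiom boils down to something like this minimal sketch
(assuming dt_set wraps fdtput and get_next_phandle stores a fresh unused
value into the named variable, as the script's helpers do):

    # Reuse the node's existing phandle if it has one, else allocate one.
    if ! phandle=$(fdtget -t x "$dtb" "$node" "phandle" 2> /dev/null)
    then
        get_next_phandle phandle
    fi
    # Writing back an existing phandle is a harmless rewrite of the same
    # value, so the set can run unconditionally.
    fdtput -t x "$dtb" "$node" "phandle" "$phandle"
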
> +
> +        # create cpupool node
> +        get_next_phandle phandle
> +        dt_mknode "/chosen" "cpupool_$i"
> +        dt_set "/chosen/cpupool_$i" "phandle" "hex" "$phandle"
> +        dt_set "/chosen/cpupool_$i" "compatible" "str" "xen,cpupool"
> +        dt_set "/chosen/cpupool_$i" "cpupool-cpus" "hex" "$cpu_phandles"
> +
> +        if test "$scheduler" != "$cpus"
> +        then
> +            dt_set "/chosen/cpupool_$i" "cpupool-sched" "str" "$scheduler"
> +        fi
> +
> +        j=0
> +        while test $j -lt $NUM_DOMUS
> +        do
> +            # assign cpupool to domU
> +            if test "${DOMU_CPUPOOL[$j]}" -eq "$i"
> +            then
> +                dt_set "/chosen/domU$j" "domain-cpupool" "hex" "$phandle"
> +            fi
> +            j=$(( $j + 1 ))
> +        done
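
Note that nothing here limits a cpupool to a single domU: every domU whose
DOMU_CPUPOOL entry matches the current pool id gets the same phandle, so a
config like:

    DOMU_CPUPOOL[0]=0
    DOMU_CPUPOOL[1]=0

points both domUs' domain-cpupool at cpupool_0.
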
> +
> +        i=$(( $i + 1 ))
> +    done
> +}
> +
>  function xen_device_tree_editing()
>  {
>      dt_set "/chosen" "#address-cells" "hex" "0x2"
> @@ -252,6 +326,11 @@ function xen_device_tree_editing()
>          fi
>          i=$(( $i + 1 ))
>      done
> +
> +    if test "$NUM_CPUPOOLS" && test "$NUM_CPUPOOLS" -gt 0
> +    then
> +        add_device_tree_cpupools
> +    fi
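
A small shell note on the guard: test with a single non-option argument is
a non-empty-string check, so this also copes with configs that never set
NUM_CPUPOOLS at all, while the -gt 0 part skips the work for an explicit
zero. For example:

    $ unset NUM_CPUPOOLS
    $ test "$NUM_CPUPOOLS" && echo run || echo skip
    skip
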
>  }
>  
>  function linux_device_tree_editing()
> -- 
> 2.25.1
> 



 


Rackspace

Lists.xenproject.org is hosted with RackSpace, monitoring our
servers 24x7x365 and backed by RackSpace's Fanatical Support®.