
Re: [ImageBuilder] Add support for Xen boot-time cpupools


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Michal Orzel <michal.orzel@xxxxxxx>
  • Date: Wed, 7 Sep 2022 08:49:50 +0200
  • Cc: <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 07 Sep 2022 06:50:09 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Stefano,

On 07/09/2022 03:43, Stefano Stabellini wrote:
> 
> On Tue, 6 Sep 2022, Michal Orzel wrote:
>> Introduce support for creating boot-time cpupools in the device tree and
>> assigning them to dom0less domUs. Add the following options:
>>  - CPUPOOL[number]="cpu1_path,...,cpuN_path scheduler" to specify the
>>    list of cpus and the scheduler to be used to create the cpupool
>>  - NUM_CPUPOOLS to specify the number of cpupools to create
>>  - DOMU_CPUPOOL[number]="<id>" to specify the id of the cpupool to
>>    assign to domU
>>
>> Example usage:
>> CPUPOOL[0]="/cpus/cpu@1,/cpus/cpu@2 null"
>> DOMU_CPUPOOL[0]=0
>> NUM_CPUPOOLS=1
>>
>> The above example will create a boot-time cpupool (id=0) with two cpus
>> (cpu@1 and cpu@2) and the null scheduler. It will assign the cpupool with
>> id=0 to domU0.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>
> 
> Great patch in record time, thanks Michal!
> 
> 
> On the CPUPOOL string format: do you think we actually need the device
> tree path or could we get away with something like:
> 
> CPUPOOL[0]="cpu@1,cpu@2 null"
> 
> All the cpus have to be under the top-level /cpus node per the device
> tree spec, so maybe the node name should be enough?
> 
According to the spec, passing only the node names should be enough,
so I will modify it accordingly.
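
For concreteness, a minimal sketch of the conversion I have in mind
(cpu_node_to_path is a hypothetical name; the final helper may look
different):

    # Accept either a bare node name ("cpu@1") or a full path and
    # return the full path. Per the DT spec all cpu nodes sit under
    # the top-level /cpus node, so a bare name can simply be prefixed.
    function cpu_node_to_path()
    {
        local node="$1"
        case "$node" in
            /*) echo "$node" ;;
            *)  echo "/cpus/$node" ;;
        esac
    }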

> 
> 
>> ---
>>  README.md                | 10 +++++
>>  scripts/uboot-script-gen | 80 ++++++++++++++++++++++++++++++++++++++++
>>  2 files changed, 90 insertions(+)
>>
>> diff --git a/README.md b/README.md
>> index bd9dac924b44..44abb2193142 100644
>> --- a/README.md
>> +++ b/README.md
>> @@ -181,6 +181,9 @@ Where:
>>    present. If set to 1, the VM can use PV drivers. Older Linux kernels
>>    might break.
>>
>> +- DOMU_CPUPOOL[number] specifies the id of the cpupool (created using the
>> +  CPUPOOL[number] option, where number == id) that will be assigned to domU.
>> +
>>  - LINUX is optional but specifies the Linux kernel for when Xen is NOT
>>    used.  To enable this set any LINUX\_\* variables and do NOT set the
>>    XEN variable.
>> @@ -223,6 +226,13 @@ Where:
>>    include the public key in.  This can only be used with
>>    FIT_ENC_KEY_DIR.  See the -u option below for more information.
>>
>> +- CPUPOOL[number]="cpu1_path,...,cpuN_path scheduler"
>> +  specifies the list of cpus (separated by commas) and the scheduler to be
>> +  used to create a boot-time cpupool. If no scheduler is set, the Xen default
>> +  one will be used.
>> +
>> +- NUM_CPUPOOLS specifies the number of boot-time cpupools to create.
>> +
>>  Then you can invoke uboot-script-gen as follows:
>>
>>  ```
>> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
>> index 18c0ce10afb4..2e1c80a92ce1 100755
>> --- a/scripts/uboot-script-gen
>> +++ b/scripts/uboot-script-gen
>> @@ -176,6 +176,81 @@ function add_device_tree_static_mem()
>>      dt_set "$path" "xen,static-mem" "hex" "${cells[*]}"
>>  }
>>
>> +function add_device_tree_cpupools()
>> +{
>> +    local num=$1
>> +    local phandle_next="0xfffffff"
> 
> I think phandle_next is a good idea, and I would make it a global
> variable at the top of the uboot-script-gen file or at the top of
> scripts/common.
> 
> The highest valid phandle is actually 0xfffffffe.
> 
This was my original idea, so I will do the following to handle phandles properly:
- create a global variable phandle_next in scripts/common, set to 0xfffffffe
- create a function get_next_phandle in scripts/common that returns the next
  available phandle, properly formatted in hex, and decrements phandle_next

I will push this as a prerequisite patch for boot-time cpupools.
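
Roughly along these lines (a sketch only; the prerequisite patch may
differ in the details):

    # Global, at the top of scripts/common.
    phandle_next=0xfffffffe

    # Store the next available phandle, formatted in hex, in the
    # variable named by $1, then decrement the counter so every call
    # yields a unique value.
    function get_next_phandle()
    {
        local varname="$1"
        eval $varname="$(printf "0x%x" $phandle_next)"
        phandle_next=$(( phandle_next - 1 ))
    }

add_device_tree_cpupools() would then call "get_next_phandle phandle"
instead of managing a local counter.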

> 
> 
>> +    local cpus
>> +    local scheduler
>> +    local cpu_list
>> +    local phandle
>> +    local cpu_phandles
>> +    local i
>> +    local j
>> +
>> +    i=0
>> +    while test $i -lt $num
> 
> I don't think there is much value in passing NUM_CPUPOOLS as argument to
> this function given that the function is also accessing CPUPOOL[]
> directly. I would remove $num and just do:
> 
>     while test $i -lt $NUM_CPUPOOLS
ok
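
So the loop head becomes (sketch; body elided):

    i=0
    while test $i -lt $NUM_CPUPOOLS
    do
        # ... create cpupool $i from ${CPUPOOL[$i]} ...
        i=$(( i + 1 ))
    done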

~Michal
