
Re: [PATCH v7 05/12] arm/sve: save/restore SVE context switch


  • To: Julien Grall <julien@xxxxxxx>
  • From: Luca Fancellu <Luca.Fancellu@xxxxxxx>
  • Date: Thu, 25 May 2023 10:01:16 +0000
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Thu, 25 May 2023 10:01:37 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v7 05/12] arm/sve: save/restore SVE context switch


> On 25 May 2023, at 10:09, Julien Grall <julien@xxxxxxx> wrote:
> 
> Hi Luca,
> 
> On 23/05/2023 08:43, Luca Fancellu wrote:
>> +int sve_context_init(struct vcpu *v)
>> +{
>> +    unsigned int sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>> +    uint64_t *ctx = _xzalloc(sve_zreg_ctx_size(sve_vl_bits) +
>> +                             sve_ffrreg_ctx_size(sve_vl_bits),
>> +                             L1_CACHE_BYTES);
>> +
>> +    if ( !ctx )
>> +        return -ENOMEM;
>> +
>> +    /*
>> +     * Point to the end of Z0-Z31 memory, just before FFR memory, to be
>> +     * kept in sync with sve_context_free()
> 
> Nit: Missing a full stop.

I’ll fix it.

> 
>> +     */
>> +    v->arch.vfp.sve_zreg_ctx_end = ctx +
>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>> +
>> +    v->arch.zcr_el2 = vl_to_zcr(sve_vl_bits);
>> +
>> +    return 0;
>> +}
>> +
>> +void sve_context_free(struct vcpu *v)
>> +{
>> +    unsigned int sve_vl_bits;
>> +
>> +    if ( !v->arch.vfp.sve_zreg_ctx_end )
>> +        return;
>> +
>> +    sve_vl_bits = sve_decode_vl(v->domain->arch.sve_vl);
>> +
>> +    /*
>> +    * Point to the end of Z0-Z31 memory, just before FFR memory, to be kept
>> +    * in sync with sve_context_init()
>> +    */
> 
> The spacing looks a bit odd in this comment. Did you miss an extra space?
> 
> Also, I notice this comment is exactly the same as the one on top of
> sve_context_init(). I think this is a bit misleading because the logic is
> different. I would suggest the following:
> 
> "Currently points to the end of Z0-Z31 memory which is not the start of the 
> buffer. To be kept in sync with the sve_context_init()."
> 
> Lastly, nit: Missing a full stop.

Ok, I’ll change it.
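
Just to be sure we mean the same thing, this is the layout the new comment
will describe (a sketch only; the sizes are the ones computed by
sve_zreg_ctx_size() and sve_ffrreg_ctx_size()):

/*
 * ctx ---------------------------> +------------------+
 *                                  |      Z0-Z31      |
 * v->arch.vfp.sve_zreg_ctx_end --> +------------------+
 *                                  |       FFR        |
 *                                  +------------------+
 */

so sve_context_init() advances the pointer from the start of the allocation
and sve_context_free() walks it back by the same amount before freeing.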

> 
>> +    v->arch.vfp.sve_zreg_ctx_end -=
>> +        (sve_zreg_ctx_size(sve_vl_bits) / sizeof(uint64_t));
>> +
>> +    XFREE(v->arch.vfp.sve_zreg_ctx_end);
>> +}
>> +
> 
> [...]
> 
>> diff --git a/xen/arch/arm/include/asm/arm64/vfp.h b/xen/arch/arm/include/asm/arm64/vfp.h
>> index e6e8c363bc16..4aa371e85d26 100644
>> --- a/xen/arch/arm/include/asm/arm64/vfp.h
>> +++ b/xen/arch/arm/include/asm/arm64/vfp.h
>> @@ -6,7 +6,19 @@
>>  struct vfp_state
>>  {
>> +    /*
>> +     * When SVE is enabled for the guest, fpregs memory will be used to
>> +     * save/restore P0-P15 registers, otherwise it will be used for the
>> +     * V0-V31 registers.
>> +     */
>>      uint64_t fpregs[64] __vfp_aligned;
>> +    /*
>> +     * When SVE is enabled for the guest, sve_zreg_ctx_end points to memory
>> +     * where Z0-Z31 registers and FFR can be saved/restored, it points at
>> +     * the end of the Z0-Z31 space and at the beginning of the FFR space,
>> +     * it's done like that to ease the save/restore assembly operations.
>> +     */
>> +    uint64_t *sve_zreg_ctx_end;
> 
> Sorry I only noticed now. But shouldn't this be protected with #ifdef 
> CONFIG_SVE? Same...
> 
>>      register_t fpcr;
>>      register_t fpexc32_el2;
>>      register_t fpsr;
>> diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
>> index 331da0f3bcc3..814652d92568 100644
>> --- a/xen/arch/arm/include/asm/domain.h
>> +++ b/xen/arch/arm/include/asm/domain.h
>> @@ -195,6 +195,8 @@ struct arch_vcpu
>>      register_t tpidrro_el0;
>>  
>>      /* HYP configuration */
>> +    register_t zcr_el1;
>> +    register_t zcr_el2;
> 
> ... here.

Sure, I can protect them. It was done on purpose before to avoid #ifdefs,
but I think saving space is better here, and there won’t be any use of them
when the config is off.
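
Something along these lines (a sketch only, using the CONFIG_SVE name from
your comment, so the actual guard name is an assumption at this point):

struct vfp_state
{
    uint64_t fpregs[64] __vfp_aligned;
#ifdef CONFIG_SVE
    /* Points past the end of the Z0-Z31 area, at the start of FFR. */
    uint64_t *sve_zreg_ctx_end;
#endif
    register_t fpcr;
    register_t fpexc32_el2;
    register_t fpsr;
};

and in struct arch_vcpu:

#ifdef CONFIG_SVE
    register_t zcr_el1;
    register_t zcr_el2;
#endif

so a build without SVE support doesn’t carry the unused pointer and
registers.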


> 
>>      register_t cptr_el2;
>>      register_t hcr_el2;
>>      register_t mdcr_el2;
> 
> Cheers,
> 
> -- 
> Julien Grall


 

