Re: [XEN PATCH v7 12/20] xen/arm: ffa: send guest events to Secure Partitions


  • To: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
  • From: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Date: Thu, 2 Mar 2023 07:35:36 +0000
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Marc Bonnici <Marc.Bonnici@xxxxxxx>, Achin Gupta <Achin.Gupta@xxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>
  • Delivery-date: Thu, 02 Mar 2023 07:35:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [XEN PATCH v7 12/20] xen/arm: ffa: send guest events to Secure Partitions

Hi Jens,

> On 1 Mar 2023, at 17:45, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
> 
> Hi,
> 
> On Wed, Mar 1, 2023 at 1:58 PM Bertrand Marquis
> <Bertrand.Marquis@xxxxxxx> wrote:
>> 
>> Hi Jens,
>> 
>>> On 1 Mar 2023, at 11:16, Jens Wiklander <jens.wiklander@xxxxxxxxxx> wrote:
>>> 
>>> Hi Bertrand,
>>> 
>>> On Tue, Feb 28, 2023 at 5:49 PM Bertrand Marquis
>>> <Bertrand.Marquis@xxxxxxx> wrote:
>>>> 
>>>> Hi Jens,
>>>> 
>>>>> On 22 Feb 2023, at 16:33, Jens Wiklander <jens.wiklander@xxxxxxxxxx> 
>>>>> wrote:
>>>>> 
>>>>> The FF-A specification defines framework messages sent as direct
>>>>> requests when certain events occur, for instance when a VM (guest) is
>>>>> created or destroyed. Only SPs which have subscribed to these events
>>>>> will receive them. An SP can subscribe to these messages in its
>>>>> partition properties.
>>>>> 
>>>>> Add a check that the SP supports the needed FF-A features
>>>>> FFA_PARTITION_INFO_GET and FFA_RX_RELEASE.
>>>>> 
>>>>> The partition properties of each SP are retrieved with
>>>>> FFA_PARTITION_INFO_GET which returns the information in our RX buffer.
>>>>> Using FFA_PARTITION_INFO_GET changes the owner of the RX buffer to the
>>>>> caller (us), so once we're done with the buffer it must be released
>>>>> using FFA_RX_RELEASE before another call can be made.
>>>>> 
>>>>> Signed-off-by: Jens Wiklander <jens.wiklander@xxxxxxxxxx>
>>>>> ---
>>>>> xen/arch/arm/tee/ffa.c | 191 ++++++++++++++++++++++++++++++++++++++++-
>>>>> 1 file changed, 190 insertions(+), 1 deletion(-)
>>>>> 
>>>>> diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
>>>>> index 07dd5c36d54b..f1b014b6c7f4 100644
>>>>> --- a/xen/arch/arm/tee/ffa.c
>>>>> +++ b/xen/arch/arm/tee/ffa.c
>>>>> @@ -140,6 +140,14 @@
>>>>> #define FFA_MSG_SEND                    0x8400006EU
>>>>> #define FFA_MSG_POLL                    0x8400006AU
>>>>> 
>>>>> +/* Partition information descriptor */
>>>>> +struct ffa_partition_info_1_1 {
>>>>> +    uint16_t id;
>>>>> +    uint16_t execution_context;
>>>>> +    uint32_t partition_properties;
>>>>> +    uint8_t uuid[16];
>>>>> +};
>>>>> +
>>>>> struct ffa_ctx {
>>>>>   uint32_t guest_vers;
>>>>>   bool interrupted;
>>>>> @@ -148,6 +156,12 @@ struct ffa_ctx {
>>>>> /* Negotiated FF-A version to use with the SPMC */
>>>>> static uint32_t ffa_version __ro_after_init;
>>>>> 
>>>>> +/* SPs subscribing to VM_CREATE and VM_DESTROYED events */
>>>>> +static uint16_t *subscr_vm_created __read_mostly;
>>>>> +static unsigned int subscr_vm_created_count __read_mostly;
>>>> 
>>>> In the spec the number is returned in w2, so you should use uint32_t here.
>>> 
>>> I don't understand. This value is increased for each SP which has the
>>> property set in the Partition information descriptor.
>> 
>> Using generic types should be avoided when possible.
> 
> I'm usually of the opposite opinion: fixed-width integers should only
> be used when there's a good reason to do so. However, if this is the
> Xen philosophy I can replace all those unsigned int with uint32_t if
> that's preferable.

Safety standards usually require the use of explicitly sized types to avoid
compiler-dependent behaviour.

> 
>> Here this is a subset of the number of partitions, which is uint32_t (a wX
>> register), so I think this would be the logical type for this.
> 
> The maximum number is actually UINT16_MAX so an observant reader might
> wonder why the uint32_t type was used here.

Switching to uint16_t might make sense then, but you will have to check that
you are not going over UINT16_MAX in the code, as you get a uint32_t back from
the call.
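
Something along these lines in init_subscribers() is what I have in mind (just
a sketch; I am assuming the NULL-UUID query that counts all partitions, and the
exact handling on overflow is up to you):

    uint32_t count;
    int32_t res;

    /* Assumed NULL-UUID query: count/describe all partitions */
    res = ffa_partition_info_get(0, 0, 0, 0, 0, &count);
    if ( res )
        return false;

    if ( count > UINT16_MAX )
    {
        /* SP IDs and our subscriber counters are 16-bit, so give up */
        ffa_rx_release();   /* we own the RX buffer after a successful call */
        return false;
    }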


Cheers
Bertrand

> 
>> 
>>> 
>>>> 
>>>>> +static uint16_t *subscr_vm_destroyed __read_mostly;
>>>>> +static unsigned int subscr_vm_destroyed_count __read_mostly;
>>>> 
>>>> Same here
>>>> 
>>>>> +
>>>>> /*
>>>>> * Our rx/tx buffers shared with the SPMC.
>>>>> *
>>>>> @@ -237,6 +251,72 @@ static int32_t ffa_rxtx_map(register_t tx_addr, register_t rx_addr,
>>>>>   return ffa_simple_call(fid, tx_addr, rx_addr, page_count, 0);
>>>>> }
>>>>> 
>>>>> +static int32_t ffa_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
>>>>> +                                      uint32_t w4, uint32_t w5,
>>>>> +                                      uint32_t *count)
>>>>> +{
>>>>> +    const struct arm_smccc_1_2_regs arg = {
>>>>> +        .a0 = FFA_PARTITION_INFO_GET,
>>>>> +        .a1 = w1,
>>>>> +        .a2 = w2,
>>>>> +        .a3 = w3,
>>>>> +        .a4 = w4,
>>>>> +        .a5 = w5,
>>>>> +    };
>>>>> +    struct arm_smccc_1_2_regs resp;
>>>>> +    uint32_t ret;
>>>>> +
>>>>> +    arm_smccc_1_2_smc(&arg, &resp);
>>>>> +
>>>>> +    ret = get_ffa_ret_code(&resp);
>>>>> +    if ( !ret )
>>>>> +        *count = resp.a2;
>>>>> +
>>>>> +    return ret;
>>>>> +}
>>>>> +
>>>>> +static int32_t ffa_rx_release(void)
>>>>> +{
>>>>> +    return ffa_simple_call(FFA_RX_RELEASE, 0, 0, 0, 0);
>>>>> +}
>>>>> +
>>>>> +static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
>>>>> +                                      uint8_t msg)
>>>> 
>>>> This function is actually only usable to send framework created/destroyed
>>>> messages so the function name should be adapted to be less generic.
>>>> 
>>>> ffa_send_vm_events ?
>>>> 
>>>> unless you want to modify it later to send more framework messages ?
>>> 
>>> That was the plan, but that may never happen. I'll rename it to
>>> ffa_send_vm_event() since we're only sending one event at a time.
>>> 
>>>> 
>>>>> +{
>>>>> +    uint32_t exp_resp = FFA_MSG_FLAG_FRAMEWORK;
>>>>> +    int32_t res;
>>>>> +
>>>>> +    if ( msg == FFA_MSG_SEND_VM_CREATED )
>>>>> +        exp_resp |= FFA_MSG_RESP_VM_CREATED;
>>>>> +    else if ( msg == FFA_MSG_SEND_VM_DESTROYED )
>>>>> +        exp_resp |= FFA_MSG_RESP_VM_DESTROYED;
>>>>> +    else
>>>>> +        return FFA_RET_INVALID_PARAMETERS;
>>>>> +
>>>>> +    do {
>>>>> +        const struct arm_smccc_1_2_regs arg = {
>>>>> +            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
>>>>> +            .a1 = sp_id,
>>>>> +            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
>>>>> +            .a5 = vm_id,
>>>>> +        };
>>>>> +        struct arm_smccc_1_2_regs resp;
>>>>> +
>>>>> +        arm_smccc_1_2_smc(&arg, &resp);
>>>>> +        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
>>>>> +        {
>>>>> +            /*
>>>>> +             * This is an invalid response, likely due to some error in the
>>>>> +             * implementation of the ABI.
>>>>> +             */
>>>>> +            return FFA_RET_INVALID_PARAMETERS;
>>>>> +        }
>>>>> +        res = resp.a3;
>>>>> +    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
>>>> 
>>>> We might end up in an infinite loop here or increase interrupt response time.
>>>> In the general case we could return that code directly to the VM but here we
>>>> are in the VM creation/destroy path so we cannot do that.
>>>> 
>>>> Maybe in debug mode at least we should have a retry counter here for now
>>>> with a print ?
>>> 
>>> OK, I'll add something.
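
For illustration, a bounded retry along these lines is roughly what I had in
mind (the limit of 10 and the error message are arbitrary):

    unsigned int retries = 0;

    do {
        const struct arm_smccc_1_2_regs arg = {
            .a0 = FFA_MSG_SEND_DIRECT_REQ_32,
            .a1 = sp_id,
            .a2 = FFA_MSG_FLAG_FRAMEWORK | msg,
            .a5 = vm_id,
        };
        struct arm_smccc_1_2_regs resp;

        arm_smccc_1_2_smc(&arg, &resp);
        if ( resp.a0 != FFA_MSG_SEND_DIRECT_RESP_32 || resp.a2 != exp_resp )
            return FFA_RET_INVALID_PARAMETERS;

        res = resp.a3;
        /* Arbitrary cap so a misbehaving SP cannot keep us here forever */
        if ( (res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY) &&
             ++retries > 10 )
        {
            printk(XENLOG_ERR "ffa: giving up on VM event after %u retries\n",
                   retries);
            break;
        }
    } while ( res == FFA_RET_INTERRUPTED || res == FFA_RET_RETRY );
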
>>> 
>>>> 
>>>>> +
>>>>> +    return res;
>>>>> +}
>>>>> +
>>>>> static uint16_t get_vm_id(const struct domain *d)
>>>>> {
>>>>>   /* +1 since 0 is reserved for the hypervisor in FF-A */
>>>>> @@ -371,6 +451,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
>>>>> static int ffa_domain_init(struct domain *d)
>>>>> {
>>>>>   struct ffa_ctx *ctx;
>>>>> +    unsigned int n;
>>>>> +    unsigned int m;
>>>>> +    unsigned int c_pos;
>>>>> +    int32_t res;
>>>>> 
>>>>>    /*
>>>>>     * We can't use that last possible domain ID or get_vm_id() would cause
>>>>> @@ -383,24 +467,121 @@ static int ffa_domain_init(struct domain *d)
>>>>>   if ( !ctx )
>>>>>       return -ENOMEM;
>>>>> 
>>>>> +    for ( n = 0; n < subscr_vm_created_count; n++ )
>>>>> +    {
>>>>> +        res = ffa_direct_req_send_vm(subscr_vm_created[n], get_vm_id(d),
>>>>> +                                     FFA_MSG_SEND_VM_CREATED);
>>>>> +        if ( res )
>>>>> +        {
>>>>> +            printk(XENLOG_ERR "ffa: Failed to report creation of vm_id 
>>>>> %u to  %u: res %d\n",
>>>>> +                   get_vm_id(d), subscr_vm_created[n], res);
>>>> 
>>>> In this function, get_vm_id(d) will not change, so I would suggest storing
>>>> it in a local variable instead of calling get_vm_id each time.
>>> 
>>> OK
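
For illustration, the creation loop would then look roughly like this (sketch
only, the lookup is simply hoisted out of the loop):

    uint16_t vm_id = get_vm_id(d);

    for ( n = 0; n < subscr_vm_created_count; n++ )
    {
        res = ffa_direct_req_send_vm(subscr_vm_created[n], vm_id,
                                     FFA_MSG_SEND_VM_CREATED);
        /* ... error handling as in the patch ... */
    }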
>>> 
>>>> 
>>>>> +            c_pos = n;
>>>>> +            goto err;
>>>>> +        }
>>>>> +    }
>>>>> +
>>>>>   d->arch.tee = ctx;
>>>>> 
>>>>>   return 0;
>>>>> +
>>>>> +err:
>>>>> +    /* Undo any already sent vm created messages */
>>>>> +    for ( n = 0; n < c_pos; n++ )
>>>>> +        for ( m = 0; m < subscr_vm_destroyed_count; m++ )
>>>>> +            if ( subscr_vm_destroyed[m] == subscr_vm_created[n] )
>>>>> +                ffa_direct_req_send_vm(subscr_vm_destroyed[n], get_vm_id(d),
>>>>> +                                       FFA_MSG_SEND_VM_DESTROYED);
>>>>> +
>>>>> +    return -ENOMEM;
>>>> 
>>>> The VM creation is not failing due to missing memory here.
>>>> We need to find a better error code.
>>>> Maybe ENOTCONN ?
>>>> I am open to ideas here :-)
>>> 
>>> That makes sense, I'll change it to ENOTCONN.
>>> 
>>>> 
>>>>> }
>>>>> 
>>>>> /* This function is supposed to undo what ffa_domain_init() has done */
>>>>> static int ffa_relinquish_resources(struct domain *d)
>>>>> {
>>>>>   struct ffa_ctx *ctx = d->arch.tee;
>>>>> +    unsigned int n;
>>>>> +    int32_t res;
>>>>> 
>>>>>   if ( !ctx )
>>>>>       return 0;
>>>>> 
>>>>> +    for ( n = 0; n < subscr_vm_destroyed_count; n++ )
>>>>> +    {
>>>>> +        res = ffa_direct_req_send_vm(subscr_vm_destroyed[n], get_vm_id(d),
>>>>> +                                     FFA_MSG_SEND_VM_DESTROYED);
>>>>> +
>>>>> +        if ( res )
>>>>> +            printk(XENLOG_ERR "ffa: Failed to report destruction of 
>>>>> vm_id %u to  %u: res %d\n",
>>>>> +                   get_vm_id(d), subscr_vm_destroyed[n], res);
>>>>> +    }
>>>>> +
>>>>>   XFREE(d->arch.tee);
>>>>> 
>>>>>   return 0;
>>>>> }
>>>>> 
>>>>> +static bool init_subscribers(void)
>>>>> +{
>>>>> +    struct ffa_partition_info_1_1 *fpi;
>>>>> +    bool ret = false;
>>>>> +    uint32_t count;
>>>>> +    int e;
>>>>> +    uint32_t n;
>>>>> +    uint32_t c_pos;
>>>>> +    uint32_t d_pos;
>>>>> +
>>>>> +    if ( ffa_version < FFA_VERSION_1_1 )
>>>>> +        return true;
>>>> 
>>>> Correct me if I am wrong, but with version 1.0 we cannot retrieve the count;
>>>> could we take the slow path and do a first loop on info_get until we get an
>>>> error?
>>> 
>>> Sending the events is not supported in 1.0 so there's nothing to
>>> record in that case.
>> 
>> Please add a comment here to say that subscribers are only supported after 1.1
>> and also mention it in the commit message.
> 
> OK.
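
Something as simple as this at the top of init_subscribers() would be enough
(the wording is only a suggestion):

    /*
     * Subscribing to VM_CREATED/VM_DESTROYED events is only possible from
     * FF-A 1.1, so with 1.0 there are no subscribers to record.
     */
    if ( ffa_version < FFA_VERSION_1_1 )
        return true;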
> 
> Thanks,
> Jens
