
Re: [Xen-devel] [PATCH v4 05/10] xen/arm: optee: add std call handling





On 20/03/2019 16:14, Volodymyr Babchuk wrote:

Hi Julien,

Hi Volodymyr,

Julien Grall writes:

[...]
       struct dt_device_node *node;
@@ -48,12 +82,25 @@ static bool optee_probe(void)
            (uint32_t)resp.a3 != OPTEE_MSG_UID_3 )
           return false;
   +    /* Read number of threads */
+    arm_smccc_smc(OPTEE_SMC_GET_CONFIG, OPTEE_SMC_CONFIG_NUM_THREADS, &resp);
+    if ( resp.a0 == OPTEE_SMC_RETURN_OK )

Out of interest, when was this call added?

It is under review right now. We have reached agreement that this call
is needed. I believe it will be merged into OP-TEE before I send v5
of this series.

Please mention it after --- so we don't forget to check the state of the new call before merging to Xen.


+    {
+        max_optee_threads = resp.a1;
+        printk(XENLOG_DEBUG "OP-TEE supports %u threads.\n", max_optee_threads);

extra NIT: I would use XENLOG_INFO rather than XENLOG_DEBUG. This is
useful information to have in a bug report.

Regarding the message, what matters is the number of threads available to
a guest. So I would rework it to make it more meaningful for a user
who does not know the internals.

You might also want to move the message out of the if. So you have the
message even when OP-TEE does not support the SMC.

In that other case I don't know how many threads OP-TEE really
supports... I think I will need to rephrase that message. Or, better,
I'll add another message like "Suggesting that OP-TEE supports %d
threads per guest".

Was there any OP-TEE release containing your virtualization patches?

Depending on the answer, I would consider mandating OPTEE_SMC_CONFIG_NUM_THREADS so we don't have to deal with a default value in Xen.

Also, should we expose this call to the guest as well?
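Just to illustrate (this is not part of the patch), the probe path could end up looking roughly like this if a fallback is kept instead of mandating the SMC; DEFAULT_OPTEE_THREADS is a hypothetical constant picked for the sketch:

/* Hypothetical fallback, only needed if OPTEE_SMC_CONFIG_NUM_THREADS stays optional. */
#define DEFAULT_OPTEE_THREADS 16

    /* Read number of threads */
    arm_smccc_smc(OPTEE_SMC_GET_CONFIG, OPTEE_SMC_CONFIG_NUM_THREADS, &resp);
    if ( resp.a0 == OPTEE_SMC_RETURN_OK )
        max_optee_threads = resp.a1;
    else
        /* Older OP-TEE without the config SMC: fall back to a guess. */
        max_optee_threads = DEFAULT_OPTEE_THREADS;

    /*
     * Printed unconditionally and at INFO level, so the value shows up in
     * bug reports even when OP-TEE does not implement the SMC.
     */
    printk(XENLOG_INFO "OP-TEE supports %u threads per guest.\n",
           max_optee_threads);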


[...]

+static struct optee_std_call *get_std_call(struct optee_domain *ctx,
+                                           int thread_id)
+{
+    struct optee_std_call *call;
+
+    spin_lock(&ctx->lock);
+    list_for_each_entry( call, &ctx->call_list, list )
+    {
+        if ( call->optee_thread_id == thread_id )
+        {
+            if ( call->in_flight )
+            {
+                gdprintk(XENLOG_WARNING, "Guest tries to execute call which is already in flight\n");

NIT: Missing full stop.

Also, the line is over 80 characters. While we don't want to split in
the middle of the message (so ack/grep can be used easily), you will
want to split the line after the comma.
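For instance, just to show the suggested split (string kept on one line so it stays greppable, full stop added):

                gdprintk(XENLOG_WARNING,
                         "Guest tries to execute call which is already in flight.\n");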

+                goto out;
+            }
+            call->in_flight = true;
+            map_xen_arg(call);

NIT: Does this need to be done with the lock taken?

unmap_xen_arg() is also called while holding the lock. Otherwise there
can be a race with another CPU.

[...]


+            spin_unlock(&ctx->lock);
+
+            return call;
+        }
+    }
+
+out:
+    spin_unlock(&ctx->lock);
+
+    return NULL;
+}
+
+static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
+{
+    spin_lock(&ctx->lock);
+    ASSERT(call->in_flight);
+    unmap_xen_arg(call);

Same question for the unmap.

Yeah, in normal circumstances the guest should not try to resume a call on
another vCPU, because we haven't returned from the original call on the
current vCPU. But what if the guest tries to do this? There is a chance
that the current CPU will unmap a buffer that was mapped by another CPU an
instant ago.

Wasn't that the point of having the in_flight field? As soon as you set in_flight to true, only one CPU can have the optee_std_call structure in hand.

So, as long as you map/unmap within the section protected by in_flight, you are protected against any race. The lock is only here to protect the in_flight field and the list. Did I miss anything?
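To make that concrete, here is a rough sketch (reusing only the helpers visible in this hunk, not a drop-in replacement) where the lock guards just the list and in_flight, and the map/unmap sits in the in_flight-protected section:

static struct optee_std_call *get_std_call(struct optee_domain *ctx,
                                           int thread_id)
{
    struct optee_std_call *call, *found = NULL;

    spin_lock(&ctx->lock);
    list_for_each_entry( call, &ctx->call_list, list )
    {
        if ( call->optee_thread_id == thread_id )
        {
            if ( call->in_flight )
            {
                gdprintk(XENLOG_WARNING,
                         "Guest tries to execute call which is already in flight.\n");
                break;
            }
            /* Claim the call while holding the lock... */
            call->in_flight = true;
            found = call;
            break;
        }
    }
    spin_unlock(&ctx->lock);

    if ( !found )
        return NULL;

    /*
     * ...but map the argument page outside of it: once in_flight is set,
     * no other CPU can claim this call, so no lock is needed here.
     */
    map_xen_arg(found);

    return found;
}

static void put_std_call(struct optee_domain *ctx, struct optee_std_call *call)
{
    ASSERT(call->in_flight);

    /* Still covered by in_flight, so the unmap does not need the lock. */
    unmap_xen_arg(call);

    spin_lock(&ctx->lock);
    call->in_flight = false;
    spin_unlock(&ctx->lock);
}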

Cheers,

--
Julien Grall
