Re: [PATCH v6 3/6] xen/arm: ffa: Introduce VM to VM support



Hi Bertrand,

On 22/05/2025 16:08, Bertrand Marquis wrote:
diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
index dfa0b23eaf38..66ea1860e97a 100644
--- a/xen/arch/arm/tee/ffa_partinfo.c
+++ b/xen/arch/arm/tee/ffa_partinfo.c
@@ -150,6 +150,67 @@ out:
      return ret;
  }
+static int32_t ffa_get_vm_partinfo(uint32_t *vm_count, void *dst_buf,
+                                   void *end_buf, uint32_t dst_size)
+{
+    struct ffa_ctx *curr_ctx = current->domain->arch.tee;
+    struct ffa_ctx *dest_ctx, *tmp;
+    uint32_t count = 0;
+
+    /*
+     * There could potentially be a lot of VMs in the system and we could
+     * hold the CPU for long here.
+     * Right now there is no solution in FF-A specification to split
+     * the work in this case.
+     * TODO: Check how we could delay the work or have preemption checks.
+     */
+    list_for_each_entry_safe(dest_ctx, tmp, &ffa_ctx_head, ctx_list)

Looking at this code again, I am a bit puzzled that we don't seem to take any lock, yet we use list_for_each_entry_safe().

I was under the impression that list_for_each_entry_safe() is meant for the case where the loop body itself deletes entries. It does not protect against a deletion happening concurrently on another core.
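That is, something along these lines (a minimal sketch; the predicate
ctx_is_dying() is hypothetical):

    struct ffa_ctx *ctx, *tmp;

    /* _safe caches the next entry in 'tmp', so 'ctx' may be removed. */
    list_for_each_entry_safe(ctx, tmp, &ffa_ctx_head, ctx_list)
    {
        if ( ctx_is_dying(ctx) )       /* hypothetical predicate */
            list_del(&ctx->ctx_list);  /* only safe vs. this loop itself */
    }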

Did I misunderstand the logic? If not, we probably want to use a read lock here (i.e. an rwlock in preference to the existing spinlock).
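In other words, something along the lines of (a sketch, assuming we introduce
an rwlock, e.g. ffa_ctx_list_lock, to protect ffa_ctx_head):

    static DEFINE_RWLOCK(ffa_ctx_list_lock);

    /* Readers only walk the list, so the plain iterator is enough. */
    read_lock(&ffa_ctx_list_lock);
    list_for_each_entry(dest_ctx, &ffa_ctx_head, ctx_list)
    {
        /* ... copy the partition info into dst_buf ... */
    }
    read_unlock(&ffa_ctx_list_lock);

with the writers (list addition/removal on domain creation/destruction)
taking write_lock().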

Cheers,

--
Julien Grall
