
Re: [XEN v4] xen/arm64: io: Decode ldr/str post-indexing instructions



Hi Ayan,

Below are some more comments on a few issues I noticed while reviewing your other patch yesterday.

On 25/01/2022 21:18, Ayan Kumar Halder wrote:
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 729287e37c..b9c15e1fe7 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -106,14 +106,29 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
          .gpa = gpa,
          .dabt = dabt
      };
+    int rc;
+    union ldr_str_instr_class instr = {0};
      ASSERT(hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);

+    /*
+     * Armv8 processor does not provide a valid syndrome for post-indexing
+     * ldr/str instructions. So in order to process these instructions,
+     * Xen must decode them.
+     */
+    if ( !info.dabt.valid )
+    {
+        rc = decode_instruction(regs, &info.dabt, &instr);
+        if ( rc )
+        {
+            gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
+            return IO_ABORT;

Sorry, I should have noticed this earlier.

If we return IO_ABORT here, it means Xen will inject an abort into the domain. However, an I/O may trap in Xen for other reasons. One example is when Xen modifies the P2M: it has to temporarily remove the mapping and then recreate it. This leaves a small window during which an access may trap.

In this situation, we don't care that the instruction syndrome is invalid, so it would be wrong to inject an abort into the domain. What we want instead is to check whether another part of Xen can handle the access.

One possibility would be to return IO_UNHANDLED here. However... that means we would end up decoding the instruction even when it is not necessary. So I think we want to move the decode to after find_mmio_handler() has succeeded.
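
Roughly something like this (untested and only to illustrate the idea, reusing the names from your patch):

    handler = find_mmio_handler(v->domain, info.gpa);
    if ( !handler )
    {
        rc = try_fwd_ioserv(regs, v, &info);
        if ( rc == IO_HANDLED )
            return handle_ioserv(regs, v);

        return rc;
    }

    /* The access targets an emulated MMIO region, so the decode is worth it. */
    if ( !info.dabt.valid )
    {
        rc = decode_instruction(regs, &info.dabt, &instr);
        if ( rc )
        {
            gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
            return IO_ABORT;
        }
    }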

Regarding try_fwd_ioserv(): as you rightly pointed out, it already contains a dabt.valid check. We would want to augment it to decode the instruction. I think the Thumb workaround should live there as well.
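
I.e. where try_fwd_ioserv() currently bails out on an invalid syndrome, it could attempt the decode instead (again only a sketch, I haven't written it against the actual code in ioreq.c):

    if ( !info->dabt.valid )
    {
        if ( decode_instruction(regs, &info->dabt, &instr) )
            return IO_ABORT;
    }

with the erratum 766422 handling (the decode_instruction(regs, &info->dabt, NULL) call for Thumb stores) applied there as well.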

+        }
+    }
+
      handler = find_mmio_handler(v->domain, info.gpa);
      if ( !handler )
      {
-        int rc;
-
          rc = try_fwd_ioserv(regs, v, &info);
          if ( rc == IO_HANDLED )
              return handle_ioserv(regs, v);
@@ -121,10 +136,6 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
          return rc;
      }
-    /* All the instructions used on emulated MMIO region should be valid */
-    if ( !dabt.valid )
-        return IO_ABORT;
-
      /*
       * Erratum 766422: Thumb store translation fault to Hypervisor may
       * not have correct HSR Rt value.
@@ -134,7 +145,7 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
      {
          int rc;
-        rc = decode_instruction(regs, &info.dabt);
+        rc = decode_instruction(regs, &info.dabt, NULL);
          if ( rc )
          {
              gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
@@ -143,9 +154,21 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
      }
      if ( info.dabt.write )
-        return handle_write(handler, v, &info);
+        rc = handle_write(handler, v, &info);
      else
-        return handle_read(handler, v, &info);
+        rc = handle_read(handler, v, &info);
+
+#if CONFIG_ARM_64
+    if ( (is_64bit_domain(current->domain) && !psr_mode_is_32bit(regs)) )
+    {
+        if ( instr.value != 0 )
+        {
+            post_increment_register(&instr);
+        }

This path will not be reached when the I/O is forwarded to an IOREQ server. I think we need to add similar code in arch_ioreq_complete_mmio() (just before advance_pc()).
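
Something along these lines, just before advance_pc() (a rough sketch; I am hand-waving where the decoded instruction lives, as the local 'instr' from try_handle_mmio() would need to be stashed somewhere, e.g. in the vCPU I/O state, so it is still available when the IOREQ completes):

    /* 'instr' retrieved from wherever it was stashed (hypothetical). */
    if ( instr.value != 0 )
        post_increment_register(&instr);

    advance_pc(regs, hsr);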

Just for completeness: even though it would be easier, I don't think we can update the register before the MMIO is handled (i.e. in try_fwd_ioserv()) because, in theory, the instruction has not completed yet. So I am a bit worried that this may impact other subsystems (the obvious one being that the register dump would be incorrect).

Cheers,

--
Julien Grall



 

