
Re: [RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common



Hi,

On 03/08/2020 19:21, Oleksandr Tyshchenko wrote:
From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>

As a lot of x86 code can be re-used on Arm later on, this patch
splits IOREQ support into common and arch-specific parts.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.

Please note, this is a split/cleanup of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
---
  xen/arch/x86/Kconfig            |    1 +
  xen/arch/x86/hvm/dm.c           |    2 +-
  xen/arch/x86/hvm/emulate.c      |    2 +-
  xen/arch/x86/hvm/hvm.c          |    2 +-
  xen/arch/x86/hvm/io.c           |    2 +-
  xen/arch/x86/hvm/ioreq.c        | 1431 +--------------------------------------
  xen/arch/x86/hvm/stdvga.c       |    2 +-
  xen/arch/x86/hvm/vmx/realmode.c |    1 +
  xen/arch/x86/hvm/vmx/vvmx.c     |    2 +-
  xen/arch/x86/mm.c               |    2 +-
  xen/arch/x86/mm/shadow/common.c |    2 +-
  xen/common/Kconfig              |    3 +
  xen/common/Makefile             |    1 +
  xen/common/hvm/Makefile         |    1 +
  xen/common/hvm/ioreq.c          | 1430 ++++++++++++++++++++++++++++++++++++++
  xen/include/asm-x86/hvm/ioreq.h |   45 +-
  xen/include/asm-x86/hvm/vcpu.h  |    7 -
  xen/include/xen/hvm/ioreq.h     |   89 +++
  18 files changed, 1575 insertions(+), 1450 deletions(-)

That's quite a lot of code moved in a single patch. How can we check that the moved code is still correct? Is it a verbatim copy?

  create mode 100644 xen/common/hvm/Makefile
  create mode 100644 xen/common/hvm/ioreq.c
  create mode 100644 xen/include/xen/hvm/ioreq.h

[...]

+static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+{
+    unsigned int prev_state = STATE_IOREQ_NONE;
+
+    while ( sv->pending )
+    {
+        unsigned int state = p->state;
+
+        smp_rmb();
+
+    recheck:
+        if ( unlikely(state == STATE_IOREQ_NONE) )
+        {
+            /*
+             * The only reason we should see this case is when an
+             * emulator is dying and it races with an I/O being
+             * requested.
+             */
+            hvm_io_assist(sv, ~0ul);
+            break;
+        }
+
+        if ( unlikely(state < prev_state) )
+        {
+            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
+                     prev_state, state);
+            sv->pending = false;
+            domain_crash(sv->vcpu->domain);
+            return false; /* bail */
+        }
+
+        switch ( prev_state = state )
+        {
+        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
+            p->state = STATE_IOREQ_NONE;
+            hvm_io_assist(sv, p->data);
+            break;
+        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
+        case STATE_IOREQ_INPROCESS:
+            wait_on_xen_event_channel(sv->ioreq_evtchn,
+                                      ({ state = p->state;
+                                         smp_rmb();
+                                         state != prev_state; }));
+            goto recheck;

I recall some discussion on security@ about this specific code. An IOREQ server can be destroyed at any time. When an IOREQ server is destroyed, all the vCPUs will be paused to avoid races.

On x86, this was considered to be safe because
wait_on_xen_event_channel() will never return if the vCPU is re-scheduled.

However, on Arm, this function will return even after rescheduling. In this case, sv and p may point to invalid memory.

IIRC, the suggestion was to harden hvm_wait_for_io(). I guess we could re-fetch sv and p after wait_on_xen_event_channel() returns, along the lines of the sketch below.
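
Rough, untested sketch only (get_pending_io() is a made-up helper standing in for whatever would re-walk the current vCPU's IOREQ server list; locking and refcounting are hand-waved):

        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
        case STATE_IOREQ_INPROCESS:
            wait_on_xen_event_channel(sv->ioreq_evtchn,
                                      ({ state = p->state;
                                         smp_rmb();
                                         state != prev_state; }));

            /*
             * On Arm the wait may return after the vCPU has been
             * descheduled, so the IOREQ server (and with it sv and p)
             * may have been destroyed in the meantime. Re-derive them
             * from the domain state and stop waiting if they are gone,
             * rather than dereferencing stale pointers.
             */
            if ( !get_pending_io(current, &sv, &p) )
                return false; /* server went away, nothing left to complete */

            goto recheck;

Note that the condition passed to wait_on_xen_event_channel() still dereferences p, so that part would need the same treatment (or the check moved to after the wake-up), but hopefully it illustrates the idea.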

Any opinions?

Cheers,

--
Julien Grall
