
[Xen-devel] [PATCH] x86emul: adjust handling of AVX2 gathers



HVM's MMIO cache has a capacity of only three entries. Once it runs
out of entries, hvmemul_linear_mmio_access() will return
X86EMUL_UNHANDLEABLE. Since gathers are an iterative process anyway,
simply commit the portion of work done so far in this (and hypothetical
similar) cases, exiting back to guest context for the insn to be
retried.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
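
For illustration only (not part of the patch): a minimal, self-contained
sketch of the commit-partial-work-and-retry pattern the change
implements. The status codes and the read_element() helper below are
hypothetical stand-ins for the emulator's return values and per-element
MMIO read, not the actual Xen interfaces:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical status codes mirroring the emulator's returns. */
    enum status { OKAY, EXCEPTION, UNHANDLEABLE, RETRY };

    /*
     * Hypothetical per-element read; fails with UNHANDLEABLE once a
     * pretend three-entry cache is exhausted.
     */
    static enum status read_element(unsigned int i)
    {
        return i < 3 ? OKAY : UNHANDLEABLE;
    }

    /*
     * Sketch of the retry pattern: commit whatever progress was made
     * and ask the caller to re-execute the insn, rather than failing
     * the whole gather.
     */
    static enum status gather(unsigned int n)
    {
        bool done = false;
        enum status rc = OKAY;

        for ( unsigned int i = 0; i < n; ++i )
        {
            rc = read_element(i);
            if ( rc != OKAY )
            {
                /*
                 * A fault must still be delivered; any other failure
                 * becomes a retry if at least one element completed.
                 */
                if ( rc != EXCEPTION && done )
                    rc = RETRY;
                break;
            }
            /* Element i is committed (mask bit cleared in the real code). */
            done = true;
        }

        return rc;
    }

    int main(void)
    {
        printf("gather(8) -> %d (3 == RETRY)\n", gather(8));
        return 0;
    }

Because each completed element's mask bit is cleared as it is
committed, re-executing the insn naturally resumes with only the
remaining elements.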

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -7639,6 +7639,7 @@ x86_emulate(
             int32_t dw[8];
             int64_t qw[4];
         } index, mask;
+        bool done = false;
 
         ASSERT(ea.type == OP_MEM);
         generate_exception_if(modrm_reg == state->sib_index ||
@@ -7692,12 +7693,23 @@ x86_emulate(
                                ea.mem.off + (idx << state->sib_scale),
                                (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
                 if ( rc != X86EMUL_OKAY )
+                {
+                    /*
+                     * If we've made any progress and the access did not fault,
+                     * force a retry instead. This is for example necessary to
+                     * cope with the limited capacity of HVM's MMIO cache.
+                     */
+                    if ( rc != X86EMUL_EXCEPTION && done )
+                        rc = X86EMUL_RETRY;
                     break;
+                }
 
 #ifdef __XEN__
                 if ( i + 1 < n && local_events_need_delivery() )
                     rc = X86EMUL_RETRY;
 #endif
+
+                done = true;
             }
 
             if ( vex.w )





 

