Re: [Xen-devel] [PATCH v2] x86/hvm: fix interaction between internal and external emulation
>>> On 28.11.17 at 15:05, <paul.durrant@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -88,7 +88,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
>  
>      rc = hvm_emulate_one(&ctxt);
>  
> -    if ( hvm_vcpu_io_need_completion(vio) || vio->mmio_retry )
> +    if ( hvm_vcpu_io_need_completion(vio) )
>          vio->io_completion = HVMIO_mmio_completion;
>      else
>          vio->mmio_access = (struct npfec){};
While I can't (yet) say why, without this change, things would have
behaved better on that old AMD box which is causing the osstest
failure, I think Andrew's suggestion that we might be trying to
emulate from a stale instruction cache is spot on: Doesn't
rc = x86_emulate(&hvmemul_ctxt->ctxt, ops);
if ( rc == X86EMUL_OKAY && vio->mmio_retry )
    rc = X86EMUL_RETRY;
if ( rc != X86EMUL_RETRY )
{
    vio->mmio_cache_count = 0;
    vio->mmio_insn_bytes = 0;
}
else
...
in _hvm_emulate_one() need re-ordering of the two conditionals?
->mmio_retry being set, as described earlier, means we're exiting back
to the guest. At that point the guest can take interrupts and the like,
which means that if we're being re-entered we're not necessarily
going to continue emulation of the same instruction as before. I.e.
rc = x86_emulate(&hvmemul_ctxt->ctxt, ops);
if ( rc != X86EMUL_RETRY )
{
    vio->mmio_cache_count = 0;
    vio->mmio_insn_bytes = 0;
}
else
{
    ...
}
if ( rc == X86EMUL_OKAY && vio->mmio_retry )
    rc = X86EMUL_RETRY;
(or the equivalent thereof with switch() and fall-through from
OKAY to default). Any "more clever" solution (like deferring the
cache invalidation until we're being re-entered, making it
dependent on CS:RIP having changed) feels fragile.
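For concreteness, something along these lines is what the switch()
variant could look like (untested, merely restating the re-ordering
above; same field names as in the quoted code, with the elided retry
handling again left as "..."):

rc = x86_emulate(&hvmemul_ctxt->ctxt, ops);
switch ( rc )
{
case X86EMUL_RETRY:
    /* Keep the MMIO cache / insn bytes for the continuation. */
    ...
    break;

case X86EMUL_OKAY:
    /*
     * A pending ->mmio_retry still becomes a retry, but the caches
     * are invalidated regardless (via the fall-through to default),
     * so re-entry after exiting to the guest can't emulate from
     * stale instruction bytes.
     */
    if ( vio->mmio_retry )
        rc = X86EMUL_RETRY;
    /* fall through */
default:
    vio->mmio_cache_count = 0;
    vio->mmio_insn_bytes = 0;
    break;
}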
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel