[Xen-devel] [PATCH] x86/pv: Fix guest crashes following f75b1a5247b "x86/pv: Drop int80_bounce from struct pv_vcpu"
The original init_int80_direct_trap() was in fact buggy; `int $0x80` is not an exception.  This went unnoticed for years because int80_bounce and trap_bounce were separate structures, but they were combined by the change above.

Exception handling is different to interrupt handling for PV guests.  By reusing trap_bounce, the following corner case can occur:

 * Handle a guest `int $0x80` instruction.  This latches TBF_EXCEPTION into trap_bounce.
 * Handle an exception which emulates to success (such as ptwr support), which leaves trap_bounce unmodified.
 * The exception exit path sees TBF_EXCEPTION set and re-injects the `int $0x80` a second time.

Drop TBF_EXCEPTION from the int80 invocation, which matches the equivalent logic on the syscall/sysenter paths.

Reported-by: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
---
 xen/arch/x86/x86_64/entry.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index e011c90..f4e1b80 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -373,10 +373,10 @@ UNLIKELY_END(msi_check)
         mov   %cx, TRAPBOUNCE_cs(%rdx)
         mov   %rdi, TRAPBOUNCE_eip(%rdx)
 
-        /* TB_flags = TBF_EXCEPTION | (TI_GET_IF(ti) ? TBF_INTERRUPT : 0); */
+        /* TB_flags = (TI_GET_IF(ti) ? TBF_INTERRUPT : 0); */
         testb $4, 0x80 * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)
         setnz %cl
-        lea   TBF_EXCEPTION(, %rcx, TBF_INTERRUPT), %ecx
+        lea   (, %rcx, TBF_INTERRUPT), %ecx
         mov   %cl, TRAPBOUNCE_flags(%rdx)
 
         cmpb  $0, DOMAIN_is_32bit_pv(%rax)
-- 
2.1.4