Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering guest"
On 19/10/2020 10:09, Jan Beulich wrote:
> On 16.10.2020 17:38, Andrew Cooper wrote:
>> On 15/10/2020 09:01, Jan Beulich wrote:
>>> On 14.10.2020 15:57, Andrew Cooper wrote:
>>>> On 13/10/2020 16:58, Jan Beulich wrote:
>>>>> On 09.10.2020 17:09, Andrew Cooper wrote:
>>>>>> At the time of XSA-170, the x86 instruction emulator really was broken, and
>>>>>> would allow arbitrary non-canonical values to be loaded into %rip.  This was
>>>>>> fixed after the embargo by c/s 81d3a0b26c1 "x86emul: limit-check branch
>>>>>> targets".
>>>>>>
>>>>>> However, in a demonstration that off-by-one errors really are one of the
>>>>>> hardest programming issues we face, everyone involved with XSA-170, myself
>>>>>> included, mistook the statement in the SDM which says:
>>>>>>
>>>>>>   If the processor supports N < 64 linear-address bits, bits 63:N must
>>>>>>   be identical
>>>>>>
>>>>>> to mean "must be canonical".  A real canonical check is bits 63:N-1.
>>>>>>
>>>>>> VMEntries really do tolerate a not-quite-canonical %rip, specifically to
>>>>>> cater to the boundary condition at 0x0000800000000000.
>>>>>>
>>>>>> Now that the emulator has been fixed, revert the XSA-170 change to fix
>>>>>> architectural behaviour at the boundary case.  The XTF test case for
>>>>>> XSA-170 exercises this corner case, and still passes.
>>>>>>
>>>>>> Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>> But why revert the change rather than fix ...
>>>>>
>>>>>> @@ -4280,38 +4280,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>>>>  out:
>>>>>>      if ( nestedhvm_vcpu_in_guestmode(v) )
>>>>>>          nvmx_idtv_handling();
>>>>>> -
>>>>>> -    /*
>>>>>> -     * VM entry will fail (causing the guest to get crashed) if rIP (and
>>>>>> -     * rFLAGS, but we don't have an issue there) doesn't meet certain
>>>>>> -     * criteria.  As we must not allow less than fully privileged mode to
>>>>>> -     * have such an effect on the domain, we correct rIP in that case
>>>>>> -     * (accepting this not being architecturally correct behavior, as the
>>>>>> -     * injected #GP fault will then not see the correct [invalid] return
>>>>>> -     * address).  And since we know the guest will crash, we crash it
>>>>>> -     * right away if it already is in most privileged mode.
>>>>>> -     */
>>>>>> -    mode = vmx_guest_x86_mode(v);
>>>>>> -    if ( mode == 8 ? !is_canonical_address(regs->rip)
>>>>> ... the wrong use of is_canonical_address() here?  By reverting
>>>>> you open up avenues for XSAs in case we get things wrong elsewhere,
>>>>> including ...
>>>>>
>>>>>> -                   : regs->rip != regs->eip )
>>>>> ... for 32-bit guests.
>>>> Because the only appropriate alternative would be ASSERT_UNREACHABLE()
>>>> and domain crash.
>>>>
>>>> This logic corrupts guest state.
>>>>
>>>> Running with corrupt state is every bit as much an XSA as hitting a
>>>> VMEntry failure if it can be triggered by userspace, but the latter is
>>>> safer and much more obvious.
>>> I disagree.  For CPL > 0 we don't "corrupt" guest state any more
>>> than reporting a #GP fault when one is going to be reported
>>> anyway (as long as the VM entry doesn't fail, and hence the
>>> guest won't get crashed).  IOW this raising of #GP actually is a
>>> precautionary measure to _avoid_ XSAs.
>> It does not remove any XSAs.  It merely hides them.
> How so?  If we convert the ability of guest user mode to crash
> the guest into delivery of #GP(0), how is there a hidden XSA then?
Because userspace being able to trigger this fixup is still an XSA.

>> There are legal states where RIP is 0x0000800000000000 and #GP is the
>> wrong thing to do.  Any async VMExit (Processor Trace Prefetch in
>> particular), or with debug traps pending.
> You realize we're in agreement about this pseudo-canonical check
> needing fixing?

Anything other than deleting this clause does not fix the bugs above.

>>>> It was the appropriate security fix (give or take the functional bug in
>>>> it) at the time, given the complexity of retrofitting zero length
>>>> instruction fetches to the emulator.
>>>>
>>>> However, it is one of a very long list of guest-state-induced VMEntry
>>>> failures, with non-trivial logic which we assert will pass, on a
>>>> fastpath, where hardware also performs the same checks and we already
>>>> have a runtime safe way of dealing with errors.  (Hence not actually
>>>> using ASSERT_UNREACHABLE() here.)
>>> "Runtime safe" as far as Xen is concerned, I take it.  This isn't safe
>>> for the guest at all, as vmx_failed_vmentry() results in an
>>> unconditional domain_crash().
>> Any VMEntry failure is a bug in Xen.  If userspace can trigger it, it is
>> an XSA, *irrespective* of whether we crash the domain then and there, or
>> whether we let it try and limp on with corrupted state.
> Allowing the guest to continue with corrupted state is not a
> useful thing to do, I agree.  However, what falls under
> "corrupted" seems to be different for you and me.  I'd not call
> delivery of #GP "corruption" in any way.

I can only repeat my previous statement:

> There are legal states where RIP is 0x0000800000000000 and #GP is the
> wrong thing to do.

Blindly raising #GP is not always the right thing to do.

> The primary goal ought
> to be that we don't corrupt the guest kernel view of the world.
> It may then have the opportunity to kill the offending user
> mode process.

By the time we have hit this case, all bets are off, because Xen *is*
malfunctioning.  We have no idea whether kernel context is still intact.
You don't even know that the current user context is the correct offending
context to clobber, and might be creating a user=>user DoS vulnerability.

We definitely have an XSA to find and fix, and we can either make it very
obvious and likely to be reported, or hidden and liable to go unnoticed for
a long period of time.

Every rational argument is on the side of killing the domain in an obvious
way.

~Andrew
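[Editorial illustration of the off-by-one discussed above: a minimal,
self-contained sketch assuming 48 implemented linear-address bits.  The
helper names below are illustrative only, not Xen's actual
is_canonical_address() or VMX entry-check code.]

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assume N = 48 implemented linear-address bits. */
    #define LA_BITS 48

    /*
     * A true canonical check: bits 63:N-1 must all be identical, i.e. the
     * address is the sign-extension of its low N bits.
     */
    static bool canonical(uint64_t addr)
    {
        uint64_t high = addr >> (LA_BITS - 1);          /* bits 63:N-1 */

        return high == 0 || high == (UINT64_MAX >> (LA_BITS - 1));
    }

    /*
     * The weaker condition the SDM states for RIP at VM entry, "bits 63:N
     * must be identical": bit N-1 is not part of the check, so the first
     * non-canonical address is still tolerated.
     */
    static bool vmentry_rip_tolerated(uint64_t rip)
    {
        uint64_t high = rip >> LA_BITS;                 /* bits 63:N */

        return high == 0 || high == (UINT64_MAX >> LA_BITS);
    }

    int main(void)
    {
        uint64_t rip = 0x0000800000000000ULL;           /* the boundary case */

        printf("canonical:          %d\n", canonical(rip));             /* 0 */
        printf("VM entry tolerated: %d\n", vmentry_rip_tolerated(rip)); /* 1 */
        return 0;
    }

With that distinction, 0x0000800000000000 is the one boundary value that a
VM entry tolerates but a true canonical check rejects, which is the
architectural behaviour the revert restores.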