Re: [Xen-devel] [PATCH] x86/HVM: don't give the wrong impression of WRMSR succeeding
>>> On 22.02.18 at 23:16, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 02/22/2018 10:44 AM, Jan Beulich wrote:
>>>>> On 22.02.18 at 15:53, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 22/02/18 13:44, Jan Beulich wrote:
>>>> ... for unknown MSRs: wrmsr_hypervisor_regs()'s comment clearly says
>>>> that the function returns 0 for unrecognized MSRs, so
>>>> {svm,vmx}_msr_write_intercept() should not convert this into success.
>>>>
>>>> At the time it went in, commit 013e34f5a6 ("x86: handle paged gfn in
>>>> wrmsr_hypervisor_regs") was probably okay, since prior to that the
>>>> return value wasn't checked at all. But that's not how we want things
>>>> to be handled nowadays.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> I agree in principle, but this does have a large potential risk for
>>> guests. Any unknown MSR which guests don't check for #GP faults from
>>> will now cause the guests to crash.
>>>
>>> That said, it is the correct direction to go long-term, and we've got
>>> to throw the switch some time, but I expect this will cause problems
>>> in the short term, especially for migrated-in guests.
>> Thinking about this again, the RDMSR side of things already raises
>> #GP for inaccessible MSRs. We obviously can't do a probing WRMSR
>> in {svm,vmx}_msr_write_intercept(), but couldn't we rdmsr_safe()
>> in the "case 0:" block, treating the result as the verdict whether to
>> raise #GP to the guest? As the read path does this anyway, we're
>> not exposing ourselves to new risks.
>
> What about write-only MSRs?

Bad luck (I'm sorry to say so, but we have an actual bug to fix here).
If we find any such is used, we'll have to add individual case labels.

Jan
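
[Editor's note] For anyone following the thread, the control flow being proposed for the default/"case 0:" path of the write intercept would look roughly like the sketch below. This is a minimal, self-contained illustration of the idea, not the actual Xen code: wrmsr_hypervisor_regs() and rdmsr_safe() are replaced with made-up stubs so the logic compiles and runs on its own, and the MSR numbers used are arbitrary examples.

/*
 * Standalone sketch of the behaviour discussed above: when
 * wrmsr_hypervisor_regs() does not recognise the MSR, probe it with
 * rdmsr_safe() and let the probe result decide whether the guest
 * gets a #GP.  The two helpers below are hypothetical stand-ins for
 * the real Xen functions, with invented behaviour for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stub: pretend only MSR 0x40000000 is a recognised hypervisor MSR. */
static int wrmsr_hypervisor_regs(uint32_t msr, uint64_t val)
{
    (void)val;
    return msr == 0x40000000 ? 1 : 0;   /* 0 == unrecognised */
}

/* Stub: pretend MSR 0xc0000080 is readable; everything else faults. */
static int rdmsr_safe(uint32_t msr, uint64_t *val)
{
    if (msr == 0xc0000080) {
        *val = 0;
        return 0;
    }
    return -1;                          /* read would fault */
}

/*
 * Returns true if the write "succeeds" from the guest's point of view,
 * false if a #GP should be injected.
 */
static bool msr_write_intercept_default(uint32_t msr, uint64_t val)
{
    uint64_t tmp = 0;

    if (wrmsr_hypervisor_regs(msr, val))
        return true;                    /* handled as a hypervisor MSR */

    /* Unrecognised: probe with a read, as suggested in the thread. */
    if (rdmsr_safe(msr, &tmp) == 0)
        return true;                    /* MSR exists; write tolerated */

    return false;                       /* inject #GP into the guest */
}

int main(void)
{
    printf("0x40000000 -> %s\n",
           msr_write_intercept_default(0x40000000, 1) ? "ok" : "#GP");
    printf("0xc0000080 -> %s\n",
           msr_write_intercept_default(0xc0000080, 1) ? "ok" : "#GP");
    printf("0x12345678 -> %s\n",
           msr_write_intercept_default(0x12345678, 1) ? "ok" : "#GP");
    return 0;
}

The concern Boris raises is visible in the sketch: a genuinely write-only MSR would fail the rdmsr_safe() probe and therefore receive a #GP, which is why Jan notes that any such MSR found to be in use would need its own case label.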