Re: [Xen-devel] [PATCH] x86/HVM: correctly deal with benign exceptions when combining two
On 11/04/2019 08:31, Jan Beulich wrote:
>>> That's also the way the XSA-156 advisory describes it.
>> XSA-156 was written at a time when we both had far less authority to
>> comment on the specific details.  Certainly as far as I am concerned,
>> the past couple of years have made a massive difference, not least
>> the several months spent debugging the STI/singlestep issue with Gil.
>>
>> The mistake in XSA-156 is the description of "upon completion of the
>> delivery of the first exception".
>>
>> The infinite loop occurs because delivery of the first #DB is never
>> considered complete (as #DB is still considered pending once the
>> exception frame has been written, because it was triggered in the
>> process of delivering the exception), and therefore does not move
>> from priority 4 to 5, which would allow an NMI to break the cycle.
>>
>> Similarly for the #AC case, priority never moves from 10 back to 1,
>> because delivery of the first #AC is never seen to have completed.
> To be honest, to me this continues to be a (mis-)implementation
> detail.  A proper architectural specification would not allow for
> such pathological cases in the first place.

I'll let the hardware vendors argue over the details of how exactly the
hardware is wrong, but everyone will agree that these pathological
cases ought not to exist.

>>>>> Sadly neither AMD nor Intel really define what happens with two
>>>>> benign exceptions - the term "sequentially" used by both is
>>>>> poisoned by how the combining of benign and non-benign exceptions
>>>>> is described.  Since NMI, #MC, and hardware interrupts are all
>>>>> benign and (perhaps with the exception of #MC) can't occur second,
>>>>> favor the first in order to not lose it.
>>>> #MC has the highest priority so should only be recognised
>>>> immediately after an instruction boundary.
>>> Are you sure?  What about an issue with one of the memory accesses
>>> involved in delivering a previously raised exception?
>> #MC is an abort, and is imprecise.
> Mind me correcting this to "may be imprecise": There's a flag after
> all telling whether in fact it is.

Hmm ok.  This was down to the difference in how it is referenced
between the SDMs, but "may be imprecise" is more accurate for an abort.
The important point is that you mustn't assume that it is precise.

>>>> I don't however see a way of stacking #AC, because you can't know
>>>> that one has occurred until later in the instruction cycle than all
>>>> other sources.  What would happen is that you'd raise #AC from the
>>>> previous instruction, and then recognise #MC while starting to
>>>> execute the #AC entry point.  (I think)
>>> Well - see XSA-156 for what hardware does in that situation.
>> The details of XSA-156 are inaccurate.
>>
>> #AC can stack, but the problem only manifests when Xen emulates an
>> injection, which is restricted to SVM for the moment.
>>
>> That said, I'm considering moving it back to being common to provide
>> architectural behaviour despite the silicon issue which causes
>> XSA-170.
> Would you mind helping me make the connection between #AC delivery
> (and its emulation) and XSA-170, being about VM entry with
> non-canonical %rip?

Ah - that wasn't the connection I was trying to make.

Because our emulation of event delivery is currently specific to SVM,
and doesn't perform alignment checking, Xen will never end up in a case
where #AC will be delivered second.
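Stepping back to the combining question from the commit message, a
minimal sketch of the rule being argued for - favouring the first of
two benign exceptions - might look as follows.  This is illustrative
only, not the actual patch: the X86_EXC_* values are the architectural
vector numbers, the classification follows the SDM/APM serial-delivery
tables, and the helper names are made up for the example.

#include <stdbool.h>
#include <stdint.h>

#define X86_EXC_DE   0   /* Divide error - contributory. */
#define X86_EXC_DF   8   /* Double fault. */
#define X86_EXC_TS  10   /* Invalid TSS - contributory. */
#define X86_EXC_NP  11   /* Segment not present - contributory. */
#define X86_EXC_SS  12   /* Stack fault - contributory. */
#define X86_EXC_GP  13   /* General protection - contributory. */
#define X86_EXC_PF  14   /* Page fault - its own class. */

static bool is_contributory(uint8_t vec)
{
    return vec == X86_EXC_DE ||
           (vec >= X86_EXC_TS && vec <= X86_EXC_GP);
}

static bool is_benign(uint8_t vec)
{
    return !is_contributory(vec) && vec != X86_EXC_PF &&
           vec != X86_EXC_DF;
}

/*
 * @second was raised while @first was still being delivered; return
 * the vector to actually deliver.  A #DF first would architecturally
 * escalate to a triple fault (shutdown), which is not modelled here.
 */
static uint8_t combine_two(uint8_t first, uint8_t second)
{
    /* Architecturally defined cases: escalate to #DF. */
    if ( is_contributory(first) && is_contributory(second) )
        return X86_EXC_DF;
    if ( first == X86_EXC_PF &&
         (is_contributory(second) || second == X86_EXC_PF) )
        return X86_EXC_DF;

    /*
     * Two benign exceptions have no architecturally defined merge.
     * Favour the first: if it is an NMI or #MC, it cannot be
     * re-raised once dropped.
     */
    if ( is_benign(first) && is_benign(second) )
        return first;

    /* Remaining mixes are "handled serially": deliver the second. */
    return second;
}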
If you recall, the injection support used to be common, then moved to
being SVM specific.  If it were to move back to being common, we could
fix XSA-170 while maintaining architecturally correct behaviour, by
fully emulating the event injection, which would bypass the incorrect
VMEntry consistency check that causes XSA-170 in the first place.

Certainly for the SVM case, the only way I can see to get #DB injection
working in a vaguely architectural way is to fully emulate the first
event, and put the #DB in the EVENTINJ field.

~Andrew
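To make that final suggestion concrete, here is a minimal sketch of
handing a #DB to hardware via EVENTINJ once the first event has been
fully emulated.  The encoding (vector in bits 7:0, event type in bits
10:8 with 3 meaning hardware exception, valid in bit 31) follows the
AMD APM; the structure and function names are illustrative, not Xen's
actual code.

#include <stdint.h>

#define EVENTINJ_VECTOR_DB       1u          /* #DB vector. */
#define EVENTINJ_TYPE_EXCEPTION  (3u << 8)   /* APM: type 3 = exception. */
#define EVENTINJ_EV              (1u << 11)  /* Error code valid; #DB has none. */
#define EVENTINJ_V               (1u << 31)  /* Injection valid. */

struct vmcb {
    /* ... control area fields elided ... */
    uint64_t eventinj;                       /* Offset 0xa8 in a real VMCB. */
};

static void inject_db_after_emulated_delivery(struct vmcb *vmcb)
{
    /*
     * The first event (stack frame pushes, IDT lookup, privilege
     * switch) has already been emulated in software, so guest state
     * reflects its delivery.  The pending #DB is then handed to
     * hardware, which injects it on the next VMRUN.
     */
    vmcb->eventinj = EVENTINJ_V | EVENTINJ_TYPE_EXCEPTION |
                     EVENTINJ_VECTOR_DB;
}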