Re: [Xen-devel] [V0 PATCH 3/6] AMD-PVH: call hvm_emulate_one instead of handle_mmio
>>> On 26.08.14 at 03:53, <mukesh.rathor@xxxxxxxxxx> wrote:
> On Mon, 25 Aug 2014 08:10:38 +0100
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>
>> >>> On 22.08.14 at 20:52, <mukesh.rathor@xxxxxxxxxx> wrote:
>> > On Fri, 22 Aug 2014 10:50:01 +0100
>> > "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> >> Also - how come at least the use of the function in VMX's
>> >> EXIT_REASON_IO_INSTRUCTION handling is no problem for PVH,
>> >> but SVM's VMEXIT_IOIO one is?
>> >
>> > Yup, missed that one. That would need to be addressed.
>> >
>> > I guess the first step would be to do a non-pvh patch to fix calling
>> > handle_mmio for non-mmio purposes. Since it applies to both
>> > vmx/svm, perhaps an hvm function. Let me do that first, and then
>> > pvh can piggyback on that.
>>
>> Problem being that INS and OUTS can very well address MMIO
>> on the memory side of the operation (while right now
>> hvmemul_rep_{ins,outs}() fail such operations, this merely means
>> they'd get emulated one by one instead of accelerated as multiple
>> ops in one go).
>>
>> Also looking at handle_mmio() once again - it being just a relatively
>> thin wrapper around hvm_emulate_one(), can you remind me again
>> what in this small function it was that breaks on PVH? It would seem
>
> handle_mmio -> hvmemul_do_io -> hvm_mmio_intercept(), last one is
> fatal as not all handlers are safe for pvh.

But you see - that is exactly my point: Avoiding handle_mmio() alone
doesn't buy you anything, as hvmemul_do_io() isn't being called directly
from that function, but indirectly via the actors specified in
hvm_emulate_ops. Which gets me back to suggesting that you need a
different struct x86_emulate_ops instance to deal with these non-MMIO
emulation needs.

Jan
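[Editor's note] To make the suggestion concrete for readers of the archive:
hvm_emulate_one() hands x86_emulate() the hvm_emulate_ops callback table, and
it is those callbacks (hvmemul_read(), hvmemul_write(), ...) - not
handle_mmio() itself - that end up in hvmemul_do_io() and the MMIO intercept
code. A second ops instance for PVH's non-MMIO emulation needs might look
roughly like the sketch below. This is not actual Xen code: the pvhemul_*
names and the pvh_emulate_ops table are hypothetical, the callback signatures
are abbreviated, and only a handful of the real table's hooks are shown.

    /*
     * Illustrative sketch only -- not Xen source.  The idea: keep
     * hvm_emulate_ops as-is for genuine MMIO emulation, and add a second
     * struct x86_emulate_ops instance for PVH port-I/O emulation whose
     * memory accessors refuse anything that is not plain guest RAM, so
     * the hvmemul_do_io()/hvm_mmio_intercept() path is never reached.
     */

    /* Hypothetical memory read that only touches guest RAM. */
    static int pvhemul_read(
        enum x86_segment seg,
        unsigned long offset,
        void *p_data,
        unsigned int bytes,
        struct x86_emulate_ctxt *ctxt)
    {
        /*
         * Copy from guest memory via the existing linear-address copy
         * helpers; if the page turns out to be MMIO, fail instead of
         * forwarding the access to hvmemul_do_io().
         */
        return X86EMUL_UNHANDLEABLE; /* placeholder */
    }

    /* Hypothetical write counterpart, same restriction. */
    static int pvhemul_write(
        enum x86_segment seg,
        unsigned long offset,
        void *p_data,
        unsigned int bytes,
        struct x86_emulate_ctxt *ctxt)
    {
        return X86EMUL_UNHANDLEABLE; /* placeholder */
    }

    /*
     * Second ops instance: port I/O keeps using the regular handlers,
     * while the memory hooks are the restricted ones above.  (Only a
     * subset of the real table's callbacks is shown.)
     */
    static const struct x86_emulate_ops pvh_emulate_ops = {
        .read       = pvhemul_read,
        .insn_fetch = hvmemul_insn_fetch,
        .write      = pvhemul_write,
        .read_io    = hvmemul_read_io,
        .write_io   = hvmemul_write_io,
    };

A caller emulating a PVH IOIO exit would then drive x86_emulate() with
pvh_emulate_ops instead of hvm_emulate_ops, which is the distinction Jan is
drawing above.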