
Re: [Xen-devel] x86/vMSI-X emulation issue



>>> On 24.03.16 at 10:09, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Jan
>> Beulich
>> Sent: 24 March 2016 07:52
>> > 2) Do the aforementioned chopping automatically on seeing
>> >     X86EMUL_UNHANDLEABLE, on the basis that the .check
>> >     handler had indicated that the full range was acceptable. That
>> >     would at once cover other similarly undesirable cases like the
>> >     vLAPIC code returning this error. However, any stdvga-like
>> >     emulated device would clearly not want that to happen, and
>> >     would instead prefer the entire batch to be forwarded in one
>> >     go (stdvga itself sits on a different path). Otoh, with the
>> >     devices we have currently, this would seem to be the least
>> >     intrusive solution.
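
Purely to illustrate 2) - with stand-in types and names, not the
actual Xen code - the chopping would sit roughly like this in the
dispatch path:

    /* Sketch only: try an internal handler first.  If its range
     * check accepted the access but the handler itself returns
     * "unhandleable" for a batched (rep) request, chop the request
     * down to a single element before forwarding it to the device
     * model, instead of letting the whole batch go out. */
    #include <stdint.h>

    #define X86EMUL_OKAY          0
    #define X86EMUL_UNHANDLEABLE  1   /* value illustrative only */

    typedef struct {
        uint64_t addr;
        uint32_t size;
        uint32_t count;               /* rep count of the batch */
    } ioreq_sketch_t;

    /* Stand-ins for an internal handler (e.g. vMSI-X) and the qemu path. */
    int internal_handler(ioreq_sketch_t *p);
    int forward_to_device_model(ioreq_sketch_t *p);

    int dispatch_io(ioreq_sketch_t *p, uint64_t *reps)
    {
        int rc = internal_handler(p);

        if ( rc == X86EMUL_UNHANDLEABLE )
        {
            if ( p->count > 1 )
            {
                /* Don't let the full batch escape to qemu. */
                p->count = 1;
                *reps = 1;
            }
            rc = forward_to_device_model(p);
        }

        return rc;
    }

The remaining reps would then simply get picked up again when the
emulator re-executes the instruction.
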
>> 
>> Having thought about it more overnight, I think this indeed is
>> the most reasonable route, and not just because it's the least
>> intrusive: for non-buffered, internally handled I/O requests, no
>> good can come from forwarding full batches to qemu when the
>> respective range checking function has indicated that the full
>> request is acceptable. And in fact neither the vHPET nor the
>> vIO-APIC code generates X86EMUL_UNHANDLEABLE. The vLAPIC code
>> only appears to do so - I'll submit a patch to make this obvious
>> once it has been tested.
>> 
>> Otoh stdvga_intercept_pio() uses X86EMUL_UNHANDLEABLE in
>> a manner similar to the vMSI-X code - for internal caching and
>> then forwarding to qemu. Clearly that is also broken for
>> REP OUTS, and hence a similar rep count reduction is going to
>> be needed for the port I/O case.
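
Again purely as an illustration (made-up helper names, re-using the
stand-in types from the sketch above), the port I/O side would want
something along these lines:

    /* Sketch only: a stdvga-like PIO intercept that caches state
     * internally and then has the access forwarded to qemu.  Reducing
     * the rep count before returning keeps a REP OUTS from sending
     * the whole batch out in one go. */

    /* Made-up helper: stash the first element in the internal cache. */
    void cache_one_element(const ioreq_sketch_t *p);

    int pio_cache_and_forward(ioreq_sketch_t *p)
    {
        cache_one_element(p);

        if ( p->count > 1 )
            p->count = 1;             /* only what was cached gets forwarded */

        return X86EMUL_UNHANDLEABLE;  /* caller forwards to the device model */
    }

Where exactly that reduction ends up living (in the individual
handler or in the generic dispatch code) is of course a detail.
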
> 
> This suggests that such cache-and/or-forward models should probably sit
> somewhere else in the flow, possibly being invoked from hvm_send_ioreq(),
> since there should indeed be a selected ioreq server for these cases.

I don't really think so. While going through and carrying out what
I had described above, I believe I managed to address at least one
more issue with improperly handled rep counts, and hence I think
doing it that way is correct. I'll have to test the result before
I can send it out for you to take a look.

Jan



 

