
Re: [Xen-devel] [PATCH v2] x86/HVM: fix forwarding of internally cached requests



(re-adding xen-devel)

>>> On 21.04.16 at 09:48, <changjzh@xxxxxxxxx> wrote:
> 2016-04-21 14:24 GMT+08:00 Jan Beulich <JBeulich@xxxxxxxx>:
> 
>> >>> On 30.03.16 at 09:28, <changjzh@xxxxxxxxx> wrote:
>> > 2016-03-29 18:39 GMT+08:00 Jan Beulich <JBeulich@xxxxxxxx>:
>> >> Forwarding entire batches to the device model when an individual
>> >> iteration of them got rejected by internal device emulation handlers
>> >> with X86EMUL_UNHANDLEABLE is wrong: The device model would then handle
>> >> all iterations, without the internal handler getting to see any past
>> >> the one it returned failure for. This causes misbehavior in at least
>> >> the MSI-X and VGA code, which want to see all such requests for
>> >> internal tracking/caching purposes. But note that this does not apply
>> >> to buffered I/O requests.
>> >>
>> >> This in turn means that the condition in hvm_process_io_intercept()
>> >> for when to crash the domain was wrong: since X86EMUL_UNHANDLEABLE
>> >> can validly be returned by the individual device handlers, we must
>> >> not blindly crash the domain if such a return occurs on other than
>> >> the initial iteration. Instead we need to distinguish
>> >> hvm_copy_*_guest_phys() failures from device-specific ones; the
>> >> former always need to be fatal to the domain (i.e. also on the first
>> >> iteration), since otherwise we would again end up forwarding to qemu
>> >> a request which the internal handler didn't get to see.
>> >>
>> >> The adjustment should be okay even for stdvga's MMIO handling:
>> >> - if it is not caching then the accept function would have failed so we
>> >>   won't get into hvm_process_io_intercept(),
>> >> - if it issued the buffered ioreq then we only get to the p->count
>> >>   reduction if hvm_send_ioreq() actually encountered an error (in
>> >>   which case we don't care about the request getting split up).
>> >>
>> >> Also commit 4faffc41d ("x86/hvm: limit reps to avoid the need to handle
>> >> retry") went too far in removing code from hvm_process_io_intercept():
>> >> When there were successfully handled iterations, the function should
>> >> continue to return success with a clipped repeat count.
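To make the intended behaviour concrete, the following is a minimal,
self-contained sketch of the dispatch-loop logic argued for above. It is
not the actual Xen patch: ioreq_t is cut down and handle_one() is a
hypothetical stand-in for the per-iteration handler.

/*
 * Sketch only.  A per-iteration handler may reject an individual
 * iteration of a rep access; if earlier iterations succeeded, the
 * repeat count is clipped and success reported for those, instead of
 * forwarding the whole batch to the device model.
 */
#include <stdint.h>
#include <stdio.h>

enum {
    X86EMUL_OKAY,           /* iteration handled internally */
    X86EMUL_UNHANDLEABLE,   /* handler rejected this iteration */
};

typedef struct {
    uint64_t addr;
    uint32_t size;
    uint32_t count;         /* repetitions in the batch */
} ioreq_t;

/* Hypothetical per-iteration handler: accepts only the first few. */
static int handle_one(const ioreq_t *p, uint32_t i)
{
    (void)p;                /* unused in this toy handler */
    return i < 3 ? X86EMUL_OKAY : X86EMUL_UNHANDLEABLE;
}

static int process_intercept(ioreq_t *p)
{
    uint32_t i;
    int rc = X86EMUL_OKAY;

    for ( i = 0; i < p->count; i++ )
    {
        /*
         * A failure of the guest-memory copy (hvm_copy_*_guest_phys()
         * in the real code) would be fatal to the domain on any
         * iteration, including the first; that case is omitted here.
         */
        rc = handle_one(p, i);
        if ( rc != X86EMUL_OKAY )
            break;
    }

    if ( i != 0 && rc == X86EMUL_UNHANDLEABLE )
    {
        /*
         * Some iterations were handled internally: clip the repeat
         * count and report success for them, so the internal handler
         * keeps seeing every access it needs for tracking/caching.
         */
        p->count = i;
        rc = X86EMUL_OKAY;
    }

    return rc;
}

int main(void)
{
    ioreq_t p = { .addr = 0xf0000000, .size = 4, .count = 8 };
    int rc = process_intercept(&p);

    printf("rc=%d, count clipped to %u\n", rc, p.count);
    return 0;
}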
>> >>
>> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> >> Cc: Chang Jianzhong <changjzh@xxxxxxxxx>
>> >> ---
>> >> v2: Make hvm_process_io_intercept() always return a valid X86EMUL_*
>> >>     code by moving the domain_crash() invocations up. Add a stdvga
>> >>     related paragraph to the commit message.
>> >> ---
>> >> I assume this also addresses the issue which
>> >> http://lists.xenproject.org/archives/html/xen-devel/2016-01/msg03189.html
>> >> attempted to deal with in a not really acceptable way.
>> >>
>> > I hoped this issue would be resolved, but it still exists.
>>
>> Coming back to this: Have you made any progress with the analysis?
>>
> Xen log messages for the Windows guest show:
>
> hvmemul_do_io() {
>   ...
>   rc = hvm_io_intercept(&p);
>   // rc == 1 (i.e. X86EMUL_UNHANDLEABLE) is printed in the log
>   ...
> }
>
> int hvm_io_intercept(ioreq_t *p)
> {
>   ...
>   handler = hvm_find_io_handler(p); // handler == NULL is printed in the log
>
>   if ( handler == NULL )
>       return X86EMUL_UNHANDLEABLE;
>
>   rc = hvm_process_io_intercept(handler, p); // never reached
>   ...
> }
>
> The lookup chain is
> hvm_find_io_handler(p)
>   -> hvm_mmio_accept(...)
>   -> msixtbl_range(...)
> and no msixtbl_entry can be found for the accessed address.
>
> The Windows guest writes to the MSI-X table before it is registered
> via msixtbl_pt_register(), and does not write to it again after
> registration. This ordering may be the root cause.

Ah, but that's again an issue for which a patch was already posted:
http://lists.xenproject.org/archives/html/xen-devel/2016-04/msg00041.html
(albeit, as said there, not yet taking care of all possible cases).
And it's a pre-existing issue, not one introduced by any halfway
recent change.
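
For illustration, here is a minimal sketch of the ordering problem
discussed above. The names register_msix_table() and msix_range_accept()
are hypothetical stand-ins for msixtbl_pt_register() and the
msixtbl_range()-style accept check; this is not Xen's actual MSI-X code.

/*
 * Sketch only: the accept check can only match once the table range
 * has been registered, so an access arriving earlier finds no handler.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct msix_region {
    uint64_t start, end;
    bool registered;
};

static struct msix_region region;   /* stand-in for per-device state */

/* Records the MSI-X table MMIO range once the device is set up. */
static void register_msix_table(uint64_t start, uint64_t len)
{
    region.start = start;
    region.end = start + len;
    region.registered = true;
}

/* Accept check: can only match after registration has happened. */
static bool msix_range_accept(uint64_t addr)
{
    return region.registered && addr >= region.start && addr < region.end;
}

int main(void)
{
    uint64_t table = 0xf0001000;

    /* A guest write arriving before registration finds no handler, so
     * the access would be forwarded to the device model instead of
     * being seen by the internal MSI-X handler. */
    printf("before registration: accept=%d\n", msix_range_accept(table));

    register_msix_table(table, 0x1000);
    printf("after registration:  accept=%d\n", msix_range_accept(table));
    return 0;
}

If the guest never touches the table again once registration has
happened, as the log excerpt above suggests, the internal handler never
gets to see the accesses it needs for its tracking.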

Jan
