
Re: [Xen-devel] [PATCH v3 3/3] hvmemul_do_io: Do not retry if no ioreq server exists for this I/O.



>>> On 10.02.15 at 23:52, <dslutz@xxxxxxxxxxx> wrote:
> This saves a VMENTRY and a VMEXIT since we no longer retry the
> ioport read on backing DM not handling a given ioreq.
> 
> There are 2 cases of "no ioreq server exists for this I/O":
> 
> 1) No ioreq servers (PVH case)
> 2) No ioreq servers for this I/O (non PVH case)
> 
> The routine hvm_has_dm() only checked for an empty ioreq server
> list, i.e. the PVH case (#1).
> 
> Switching from hvm_has_dm() to hvm_select_ioreq_server() covers
> both cases.  Doing it this way allows hvm_send_assist_req() to
> have only 2 possible return values.
> 
> The key part of skipping the retry is to do "rc = X86EMUL_OKAY",
> which is what the error path of the hvm_has_dm() check in
> hvmemul_do_io() (the only caller of hvm_has_dm()) already does.
> 
> Since this case is no longer handled in hvm_send_assist_req(), move
> the call to hvm_complete_assist_req() into hvmemul_do_io().
> 
> As part of this change, do the work of hvm_complete_assist_req()
> in the PVH case as well.  Acting more like real hardware seems
> better.
> 
> Adding "rc = X86EMUL_OKAY" in the failing case of
> hvm_send_assist_req() would break what was done in commit
> bac0999325056a3b3a92f7622df7ffbc5388b1c3 and commit
> f20f3c8ece5c10fa7626f253d28f570a43b23208.  We currently take the
> succeeding case of hvm_send_assist_req() and retry the I/O.
> 
> Since hvm_select_ioreq_server() has already been called, switch to
> using hvm_send_assist_req_to_ioreq_server().
> 
> Since there are no longer any callers of hvm_send_assist_req(),
> drop that routine and rename hvm_send_assist_req_to_ioreq_server()
> to hvm_send_assist_req().
> 
> Since hvm_send_assist_req() is an extern, add an ASSERT() on s.
> 
> Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
> ---
> Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

So Paul, does your R-b stand despite the code changes in v3?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
