
Re: [Xen-devel] [PATCH 2/5] x86/HVM: fix direct PCI port I/O emulation retry and error handling



On 30/09/13 13:57, Jan Beulich wrote:
> dpci_ioport_{read,write}() guest memory access failure handling should
> be modelled after process_portio_intercept()'s (and others): Upon
> encountering an error on other than the first iteration, the count
> successfully handled needs to be stored and X86EMUL_OKAY returned, in
> order for the generic instruction emulator to update register state
> correctly before reporting failure or retrying (both of which would
> only happen after re-invoking emulation).
>
> Further we leverage (and slightly extend, due to the above mentioned
> need to return X86EMUL_OKAY) the "large MMIO" retry model.
>
> Note that there is still a special case not explicitly taken care of
> here: While the first retry on the last iteration of a "rep ins"
> correctly recovers the already read data, any subsequent retry
> is handled by the pre-existing mmio-large logic (through
> hvmemul_do_io() storing the [recovered] data [again], also taking into
> consideration that the emulator converts a single iteration "ins" to
> ->read_io() plus ->write()).
>
> Also fix an off-by-one in the mmio-large-read logic, and slightly
> simplify the copying of the data.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
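
(Purely as an aside for anyone not steeped in the emulation code: the
model Jan describes above boils down to roughly the sketch below.  This
is illustrative only, not the actual Xen code, and do_one_rep() is a
made-up placeholder for the per-iteration port access and guest-memory
copy.)

static int rep_io_sketch(ioreq_t *p)
{
    int rc = X86EMUL_OKAY;
    uint64_t i;

    for ( i = 0; i < p->count; i++ )
    {
        /* One iteration of the rep: port access plus guest-memory copy. */
        rc = do_one_rep(p, i);              /* hypothetical helper */
        if ( rc != X86EMUL_OKAY )
            break;
    }

    if ( i != 0 )
    {
        /*
         * Some iterations completed before the error/retry was hit:
         * report that count and return X86EMUL_OKAY so the emulator can
         * update rSI/rDI/rCX first; the failure or retry then surfaces
         * when emulation is re-invoked.
         */
        p->count = i;
        rc = X86EMUL_OKAY;
    }

    return rc;
}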

One trivial thought comes to mind which you could easily do when
committing the patch...

> @@ -316,22 +325,51 @@ static int dpci_ioport_read(uint32_t mpo
>  
>          if ( p->data_is_ptr )
>          {
> -            int ret;
> -            ret = hvm_copy_to_guest_phys(p->data + step * i, &data, p->size);
> -            if ( (ret == HVMCOPY_gfn_paged_out) ||
> -                 (ret == HVMCOPY_gfn_shared) )
> -                return X86EMUL_RETRY;
> +            switch ( hvm_copy_to_guest_phys(p->data + step * i,
> +                                            &data, p->size) )
> +            {
> +            case HVMCOPY_okay:
> +                break;
> +            case HVMCOPY_gfn_paged_out:
> +            case HVMCOPY_gfn_shared:
> +                rc = X86EMUL_RETRY;
> +                break;
> +            case HVMCOPY_bad_gfn_to_mfn:
> +                /* Drop the write as real hardware would. */
> +                continue;
> +            case HVMCOPY_bad_gva_to_gfn:
> +                ASSERT(0);
> +                /* fall through */
> +            default:
> +                rc = X86EMUL_UNHANDLEABLE;
> +                break;
> +            }
> +            if ( rc != X86EMUL_OKAY)
> +                break;
>          }
>          else
>              p->data = data;
>      }
>      

Nuke the trailing whitespace on the line above here, which will
fractionally increase the size of the hunk below.

~Andrew

> -    return X86EMUL_OKAY;
> +    if ( rc == X86EMUL_RETRY )
> +    {
> +        vio->mmio_retry = 1;
> +        vio->mmio_large_read_bytes = p->size;
> +        memcpy(vio->mmio_large_read, &data, p->size);
> +    }
> +
> +    if ( i != 0 )
> +    {
> +        p->count = i;
> +        rc = X86EMUL_OKAY;
> +    }
> +
> +    return rc;
>  }
>  
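
(And for completeness: the state saved into vio->mmio_large_read above
is what allows the retried invocation to hand back the bytes already
read instead of re-issuing the port access.  Conceptually the consumer
side in hvmemul_do_io() looks something like the fragment below; this
is only a sketch -- the real code also matches on the address of the
access, which I've omitted, and dir, size and p_data are assumed local
names.)

    /* Sketch of the retry path consuming the previously saved bytes. */
    if ( dir == IOREQ_READ && vio->mmio_large_read_bytes >= size )
    {
        memcpy(p_data, vio->mmio_large_read, size);
        return X86EMUL_OKAY;
    }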

