
Re: [Xen-devel] hvmemul_rep_movs() vs MMIO



At 13:41 +0100 on 20 Sep (1379684515), Jan Beulich wrote:
> >>> On 20.09.13 at 14:05, Tim Deegan <tim@xxxxxxx> wrote:
> > At 12:46 +0100 on 20 Sep (1379681171), Jan Beulich wrote:
> >> Tim,
> >> 
> >> was it really intended for "x86/hvm: use unlocked p2m lookups in
> >> hvmemul_rep_movs()" to special case p2m_mmio_dm but not
> >> p2m_mmio_direct?
> > 
> > Hmm.  It certainly doesn't seem to handle that case very well now, but
> > I'm not sure the code before was any better.  AFAICT it would have
> > passed mmio_direct accesses to hvmemul_do_mmio(), which would send them
> > to qemu.
> 
> Hmm, wait - if MMIO of a passed-through device (other than its
> port I/O) doesn't get intercepted at all, but instead gets taken
> care of by there being a valid gfn->mfn translation in place, then
> indeed things weren't handled well either before or after your
> change.  Perhaps we should bail from there if either side is
> p2m_mmio_direct, as well as if both sides are p2m_mmio_dm:

Sounds OK to me.  Clearly mmio-mmio string operations aren't something
we need to make go fast. :)

Reviewed-by: Tim Deegan <tim@xxxxxxx>

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -799,6 +799,10 @@ static int hvmemul_rep_movs(
>      (void) get_gfn_query_unlocked(current->domain, sgpa >> PAGE_SHIFT, &sp2mt);
>      (void) get_gfn_query_unlocked(current->domain, dgpa >> PAGE_SHIFT, &dp2mt);
>  
> +    if ( sp2mt == p2m_mmio_direct || dp2mt == p2m_mmio_direct ||
> +         (sp2mt == p2m_mmio_dm && dp2mt == p2m_mmio_dm) )
> +        return X86EMUL_UNHANDLEABLE;
> +
>      if ( sp2mt == p2m_mmio_dm )
>          return hvmemul_do_mmio(
>              sgpa, reps, bytes_per_rep, dgpa, IOREQ_READ, df, NULL);
> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

