
Re: [Xen-devel] [PATCH 2/2] x86/hvm/emulate: make sure rep I/O emulation does not cross GFN boundaries


  • To: 'Jan Beulich' <JBeulich@xxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Fri, 10 Aug 2018 12:10:24 +0000
  • Accept-language: en-GB, en-US
  • Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 10 Aug 2018 12:10:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHUMJYdCPLMt01W9UiTpYtqWC0MMqS4wFyAgAAivcA=
  • Thread-topic: [PATCH 2/2] x86/hvm/emulate: make sure rep I/O emulation does not cross GFN boundaries

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 10 August 2018 12:59
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-devel <xen-
> devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: Re: [PATCH 2/2] x86/hvm/emulate: make sure rep I/O emulation
> does not cross GFN boundaries
> 
> >>> On 10.08.18 at 12:37, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -184,8 +184,23 @@ static int hvmemul_do_io(
> >          hvmtrace_io_assist(&p);
> >      }
> >
> > -    vio->io_req = p;
> > +    /*
> > +     * Make sure that we truncate rep MMIO at any GFN boundary. This is
> > +     * necessary to ensure that the correct device model is targeted
> > +     * or that we correctly handle a rep op spanning MMIO and RAM.
> > +     */
> > +    if ( unlikely(p.count > 1) && p.type == IOREQ_TYPE_COPY )
> > +    {
> > +        unsigned long off = p.addr & ~PAGE_MASK;
> >
> > +        p.count = min_t(unsigned long,
> > +                        p.count,
> > +                        p.df ?
> > +                        (off + p.size) / p.size :
> > +                        (PAGE_SIZE - off) / p.size);
> 
> For misaligned requests you need to make sure p.count doesn't end
> up as zero (which can now happen in the forwards case). Or do you
> rely on callers (hvmemul_do_io_addr() in particular) splitting such
> requests already?

Well, I have a test case where that split does not happen, so we should add a
safety check for p.count == 0 at this point.

> Yet in that case it's not clear to me whether
> anything needs changing here in the first place. (Similarly in the
> backwards case I think the first iteration risks crossing a page
> boundary, and then the batch should be clipped to count 1.)
> 

Ok. Sounds like clipping to 1 rep in both circumstances would be best.
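Something along these lines, perhaps (just a sketch of the clamping logic,
not the actual patch; clamp_rep_count and its parameters are illustrative
names mirroring the ioreq fields):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL

/*
 * Illustrative helper: clamp a rep MMIO count so that no iteration of
 * the batch crosses a GFN (page) boundary. addr/size/count/df mirror
 * the corresponding ioreq fields; df set means the reps move backwards.
 */
static unsigned long clamp_rep_count(unsigned long addr, unsigned int size,
                                     unsigned long count, int df)
{
    unsigned long off = addr & (PAGE_SIZE - 1);
    unsigned long max;

    if ( off + size > PAGE_SIZE )
        /*
         * The first access itself straddles a page boundary (this is the
         * case that would otherwise yield max == 0 forwards, or a
         * crossing first iteration backwards): clip to a single rep so
         * it is emulated on its own.
         */
        max = 1;
    else
        max = df ? (off + size) / size        /* reps down to page start */
                 : (PAGE_SIZE - off) / size;  /* reps up to page end */

    return count < max ? count : max;
}
```

With off + size <= PAGE_SIZE both expressions are at least 1, so the
misaligned case is the only one that needs the explicit clip.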

  Paul

> Jan
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
