
Re: [Xen-devel] [PATCH v4 17/17] x86/hvm: track large memory mapped accesses by buffer offset



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 25 June 2015 11:47
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v4 17/17] x86/hvm: track large memory mapped
> accesses by buffer offset
> 
> >>> On 24.06.15 at 13:24, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -621,14 +574,41 @@ static int hvmemul_phys_mmio_access(
> >
> >      for ( ;; )
> >      {
> > -        rc = hvmemul_do_mmio_buffer(gpa, &one_rep, chunk, dir, 0,
> > -                                    *buffer);
> > -        if ( rc != X86EMUL_OKAY )
> > -            break;
> > +        /* Have we already done this chunk? */
> > +        if ( (*off + chunk) <= vio->mmio_cache[dir].size )
> 
> I can see why you would like to get rid of the address check, but
> I'm afraid you can't: you have to avoid mixing up the multiple
> same-kind (read or write) memory accesses that a single
> instruction can do. While generally I would assume that
> secondary accesses (like the I/O bitmap read associated with an
> OUTS) wouldn't go to MMIO, CMPS with both operands in
> MMIO would break even if neither crosses a page boundary
> (not to mention when the emulator starts supporting the
> scatter/gather instructions, although supporting them will require
> further changes, or we could choose to do them one element at
> a time).

Ok. Can I assume at most two distinct sets of addresses for read or write? If so
then I can just keep two sets of caches in the hvm_io struct.
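
Something along these lines, perhaps (a rough sketch only: apart from
mmio_cache's buffer and size fields, which are in the patch as posted,
all names here are illustrative):

struct hvm_mmio_cache {
    unsigned long gla;        /* guest address the access started at */
    unsigned int  size;       /* bytes of buffer[] filled in so far */
    uint8_t       dir;        /* IOREQ_READ or IOREQ_WRITE */
    uint8_t       buffer[32];
};

/* In struct hvm_vcpu_io: */
unsigned int mmio_cache_count;
struct hvm_mmio_cache mmio_cache[2]; /* two, per the question above */

static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir)
{
    unsigned int i;
    struct hvm_mmio_cache *cache;

    /* Re-use the entry for this (address, direction) pair if we have one... */
    for ( i = 0; i < vio->mmio_cache_count; i++ )
    {
        cache = &vio->mmio_cache[i];

        if ( gla == cache->gla && dir == cache->dir )
            return cache;
    }

    /* ...otherwise claim a fresh one, so that distinct addresses get
     * distinct caches, which should cover the CMPS case above. */
    if ( vio->mmio_cache_count == ARRAY_SIZE(vio->mmio_cache) )
        return NULL; /* caller would have to treat this as an error */

    cache = &vio->mmio_cache[vio->mmio_cache_count++];
    memset(cache, 0, sizeof(*cache));
    cache->gla = gla;
    cache->dir = dir;

    return cache;
}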

> 
> > +        {
> > +            ASSERT(*off + chunk <= vio->mmio_cache[dir].size);
> 
> I don't see any difference to the if() expression just above.
> 

That's possible; this has been through a few rebases.

> > +            if ( dir == IOREQ_READ )
> > +                memcpy(&buffer[*off],
> > +                       &vio->mmio_cache[IOREQ_READ].buffer[*off],
> > +                       chunk);
> > +            else
> > +            {
> > +                if ( memcmp(&buffer[*off],
> 
> "else if" please.
> 

Ok.
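
I.e. the write side collapses to:

else if ( memcmp(&buffer[*off],
                 &vio->mmio_cache[IOREQ_WRITE].buffer[*off],
                 chunk) != 0 )
    domain_crash(curr->domain);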

> > +                            &vio->mmio_cache[IOREQ_WRITE].buffer[*off],
> > +                            chunk) != 0 )
> > +                    domain_crash(curr->domain);
> > +            }
> > +        }
> > +        else
> > +        {
> > +            ASSERT(*off == vio->mmio_cache[dir].size);
> > +
> > +            rc = hvmemul_do_mmio_buffer(gpa, &one_rep, chunk, dir, 0,
> > +                                        &buffer[*off]);
> > +            if ( rc != X86EMUL_OKAY )
> > +                break;
> > +
> > +            /* Note that we have now done this chunk */
> 
> Missing stop.
> 

Ok.

  Paul

> Jan

