
Re: [Xen-devel] [PATCH v5 02/16] x86/hvm: remove multiple open coded 'chunking' loops



> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: 02 July 2015 16:38
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Keir (Xen.org); Jan Beulich
> Subject: Re: [PATCH v5 02/16] x86/hvm: remove multiple open coded 'chunking' loops
> 
> On 30/06/15 14:05, Paul Durrant wrote:
> > ...in hvmemul_read/write()
> >
> > Add hvmemul_phys_mmio_access() and hvmemul_linear_mmio_access() functions
> > to reduce code duplication.
> >
> > NOTE: This patch also introduces a change in 'chunking' around a page
> >       boundary. Previously (for example) an 8 byte access at the last
> >       byte of a page would get carried out as 8 single-byte accesses.
> >       It will now be carried out as a single-byte access, followed by
> >       a 4-byte access, a 2-byte access and then another single-byte
> >       access.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> > Cc: Keir Fraser <keir@xxxxxxx>
> > Cc: Jan Beulich <jbeulich@xxxxxxxx>
> > Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > ---
> >  xen/arch/x86/hvm/emulate.c |  223 +++++++++++++++++++++++---------------------
> >  1 file changed, 116 insertions(+), 107 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> > index 8b60843..b67f5db 100644
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -539,6 +539,117 @@ static int hvmemul_virtual_to_linear(
> >      return X86EMUL_EXCEPTION;
> >  }
> >
> > +static int hvmemul_phys_mmio_access(
> > +    paddr_t gpa, unsigned int size, uint8_t dir, uint8_t *buffer)
> > +{
> > +    unsigned long one_rep = 1;
> > +    unsigned int chunk;
> > +    int rc;
> > +
> > +    /* Accesses must fall within a page */
> 
> Full stop.
> 
> > +    BUG_ON((gpa & (PAGE_SIZE - 1)) + size > PAGE_SIZE);
> 
> ~PAGE_MASK as opposed to (PAGE_SIZE - 1)
> 
> > +
> > +    /*
> > +     * hvmemul_do_io() cannot handle non-power-of-2 accesses or
> > +     * accesses larger than sizeof(long), so choose the highest power
> > +     * of 2 not exceeding sizeof(long) as the 'chunk' size.
> > +     */
> > +    chunk = 1 << (fls(size) - 1);
> 
> Depending on size, chunk can become undefined (shifting by 31 or -1) or
> zero (shifting by 32).
> 
> How about
> 
> if ( size > sizeof(long) )
>     chunk = sizeof(long);
> else
>     chunk = 1U << (fls(size) - 1);
> 

fls(size) - 1 can't be more than 31 (since size is an unsigned int), and I can
assert that size != 0. So would

chunk = 1u << (fls(size) - 1);

be ok? (I.e. I just missed the 'u' suffix before.)

> ?
> 
> > +    if ( chunk > sizeof (long) )
> > +        chunk = sizeof (long);
> > +
> > +    for ( ;; )
> > +    {
> > +        rc = hvmemul_do_mmio_buffer(gpa, &one_rep, chunk, dir, 0,
> > +                                    buffer);
> > +        if ( rc != X86EMUL_OKAY )
> > +            break;
> > +
> > +        /* Advance to the next chunk */
> 
> Full stop.
> 
> > +        gpa += chunk;
> > +        buffer += chunk;
> > +        size -= chunk;
> > +
> > +        if ( size == 0 )
> > +            break;
> > +
> > +        /*
> > +         * If the chunk now exceeds the remaining size, choose the next
> > +         * lowest power of 2 that will fit.
> > +         */
> > +        while ( chunk > size )
> > +            chunk >>= 1;
> > +    }
> > +
> > +    return rc;
> > +}
> > +
> > +static int hvmemul_linear_mmio_access(
> > +    unsigned long gla, unsigned int size, uint8_t dir, uint8_t *buffer,
> > +    uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
> > +{
> > +    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
> > +    unsigned long page_off = gla & (PAGE_SIZE - 1);
> 
> Can be int as opposed to long, and "offset" appears to be the prevailing
> name.  Also, ~PAGE_MASK.

Ok.

> 
> > +    unsigned int chunk;
> > +    paddr_t gpa;
> > +    unsigned long one_rep = 1;
> > +    int rc;
> > +
> > +    chunk = min_t(unsigned int, size, PAGE_SIZE - page_off);
> > +
> > +    if ( known_gpfn )
> > +        gpa = pfn_to_paddr(vio->mmio_gpfn) | page_off;
> > +    else
> > +    {
> > +        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
> > +                                    hvmemul_ctxt);
> > +        if ( rc != X86EMUL_OKAY )
> > +            return rc;
> > +    }
> > +
> > +    for ( ;; )
> > +    {
> > +        rc = hvmemul_phys_mmio_access(gpa, chunk, dir, buffer);
> > +        if ( rc != X86EMUL_OKAY )
> > +            break;
> > +
> > +        gla += chunk;
> > +        buffer += chunk;
> > +        size -= chunk;
> > +
> > +        if ( size == 0 )
> > +            break;
> > +
> > +        ASSERT((gla & (PAGE_SIZE - 1)) == 0);
> 
> ~PAGE_MASK.
> 
> > +        ASSERT(size < PAGE_SIZE);
> 
> Nothing I can see here prevents size being greater than PAGE_SIZE.
> chunk strictly will be, but size -= chunk can still leave size greater
> than a page.
> 

Ok, I'll allow for size >= PAGE_SIZE.

  Paul

> ~Andrew
> 
> > +        chunk = size;
> > +        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
> > +                                    hvmemul_ctxt);
> > +        if ( rc != X86EMUL_OKAY )
> > +            return rc;
> > +    }
> > +
> > +    return rc;
> > +}
> > +


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

