
Re: [Xen-devel] [PATCH v2 05/17] x86/hvm: unify stdvga mmio intercept with standard mmio intercept



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 17 June 2015 16:51
> To: Paul Durrant
> Cc: Andrew Cooper; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
> Subject: Re: [PATCH v2 05/17] x86/hvm: unify stdvga mmio intercept with
> standard mmio intercept
> 
> >>> On 11.06.15 at 17:42, <paul.durrant@xxxxxxxxxx> wrote:
> > It's clear from the following check in hvmemul_rep_movs:
> >
> >     if ( sp2mt == p2m_mmio_direct || dp2mt == p2m_mmio_direct ||
> >          (sp2mt == p2m_mmio_dm && dp2mt == p2m_mmio_dm) )
> >         return X86EMUL_UNHANDLEABLE;
> >
> > that mmio <-> mmio copy is not handled. This means the code in the
> > stdvga mmio intercept that explicitly handles mmio <-> mmio copy when
> > hvm_copy_to/from_guest_phys() fails is never going to be executed.
> 
> Looking at the code I agree with the analysis, but I doubt the code
> was put there originally just for fun, and I wonder whether some of
> the slowness we "gained" over the years in boot loader graphics
> performance isn't related to this code having got hooked off. And in
> the end we should get to the point where MMIO -> MMIO transfers
> get emulated efficiently anyway, so I wonder whether this patch
> moves us in the right direction.
> 

I think removing the stdvga code is the right thing to do. If we want to handle 
MMIO <-> MMIO then this needs to be done in a generic fashion. Given that 
(upstream) QEMU translates the 'data_is_addr' addresses through its memory map, 
as well as the actual ioreq addr values, I believe it will already do MMIO <-> 
MMIO moves as it stands. So it's probably only Xen's code that needs modifying, 
and I think that can be done fairly easily after this series. I'll see if I can 
come up with something.
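
Roughly what I have in mind on the Xen side (purely illustrative, untested) is 
to narrow the bail-out in hvmemul_rep_movs() so that only the direct-MMIO cases 
fail, leaving the dm <-> dm case to fall through to the existing p2m_mmio_dm 
handling:

    /* Illustrative sketch only - not a worked-out patch. */
    if ( sp2mt == p2m_mmio_direct || dp2mt == p2m_mmio_direct )
        return X86EMUL_UNHANDLEABLE;

    /*
     * With the dm <-> dm case no longer rejected here, the existing
     * p2m_mmio_dm paths that follow would issue the ioreq with the other
     * side's gpa as the data address, which the device model then
     * translates through its memory map.
     */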

> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -273,6 +273,15 @@ static int hvmemul_acquire_page(unsigned long
> gmfn, struct page_info **page)
> >          return X86EMUL_RETRY;
> >      }
> >
> > +    /* This code should not be reached if the gmfn is not RAM */
> > +    if ( p2m_is_mmio(p2mt) )
> > +    {
> > +        domain_crash(d);
> 
> Log a message at least? Silent domain death is rather ugly to analyze.

You do get a file and line number (since domain_crash() is actually a macro 
that includes a printk) but I can also log something more descriptive.
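
E.g. adding something like this just ahead of the domain_crash() (the message 
wording is illustrative; both p2mt and gmfn are in scope at that point):

        gdprintk(XENLOG_ERR, "unexpected MMIO p2m type %u for gmfn %lx\n",
                 (unsigned int)p2mt, gmfn);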

> 
> > --- a/xen/arch/x86/hvm/stdvga.c
> > +++ b/xen/arch/x86/hvm/stdvga.c
> > @@ -275,7 +275,8 @@ static uint8_t stdvga_mem_readb(uint64_t addr)
> >      return ret;
> >  }
> >
> > -static uint64_t stdvga_mem_read(uint64_t addr, uint64_t size)
> > +static int stdvga_mem_read(struct vcpu *v, unsigned long addr,
> > +                           unsigned long size, unsigned long *p_data)
> >  {
> >      uint64_t data = 0;
> >
> > @@ -313,7 +314,9 @@ static uint64_t stdvga_mem_read(uint64_t addr,
> uint64_t size)
> >          break;
> >      }
> >
> > -    return data;
> > +    *p_data = data;
> 
> Even if sizeof(long) == sizeof(uint64_t), please try to use
> consistent types (i.e. the new function parameter should be of
> uint64_t * type).
> 

Ok.
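
I.e. something like (illustrative):

    static int stdvga_mem_read(struct vcpu *v, unsigned long addr,
                               unsigned long size, uint64_t *p_data)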

> > +    if (!hvm_buffered_io_send(&p))
> > +        return X86EMUL_UNHANDLEABLE;
> 
> Coding style.
> 

True.
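
I.e.:

    if ( !hvm_buffered_io_send(&p) )
        return X86EMUL_UNHANDLEABLE;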

> > +static int stdvga_mem_check(struct vcpu *v, unsigned long addr)
> >  {
> > +    struct hvm_hw_stdvga *s = &v->domain->arch.hvm_domain.stdvga;
> > +    int rc;
> >
> >      spin_lock(&s->lock);
> >
> > +    rc = s->stdvga && s->cache &&
> > +        (addr >= VGA_MEM_BASE) &&
> > +        (addr < (VGA_MEM_BASE + VGA_MEM_SIZE));
> 
> This (compared with the code being removed) makes clear that the
> check() mechanism lacks the size (it did so before, but with your
> generalization I think this needs to be corrected).
> 
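
True, the size isn't taken into account. If the check() hook grows a size 
parameter, the range test would become something like (illustrative sketch 
only):

    rc = s->stdvga && s->cache &&
         (addr >= VGA_MEM_BASE) &&
         ((addr + size) <= (VGA_MEM_BASE + VGA_MEM_SIZE));
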
> > +const struct hvm_mmio_ops stdvga_mem_ops = {
> 
> static
> 

Ok.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel