
Re: [Xen-devel] Overlapped PIO with multiple ioreq_server (Xen4.6.1)



> -----Original Message-----
> From: Paul Durrant
> Sent: 09 May 2016 17:02
> To: Paul Durrant; Paolo Bonzini; Martin Cerveny
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; George Dunlap
> Subject: RE: [Xen-devel] Overlapped PIO with multiple ioreq_server
> (Xen4.6.1)
> 
> > -----Original Message-----
> > From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of
> > Paul Durrant
> > Sent: 09 May 2016 14:00
> > To: Paolo Bonzini; Martin Cerveny
> > Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; George Dunlap
> > Subject: Re: [Xen-devel] Overlapped PIO with multiple ioreq_server
> > (Xen4.6.1)
> >
> > > -----Original Message-----
> > > From: Paolo Bonzini [mailto:pbonzini@xxxxxxxxxx]
> > > Sent: 09 May 2016 13:56
> > > To: Paul Durrant; Martin Cerveny
> > > Cc: George Dunlap; xen-devel@xxxxxxxxxxxxxxxxxxx
> > > Subject: Re: [Xen-devel] Overlapped PIO with multiple ioreq_server
> > > (Xen4.6.1)
> > >
> > >
> > >
> > > On 28/04/2016 13:25, Paul Durrant wrote:
> > > >> Maybe you were lucky and qemu was registered before your own demu
> > > >> emulator.
> > > >
> > > > I guess I was lucky.
> > >
> > > Yeah, QEMU has been doing that since 2013 (commit 3bb28b7, "memory:
> > > Provide separate handling of unassigned io ports accesses", 2013-09-05).
> > >
> > > >> I used your "demu" for testing 2 years ago and am now extending
> > > >> Citrix "vgpu". All was fine up to xen 4.5.2 (with qemu 2.0.2), but
> > > >> the problem began when I switched to 4.6.1 (with qemu 2.2.1); it
> > > >> may just have been lucky registration timing before.
> > > >
> > > > I think Xen should really be spotting range overlaps like this, but
> > > > the QEMU<->Xen interface will clearly need to be fixed to avoid
> > > > over-claiming I/O ports in this way.
> > >
> > > If the handling of unassigned I/O ports is sane in Xen (in QEMU they
> > > return all ones and discard writes),
> >
> > Yes, it does exactly that.
> >
> > > it would be okay to make the
> > > background 0-65535 range conditional on !xen_enabled().  See
> > > memory_map_init() in QEMU's exec.c file.
> > >
> >
> > Cool. Thanks for the tip. Will have a look at that now.
> >
> 
> Looks like creation of the background range is required (when I simply
> #if 0-ed out creating it, QEMU crashed on invocation). So I guess I need to
> be able to spot, from the memory listener callbacks in QEMU's Xen support
> code, when a background range is being added so that it can be ignored. The
> same actually goes for memory as well as I/O, since Xen handles accesses to
> unimplemented MMIO ranges in a similar fashion.
> 

In fact, this patch seems to do the trick for I/O:

diff --git a/xen-hvm.c b/xen-hvm.c
index 039680a..8ab44f0 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -510,8 +510,12 @@ static void xen_io_add(MemoryListener *listener,
                        MemoryRegionSection *section)
 {
     XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;

-    memory_region_ref(section->mr);
+    if (mr->ops == &unassigned_io_ops)
+        return;
+
+    memory_region_ref(mr);

     xen_map_io_section(xen_xc, xen_domid, state->ioservid, section);
 }
@@ -520,10 +524,14 @@ static void xen_io_del(MemoryListener *listener,
                        MemoryRegionSection *section)
 {
     XenIOState *state = container_of(listener, XenIOState, io_listener);
+    MemoryRegion *mr = section->mr;
+
+    if (mr->ops == &unassigned_io_ops)
+        return;

     xen_unmap_io_section(xen_xc, xen_domid, state->ioservid, section);

-    memory_region_unref(section->mr);
+    memory_region_unref(mr);
 }
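
For completeness, the region being skipped above is the background one backed
by unassigned_io_ops, i.e. the catch-all for ports that no device claims. As
Paolo notes above, the behaviour for such ports (in QEMU, and in Xen's own
default handling) is to return all ones on reads and discard writes. Below is
a minimal sketch of those semantics, purely for illustration: the function
names are made up, and uint64_t stands in for hwaddr so the snippet is
self-contained. It only mirrors the shape of MemoryRegionOps read/write
callbacks; it is not QEMU's actual unassigned_io_ops implementation.

#include <stdint.h>

/* Illustration only: what an access to an unclaimed I/O port sees.
 * 'size' is the access width in bytes (1, 2, 4 or 8). */
static uint64_t unclaimed_port_read(void *opaque, uint64_t addr, unsigned size)
{
    /* Reads of unassigned ports return all ones for the access width. */
    return ~0ULL >> (64 - size * 8);
}

static void unclaimed_port_write(void *opaque, uint64_t addr,
                                 uint64_t val, unsigned size)
{
    /* Writes to unassigned ports are silently discarded. */
}

With the background region filtered out, the ioreq server backing QEMU only
claims the port ranges its emulated devices actually register, so a second
ioreq server (demu, vgpu, etc.) can register its own ranges without the
overlap, and ports that nobody claims fall through to the default handling
sketched above.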

  Paul

>   Paul
> 
> > Cheers,
> >
> >   Paul
> >
> > > Paolo
> >
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

