
RE: [Xen-devel] RE: Rather slow time of Pin in Windows with GPL PVdriver



I did post a patch ages ago. It was deemed a bit too hacky. I think it would 
probably be better to re-examine the way Windows PV drivers are handling 
interrupts. It would be much nicer if we could properly bind event channels 
across all our vCPUs; we may be able to leverage what Stefano did for Linux 
PV-on-HVM.
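
For illustration, re-pointing an event channel at a different vCPU is a
single EVTCHNOP_bind_vcpu operation in the Xen public interface. A
minimal sketch, assuming a Linux-style HYPERVISOR_event_channel_op()
wrapper (the Windows drivers would go through their own hypercall
stubs, so take the names loosely):

    /* Sketch: steer delivery of an already-allocated event channel to
     * a given vCPU. EVTCHNOP_bind_vcpu and struct evtchn_bind_vcpu are
     * from the Xen public headers; the wrapper name is an assumption. */
    #include <xen/interface/event_channel.h>

    static int bind_evtchn_to_vcpu(evtchn_port_t port, unsigned int vcpu)
    {
        struct evtchn_bind_vcpu bind;

        bind.port = port;   /* event channel already bound to this domain */
        bind.vcpu = vcpu;   /* vCPU that should receive the upcall */

        return HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind);
    }

For HVM guests this only helps once per-vCPU event delivery exists,
which is presumably the part of Stefano's PV-on-HVM work to leverage.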

  Paul

> -----Original Message-----
> From: Pasi Kärkkäinen [mailto:pasik@xxxxxx]
> Sent: 10 March 2011 18:23
> To: Paul Durrant
> Cc: James Harper; MaoXiaoyun; xen devel
> Subject: Re: [Xen-devel] RE: Rather slow time of Pin in Windows with
> GPL PVdriver
> 
> On Thu, Mar 10, 2011 at 11:05:56AM +0000, Paul Durrant wrote:
> > It's kind of pointless because you're always having to go to
> > vCPU0's shared info for the event info, so you're just going to
> > keep pinging this between caches all the time. Same holds true of
> > data you access in your DPC if it's constantly moving around.
> > Better IMO to keep locality by default and distribute DPCs
> > accessing distinct data explicitly.
> >
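
To make "distribute DPCs accessing distinct data explicitly" concrete,
a minimal sketch using the documented WDK routines would be to give
each queue its own DPC pinned to the CPU that owns that queue's data.
RX_QUEUE and RxQueueDpcRoutine are invented names, not the real driver
code:

    #include <ntddk.h>

    /* Sketch: one DPC per receive queue, bound to the processor that
     * owns the queue, so the DPC and the data it touches stay on the
     * same cache. RX_QUEUE / RxQueueDpcRoutine are hypothetical. */
    typedef struct _RX_QUEUE {
        KDPC Dpc;
        /* ... per-queue ring state ... */
    } RX_QUEUE;

    KDEFERRED_ROUTINE RxQueueDpcRoutine;

    VOID RxQueueBindDpc(RX_QUEUE *Queue, CCHAR Cpu)
    {
        KeInitializeDpc(&Queue->Dpc, RxQueueDpcRoutine, Queue);
        KeSetTargetProcessorDpc(&Queue->Dpc, Cpu);  /* always run on 'Cpu' */
    }

Anything the DPC dereferences would then be allocated and touched only
from that CPU, which is the locality point being made above.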
> 
> Should this patch be upstreamed then?
> 
> -- Pasi
> 
> >   Paul
> >
> > > -----Original Message-----
> > > From: James Harper [mailto:james.harper@xxxxxxxxxxxxxxxx]
> > > Sent: 10 March 2011 10:41
> > > To: Paul Durrant; MaoXiaoyun
> > > Cc: xen devel
> > > Subject: RE: [Xen-devel] RE: Rather slow time of Pin in Windows
> > > with GPL PVdriver
> > >
> > > >
> > > > Yeah, you're right. We have a patch in XenServer to just use
> > > > the lowest numbered vCPU, but in unstable it still pointlessly
> > > > round-robins. Thus, if you bind DPCs and don't set their
> > > > importance up, you will end up with them not being immediately
> > > > scheduled quite a lot of the time.
> > > >
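
The "set their importance up" part is a single documented WDK call; a
bound DPC queued from another CPU is generally only dispatched
immediately on its target processor if it is marked high importance. A
minimal sketch:

    #include <ntddk.h>

    /* Sketch: without this, a DPC targeted at a remote processor can
     * sit in that processor's DPC queue until the queue is flushed for
     * some other reason, which is the delay described above. */
    static VOID MarkDpcHighImportance(PKDPC Dpc)
    {
        KeSetImportanceDpc(Dpc, HighImportance);
    }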
> > >
> > > You say "pointlessly round robins"... why is the behaviour
> > > considered pointless? (assuming you don't use bound DPCs)
> > >
> > > I'm looking at my networking code, and if I could schedule DPCs
> > > on processors on a round-robin basis (e.g. because the IRQs are
> > > submitted on a round-robin basis), one CPU could grab the rx
> > > ring lock, pull the data off the ring into local buffers,
> > > release the lock, then process the local buffers (build packets,
> > > submit to NDIS, etc.). While the first CPU is processing
> > > packets, another CPU can then start servicing the ring too.
> > >
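
That lock/drain/process split might look roughly like the following
sketch; every name here (RX_ADAPTER, RxRingPullOne,
BuildAndIndicatePacket) is a placeholder for whatever the driver
actually uses, not the real GPLPV code:

    #include <ntddk.h>

    typedef struct _RX_ADAPTER {          /* hypothetical per-adapter state */
        KSPIN_LOCK RxRingLock;
        /* ... ring pointers, grant refs, etc. ... */
    } RX_ADAPTER;

    PLIST_ENTRY RxRingPullOne(RX_ADAPTER *Adapter);                       /* placeholder */
    VOID BuildAndIndicatePacket(RX_ADAPTER *Adapter, PLIST_ENTRY Entry);  /* placeholder */

    /* Sketch: drain the shared rx ring into a CPU-local list under the
     * lock, then do the per-packet work with the lock dropped so a DPC
     * on another CPU can start servicing the ring in parallel. */
    VOID RxDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
    {
        RX_ADAPTER *Adapter = Context;
        LIST_ENTRY Local;
        PLIST_ENTRY Entry;

        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(Arg1);
        UNREFERENCED_PARAMETER(Arg2);

        InitializeListHead(&Local);

        /* Short critical section: just move work items off the ring. */
        KeAcquireSpinLockAtDpcLevel(&Adapter->RxRingLock);
        while ((Entry = RxRingPullOne(Adapter)) != NULL) {
            InsertTailList(&Local, Entry);
        }
        KeReleaseSpinLockFromDpcLevel(&Adapter->RxRingLock);

        /* Lock released: build packets and indicate them to NDIS while
         * another CPU is free to take the lock and drain new entries. */
        while (!IsListEmpty(&Local)) {
            Entry = RemoveHeadList(&Local);
            BuildAndIndicatePacket(Adapter, Entry);
        }
    }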
> > > If Xen is changed to always send the IRQ to CPU zero, then I'd
> > > have to start round-robining DPCs myself if I wanted to do it
> > > that way...
> > >
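
If it did come to that, the round-robining itself is cheap; a sketch
(NextRxCpu and QueueRxDpcRoundRobin are invented names, and it assumes
the DPC is not already queued when it is retargeted):

    #include <ntddk.h>

    static LONG NextRxCpu;   /* hypothetical per-driver counter */

    /* Sketch: retarget the DPC at the next CPU in round-robin order
     * before queueing it. KeSetTargetProcessorDpc must not be called
     * while the DPC is still sitting in a DPC queue. */
    VOID QueueRxDpcRoundRobin(PKDPC Dpc)
    {
        ULONG cpus = KeQueryActiveProcessorCount(NULL);   /* Vista and later */
        CCHAR target = (CCHAR)((ULONG)InterlockedIncrement(&NextRxCpu) % cpus);

        KeSetTargetProcessorDpc(Dpc, target);
        KeSetImportanceDpc(Dpc, HighImportance);  /* see the note above */
        KeInsertQueueDpc(Dpc, NULL, NULL);
    }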
> > > Currently I'm suffering a bit from the small ring sizes not
> > > being able to hold enough buffers to keep packets flowing
> > > quickly in all situations.
> > >
> > > James
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

