
RE: [Xen-devel] theoretical network rx performance of Windows with PV drivers

  • To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
  • Date: Tue, 18 Nov 2008 23:50:59 +1100
  • Cc:
  • Delivery-date: Tue, 18 Nov 2008 04:51:25 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AclJdZaq6OGodxhGRNi3MA0h9mpWyQABil1rAAAYkYA=
  • Thread-topic: [Xen-devel] theoretical network rx performance of Windows with PV drivers

> On 18/11/08 12:02, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:
> > I'm finding some odd things during development of the GPLPV and am
> > wondering if I'm just expecting too much of a HVM Windows DomU.
> >
> > I'm using iperf for testing, and the most I can get on a 1.8GHz
> > machine out of a Dom0->DomU network performance test is about
> > 500MBits, with Dom0 sending packets at close to 1GBit and about
> > 50% of them being lost. But it's not consistent... things seem to
> > stall at times (some of that may be a driver or a Windows problem -
> > the time between scheduling a Dpc and the Dpc being executed is up
> > to 3 sometimes when this happens...)
> >
> > How much overhead is introduced in the event channel -> HVM IRQ
> > path, as compared to the normal interdomain event channels? I think
> > that the delay there might be bringing me down, but maybe I'm
> > looking in the wrong place?
> I don't think evtchn->IRQ latency is particularly large. But also I
> don't know what else might be causing your erratic behaviour.

That's probably all I needed to know for now. I think it might actually
be Windows that's the problem...
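As a quick sanity check, the figures quoted above are at least internally consistent: a ~1GBit send rate from Dom0 with ~50% datagram loss works out to roughly the 500MBits the DomU actually receives. A minimal illustrative calculation (the exact rates are approximations from the message, not measured values):

```python
# Illustrative arithmetic using the approximate figures quoted above:
# Dom0 transmits at ~1 Gbit/s and ~50% of datagrams are lost, so the
# DomU's effective receive rate should be roughly 500 Mbit/s.
send_rate_mbit = 1000.0   # approximate Dom0 transmit rate (assumption)
loss_fraction = 0.5       # approximate loss reported by iperf (assumption)

received_mbit = send_rate_mbit * (1.0 - loss_fraction)
print(f"effective rx throughput ~ {received_mbit:.0f} Mbit/s")  # ~ 500 Mbit/s
```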


