
Re: [Xen-users] Re: [quagga-users 10975] Re: Quagga on Xen - Latency / Bandwidth?

On Wed, Jul 29, 2009 at 03:26:37PM -0700, Robert Bays wrote:
> On 7/29/09 11:57 AM, Alexis Rosen wrote:
> > On Jul 29, 2009, at 5:17 AM, sthaug@xxxxxxxxxx wrote:
> >>> I was wondering if anyone is running Quagga on Xen? What is
> >>> throughput/latency like?
> >>
> >> This is a function of kernel forwarding performance. Quagga doesn't
> >> do forwarding.
> At my company, we have done extensive testing of the forwarding
> performance of Linux VMs on Xen.  We use Quagga as our routing suite,
> but as previously mentioned it has nothing to do with forwarding
> performance.  I removed the Quagga list from this thread to stop the
> cross-post.
> For testing we follow RFC 2544.  To give you some representative
> numbers, we see anywhere between 100-150 Mbps zero-loss throughput for
> bi-directional 64-byte packet streams on a 3.0 GHz Intel quad-core
> processor.  This follows the typical bandwidth curve up to roughly
> 1.6 Gbps for large packet sizes.  We are currently running a Linux
> 2.6.30 pv_ops-enabled kernel in the domU.  We have noticed that if we
> share a physical processor core between more than one VM, we take a
> roughly 2% hit to overall performance.  Interestingly, a third or
> fourth VM on the same core still only incurs the same 2% penalty.
> Throughput is highly dependent on the system, i.e. processor model,
> motherboard chipset, bus type, location of the card on the bus, etc.
> Throughput also has a fairly high jitter factor.  The system can be
> tuned to mitigate the jitter, but at the cost of overall throughput
> and an average increase in latency.
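The 64-byte numbers above can be put in context with a little line-rate
arithmetic. A sketch (it assumes standard Ethernet framing overhead of
8 bytes preamble + 12 bytes inter-frame gap, and that the quoted Mbps
figure counts frame bits; the post doesn't specify how it was measured):

```shell
# Gigabit Ethernet line rate for 64-byte frames.
FRAME=64          # bytes per Ethernet frame
OVERHEAD=20       # bytes on the wire per frame: 8 preamble + 12 IFG
WIRE_BITS=$(( (FRAME + OVERHEAD) * 8 ))    # 672 bits per frame on the wire
LINE_RATE=1000000000                        # 1 Gbps

MAX_FPS=$(( LINE_RATE / WIRE_BITS ))        # ~1,488,095 frames/sec
echo "GigE 64-byte line rate: ${MAX_FPS} frames/sec"

# 150 Mbps zero-loss at 64-byte frames (counting the 512 frame bits)
# corresponds to roughly this packet rate:
MEASURED=150000000
FPS=$(( MEASURED / (FRAME * 8) ))           # ~292,968 frames/sec
echo "Measured: ${FPS} frames/sec"
```

So the 100-150 Mbps figure is on the order of 15-20% of GigE line rate
in packets per second, which is why small-packet tests are the hard case.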

Interesting. Did you profile what limits the performance, or what uses 
the CPU? Bridging in dom0? Xen itself? 

Are you familiar with the netchannel2 development stuff? 

> If the system is configured for PCI passthrough, expect much higher
> throughput: more on the order of 650 Mbps zero-loss for bi-directional
> streams of small packet sizes.  HVM domUs aren't even worth using for
> networking.
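For anyone wanting to try the PCI passthrough setup: a rough sketch of
hiding a NIC from dom0 with pciback and assigning it to a PV domU. The
module name varies by kernel (pciback in xenified kernels, xen-pciback
in pv_ops), and the device address 0000:03:00.0 is just a placeholder
for your NIC's address from lspci:

```shell
# In dom0: detach the NIC from its driver and hand it to pciback.
modprobe xen-pciback
echo "0000:03:00.0" > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo "0000:03:00.0" > /sys/bus/pci/drivers/pciback/new_slot
echo "0000:03:00.0" > /sys/bus/pci/drivers/pciback/bind

# Then in the domU config file, assign the device to the guest:
#   pci = [ '0000:03:00.0' ]
```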

Yeah, PV guests are much easier, faster, and more stable for this purpose.

(And yeah, I know you can use PV-on-HVM drivers in an HVM domain.)

-- Pasi

Xen-users mailing list