[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-users] XCP bandwidth management

  • To: "msgbox450@xxxxxxxxx" <msgbox450@xxxxxxxxx>
  • From: Peter Phaal <peter.phaal@xxxxxxxxx>
  • Date: Sat, 21 May 2011 12:05:50 -0700
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Sat, 21 May 2011 12:07:42 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Fri, May 20, 2011 at 9:38 AM, msgbox450@xxxxxxxxx
<msgbox450@xxxxxxxxx> wrote:
> Hi all,
> I've got XCP 1.0 up and running nicely and would like to use it in
> production. However I'm struggling with the concept of bandwidth management.
> It seems like such a common problem that everyone must have, but I can't
> find any clear direction in which to go.
> The dedicated host I am using (Hetzner) gives me a 5TB monthly bandwidth
> quota which needs to be shared between all the VMs on the XCP.
> Ideally I would like something to automatically manage the bandwidth such
> that each VM is capable of using the full 100mbps speed of the connection,
> but will be throttled back if the throughput is sustained, so we have e.g.
> 24 x 1GB VMs on the host with average of 213GB/month bandwidth usage each.
> Alternatively it might be easier to just route all the virtual interfaces
> through a VM that runs pfSense, or use tc on the host to set some sort of
> shaping on the physical interface itself, but I really don't know the best
> way to go about it.
> Things I've found so far aren't so good:
> 1 - Limit the interface using the XenCenter GUI... but that means the VM
> would never be able to go above about 1mbps, even if it's sat there and used
> no bandwidth for the past week and is well within its quota, so that's not
> ideal.
> 2 - Use sFlow in XCP to capture the data. Well this works for looking at how
> much bandwidth they are using, but I haven't found any existing tool that
> will act on that data to do traffic shaping.
> 3 - Use the XAPI calls to check the bandwidth usage.
> With methods 2 and 3 I guess I could write something that collects the data
> and stores it in a database table, somehow work out how much the connection
> needs to be slowed by and then apply it using the XAPI, but that seems
> rather hacky and difficult, and there must be a better way?
> If anyone could give some tips on how to do this I'd really appreciate it.
> Basically I just want the quickest and easiest way to make it so that the
> server as a whole doesn't go over its bandwidth limit without limiting all
> the guests to a tiny speed individually.
> Thanks!
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users

I don't know of any shrink-wrapped solutions that would meet your
requirements today, but XCP does contain the APIs needed to develop a
bandwidth manager.


It is important to distinguish between the local traffic that a VM
generates (inter-VM traffic, backups, etc.) and the non-local traffic
that counts against the 5TB quota. XAPI calls only give you traffic
totals; sFlow monitoring in the vSwitch easily distinguishes local
from non-local traffic.
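Since the XCP vSwitch is Open vSwitch, sFlow export can be enabled per
bridge with ovs-vsctl. A rough sketch (the collector address, bridge
name xenbr0, agent interface, and sampling/polling values below are all
placeholders you would adjust for your own setup):

```shell
# Create an sFlow configuration record and attach it to the bridge.
# 10.0.0.50:6343 is a hypothetical sFlow collector on your network.
ovs-vsctl -- --id=@sflow create sflow \
    agent=eth0 target=\"10.0.0.50:6343\" \
    header=128 sampling=400 polling=20 \
    -- set bridge xenbr0 sflow=@sflow
```

A sampling rate of 400 and 20-second counter polling are reasonable
starting points for a 100Mbps link; the collector then sees per-VIF
flows, so local and non-local bytes can be separated by destination.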

On the control side, OpenFlow allows the controller to create
separate forwarding policies for local and non-local traffic.
Combining the two allows for adaptive management.
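For example, with recent Open vSwitch tools you could install flows
that forward local-subnet traffic normally and push everything else
through a rate-limited QoS queue. The subnet, bridge name, and queue
number below are illustrative, and queue 1 would have to be configured
separately on the uplink port:

```shell
# Traffic to the local subnet (10.0.0.0/24 assumed) bypasses the limit.
ovs-ofctl add-flow xenbr0 "priority=20,ip,nw_dst=10.0.0.0/24,actions=normal"
# All other IP traffic is steered into QoS queue 1, where a cap applies.
ovs-ofctl add-flow xenbr0 "priority=10,ip,actions=set_queue:1,normal"
```

This way inter-VM and backup traffic runs at full speed while only
quota-relevant traffic is shaped.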


OpenFlow has only recently started to be available in production
environments so the management tools are still lagging. There are many
open source and commercial OpenFlow controllers in development and I
expect that there will be a number of solutions available for managing
XCP in the near future.
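In the meantime, the adaptive scheme you describe is not much code. A
minimal sketch, assuming 30-day months and a simple proportional
policy: run at full line rate while consumption is on track for the
quota, and scale the cap down by the overshoot ratio otherwise. The
apply_rate() helper, the vif name, and the use of OVS ingress policing
are assumptions about your deployment, not an existing tool:

```python
import subprocess

MONTHLY_QUOTA_BYTES = 5 * 1000**4  # 5 TB, as in your Hetzner quota
LINE_RATE_KBPS = 100000            # 100 Mbit/s uplink

def policing_rate_kbps(used_bytes, elapsed_days, days_in_month=30):
    """Per-host rate cap: full line rate while usage is on pace for the
    quota; scaled down proportionally once usage runs ahead of pace."""
    expected = MONTHLY_QUOTA_BYTES * elapsed_days / days_in_month
    if used_bytes <= expected:
        return LINE_RATE_KBPS  # within budget: no throttling
    # Over budget: shrink the cap by the overshoot ratio, floor 1 Mbit/s.
    return max(1000, int(LINE_RATE_KBPS * expected / used_bytes))

def apply_rate(vif, kbps):
    # Hypothetical application step; needs root on the XCP host, and
    # ingress policing limits traffic transmitted by the guest.
    subprocess.run(["ovs-vsctl", "set", "interface", vif,
                    "ingress_policing_rate=%d" % kbps], check=True)
```

You would feed policing_rate_kbps() the non-local byte counts from
sFlow (or totals from XAPI) on a cron-style loop and apply the result
to each VIF, rather than storing and post-processing in a database.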
