
Re: [Xen-users] I/O and Network control on VM

  • To: Xen User-List <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Fajar A. Nugraha" <fajar@xxxxxxxxx>
  • Date: Sun, 14 Jun 2009 00:58:44 +0700
  • Delivery-date: Sat, 13 Jun 2009 10:59:30 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Sat, Jun 13, 2009 at 11:09 PM, Thomas Goirand<thomas@xxxxxxxxxx> wrote:
> Fajar A. Nugraha wrote:
>> Or are you saying that cbq can limit daily transfer as per the
>> original requirement?
> No, such limits can be done only by tight accounting of bandwidth.

Ah, so I'm not getting rusty after all :)
Reading your initial comment, I got the impression that cbq had
suddenly gained the ability to limit daily transfer when I wasn't
looking. My bad :P
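For the record, the usual way to do that accounting is with per-vif
iptables counters that a cron job reads and resets. A minimal sketch
(the vif name vif39.0 and the quota are hypothetical, adjust to your
dom0's naming scheme):

```shell
# Create a dedicated chain so the domU's traffic is counted separately
# (vif39.0 is a hypothetical vif name for domU ID 39)
iptables -N DOMU39_ACCT
iptables -I FORWARD -o vif39.0 -j DOMU39_ACCT
iptables -I FORWARD -i vif39.0 -j DOMU39_ACCT
iptables -A DOMU39_ACCT -j RETURN

# Read the byte counters (exact, unabbreviated numbers with -x);
# a cron job can compare this against the daily quota and, say,
# apply a tc rate limit or shut the vif when it's exceeded
iptables -L DOMU39_ACCT -v -x -n

# Zero the counters at midnight from cron to start a new day
iptables -Z DOMU39_ACCT
```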

>>>>> 2. How about I/O limit? Seem xen currently has no way to limit user I/O 
>>>>> usage?
>>>> Your best bet (for now) is probably something like
>>>> http://people.valinux.co.jp/~ryov/dm-ioband/
>>> There's no need to apply any patch to achieve I/O scheduling, it has
>>> been in the kernel for YEARS.
>> Care to provide some reference/example on how this can be used on dom0
>> to limit domU's I/O usage?
> Let's say you have a domU with ID 39; it will use, say, blkback.39.sda1
> and blkback.39.sda2. Use ionice to set the priority of the process ID
> of blkback.39.sda1. It's not a limit per se, but it's a priority,
> which is quite cool already. If someone is using too much I/O, just give
> the process the lowest priority possible, and it won't bother others too
> much.

While ionice can set priority, it can't set a limit.
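Still, for anyone wanting to try the priority approach, it would look
roughly like this (the blkback.39.sda1 thread name is taken from your
example above; the actual name depends on the Xen version):

```shell
# Find the blkback kernel thread serving domU 39's first virtual disk
pid=$(pgrep -f 'blkback.39.sda1')

# Option 1: idle class -- the thread only gets disk time
# when no other process needs it
ionice -c 3 -p "$pid"

# Option 2: keep it in the best-effort class but at the
# lowest priority level (0 = highest, 7 = lowest)
ionice -c 2 -n 7 -p "$pid"
```

Note that ionice classes only have an effect with the CFQ I/O
scheduler on the underlying device.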

> By the way, is it that the above mentioned patch is adding
> max_hw_sectors_kb and max_sectors_kb in /sys/block/dm-XX/queue, like it
> is available for other block devices?

No. AFAIK it creates a new device, /dev/mapper/ioband* (or whatever
you choose to call it) above an existing block device (disk,
partition, LV) on which you can manage per-device and per-job I/O
priority and limit.
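Setting one up looks roughly like this (a sketch following the
dm-ioband documentation; the LV path and group weight here are made-up
examples, and the table format may differ between dm-ioband versions):

```shell
# dm-ioband table format (per its docs):
# <start> <sectors> ioband <device> <ioband-id> <io_throttle> <io_limit> \
#         <group-type> <policy> <policy-args...>
dev=/dev/xenvg/domu39-disk   # hypothetical LV backing the domU

# Stack an ioband device on top of the LV, weight policy,
# default group weight 100
echo "0 $(blockdev --getsize $dev) ioband $dev 1 0 0 none weight 0 :100" \
  | dmsetup create ioband-domu39

# Then point the domU config at /dev/mapper/ioband-domu39
# instead of the LV itself.
```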

It works (i.e. a lower io_limit yields lower I/O bandwidth), but I can't
figure out the exact correlation yet (e.g. why, with the weight-iosize
policy, io_limit=8 comes out to about 2 MBps when tested with dd).


Xen-users mailing list


