
Re: [Xen-users] Re: [Xen-devel] VM disk I/O limit patch



On Wed, Jun 22, 2011 at 08:06:23PM +0800, Andrew Xu wrote:
> 
> On Tue, 21 Jun 2011 09:33:37 -0400
> Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> 
> > On Tue, Jun 21, 2011 at 04:29:35PM +0800, Andrew Xu wrote:
> > > Hi all,
> > > 
> > > I add a blkback QoS patch.
> > 
> > What tree is this against? 
> This patch is based on the SUSE 11 SP1 (2.6.32) xen-blkback source.
> (2.6.18 "Xenlinux" based source trees?)
> 
> > There is a xen-blkback in 3.0-rc4, can you rebase
> > it against that please.
> > 
> Ok, I will rebase it.

Hold on, let's talk about the problem you are trying to solve first.
> 
> > What is the patch solving? 
> > 
> With this patch, you can set a different I/O speed for each VM disk.
> For example, I set vm17-disk1 to 4096 KB/s,
>                    vm17-disk2 to 1024 KB/s, and
>                    vm18-disk3 to 3096 KB/s
> by writing the following xenstore key-values:
>       /local/domain/17/device/vbd/768/tokens-rate = "4096"
>       /local/domain/17/device/vbd/2048/tokens-rate = "1024"
>       /local/domain/18/device/vbd/768/tokens-rate = "3096"
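[For context, these keys would be written from dom0 with the standard
xenstore-write utility from the Xen tools; a sketch, assuming the
tokens-rate key that this patch introduces (it is not part of stock Xen):]

```shell
# Limit dom17's first disk (vbd 768) to 4096 KB/s by writing the
# tokens-rate key this patch reads. xenstore-write/xenstore-read
# ship with the standard Xen tools; run these as root in dom0.
xenstore-write /local/domain/17/device/vbd/768/tokens-rate 4096

# Read the value back to confirm it was stored.
xenstore-read /local/domain/17/device/vbd/768/tokens-rate
```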
> 
> > Why can't it be done with dm-ioband?
> Of course, the I/O speed limit can also be done with dm-ioband.
> But with my patch, there is no need to load dm-ioband any more.
> This patch does the speed limiting closer to the disk, and is more
> lightweight.

I am not convinced this will be easier to maintain than
using existing code (dm-ioband) that the Linux kernel already provides.

Are there other technical reasons why 'dm-ioband' is not sufficient?
Would it be possible to fix 'dm-ioband' so it does not have those
bugs? Florian mentioned flush requests not passing through
the DM layers, but I am pretty sure those issues have been fixed.
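[For comparison, dm-ioband is driven through device-mapper tables rather
than xenstore; a rough sketch based on dm-ioband's documented table
format, which is an out-of-tree target and whose exact arguments may
vary by version:]

```shell
# Stack a dm-ioband device on top of the VM's backing device and
# assign it a bandwidth weight. The table follows dm-ioband's
# documented layout ("start length ioband device group-id
# io-throttle io-limit token-base policy args"); argument details
# here are an assumption and may differ between versions.
SIZE=$(blockdev --getsize /dev/xvdb)
echo "0 $SIZE ioband /dev/xvdb 1 0 0 none weight 0 :40" | \
    dmsetup create ioband1
```

The guest is then pointed at /dev/mapper/ioband1 instead of the raw
device, which is the extra indirection the patch above avoids.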

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

