
Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O



Hmm.. 

Thanks a lot for the tip!
OK, so I guess it is theoretically possible that the RAID layer creates
problems with Xen in my case. To check that, I'll install more disks in
the RAID array, create a new device without LVM, and see whether the
performance is the same.

Since I have done tests with Xen + no RAID + no LVM, Xen + no RAID +
LVM, and Xen + RAID + LVM, the only missing bench is Xen + RAID + no
LVM; I'll run the same quick sequential-write check (sketched below)
against it.
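
The check itself is roughly the following, a minimal Python sketch
(/mnt/bench/testfile is just a placeholder for wherever the device under
test is mounted; dd or bonnie++ would do just as well):

    import os, time

    PATH = "/mnt/bench/testfile"   # placeholder: mount point of the config under test
    TOTAL = 1 << 30                # write 1 GiB in total
    BLOCK = 1 << 20                # ... in 1 MiB chunks

    buf = os.urandom(BLOCK)
    # O_SYNC so writes reach the disk instead of stopping in the page cache
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC)
    t0 = time.time()
    for _ in range(TOTAL // BLOCK):
        os.write(fd, buf)
    os.close(fd)
    secs = time.time() - t0
    print("%d MiB in %.1f s = %.1f MiB/s" % (TOTAL >> 20, secs, (TOTAL >> 20) / secs))

Writing through O_SYNC means the figure reflects the RAID/LVM/disk stack
rather than the page cache, which should make the four configurations
directly comparable.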

I'll post back when I have the results of this bench (I first need to go
to the data center, etc.).

And I'll also check the performance improvements from upgrading the kernel
to 2.6.24. What do you think of the container support in 2.6.24? Isn't it
effectively a better Xen for servers? (I believe most people use
virtualization on the server side just to isolate processes, and for that,
containers seem like a better fit than Xen, don't they?)

Regards,
Sami

On Sun, 2008-01-27 at 22:23 -0500, jim burns wrote:
> Sami Dalouche writes:
> > So, in conclusion, I am lost:
> > On the one hand, it seems that Xen, when used on top of a RAID array, is
> > wayyy slower, but when used on top of a plain old disk, it delivers pretty
> > much native performance. Is there a potential link between Xen and RAID
> > vs. non-RAID performance? Or maybe the problem is caused by Xen + RAID +
> > LVM?
> 
> Hmm, there's an interesting couple of paragraphs on the
> http://kernelnewbies.org/LinuxChanges page for the 2.6.24 kernel. Apparently, 
> LVM is prone to dirty-page write deadlocks. Maybe this is being aggravated by 
> RAID, at least in your case? I quote:
> 
> 2.7. Per-device dirty memory thresholds
> 
> You can read this recommended article about the "per-device dirty thresholds" 
> feature.
> 
> When a process writes data to the disk, the data is stored temporarily 
> in 'dirty' memory until the kernel decides to write it to the disk 
> ('cleaning' the memory used to store the data). A process can 'dirty' the 
> memory faster than the data is written to the disk, so the kernel throttles 
> processes when there's too much dirty memory around. The problem with this 
> mechanism is that the dirty memory thresholds are global: the mechanism 
> doesn't care whether there are several storage devices in the system, much 
> less whether some of them are faster than others. There are a lot of 
> scenarios where this design harms performance. For example, if there's a 
> very slow storage device in the system (e.g. a USB 1.0 disk, or an NFS 
> mount over dialup), the thresholds are hit very quickly, preventing other 
> processes that may be working on a much faster local disk from making 
> progress. Stacked block devices (e.g. LVM/DM) are much worse and even 
> deadlock-prone (check the LWN article).
> 
> In 2.6.24, the dirty thresholds are per-device, not global. The limits are 
> variable, depending on the writeout speed of each device. This improves 
> performance greatly in many situations.
> 
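
Side note, in case anyone wants to see where they stand before upgrading:
the global thresholds described above live under /proc/sys/vm. A quick
sketch that dumps them on a 2.6 kernel (the per-device limits that 2.6.24
introduces are derived from each device's writeout speed rather than
exposed as knobs):

    # print the system-wide dirty-memory thresholds (global before 2.6.24)
    for knob in ("dirty_ratio", "dirty_background_ratio",
                 "dirty_expire_centisecs", "dirty_writeback_centisecs"):
        f = open("/proc/sys/vm/" + knob)
        print(knob + " = " + f.read().strip())
        f.close()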


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

