Re: [Xen-users] Poor disk io performance in domUs
On 6/22/07, Andrej Radonic <rado@xxxxxxxxxxxxx> wrote:

Mats,

>> dd simultaneously in both dom0 = 170 MB/s
> I take it you mean "two parallel 'dd' commands at the same time"? That
> would still write to the same portion of disk (unless you specifically
> choose different partitions?)

it's different partitions - one dedicated partition for each domU. The
partitions are created as "virtual" block devices with the Dell storage
box manager.

>> dd simultaneously in two domUs = 34 MB/s
> I take it this means two different DomUs doing "dd"?
> Is that 34 MB/s "total" (i.e. 17 MB/s per domain) or per domain
> (68 MB/s total)?

sorry, good you asked: it's the total, i.e. 17 MB/s per domain! I guess
you are getting the picture now as to my feelings... ;-)

Yeah, I've experienced some interesting things there too: very good raw
I/O performance, but Xen not handling it well for the domUs. Since there
is a small kernel process running in dom0 for each virtual block device
exported to the domUs, which mostly does translation, I've found that as
you bring up more domUs all doing I/O, those dom0 processes tend to do
just as much work as all the dd operations of all the domUs combined. So
if you have 6 domUs each using about 15% CPU on dd's, your dom0 will be
pushing 100% CPU usage, doing a huge amount of work, and I/O performance
in the domUs will fall apart. So it does pay to make sure your dom0 can
handle translating everything (note this overhead should go away with
IOMMU support, I would hope).

Also, I'd check what the Dell virtual block manager can actually do: try
creating virtual block devices and then dd'ing to them in parallel from
dom0. It might simply be that the Dell virtual block device manager can
only handle ~60 MB/s total across the block devices it creates. I have
experience with the HP virtual block device equivalent, where you can
specify that only two of the disks be used and RAID 0/1 them, depending
on what you want to do, and then export the RAIDed disks to the kernel.
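A minimal sketch of that kind of parallel-dd test from dom0 (the device paths and size here are placeholders, not anything from the Dell tooling; in a real run you would point DEV1/DEV2 at the exported block devices, which destroys their contents):

```shell
#!/bin/sh
# Sketch: compare single-stream vs. parallel write throughput with dd.
# DEV1/DEV2 are placeholder targets; substitute the virtual block
# devices exported by the storage manager (e.g. /dev/sdb, /dev/sdc).
DEV1=/tmp/bench1.img
DEV2=/tmp/bench2.img
SIZE_MB=64

bench() {
    # Write SIZE_MB of zeroes in 1 MB blocks; conv=fsync forces the
    # data to disk before dd prints its throughput summary (on stderr).
    dd if=/dev/zero of="$1" bs=1M count="$SIZE_MB" conv=fsync 2>&1 \
        | tail -n 1
}

echo "--- one stream ---"
bench "$DEV1"

echo "--- two parallel streams ---"
bench "$DEV1" &
bench "$DEV2" &
wait
```

If the combined figure from the parallel run is no better than the single-stream figure, the bottleneck is below the domUs (the storage box or the path to it), not Xen.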
Since we have 6 drives, that gives at most 3 block devices actually
hitting the disks, which lets you test whether the pipe between the
disks and the OS can handle the data. I would suggest doing something
similar.

Thanks,
- David Brown

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users