RE: [Xen-users] Best Practices for PV Disk IO?
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Christopher Chen
> Sent: Monday, July 20, 2009 8:26 PM
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Best Practices for PV Disk IO?
>
> I was wondering if anyone's compiled a list of places to look to
> reduce disk I/O latency for Xen PV domUs. I've gotten reasonably
> acceptable performance from my setup (dom0 as an iSCSI initiator,
> providing phy volumes to domUs), at about 45MB/sec writes and
> 80MB/sec reads (this is to an IET target running in blockio mode).

For domU hosts, xenblk over phy: is the best I've found. I can get
166MB/s read performance from a domU with O_DIRECT and 1024k blocks.
Smaller block sizes yield progressively lower throughput, presumably
due to read latency:

    256k: 131MB/s
     64k:  71MB/s
     16k:  33MB/s
      4k:  10MB/s

Running the same tests in dom0 against the same block device yields
only slightly faster throughput. If there's any additional magic to
boost disk I/O under Xen, I'd like to hear it too.

I also pin dom0 to an otherwise unused CPU so it is always available
to service I/O. My shared block storage runs the AoE protocol over a
pair of 1GbE links.

The good news is that the hypervisor doesn't seem to impose much of an
I/O penalty, so the domU hosts typically enjoy better disk I/O than an
inexpensive server with a pair of SATA disks, at far less cost than
the interconnects needed to couple a high-performance SAN to many
individual hosts. Overall, the performance seems like a win for Xen
virtualization.

Jeff
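Neither post shows a config fragment, but the phy: arrangement both
describe typically looks like the following in a domU config file; the
volume path and device name are placeholders, not taken from the
thread:

    # domU config fragment: hand a raw block device to blkback via
    # phy:, avoiding the overhead of loopback-mounted file-backed
    # storage. /dev/vg0/domu1-disk stands in for whatever dom0-visible
    # volume backs the guest (an iSCSI LUN in Christopher's setup).
    disk = [ 'phy:/dev/vg0/domu1-disk,xvda,w' ]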
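Neither post names the benchmark tool behind the throughput numbers.
A minimal sketch of how O_DIRECT read figures like those above can be
gathered with GNU dd (the device path is again a placeholder):

    # Sequential O_DIRECT reads at 1024k, bypassing the page cache,
    # reading 1 GB in total.
    dd if=/dev/xvda of=/dev/null bs=1024k count=1024 iflag=direct

    # Repeat at smaller block sizes to expose the latency penalty.
    for bs in 256k 64k 16k 4k; do
        dd if=/dev/xvda of=/dev/null bs=$bs count=4096 iflag=direct
    done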
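The dom0 pinning Jeff mentions can be done with the xm toolstack
current at the time; the CPU numbers below are illustrative and assume
a four-core host:

    # Pin dom0's vCPU 0 to physical CPU 0 so a CPU is always free to
    # service backend I/O.
    xm vcpu-pin Domain-0 0 0

    # Keep guests off CPU 0 by restricting them in each domU config:
    cpus = "1-3"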