[Xen-users] Re: Xen and I/O Intensive Loads
Nick,

Do you mean the GroupWise data volume is on one RAID10 comprised of 30 disks dedicated to GroupWise data, or that this one RAID volume is contending with other volumes using the same disks on the SAN?

I'm not familiar with how GroupWise works; does the ideal deployment call for separate sets of spindles for temp files, the database, and the transaction logs?

Is the RAID block/chunk/stripe size aligned with the XFS sunit/swidth parameters? Are the XFS block boundaries aligned with the RAID blocks?

Is that 4 GB of write-back cache? What is the write-back delay? How fast (RPM) are the drives?

> Date: Thu, 27 Aug 2009 08:25:08 -0600
> From: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
> Subject: Re: [Xen-users] Xen and I/O Intensive Loads
>
> Let's see... the SAN has two controllers with a 4GB cache in each
> controller. Each controller has a single 4 x 2Gb FC interface. Two of
> those ports go to the switch; the other two create redundant loops with
> the disk array (going from the controller to one disk array, then to the
> next disk array, then to the second controller). The disks are FCATA
> disks; there are 30 active disks (with 2 hot-spares). The SAN does RAID
> across the disks on a per-volume basis, and my e-mail volume is using a
> RAID10 configuration.
>
> I've done most of the filesystem tuning I can without completely
> rebuilding the filesystem - atime is turned off. I've also adjusted the
> elevator per previous suggestions and played with some of the tuning
> parameters for the elevators. I haven't got around to trying something
> other than XFS yet - it's going to take a while to sync everything over
> from the existing FS to EXT3 or something similar. I'm also contacting
> the SAN vendor to get their help with the situation.
>
> -Nick

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
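For anyone following the alignment and elevator questions above, here is a minimal sketch of how those settings could be checked and applied on a Linux host of that era. The device name (/dev/sdb), mount point (/mail), 64 KiB chunk size, and 15 data-bearing spindles are assumptions for illustration only, not values taken from the thread.

    # Show the stripe unit/width XFS was created with (reported in
    # filesystem blocks); compare against the RAID chunk size and the
    # number of data-bearing disks in the array.
    xfs_info /mail

    # Rebuilding the filesystem with explicit alignment, assuming a
    # 64 KiB RAID chunk and 15 data disks in the RAID10 set (both
    # hypothetical values -- substitute the array's real geometry).
    mkfs.xfs -f -d su=64k,sw=15 /dev/sdb

    # Mount without access-time updates, as mentioned in the reply.
    mount -o noatime,nodiratime /dev/sdb /mail

    # Check and change the I/O elevator for the device; deadline or
    # noop are the usual choices when a battery-backed SAN cache is
    # doing the real scheduling.
    cat /sys/block/sdb/queue/scheduler
    echo deadline > /sys/block/sdb/queue/scheduler

With su=64k and sw=15, XFS aligns its allocation to a 960 KiB full stripe, which is the point of the sunit/swidth question above.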