
Re: [Xen-users] Operational System for Storage (DomUs)



On Mon, Apr 16, 2012 at 5:20 PM, Oper.ML <oper.ml@xxxxxxxxx> wrote:
Hi guys,

    I have a SuperMicro server with an LSI SATA controller and 16 HDDs attached in RAID50.
    I would like to know which operating system I should run on this server. I will have 4 Xen host servers bringing up their DomUs over iSCSI on a dedicated gigabit network.
    I already have experience with CentOS 5.5 running the TGTD daemon, and it was not a pleasant one: I/O wait kept climbing to 100%, causing the DomUs to hang/freeze and remount their partitions read-only.
    I'm thinking about Ubuntu Server. Does anyone have suggestions or experience to share?

I personally never use iSCSI anymore, but that said, I ran a private cloud of KVM and Xen instances (65 or so dom0s) at a previous job while also doing extensive research on silly ways to combine iSCSI volumes into huge ZFS filesystems. Here is a quick list of stuff to consider:

* consider OpenSolaris ZFS + COMSTAR iSCSI or, better yet, NFSv4 (Nexenta, SmartOS, or OmniOS) for your storage machine; it's awesome
* use the Linux LIO iSCSI target stack! It's vastly superior to just about all the others under duress
 * it was really solid when I tested it a couple years ago (had about 400TB across 20+ systems)
 * I also tested ietd, tgtd, COMSTAR, and a few others at that time
* if you can get two switches
 * use Linux multipath instead of bonding if you can
 * DO NOT bridge the switches (separate VLANs & subnets)
 * this is _really_ critical! You can lose an entire L2 network and keep on going if you get this right
* the Linux distribution is largely irrelevant, though CentOS 6 and Ubuntu LTS are both fine choices
 * I'd go with 12.04 LTS if I was doing this today, build packages as necessary if LIO isn't there
* use LVM instead of file backed volumes
 * file backed volumes pretty much always stink, except under ietd where it sucks exactly the same amount
* the iSCSI target machine should almost certainly be running the deadline IO scheduler
* disable Nagle's algorithm
* tune TCP buffers much larger than the kernel defaults
* ideally, compile the kernel _without_ preemption (most distros ship with it enabled!)
 * custom compiled kernels can make a difference in this setting!
* be prepared to install high-quality NICs if your board didn't ship with them (all the Supermicro boards I've tested had decent Intel NICs, but I was saddled with some Nforce boards at one point, which were a disaster)
* enable large frames on the switch and all targets/initiators
 * but don't set it to 9000 bytes like everybody does - some switches, especially in the low to mid range, screw up 9000-byte packets
 * I usually err on the side of caution and page alignment and go with around 8400 (easily fit 2 pages + headers, being exact doesn't buy you anything but pain)
* never mix IO and VM traffic if you can help it
 * by extension, don't let your IO traffic go over a bridge device
 * bonded interfaces are OK if you can't use multipath
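
For the LIO suggestion, a minimal target definition looks roughly like this with the targetcli shell (the volume, IQN, and initiator names here are made up; adjust them to your setup):

```
# Export one block device as an iSCSI LUN via LIO
targetcli /backstores/block create name=domu1 dev=/dev/vg_storage/domu1
targetcli /iscsi create iqn.2012-04.org.example:storage
targetcli /iscsi/iqn.2012-04.org.example:storage/tpg1/luns create /backstores/block/domu1
targetcli /iscsi/iqn.2012-04.org.example:storage/tpg1/acls create iqn.2012-04.org.example:xenhost1
```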
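The dual-switch multipath advice can be sketched as follows on each Xen host; the portal addresses and multipath.conf fragment are illustrative, assuming one target portal per subnet/switch:

```
# Log in to the same target through both fabrics (separate subnets, one per switch)
iscsiadm -m discovery -t sendtargets -p 192.168.10.1:3260
iscsiadm -m discovery -t sendtargets -p 192.168.20.1:3260
iscsiadm -m node --login

# /etc/multipath.conf fragment: spread IO across both paths,
# fail back as soon as a dead path comes back
defaults {
    path_grouping_policy    multibus
    failback                immediate
}
```

With this in place you lose one switch, one cable, or one NIC and the DomUs never notice.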
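Carving LVM volumes out of the RAID50 array for the DomUs is straightforward (device and volume names are hypothetical):

```
pvcreate /dev/sdb                      # the RAID50 virtual disk from the LSI controller
vgcreate vg_storage /dev/sdb
lvcreate -L 20G -n domu1 vg_storage    # one logical volume per DomU, exported as a LUN
```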
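The scheduler and TCP tuning points amount to a few lines on the storage box. The buffer sizes below are a reasonable starting point, not gospel; Nagle itself is disabled per-socket via TCP_NODELAY, which the target software should be doing for you:

```
# Switch every disk to the deadline elevator (reapply at boot, e.g. from rc.local or a udev rule)
for q in /sys/block/sd*/queue/scheduler; do echo deadline > "$q"; done

# /etc/sysctl.conf fragment: raise the TCP buffer ceilings well above the defaults
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 262144 16777216
net.ipv4.tcp_wmem = 4096 262144 16777216
```

For the preemption point, that means building with CONFIG_PREEMPT_NONE=y ("No Forced Preemption (Server)") rather than the voluntary or full preemption most distro kernels ship with.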
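The page-alignment arithmetic behind the ~8400-byte figure checks out like this (header sizes are approximate and depend on TCP options):

```shell
payload=$((2 * 4096))        # two full 4 KiB pages of data
headers=$((20 + 32 + 48))    # IPv4 + TCP with timestamps + iSCSI basic header (approx.)
total=$((payload + headers))
echo "$total"                # two pages plus headers still clear an 8400-byte MTU
```

The MTU itself is set with something like "ip link set dev eth1 mtu 8400" on every initiator, target, and switch port, then verified end-to-end with "ping -M do -s 8372 <target>" (8400 minus 28 bytes of IP+ICMP headers).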

Good luck,
-Al

Thanks a lot.
Tony M.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

