
Re: [Xen-users] Storage Systems for Virtual Disk Images



Thanks, Adam:

That's all tremendously helpful information!

I do have two questions:

Layering - Are you using logical volumes (LVs) as DRBD backing devices and then exporting the DRBD resources as iSCSI LUNs? This seems like a fairly labor-intensive approach that might not work at all with automated provisioning.
Networking - Which mode are you using to bond the 2 x 1G dom0 interfaces? E.g., balance-rr, active-backup, balance-xor, broadcast, balance-tlb, or balance-alb?
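
For context, the sort of bonding configuration I have in mind is something like the following Debian-style /etc/network/interfaces stanza (interface names, addresses, and the mode are only placeholders, not assumptions about your setup):

    auto bond0
    iface bond0 inet static
        address 192.168.10.11
        netmask 255.255.255.0
        bond-slaves eth1 eth2
        bond-mode active-backup    # or balance-alb, balance-rr, etc.
        bond-miimon 100            # link monitoring interval in ms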

Thanks, again!

Eric Pretorious


From: Adam Goryachev <mailinglists@xxxxxxxxxxxxxxxxxxxxxx>
To: xen-users@xxxxxxxxxxxxx
Sent: Thursday, October 16, 2014 6:36 PM
Subject: Re: [Xen-users] Storage Systems for Virtual Disk Images

Apologies for my blank response...

I personally use a Linux-HA + DRBD + LVM + Linux iSCSI solution, and it works very well. Some things that took me a lot of time to solve include:
1) A whole bunch of network cabling/config issues; I now use 10G between the DRBD nodes, 10G for the iSCSI server, and 2 x 1G for the dom0s with multipath.
2) Unexpectedly poor performance with HDDs: concurrent random access from multiple domUs does not work well on spinning disks. My solution was to upgrade to SSDs.
3) Unexpectedly poor performance with SSDs. This came down to testing the wrong thing when calculating the expected performance level: test with small (e.g. 4k) random reads/writes and use those results. Unless your VMs only do large reads/writes that really do get merged, you will find performance limited by the 4k request size (see the fio sketch after this list).
4) Still poor performance from the SSDs (DRBD). The fix was to move LVM below DRBD: one LV for each domU, a DRBD resource on top of each LV, and finally iSCSI exporting the DRBD devices (sketched after this list).
5) Still poor performance from the SSDs (DRBD). DRBD needs to do its own write for every domU write, LVM adds its own, and so on; each layer adds overhead. The solution for me was to disable DRBD's disk-barrier, disk-flushes and md-flushes options.
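
To illustrate point 3: a quick way to measure small random I/O is fio. A minimal sketch (device name, job count and run time are just examples, and a random-write test against a device that is in use will destroy its data):

    fio --name=randrw4k --filename=/dev/xvdb --direct=1 --ioengine=libaio \
        --rw=randrw --bs=4k --iodepth=32 --numjobs=4 \
        --time_based --runtime=60 --group_reporting

The 4k IOPS figure this reports is the number to size the storage against, not the sequential MB/s on the spec sheet.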
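
For points 4 and 5, the layering looks roughly like this, in DRBD 8.4-style syntax (hostnames, device names, addresses and sizes are made up for illustration, and disabling barriers/flushes is only safe if you trust your power/controller setup):

    # one LV per domU, used as the DRBD backing device
    lvcreate -L 20G -n vm01-disk vg0

    # /etc/drbd.d/vm01.res -- one small resource per domU
    resource vm01 {
        disk {
            disk-barrier no;    # the settings from point 5
            disk-flushes no;
            md-flushes   no;
        }
        on san1 {
            device    /dev/drbd10;
            disk      /dev/vg0/vm01-disk;
            address   10.0.0.1:7790;
            meta-disk internal;
        }
        on san2 {
            device    /dev/drbd10;
            disk      /dev/vg0/vm01-disk;
            address   10.0.0.2:7790;
            meta-disk internal;
        }
    }

The iSCSI target then exports /dev/drbd10 (not the LV) as the LUN for that domU.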

Other things that helped along the way include:
echo noop > /sys/block/${disk}/queue/scheduler   # don't reorder requests in the kernel; let the SSD/controller handle it
echo 128 > /sys/block/${disk}/queue/nr_requests  # block-layer request queue depth
echo 4096 > /sys/block/md1/md/stripe_cache_size  # RAID5, test the correct value for your array

Currently, I have 8 dom0s with about 25 domUs and it is working well, including transparent failover on iSCSI server failure. If anyone wants more details, I'm happy to share.
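
On the dom0 side, dm-multipath handles the 2 x 1G paths; a minimal /etc/multipath.conf sketch (illustrative values only, check your distro's defaults rather than treating this as what I run) looks like:

    defaults {
        user_friendly_names yes
        polling_interval    5        # seconds between path checks
        no_path_retry       queue    # queue I/O during path/target failover instead of erroring
    }

Running multipath -ll on a dom0 should then show both 1G paths to each LUN.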

Most of the above is perhaps not specific to Xen but to storage in general, but I hope it is still relevant here.

I'd also ask that if you get any direct responses, you please summarise them back to the list and/or update the wiki so others can find the information more easily.

Regards,
Adam

On 17/10/14 11:34, Eric wrote:
Hello, All:

I built a highly-available, redundant iSCSI SAN in our lab a while back as a proof of concept (using Linux-HA, DRBD, and the Linux iSCSI Target Framework) and it worked pretty well. Now that I'm getting ready to build the infrastructure for our [production] cloud, I want to re-examine the topic, but I haven't got enough time to sift through all of the outdated or speculative information on the Internet, so I'm reaching out to the list for some guidance on hosting virtual disk images.

e.g., I'm curious about other distributed, clustered storage systems (e.g., Gluster, Ceph, Sheepdog, etc.); other SAN technologies besides iSCSI (e.g., AoE); and the various targets. E.g., there are at least four different iSCSI targets available for Linux:
  • IET, the iSCSI Enterprise Target.
  • STGT (tgt), the Linux SCSI Target Framework.
  • SCST, a kernel-based generic SCSI target subsystem.
  • LIO, the in-kernel target merged into mainline Linux.
And there are currently five different AoE targets available for Linux:
  • vblade, a userspace daemon that is part of the aoetools package.
  • kvblade, a Linux kernel module.
  • ggaoed, a userspace daemon that takes advantage of Linux-specific performance features.
  • qaoed, a multithreaded userspace daemon.
  • aoede, a userspace daemon with experimental protocol extensions.
I know that it's a lot to ask, but I really need help with this enormous topic and I'd be thankful for any experience, knowledge, or guidance here.

TIA,
Eric Pretorious





--
Adam Goryachev Website Managers www.websitemanagers.com.au


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

