
Re: [Xen-users] Shared Storage



I don't have any information to add, but this discussion made me think about how XCP/XenServer handle the LUNs and LVMs.

It's common practice to have one large LUN carved into several LVs, one per VM -- and that's fully supported by XCP.

I *believe* XCP handles all the locking needed to prevent two hosts from accessing the same LV (which lives inside the large shared LUN) at the same time, for example during a live migration.
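For comparison, if you were doing this by hand without XCP's storage layer, you would have to manage LV activation per host yourself, roughly like this (the VG and LV names below are just placeholders):

    # source host: deactivate the guest's LV before handing it over
    lvchange -an /dev/VG_shared/vm01-disk

    # destination host: re-read the metadata and activate the LV there
    vgscan
    lvchange -ay /dev/VG_shared/vm01-disk

My assumption is that XCP does something equivalent for you under the hood, but I have not verified that.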

It does not look like XCP uses cLVM to share the LVM metadata across all nodes -- does anyone have deeper information on this?

All the Citrix/Xen docs mention is that you can simply have a single LUN shared across all nodes (using an SR of type LVMoiSCSI), but I have not had a chance to test live migration under those circumstances yet.
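For reference, creating a shared LVMoiSCSI SR from the CLI looks roughly like the following (the target address, IQN and SCSI ID are obviously placeholders):

    xe sr-create name-label="Shared iSCSI storage" shared=true type=lvmoiscsi \
       device-config:target=192.168.1.10 \
       device-config:targetIQN=iqn.2011-04.com.example:storage.lun0 \
       device-config:SCSIid=3600a0b80005ad1d7000009a34cf55d8b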

Am I missing something here? Is it possible to do live migrations with an SR of type LVMoiSCSI? I ask because this discussion made me think it might not be.
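The test I have in mind is nothing fancier than a plain in-pool migration, something like this (the VM and host names are placeholders):

    xe vm-migrate vm=vm01 host=xcp-host2 live=true

If that works cleanly on an LVMoiSCSI SR, that would answer my own question.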

Best regards,
Eduardo.

On Apr 25, 2011, at 4:17 PM, Jonathan Tripathy wrote:


On 25/04/2011 19:57, John Madden wrote:
Hands down, managing LVM is my number one choice. Ideally I would just
like to set up the iSCSI connections once and just leave them alone.

Yeah. iSCSI a few LUNs from your SAN, run cLVM across your nodes (do the iSCSI in dom0), create your LVs and you're done.
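In plain commands, assuming clvmd is already running on each node, that recipe is roughly this (portal, IQN and names below are made up):

    # in dom0 on every node: discover and log in to the SAN
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m node -T iqn.2011-04.com.example:storage.lun0 -p 192.168.1.10 --login

    # once, on any one node: initialise the LUN as a clustered VG
    pvcreate /dev/sdb
    vgcreate -cy VG_shared /dev/sdb

    # per guest: carve an LV out of the shared VG for the domU
    lvcreate -L 20G -n vm01-disk VG_shared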

This is really only half the picture though and touches on another level of storage concepts. What does your backend disk and cache look like? In my clusters, I create two storage pools, one for "fast disk" and the other for "slow disk," then add LUNs from the SAN appropriately. You should get as granular as you can in performance and use-case terms, though, to keep the right IOs on the right disks, but that may not be practical with your SAN (e.g., if you just have 64 spindles in a single RAID-10 or some dumb JBOD or something).
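As a made-up example of that split, with cLVM it can be as simple as two clustered VGs fed from different LUNs:

    # LUN sitting on fast spindles (e.g. 15k SAS, RAID-10)
    vgcreate -cy VG_fast /dev/mapper/lun_fast

    # LUN sitting on big, slow spindles (e.g. SATA, RAID-6)
    vgcreate -cy VG_slow /dev/mapper/lun_slow

    # database guest goes on fast disk, archive guest on slow disk
    lvcreate -L 50G  -n db01-disk    VG_fast
    lvcreate -L 500G -n archive-disk VG_slow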

I guess the message is to think about how you're laying out your data and then align that with how you lay out your disks. You may squeeze out an extra 5% by going with multiple LUNs versus a single LUN, and another 30% by going with FC instead of multi-GbE, but you can gain even more by using the limited I/O of a spindle more effectively.

John

Thanks for the excellent advice, John. Very much appreciated. While I'm not able to disclose our disk setup (for commercial reasons), I am confident that what I have in mind is good for us, as we have been doing this in a non-shared manner (i.e. disks local to the dom0) for quite some time. But yes, as you say, iSCSI will allow for a little bit of "fine tuning".

I also need to sanity-test cLVM and see how well (or how badly) it handles lost iSCSI connections, propagating LVM metadata changes to other nodes, and so on.
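The quick checks I have in mind are along these lines (node names, VG and IQN are placeholders):

    # node1: create an LV, then confirm on node2 that the metadata shows up
    node1# lvcreate -L 10G -n testlv VG_shared
    node2# lvs VG_shared

    # drop the iSCSI session on one node and watch how clvmd reacts
    node1# iscsiadm -m node -T iqn.2011-04.com.example:storage.lun0 -p 192.168.1.10 --logout
    node1# dmesg | tail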

Now onto some testing to see what works out best...

Cheers

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

