Re: [Xen-users] Live migration and drbd
On Mon, Nov 10, 2008 at 8:46 AM, Daniel Asplund <danielsaori@xxxxxxxxx> wrote:
>> Hi,
>>
>> I have a machine running several DomUs, PVMs as well as HVMs. Each
>> DomU resides in its own LV.
>> I have also set up drbd to replicate the LVs to a second machine. Now I
>> want to try out Xen's ability to live migrate a running DomU, that is,
>> migrate from machine1's LV to machine2's LV. If I understood this
>> correctly, both LVs need to be writable during the migration.
>> In a default installation, drbd only supports one primary node, but I
>> need primary/primary, correct?
>> This is where a cluster filesystem like OCFS comes in - and I am stuck.
>> How can OCFS help me? Or am I totally wrong and what I want is
>> impossible? The main goal, of course, is to have a redundant failover
>> solution.
>> If my question is more related to drbd-users, I'll switch over to the
>> other list.
>>
>> Thanks,
>> Rainer
>
> Hi Rainer,
>
> Basically, the only prerequisite for getting live migration working is
> that you run DRBD 8, which gives you primary/primary. You don't have to
> bother with a cluster-aware FS: the two nodes will not actually access
> the filesystem at the same time, so running ext3 is perfectly fine.
>
> You can have a look at this guide for some additional information:
> http://www.asplund.nu/xencluster/xen-cluster-howto.html

I plan to try this soon. I've read the guide and it all makes sense to
me, but could somebody explain why using drbd on top of lvm is
preferable to using drbd on the physical devices and lvm on top of the
drbd devices?

Thinking of the layers of devices:

  disk
  partition
  lvm pv
  lvm vg
  lvm lv (which is a physical disk to the VM)

Am I correct in thinking that drbd could be inserted anywhere in that
list? Where is best? Why?

Thanks
Andy
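
To illustrate Daniel's point about DRBD 8 and primary/primary, a
per-guest resource in /etc/drbd.conf would look roughly like the sketch
below. This is only a minimal, hypothetical example: the resource name,
hostnames, LV paths and IP addresses are made up and not taken from the
thread or the howto, and the exact option names should be checked
against the drbd.conf man page of the DRBD 8.x release in use.

    resource vm01 {
      protocol C;
      net {
        # let both nodes be Primary at once, which Xen needs briefly
        # while the domU is handed over during live migration
        allow-two-primaries;
      }
      startup {
        become-primary-on both;
      }
      on node1 {
        device    /dev/drbd1;
        disk      /dev/vg0/vm01-disk;   # the LV backing this guest
        address   192.168.0.1:7789;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd1;
        disk      /dev/vg0/vm01-disk;
        address   192.168.0.2:7789;
        meta-disk internal;
      }
    }

The guest's Xen config would then point at /dev/drbd1 (or use the drbd:
disk prefix, if the block-drbd helper script shipped with DRBD is
installed) rather than at the LV directly.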
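
To make the layering question concrete, the two stacks being compared
look roughly like this. The device names and example disk lines are
hypothetical, and this is only meant to show the two orderings, not to
answer which one is preferable:

    Option A - drbd on top of lvm (as in the howto):
      disk -> partition -> lvm pv -> lvm vg -> lvm lv -> drbd -> domU
      one drbd resource per guest, e.g.
        disk = [ 'phy:/dev/drbd1,xvda,w' ]

    Option B - lvm on top of drbd:
      disk -> partition -> drbd -> lvm pv -> lvm vg -> lvm lv -> domU
      one drbd resource carrying the whole VG, e.g.
        disk = [ 'phy:/dev/vg_drbd/vm01-disk,xvda,w' ]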