RE: [Xen-users] question to DRBD users/experts
> > I'm using DRBD. I was using LVM2 on a multiple-primary DRBD (eg one big
> > DRBD volume cut into slices with LVM) and when it worked it was fine but
> > it would split brain occasionally (on startup after a crash normally,
> > not just spontaneously) and the CLVM daemon would hang on occasion for
> > no good reason.
> >
> > Now I'm using DRBD on LVM on RAID0 and only multiple-primary where
> > necessary. Each DRBD is formed from an LV on each node. Extra work to
> > create a new DRBD volume (create LV on both nodes then set up the DRBD)
> > but much less likely to go wrong during normal use - it hasn't gone
> > wrong yet after months of use!
>
> Ok, so:
> - You create the same size LV on each xen host.
> - Set up a DRBD using that LV.
> - Each VM would use that DRBD as its storage?

Correct. It might be a bit of a pain to resize an LV but I haven't had to
try yet.

> In a split brain you then choose which way to recover the data for each
> separate LV/DRBD set.

I'm only using a single primary model now so the problem hasn't come up.

> How difficult/complex is it for you to add a VM this way? I guess once
> you get the procedure down it's probably not that difficult...

It's trickier than having LVM on top of DRBD but it's not so bad.

> Any experience with ocfs2 over drbd? In our testing it has actually been
> quite stable, and at times even tough to force a split brain situation,
> but you never know when it's going to happen!

No. And in fact you'd be dealing with the possibility of split brain on
DRBD and on OCFS2, so it becomes even trickier.

> > A better setup though would be a SAN consisting of iSCSI on DRBD in
> > single primary mode (using HA to handle failover if the primary fails)
> > and all the hosts using iSCSI. I don't have enough hardware to make that
> > work though unfortunately.
>
> Using a SAN would be our first choice, unfortunately the costs, even for
> a low end SAN, make it not possible.
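For anyone following along, the per-VM LV-plus-DRBD procedure described above might look roughly like this. This is only a sketch: the volume group (vg0), resource name (vm01), hostnames, addresses, and sizes are all hypothetical, and the config uses DRBD 8 style resource syntax.

```
# On BOTH nodes: create an identically sized LV to back the DRBD device
#   lvcreate -L 10G -n vm01 vg0

# Hypothetical /etc/drbd.d/vm01.res, identical on both nodes:
resource vm01 {
  protocol C;                      # synchronous replication
  on node-a {
    device    /dev/drbd1;
    disk      /dev/vg0/vm01;       # the LV created above
    address   192.168.0.1:7789;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd1;
    disk      /dev/vg0/vm01;
    address   192.168.0.2:7789;
    meta-disk internal;
  }
}

# Then on both nodes:
#   drbdadm create-md vm01 && drbdadm up vm01
# Promote one side to primary for the initial sync, and point the VM's
# config at /dev/drbd1 as its disk.
```

Creating a new VM this way means repeating the lvcreate on both nodes plus one new resource file, which matches the "extra work per volume, but simpler failure modes" trade-off described above.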
> I've not yet done any performance tests to see if what I can build out
> of low end equipment would be fast enough...

James

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users