Re: [Xen-users] Xen on two node DRBD cluster with Pacemaker
On Thursday 20 January 2011 09:44:47 Jean Baptiste Favre wrote:
> Hello Bart,
> Answers inline.
>
> On 19/01/2011 21:49, Bart Coninckx wrote:
> > On Wednesday 19 January 2011 21:15:46 Jean Baptiste FAVRE wrote:
> >> Hello Bart,
> >> I wrote such a howto some time ago. It's available here in French
> >> (http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.fr)
> >> and here in English
> >> (http://publications.jbfavre.org/virtualisation/cluster-xen-corosync-pacemaker-drbd-ocfs2.en).
> >>
> >> It's LVM based but could easily be adapted for img files.
> >>
> >> Regards,
> >> JB
> >>
> >> On 19/01/2011 17:54, Bart Coninckx wrote:
> >>> Hi all,
> >>>
> >>> could somebody point me to what is considered a sound way to offer Xen
> >>> guests on a two node DRBD cluster in combination with Pacemaker? I
> >>> prefer block devices over images for the DomUs. I understand that for
> >>> live migration DRBD 8.3 is needed, but I'm not sure what kind of
> >>> resource agents/technologies are advised (LVM, cLVM, ...) and what kind
> >>> of DRBD config (separate devices for each DomU, I guess?).
> >>> Thank you!
> >>> Bart
> >
> > Hi Jean,
> >
> > thank you for this document, it seems highly educational. Could you
> > please verify if I understand correctly:
> >
> > - you use LVM to build your DRBD resources on (not the other way around)
>
> At dom0 level, I have 2 VGs:
> - system, for... the system :)
> - XenHosting, for domUs and common Xen related FS
>
> Inside XenHosting, I create LVs:
> - One for common stuff (config, kernels, ISOs, ...).
> - One for each domU
>
> Each LV is defined as a DRBD resource.
>
> Having one DRBD resource per domU allows you to migrate them
> independently. Moreover, if your cluster grows and gets a third server,
> you can balance DRBD resources between the nodes. It's just easier to
> manage.
>
> Each domU boots from its LV. That means that at domU level, your DRBD
> resource (or LV) is seen as a disk. Then you install your domU system.
> The way I install domUs means I use LVM inside them as well.
>
> To summarise, you have LVs inside a DRBD resource on top of an LV.
> It seems complicated but in fact it's not so tricky :)
>
> If you want to use img files, either you store them all on a single LV,
> in which case you'll have to create a cluster FS on it, or you use
> separate LVs. With separate LVs, I prefer installing the domU on the LV
> directly instead of using an img file, because it's easier to access an
> LV based FS from dom0 if the domU crashes.
>
> > - you use a DRBD resource with an OCFS2 filesystem on it to offer the
> > ISOs and config files on both nodes for every DomU
>
> Yes, using a cluster FS allows you to mount it on each node and provides
> concurrent access for each dom0. I chose OCFS2 at the time of writing,
> but if I had to do it now, I would give GlusterFS a try.
>
> You can do it another way (like rsync), but I prefer a cluster FS as I
> don't have to think about synchronisation.
>
> > - for each DomU you create a separate DRBD resource that is allowed to
> > be dual primary so you can do live migrations.
>
> You MUST allow dual primary if you want to use live migration. So it's
> not cluster specific.
>
> > Am I doing all right? ;-)
>
> Yep, it seems so ;)
>
> Regards,
> JB
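(For readers reconstructing the layout JB describes, a minimal sketch follows. The VG name XenHosting comes from the thread; the LV names, sizes, hostnames, IP addresses and ports are invented examples, and the resource file uses DRBD 8.3 syntax.)

    # dom0 volume layout -- run on both nodes (sizes and LV names are examples)
    vgcreate XenHosting /dev/sdb1
    lvcreate -L 20G -n xencommon XenHosting   # common stuff: configs, kernels, ISOs
    lvcreate -L 10G -n domu1     XenHosting   # one backing LV per domU

    # /etc/drbd.d/domu1.res -- one DRBD resource per domU LV
    resource domu1 {
      protocol C;
      net {
        allow-two-primaries;                  # needed for Xen live migration
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      on node1 {
        device    /dev/drbd1;
        disk      /dev/XenHosting/domu1;      # the LV created above
        address   192.168.10.1:7789;
        meta-disk internal;
      }
      on node2 {
        device    /dev/drbd1;
        disk      /dev/XenHosting/domu1;
        address   192.168.10.2:7789;
        meta-disk internal;
      }
    }

    # bring the resource up (on both nodes)
    drbdadm create-md domu1
    drbdadm up domu1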
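(The common LV for configs, kernels and ISOs gets a DRBD resource of its own and is formatted with OCFS2 so both dom0s can mount it at the same time. A rough sketch, assuming that resource maps to /dev/drbd0 and that an OCFS2 cluster stack, either o2cb or Pacemaker managed, is already running; the label and mount point are examples.)

    # on one node only, once the common resource is primary there
    mkfs.ocfs2 -L xencommon /dev/drbd0

    # on both nodes (mount point is an example)
    mkdir -p /srv/xen
    mount -t ocfs2 /dev/drbd0 /srv/xen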
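(Each domU then simply sees its DRBD device as a whole disk, and whatever partitioning or LVM layout lives inside it is created during the guest install. An illustrative /etc/xen/domu1.cfg, with all values made up.)

    # /etc/xen/domu1.cfg -- illustrative only
    name       = 'domu1'
    memory     = 1024
    vcpus      = 2
    bootloader = '/usr/bin/pygrub'
    # the whole DRBD device is handed to the guest as one disk
    disk       = [ 'phy:/dev/drbd1,xvda,w' ]
    # with the block-drbd helper script shipped by DRBD, Xen itself can
    # handle the primary/secondary switch instead of the cluster manager:
    # disk     = [ 'drbd:domu1,xvda,w' ]
    vif        = [ 'bridge=xenbr0' ]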
Jean,

all clear, except for the LVM on top of the DRBD resource: is that LV created from dom0, or during the installation of the domU?

It seems a sound way of implementing things. I roughly did the same in the past, but with an external Pacemaker for an iSCSI cluster. Since this project is without one, I was a bit unsure as to how to do the live migration and such. You don't use any STONITH?

B.
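(As for driving this from Pacemaker: the sketch below shows, in crm shell syntax, roughly what a per-domU dual-primary DRBD master/slave resource, a live-migratable Xen resource and a STONITH device could look like. Resource IDs, the fencing plugin and all parameter values are invented examples, not taken from the thread.)

    # one master/slave DRBD resource per domU, promoted on both nodes
    primitive p_drbd_domu1 ocf:linbit:drbd \
        params drbd_resource="domu1" \
        op monitor interval="15s" role="Master" \
        op monitor interval="30s" role="Slave"
    ms ms_drbd_domu1 p_drbd_domu1 \
        meta master-max="2" clone-max="2" notify="true" interleave="true"

    # the domU itself, allowed to live migrate between the two nodes
    primitive p_xen_domu1 ocf:heartbeat:Xen \
        params xmfile="/etc/xen/domu1.cfg" \
        op monitor interval="30s" \
        meta allow-migrate="true"
    colocation c_xen_on_drbd inf: p_xen_domu1 ms_drbd_domu1:Master
    order o_drbd_before_xen inf: ms_drbd_domu1:promote p_xen_domu1:start

    # STONITH is strongly recommended with dual-primary DRBD;
    # the fencing plugin and its parameters depend on the hardware
    primitive p_stonith_node2 stonith:external/ipmi \
        params hostname="node2" ipaddr="10.0.0.2" userid="admin" passwd="secret" \
        op monitor interval="60m"
    location l_stonith_node2 p_stonith_node2 -inf: node2
    property stonith-enabled="true"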
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users