
Re: AW: AW: [Xen-users] domU on sparse-file on ocfs2 on drbd(pri/pri): Is there anyone running this successfully?



On Friday, 13.02.2009, at 16:11 +0100, Rafał Skóra wrote:
> There are a lot of tutorials about it on the net. Moreover, Xen has
> support for DRBD-based devices.
> Search for "drbd xen live migration" on Google.
> When you have DRBD pri/pri working, you just use DRBD as the device
> backend for the domU (where the name of the disk resource is the name
> of the DRBD resource). Then you simply type xm migrate --live etc.
> and that's it.
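
For the archives, that approach boils down to roughly this (resource
name "r0", domU name "vm1" and the peer hostname are placeholders):

    # /etc/xen/vm1 -- disk backed directly by a DRBD resource; the
    # block-drbd helper script promotes the resource on the target host
    disk = [ 'drbd:r0,xvda,w' ]

    # live migration to the peer (xend relocation must be enabled
    # on both hosts)
    xm migrate --live vm1 peer-node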

Yeah, but in most setups you have bunches of DRBD devices which all
have to be in sync, so split-brain situations will be tricky to fix.

In my setup there's only _one_ DRBD device which has to be in sync...
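
The dual-primary part of that looks roughly like this in drbd.conf
(DRBD 8.x syntax; hostnames, backing disks and IPs are examples):

    resource r0 {
        protocol C;
        net {
            allow-two-primaries;                 # needed for live migration
            after-sb-0pri discard-zero-changes;  # split-brain auto-recovery
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;
        }
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   192.168.1.1:7788;
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sda3;
            address   192.168.1.2:7788;
            meta-disk internal;
        }
    }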

Also, the drbd: VBDs are not as performant (if you have many DRBD
devices) as having just one device that holds _all_ your virtual disks
through LVM.
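
The stack I mean is simply this (placeholder names; run the pvcreate/
vgcreate once only, since DRBD replicates the metadata to the peer):

    pvcreate /dev/drbd0                  # one PV on the replicated device
    vgcreate vg_xen /dev/drbd0           # one VG holding everything
    lvcreate -L 10G -n vm1-disk vg_xen   # one LV per domU

    # the domU then uses a plain phy: backend:
    # disk = [ 'phy:/dev/vg_xen/vm1-disk,xvda,w' ]

You just have to activate the VG on both nodes and take care not to
change LVM metadata on both sides at once.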

And finally, I can do online backups through LVM snapshots...
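
Sketched with the names from above (the snapshot size is an example):

    # consistent online backup via a short-lived snapshot
    lvcreate -s -L 1G -n vm1-snap /dev/vg_xen/vm1-disk
    dd if=/dev/vg_xen/vm1-snap bs=1M | gzip > /backup/vm1-disk.img.gz
    lvremove -f /dev/vg_xen/vm1-snap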

Just my 2 cents..

But if none of you is interested, I'll keep that secret ;)

Thomas 
> 
> 
> 
> Thomas Halinka wrote:
> > Hello together,
> >
> >
> > I'm currently writing a howto for this purpose.
> >
> > 2 boxes with local disks, DRBD, heartbeat, LVM and live migration.
> >
> > It will be online today.
> >
> > So Long,
> >
> > have phun
> >
> > Thomas
> >
> >
> On Friday, 13.02.2009, at 14:38 +0100, Rustedt, Florian wrote:
> >   
> >> ..OK, but how do I share a block device concurrently?
> >> Are you thinking of open-iscsi?
> >>
> >> But I thought this SAN "emulation" is a 1-to-n connection?
> >> AFAIK I'll have to run a daemon that distributes the iSCSI device.
> >> If I am using DRBD, I can't run this service on both nodes at the
> >> same time, am I right?
> >>
> >> So how do I implement "shared block devices" in a way that mirrors
> >> my images between two nodes for instant live migration?
> >>
> >> Florian 
> >>
> >>     
> >>> -----Original Message-----
> >>> From: Javier Guerra Giraldez [mailto:javier@xxxxxxxxxxx]
> >>> Sent: Friday, 13 February 2009 13:50
> >>> To: xen-users@xxxxxxxxxxxxxxxxxxx
> >>> Cc: Rustedt, Florian; lists@xxxxxxxxx
> >>> Subject: Re: AW: [Xen-users] domU on sparse-file on ocfs2 on
> >>> drbd(pri/pri): Is there anyone running this successfully?
> >>>
> >>> Rustedt, Florian wrote:
> >>>> ..Well, but which filesystem do I take instead?
> >>>> On top of the LVs, I need something that interacts with two
> >>>> machines using it, else I couldn't do migration...
> >>>>
> >>>> So I thought I NEED OCFS or GFS for locking purposes...?
> >>>
> >>> If you use block devices (as opposed to image files) you don't
> >>> need a shared filesystem, just shared block devices.
> >>>
> >>> --
> >>> Javier
> >>>
> >>>       
> >>
> >>
> >
> >


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

