
Re: [Xen-users] Re: LVM on DRBD -OR- DRBD on LVM?


  • To: "xen-users@xxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxx>
  • From: Simon <greminn@xxxxxxxxx>
  • Date: Thu, 22 Jun 2006 08:43:47 +1200
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 21 Jun 2006 13:44:32 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi Guys, Thanks for the replies... see inline below:

On 6/22/06, xen-users@xxxxxxxxxxxxx <xen-users@xxxxxxxxxxxxx> wrote:
Simon,

I think Matthew was making a recommendation based upon how you have
your domUs set up ... i.e. are your domUs living on LVM LVs, or disk
files in one big LV, multiple PVs (i.e. one for the boot/root of the
dom0 and one (or more) for the domUs), etc.

Each domU has its own LVs in this case (hda/swap), so I'm mirroring
the entire PV using drbd0.
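
For reference, the LVM-on-DRBD side of it boils down to roughly this
(just a sketch; the VG name vg_guests and the domU names are made up,
and /dev/drbd0 sits on top of a spare partition on each box):

    # one-off setup, done while server1 holds the primary role on drbd0
    pvcreate /dev/drbd0                       # the mirrored device is the PV
    vgcreate vg_guests /dev/drbd0             # one VG on top of it
    lvcreate -L 2G -n guestA-root vg_guests   # root LV for a domU
    lvcreate -L 512M -n guestA-swap vg_guests
    # ...repeated for each domU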

We need to know how you have your primary server configured ... I'm not
sure you can reliably mirror your entire root PV since the kernel needs
to be up and have the DRBD driver in place to mount the device the
kernel will be booting on (chicken and egg) ... it may be possible, but
I've not thought about it long enough to decide.

So this means I have to have a script to start LVM, then start each domU...

I *seem* to have this working, I think. Here is how it's currently
configured (note: no Xen yet):

DRBD starts before LVM, and then I have a script to make server1 the
primary (is there a way to do this in the configuration?). On server2
I have removed LVM from the boot init. Once at the command line I can
mount one of the LVs on server1 and read/write to it, etc. Then I turn
the power off on server1 to fake a hardware problem, and run a script
on server2 that makes it primary, starts LVM, and mounts the drive,
where I can see the changes from server1. Then I boot server1 again,
etc... What I have found is that if I try to start LVM on the
secondary I get lots of errors:

server2:~# /etc/init.d/lvm start
Setting up LVM Volume Groups...
 Reading all physical volumes.  This may take a while...
 /dev/drbd0: read failed after 0 of 1024 at 0: Input/output error
 /dev/drbd0: read failed after 0 of 2048 at 0: Input/output error
 No volume groups found
 No volume groups found
 /dev/drbd0: read failed after 0 of 2048 at 0: Input/output error
 No volume groups found

Jun 21 13:37:41 server2 kernel: drbd0: 65 messages suppressed in
/usr/src/modules/drbd/drbd/drbd_req.c:197.
Jun 21 13:37:41 server2 kernel: drbd0: Not in Primary state, no IO
requests allowed

So I'm guessing I'm correct in leaving LVM off on the secondary?

But all changes are mirrored and resynced etc.
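
The failover script on server2 is basically just this (a rough sketch;
'r0' is whatever the resource is called in drbd.conf, and vg_guests is
the VG name used above):

    #!/bin/sh
    # promote this node, then bring the VG up now that drbd0 accepts IO
    drbdadm primary r0            # or: drbdadm primary all
    vgchange -a y vg_guests       # activate the LVs
    mount /dev/vg_guests/guestA-root /mnt/guestA
    # ...later the domUs would get started here instead of the mount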

What we've done is set up a pair of physically identical servers with a
GigE crossover cable for the DRBD devices to sync across (and a 100M Eth
for the public interface), and each guest has a set of LVs dedicated to
it.  Each LV is created identically on both servers and bound under
a DRBD device.  The guest mounts the DRBD device, so all writes to the
domU's "disk" get replicated via DRBD to both servers.

Now, that said, that's not our entire story, as we have some more
mirroring going on under the LVs, but you get the idea of where we
have DRBD in place.

Each guest generally has 3 LVs associated with it, and only 2 of those
are replicated via DRBD:

    xen0:GuestA-root ->   drbd    ->  xen1:GuestA-root      (2gb)
    xen0:GuestA-swap <not replicated> xen1:GuestA-swap      (1gb)
    xen0:GuestA-data ->   drbd    ->  xen1:GuestA-data      (anywhere from 5gb to 100gb)
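
In a guest config file that layout ends up looking something like this
(device names are made up; the point is that root and data go through
the drbd devices while swap is just a plain local LV):

    # /etc/xen/guestA -- disk lines only
    disk = [ 'phy:/dev/drbd1,hda1,w',              # GuestA-root via DRBD
             'phy:/dev/xenvg/GuestA-swap,hda2,w',  # swap, not replicated
             'phy:/dev/drbd2,hda3,w' ]             # GuestA-data via DRBD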

I felt that replicating swap was just a waste of resources as we can't
do hot migration anyway, so upon reboot, anything in swap is gone anyhow.

Noted - had the same thoughts.

Currently, we are running with 14 DRBD devices, but none of the guests
are in production yet, so I don't have good load and performance data
yet ... I'm still hunting for good performance monitoring tools (got one
suggestion for ganglia, and I also found bandwidthd as well as the usual
friends like cacti+snmpd), and I've been watching the private GigE as
well as /proc/drbd.
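
For the /proc/drbd side, something as simple as this is enough to
eyeball resync progress and activity:

    watch -n1 cat /proc/drbd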

I'll let the list know how performance seems to be after it's all up and
running!  :)

Oh, and currently, we are running all guests on the primary server, but
we plan to distribute the guests across both servers after all is running.

Hmm - good point. I'll think about that one.

For simplicity's sake, I just really want one dom0 where all our domUs
run. This will be running web (LAMP)/mail/DNS, and secondary mail/DNS
is on a separate server in a different location.

Regards,

Simon

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

