
Re: [Xen-users] [LVM2 + DRBD + Xen + DRBD 8.0] errors on dom0 and on domU



Maxim Doucet wrote:
> xen@xxxxxxxxxx wrote:
>   
>> On Tue, 14 Aug 2007, Maxim Doucet wrote:
>>
>>     
>>> I get the following error messages when launching the virtual
>>> machine:
>>> *On dom0: the physical server* (messages coming from dmesg):
>>> drbd0: bio would need to, but cannot, be split:
>>> (vcnt=2,idx=0,size=2048,sector=126353855)
>>> drbd0: bio would need to, but cannot, be split:
>>> (vcnt=2,idx=0,size=2048,sector=126353855)
>>>       
>> We are using a nearly identical configuration and experienced the same
>> problem just today:
>>
>> LVM2 on DRBD under Xen 3.0.3 with DRBD 8.0.4, using CentOS 5 on x86_64,
>> dom0 kernel 2.6.18-8.1.8-el5xen
>>
>> The virtual machine is an FC6 x86_64 PV guest and gave similar guest
>> errors.
>>
>> The workaround we are using is to change
>>
>> disk = [ 'phy:/dev/vg-drbd/vm0,xvda,w' ]
>>    to
>> disk = [ 'tap:aio:/dev/vg-drbd/vm0,xvda,w' ]
>>
>> This treats the underlying backing image as a file.  This may cost some
>> performance since it is not using direct device I/O, but as far as I can
>> tell it is stable.  Or at least, phy: fails miserably, whereas tap:aio:
>> works fine!
>>
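(Side note for anyone who wants to try the same workaround: a minimal
domU configuration file using it could look roughly like the sketch
below. The guest name, kernel/ramdisk paths, memory size and MAC address
are only placeholders, not values from Eric's setup; the only line that
actually matters for the workaround is the disk line.)

name    = "vm0"
memory  = 512
vcpus   = 1
# PV guest; the kernel/initrd paths are just examples, pygrub works too
kernel  = "/boot/vmlinuz-2.6.18-xen"
ramdisk = "/boot/initrd-2.6.18-xen.img"
# blktap backend (tap:aio) instead of the plain phy: blkback path
disk    = [ 'tap:aio:/dev/vg-drbd/vm0,xvda,w' ]
vif     = [ 'mac=00:16:3e:00:00:01,bridge=xenbr0' ]
root    = "/dev/xvda1 ro"
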
>> This seems to indicate that it's not an LVM+DRBD or Xen+LVM problem,
>> but rather a Xen+LVM+DRBD using phy: problem.  I tested to see if Xen
>> liked running LVM on a loopback device and loading a VM off it using
>> phy: (see below).  It worked fine, which makes me think this is more
>> of a DRBD issue than a Xen or LVM issue.
>>
>> If you are on the DRBD list, please cross-post this (as I am not)
>> since it is probably relevant.
>>
>> -Eric
>>
>>
>> ============== Xen+LVM+loop test:
>>
>> # dd if=/dev/zero bs=1G seek=32 count=1 of=/tmp/testimage
>> # losetup /dev/loop0 /tmp/testimage
>> # pvcreate /dev/loop0
>> # vgcreate vg-loop /dev/loop0
>> # pvscan
>>   [...]
>>   PV /dev/loop0   VG vg-loop   lvm2 [11.00 GB / 6.99 GB free]
>>   [...]
>> # lvcreate -n testvm -l 1025 vg-loop
>>
>> # lvscan
>>   [...]
>>   ACTIVE            '/dev/vg-loop/testvm' [4.00 GB] inherit
>>   [...]
>>
>> # ls -l
>> -rwxr-xr-x 1 root root 4294967297 Jul 23 16:22 disk0
>> # dd if=disk0 bs=4M of=/dev/vg-loop/testvm
>> 1024+1 records in
>> 1024+1 records out
>> 4294967297 bytes (4.3 GB) copied, 396.227 seconds, 10.8 MB/s
>>
>>
>>     
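
(The domU disk line used for that loopback test is not shown above;
presumably it was a plain phy: mapping of the loop-backed logical volume,
something along these lines, with the xvda device name and the write mode
being only assumptions:

disk = [ 'phy:/dev/vg-loop/testvm,xvda,w' ]

i.e. the same phy: syntax that fails on top of DRBD, which is what makes
the comparison meaningful.)
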
> Thanks a lot for your feedback; I'll try the workaround and report my
> results here.
>
> I have forwarded your message to the DRBD mailing list:
> http://lists.linbit.com/pipermail/drbd-user/2007-August/007267.html
Good news! Thanks to the "tap:aio" driver workaround, I have been able
to perform a clean, standard installation of Fedora Core 7 on LVM,
itself on top of DRBD, which happily synchronizes the data between the
two servers. It worked like a charm and paves the way for further
development of a redundant, highly available architecture for the
services we run.
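
For reference, building such a stack on the primary node boils down to
something like the sketch below. The DRBD resource name "r0", the volume
group "vg-drbd" and the logical volume "fc7" are only placeholders, and
the lvm.conf hint is the usual precaution when stacking LVM on DRBD:

# DRBD resource (here called "r0") already configured and in sync on
# both nodes, exposed as /dev/drbd0
drbdadm primary r0

# LVM stacked on top of the replicated device
pvcreate /dev/drbd0
vgcreate vg-drbd /dev/drbd0
lvcreate -n fc7 -L 8G vg-drbd

# in /etc/lvm/lvm.conf, adjust the filter so that LVM scans /dev/drbd*
# and ignores the backing partition, otherwise the PV is seen twice

# and in the domU configuration, the blktap workaround:
# disk = [ 'tap:aio:/dev/vg-drbd/fc7,xvda,w' ]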

Now, regarding the performance cost of the "tap:aio" driver (an
emulated path, correct me if I'm wrong) compared with the direct I/O
access of the "phy" driver, Lars Ellenberg
(http://www.linbit.com/en/company/team/lars-g-ellenberg/, the co-author
of DRBD) gave some information on the way the Xen virtual block device
layer handles block device I/O.

The link to his post is:
http://lists.linbit.com/pipermail/drbd-user/2007-August/007269.html
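
In the meantime, a quick and admittedly crude way to get a feeling for
the overhead is to compare sequential read throughput measured in dom0
against the same test run inside the guest through the tap:aio disk. The
device names below are just examples from a hypothetical setup, and
O_DIRECT is used to keep the page cache out of the measurement:

# in dom0, reading straight from the logical volume backing the guest
dd if=/dev/vg-drbd/fc7 of=/dev/null bs=1M count=1024 iflag=direct

# inside the domU, reading through the tap:aio virtual block device
dd if=/dev/xvda of=/dev/null bs=1M count=1024 iflag=direct

This is no substitute for a real benchmark, but it should give an order
of magnitude for the cost of going through the tap:aio backend.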

The post is interesting and technically informative, and it is followed
by an answer from Ross S. Walker, who gives further technical details
about the development of block layer device handlers here:
http://lists.linbit.com/pipermail/drbd-user/2007-August/007270.html

Again, thanks for your feedback, and let's hope for other developers'
reviews of the core of this problem.

-- 
Maxim Doucet - www.alamaison.fr
sys admin @ la maison


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

