
Re: [Xen-users] Problem with xen VBDs, xen backend, and dom0 drivers on xen 4.0+Debian Squeeze



Thank you again, David! This is helpful.

Ben

On 14/11/11 16:16, David Della Vecchia wrote:
A fix was apparently in squeeze-proposed-updates a few months ago, but I have yet to see it propagate to the updates repo. I checked again a week ago and it's still broken.

Since then I've used a different backup procedure. I send a sysrq 's' to the guest to sync its disks, then take an LVM snapshot; 24 hours later I dd the snapshot to a file named after the original snapshot time, drop the snapshot, and re-snapshot. This lets my customers restore (merge) back to the snapshot within the 24-hour window, which is a much quicker restore than having to dd an image file back in.
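In rough outline the job looks something like the sketch below; the guest name, volume group, and backup path are placeholders, not my real setup.

# Rough sketch of the procedure described above (names/paths are examples only).
# 1. Ask the guest to sync its filesystems, then snapshot its LV.
xm sysrq lucid-guest s
lvcreate --snapshot --size 5G --name lucid-guest-snap /dev/vg0/lucid-guest

# 2. ~24 hours later: dump the snapshot to an image file named after the
#    original snapshot time, drop the old snapshot, and take a fresh one.
dd if=/dev/vg0/lucid-guest-snap of=/backup/lucid-guest-20111114.img bs=1M
lvremove -f /dev/vg0/lucid-guest-snap
xm sysrq lucid-guest s
lvcreate --snapshot --size 5G --name lucid-guest-snap /dev/vg0/lucid-guest

# Within the 24-hour window a customer can be rolled back by merging the
# snapshot into the origin (one way is lvconvert --merge, which needs
# snapshot-merge support in LVM and the kernel) instead of dd'ing a full
# image back in.
lvconvert --merge /dev/vg0/lucid-guest-snap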

-DDV

On Fri, Nov 11, 2011 at 3:16 PM, Benjamin Weaver <benjamin.weaver@xxxxxxxxxxxxxxxxxx> wrote:
Yes, right, thanks again, David: you had helped me a lot with this initially. I am a bit inexperienced and did not know whether there might be some unusual workaround, patch, etc.

Do you know, by any chance, when a Debian fix might be released?

Thanks again for your help.

________________________________________
From: davidoftheold@xxxxxxxxx [davidoftheold@xxxxxxxxx] On Behalf Of David Della Vecchia [ddv@xxxxxxxxxxxxxxxx]
Sent: 11 November 2011 20:04
To: benjamin.weaver@xxxxxxxxxxxxx
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Problem with xen VBDs, xen backend, and dom0 drivers on xen 4.0+Debian Squeeze

This is a known issue with Debian domUs on Debian dom0s.

CentOS works fine in this regard, however.

-ddv

On Fri, Nov 11, 2011 at 10:28 AM, Benjamin Weaver <benjamin.weaver@xxxxxxxxxxxxx> wrote:
I am running Xen 4.0.1 with Debian Squeeze (kernel: Linux 2.6.32-5-xen-amd64, Debian 2.6.32-38). Below is output indicating the problem.

My VMs are Ubuntu (Lucid). I cannot save and restore them properly. A Lucid VM works fine when first created by xm create, but when I save it (xm save hostname filename) and then restore from that file (xm restore filename), I get a VM that lets me log in and then freezes at the prompt.
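For clarity, the sequence that reproduces the hang looks roughly like this (the domain name and file paths are just examples, not my exact setup):

# Example only -- "lucidxentest" and the paths stand in for my real names.
xm create /etc/xen/lucidxentest.cfg        # freshly created VM: console works fine
xm save lucidxentest /var/xen/lucidxentest.save
xm restore /var/xen/lucidxentest.save      # VM comes back, accepts login,
                                           # then the prompt freezes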

This problem with Lucid VMs surfaced only a few weeks ago; before that I was running linux-base 2.6.32-35. The problem is related to Bug #644604 (http://lists.debian.org/debian-kernel/2011/10/msg00183.html).

I had gotten some good suggestions on how and whether to compile a kernel newer than Squeeze's, but I had some difficulties compiling it, and in any event I would like to run my VMs on a stable release.

MY GUESS AT THE PROBLEM: I have since come to suspect a problem of communication between the Xen virtual block devices (VBDs), the Xen frontend and backend drivers, and the dom0 drivers, which I thought might be fixable.
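One way I understand this frontend/backend handshake can be inspected from dom0 is via xenstore; the commands below are only a sketch, and the domain ID 5 is an example rather than anything from my system:

# Sanity check of a guest's block devices after a restore (domain ID 5 is an example).
# Backend view of the guest's VBDs, as seen from dom0:
xenstore-ls /local/domain/0/backend/vbd/5
# Frontend view in the guest's area of xenstore:
xenstore-ls /local/domain/5/device/vbd
# The "state" key should read 4 (XenbusStateConnected) on both sides once the
# devices have reconnected after the restore.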

Please confirm whether that is the case; any suggestions on how to fix this problem would be greatly appreciated!


Output (see below)

I notice a couple of things:

1. When the Lucid VM is created, (a) a df command shows only xvda2 showing up as a filesystem, and (b) lsmod shows only xen_blkfront and xen_netfront. This is in contrast to the output of the same commands on a Hardy or Lenny VM: there, df shows several active filesystems, and lsmod shows several modules (ipv6, jbd, etc.), none of which are xen_blkfront or xen_netfront.

2. After the Lucid VM is saved, no reads or writes are done to its VBDs.



# df command on lenny
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda2             2064208    392684   1566668  21% /
varrun                  262252        28    262224   1% /var/run
varlock                 262252         0    262252   0% /var/lock
udev                    262252        12    262240   1% /dev
devshm                  262252         0    262252   0% /dev/shm
root@lucidxentest3:~#


# df command on lucid
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/xvda2             2064208    545596   1413756  28% /
none                    240380       120    240260   1% /dev
none                    252152         0    252152   0% /dev/shm
none                    252152        28    252124   1% /var/run
none                    252152         0    252152   0% /var/lock
none                    252152         0    252152   0% /lib/init/rw
root@lucidxentest:~#



# lsmod on lucid vm
Module                  Size  Used by
xen_netfront           17890  0
xen_blkfront           10665  2
root@lucidxentest:~#

# lsmod on hardy vm
Module                  Size  Used by
ipv6                  313960  10
evdev                  15360  0
ext3                  149520  1
jbd                    57256  1 ext3
mbcache                11392  1 ext3
root@lucidxentest3:~#


Before xm save, the VBDs on the Lucid VM show read/write activity with non-zero values.


After save, xm top shows the Lucid VM's VBDs with zeroed-out read/write values, i.e. values of 0 under the following xm top columns:
VBD_OO   VBD_RD   VBD_WR   VBD_RSECT   VBD_WSECT
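For anyone wanting to compare, the counters can also be captured non-interactively; this is just an illustrative sequence, with the domain name and save path as placeholders:

# Capture VBD counters in batch mode around the save/restore cycle (names are examples).
xentop -b -i 1 | grep lucidxentest      # before xm save: non-zero VBD_RD / VBD_WR
xm save lucidxentest /var/xen/lucidxentest.save
xm restore /var/xen/lucidxentest.save
xentop -b -i 1 | grep lucidxentest      # after restore: the VBD columns stay at 0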




_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

