
[Xen-users] trying to diagnose migrated VM



Hi Folks,

Perhaps someone has some ideas.

I recently migrated a system (Debian Lenny) from bare iron into a DomU. Following some suggestions here, I essentially copied the old root volume onto the device I'm using for the virtualized root volume, adjusted the .cfg file to point to the Xen kernel and initrd, and everything is off and running.

But... I've noticed a couple of funny things:

1. there are times when the system starts showing very high i/o wait times

2. there are some funnies in the boot log, particularly <relevant excerpts>:


Sun Jun 6 15:17:52 2010: Loading kernel modules...done.

---------- these lines don't show up in the boot log for a newly built VM ----------

Sun Jun 6 15:17:52 2010: Assembling MD array md0...failed (no devices found).

Sun Jun 6 15:17:52 2010: Assembling MD array md1...failed (no devices found).

Sun Jun 6 15:17:52 2010: Assembling MD array md2...failed (no devices found).

Sun Jun 6 15:17:52 2010: Assembling MD array md3...failed (no devices found).

Sun Jun 6 15:17:52 2010: Generating udev events for MD arrays...done.

Sun Jun 6 15:17:52 2010: Setting up LVM Volume Groups Reading all physical volumes. This may take a while...

Sun Jun 6 15:17:53 2010: .
------------------------
Sun Jun 6 15:17:53 2010: Checking file systems...fsck 1.41.3 (12-Oct-2008)


This leads me to think that some kernel modules are being loaded that shouldn't be, and maybe that's also affecting my performance.

Any suggestions on how to clean this up?
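For what it's worth, the usual cleanup after a bare-metal-to-domU migration is to stop mdadm from trying to assemble arrays inside the guest and to keep the raid personalities out of the initramfs. A sketch for Debian Lenny (the exact paths and variable names are assumptions, check them on your system before running anything):

```shell
# Sketch only -- verify paths on a Lenny system before running.

# 1. Tell the mdadm initscripts not to auto-assemble arrays in the domU.
#    Debian's mdadm package reads /etc/default/mdadm for this.
sed -i 's/^AUTOSTART=.*/AUTOSTART=false/' /etc/default/mdadm

# 2. Keep the RAID modules from loading at boot. On Lenny the blacklist
#    file is /etc/modprobe.d/blacklist (no .conf suffix yet).
cat >> /etc/modprobe.d/blacklist <<'EOF'
# md is handled in dom0; the domU only sees plain block devices
blacklist raid456
blacklist raid1
blacklist raid0
EOF

# 3. Rebuild the initramfs so the change takes effect on the next boot.
update-initramfs -u
```

After rebooting, the "Assembling MD array mdN...failed" lines should disappear from the boot log.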

-----
One other funny thing: dmesg for BOTH a newly built domU (using xen-tools and debootstrap) and the old machine includes the lines:

[ 1.756070] raid6: int32x1 693 MB/s

[ 1.824030] raid6: int32x2 755 MB/s

[ 1.892046] raid6: int32x4 567 MB/s

[ 1.960236] raid6: int32x8 413 MB/s

[ 2.028020] raid6: mmxx1 1162 MB/s

[ 2.096062] raid6: mmxx2 1755 MB/s

[ 2.164031] raid6: sse1x1 1004 MB/s

[ 2.232054] raid6: sse1x2 2046 MB/s

[ 2.300049] raid6: sse2x1 2215 MB/s

[ 2.368045] raid6: sse2x2 3199 MB/s

[ 2.368063] raid6: using algorithm sse2x2 (3199 MB/s)

[ 2.368079] md: raid6 personality registered for level 6

[ 2.368089] md: raid5 personality registered for level 5

[ 2.368098] md: raid4 personality registered for level 4

[ 2.388454] md: md0 stopped.

[ 2.407078] md: md1 stopped.

[ 2.416000] md: md2 stopped.

[ 2.422210] md: md3 stopped.

[ 2.483540] device-mapper: uevent: version 1.0.3

[ 2.484838] device-mapper: ioctl: 4.13.0-ioctl (2007-10-18) initialised: dm-devel@xxxxxxxxxx


Now, my disk stack consists of RAID6 (md) -> LVM -> DRBD, but md should be completely hidden from the domU -- so this seems sort of weird. Any comments or suggestions?
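In case it helps anyone reproduce this: a few quick, non-destructive checks to confirm whether md is actually active inside the domU, or whether the modules are merely loading, probing, and finding nothing (output is obviously system-dependent):

```shell
# What md thinks it has -- in the domU this should show no active arrays:
cat /proc/mdstat

# Which raid/md modules are loaded:
lsmod | grep -E 'raid|md_mod'

# What block devices the domU actually sees:
cat /proc/partitions
```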


Thank you very much,

Miles Fidelman

--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

