
Re: [Xen-devel] xm/xl block-detach issue



Hi again :)

Going further with my investigations, I'm now definitely sure that this is not an issue with ocfs2.
The same behaviour occurs on a local file.

There is really something going wrong with xm/xl. To continue my tests I tried to attach
an .img (raw) file, this time not to dom0 but to a running domU vm:

no way with xl

box# xl block-attach 2 file:/cloud/data2/images/xen/slackware.13-37.x86.20110428/slackware.13-37.x86.20110428.img xvdc w
libxl_device_disk_add failed.
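
For anyone who wants more detail on why libxl refuses the attach, something like this should show it
(only a sketch: it assumes xl's global -v verbose switch and the standard xenstore backend layout):

box# xl -v block-attach 2 file:/cloud/data2/images/xen/slackware.13-37.x86.20110428/slackware.13-37.x86.20110428.img xvdc w
box# xenstore-ls /local/domain/0/backend/vbd/2

The second command is just to check whether libxl wrote anything under the vbd backend path for the new device before giving up.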

with xm

box# xm block-attach 2 file:/cloud/data2/images/xen/slackware.13-37.x86.20110428/slackware.13-37.x86.20110428.img xvdc w

This worked. Now trying to list with xl:

box# xl block-list 2
Vdev BE handle state evt-ch ring-ref BE-path
Segmentation fault
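
As a workaround for the segfault, the same information block-list is supposed to print can be read
straight out of xenstore (assuming the standard backend layout):

box# xenstore-ls /local/domain/0/backend/vbd/2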

box# xm block-list 2
Vdev BE handle state evt-ch ring-ref BE-path
51712  0    0     4      23     8     /local/domain/0/backend/vbd/2/51712
51728  0    0     4      24     17    /local/domain/0/backend/vbd/2/51728
51744  0    0     4      25     50    /local/domain/0/backend/vbd/2/51744

Unplugging with xm:

box# xm block-detach 2 51744

box# xm block-list 2
Vdev BE handle state evt-ch ring-ref BE-path
51712  0    0     4      23     8     /local/domain/0/backend/vbd/2/51712
51728  0    0     4      24     17    /local/domain/0/backend/vbd/2/51728

Gone :)
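
Just to be sure block-list is not lying, the backend node can also be checked directly in xenstore,
something along these lines (standard layout assumed):

box# xenstore-ls /local/domain/0/backend/vbd/2 | grep 51744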


Summary:

1) block-attach / block-detach / block-list issued from xl and xm do not behave the same way;
xl tends to crash a lot and leave things in a mess.

2) attaching an .img file with xm and file:/ lets me detach it again with xm block-detach

3) attaching a .vhd file with xm and tap:vhd:/ does not let me detach it with xm block-detach (a small reproduction sketch follows below)
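
For anyone trying to reproduce points 2) and 3), here is a minimal sketch of the steps. The domain id,
image path and device ids are the ones from my box above, the .vhd path is just a placeholder, so
adjust everything to your setup:

#!/bin/sh
# reproduction sketch, not a polished script
DOMID=2
IMG=/cloud/data2/images/xen/slackware.13-37.x86.20110428/slackware.13-37.x86.20110428.img
VHD=/cloud/data2/images/xen/test.vhd     # placeholder path, any vhd should do

# point 2: raw image over file:/ -- attach, list and detach all behave with xm
xm block-attach $DOMID file:$IMG xvdc w
xm block-list $DOMID
xm block-detach $DOMID 51744             # devid of xvdc as reported by block-list

# point 3: vhd over tap:vhd:/ -- attach works, detach does not on my box
xm block-attach $DOMID tap:vhd:$VHD xvdd w
xm block-detach $DOMID 51760             # 51760 should be xvdd's devid; this is the call that misbehaves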

I have removed almost everything unusual from my box, except maybe Konrad's 2.6.39.2 kernel. This is a quite serious issue; am I the only one able to reproduce it?

For information, we tried the same actions (xl block-attach and xl block-detach of a vhd) on XenServer 6.0 beta (project Boston) and the same
issue happens: the disk cannot be detached, and then it throws ugly messages at the console:

Message from syslogd@ at Mon Jul 11 21:13:43 2011 ...
xenblade13 kernel: ------------[ cut here ]------------

Message from syslogd@ at Mon Jul 11 21:13:43 2011 ...
xenblade13 kernel: invalid opcode: 0000 [#1] SMP

Message from syslogd@ at Mon Jul 11 21:13:43 2011 ...
xenblade13 kernel: last sysfs file: /sys/devices/xen-backend/vbd-0-51712/statistics/rd_usecs

Message from syslogd@ at Mon Jul 11 21:13:43 2011 ...
xenblade13 kernel: Process xenwatch (pid: 48, ti=ee9d0000 task=ee8d5070 task.ti=ee9d0000)

Message from syslogd@ at Mon Jul 11 21:13:43 2011 ...
xenblade13 kernel: Stack:

Message from syslogd@ at Mon Jul 11 21:13:43 2011 ...
xenblade13 kernel: Call Trace:

Message from syslogd@ at Mon Jul 11 21:13:43 2011 ...
xenblade13 kernel: Code: 88 ff ff e9 5c ff ff ff 89 44 24 04 c7 44 24 08 2e a1 44 c0 8b 4d ec 89 0c 24 e8 62 88 ff ff 89 f8 e8 db ee ff ff e9 f7 fe ff ff <0f> 0b eb fe 66 90 ba 98 00 00 00 b8 ba a0 44 c0 e8 61 48 e8 ff

Message from syslogd@ at Mon Jul 11 21:13:43 2011 ...
xenblade13 kernel: EIP: [<c02a985a>] blkback_queue_start+0x2ca/0x2f0 SS:ESP 0069:ee9d1f38


So my conclusion is that something is not right with the xl/xm block attach/detach commands. I guess XenServer does not use
this method to attach/detach disks, as attaching them with xe commands or through XenCenter does not show any problems.
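
For reference, the xe sequence that behaves correctly on XenServer looks roughly like this (parameter
names are from memory and the UUIDs are elided, so double-check with xe help vbd-create before relying on it):

xe vbd-create vm-uuid=<vm-uuid> vdi-uuid=<vdi-uuid> device=3 mode=RW
xe vbd-plug uuid=<vbd-uuid>
xe vbd-unplug uuid=<vbd-uuid>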

Could someone try it on their side and confirm whether this is a real problem? It would be nice to fix it if it is, as shipping Xen with a "russian roulette" xm/xl is maybe not the best :)

I can provide root access to my test machines if you want to try things on them directly :)

Cheers,
Sébastien





