
[Xen-devel] XenCD xm save problem



[Paul == i_p_a_u_l@xxxxxxxxx on Tue, 08 Feb 2005 01:09:57 +0200]

  Paul> I tried with XenCD 1.0 rc1 in both vmware environments, and it
  Paul> did not work, but I suspect there is something else fishy,
  Paul> because "xm save" did not work either.

Thanks for the report.  I can duplicate the problem that xm save does
not work on XenCD.  Currently, I'm treating this as a release blocker
for 1.0final.

I'm unable to determine a fix for the problem and am soliciting help
from anyone who thinks they could contribute.

After a fresh boot of XenCD 1.0rc01, if I do:

  xm save 1 /tmp/ttylinux1.save

I get:

  Error: Error: [Failure instance: Traceback: \
    xen.xend.XendError.XendError, save failed

The /var/log/xend-debug.log is enclosed below.  I see:

  /dev/loop: Is a directory
  ioctl: LOOP_SET_FD: Device or resource busy
  ioctl: LOOP_SET_FD: Device or resource busy

Relevant to that, I'm using udev with a plain Debian sarge install.
/dev/loop does indeed exist as a directory.  My other udev box,
Gentoo, also has a /dev/loop directory, with 8 block devices inside.
XenCD has only 1 block device.  My Debian woody devfs box has no
/dev/loop directory entry.

dom0 is booted with max_loop=64.

Oddly, there seem to be loop leaks in my startup: there are
processes and lsof entries for loop0, loop1, loop2, loop3, loop10, and
loop11, which seems like a few too many.  Only ttylinux-1's and
ttylinux-2's loopback rootfs should be using up loops, I think.
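To narrow down which loops are actually bound (versus just leaked process
entries), a quick sketch like this walks both device naming styles seen
here (/dev/loop/N under udev, /dev/loopN under devfs/static) and asks
losetup about each one; device paths are assumptions based on the boxes
described above:

```shell
# list_bound_loops - print losetup's status line for every loop device
# node that is actually bound to a backing file.  Unbound or missing
# devices are skipped silently.
list_bound_loops() {
    for dev in /dev/loop/[0-9]* /dev/loop[0-9]*; do
        # skip unexpanded globs and non-block-device entries
        [ -b "$dev" ] || continue
        # losetup with no options prints the binding, or fails if unbound
        losetup "$dev" 2>/dev/null || true
    done
    return 0
}

list_bound_loops
```

Comparing that output against the lsof entries should show whether the
extra loops are real bindings or stale references.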

Possibilities:

Do I need to patch sarge's udev to create a /dev/loop device?
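If patching udev turns out to be the answer, a stopgap might be creating
the nodes by hand.  A sketch, assuming loop block devices use major
number 7 with minor = device number (verifiable in /proc/devices), and
the /dev/loop/N layout udev uses on this box; it dry-runs by default and
only calls mknod (which needs root) when DO=1:

```shell
# make_loop_nodes DIR - print (or, with DO=1, actually run) the mknod
# commands that would create loop device nodes 0..7 under DIR.
make_loop_nodes() {
    dir=$1
    for i in 0 1 2 3 4 5 6 7; do
        if [ "${DO:-0}" = "1" ]; then
            # real run: create the node only if it doesn't already exist
            [ -b "$dir/$i" ] || mknod "$dir/$i" b 7 "$i"
        else
            # dry run: just show what would be done
            echo "mknod $dir/$i b 7 $i"
        fi
    done
}

# Dry-run against the udev-style layout:
make_loop_nodes /dev/loop
```

That at least separates "the nodes are missing" from "xend is misusing
the nodes that exist."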

Should I be digging into xend code to see how it uses /dev/loop?  I'm
still wrapping my head around xend and I can't find "loop" in a grep
of the tree.

Is there something about tmpfs that makes xm save work for others but
not XenCD?  XenCD's Xen installation is as absolutely vanilla as I can
make it, other than tmpfs.  And ttylinux is running on a loopback
ext2.

Apologies if I'm missing something obvious.

-- begin /var/log/xend-debug.log --

network start bridge=xen-br0 netdev=eth0 antispoof=no
VIRTUAL MEMORY ARRANGEMENT:
 Loaded kernel: c0100000->c034f6c4
 Init. ramdisk: c0350000->c0350000
 Phys-Mach map: c0350000->c0358000
 Page tables:   c0358000->c035a000
 Start info:    c035a000->c035b000
 Boot stack:    c035b000->c035c000
 TOTAL:         c0000000->c0400000
 ENTRY ADDRESS: c0100000
 VCPUS:         1
/dev/loop: Is a directory
ioctl: LOOP_SET_FD: Device or resource busy
ioctl: LOOP_SET_FD: Device or resource busy
vif-bridge up vif=vif1.0 domain=ttylinux-1 mac=aa:00:00:69:13:b5 bridge=xen-br0
recv_fe_driver_status> {'status': 1}

recv_fe_driver_status>

recv_fe_interface_connect {'tx_shmem_frame': 23907, 'rx_shmem_frame': 23906, 
'handle': 0}
VIRTUAL MEMORY ARRANGEMENT:
 Loaded kernel: c0100000->c034f6c4
 Init. ramdisk: c0350000->c0350000
 Phys-Mach map: c0350000->c0358000
 Page tables:   c0358000->c035a000
 Start info:    c035a000->c035b000
 Boot stack:    c035b000->c035c000
 TOTAL:         c0000000->c0400000
 ENTRY ADDRESS: c0100000
 VCPUS:         1
/dev/loop: Is a directory
ioctl: LOOP_SET_FD: Device or resource busy
ioctl: LOOP_SET_FD: Device or resource busy
ioctl: LOOP_SET_FD: Device or resource busy
vif-bridge up vif=vif2.0 domain=ttylinux-2 mac=aa:00:00:56:44:c6 bridge=xen-br0
recv_fe_driver_status> {'status': 1}

recv_fe_driver_status>

recv_fe_interface_connect {'tx_shmem_frame': 5204, 'rx_shmem_frame': 5203, 
'handle': 0}
sync_session> <type 'str'> 1 ['save', ['id', '1'], ['state', 'begin'], 
['domain', '1'], ['file', '/tmp/ttylinux1.save']]
Started to connect self= <xen.xend.XendMigrate.XfrdClientFactory instance at 
0xb78fbdec> connector= <twisted.internet.tcp.Connector instance at 0xb78fbe0c>
buildProtocol> IPv4Address(TCP, 'localhost', 8002)
***request> (domain (id 1) (name ttylinux-1) (memory 31) (maxmem 32768) (state 
-b---) (cpu 0) (cpu_time 81.206402435) (up_time 59103.1672821) (start_time 
1107920300.57) (console (status listening) (id 12) (domain 1) (local_port 12) 
(remote_port 1) (console_port 9601)) (devices (vif (idx 0) (vif 0) (mac 
aa:00:00:69:13:b5) (evtchn 14 4) (index 0)) (vbd (idx 0) (vdev 2049) (device 
1802) (mode w) (dev sda1) (uname file:/tmp/xen-rootfs.ttylinux-1) (node 
/dev/loop10) (index 0))) (config (vm (name ttylinux-1) (memory 32) (cpu -1) 
(restart always) (image (linux (kernel /media/cdrom/boot/xenU-kernel) (root 
'/dev/sda1 ro') (vcpus 1))) (device (vbd (uname 
file:/tmp/xen-rootfs.ttylinux-1) (dev sda1) (mode w))) (device (vif (mac 
aa:00:00:69:13:b5))) (memmap) (device_model) (device_config))))
***request> begin
xfr_err> ['xfr.err', '0']
xfr_err> <type 'str'> 0
Xfrd>connectionLost> [Failure instance: Traceback: 
twisted.internet.error.ConnectionDone, Connection was closed cleanly.
]
XfrdSaveInfo>connectionLost> [Failure instance: Traceback: 
twisted.internet.error.ConnectionDone, Connection was closed cleanly.
]
XfrdInfo>connectionLost> [Failure instance: Traceback: 
twisted.internet.error.ConnectionDone, Connection was closed cleanly.
]
Error> save failed
Error> calling errback
***cbremove> [Failure instance: Traceback: xen.xend.XendError.XendError, save 
failed
]
***_delete_session> 1
clientConnectionLost> connector= <twisted.internet.tcp.Connector instance at 
0xb78fbe0c> reason= [Failure instance: Traceback: 
twisted.internet.error.ConnectionDone, Connection was closed cleanly.
]

-- end --

-- jared@xxxxxxxxxxx

"A black hole is where God is dividing by zero."
        -- attributed to Roger Smith


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel


 

