*** case block-create from group default
*** Running tests for case block-create
make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-create'
make check-TESTS
make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-create'
cp 01_block_attach_device_pos.py 01_block_attach_device_pos.test
chmod +x 01_block_attach_device_pos.test
cp 02_block_attach_file_device_pos.py 02_block_attach_file_device_pos.test
chmod +x 02_block_attach_file_device_pos.test
cp 04_block_attach_device_repeatedly_pos.py 04_block_attach_device_repeatedly_pos.test
chmod +x 04_block_attach_device_repeatedly_pos.test
cp 05_block_attach_and_dettach_device_repeatedly_pos.py 05_block_attach_and_dettach_device_repeatedly_pos.test
chmod +x 05_block_attach_and_dettach_device_repeatedly_pos.test
cp 06_block_attach_baddomain_neg.py 06_block_attach_baddomain_neg.test
chmod +x 06_block_attach_baddomain_neg.test
cp 07_block_attach_baddevice_neg.py 07_block_attach_baddevice_neg.test
chmod +x 07_block_attach_baddevice_neg.test
cp 08_block_attach_bad_filedevice_neg.py 08_block_attach_bad_filedevice_neg.test
chmod +x 08_block_attach_bad_filedevice_neg.test
cp 09_block_attach_and_dettach_device_check_data_pos.py 09_block_attach_and_dettach_device_check_data_pos.test
chmod +x 09_block_attach_and_dettach_device_check_data_pos.test
cp 11_block_attach_shared_dom0.py 11_block_attach_shared_dom0.test
chmod +x 11_block_attach_shared_dom0.test
cp 12_block_attach_shared_domU.py 12_block_attach_shared_domU.test
chmod +x 12_block_attach_shared_domU.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
01_domu_proc-1154892755 1 64 1 -b---- 0.4
Domain-0 0 5000 32 r----- 2950.1
[dom0] Running `xm destroy 01_domu_proc-1154892755'
*** Finished cleaning domUs
*** Test 01_block_attach_device_pos started at Sun Aug 6 15:32:42 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 2951.3
*** Finished cleaning domUs
*** Test 01_block_attach_device_pos started at Sun Aug 6 15:32:42 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 01_block_attach_device_pos-1154892762
Console executing: ['/usr/sbin/xm', 'xm', 'console', '01_block_attach_device_pos-1154892762']
[01_block_attach_device_pos-1154892762] Sending `input'
[01_block_attach_device_pos-1154892762] Sending `ls'
[01_block_attach_device_pos-1154892762] Sending `echo $?'
[dom0] Running `xm block-attach 01_block_attach_device_pos-1154892762 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 01_block_attach_device_pos-1154892762 | awk '/^2065/ {print $4}''
4
[01_block_attach_device_pos-1154892762] Sending `cat /proc/partitions'
[01_block_attach_device_pos-1154892762] Sending `echo $?'
[dom0] Running `xm shutdown 01_block_attach_device_pos-1154892762'
PASS: 01_block_attach_device_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
01_block_attach_device_pos-1154892762 2 64 1 -b---- 0.3
Domain-0 0 5000 32 r----- 2954.4
[dom0] Running `xm destroy 01_block_attach_device_pos-1154892762'
*** Finished cleaning domUs
*** Test 02_block_attach_file_device_pos started at Sun Aug 6 15:32:51 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 2956.4
*** Finished cleaning domUs
*** Test 02_block_attach_file_device_pos started at Sun Aug 6 15:32:52 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 02_block_attach_file_device_pos-1154892772
Console executing: ['/usr/sbin/xm', 'xm', 'console', '02_block_attach_file_device_pos-1154892772']
[02_block_attach_file_device_pos-1154892772] Sending `input'
[02_block_attach_file_device_pos-1154892772] Sending `ls'
[02_block_attach_file_device_pos-1154892772] Sending `echo $?'
[dom0] Running `xm block-attach 02_block_attach_file_device_pos-1154892772 file:/dev/ram1 sdb2 w'
[dom0] Running `xm block-list 02_block_attach_file_device_pos-1154892772 | awk '/^2066/ {print $4}''
4
[02_block_attach_file_device_pos-1154892772] Sending `cat /proc/partitions'
[02_block_attach_file_device_pos-1154892772] Sending `echo $?'
[dom0] Running `xm shutdown 02_block_attach_file_device_pos-1154892772'
PASS: 02_block_attach_file_device_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
02_block_attach_file_device_pos-1154892772 3 64 1 -b---- 0.3
Domain-0 0 5000 32 r----- 2959.7
[dom0] Running `xm destroy 02_block_attach_file_device_pos-1154892772'
*** Finished cleaning domUs
*** Test 04_block_attach_device_repeatedly_pos started at Sun Aug 6 15:33:01 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 2960.9
*** Finished cleaning domUs
*** Test 04_block_attach_device_repeatedly_pos started at Sun Aug 6 15:33:01 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 04_block_attach_device_repeatedly_pos-1154892781
Console executing: ['/usr/sbin/xm', 'xm', 'console', '04_block_attach_device_repeatedly_pos-1154892781']
[04_block_attach_device_repeatedly_pos-1154892781] Sending `input'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `ls'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm block-attach 04_block_attach_device_repeatedly_pos-1154892781 phy:ram1 sdb1 w'
Error: Device sdb1 (2065, vbd) is already connected.
[04_block_attach_device_repeatedly_pos-1154892781] Sending `cat /proc/partitions'
[04_block_attach_device_repeatedly_pos-1154892781] Sending `echo $?'
[dom0] Running `xm shutdown 04_block_attach_device_repeatedly_pos-1154892781'
PASS: 04_block_attach_device_repeatedly_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
04_block_attach_device_repeatedly_pos-1154892781 4 64 1 -b---- 0.4
Domain-0 0 5000 32 r----- 2966.6
[dom0] Running `xm destroy 04_block_attach_device_repeatedly_pos-1154892781'
*** Finished cleaning domUs
*** Test 05_block_attach_and_dettach_device_repeatedly_pos started at Sun Aug 6 15:33:30 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 2967.8
*** Finished cleaning domUs
*** Test 05_block_attach_and_dettach_device_repeatedly_pos started at Sun Aug 6 15:33:30 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 05_block_attach_and_dettach_device_repeatedly_pos-1154892810
Console executing: ['/usr/sbin/xm', 'xm', 'console', '05_block_attach_and_dettach_device_repeatedly_pos-1154892810']
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `input'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `ls'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-attach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 phy:ram1 sdb1 w'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
4
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm block-detach 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 sdb1'
[dom0] Running `xm block-list 05_block_attach_and_dettach_device_repeatedly_pos-1154892810 | awk '/^2065/ {print $4}''
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `cat /proc/partitions'
[05_block_attach_and_dettach_device_repeatedly_pos-1154892810] Sending `echo $?'
[dom0] Running `xm shutdown 05_block_attach_and_dettach_device_repeatedly_pos-1154892810'
PASS: 05_block_attach_and_dettach_device_repeatedly_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
05_block_attach_and_dettach_device_repeatedly_pos-1154892810 5 64 1 -b---- 0.5
Domain-0 0 5000 32 r----- 2989.5
[dom0] Running `xm destroy 05_block_attach_and_dettach_device_repeatedly_pos-1154892810'
*** Finished cleaning domUs
*** Test 06_block_attach_baddomain_neg started at Sun Aug 6 15:34:31 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 2991.1
*** Finished cleaning domUs
*** Test 06_block_attach_baddomain_neg started at Sun Aug 6 15:34:31 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm block-attach NOT-EXIST phy:ram1 sdb1 w'
Error: the domain 'NOT-EXIST' does not exist.
PASS: 06_block_attach_baddomain_neg.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 2991.8
*** Finished cleaning domUs
*** Test 07_block_attach_baddevice_neg started at Sun Aug 6 15:34:32 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 2992.1
*** Finished cleaning domUs
*** Test 07_block_attach_baddevice_neg started at Sun Aug 6 15:34:32 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 07_block_attach_baddevice_neg-1154892872
Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_block_attach_baddevice_neg-1154892872']
[07_block_attach_baddevice_neg-1154892872] Sending `input'
[07_block_attach_baddevice_neg-1154892872] Sending `ls'
[07_block_attach_baddevice_neg-1154892872] Sending `echo $?'
[dom0] Running `xm block-attach 07_block_attach_baddevice_neg-1154892872 phy:NOT-EXIST sdb1 w'
Error: Device 2065 (vbd) could not be connected. Hotplug scripts not working.
[07_block_attach_baddevice_neg-1154892872] Sending `cat /proc/partitions'
[07_block_attach_baddevice_neg-1154892872] Sending `echo $?'
[dom0] Running `xm shutdown 07_block_attach_baddevice_neg-1154892872'
PASS: 07_block_attach_baddevice_neg.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
07_block_attach_baddevice_neg-1154892872 6 64 1 -b---- 0.3
Domain-0 0 5000 32 r----- 2995.0
[dom0] Running `xm destroy 07_block_attach_baddevice_neg-1154892872'
*** Finished cleaning domUs
*** Test 08_block_attach_bad_filedevice_neg started at Sun Aug 6 15:34:51 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 2996.9
*** Finished cleaning domUs
*** Test 08_block_attach_bad_filedevice_neg started at Sun Aug 6 15:34:51 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 08_block_attach_bad_filedevice_neg-1154892891
Console executing: ['/usr/sbin/xm', 'xm', 'console', '08_block_attach_bad_filedevice_neg-1154892891']
[08_block_attach_bad_filedevice_neg-1154892891] Sending `input'
[08_block_attach_bad_filedevice_neg-1154892891] Sending `ls'
[08_block_attach_bad_filedevice_neg-1154892891] Sending `echo $?'
[dom0] Running `xm block-attach 08_block_attach_bad_filedevice_neg-1154892891 file:/dev/NOT-EXIST sdb1 w'
Error: Device 2065 (vbd) could not be connected. File /dev/NOT-EXIST is read-only, and so I will not mount it read-write in a guest domain.
[08_block_attach_bad_filedevice_neg-1154892891] Sending `cat /proc/partitions'
[08_block_attach_bad_filedevice_neg-1154892891] Sending `echo $?'
[dom0] Running `xm shutdown 08_block_attach_bad_filedevice_neg-1154892891'
PASS: 08_block_attach_bad_filedevice_neg.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
08_block_attach_bad_filedevice_neg-1154892891 7 64 1 -b---- 0.3
Domain-0 0 5000 32 r----- 2999.7
[dom0] Running `xm destroy 08_block_attach_bad_filedevice_neg-1154892891'
*** Finished cleaning domUs
*** Test 09_block_attach_and_dettach_device_check_data_pos started at Sun Aug 6 15:35:00 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3001.3
*** Finished cleaning domUs
*** Test 09_block_attach_and_dettach_device_check_data_pos started at Sun Aug 6 15:35:00 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 09_block_attach_and_dettach_device_check_data_pos-1154892900
Console executing: ['/usr/sbin/xm', 'xm', 'console', '09_block_attach_and_dettach_device_check_data_pos-1154892900']
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `input'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `ls'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `mke2fs -q -F /dev/ram1'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "0" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
0
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 0'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "1" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
1
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "2" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
2
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 2'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "3" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
3
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 3'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "4" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 4'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "5" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
5
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 5'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "6" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
6
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 6'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "7" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
7
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 7'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "8" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
8
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-attach 09_block_attach_and_dettach_device_check_data_pos-1154892900 phy:ram1 hda1 w'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
4
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `mkdir -p /mnt/hda1; mount /dev/hda1 /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile | grep 8'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo "9" > /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /mnt/hda1/myfile'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
9
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `umount /mnt/hda1'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm block-detach 09_block_attach_and_dettach_device_check_data_pos-1154892900 hda1'
[dom0] Running `xm block-list 09_block_attach_and_dettach_device_check_data_pos-1154892900 | awk '/^769/ {print $4}''
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `cat /proc/partitions'
[09_block_attach_and_dettach_device_check_data_pos-1154892900] Sending `echo $?'
[dom0] Running `xm shutdown 09_block_attach_and_dettach_device_check_data_pos-1154892900'
PASS: 09_block_attach_and_dettach_device_check_data_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
09_block_attach_and_dettach_device_check_data_pos-1154892900 8 64 1 -b---- 0.7
Domain-0 0 5000 32 r----- 3025.8
[dom0] Running `xm destroy 09_block_attach_and_dettach_device_check_data_pos-1154892900'
*** Finished cleaning domUs
*** Test 11_block_attach_shared_dom0 started at Sun Aug 6 15:37:39 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3027.4
*** Finished cleaning domUs
*** Test 11_block_attach_shared_dom0 started at Sun Aug 6 15:37:39 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `mkfs /dev/ram0'
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
4096 inodes, 16384 blocks
819 blocks (5.00%) reserved for the super user
First data block=1
2 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks: 8193
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[dom0] Running `mkdir -p mnt'
[dom0] Running `mount /dev/ram0 mnt -o rw'
[dom0] Running `xm create /tmp/xm-test.conf'
Error: Device 769 (vbd) could not be connected. Device /dev/ram0 is mounted in the privileged domain, and so cannot be mounted by a guest.
Using config file "/tmp/xm-test.conf".
[dom0] Running `umount mnt'
[dom0] Running `xm destroy 11_block_attach_shared_dom0-1154893059'
Error: an integer is required
PASS: 11_block_attach_shared_dom0.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3030.4
*** Finished cleaning domUs
*** Test 12_block_attach_shared_domU started at Sun Aug 6 15:37:41 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3030.7
*** Finished cleaning domUs
*** Test 12_block_attach_shared_domU started at Sun Aug 6 15:37:42 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 12_block_attach_shared_domU-1154893062
Console executing: ['/usr/sbin/xm', 'xm', 'console', '12_block_attach_shared_domU-1154893062']
[12_block_attach_shared_domU-1154893062] Sending `input'
[dom0] Running `xm create /tmp/xm-test.conf'
Error: Device 769 (vbd) could not be connected. Device /dev/ram0 is mounted in a guest domain, and so cannot be mounted now.
Using config file "/tmp/xm-test.conf".
[dom0] Running `xm destroy 12_block_attach_shared_domU-1154893062'
[dom0] Running `xm destroy 12_block_attach_shared_domU-1154893062-2'
Error: an integer is required
PASS: 12_block_attach_shared_domU.test
===================
All 10 tests passed
===================
make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-create'
make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-create'
*** case block-destroy from group default
*** Running tests for case block-destroy
make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-destroy'
make check-TESTS
make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-destroy'
cp 01_block-destroy_btblock_pos.py 01_block-destroy_btblock_pos.test
chmod +x 01_block-destroy_btblock_pos.test
cp 02_block-destroy_rtblock_pos.py 02_block-destroy_rtblock_pos.test
chmod +x 02_block-destroy_rtblock_pos.test
cp 03_block-destroy_nonexist_neg.py 03_block-destroy_nonexist_neg.test
chmod +x 03_block-destroy_nonexist_neg.test
cp 04_block-destroy_nonattached_neg.py 04_block-destroy_nonattached_neg.test
chmod +x 04_block-destroy_nonattached_neg.test
cp 05_block-destroy_byname_pos.py 05_block-destroy_byname_pos.test
chmod +x 05_block-destroy_byname_pos.test
cp 06_block-destroy_check_list_pos.py 06_block-destroy_check_list_pos.test
chmod +x 06_block-destroy_check_list_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3036.2
*** Finished cleaning domUs
*** Test 01_block-destroy_btblock_pos started at Sun Aug 6 15:37:47 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3036.6
*** Finished cleaning domUs
*** Test 01_block-destroy_btblock_pos started at Sun Aug 6 15:37:48 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 01_block-destroy_btblock_pos-1154893068
Console executing: ['/usr/sbin/xm', 'xm', 'console', '01_block-destroy_btblock_pos-1154893068']
[01_block-destroy_btblock_pos-1154893068] Sending `input'
[01_block-destroy_btblock_pos-1154893068] Sending `cat /proc/partitions | grep hda1'
[01_block-destroy_btblock_pos-1154893068] Sending `echo $?'
[01_block-destroy_btblock_pos-1154893068] Sending `cat /proc/partitions'
[01_block-destroy_btblock_pos-1154893068] Sending `echo $?'
[dom0] Running `xm block-detach 01_block-destroy_btblock_pos-1154893068 hda1'
[dom0] Running `xm block-list 01_block-destroy_btblock_pos-1154893068 | awk '/^769/ {print $4}''
[01_block-destroy_btblock_pos-1154893068] Sending `cat /proc/partitions | grep hda1'
[01_block-destroy_btblock_pos-1154893068] Sending `echo $?'
[dom0] Running `xm shutdown 01_block-destroy_btblock_pos-1154893068'
PASS: 01_block-destroy_btblock_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
01_block-destroy_btblock_pos-1154893068 12 64 1 -b---- 0.4
Domain-0 0 5000 32 r----- 3040.1
[dom0] Running `xm destroy 01_block-destroy_btblock_pos-1154893068'
*** Finished cleaning domUs
*** Test 02_block-destroy_rtblock_pos started at Sun Aug 6 15:37:59 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3040.9
*** Finished cleaning domUs
*** Test 02_block-destroy_rtblock_pos started at Sun Aug 6 15:37:59 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 02_block-destroy_rtblock_pos-1154893079
Console executing: ['/usr/sbin/xm', 'xm', 'console', '02_block-destroy_rtblock_pos-1154893079']
[02_block-destroy_rtblock_pos-1154893079] Sending `input'
[dom0] Running `xm block-attach 02_block-destroy_rtblock_pos-1154893079 phy:/dev/ram0 hda1 w'
[dom0] Running `xm block-list 02_block-destroy_rtblock_pos-1154893079 | awk '/^769/ {print $4}''
4
[02_block-destroy_rtblock_pos-1154893079] Sending `cat /proc/partitions | grep hda1'
[02_block-destroy_rtblock_pos-1154893079] Sending `echo $?'
[dom0] Running `xm block-detach 02_block-destroy_rtblock_pos-1154893079 hda1'
[dom0] Running `xm block-list 02_block-destroy_rtblock_pos-1154893079 | awk '/^769/ {print $4}''
[02_block-destroy_rtblock_pos-1154893079] Sending `cat /proc/partitions | grep hda1'
[02_block-destroy_rtblock_pos-1154893079] Sending `echo $?'
[dom0] Running `xm shutdown 02_block-destroy_rtblock_pos-1154893079'
PASS: 02_block-destroy_rtblock_pos.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
02_block-destroy_rtblock_pos-1154893079 13 64 1 -b---- 0.3
Domain-0 0 5000 32 r----- 3044.8
[dom0] Running `xm destroy 02_block-destroy_rtblock_pos-1154893079'
*** Finished cleaning domUs
*** Test 03_block-destroy_nonexist_neg started at Sun Aug 6 15:38:09 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3045.7
*** Finished cleaning domUs
*** Test 03_block-destroy_nonexist_neg started at Sun Aug 6 15:38:10 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm block-detach 9999 769'
Error: the domain '9999' does not exist.
PASS: 03_block-destroy_nonexist_neg.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3046.4
*** Finished cleaning domUs
*** Test 04_block-destroy_nonattached_neg started at Sun Aug 6 15:38:10 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3046.8
*** Finished cleaning domUs
*** Test 04_block-destroy_nonattached_neg started at Sun Aug 6 15:38:11 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 04_block-destroy_nonattached_neg-1154893091
[dom0] Running `xm domid 04_block-destroy_nonattached_neg-1154893091'
14
[dom0] Running `xm block-detach 14 sda1'
Error: Device sda1 not connected
PASS: 04_block-destroy_nonattached_neg.test
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
04_block-destroy_nonattached_neg-1154893091 14 64 1 -b---- 0.3
Domain-0 0 5000 32 r----- 3048.3
[dom0] Running `xm destroy 04_block-destroy_nonattached_neg-1154893091'
*** Finished cleaning domUs
*** Test 05_block-destroy_byname_pos started at Sun Aug 6 15:38:12 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
*** Cleaning all running domU's
[dom0] Running `xm list'
Name ID Mem(MiB) VCPUs State Time(s)
Domain-0 0 5000 32 r----- 3049.1
*** Finished cleaning domUs
*** Test 05_block-destroy_byname_pos started at Sun Aug 6 15:38:13 2006 EDT
[dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif'
[dom0] Running `ip addr show dev vif0.0'
2: vif0.0: mtu 1500 qdisc noqueue
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever
[dom0] Running `xm create /tmp/xm-test.conf'
Using config file "/tmp/xm-test.conf".
Started domain 05_block-destroy_byname_pos-1154893093
Console executing: ['/usr/sbin/xm', 'xm', 'console', '05_block-destroy_byname_pos-1154893093']
[05_block-destroy_byname_pos-1154893093] Sending `input'
[05_block-destroy_byname_pos-1154893093] Sending `cat /proc/partitions | grep hda1'
[05_block-destroy_byname_pos-1154893093] Sending `echo $?'
[05_block-destroy_byname_pos-1154893093] Sending `cat /proc/partitions'
[05_block-destroy_byname_pos-1154893093] Sending `echo $?'
[dom0] Running `xm block-detach 05_block-destroy_byname_pos-1154893093 hda1'
[dom0] Running `xm block-list 05_block-destroy_byname_pos-1154893093 | awk '/^769/ {print $4}''
[05_block-destroy_byname_pos-1154893093] Sending `cat /proc/partitions | grep hda1'
[05_block-destroy_byname_pos-1154893093] Sending `echo $?'
[dom0] Running `xm shutdown 05_block-destroy_byname_pos-1154893093' PASS: 05_block-destroy_byname_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 05_block-destroy_byname_pos-1154893093 15 64 1 -b---- 0.4 Domain-0 0 5000 32 r----- 3052.5 [dom0] Running `xm destroy 05_block-destroy_byname_pos-1154893093' *** Finished cleaning domUs *** Test 06_block-destroy_check_list_pos started at Sun Aug 6 15:38:24 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3054.0 *** Finished cleaning domUs *** Test 06_block-destroy_check_list_pos started at Sun Aug 6 15:38:24 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 06_block-destroy_check_list_pos-1154893104 Console executing: ['/usr/sbin/xm', 'xm', 'console', '06_block-destroy_check_list_pos-1154893104'] [06_block-destroy_check_list_pos-1154893104] Sending `input' [dom0] Running `xm block-attach 06_block-destroy_check_list_pos-1154893104 phy:/dev/ram0 hda1 w' [dom0] Running `xm block-list 06_block-destroy_check_list_pos-1154893104 | awk '/^769/ {print $4}'' 4 [dom0] Running `xm list --long 06_block-destroy_check_list_pos-1154893104' (domain (domid 16) (uuid 56ca38e3-96ff-21cd-0327-d1e7defd22d3) (vcpus 1) (cpu_weight 1.0) (memory 64) (maxmem 64) (features ) (name 06_block-destroy_check_list_pos-1154893104) (on_poweroff destroy) (on_reboot restart) (on_crash restart) (image (linux (kernel /boot/vmlinuz-2.6.16.13-xen) (ramdisk /home/unisys/xen-unstable.hg/tools/xm-test/ramdisk/initrd.img ) (root /dev/ram0) ) ) (device (vbd (backend 0) (dev hda1) (uname phy:/dev/ram0) (mode w))) (state -b----) (shutdown_reason poweroff) (cpu_time 0.30665032) (online_vcpus 1) (up_time 3.50040507317) (start_time 1154893105.21) (store_mfn 32782250) (console_mfn 32678893) ) [dom0] Running `xm block-detach 06_block-destroy_check_list_pos-1154893104 hda1' [dom0] Running `xm block-list 06_block-destroy_check_list_pos-1154893104 | awk '/^769/ {print $4}'' [dom0] Running `xm list --long 06_block-destroy_check_list_pos-1154893104' (domain (domid 16) (uuid 56ca38e3-96ff-21cd-0327-d1e7defd22d3) (vcpus 1) (cpu_weight 1.0) (memory 64) (maxmem 64) (features ) (name 06_block-destroy_check_list_pos-1154893104) (on_poweroff destroy) (on_reboot restart) (on_crash restart) (image (linux (kernel /boot/vmlinuz-2.6.16.13-xen) (ramdisk /home/unisys/xen-unstable.hg/tools/xm-test/ramdisk/initrd.img ) (root /dev/ram0) ) ) (state -b----) (shutdown_reason poweroff) (cpu_time 0.307626236) (online_vcpus 1) (up_time 4.59396004677) (start_time 1154893105.21) (store_mfn 32782250) (console_mfn 32678893) ) PASS: 06_block-destroy_check_list_pos.test ================== All 6 tests passed ================== make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-destroy' make[1]: Leaving directory 
`/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-destroy' *** case block-list from group default *** Running tests for case block-list make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-list' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-list' cp 01_block-list_pos.py 01_block-list_pos.test chmod +x 01_block-list_pos.test cp 02_block-list_attachbd_pos.py 02_block-list_attachbd_pos.test chmod +x 02_block-list_attachbd_pos.test cp 03_block-list_anotherbd_pos.py 03_block-list_anotherbd_pos.test chmod +x 03_block-list_anotherbd_pos.test cp 04_block-list_nodb_pos.py 04_block-list_nodb_pos.test chmod +x 04_block-list_nodb_pos.test cp 05_block-list_nonexist_neg.py 05_block-list_nonexist_neg.test chmod +x 05_block-list_nonexist_neg.test cp 06_block-list_checkremove_pos.py 06_block-list_checkremove_pos.test chmod +x 06_block-list_checkremove_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 06_block-destroy_check_list_pos-1154893104 16 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3058.1 [dom0] Running `xm destroy 06_block-destroy_check_list_pos-1154893104' *** Finished cleaning domUs *** Test 01_block-list_pos started at Sun Aug 6 15:38:30 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3059.0 *** Finished cleaning domUs *** Test 01_block-list_pos started at Sun Aug 6 15:38:31 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 01_block-list_pos-1154893111 Console executing: ['/usr/sbin/xm', 'xm', 'console', '01_block-list_pos-1154893111'] [01_block-list_pos-1154893111] Sending `input' [dom0] Running `xm domid 01_block-list_pos-1154893111' 17 [dom0] Running `xm block-list 17' Vdev BE handle state evt-ch ring-ref BE-path 769 0 0 4 6 8 /local/domain/0/backend/vbd/17/769 [01_block-list_pos-1154893111] Sending `cat /proc/partitions | grep hda1' [01_block-list_pos-1154893111] Sending `echo $?' 
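The numbers in those awk patterns (769 and 770 here, 2065 and 2066 in the earlier sdb tests) are ordinary Linux device numbers, major * 256 + minor, for the virtual device names passed to xm block-attach. A short worked example; the helper function is illustrative only.

    # hda1 -> (3, 1)  -> 769      hda2 -> (3, 2)  -> 770
    # sdb1 -> (8, 17) -> 2065     sdb2 -> (8, 18) -> 2066
    MAJORS = {"hda": 3, "sdb": 8}          # classic IDE / first SCSI disk majors

    def vdev_number(name):
        """Device number for names like 'hda1' or 'sdb2' (illustrative helper)."""
        disk, part = name[:3], int(name[3:])
        minor = part if disk == "hda" else 16 + part   # sdb partitions start at minor 16
        return MAJORS[disk] * 256 + minor

    assert vdev_number("hda1") == 769 and vdev_number("sdb1") == 2065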
[dom0] Running `xm shutdown 01_block-list_pos-1154893111' PASS: 01_block-list_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 01_block-list_pos-1154893111 17 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3062.0 [dom0] Running `xm destroy 01_block-list_pos-1154893111' *** Finished cleaning domUs *** Test 02_block-list_attachbd_pos started at Sun Aug 6 15:38:38 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3063.3 *** Finished cleaning domUs *** Test 02_block-list_attachbd_pos started at Sun Aug 6 15:38:38 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 02_block-list_attachbd_pos-1154893118 Console executing: ['/usr/sbin/xm', 'xm', 'console', '02_block-list_attachbd_pos-1154893118'] [02_block-list_attachbd_pos-1154893118] Sending `input' [dom0] Running `xm block-attach 02_block-list_attachbd_pos-1154893118 phy:/dev/ram0 hda1 w' [dom0] Running `xm block-list 02_block-list_attachbd_pos-1154893118 | awk '/^769/ {print $4}'' 4 [dom0] Running `xm domid 02_block-list_attachbd_pos-1154893118' 18 [dom0] Running `xm block-list 18' Vdev BE handle state evt-ch ring-ref BE-path 769 0 0 4 6 8 /local/domain/0/backend/vbd/18/769 [02_block-list_attachbd_pos-1154893118] Sending `cat /proc/partitions | grep hda1' [02_block-list_attachbd_pos-1154893118] Sending `echo $?' [dom0] Running `xm shutdown 02_block-list_attachbd_pos-1154893118' PASS: 02_block-list_attachbd_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 02_block-list_attachbd_pos-1154893118 18 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3066.6 [dom0] Running `xm destroy 02_block-list_attachbd_pos-1154893118' *** Finished cleaning domUs *** Test 03_block-list_anotherbd_pos started at Sun Aug 6 15:38:46 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3068.1 *** Finished cleaning domUs *** Test 03_block-list_anotherbd_pos started at Sun Aug 6 15:38:46 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain 03_block-list_anotherbd_pos-1154893126 Console executing: ['/usr/sbin/xm', 'xm', 'console', '03_block-list_anotherbd_pos-1154893126'] [03_block-list_anotherbd_pos-1154893126] Sending `input' [dom0] Running `xm domid 03_block-list_anotherbd_pos-1154893126' 19 [dom0] Running `xm block-list 19' Vdev BE handle state evt-ch ring-ref BE-path 769 0 0 4 6 8 /local/domain/0/backend/vbd/19/769 [dom0] Running `xm domid 03_block-list_anotherbd_pos-1154893126' 19 [dom0] Running `xm block-attach 19 phy:/dev/ram1 hda2 w' [dom0] Running `xm domid 03_block-list_anotherbd_pos-1154893126' 19 [dom0] Running `xm block-list 19' Vdev BE handle state evt-ch ring-ref BE-path 769 0 0 4 6 8 /local/domain/0/backend/vbd/19/769 770 0 0 4 7 9 /local/domain/0/backend/vbd/19/770 [03_block-list_anotherbd_pos-1154893126] Sending `cat /proc/partitions | grep hda1;cat /proc/partitions | grep hda2' [03_block-list_anotherbd_pos-1154893126] Sending `echo $?' [dom0] Running `xm shutdown 03_block-list_anotherbd_pos-1154893126' PASS: 03_block-list_anotherbd_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 03_block-list_anotherbd_pos-1154893126 19 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3072.6 [dom0] Running `xm destroy 03_block-list_anotherbd_pos-1154893126' *** Finished cleaning domUs *** Test 04_block-list_nodb_pos started at Sun Aug 6 15:38:54 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3074.5 *** Finished cleaning domUs *** Test 04_block-list_nodb_pos started at Sun Aug 6 15:38:55 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain 04_block-list_nodb_pos-1154893135 [dom0] Running `xm domid 04_block-list_nodb_pos-1154893135' 20 [dom0] Running `xm block-list 20' PASS: 04_block-list_nodb_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 04_block-list_nodb_pos-1154893135 20 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3076.1 [dom0] Running `xm destroy 04_block-list_nodb_pos-1154893135' *** Finished cleaning domUs *** Test 05_block-list_nonexist_neg started at Sun Aug 6 15:38:57 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3077.3 *** Finished cleaning domUs *** Test 05_block-list_nonexist_neg started at Sun Aug 6 15:38:57 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm block-list 9999' Error: the domain '9999' does not exist. PASS: 05_block-list_nonexist_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3078.0 *** Finished cleaning domUs *** Test 06_block-list_checkremove_pos started at Sun Aug 6 15:38:58 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3078.3 *** Finished cleaning domUs *** Test 06_block-list_checkremove_pos started at Sun Aug 6 15:38:58 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain 06_block-list_checkremove_pos-1154893138 [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138' [dom0] Running `xm block-attach 06_block-list_checkremove_pos-1154893138 phy:/dev/ram0 hda1 w' [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138 | awk '/^769/ {print $4}'' 4 [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138' Vdev BE handle state evt-ch ring-ref BE-path 769 0 0 4 6 8 /local/domain/0/backend/vbd/21/769 [dom0] Running `xm block-attach 06_block-list_checkremove_pos-1154893138 phy:/dev/ram1 hda2 w' [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138 | awk '/^770/ {print $4}'' 4 [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138' Vdev BE handle state evt-ch ring-ref BE-path 769 0 0 4 6 8 /local/domain/0/backend/vbd/21/769 770 0 0 4 7 9 /local/domain/0/backend/vbd/21/770 [dom0] Running `xm block-detach 06_block-list_checkremove_pos-1154893138 hda1' [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138 | awk '/^769/ {print $4}'' [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138' Vdev BE handle state evt-ch ring-ref BE-path 770 0 0 4 7 9 /local/domain/0/backend/vbd/21/770 [dom0] Running `xm block-detach 06_block-list_checkremove_pos-1154893138 hda2' [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138 | awk '/^770/ {print $4}'' [dom0] Running `xm block-list 06_block-list_checkremove_pos-1154893138' [dom0] Running `xm shutdown 06_block-list_checkremove_pos-1154893138' PASS: 06_block-list_checkremove_pos.test ================== All 6 tests passed ================== make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-list' make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-list' *** case block-integrity from group default *** Running tests for case block-integrity make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-integrity' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-integrity' cp 01_block_device_read_verify.py 01_block_device_read_verify.test chmod +x 01_block_device_read_verify.test cp 02_block_device_write_verify.py 02_block_device_write_verify.test chmod +x 02_block_device_write_verify.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 06_block-list_checkremove_pos-1154893138 21 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3084.7 [dom0] Running `xm destroy 06_block-list_checkremove_pos-1154893138' *** Finished cleaning domUs *** Test 01_block_device_read_verify started at Sun Aug 6 15:39:04 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3085.6 *** Finished cleaning domUs *** Test 01_block_device_read_verify started at Sun Aug 6 15:39:04 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config 
file "/tmp/xm-test.conf". Started domain 01_block_device_read_verify-1154893144 Console executing: ['/usr/sbin/xm', 'xm', 'console', '01_block_device_read_verify-1154893144'] [01_block_device_read_verify-1154893144] Sending `input' [dom0] Running `cat /dev/urandom > /dev/ram1' cat: write error: No space left on device [dom0] Running `md5sum /dev/ram1' bfc0b1b9a3528bd841e293afe890229b /dev/ram1 [dom0] Running `xm block-attach 01_block_device_read_verify-1154893144 phy:ram1 hda1 w' [dom0] Running `xm block-list 01_block_device_read_verify-1154893144 | awk '/^769/ {print $4}'' 4 [01_block_device_read_verify-1154893144] Sending `md5sum /dev/hda1' [01_block_device_read_verify-1154893144] Sending `echo $?' [dom0] Running `xm shutdown 01_block_device_read_verify-1154893144' md5sum dom0: bfc0b1b9a3528bd841e293afe890229b md5sum domU: bfc0b1b9a3528bd841e293afe890229b PASS: 01_block_device_read_verify.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 01_block_device_read_verify-1154893144 22 64 1 -b---- 0.6 Domain-0 0 5000 32 r----- 3092.4 [dom0] Running `xm destroy 01_block_device_read_verify-1154893144' *** Finished cleaning domUs *** Test 02_block_device_write_verify started at Sun Aug 6 15:39:16 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3093.9 *** Finished cleaning domUs *** Test 02_block_device_write_verify started at Sun Aug 6 15:39:16 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 02_block_device_write_verify-1154893156 Console executing: ['/usr/sbin/xm', 'xm', 'console', '02_block_device_write_verify-1154893156'] [02_block_device_write_verify-1154893156] Sending `input' [dom0] Running `xm block-attach 02_block_device_write_verify-1154893156 phy:ram1 hda1 w' [dom0] Running `xm block-list 02_block_device_write_verify-1154893156 | awk '/^769/ {print $4}'' 4 [02_block_device_write_verify-1154893156] Sending `dd if=/dev/urandom bs=512 count=`cat /sys/block/hda1/size` | tee /dev/hda1 | md5sum' [02_block_device_write_verify-1154893156] Sending `echo $?' 
[dom0] Running `xm shutdown 02_block_device_write_verify-1154893156' [dom0] Running `md5sum /dev/ram1' 0c4324aeb57c9f651d0e87f39382e5ac /dev/ram1 md5sum domU: 0c4324aeb57c9f651d0e87f39382e5ac md5sum dom0: 0c4324aeb57c9f651d0e87f39382e5ac PASS: 02_block_device_write_verify.test ================== All 2 tests passed ================== make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-integrity' make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/block-integrity' *** case console from group default *** Running tests for case console make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/console' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/console' cp 01_console_badopt_neg.py 01_console_badopt_neg.test chmod +x 01_console_badopt_neg.test cp 02_console_baddom_neg.py 02_console_baddom_neg.test chmod +x 02_console_baddom_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 02_block_device_write_verify-1154893156 23 64 1 -b---- 4.9 Domain-0 0 5000 32 r----- 3097.5 [dom0] Running `xm destroy 02_block_device_write_verify-1154893156' *** Finished cleaning domUs *** Test 01_console_badopt_neg started at Sun Aug 6 15:39:28 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3099.2 *** Finished cleaning domUs *** Test 01_console_badopt_neg started at Sun Aug 6 15:39:28 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm console -x' Error: the domain '-x' does not exist. PASS: 01_console_badopt_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3099.9 *** Finished cleaning domUs *** Test 02_console_baddom_neg started at Sun Aug 6 15:39:29 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3100.3 *** Finished cleaning domUs *** Test 02_console_baddom_neg started at Sun Aug 6 15:39:29 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm console 5000' Error: the domain '5000' does not exist. [dom0] Running `xm console NON_EXIST' Error: the domain 'NON_EXIST' does not exist. 
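Both console cases follow the usual negative-test shape: run xm console against a target that cannot exist and check for the "does not exist" error seen above. A sketch of that shape, under the assumption that only the error text (not the exit code) is what matters here; the helper name is illustrative.

    import subprocess

    def expect_console_error(target):
        """Run 'xm console <target>' and require the 'does not exist' error."""
        proc = subprocess.run(["xm", "console", str(target)],
                              capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        assert "does not exist" in output, output
        return output

    # expect_console_error(5000); expect_console_error("NON_EXIST")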
PASS: 02_console_baddom_neg.test ================== All 2 tests passed ================== make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/console' make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/console' *** case create from group default *** Running tests for case create make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/create' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/create' cp 01_create_basic_pos.py 01_create_basic_pos.test chmod +x 01_create_basic_pos.test cp 02_create_noparm_neg.py 02_create_noparm_neg.test chmod +x 02_create_noparm_neg.test cp 03_create_badparm_neg.py 03_create_badparm_neg.test chmod +x 03_create_badparm_neg.test cp 04_create_conflictname_neg.py 04_create_conflictname_neg.test chmod +x 04_create_conflictname_neg.test cp 06_create_mem_neg.py 06_create_mem_neg.test chmod +x 06_create_mem_neg.test cp 07_create_mem64_pos.py 07_create_mem64_pos.test chmod +x 07_create_mem64_pos.test cp 08_create_mem128_pos.py 08_create_mem128_pos.test chmod +x 08_create_mem128_pos.test cp 09_create_mem256_pos.py 09_create_mem256_pos.test chmod +x 09_create_mem256_pos.test cp 10_create_fastdestroy.py 10_create_fastdestroy.test chmod +x 10_create_fastdestroy.test cp 11_create_concurrent_pos.py 11_create_concurrent_pos.test chmod +x 11_create_concurrent_pos.test cp 12_create_concurrent_stress_pos.py 12_create_concurrent_stress_pos.test chmod +x 12_create_concurrent_stress_pos.test cp 13_create_multinic_pos.py 13_create_multinic_pos.test chmod +x 13_create_multinic_pos.test cp 14_create_blockroot_pos.py 14_create_blockroot_pos.test chmod +x 14_create_blockroot_pos.test cp 15_create_smallmem_pos.py 15_create_smallmem_pos.test chmod +x 15_create_smallmem_pos.test cp 16_create_smallmem_neg.py 16_create_smallmem_neg.test chmod +x 16_create_smallmem_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3101.4 *** Finished cleaning domUs *** Test 01_create_basic_pos started at Sun Aug 6 15:39:31 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3101.8 *** Finished cleaning domUs *** Test 01_create_basic_pos started at Sun Aug 6 15:39:31 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc 
version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 01_create_basic_pos-1154893171 Console executing: ['/usr/sbin/xm', 'xm', 'console', '01_create_basic_pos-1154893171'] [01_create_basic_pos-1154893171] Sending `input' [01_create_basic_pos-1154893171] Sending `ls' [01_create_basic_pos-1154893171] Sending `echo $?' [dom0] Running `xm shutdown 01_create_basic_pos-1154893171' PASS: 01_create_basic_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 01_create_basic_pos-1154893171 24 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3103.8 [dom0] Running `xm destroy 01_create_basic_pos-1154893171' *** Finished cleaning domUs *** Test 02_create_noparm_neg started at Sun Aug 6 15:39:37 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3104.7 *** Finished cleaning domUs *** Test 02_create_noparm_neg started at Sun Aug 6 15:39:38 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create' Error: Cannot open config file "xmdefconfig" PASS: 02_create_noparm_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3105.3 *** Finished cleaning domUs *** Test 03_create_badparm_neg started at Sun Aug 6 15:39:38 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3105.6 *** Finished cleaning domUs *** Test 03_create_badparm_neg started at Sun Aug 6 15:39:38 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create -x' Error: option -x not recognized PASS: 03_create_badparm_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3106.3 *** Finished cleaning domUs *** Test 04_create_conflictname_neg started at Sun Aug 6 15:39:39 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3106.7 *** Finished cleaning domUs *** 
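Several of the create tests consult xm info first; its output is the flat "key : value" listing shown above. A minimal parser sketch (helper name is illustrative) also shows where the refusal in 06_create_mem_neg below comes from: the requested allocation exceeds what the machine can free even after shrinking dom0 to dom0_min_mem.

    import subprocess

    def xm_info():
        """Parse 'xm info' output into a dict of stripped strings (sketch only)."""
        info = {}
        out = subprocess.run(["xm", "info"], capture_output=True, text=True).stdout
        for line in out.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                info[key.strip()] = value.strip()
        return info

    # e.g. int(xm_info()["free_memory"]) -> 124387 (MiB) on the machine above;
    # the ~128 GiB request made by 06_create_mem_neg cannot be satisfied even
    # after dom0 is shrunk to dom0_min_mem, so xm create fails as intended.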
Test 04_create_conflictname_neg started at Sun Aug 6 15:39:39 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain default [dom0] Running `xm create /tmp/xm-test.conf' Error: VM name 'default' already in use by domain 25 Using config file "/tmp/xm-test.conf". [dom0] Running `xm shutdown default' PASS: 04_create_conflictname_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3108.3 default 25 64 1 -b---- 0.3 [dom0] Running `xm destroy default' *** Finished cleaning domUs *** Test 06_create_mem_neg started at Sun Aug 6 15:39:42 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3109.5 *** Finished cleaning domUs *** Test 06_create_mem_neg started at Sun Aug 6 15:39:42 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Error: (9, 'Bad file descriptor') Using config file "/tmp/xm-test.conf". [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 [dom0] Running `xm create /tmp/xm-test.conf' Error: I need 134192128 KiB, but dom0_min_mem is 200704 and shrinking to 200704 KiB would leave only 132292260 KiB free. Using config file "/tmp/xm-test.conf". 
PASS: 06_create_mem_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3113.7 *** Finished cleaning domUs *** Test 07_create_mem64_pos started at Sun Aug 6 15:41:46 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3114.1 *** Finished cleaning domUs *** Test 07_create_mem64_pos started at Sun Aug 6 15:41:46 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain 07_create_mem64_pos-1154893307 [dom0] Running `xm domid 07_create_mem64_pos-1154893307' 28 [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 07_create_mem64_pos-1154893307 28 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3115.6 [dom0] Running `xm shutdown 07_create_mem64_pos-1154893307' PASS: 07_create_mem64_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 07_create_mem64_pos-1154893307 28 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3116.2 [dom0] Running `xm destroy 07_create_mem64_pos-1154893307' *** Finished cleaning domUs *** Test 08_create_mem128_pos started at Sun Aug 6 15:41:49 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3117.9 *** Finished cleaning domUs *** Test 08_create_mem128_pos started at Sun Aug 6 15:41:49 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain 08_create_mem128_pos-1154893309 [dom0] Running `xm domid 08_create_mem128_pos-1154893309' 29 [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 08_create_mem128_pos-1154893309 29 128 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3119.5 [dom0] Running `xm shutdown 08_create_mem128_pos-1154893309' PASS: 08_create_mem128_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 08_create_mem128_pos-1154893309 29 128 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3120.1 [dom0] Running `xm destroy 08_create_mem128_pos-1154893309' *** Finished cleaning domUs *** Test 09_create_mem256_pos started at Sun Aug 6 15:41:52 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3122.3 *** Finished cleaning domUs *** Test 09_create_mem256_pos started at Sun Aug 6 15:41:52 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain 09_create_mem256_pos-1154893313 [dom0] Running `xm domid 09_create_mem256_pos-1154893313' 30 [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 09_create_mem256_pos-1154893313 30 256 1 r----- 0.4 Domain-0 0 5000 32 r----- 3124.0 [dom0] Running `xm shutdown 09_create_mem256_pos-1154893313' PASS: 09_create_mem256_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 09_create_mem256_pos-1154893313 30 256 1 -b---- 0.4 Domain-0 0 5000 32 r----- 3124.6 [dom0] Running `xm destroy 09_create_mem256_pos-1154893313' *** Finished cleaning domUs *** Test 10_create_fastdestroy started at Sun Aug 6 15:41:55 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3126.5 *** Finished cleaning domUs *** Test 10_create_fastdestroy started at Sun Aug 6 15:41:56 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain testdomain [dom0] Running `xm destroy testdomain' [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain testdomain [dom0] Running `xm destroy testdomain' PASS: 10_create_fastdestroy.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3201.7 *** Finished cleaning domUs *** Test 11_create_concurrent_pos started at Sun Aug 6 15:42:50 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3202.0 *** Finished cleaning domUs *** Test 11_create_concurrent_pos started at Sun Aug 6 15:42:50 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 *** 5182 doms is too many: capping at 50 Watch out! I'm trying to create 50 DomUs! [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_0 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_0'] [11_create_0] Sending `input' [11_create_0] Sending `ls' [11_create_0] Sending `echo $?' [0] Started 11_create_0 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_1 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_1'] [11_create_1] Sending `input' [11_create_1] Sending `ls' [11_create_1] Sending `echo $?' [1] Started 11_create_1 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_2 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_2'] [11_create_2] Sending `input' [11_create_2] Sending `ls' [11_create_2] Sending `echo $?' [2] Started 11_create_2 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_3 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_3'] [11_create_3] Sending `input' [11_create_3] Sending `ls' [11_create_3] Sending `echo $?' [3] Started 11_create_3 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_4 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_4'] [11_create_4] Sending `input' [11_create_4] Sending `ls' [11_create_4] Sending `echo $?' [4] Started 11_create_4 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain 11_create_5 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_5'] [11_create_5] Sending `input' [11_create_5] Sending `ls' [11_create_5] Sending `echo $?' [5] Started 11_create_5 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_6 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_6'] [11_create_6] Sending `input' [11_create_6] Sending `ls' [11_create_6] Sending `echo $?' [6] Started 11_create_6 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_7 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_7'] [11_create_7] Sending `input' [11_create_7] Sending `ls' [11_create_7] Sending `echo $?' [7] Started 11_create_7 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_8 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_8'] [11_create_8] Sending `input' [11_create_8] Sending `ls' [11_create_8] Sending `echo $?' [8] Started 11_create_8 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_9 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_9'] [11_create_9] Sending `input' [11_create_9] Sending `ls' [11_create_9] Sending `echo $?' [9] Started 11_create_9 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_10 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_10'] [11_create_10] Sending `input' [11_create_10] Sending `ls' [11_create_10] Sending `echo $?' [10] Started 11_create_10 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_11 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_11'] [11_create_11] Sending `input' [11_create_11] Sending `ls' [11_create_11] Sending `echo $?' [11] Started 11_create_11 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_12 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_12'] [11_create_12] Sending `input' [11_create_12] Sending `ls' [11_create_12] Sending `echo $?' [12] Started 11_create_12 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_13 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_13'] [11_create_13] Sending `input' [11_create_13] Sending `ls' [11_create_13] Sending `echo $?' [13] Started 11_create_13 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_14 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_14'] [11_create_14] Sending `input' [11_create_14] Sending `ls' [11_create_14] Sending `echo $?' [14] Started 11_create_14 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_15 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_15'] [11_create_15] Sending `input' [11_create_15] Sending `ls' [11_create_15] Sending `echo $?' [15] Started 11_create_15 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_16 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_16'] [11_create_16] Sending `input' [11_create_16] Sending `ls' [11_create_16] Sending `echo $?' [16] Started 11_create_16 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
Started domain 11_create_17 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_17'] [11_create_17] Sending `input' [11_create_17] Sending `ls' [11_create_17] Sending `echo $?' [17] Started 11_create_17 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_18 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_18'] [11_create_18] Sending `input' [11_create_18] Sending `ls' [11_create_18] Sending `echo $?' [18] Started 11_create_18 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_19 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_19'] [11_create_19] Sending `input' [11_create_19] Sending `ls' [11_create_19] Sending `echo $?' [19] Started 11_create_19 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_20 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_20'] [11_create_20] Sending `input' [11_create_20] Sending `ls' [11_create_20] Sending `echo $?' [20] Started 11_create_20 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_21 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_21'] [11_create_21] Sending `input' [11_create_21] Sending `ls' [11_create_21] Sending `echo $?' [21] Started 11_create_21 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_22 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_22'] [11_create_22] Sending `input' [11_create_22] Sending `ls' [11_create_22] Sending `echo $?' [22] Started 11_create_22 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_23 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_23'] [11_create_23] Sending `input' [11_create_23] Sending `ls' [11_create_23] Sending `echo $?' [23] Started 11_create_23 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_24 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_24'] [11_create_24] Sending `input' [11_create_24] Sending `ls' [11_create_24] Sending `echo $?' [24] Started 11_create_24 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_25 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_25'] [11_create_25] Sending `input' [11_create_25] Sending `ls' [11_create_25] Sending `echo $?' [25] Started 11_create_25 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_26 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_26'] [11_create_26] Sending `input' [11_create_26] Sending `ls' [11_create_26] Sending `echo $?' [26] Started 11_create_26 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_27 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_27'] [11_create_27] Sending `input' [11_create_27] Sending `ls' [11_create_27] Sending `echo $?' [27] Started 11_create_27 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_28 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_28'] [11_create_28] Sending `input' [11_create_28] Sending `ls' [11_create_28] Sending `echo $?' 
[28] Started 11_create_28 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_29 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_29'] [11_create_29] Sending `input' [11_create_29] Sending `ls' [11_create_29] Sending `echo $?' [29] Started 11_create_29 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_30 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_30'] [11_create_30] Sending `input' [11_create_30] Sending `ls' [11_create_30] Sending `echo $?' [30] Started 11_create_30 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_31 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_31'] [11_create_31] Sending `input' [11_create_31] Sending `ls' [11_create_31] Sending `echo $?' [31] Started 11_create_31 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_32 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_32'] [11_create_32] Sending `input' [11_create_32] Sending `ls' [11_create_32] Sending `echo $?' [32] Started 11_create_32 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 11_create_33 Console executing: ['/usr/sbin/xm', 'xm', 'console', '11_create_33'] Traceback (most recent call last): File "./11_create_concurrent_pos.test", line 46, in ? cons = dom.start() File "/home/unisys/xen-unstable.hg/tools/xm-test/lib/XmTestLib/XenDomain.py", line 233, in start return self.getConsole() File "/home/unisys/xen-unstable.hg/tools/xm-test/lib/XmTestLib/XenDomain.py", line 284, in getConsole self.console.sendInput("input") File "/home/unisys/xen-unstable.hg/tools/xm-test/lib/XmTestLib/Console.py", line 241, in sendInput realOutput = self.__runCmd(input) File "/home/unisys/xen-unstable.hg/tools/xm-test/lib/XmTestLib/Console.py", line 168, in __runCmd self.__getprompt(self.consoleFd) File "/home/unisys/xen-unstable.hg/tools/xm-test/lib/XmTestLib/Console.py", line 139, in __getprompt % self.limit, RUNAWAY) XmTestLib.Console.ConsoleError: Console run-away (exceeded 131072 bytes) FAIL: 11_create_concurrent_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 11_create_0 81 24 1 -b---- 0.3 11_create_1 82 24 1 -b---- 0.3 11_create_10 91 24 1 -b---- 0.3 11_create_11 92 24 1 -b---- 0.3 11_create_12 93 24 1 -b---- 0.3 11_create_13 94 24 1 -b---- 0.4 11_create_14 95 24 1 -b---- 0.4 11_create_15 96 24 1 -b---- 0.3 11_create_16 97 24 1 -b---- 0.3 11_create_17 98 24 1 -b---- 0.3 11_create_18 99 24 1 -b---- 0.3 11_create_19 100 24 1 -b---- 0.3 11_create_2 83 24 1 -b---- 0.3 11_create_20 101 24 1 -b---- 0.3 11_create_21 102 24 1 -b---- 0.3 11_create_22 103 24 1 -b---- 0.3 11_create_23 104 24 1 -b---- 0.3 11_create_24 105 24 1 -b---- 0.3 11_create_25 106 24 1 -b---- 0.3 11_create_26 107 24 1 -b---- 0.3 11_create_27 108 24 1 -b---- 0.3 11_create_28 109 24 1 -b---- 0.3 11_create_29 110 24 1 -b---- 0.3 11_create_3 84 24 1 -b---- 0.3 11_create_30 111 24 1 -b---- 0.4 11_create_31 112 24 1 -b---- 0.4 11_create_32 113 24 1 -b---- 0.3 11_create_33 114 24 1 r----- 3.6 11_create_4 85 24 1 -b---- 0.3 11_create_5 86 24 1 -b---- 0.3 11_create_6 87 24 1 -b---- 0.3 11_create_7 88 24 1 -b---- 0.3 11_create_8 89 24 1 -b---- 0.3 11_create_9 90 24 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3250.3 [dom0] Running `xm destroy 11_create_0' [dom0] Running `xm destroy 
11_create_1' [dom0] Running `xm destroy 11_create_10' [dom0] Running `xm destroy 11_create_11' [dom0] Running `xm destroy 11_create_12' [dom0] Running `xm destroy 11_create_13' [dom0] Running `xm destroy 11_create_14' [dom0] Running `xm destroy 11_create_15' [dom0] Running `xm destroy 11_create_16' [dom0] Running `xm destroy 11_create_17' [dom0] Running `xm destroy 11_create_18' [dom0] Running `xm destroy 11_create_19' [dom0] Running `xm destroy 11_create_2' [dom0] Running `xm destroy 11_create_20' [dom0] Running `xm destroy 11_create_21' [dom0] Running `xm destroy 11_create_22' [dom0] Running `xm destroy 11_create_23' [dom0] Running `xm destroy 11_create_24' [dom0] Running `xm destroy 11_create_25' [dom0] Running `xm destroy 11_create_26' [dom0] Running `xm destroy 11_create_27' [dom0] Running `xm destroy 11_create_28' [dom0] Running `xm destroy 11_create_29' [dom0] Running `xm destroy 11_create_3' [dom0] Running `xm destroy 11_create_30' [dom0] Running `xm destroy 11_create_31' [dom0] Running `xm destroy 11_create_32' [dom0] Running `xm destroy 11_create_33' [dom0] Running `xm destroy 11_create_4' [dom0] Running `xm destroy 11_create_5' [dom0] Running `xm destroy 11_create_6' [dom0] Running `xm destroy 11_create_7' [dom0] Running `xm destroy 11_create_8' [dom0] Running `xm destroy 11_create_9' *** Finished cleaning domUs *** Test 12_create_concurrent_stress_pos started at Sun Aug 6 15:46:11 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3275.2 *** Finished cleaning domUs *** Test 12_create_concurrent_stress_pos started at Sun Aug 6 15:46:12 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 12_create_concurrent_stress_pos-1154893572 Console executing: ['/usr/sbin/xm', 'xm', 'console', '12_create_concurrent_stress_pos-1154893572'] [12_create_concurrent_stress_pos-1154893572] Sending `input' [0/5] Started 12_create_concurrent_stress_pos-1154893572 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 12_create_concurrent_stress_pos-1154893575 Console executing: ['/usr/sbin/xm', 'xm', 'console', '12_create_concurrent_stress_pos-1154893575'] [12_create_concurrent_stress_pos-1154893575] Sending `input' [1/5] Started 12_create_concurrent_stress_pos-1154893575 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 12_create_concurrent_stress_pos-1154893578 Console executing: ['/usr/sbin/xm', 'xm', 'console', '12_create_concurrent_stress_pos-1154893578'] [12_create_concurrent_stress_pos-1154893578] Sending `input' [2/5] Started 12_create_concurrent_stress_pos-1154893578 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
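The FAIL above is not xm refusing to create the 34th domain: 11_create_33 starts, but XmTestLib's console handling gives up while waiting for a shell prompt and raises ConsoleError: Console run-away (exceeded 131072 bytes) from Console.py's __getprompt. The same byte-limit guard is what 16_create_smallmem_neg deliberately triggers later in this group ("Domain with 16 MB has runaway console as expected"). A minimal sketch of such a guard, assuming a "# " shell prompt and reusing the 131072-byte limit from the traceback (this is not the actual Console.py implementation):

    # Hedged sketch of a console "run-away" guard like the one behind the
    # FAIL above; the real check lives in tools/xm-test/lib/XmTestLib/Console.py.
    import os
    import select

    class ConsoleRunaway(Exception):
        pass

    def read_until_prompt(fd, prompt=b"# ", limit=131072, timeout=30.0):
        """Collect console output until `prompt` appears, but abort once more
        than `limit` bytes arrive: a domain spewing boot or panic output in a
        loop would otherwise keep the test blocked forever."""
        buf = b""
        while prompt not in buf:
            ready, _, _ = select.select([fd], [], [], timeout)
            if not ready:
                raise RuntimeError("timed out waiting for console prompt")
            buf += os.read(fd, 4096)
            if len(buf) > limit:
                raise ConsoleRunaway("Console run-away (exceeded %d bytes)"
                                     % limit)
        return buf

Whether 33 concurrent mini-domains genuinely flood the 34th console or its prompt simply never arrives cannot be told from this log alone; the guard only reports that output kept coming without a prompt.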
Started domain 12_create_concurrent_stress_pos-1154893581 Console executing: ['/usr/sbin/xm', 'xm', 'console', '12_create_concurrent_stress_pos-1154893581'] [12_create_concurrent_stress_pos-1154893581] Sending `input' [3/5] Started 12_create_concurrent_stress_pos-1154893581 [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 12_create_concurrent_stress_pos-1154893584 Console executing: ['/usr/sbin/xm', 'xm', 'console', '12_create_concurrent_stress_pos-1154893584'] [12_create_concurrent_stress_pos-1154893584] Sending `input' [4/5] Started 12_create_concurrent_stress_pos-1154893584 Starting task on 12_create_concurrent_stress_pos-1154893572 [12_create_concurrent_stress_pos-1154893572] Sending `gzip -c /dev/null & ' Starting task on 12_create_concurrent_stress_pos-1154893575 [12_create_concurrent_stress_pos-1154893575] Sending `gzip -c /dev/null & ' Starting task on 12_create_concurrent_stress_pos-1154893578 [12_create_concurrent_stress_pos-1154893578] Sending `gzip -c /dev/null & ' Starting task on 12_create_concurrent_stress_pos-1154893581 [12_create_concurrent_stress_pos-1154893581] Sending `gzip -c /dev/null & ' Starting task on 12_create_concurrent_stress_pos-1154893584 [12_create_concurrent_stress_pos-1154893584] Sending `gzip -c /dev/null & ' Waiting 60 seconds... Testing domain 12_create_concurrent_stress_pos-1154893572... [12_create_concurrent_stress_pos-1154893572] Sending `ls' [12_create_concurrent_stress_pos-1154893572] Sending `echo $?' Testing domain 12_create_concurrent_stress_pos-1154893575... [12_create_concurrent_stress_pos-1154893575] Sending `ls' [12_create_concurrent_stress_pos-1154893575] Sending `echo $?' Testing domain 12_create_concurrent_stress_pos-1154893578... [12_create_concurrent_stress_pos-1154893578] Sending `ls' [12_create_concurrent_stress_pos-1154893578] Sending `echo $?' Testing domain 12_create_concurrent_stress_pos-1154893581... [12_create_concurrent_stress_pos-1154893581] Sending `ls' [12_create_concurrent_stress_pos-1154893581] Sending `echo $?' Testing domain 12_create_concurrent_stress_pos-1154893584... [12_create_concurrent_stress_pos-1154893584] Sending `ls' [12_create_concurrent_stress_pos-1154893584] Sending `echo $?' 
PASS: 12_create_concurrent_stress_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 12_create_concurrent_stress_pos-1154893572 115 32 1 r----- 74.8 12_create_concurrent_stress_pos-1154893575 116 32 1 r----- 73.8 12_create_concurrent_stress_pos-1154893578 117 32 1 r----- 72.8 12_create_concurrent_stress_pos-1154893581 118 32 1 r----- 71.8 12_create_concurrent_stress_pos-1154893584 119 32 1 r----- 70.8 Domain-0 0 5000 32 r----- 3283.2 [dom0] Running `xm destroy 12_create_concurrent_stress_pos-1154893572' [dom0] Running `xm destroy 12_create_concurrent_stress_pos-1154893575' [dom0] Running `xm destroy 12_create_concurrent_stress_pos-1154893578' [dom0] Running `xm destroy 12_create_concurrent_stress_pos-1154893581' [dom0] Running `xm destroy 12_create_concurrent_stress_pos-1154893584' *** Finished cleaning domUs *** Test 13_create_multinic_pos started at Sun Aug 6 15:47:44 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3286.1 *** Finished cleaning domUs *** Test 13_create_multinic_pos started at Sun Aug 6 15:47:44 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm domid 13_create_multinic_pos-1154893664' Error: the domain '13_create_multinic_pos-1154893664' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893664 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893664'] [13_create_multinic_pos-1154893664] Sending `input' [13_create_multinic_pos-1154893664] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893664] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893664'] [13_create_multinic_pos-1154893664] Sending `input' [13_create_multinic_pos-1154893664] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893664] Sending `echo $?' [13_create_multinic_pos-1154893664] Sending `ls' [13_create_multinic_pos-1154893664] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893664'] [13_create_multinic_pos-1154893664] Sending `input' [13_create_multinic_pos-1154893664] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893664] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893664' [dom0] Running `xm domid 13_create_multinic_pos-1154893679' Error: the domain '13_create_multinic_pos-1154893679' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
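12_create_concurrent_stress_pos passes, and the log shows its whole recipe: start five domains from the same config, send each console a background workload (logged here as `gzip -c /dev/null & `), wait 60 seconds, then require that every console still answers `ls` followed by `echo $?`. A standalone sketch of the same sequence, using pexpect for the interactive consoles and a stand-in busy loop for the workload (both assumptions; the harness uses its own Console class and its own command line):

    # Hedged sketch of the create / load / wait / re-check flow shown above.
    # Domain names, the name=... override, the "# " prompt, and the workload
    # command are assumptions, not the harness's values.
    import subprocess
    import time
    import pexpect

    CONFIG = "/tmp/xm-test.conf"
    PROMPT = "# "

    def attach_console(name):
        child = pexpect.spawn("xm console %s" % name, timeout=30,
                              encoding="utf-8")
        child.sendline("")            # nudge the console so a prompt appears
        child.expect(PROMPT)
        return child

    names = ["stress_%d" % i for i in range(5)]
    for name in names:
        subprocess.check_call(["xm", "create", CONFIG, "name=%s" % name])
    doms = [(name, attach_console(name)) for name in names]

    for name, con in doms:            # start a background load on each domU
        con.sendline("while true; do :; done &")
        con.expect(PROMPT)

    time.sleep(60)                    # the harness also waits 60 seconds

    for name, con in doms:            # every console must still respond
        con.sendline("ls; echo RC=$?")
        con.expect("RC=0")
        print("%s still responsive" % name)

    for name, _ in doms:
        subprocess.check_call(["xm", "destroy", name])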
Started domain 13_create_multinic_pos-1154893679 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893679'] [13_create_multinic_pos-1154893679] Sending `input' [13_create_multinic_pos-1154893679] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893679] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893679'] [13_create_multinic_pos-1154893679] Sending `input' [13_create_multinic_pos-1154893679] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893679] Sending `echo $?' [13_create_multinic_pos-1154893679] Sending `ls' [13_create_multinic_pos-1154893679] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893679'] [13_create_multinic_pos-1154893679] Sending `input' [13_create_multinic_pos-1154893679] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893679] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893679' [dom0] Running `xm domid 13_create_multinic_pos-1154893693' Error: the domain '13_create_multinic_pos-1154893693' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893693 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893693'] [13_create_multinic_pos-1154893693] Sending `input' [13_create_multinic_pos-1154893693] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893693] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893693'] [13_create_multinic_pos-1154893693] Sending `input' [13_create_multinic_pos-1154893693] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893693] Sending `echo $?' [13_create_multinic_pos-1154893693] Sending `ls' [13_create_multinic_pos-1154893693] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893693'] [13_create_multinic_pos-1154893693] Sending `input' [13_create_multinic_pos-1154893693] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893693] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893693' [dom0] Running `xm domid 13_create_multinic_pos-1154893708' Error: the domain '13_create_multinic_pos-1154893708' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893708 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893708'] [13_create_multinic_pos-1154893708] Sending `input' [13_create_multinic_pos-1154893708] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893708] Sending `echo $?' 
Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893708'] [13_create_multinic_pos-1154893708] Sending `input' [13_create_multinic_pos-1154893708] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893708] Sending `echo $?' [13_create_multinic_pos-1154893708] Sending `ls' [13_create_multinic_pos-1154893708] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893708'] [13_create_multinic_pos-1154893708] Sending `input' [13_create_multinic_pos-1154893708] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893708] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893708' [dom0] Running `xm domid 13_create_multinic_pos-1154893722' Error: the domain '13_create_multinic_pos-1154893722' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893722 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893722'] [13_create_multinic_pos-1154893722] Sending `input' [13_create_multinic_pos-1154893722] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893722] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893722'] [13_create_multinic_pos-1154893722] Sending `input' [13_create_multinic_pos-1154893722] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893722] Sending `echo $?' [13_create_multinic_pos-1154893722] Sending `ls' [13_create_multinic_pos-1154893722] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893722'] [13_create_multinic_pos-1154893722] Sending `input' [13_create_multinic_pos-1154893722] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893722] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893722' [dom0] Running `xm domid 13_create_multinic_pos-1154893736' Error: the domain '13_create_multinic_pos-1154893736' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893736 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893736'] [13_create_multinic_pos-1154893736] Sending `input' [13_create_multinic_pos-1154893736] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893736] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893736'] [13_create_multinic_pos-1154893736] Sending `input' [13_create_multinic_pos-1154893736] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893736] Sending `echo $?' 
[13_create_multinic_pos-1154893736] Sending `ls' [13_create_multinic_pos-1154893736] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893736'] [13_create_multinic_pos-1154893736] Sending `input' [13_create_multinic_pos-1154893736] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893736] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893736' [dom0] Running `xm domid 13_create_multinic_pos-1154893751' Error: the domain '13_create_multinic_pos-1154893751' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893751 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893751'] [13_create_multinic_pos-1154893751] Sending `input' [13_create_multinic_pos-1154893751] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893751] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893751'] [13_create_multinic_pos-1154893751] Sending `input' [13_create_multinic_pos-1154893751] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893751] Sending `echo $?' [13_create_multinic_pos-1154893751] Sending `ls' [13_create_multinic_pos-1154893751] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893751'] [13_create_multinic_pos-1154893751] Sending `input' [13_create_multinic_pos-1154893751] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893751] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893751' [dom0] Running `xm domid 13_create_multinic_pos-1154893765' Error: the domain '13_create_multinic_pos-1154893765' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893765 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893765'] [13_create_multinic_pos-1154893765] Sending `input' [13_create_multinic_pos-1154893765] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893765] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893765'] [13_create_multinic_pos-1154893765] Sending `input' [13_create_multinic_pos-1154893765] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893765] Sending `echo $?' [13_create_multinic_pos-1154893765] Sending `ls' [13_create_multinic_pos-1154893765] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. 
This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893765'] [13_create_multinic_pos-1154893765] Sending `input' [13_create_multinic_pos-1154893765] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893765] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893765' [dom0] Running `xm domid 13_create_multinic_pos-1154893779' Error: the domain '13_create_multinic_pos-1154893779' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893779 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893779'] [13_create_multinic_pos-1154893779] Sending `input' [13_create_multinic_pos-1154893779] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893779] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893779'] [13_create_multinic_pos-1154893779] Sending `input' [13_create_multinic_pos-1154893779] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893779] Sending `echo $?' [13_create_multinic_pos-1154893779] Sending `ls' [13_create_multinic_pos-1154893779] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893779'] [13_create_multinic_pos-1154893779] Sending `input' [13_create_multinic_pos-1154893779] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893779] Sending `echo $?' [dom0] Running `xm destroy 13_create_multinic_pos-1154893779' [dom0] Running `xm domid 13_create_multinic_pos-1154893793' Error: the domain '13_create_multinic_pos-1154893793' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 13_create_multinic_pos-1154893793 [dom0] Running `ip addr add 169.254.0.2 dev vif0.0' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893793'] [13_create_multinic_pos-1154893793] Sending `input' [13_create_multinic_pos-1154893793] Sending `ifconfig lo 127.0.0.1' [13_create_multinic_pos-1154893793] Sending `echo $?' Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893793'] [13_create_multinic_pos-1154893793] Sending `input' [13_create_multinic_pos-1154893793] Sending `ifconfig eth0 inet 169.254.0.1 netmask 255.255.0.0 up' [13_create_multinic_pos-1154893793] Sending `echo $?' [13_create_multinic_pos-1154893793] Sending `ls' [13_create_multinic_pos-1154893793] Sending `echo $?' [dom0] Running `ip addr del 169.254.0.2 dev vif0.0' Warning: Executing wildcard deletion to stay compatible with old scripts. Explicitly specify the prefix length (169.254.0.2/32) to avoid this warning. This special behaviour is likely to disappear in further releases, fix your scripts! Console executing: ['/usr/sbin/xm', 'xm', 'console', '13_create_multinic_pos-1154893793'] [13_create_multinic_pos-1154893793] Sending `input' [13_create_multinic_pos-1154893793] Sending `ifconfig eth0 down' [13_create_multinic_pos-1154893793] Sending `echo $?' 
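Every iteration of 13_create_multinic_pos above triggers the same kernel warning when the alias is removed, because `ip addr del 169.254.0.2 dev vif0.0` is issued without a prefix length and `ip` falls back to a wildcard deletion. The warning text itself gives the fix: specify `169.254.0.2/32`. A small dom0-side helper that adds and removes the test alias the quiet way (the address and interface are the ones in the log; the helper itself is not part of xm-test):

    # Hedged sketch: manage the temporary dom0 alias used by the multinic
    # test, passing an explicit /32 prefix so `ip addr del` does not print
    # the "Executing wildcard deletion" warning seen above.
    import subprocess

    ALIAS = "169.254.0.2/32"
    DEV = "vif0.0"

    def add_alias():
        subprocess.check_call(["ip", "addr", "add", ALIAS, "dev", DEV])

    def del_alias():
        subprocess.check_call(["ip", "addr", "del", ALIAS, "dev", DEV])

    add_alias()
    try:
        pass    # ... exercise the domU over 169.254.0.0/16 here ...
    finally:
        del_alias()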
[dom0] Running `xm destroy 13_create_multinic_pos-1154893793' PASS: 13_create_multinic_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3340.5 *** Finished cleaning domUs *** Test 14_create_blockroot_pos started at Sun Aug 6 15:50:09 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3341.3 *** Finished cleaning domUs *** Test 14_create_blockroot_pos started at Sun Aug 6 15:50:09 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 14_create_blockroot Console executing: ['/usr/sbin/xm', 'xm', 'console', '14_create_blockroot'] [14_create_blockroot] Sending `input' [14_create_blockroot] Sending `ls' [14_create_blockroot] Sending `echo $?' PASS: 14_create_blockroot_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) 14_create_blockroot 130 64 1 -b---- 0.3 Domain-0 0 5000 32 r----- 3343.6 [dom0] Running `xm destroy 14_create_blockroot' *** Finished cleaning domUs *** Test 15_create_smallmem_pos started at Sun Aug 6 15:50:16 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3345.3 *** Finished cleaning domUs *** Test 15_create_smallmem_pos started at Sun Aug 6 15:50:16 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 15_create_smallmem_pos-1154893816 Console executing: ['/usr/sbin/xm', 'xm', 'console', '15_create_smallmem_pos-1154893816'] [15_create_smallmem_pos-1154893816] Sending `input' [15_create_smallmem_pos-1154893816] Sending `ls' [15_create_smallmem_pos-1154893816] Sending `echo $?' 
[dom0] Running `xm destroy 15_create_smallmem_pos-1154893816' PASS: 15_create_smallmem_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3347.5 *** Finished cleaning domUs *** Test 16_create_smallmem_neg started at Sun Aug 6 15:50:22 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3347.8 *** Finished cleaning domUs *** Test 16_create_smallmem_neg started at Sun Aug 6 15:50:22 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 16_create_smallmem_neg-1154893822 Console executing: ['/usr/sbin/xm', 'xm', 'console', '16_create_smallmem_neg-1154893822'] Domain with 16 MB has runaway console as expected [dom0] Running `xm destroy 16_create_smallmem_neg-1154893822' PASS: 16_create_smallmem_neg.test ==================== 1 of 15 tests failed ==================== make[2]: *** [check-TESTS] Error 1 make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/create' make[1]: *** [check-am] Error 2 make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/create' make: *** [check-recursive] Error 1 make: Target `check' not remade because of errors. 
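The create group closes with "1 of 15 tests failed"; the one failure in this run is 11_create_concurrent_pos above, and make propagates the error up through check-TESTS. Since every result in these logs is a literal "PASS: <name>.test" or "FAIL: <name>.test" line, a short script (a convenience for reading saved runs, not part of xm-test) can pull the summary out of a capture like this one:

    # Hedged helper, not part of xm-test: count PASS/FAIL markers in a saved
    # xm-test log and list the failing tests.
    import re
    import sys

    def summarise(path):
        passed, failed = [], []
        marker = re.compile(r"\b(PASS|FAIL): (\S+\.test)")
        with open(path) as log:
            for line in log:
                for status, name in marker.findall(line):
                    (passed if status == "PASS" else failed).append(name)
        print("%d passed, %d failed" % (len(passed), len(failed)))
        for name in failed:
            print("FAILED: %s" % name)

    if __name__ == "__main__":
        summarise(sys.argv[1])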
*** case destroy from group default *** Running tests for case destroy make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/destroy' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/destroy' cp 01_destroy_basic_pos.py 01_destroy_basic_pos.test chmod +x 01_destroy_basic_pos.test cp 02_destroy_noparm_neg.py 02_destroy_noparm_neg.test chmod +x 02_destroy_noparm_neg.test cp 03_destroy_nonexist_neg.py 03_destroy_nonexist_neg.test chmod +x 03_destroy_nonexist_neg.test cp 04_destroy_badparm_neg.py 04_destroy_badparm_neg.test chmod +x 04_destroy_badparm_neg.test cp 05_destroy_byid_pos.py 05_destroy_byid_pos.test chmod +x 05_destroy_byid_pos.test cp 06_destroy_dom0_neg.py 06_destroy_dom0_neg.test chmod +x 06_destroy_dom0_neg.test cp 07_destroy_stale_pos.py 07_destroy_stale_pos.test chmod +x 07_destroy_stale_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3356.6 *** Finished cleaning domUs *** Test 01_destroy_basic_pos started at Sun Aug 6 15:50:27 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3357.0 *** Finished cleaning domUs *** Test 01_destroy_basic_pos started at Sun Aug 6 15:50:28 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 01_destroy_basic_pos-1154893828 Console executing: ['/usr/sbin/xm', 'xm', 'console', '01_destroy_basic_pos-1154893828'] [01_destroy_basic_pos-1154893828] Sending `input' [01_destroy_basic_pos-1154893828] Sending `ls' [01_destroy_basic_pos-1154893828] Sending `echo $?' [dom0] Running `xm destroy 01_destroy_basic_pos-1154893828' PASS: 01_destroy_basic_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3359.7 *** Finished cleaning domUs *** Test 02_destroy_noparm_neg started at Sun Aug 6 15:50:34 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3360.1 *** Finished cleaning domUs *** Test 02_destroy_noparm_neg started at Sun Aug 6 15:50:34 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm destroy' Error: 'xm destroy' requires 1 argument. 
destroy Terminate a domain immediately PASS: 02_destroy_noparm_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3360.8 *** Finished cleaning domUs *** Test 03_destroy_nonexist_neg started at Sun Aug 6 15:50:35 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3361.3 *** Finished cleaning domUs *** Test 03_destroy_nonexist_neg started at Sun Aug 6 15:50:35 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm destroy -x' Error: an integer is required PASS: 03_destroy_nonexist_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3362.0 *** Finished cleaning domUs *** Test 04_destroy_badparm_neg started at Sun Aug 6 15:50:36 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3362.3 *** Finished cleaning domUs *** Test 04_destroy_badparm_neg started at Sun Aug 6 15:50:36 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm destroy 6666' Error: an integer is required PASS: 04_destroy_badparm_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3362.9 *** Finished cleaning domUs *** Test 05_destroy_byid_pos started at Sun Aug 6 15:50:37 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3363.3 *** Finished cleaning domUs *** Test 05_destroy_byid_pos started at Sun Aug 6 15:50:37 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
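The destroy group's first negative tests follow one recipe, visible above: call `xm destroy` with a missing or malformed argument and require an error ("'xm destroy' requires 1 argument." with no argument, "Error: an integer is required" for both `-x` and the nonexistent id 6666). Outside the harness the same checks reduce to a few subprocess calls (a hedged sketch; the real tests go through XmTestLib's command tracing):

    # Hedged sketch of the negative `xm destroy` argument checks (tests 02-04
    # above): each bad invocation must be rejected with an Error line.
    import subprocess

    BAD_INVOCATIONS = [
        ["xm", "destroy"],            # missing argument
        ["xm", "destroy", "-x"],      # bogus option
        ["xm", "destroy", "6666"],    # bad parameter: no domain with this id
    ]

    for cmd in BAD_INVOCATIONS:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        output = proc.stdout + proc.stderr
        assert proc.returncode != 0 or "Error" in output, \
            "expected %r to be rejected" % " ".join(cmd)
        print("rejected as expected: %s" % " ".join(cmd))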
Started domain 05_destroy_byid_pos-1154893837 [dom0] Running `xm domid 05_destroy_byid_pos-1154893837' 134 [dom0] Running `xm destroy 134' PASS: 05_destroy_byid_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3365.9 *** Finished cleaning domUs *** Test 06_destroy_dom0_neg started at Sun Aug 6 15:50:39 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3366.2 *** Finished cleaning domUs *** Test 06_destroy_dom0_neg started at Sun Aug 6 15:50:39 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm destroy 0' Error: Cannot destroy privileged domain 0 PASS: 06_destroy_dom0_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3366.9 *** Finished cleaning domUs *** Test 07_destroy_stale_pos started at Sun Aug 6 15:50:40 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3367.3 *** Finished cleaning domUs *** Test 07_destroy_stale_pos started at Sun Aug 6 15:50:40 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever Running stale tests [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 07_destroy_stale_pos-1154893840 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893840'] [07_destroy_stale_pos-1154893840] Sending `input' [07_destroy_stale_pos-1154893840] Sending `ls' [07_destroy_stale_pos-1154893840] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893840' [dom0] Running `xm mem-set 07_destroy_stale_pos-1154893840 32' Error: the domain '07_destroy_stale_pos-1154893840' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 07_destroy_stale_pos-1154893846 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893846'] [07_destroy_stale_pos-1154893846] Sending `input' [07_destroy_stale_pos-1154893846] Sending `ls' [07_destroy_stale_pos-1154893846] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893846' [dom0] Running `xm pause 07_destroy_stale_pos-1154893846' Error: the domain '07_destroy_stale_pos-1154893846' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
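05_destroy_byid_pos and 06_destroy_dom0_neg above exercise destroy-by-id from both sides: `xm domid <name>` resolves the running domain to id 134 and `xm destroy 134` succeeds, while `xm destroy 0` is refused with "Error: Cannot destroy privileged domain 0". The same flow by hand (a sketch, not harness code):

    # Hedged sketch of destroy-by-id as shown above: resolve a name with
    # `xm domid`, refuse Domain-0, then destroy by numeric id.
    import subprocess

    def destroy_by_id(name):
        out = subprocess.check_output(["xm", "domid", name], text=True)
        domid = int(out.strip())
        if domid == 0:
            # xm refuses this itself ("Cannot destroy privileged domain 0");
            # checking first just avoids relying on that error path.
            raise ValueError("refusing to destroy privileged domain 0")
        subprocess.check_call(["xm", "destroy", str(domid)])
        return domid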
Started domain 07_destroy_stale_pos-1154893852 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893852'] [07_destroy_stale_pos-1154893852] Sending `input' [07_destroy_stale_pos-1154893852] Sending `ls' [07_destroy_stale_pos-1154893852] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893852' [dom0] Running `xm unpause 07_destroy_stale_pos-1154893852' Error: the domain '07_destroy_stale_pos-1154893852' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 07_destroy_stale_pos-1154893857 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893857'] [07_destroy_stale_pos-1154893857] Sending `input' [07_destroy_stale_pos-1154893857] Sending `ls' [07_destroy_stale_pos-1154893857] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893857' [dom0] Running `xm reboot 07_destroy_stale_pos-1154893857' Error: the domain '07_destroy_stale_pos-1154893857' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 07_destroy_stale_pos-1154893863 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893863'] [07_destroy_stale_pos-1154893863] Sending `input' [07_destroy_stale_pos-1154893863] Sending `ls' [07_destroy_stale_pos-1154893863] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893863' [dom0] Running `xm save 07_destroy_stale_pos-1154893863 /tmp/foo' Error: the domain '07_destroy_stale_pos-1154893863' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 07_destroy_stale_pos-1154893869 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893869'] [07_destroy_stale_pos-1154893869] Sending `input' [07_destroy_stale_pos-1154893869] Sending `ls' [07_destroy_stale_pos-1154893869] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893869' [dom0] Running `xm block-list 07_destroy_stale_pos-1154893869' Error: the domain '07_destroy_stale_pos-1154893869' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 07_destroy_stale_pos-1154893874 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893874'] [07_destroy_stale_pos-1154893874] Sending `input' [07_destroy_stale_pos-1154893874] Sending `ls' [07_destroy_stale_pos-1154893874] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893874' [dom0] Running `xm shutdown 07_destroy_stale_pos-1154893874' Error: the domain '07_destroy_stale_pos-1154893874' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". Started domain 07_destroy_stale_pos-1154893880 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893880'] [07_destroy_stale_pos-1154893880] Sending `input' [07_destroy_stale_pos-1154893880] Sending `ls' [07_destroy_stale_pos-1154893880] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893880' [dom0] Running `xm domid 07_destroy_stale_pos-1154893880' Error: the domain '07_destroy_stale_pos-1154893880' does not exist. [dom0] Running `xm create /tmp/xm-test.conf' Using config file "/tmp/xm-test.conf". 
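07_destroy_stale_pos repeats one pattern: create a domain, destroy it, then point a single management subcommand (mem-set, pause, unpause, reboot, save, block-list, shutdown, domid, domname) at the stale name and require the "does not exist" error shown above. A compressed sketch of the verification half (the harness creates a fresh domain before every command; this just runs the whole list against one already-destroyed name):

    # Hedged sketch of the stale-domain checks done by 07_destroy_stale_pos:
    # every management command must report a destroyed domain as gone.
    import subprocess

    STALE_COMMANDS = [
        ["xm", "mem-set", "{name}", "32"],
        ["xm", "pause", "{name}"],
        ["xm", "unpause", "{name}"],
        ["xm", "reboot", "{name}"],
        ["xm", "save", "{name}", "/tmp/foo"],
        ["xm", "block-list", "{name}"],
        ["xm", "shutdown", "{name}"],
        ["xm", "domid", "{name}"],
        ["xm", "domname", "{name}"],
    ]

    def check_stale(name):
        for template in STALE_COMMANDS:
            cmd = [arg.format(name=name) for arg in template]
            proc = subprocess.run(cmd, capture_output=True, text=True)
            output = proc.stdout + proc.stderr
            assert "does not exist" in output, \
                "stale domain still visible to %s" % " ".join(cmd)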
Started domain 07_destroy_stale_pos-1154893886 Console executing: ['/usr/sbin/xm', 'xm', 'console', '07_destroy_stale_pos-1154893886'] [07_destroy_stale_pos-1154893886] Sending `input' [07_destroy_stale_pos-1154893886] Sending `ls' [07_destroy_stale_pos-1154893886] Sending `echo $?' [dom0] Running `xm destroy 07_destroy_stale_pos-1154893886' [dom0] Running `xm domname 07_destroy_stale_pos-1154893886' Error: the domain '07_destroy_stale_pos-1154893886' does not exist. PASS: 07_destroy_stale_pos.test ================== All 7 tests passed ================== make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/destroy' make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/destroy' *** case dmesg from group default *** Running tests for case dmesg make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/dmesg' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/dmesg' cp 01_dmesg_basic_pos.py 01_dmesg_basic_pos.test chmod +x 01_dmesg_basic_pos.test cp 02_dmesg_basic_neg.py 02_dmesg_basic_neg.test chmod +x 02_dmesg_basic_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3388.8 *** Finished cleaning domUs *** Test 01_dmesg_basic_pos started at Sun Aug 6 15:51:32 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3389.2 *** Finished cleaning domUs *** Test 01_dmesg_basic_pos started at Sun Aug 6 15:51:33 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm dmesg' eck architecture supported. (XEN) Intel machine check reporting enabled on CPU#22. (XEN) CPU22: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU22: Thermal monitoring enabled (XEN) CPU22: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 23/44 eip 90000 (XEN) Initializing CPU#23 (XEN) masked ExtINT on CPU#23 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 11 (XEN) CPU: Processor Core ID: 0 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#23. (XEN) CPU23: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU23: Thermal monitoring enabled (XEN) CPU23: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 24/48 eip 90000 (XEN) Initializing CPU#24 (XEN) masked ExtINT on CPU#24 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 12 (XEN) CPU: Processor Core ID: 0 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#24. 
(XEN) CPU24: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU24: Thermal monitoring enabled (XEN) CPU24: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 25/54 eip 90000 (XEN) Initializing CPU#25 (XEN) masked ExtINT on CPU#25 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 13 (XEN) CPU: Processor Core ID: 1 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#25. (XEN) CPU25: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU25: Thermal monitoring enabled (XEN) CPU25: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 26/56 eip 90000 (XEN) Initializing CPU#26 (XEN) masked ExtINT on CPU#26 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 14 (XEN) CPU: Processor Core ID: 0 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#26. (XEN) CPU26: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU26: Thermal monitoring enabled (XEN) CPU26: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 27/62 eip 90000 (XEN) Initializing CPU#27 (XEN) masked ExtINT on CPU#27 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 15 (XEN) CPU: Processor Core ID: 1 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#27. (XEN) CPU27: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU27: Thermal monitoring enabled (XEN) CPU27: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 28/50 eip 90000 (XEN) Initializing CPU#28 (XEN) masked ExtINT on CPU#28 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 12 (XEN) CPU: Processor Core ID: 1 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#28. (XEN) CPU28: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU28: Thermal monitoring enabled (XEN) CPU28: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 29/52 eip 90000 (XEN) Initializing CPU#29 (XEN) masked ExtINT on CPU#29 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 13 (XEN) CPU: Processor Core ID: 0 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#29. (XEN) CPU29: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU29: Thermal monitoring enabled (XEN) CPU29: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 30/58 eip 90000 (XEN) Initializing CPU#30 (XEN) masked ExtINT on CPU#30 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 14 (XEN) CPU: Processor Core ID: 1 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#30. 
(XEN) CPU30: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU30: Thermal monitoring enabled (XEN) CPU30: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Booting processor 31/60 eip 90000 (XEN) Initializing CPU#31 (XEN) masked ExtINT on CPU#31 (XEN) CPU: Trace cache: 12K uops, L1 D cache: 16K (XEN) CPU: L2 cache: 2048K (XEN) CPU: Physical Processor ID: 15 (XEN) CPU: Processor Core ID: 0 (XEN) VMXON is done (XEN) Intel machine check architecture supported. (XEN) Intel machine check reporting enabled on CPU#31. (XEN) CPU31: Intel P4/Xeon Extended MCE MSRs (24) available (XEN) CPU31: Thermal monitoring enabled (XEN) CPU31: Intel Genuine Intel(R) CPU 3.00GHz stepping 08 (XEN) Total of 32 processors activated. (XEN) ENABLING IO-APIC IRQs (XEN) -> Using new ACK method (XEN) init IO_APIC IRQs (XEN) IO-APIC (apicid-pin) 16-0, 16-16, 16-17, 16-18, 16-19, 16-20, 16-21, 16-22, 16-23, 26-0, 26-1, 26-2, 26-3, 26-4, 26-5, 26-6, 26-7, 26-8, 26-9, 26-10, 26-11, 26-12, 26-13, 26-14, 26-15, 26-16, 26-17, 26-18, 26-19, 26-20, 26-21, 26-22, 26-23, 27-0, 27-1, 27-2, 27-3, 27-4, 27-5, 27-6, 27-7, 27-8, 27-9, 27-10, 27-11, 27-12, 27-13, 27-14, 27-15, 27-16, 27-17, 27-18, 27-19, 27-20, 27-21, 27-22, 27-23 not connected. (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=0 pin2=0 (XEN) number of MP IRQ sources: 15. (XEN) number of IO-APIC #16 registers: 24. (XEN) number of IO-APIC #26 registers: 24. (XEN) number of IO-APIC #27 registers: 24. (XEN) testing the IO APIC....................... (XEN) IO APIC #16...... (XEN) .... register #00: 00000000 (XEN) ....... : physical APIC id: 00 (XEN) ....... : Delivery Type: 0 (XEN) ....... : LTS : 0 (XEN) .... register #01: 00178020 (XEN) ....... : max redirection entries: 0017 (XEN) ....... : PRQ implemented: 1 (XEN) ....... : IO APIC version: 0020 (XEN) .... register #02: 00000000 (XEN) ....... : arbitration: 00 (XEN) .... register #03: 00000001 (XEN) ....... : Boot DT : 1 (XEN) .... IRQ redirection table: (XEN) NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect: (XEN) 00 000 00 1 0 0 0 0 0 0 00 (XEN) 01 000 00 0 0 0 0 0 0 0 28 (XEN) 02 000 00 0 0 0 0 0 0 0 F0 (XEN) 03 000 00 0 0 0 0 0 0 0 30 (XEN) 04 000 00 0 0 0 0 0 0 0 F1 (XEN) 05 000 00 0 0 0 0 0 0 0 38 (XEN) 06 000 00 0 0 0 0 0 0 0 40 (XEN) 07 000 00 0 0 0 0 0 0 0 48 (XEN) 08 000 00 0 0 0 0 0 0 0 50 (XEN) 09 000 00 1 1 0 0 0 0 0 58 (XEN) 0a 000 00 0 0 0 0 0 0 0 60 (XEN) 0b 000 00 0 0 0 0 0 0 0 68 (XEN) 0c 000 00 0 0 0 0 0 0 0 70 (XEN) 0d 000 00 0 0 0 0 0 0 0 78 (XEN) 0e 000 00 0 0 0 0 0 0 0 88 (XEN) 0f 000 00 0 0 0 0 0 0 0 90 (XEN) 10 000 00 1 0 0 0 0 0 0 00 (XEN) 11 000 00 1 0 0 0 0 0 0 00 (XEN) 12 000 00 1 0 0 0 0 0 0 00 (XEN) 13 000 00 1 0 0 0 0 0 0 00 (XEN) 14 000 00 1 0 0 0 0 0 0 00 (XEN) 15 000 00 1 0 0 0 0 0 0 00 (XEN) 16 000 00 1 0 0 0 0 0 0 00 (XEN) 17 000 00 1 0 0 0 0 0 0 00 (XEN) IO APIC #26...... (XEN) .... register #00: 00000000 (XEN) ....... : physical APIC id: 00 (XEN) ....... : Delivery Type: 0 (XEN) ....... : LTS : 0 (XEN) .... register #01: 00178020 (XEN) ....... : max redirection entries: 0017 (XEN) ....... : PRQ implemented: 1 (XEN) ....... : IO APIC version: 0020 (XEN) .... register #02: 00000000 (XEN) ....... : arbitration: 00 (XEN) .... register #03: 00000001 (XEN) ....... : Boot DT : 1 (XEN) .... 
IRQ redirection table: (XEN) NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect: (XEN) 00 000 00 1 0 0 0 0 0 0 00 (XEN) 01 000 00 1 0 0 0 0 0 0 00 (XEN) 02 000 00 1 0 0 0 0 0 0 00 (XEN) 03 000 00 1 0 0 0 0 0 0 00 (XEN) 04 000 00 1 0 0 0 0 0 0 00 (XEN) 05 000 00 1 0 0 0 0 0 0 00 (XEN) 06 000 00 1 0 0 0 0 0 0 00 (XEN) 07 000 00 1 0 0 0 0 0 0 00 (XEN) 08 000 00 1 0 0 0 0 0 0 00 (XEN) 09 000 00 1 0 0 0 0 0 0 00 (XEN) 0a 000 00 1 0 0 0 0 0 0 00 (XEN) 0b 000 00 1 0 0 0 0 0 0 00 (XEN) 0c 000 00 1 0 0 0 0 0 0 00 (XEN) 0d 000 00 1 0 0 0 0 0 0 00 (XEN) 0e 000 00 1 0 0 0 0 0 0 00 (XEN) 0f 000 00 1 0 0 0 0 0 0 00 (XEN) 10 000 00 1 0 0 0 0 0 0 00 (XEN) 11 000 00 1 0 0 0 0 0 0 00 (XEN) 12 000 00 1 0 0 0 0 0 0 00 (XEN) 13 000 00 1 0 0 0 0 0 0 00 (XEN) 14 000 00 1 0 0 0 0 0 0 00 (XEN) 15 000 00 1 0 0 0 0 0 0 00 (XEN) 16 000 00 1 0 0 0 0 0 0 00 (XEN) 17 000 00 1 0 0 0 0 0 0 00 (XEN) IO APIC #27...... (XEN) .... register #00: 00000000 (XEN) ....... : physical APIC id: 00 (XEN) ....... : Delivery Type: 0 (XEN) ....... : LTS : 0 (XEN) .... register #01: 00178020 (XEN) ....... : max redirection entries: 0017 (XEN) ....... : PRQ implemented: 1 (XEN) ....... : IO APIC version: 0020 (XEN) .... register #02: 00000000 (XEN) ....... : arbitration: 00 (XEN) .... register #03: 00000001 (XEN) ....... : Boot DT : 1 (XEN) .... IRQ redirection table: (XEN) NR Log Phy Mask Trig IRR Pol Stat Dest Deli Vect: (XEN) 00 000 00 1 0 0 0 0 0 0 00 (XEN) 01 000 00 1 0 0 0 0 0 0 00 (XEN) 02 000 00 1 0 0 0 0 0 0 00 (XEN) 03 000 00 1 0 0 0 0 0 0 00 (XEN) 04 000 00 1 0 0 0 0 0 0 00 (XEN) 05 000 00 1 0 0 0 0 0 0 00 (XEN) 06 000 00 1 0 0 0 0 0 0 00 (XEN) 07 000 00 1 0 0 0 0 0 0 00 (XEN) 08 000 00 1 0 0 0 0 0 0 00 (XEN) 09 000 00 1 0 0 0 0 0 0 00 (XEN) 0a 000 00 1 0 0 0 0 0 0 00 (XEN) 0b 000 00 1 0 0 0 0 0 0 00 (XEN) 0c 000 00 1 0 0 0 0 0 0 00 (XEN) 0d 000 00 1 0 0 0 0 0 0 00 (XEN) 0e 000 00 1 0 0 0 0 0 0 00 (XEN) 0f 000 00 1 0 0 0 0 0 0 00 (XEN) 10 000 00 1 0 0 0 0 0 0 00 (XEN) 11 000 00 1 0 0 0 0 0 0 00 (XEN) 12 000 00 1 0 0 0 0 0 0 00 (XEN) 13 000 00 1 0 0 0 0 0 0 00 (XEN) 14 000 00 1 0 0 0 0 0 0 00 (XEN) 15 000 00 1 0 0 0 0 0 0 00 (XEN) 16 000 00 1 0 0 0 0 0 0 00 (XEN) 17 000 00 1 0 0 0 0 0 0 00 (XEN) Using vector-based indexing (XEN) IRQ to pin mappings: (XEN) IRQ240 -> 0:2 (XEN) IRQ40 -> 0:1 (XEN) IRQ48 -> 0:3 (XEN) IRQ241 -> 0:4 (XEN) IRQ56 -> 0:5 (XEN) IRQ64 -> 0:6 (XEN) IRQ72 -> 0:7 (XEN) IRQ80 -> 0:8 (XEN) IRQ88 -> 0:9 (XEN) IRQ96 -> 0:10 (XEN) IRQ104 -> 0:11 (XEN) IRQ112 -> 0:12 (XEN) IRQ120 -> 0:13 (XEN) IRQ136 -> 0:14 (XEN) IRQ144 -> 0:15 (XEN) .................................... done. (XEN) Using local APIC timer interrupts. (XEN) calibrating APIC timer ... (XEN) ..... CPU clock speed is 3000.0171 MHz. (XEN) ..... host bus clock speed is 200.0010 MHz. (XEN) ..... bus_scale = 0x0000CCD7 (XEN) checking TSC synchronization across 32 CPUs: (XEN) CPU#0 had 2124692 usecs TSC skew, fixed it up. (XEN) CPU#1 had 2124712 usecs TSC skew, fixed it up. (XEN) CPU#2 had 2124715 usecs TSC skew, fixed it up. (XEN) CPU#3 had 2124719 usecs TSC skew, fixed it up. (XEN) CPU#4 had 2124708 usecs TSC skew, fixed it up. (XEN) CPU#5 had 2124719 usecs TSC skew, fixed it up. (XEN) CPU#6 had 2124706 usecs TSC skew, fixed it up. (XEN) CPU#7 had 2124719 usecs TSC skew, fixed it up. (XEN) CPU#8 had 553572 usecs TSC skew, fixed it up. (XEN) CPU#9 had 553571 usecs TSC skew, fixed it up. (XEN) CPU#10 had 553571 usecs TSC skew, fixed it up. (XEN) CPU#11 had 553570 usecs TSC skew, fixed it up. (XEN) CPU#12 had 553574 usecs TSC skew, fixed it up. 
(XEN) CPU#13 had 553571 usecs TSC skew, fixed it up. (XEN) CPU#14 had 553571 usecs TSC skew, fixed it up. (XEN) CPU#15 had 553571 usecs TSC skew, fixed it up. (XEN) CPU#16 had -703940 usecs TSC skew, fixed it up. (XEN) CPU#17 had -703939 usecs TSC skew, fixed it up. (XEN) CPU#18 had -703946 usecs TSC skew, fixed it up. (XEN) CPU#19 had -703937 usecs TSC skew, fixed it up. (XEN) CPU#20 had -703940 usecs TSC skew, fixed it up. (XEN) CPU#21 had -703939 usecs TSC skew, fixed it up. (XEN) CPU#22 had -703940 usecs TSC skew, fixed it up. (XEN) CPU#23 had -703944 usecs TSC skew, fixed it up. (XEN) CPU#24 had -1974339 usecs TSC skew, fixed it up. (XEN) CPU#25 had -1974349 usecs TSC skew, fixed it up. (XEN) CPU#26 had -1974332 usecs TSC skew, fixed it up. (XEN) CPU#27 had -1974342 usecs TSC skew, fixed it up. (XEN) CPU#28 had -1974345 usecs TSC skew, fixed it up. (XEN) CPU#29 had -1974338 usecs TSC skew, fixed it up. (XEN) CPU#30 had -1974351 usecs TSC skew, fixed it up. (XEN) CPU#31 had -1974338 usecs TSC skew, fixed it up. (XEN) Platform timer is 1.193MHz PIT (XEN) Brought up 32 CPUs (XEN) Machine check exception polling timer started. (XEN) *** LOADING DOMAIN 0 *** (XEN) Domain 0 kernel supports features = { 0000000f }. (XEN) Domain 0 kernel requires features = { 00000000 }. (XEN) PHYSICAL MEMORY ARRANGEMENT: (XEN) Dom0 alloc.: 0000000060000000->0000000080000000 (32959401 pages to be allocated) (XEN) VIRTUAL MEMORY ARRANGEMENT: (XEN) Loaded kernel: ffffffff80100000->ffffffff804b4f08 (XEN) Init. ramdisk: ffffffff804b5000->ffffffff80b8de00 (XEN) Phys-Mach map: ffffffff80b8e000->ffffffff90803d48 (XEN) Start info: ffffffff90804000->ffffffff90805000 (XEN) Page tables: ffffffff90805000->ffffffff9088e000 (XEN) Boot stack: ffffffff9088e000->ffffffff9088f000 (XEN) TOTAL: ffffffff80000000->ffffffff90c00000 (XEN) ENTRY ADDRESS: ffffffff80100000 (XEN) Dom0 has maximum 32 VCPUs (XEN) Initrd len 0x6d8e00, start at 0xffffffff804b5000 (XEN) Scrubbing Free RAM: ................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................done. (XEN) Xen trace buffers: disabled (XEN) Xen is relinquishing VGA console. (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen). 
(XEN) mtrr: type mismatch for f6000000,400000 old: uncachable new: write-combining (XEN) mtrr: type mismatch for f6000000,400000 old: uncachable new: write-combining PASS: 01_dmesg_basic_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3389.8 *** Finished cleaning domUs *** Test 02_dmesg_basic_neg started at Sun Aug 6 15:51:33 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3390.2 *** Finished cleaning domUs *** Test 02_dmesg_basic_neg started at Sun Aug 6 15:51:34 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm dmesg -x' Error: option -x not recognized PASS: 02_dmesg_basic_neg.test ================== All 2 tests passed ================== make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/dmesg' make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/dmesg' *** case domid from group default *** Running tests for case domid make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/domid' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/domid' cp 01_domid_basic_pos.py 01_domid_basic_pos.test chmod +x 01_domid_basic_pos.test cp 02_domid_basic_neg.py 02_domid_basic_neg.test chmod +x 02_domid_basic_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3390.9 *** Finished cleaning domUs *** Test 01_domid_basic_pos started at Sun Aug 6 15:51:34 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3391.3 *** Finished cleaning domUs *** Test 01_domid_basic_pos started at Sun Aug 6 15:51:35 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm domid Domain-0' 0 PASS: 01_domid_basic_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3392.1 *** Finished cleaning domUs *** Test 02_domid_basic_neg started at Sun Aug 6 15:51:35 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name 
ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3392.5 *** Finished cleaning domUs *** Test 02_domid_basic_neg started at Sun Aug 6 15:51:36 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm domid non_existent_domain' Error: the domain 'non_existent_domain' does not exist. PASS: 02_domid_basic_neg.test ================== All 2 tests passed ================== make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/domid' make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/domid' *** case domname from group default *** Running tests for case domname make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/domname' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/domname' cp 01_domname_basic_pos.py 01_domname_basic_pos.test chmod +x 01_domname_basic_pos.test cp 02_domname_basic_neg.py 02_domname_basic_neg.test chmod +x 02_domname_basic_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3393.2 *** Finished cleaning domUs *** Test 01_domname_basic_pos started at Sun Aug 6 15:51:36 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3393.6 *** Finished cleaning domUs *** Test 01_domname_basic_pos started at Sun Aug 6 15:51:37 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm domname 0' Domain-0 PASS: 01_domname_basic_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3394.4 *** Finished cleaning domUs *** Test 02_domname_basic_neg started at Sun Aug 6 15:51:38 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3394.8 *** Finished cleaning domUs *** Test 02_domname_basic_neg started at Sun Aug 6 15:51:38 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm domname 1492' Error: the domain '1492' does not exist. 
PASS: 02_domname_basic_neg.test ================== All 2 tests passed ================== make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/domname' make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/domname' *** case enforce_dom0_cpus from group default *** Running tests for case enforce_dom0_cpus make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/enforce_dom0_cpus' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/enforce_dom0_cpus' cp 01_enforce_dom0_cpus_basic_pos.py 01_enforce_dom0_cpus_basic_pos.test chmod +x 01_enforce_dom0_cpus_basic_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3395.5 *** Finished cleaning domUs *** Test 01_enforce_dom0_cpus_basic_pos started at Sun Aug 6 15:51:39 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3395.9 *** Finished cleaning domUs *** Test 01_enforce_dom0_cpus_basic_pos started at Sun Aug 6 15:51:39 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 [dom0] Running `xm info' host : m1132-xenunstable release : 2.6.16.13-xen version : #1 SMP Sun Aug 6 11:46:44 EDT 2006 machine : x86_64 nr_cpus : 32 nr_nodes : 1 sockets_per_node : 16 cores_per_socket : 2 threads_per_core : 1 cpu_mhz : 3000 hw_caps : bfebfbff:20100800:00000000:00000180:000064bd:00000000:00000001 total_memory : 130943 free_memory : 
124387 xen_major : 3 xen_minor : 0 xen_extra : -unstable xen_caps : xen-3.0-x86_64 hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64 xen_pagesize : 4096 platform_params : virt_start=0xffff800000000000 xen_changeset : Fri Aug 04 20:34:44 2006 +0100 10949:ffa5b2975dff cc_compiler : gcc version 4.1.0 (SUSE Linux) cc_compile_by : root cc_compile_domain : site cc_compile_date : Sun Aug 6 11:39:52 EDT 2006 [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5000 32 r----- 3396.8 [dom0] Running `sed -e 's,dom0-cpus 0,dom0-cpus 1,' /etc/xen/xend-config.sxp > /tmp/xend-config.sxp' *** Restarting xend ... [dom0] Running `/etc/init.d/xend stop' [dom0] Running `/etc/init.d/xend start' [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 30 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 28 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 26 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 23 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 21 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 20 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `grep "^processor" /proc/cpuinfo | wc -l' 17 [dom0] Running `xm vcpu-set 0 32' *** Restarting xend ... [dom0] Running `/etc/init.d/xend stop' [dom0] Running `/etc/init.d/xend start' REASON: /proc/cpuinfo says xend didn't enforce dom0_cpus (17 != 1) FAIL: 01_enforce_dom0_cpus_basic_pos.test =================== 1 of 1 tests failed =================== make[2]: *** [check-TESTS] Error 1 make[2]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/enforce_dom0_cpus' make[1]: *** [check-am] Error 2 make[1]: Leaving directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/enforce_dom0_cpus' make: *** [check-recursive] Error 1 make: Target `check' not remade because of errors. 
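For reference, the failure above comes down to a simple polling check: the harness rewrites xend-config.sxp to "dom0-cpus 1", restarts xend, then repeatedly runs `grep "^processor" /proc/cpuinfo | wc -l` in dom0 and fails once the count stops changing but never reaches the enforced value (here it settled at 17 instead of 1). The following is a minimal sketch of that kind of check, written against the standard library only. It is not the actual 01_enforce_dom0_cpus_basic_pos.py source, and the helper names (dom0_cpu_count, wait_for_cpu_count) and the 30-second timeout are hypothetical; it only assumes, as the log shows, that the enforced value is 1 and that dom0's online CPUs are visible in /proc/cpuinfo.

#!/usr/bin/env python
# Sketch only: mirrors the probe the log shows, not the real xm-test case.

import subprocess
import time

def dom0_cpu_count():
    # Same probe the harness logs: grep "^processor" /proc/cpuinfo | wc -l
    out = subprocess.check_output(
        'grep "^processor" /proc/cpuinfo | wc -l', shell=True)
    return int(out.decode().strip())

def wait_for_cpu_count(expected, timeout=30, interval=1):
    # Poll until dom0 reports `expected` CPUs or the (hypothetical) timeout
    # expires; returns the last count seen either way.
    deadline = time.time() + timeout
    seen = dom0_cpu_count()
    while seen != expected and time.time() < deadline:
        time.sleep(interval)
        seen = dom0_cpu_count()
    return seen

if __name__ == "__main__":
    enforced = 1  # value written into xend-config.sxp as "dom0-cpus 1"
    seen = wait_for_cpu_count(enforced)
    if seen != enforced:
        # Corresponds to the REASON line above: (17 != 1)
        print("FAIL: /proc/cpuinfo says xend didn't enforce dom0_cpus "
              "(%d != %d)" % (seen, enforced))
    else:
        print("PASS: dom0 is running on %d CPU(s)" % seen)

In the run above the count decreases from 30 toward 17 and then stops, so a check of this shape reports failure: xend offlined some dom0 VCPUs after the restart but did not bring the domain down to the single CPU requested by dom0-cpus.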
*** case help from group default *** Running tests for case help make[1]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/help' make check-TESTS make[2]: Entering directory `/home/unisys/xen-unstable.hg/tools/xm-test/tests/help' cp 01_help_basic_pos.py 01_help_basic_pos.test chmod +x 01_help_basic_pos.test cp 02_help_basic_neg.py 02_help_basic_neg.test chmod +x 02_help_basic_neg.test cp 03_help_badparm_neg.py 03_help_badparm_neg.test chmod +x 03_help_badparm_neg.test cp 04_help_long_pos.py 04_help_long_pos.test chmod +x 04_help_long_pos.test cp 05_help_nonroot_pos.py 05_help_nonroot_pos.test chmod +x 05_help_nonroot_pos.test cp 06_help_allcmds.py 06_help_allcmds.test chmod +x 06_help_allcmds.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5001 32 r----- 3403.1 *** Finished cleaning domUs *** Test 01_help_basic_pos started at Sun Aug 6 15:52:16 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5001 32 r----- 3403.5 *** Finished cleaning domUs *** Test 01_help_basic_pos started at Sun Aug 6 15:52:16 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm help' Usage: xm [args] Control, list, and manipulate Xen guest instances xm common subcommands: console Attach to domain DomId's console. create [-c] [Name=Value].. Create a domain based on Config File destroy Terminate a domain immediately help Display this message list [--long] [DomId, ...] List information about domains mem-set Adjust the current memory usage for a domain migrate Migrate a domain to another machine pause Pause execution of a domain reboot [-w][-a] Reboot a domain restore Create a domain from a saved state file save Save domain state (and config) to file shutdown [-w][-a][-R|-H] Shutdown a domain top Monitor system and domains in real-time unpause Unpause a paused domain vcpu-set Set the number of active VCPUs for a domain within the range allowed by the domain configuration can be substituted for in xm subcommands. 
For a complete list of subcommands run 'xm help --long' For more help on xm see the xm(1) man page For more help on xm create, see the xmdomain.cfg(5) man page PASS: 01_help_basic_pos.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5001 32 r----- 3404.1 *** Finished cleaning domUs *** Test 02_help_basic_neg started at Sun Aug 6 15:52:17 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5001 32 r----- 3404.5 *** Finished cleaning domUs *** Test 02_help_basic_neg started at Sun Aug 6 15:52:17 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm' Usage: xm [args] Control, list, and manipulate Xen guest instances xm common subcommands: console Attach to domain DomId's console. create [-c] [Name=Value].. Create a domain based on Config File destroy Terminate a domain immediately help Display this message list [--long] [DomId, ...] List information about domains mem-set Adjust the current memory usage for a domain migrate Migrate a domain to another machine pause Pause execution of a domain reboot [-w][-a] Reboot a domain restore Create a domain from a saved state file save Save domain state (and config) to file shutdown [-w][-a][-R|-H] Shutdown a domain top Monitor system and domains in real-time unpause Unpause a paused domain vcpu-set Set the number of active VCPUs for a domain within the range allowed by the domain configuration can be substituted for in xm subcommands. For a complete list of subcommands run 'xm help --long' For more help on xm see the xm(1) man page For more help on xm create, see the xmdomain.cfg(5) man page PASS: 02_help_basic_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5001 32 r----- 3405.2 *** Finished cleaning domUs *** Test 03_help_badparm_neg started at Sun Aug 6 15:52:18 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5001 32 r----- 3405.6 *** Finished cleaning domUs *** Test 03_help_badparm_neg started at Sun Aug 6 15:52:18 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm -x' Error: Sub Command -x not found! Usage: xm [args] Control, list, and manipulate Xen guest instances xm common subcommands: console Attach to domain DomId's console. create [-c] [Name=Value].. 
Create a domain based on Config File destroy Terminate a domain immediately help Display this message list [--long] [DomId, ...] List information about domains mem-set Adjust the current memory usage for a domain migrate Migrate a domain to another machine pause Pause execution of a domain reboot [-w][-a] Reboot a domain restore Create a domain from a saved state file save Save domain state (and config) to file shutdown [-w][-a][-R|-H] Shutdown a domain top Monitor system and domains in real-time unpause Unpause a paused domain vcpu-set Set the number of active VCPUs for a domain within the range allowed by the domain configuration can be substituted for in xm subcommands. For a complete list of subcommands run 'xm help --long' For more help on xm see the xm(1) man page For more help on xm create, see the xmdomain.cfg(5) man page PASS: 03_help_badparm_neg.test *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5001 32 r----- 3406.2 *** Finished cleaning domUs *** Test 04_help_long_pos started at Sun Aug 6 15:52:19 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever *** Cleaning all running domU's [dom0] Running `xm list' Name ID Mem(MiB) VCPUs State Time(s) Domain-0 0 5001 32 r----- 3406.6 *** Finished cleaning domUs *** Test 04_help_long_pos started at Sun Aug 6 15:52:19 2006 EDT [dom0] Running `ip addr show |grep "inet 169.254" | grep -v vif' [dom0] Running `ip addr show dev vif0.0' 2: vif0.0: mtu 1500 qdisc noqueue link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff inet6 fe80::fcff:ffff:feff:ffff/64 scope link valid_lft forever preferred_lft forever [dom0] Running `xm help --long' Usage: xm [args] Control, list, and manipulate Xen guest instances xm full list of subcommands: Domain Commands: console Attach to domain DomId's console. create [-c] [Name=Value].. Create a domain based on Config File destroy Terminate a domain immediately domid Converts a domain name to a domain id domname Convert a domain id to a domain name list [--long] [DomId, ...] List information about domains list [--label] [DomId, ...] List information about domains including their labels mem-max Set maximum memory reservation for a domain mem-set Adjust the current memory usage for a domain migrate Migrate a domain to another machine pause Pause execution of a domain reboot [-w][-a] Reboot a domain rename Rename a domain restore Create a domain from a saved state file save Save domain state (and config) to file shutdown [-w][-a][-R|-H] Shutdown a domain sysrq Send a sysrq to a domain top Monitor system and domains in real-time unpause Unpause a paused domain vcpu-list List the VCPUs for a domain (or all domains) vcpu-pin Set which cpus a VCPU can use vcpu-set Set the number of active VCPUs for a domain within the range allowed by the domain configuration Xen Host Commands: dmesg [-c|--clear] Read or clear Xen's message buffer info Get information about the xen host log Print the xend log serve Proxy Xend XML-RPC over stdio Scheduler Commands: sched-credit Set or get credit scheduler parameters sched-bvt Set Borrowed Virtual Time scheduler parameters sched-bvt-ctxallow Set the BVT scheduler context switch allowance sched-sedf [DOM] [OPTIONS] Show|Set simple EDF parameters -p, --period Relative deadline(ms). 
-s, --slice Worst-case execution time(ms) (slice < period). -l, --latency scaled period(ms) in case the domain is doing heavy I/O. -e, --extra flag (0/1) which controls whether the domain can run in extra-time -w, --weight mutually exclusive with period/slice and specifies another way of setting a domain's cpu period/slice. Virtual Device Commands: block-attach [BackDomId] Create a new virtual block device block-detach Destroy a domain's virtual block device, where may either be the device ID or the device name as mounted in the guest block-list [--long] List virtual block devices for a domain network-attach [script=