[Xen-devel] migration regression in xen-4.11 and qemu-2.11 and qcow2
I assume osstest does not test real-world live migration, therefore the following regression remained unnoticed:

name="hvm"
builder="hvm"
memory=555
vcpus=4
serial="pty"
boot="c"
disk=[ 'qcow2:/nfs/vdisk.qcow2,hda,w', ]
device_model_version="qemu-xen"

xl create -cf hvm.cfg
sleep N
xl migrate hvm $host

On $host the domU becomes unusable, qemu reports:

xen be: qdisk-768: xen be: qdisk-768: error: Failed to get "write" lock

With qemu-2.10 the sender noticed the error somehow, and the migration was aborted:

qemu-system-i386: Failed to get "write" lock

With qemu-2.11 the sender thinks everything is alright and the domU is moved anyway.

What I gathered during debugging so far is that somehow qemu on the receiving side locks a region twice:

2018-05-07T09:49:45.810930Z qemu-system-i386: qemu_lock_fcntl: 39 c9 1 F_UNLCK>F_UNLCK 0 Success
2018-05-07T09:49:45.813717Z qemu-system-i386: qemu_lock_fcntl: 39 c9 1 F_RDLCK>F_RDLCK 0 Success
2018-05-07T09:49:45.814591Z qemu-system-i386: qemu_lock_fd_test: 39 c9 1 F_WRLCK>F_RDLCK 0 Success
raw_check_lock_bytes: fcntl on 39 returned -11/0

I do not know how raw_apply_lock_bytes() is supposed to be used. In its current form it does not work.

Anyone else seeing this?

Olaf
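For anyone who wants to poke at this outside of qemu: as far as I can tell, the failing qemu_lock_fd_test()/raw_check_lock_bytes() path boils down to an fcntl(F_OFD_GETLK) probe of a single byte in the image file. Below is a minimal standalone sketch of that probe. probe_write_lock() is a made-up helper for illustration, and I am assuming the "c9" in the trace is the hex byte offset of the probed lock region.

/* Sketch of an OFD-lock probe similar in spirit to qemu's image locking.
 * Assumptions: Linux open file description locks (F_OFD_GETLK); the
 * helper name and the probed byte offset are made up for illustration. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Ask the kernel whether an exclusive (write) lock on [start, start+len)
 * could be taken on fd; return 0 if yes, -EAGAIN if another open file
 * description already holds a conflicting lock on that range. */
static int probe_write_lock(int fd, off_t start, off_t len)
{
    struct flock fl = {
        .l_type   = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start  = start,
        .l_len    = len,
        .l_pid    = 0,          /* must be 0 for OFD locks */
    };

    if (fcntl(fd, F_OFD_GETLK, &fl) == -1) {
        return -errno;
    }
    /* F_UNLCK on return means "no conflict"; anything else (e.g. an
     * F_RDLCK held elsewhere) means the write lock cannot be taken,
     * i.e. the -11 (-EAGAIN) seen in raw_check_lock_bytes above. */
    return fl.l_type == F_UNLCK ? 0 : -EAGAIN;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/nfs/vdisk.qcow2";
    int fd = open(path, O_RDWR);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    int ret = probe_write_lock(fd, 0xc9, 1);  /* byte from the trace, if hex */
    printf("write lock on byte 0xc9: %s\n",
           ret == 0 ? "available" : strerror(-ret));
    close(fd);
    return 0;
}

OFD locks conflict between different open file descriptions even within a single process, so if the receiving qemu opens the image a second time and read-locks the region through one descriptor, a write-lock probe through the other comes back as F_RDLCK, which qemu_lock_fd_test() then reports as -EAGAIN, matching the F_WRLCK>F_RDLCK line in the trace.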