
Re: [Xen-devel] Is fallback vhost_net to qemu for live migrate available?



On 2013/8/30 0:08, Anthony Liguori wrote:
Hi Qin,

KVM and Xen represent memory in a very different way.  KVM can only
track when guest mode code dirties memory.  It relies on QEMU to track
when guest memory is dirtied by QEMU.  Since vhost is running outside
of QEMU, vhost also needs to tell QEMU when it has dirtied memory.

I don't think this is a problem with Xen though.  I believe (although
could be wrong) that Xen is able to track when either the domain or
dom0 dirties memory.

So I think you can simply ignore the dirty logging with vhost and it
should Just Work.

Xen tracks the guest's dirty memory during live migration much as KVM does (I guess it relies on EPT), but it cannot mark dom0's dirty memory automatically.

I did the same dirty logging with vhost_net, but through Xen's dirty memory interface instead of KVM's API, and then live migration works.
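Roughly what I mean, as a sketch only (the helper name and the per-page hypercall are simplified; I'm assuming the usual vhost log layout of one dirty bit per 4 KiB page packed into 64-bit words, and a real implementation would batch contiguous pages):

/*
 * Sketch: forward vhost's dirty log to Xen instead of KVM.
 * "log" is the bitmap handed to vhost via VHOST_SET_LOG_BASE; each set bit
 * means dom0 (vhost) wrote that guest page, so mark it with
 * xc_hvm_modified_memory() and xc_domain_save will recopy it.
 */
#include <stdint.h>
#include <xenctrl.h>

#define VHOST_LOG_PAGE 0x1000              /* one bit covers one 4 KiB page */

static void sync_vhost_log_to_xen(xc_interface *xch, domid_t domid,
                                  uint64_t *log, uint64_t start, uint64_t end)
{
    /* Walk the log in 64-page chunks (one uint64_t of the bitmap each). */
    uint64_t addr = start & ~(uint64_t)(VHOST_LOG_PAGE * 64 - 1);

    for (; addr < end; addr += (uint64_t)VHOST_LOG_PAGE * 64) {
        /* Atomically read and clear one chunk of the vhost dirty log. */
        uint64_t chunk = __atomic_exchange_n(&log[addr / VHOST_LOG_PAGE / 64],
                                             0, __ATOMIC_SEQ_CST);
        while (chunk) {
            int bit = __builtin_ctzll(chunk);
            uint64_t pfn = addr / VHOST_LOG_PAGE + bit;

            /* Tell Xen this page was dirtied by dom0 (unbatched here). */
            xc_hvm_modified_memory(xch, domid, pfn, 1);
            chunk &= chunk - 1;
        }
    }
}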

--------------------------------------------------------------------
There is a bug in Xen live migration when using a QEMU-emulated NIC (such as virtio_net).
Current flow:
    xc_save->dirty memory copy->suspend->stop_vcpu->last memory copy
    stop_qemu->stop_virtio_net
    save_qemu->save_virtio_net
This means virtio_net can still dirty guest memory after the last memory copy.

I have tested both vhost on QEMU and virtio_net emulated in QEMU; both show the same problem: the update of the vring index goes wrong and the network becomes unreachable (see the sketch after the flow below). My solution is:
    xc_save->dirty memory copy->suspend->stop_vcpu->stop_qemu
                        ->stop_virtio_net->last memory copy
    save_qemu->save_virtio_net
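To make the vring problem concrete (a simplified illustration, with types taken from the virtio ring layout in linux/virtio_ring.h, not an actual trace):

#include <stdint.h>

/* Guest-visible used ring, written by the backend (QEMU or vhost). */
struct vring_used_elem { uint32_t id; uint32_t len; };
struct vring_used {
    uint16_t flags;
    uint16_t idx;                   /* bumped by the backend on completion */
    struct vring_used_elem ring[];
};

/*
 * With the current flow the backend can still complete buffers and bump
 * used->idx in guest memory after the last memory copy:
 *
 *   last memory copy            -> destination RAM has used->idx == N
 *   backend completes a buffer  -> source RAM now has used->idx == N + 1
 *   save_qemu/save_virtio_net   -> device state saved against index N + 1
 *
 * On resume the saved device state and the guest-visible ring disagree,
 * the frontend never sees the completion, and the network goes dead.
 * Stopping virtio_net/vhost before the last copy keeps them consistent.
 */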

Xen's netfront and netback disconnect and flush the IO-ring during live migration, so they are OK.

