
[Xen-users] Performance of live-migration on the xl-stack with 10GB-lines

  • To: "'xen-users@xxxxxxxxxxxxx'" <xen-users@xxxxxxxxxxxxx>
  • From: "Hildebrand, Nils (BIT II 9)" <Nils.Hildebrand@xxxxxxxxxxx>
  • Date: Wed, 13 May 2015 07:36:10 +0000
  • Accept-language: de-DE, en-US
  • Delivery-date: Wed, 13 May 2015 07:37:45 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: AdCNT3m5vxb7IvS6Sauf6qrEGwZsgw==
  • Thread-topic: Performance of live-migration on the xl-stack with 10GB-lines



I noticed that xl now uses an ssh-tunnel to do live-migrations.


At first glance I thought: nice idea. This makes live migration secure without reinventing the wheel, and it removes the need to implement a custom transfer protocol.
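For reference, xl exposes this tunnel via its `-s` option, which substitutes an arbitrary command for plain ssh. A minimal sketch, assuming an xl that supports `-s` and an ssh build offering aes128-ctr; the guest and host names are placeholders:

```shell
#!/bin/sh
# Hypothetical names, for illustration only:
GUEST=mydomu       # the DomU to migrate
DEST=dest-host     # the destination Dom0

# Default form: xl tunnels the migration stream through plain ssh:
#   xl migrate "$GUEST" "$DEST"
#
# With -s, xl runs the given command instead of bare ssh; here a
# cheaper cipher is requested (assumption: your ssh offers aes128-ctr):
#   xl migrate -s "ssh -c aes128-ctr" "$GUEST" "$DEST"
```

Picking a cipher with hardware support (e.g. AES on CPUs with AES-NI) is the cheapest first step before any patching.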


Well – so I tried my first live migration on my shiny new servers with their fat 10 Gbit/s network links…


There is a severe bottleneck here: neither the standard ssh server nor the ssh client can use more than one CPU core, so the throughput of a single core caps the transfer speed.
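You can estimate that single-core ceiling without touching the network at all, by timing a stream through the same cipher family ssh negotiates. A rough local sketch (the key and IV are throwaway benchmark values, not real secrets):

```shell
#!/bin/sh
# Push zeros through AES-128-CTR on one core and count the bytes;
# wrap the pipeline in `time` to turn the count into MB/s. Whatever
# rate one core manages here is roughly the ceiling for one ssh stream.
BYTES=$(dd if=/dev/zero bs=1M count=16 2>/dev/null \
  | openssl enc -aes-128-ctr \
      -K 00112233445566778899aabbccddeeff \
      -iv 000102030405060708090a0b0c0d0e0f \
  | wc -c)
echo "encrypted $BYTES bytes on a single core"
```

`openssl speed -evp aes-128-ctr` gives the same kind of number directly; comparing it against 10 Gbit/s (~1.25 GB/s) shows how far one core falls short on machines without fast AES.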


So what are possible solutions?


First thought:

Contact the Xen developers and ask for multiple parallel ssh sessions for the live migration. This is probably hard to code, since you would need to partition the DomU's memory and start and control a number of transfer threads.


Second thought:

Why is ssh not multi-threaded?


Well, after searching a bit I found one such project:



Two drawbacks:

• You lose encryption completely, or are restricted to a single cipher

• You need to patch OpenSSH and rebuild it


What other alternatives exist to make full use of the available bandwidth for live migration on the xl stack?

Especially when live migration runs over a secure network (i.e. a private network visible only to the Dom0s)?
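On such a trusted migration VLAN, one workaround that avoids ssh entirely is streaming the saved image through netcat, at the cost of giving up both encryption and liveness (the DomU pauses for the duration). A sketch only, with placeholder names and port, assuming xl's save/restore accept a pipe:

```shell
#!/bin/sh
# Placeholders for illustration only:
PORT=4600          # hypothetical transfer port on the private network
GUEST=mydomu       # the DomU to move

# On the destination Dom0 (start the listener first):
#   nc -l "$PORT" | xl restore /dev/stdin
#
# On the source Dom0 (note: save/restore, NOT a live migration --
# the guest is suspended while the image streams across):
#   xl save "$GUEST" /dev/stdout | nc dest-host "$PORT"
```

This trades migration downtime for full link utilization, so it only makes sense for guests that tolerate a pause; for true live migration the ssh cipher remains the knob to turn.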



Kind regards


