
Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops



Andreas,

You can check whether this is fixed by the latest fixes in
http://xenbits.xensource.com/xen-4.0-testing.hg. You should only need to
rebuild and reinstall tools/libxc.
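
For reference, that boils down to roughly the following (a sketch only; the
exact make targets and install paths can differ depending on how the tree was
originally built and installed):

    # pull the latest xen-4.0-testing tree (or "hg pull -u" in an existing clone)
    hg clone http://xenbits.xensource.com/xen-4.0-testing.hg
    cd xen-4.0-testing.hg

    # rebuild and reinstall only libxc, then restart the toolstack
    cd tools/libxc
    make clean && make && make install
    /etc/init.d/xend restart   # may not be strictly required, but doesn't hurt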

 Thanks,
 Keir

On 02/06/2010 23:59, "Andreas Olsowski" <andreas.olsowski@xxxxxxxxxxxxxxx>
wrote:

> I did some further research and shut down all virtual machines on
> xenturio1; after that I got (3 runs):
> (xm save takes ~5 seconds; user and sys times are always negligible, so I
> removed them to reduce text)
> 
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    0m25.349s 0m27.456s 0m27.208s
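> 
> (The three runs boil down to a loop like the following; the domU name
> "saverestore-x1" is only assumed here from the image filename:)
> 
>     # save and restore the same domU three times, timing each restore
>     for run in 1 2 3 ; do
>         xm save saverestore-x1 /var/saverestore-x1.mem
>         time xm restore /var/saverestore-x1.mem
>     done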
> 
> So the fact that there were running machines did impact performance of
> xc_restore.
> 
> I proceeded to create 20 "dummy" VMs with 1 GB of RAM and 4 vcpus each
> (dom0 has 4096M fixed, 24 GB total available):
> xenturio1:~# for i in {1..20} ; do echo creating dummy$i ; xt vm create
> dummy$i -vlan 27 -mem 1024 -cpus 4 ; done
> creating dummy1
> vm/create> successfully created vm 'dummy1'
> ....
> creating dummy20
> vm/create> successfully created vm 'dummy20'
> 
> and started them
> for i in {1..20} ; do echo starting dummy$i ; xm start dummy$i ; done
> 
> So my memory allocation should now be 100% (4 GB dom0 + 20 GB domUs), but
> why did I have 512 MB to spare for "saverestore-x1"? Oh well, onwards.
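> 
> (A quick way to see where the memory actually went is something like the
> following; "free_memory" in xm info is what the hypervisor still has
> unallocated, and xm list shows the per-domain allocations:)
> 
>     xm info | grep -E 'total_memory|free_memory'
>     xm list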
> 
> Once again I ran a save/restore, 3 times to be sure (the additional
> results are edited into the output).
> 
> With 20 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    1m16.375s 0m31.306s 1m10.214s
> 
> With 16 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    1m49.741s 1m38.696s 0m55.615s
> 
> With 12 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    1m3.101s 2m4.254s 1m27.193s
> 
> With 8 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    0m36.867s 0m43.513s 0m33.199s
> 
> With 4 running vms:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    0m40.454s 0m44.929s 1m7.215s
> 
> Keep in mind, those domUs don't do anything at all; they just idle.
> What is going on there? The results seem completely random: running more
> domUs can be faster than running fewer. How is that even possible?
> 
> So I deleted the dummyXs and started the productive domUs again, in 3
> steps to take further measurements:
> 
> 
> after first batch:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    0m23.968s 1m22.133s 1m24.420s
> 
> after second batch:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    1m54.310s 1m11.340s 1m47.643s
> 
> after third batch:
> xenturio1:~# time xm restore /var/saverestore-x1.mem
> real    1m52.065s 1m34.517s 2m8.644s 1m25.473s 1m35.943s 1m45.074s
> 1m48.407s 1m18.277s 1m18.931s 1m27.458s
> 
> So my current guess is that xc_restore speed depends on the amount of
> used memory, or rather how much is being grabbed by running processes.
> Does that make any sense?
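> 
> (To check that guess, one could watch the hypervisor's free memory and the
> restore helper while a restore is running, along these lines:)
> 
>     # shell 1: the timed restore as above
>     time xm restore /var/saverestore-x1.mem
>     # shell 2: watch free memory and the xc_restore process
>     watch -n 1 'xm info | grep free_memory'
>     top -b -n 1 | grep xc_restore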
> 
> But if that is so, then explain this:
> I started 3 VMs running "stress", which give:
> load average: 30.94, 30.04, 21.00
> Mem:   5909844k total,  4020480k used,  1889364k free,      288k buffers
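> 
> (The exact stress invocation isn't shown above; something along these
> lines inside each of the 3 VMs produces a comparable CPU and memory load:)
> 
>     # ~10 CPU hogs plus 4 workers each dirtying 512 MB of memory
>     stress --cpu 10 --vm 4 --vm-bytes 512M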
> 
> But still:
> tarballerina:~# time xm restore /var/saverestore-t.mem
> real    0m38.654s
> 
> Why doesn't xc_restore slow down on tarballerina, no matter what I do?
> Again: all 3 machines have 24 GB RAM and 2x quad-core Xeons, and dom0 is
> fixed to 4096M RAM.
> All use the same xen4 sources and the same kernels with the same configs.
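> 
> (For completeness: fixing dom0 to 4096M is typically done on the Xen boot
> line, e.g. a GRUB entry along the lines of:)
> 
>     kernel /boot/xen.gz dom0_mem=4096M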
> 
> Is the Xeon E5520 with DDR3 really this much faster than the L5335 and
> L5410 with DDR2?
> 
> If someone were to tell me that this is expected behaviour, I wouldn't
> like it, but at least I could accept it.
> Are machines under heavy CPU and memory utilization not a good
> measurement in this or any other case?
> 
> I think tomorrow night I will migrate all machines from xenturio1 to
> tarballerina, but first I have to verify that all VLANs are available,
> which I cannot do right now.
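> 
> (Per domU that would be something along the lines of the following,
> assuming xend relocation is enabled on the target:)
> 
>     xm migrate --live <domU> tarballerina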
> 
> ---
> 
> Andreas
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

