
Re: [Xen-devel] performance save/restore under xen-4.3.2 compared to kvm/qemu

  • To: "xen-devel lists.xen.org" <xen-devel@xxxxxxxxxxxxx>
  • From: max ustermann <ustermann.max@xxxxxx>
  • Date: Mon, 25 Aug 2014 13:35:02 +0000
  • Delivery-date: Mon, 25 Aug 2014 13:43:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>


First, thank you for the information.
For clarity, I should note that I ran the tests with an HVM guest (Windows
XP), not with a PV guest. Does this make any difference to your explanation?

I would also like to ask: are your modifications available anywhere? Are they
in the xen-unstable tree?

All the best
----------------Original Message-----------------
From: "Andrew Cooper" andrew.cooper3@xxxxxxxxxx 
To: ustermann.max@xxxxxx, xen-devel@xxxxxxxxxxxxx 
Date: Mon, 25 Aug 2014 13:23:09 +0100
> On 25/08/14 13:06, ustermann.max@xxxxxx wrote:
>> Hello everybody,
>> I hope I am in the right place with my question.
>> I have a VM with 1 GB of main memory under Xen 4.3.2. If I measure the
>> times for save and restore via "time", I get the following values:
>> save:
>> real 0m12.136s
>> user 0m0.175s
>> sys 0m2.662s
>> restore:
>> real 0m8.639s
>> user 0m0.468s
>> sys 0m1.807s
>> If I do the same with a VM under kvm/qemu (1 GB main memory), I get these
>> values:
>> save:
>> real 0m10.024s
>> user 0m0.008s
>> sys 0m0.003s
>> restore:
>> real 0m0.525s
>> user 0m0.015s
>> sys 0m0.004s
>> The host hardware is the same in both cases.
>> I am really surprised by the huge difference in the time needed for
>> restore, and also that Xen uses much more time in kernel mode (sys).
>> Can anyone give me some hints as to where this difference comes from?
>> Is there a way to speed up the restore process in Xen?
>> I am thankful for every hint.
>> All the best,
>> max
> Xen and KVM are two different types of hypervisor. By its nature,
> kvm/qemu has less to do for migration, as it already has full access to
> the VM's memory.
> It looks plausible that qemu restore is mmap()ing the restore file and
> running straight from there. You are never going to manage this under
> Xen, because of the extra isolation inherent in the Xen model.
> In terms of raw speed, my migration v2 series (still in development) has
> fixed several performance problems in the old migration code.
> In the case of your example in particular, my new code will be 4 times
> faster, as it does not map everything up to 4GB in the VM.
> I have also identified a bottleneck in the Linux PVOps kernel where the
> mmap batch ioctl takes a batch size of 1024 and generates 1024 batch
> hypercalls of batch size 1. Fixing this will certainly make the
> mapping/unmapping faster.
> ~Andrew
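
To illustrate the point above about the qemu restore path: a single-process
hypervisor can simply mmap() the saved state file and let the guest fault
pages in lazily, so "restore" returns almost immediately. The sketch below
shows only the general technique under an assumed file layout (a raw image
of guest RAM); it is not qemu's actual restore code.

    /* Minimal sketch: "restoring" guest RAM by mapping the save file
     * directly. Assumes a hypothetical raw-image file layout; this is
     * NOT qemu's real code, only the mmap-and-run idea. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void *map_guest_ram(const char *path, size_t *len_out)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return NULL; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return NULL; }

        /* MAP_PRIVATE: pages are faulted in from the file on demand,
         * copy-on-write, so no up-front copy of the 1 GB image is made. */
        void *ram = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE, fd, 0);
        close(fd); /* the mapping keeps the file contents reachable */
        if (ram == MAP_FAILED) { perror("mmap"); return NULL; }

        *len_out = (size_t)st.st_size;
        return ram;
    }

Because nothing is copied until the guest touches a page, the wall-clock
cost of restore collapses to roughly the cost of the mmap() call, which is
consistent with the 0.5s real time measured for kvm/qemu above. Under Xen
the toolstack runs in a separate domain and must map and copy pages through
the hypervisor, which is where the extra sys time goes.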
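The PVOps bottleneck described above has the following shape. All names in
this sketch (struct map_entry, make_mapping_hypercall) are hypothetical
stand-ins, not the real privcmd or mmap-batch interfaces; it only shows why
one hypercall per page is so much slower than one hypercall per 1024-entry
batch.

    /* Sketch of per-item vs. batched hypercalls; names are hypothetical. */
    #include <stddef.h>

    struct map_entry { unsigned long gfn; unsigned long va; };

    static void make_mapping_hypercall(const struct map_entry *entries,
                                       size_t n)
    {
        /* Hypothetical stand-in: a real implementation would trap into
         * the hypervisor once per call, which is the cost being counted. */
        (void)entries;
        (void)n;
    }

    static void map_one_at_a_time(const struct map_entry *batch, size_t n)
    {
        /* n trap/exit round-trips: the behaviour described above, where a
         * 1024-entry ioctl batch degenerates into 1024 hypercalls of 1. */
        for (size_t i = 0; i < n; i++)
            make_mapping_hypercall(&batch[i], 1);
    }

    static void map_batched(const struct map_entry *batch, size_t n)
    {
        /* one trap/exit round-trip for the whole 1024-entry batch */
        make_mapping_hypercall(batch, n);
    }

The fixed cost of trapping into the hypervisor is paid n times in the first
form and once in the second, which is why the batch-of-1 behaviour in the
PVOps mmap path shows up directly as sys time during save and restore.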

