
Re: [Xen-devel] slow live magration / xc_restore on xen4 pvops



On 03/06/2010 16:03, "Brendan Cully" <brendan@xxxxxxxxx> wrote:

> I see no evidence that Remus has anything to do with the live
> migration performance regression discussed in this thread, and I
> haven't seen any other reported issues either. I think the mlock issue
> is a much more likely candidate.

I agree it's probably lack of batching plus expensive mlocks. The
performance difference between the machines under test is either because one
runs out of 2MB superpage extents before the other, or because mlock
operations are much more likely to take a slow path in the kernel (possibly
including disk I/O) on one of them.

We need to get batching back, and Edwin is on the case: I hope Andreas will
try out Edwin's patch. We can also reduce mlock cost by mlocking some of the
domain_restore arrays once across the entire restore operation, I should
imagine.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

