
Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops



On Wednesday, 02 June 2010 at 23:45, Brendan Cully wrote:
> On Thursday, 03 June 2010 at 06:47, Keir Fraser wrote:
> > On 03/06/2010 02:04, "Brendan Cully" <Brendan@xxxxxxxxx> wrote:
> > 
> > > I've done a bit of profiling of the restore code and observed the
> > > slowness here too. It looks to me like it's probably related to
> > > superpage changes. The big hit appears to be at the front of the
> > > restore process during calls to allocate_mfn_list, under the
> > > normal_page case. It looks like we're calling
> > > xc_domain_memory_populate_physmap once per page here, instead of
> > > batching the allocation? I haven't had time to investigate further
> > > today, but I think this is the culprit.
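
For illustration, a minimal sketch of the per-page versus batched physmap
population being described above, assuming the Xen 4.0-era libxc prototype
for xc_domain_memory_populate_physmap; both helper functions are
hypothetical and this is not the actual xc_domain_restore code:

/*
 * Sketch only: one hypercall per page versus one hypercall for the
 * whole batch of pfns.
 */
#include <stdint.h>
#include <xenctrl.h>

/* One hypercall (and one lock/unlock of the pfn buffer) per page. */
int populate_per_page(int xc_handle, uint32_t domid,
                      xen_pfn_t *pfns, unsigned long nr_pfns)
{
    unsigned long i;

    for ( i = 0; i < nr_pfns; i++ )
        if ( xc_domain_memory_populate_physmap(xc_handle, domid,
                                               1 /* nr_extents */,
                                               0 /* order: 4k pages */,
                                               0 /* mem_flags */,
                                               &pfns[i]) != 0 )
            return -1;
    return 0;
}

/* One hypercall covering the whole array of pfns. */
int populate_batched(int xc_handle, uint32_t domid,
                     xen_pfn_t *pfns, unsigned long nr_pfns)
{
    return xc_domain_memory_populate_physmap(xc_handle, domid, nr_pfns,
                                             0, 0, pfns);
}
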
> > 
> > Ccing Edwin Zhai. He wrote the superpage logic for domain restore.
> 
> Here's some data on the slowdown going from 2.6.18 to pvops dom0:
> 
> I wrapped the call to allocate_mfn_list in uncanonicalize_pagetable
> to measure the time to do the allocation.
> 
> kernel,  min call time,  max call time
> 2.6.18,  4 us,           72 us
> pvops,   202 us,         10696 us (!)
> 
> It looks like pvops is dramatically slower to perform the
> xc_domain_memory_populate_physmap call!
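
For reference, a self-contained sketch of that kind of gettimeofday()
timing wrapper; the function being timed here is a stand-in, not
allocate_mfn_list itself:

#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>

/* Elapsed microseconds between two gettimeofday() samples. */
static uint64_t tv_delta_us(const struct timeval *a, const struct timeval *b)
{
    return (uint64_t)(b->tv_sec - a->tv_sec) * 1000000ULL
           + (b->tv_usec - a->tv_usec);
}

/* Stand-in for the allocation call being measured. */
static void allocation_under_test(void)
{
    volatile unsigned long i;
    for ( i = 0; i < 100000; i++ )
        ;
}

int main(void)
{
    struct timeval before, after;

    gettimeofday(&before, NULL);
    allocation_under_test();
    gettimeofday(&after, NULL);

    printf("call took %llu us\n",
           (unsigned long long)tv_delta_us(&before, &after));
    return 0;
}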

Looking at changeset 20841:

  Allow certain performance-critical hypercall wrappers to register data
  buffers via a new interface which allows them to be 'bounced' into a
  pre-mlock'ed page-sized per-thread data area. This saves the cost of
  mlock/munlock on every such hypercall, which can be very expensive on
  modern kernels.

...maybe the lock_pages call in xc_memory_op (called from
xc_domain_memory_populate_physmap) has become very expensive, especially
now that this hypercall is issued once per page?
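
To put a rough number on that theory, here is a small micro-benchmark
(nothing Xen-specific, just a sketch): it compares mlock/munlock around
every call against locking one page-sized buffer once up front, which is
what the bounce-buffer change in changeset 20841 does. Numbers will vary
with the kernel, and error returns are ignored for brevity:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define ITERATIONS 10000

static uint64_t now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000000ULL + tv.tv_usec;
}

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    void *buf;
    uint64_t t0, t1;
    int i;

    if ( posix_memalign(&buf, page, page) )
        return 1;

    /* Per-call lock/unlock, as lock_pages()/unlock_pages() would do. */
    t0 = now_us();
    for ( i = 0; i < ITERATIONS; i++ )
    {
        mlock(buf, page);
        memset(buf, 0, page);   /* stands in for the hypercall body */
        munlock(buf, page);
    }
    t1 = now_us();
    printf("per-call mlock/munlock:   %llu us\n",
           (unsigned long long)(t1 - t0));

    /* Lock once, reuse the buffer for every call (bounce-buffer style). */
    t0 = now_us();
    mlock(buf, page);
    for ( i = 0; i < ITERATIONS; i++ )
        memset(buf, 0, page);
    munlock(buf, page);
    t1 = now_us();
    printf("pre-locked bounce buffer: %llu us\n",
           (unsigned long long)(t1 - t0));

    free(buf);
    return 0;
}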

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

