
Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops



On Wednesday, 02 June 2010 at 17:24, Keir Fraser wrote:
> On 02/06/2010 17:18, "Ian Jackson" <Ian.Jackson@xxxxxxxxxxxxx> wrote:
> 
> > Andreas Olsowski writes ("[Xen-devel] slow live migration /
> > xc_restore on xen4 pvops"):
> >> [2010-06-01 21:20:57 5211] INFO (XendCheckpoint:423) ERROR Internal
> >> error: Error when reading batch size
> >> [2010-06-01 21:20:57 5211] INFO (XendCheckpoint:423) ERROR Internal
> >> error: error when buffering batch, finishing
> > 
> > These errors, and the slowness of migrations, are caused by changes
> > made to support Remus.  Previously, a migration would be regarded as
> > complete as soon as the final information including CPU states was
> > received at the migration target.  xc_domain_restore would return
> > immediately at that point.
> 
> This probably needs someone with Remus knowledge to take a look, to keep all
> cases working correctly. I'll Cc Brendan. It'd be good to get this fixed for
> a 4.0.1 in a few weeks.

I've done a bit of profiling of the restore code and observed the
slowness here too. It looks to me like it's probably related to the
superpage changes. The big hit appears to be at the front of the
restore process, during the calls to allocate_mfn_list in the
normal_page case: it looks like we're calling
xc_domain_memory_populate_physmap once per page there instead of
batching the allocation. I haven't had time to investigate further
today, but I think this is the culprit.
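
If that's what is happening, the obvious fix is to collect the PFNs
and issue one populate call per batch rather than one per page.
Something along these lines (just a sketch, assuming the Xen 4.0
prototype of xc_domain_memory_populate_physmap; the helper name and
the batch size are made up for illustration):

#include <stdint.h>
#include <xenctrl.h>

#define ALLOC_BATCH 1024  /* arbitrary batch size, for illustration */

/* Allocate 'count' order-0 (4k) pages for 'dom' in batches instead of
 * issuing one xc_domain_memory_populate_physmap() hypercall per page. */
static int populate_batched(int xc_handle, uint32_t dom,
                            xen_pfn_t *pfns, unsigned long count)
{
    unsigned long done = 0;

    while ( done < count )
    {
        unsigned long chunk = count - done;

        if ( chunk > ALLOC_BATCH )
            chunk = ALLOC_BATCH;

        if ( xc_domain_memory_populate_physmap(xc_handle, dom, chunk,
                                               0 /* extent order */, 0,
                                               &pfns[done]) != 0 )
            return -1;

        done += chunk;
    }

    return 0;
}

That brings the number of hypercalls down from one per page to one per
batch.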

> 
>  -- Keir
> 
> > Since the Remus patches, xc_domain_restore waits until it gets an IO
> > error, and also has a very short timeout which induces IO errors if
> > nothing is received within that timeout.  This is correct in the
> > Remus case but wrong in the normal case.
> > 
> > The code should be changed so that xc_domain_restore
> >  (a) takes an explicit parameter for the IO timeout, which
> >      should default to something much longer than the 100ms or so of
> >      the Remus case, and
> >  (b) gets told whether
> >     (i) it should return immediately after receiving the "tail"
> >         which contains the CPU state; or
> >     (ii) it should attempt to keep reading after receiving the "tail"
> >         and only return when the connection fails.
> > 
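
For what it's worth, here is one possible shape for that split, just
to make the two cases concrete. This is a sketch rather than a
proposed patch, and the struct and helper names are invented:

#include <sys/select.h>

struct restore_opts {
    int io_fd;
    unsigned int io_timeout_ms;    /* (a): ~100ms for the Remus case,
                                      much longer by default */
    int keep_reading_after_tail;   /* (b): 0 = return once the CPU-state
                                      tail has been read (normal
                                      migration), 1 = keep buffering
                                      checkpoints until the connection
                                      fails (Remus) */
};

/* Wait for more data on the migration stream; returns 0 if the
 * configured timeout expires with nothing to read, >0 if data is
 * available, <0 on error. */
static int stream_ready(const struct restore_opts *opts)
{
    fd_set rfds;
    struct timeval tv = {
        .tv_sec  = opts->io_timeout_ms / 1000,
        .tv_usec = (opts->io_timeout_ms % 1000) * 1000,
    };

    FD_ZERO(&rfds);
    FD_SET(opts->io_fd, &rfds);

    return select(opts->io_fd + 1, &rfds, NULL, NULL, &tv);
}

A normal migration would then treat a timeout as a genuine error and
return success as soon as the tail has been read, while the Remus path
would keep reading further checkpoints until the connection really
fails.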
> > In the case (b)(i), which should be the usual case, the behaviour
> > should be that which we would get if changeset 20406:0f893b8f7c15 was
> > reverted.  The offending code is mostly this, from 20406:
> > 
> > +    // DPRINTF("Buffered checkpoint\n");
> > +
> > +    if ( pagebuf_get(&pagebuf, io_fd, xc_handle, dom) ) {
> > +        ERROR("error when buffering batch, finishing\n");
> > +        goto finish;
> > +    }
> > +    memset(&tmptail, 0, sizeof(tmptail));
> > +    if ( buffer_tail(&tmptail, io_fd, max_vcpu_id, vcpumap,
> > +                     ext_vcpucontext) < 0 ) {
> > +        ERROR ("error buffering image tail, finishing");
> > +        goto finish;
> > +    }
> > +    tailbuf_free(&tailbuf);
> > +    memcpy(&tailbuf, &tmptail, sizeof(tailbuf));
> > +
> > +    goto loadpages;
> > +
> > +  finish:
> > 
> > Ian.
> > 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

