
Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops



On Thursday, 03 June 2010 at 18:15, Ian Jackson wrote:
> Brendan Cully writes ("Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops"):
> > The sender closes the fd, as it always has. xc_domain_restore has
> > always consumed the entire contents of the fd, because the qemu tail
> > has no length header under normal migration. There's no behavioral
> > difference here that I can see.
> 
> No, that is not the case.  Look for example at "save" in
> XendCheckpoint.py in xend, where the save code:
>   1. Converts the domain config to sxp and writes it to the fd
>   2. Calls xc_save (which calls xc_domain_save)
>   3. Writes the qemu save file to the fd

4. (in XendDomain) closes the fd. Again, this is the _sender_. I fail
to see your point.
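
(To make that sender-side sequence concrete, here is a rough sketch of
the steps above. It is only an illustration, not the actual xend code:
write_config_sxp() and run_xc_save() are placeholder helpers, and the
qemu-save path is an assumption.)

    import os

    def write_config_sxp(fd, sxp_config):
        # Placeholder: xend serializes the sxp form of the domain config
        # to the fd here.
        os.write(fd, repr(sxp_config).encode())

    def run_xc_save(fd, domid):
        # Placeholder for the call into xc_save, which in turn calls
        # xc_domain_save() on the same fd.
        pass

    def save_sketch(fd, dominfo):
        # 1. Convert the domain config to sxp and write it to the fd.
        write_config_sxp(fd, dominfo.sxpr())

        # 2. Call xc_save (which calls xc_domain_save).
        run_xc_save(fd, dominfo.getDomid())

        # 3. Append the qemu save file to the fd.  There is no length
        #    header on this tail, so the receiver reads until EOF.
        qemu_state = '/var/lib/xen/qemu-save.%d' % dominfo.getDomid()  # assumed path
        if os.path.exists(qemu_state):
            with open(qemu_state, 'rb') as f:
                os.write(fd, f.read())

        # 4. (back in XendDomain) close the fd -- this close is what
        #    tells the receiver the stream is finished.
        os.close(fd)
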

> > I have no objection to a more explicit interface. The current form is
> > simply Remus trying to be as invisible as possible to the rest of the
> > tool stack.
> 
> My complaint is that that is not currently the case.
> 
> > 1. reads are only supposed to be able to time out after the entire
> > first checkpoint has been received (IOW this wouldn't kick in until
> > normal migration had already completed)
> 
> OMG I hadn't noticed that you had introduced a static variable for
> that; I had assumed that "read_exact_timed" was roughly what it said
> on the tin.
> 
> I think I shall stop now before I become more rude.

Feel free to reply if you have an actual Remus-caused regression
instead of FUD based on misreading the code. I'd certainly be
interested in fixing something real.
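
(To be concrete about the timeout behaviour in question: the real code
is C in xc_domain_restore, so the following is only a conceptual sketch,
with a module-level flag standing in for the static variable. Until the
first complete checkpoint has been consumed -- i.e. until the point
where a normal migration would already have finished -- it behaves like
a plain blocking read.)

    import os
    import select

    # Set once the restore path has consumed one complete checkpoint;
    # stands in for the static variable mentioned above.
    completed_first_checkpoint = False

    def read_exact_timed(fd, size, timeout=1.0):
        # Plain blocking read_exact() until the first full checkpoint has
        # arrived; only after that (during the Remus checkpoint loop) can
        # a read actually time out.
        buf = b''
        while len(buf) < size:
            if completed_first_checkpoint:
                ready, _, _ = select.select([fd], [], [], timeout)
                if not ready:
                    raise IOError("timed out waiting for the next checkpoint")
            chunk = os.read(fd, size - len(buf))
            if not chunk:
                raise IOError("unexpected EOF on restore fd")
            buf += chunk
        return buf
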
