RE: [Xen-devel] Live migration fails under heavy network use
> On Thu, Feb 22, 2007 at 10:34:30PM -0000, Ian Pratt wrote:
>
> > > Not quite sure why the new domain is trying to map 00000000 though.
> >
> > The messages from the save side are expected. Is the message from the
> > restored domain triggered by the restore code, i.e. before the domain
> > is un-paused?
>
> I suspect so but haven't proved that. That would be a good test.
>
> > I expect if you change the 'pfn=0' in canonicalize_pagetable:539 to
> > 'deadb000' you'll see that propagated through to the restore message.
> > In which case, it's ugly, but benign.
>
> Wouldn't that pfn of 0 be an MFN other than 0 though?

Fair point. Thinking about it, that should be patched up by the next
iteration anyhow.

> I do not see any change when setting pfn as above. Any further ideas?
> I can try adding some backtraces. I suppose you're not seeing it with
> a Linux dom0?

I don't think so, but I couldn't swear to it. It used to come out during
Linux domain boot at one point; I can't remember whether it still does.
I presume the domain itself suffers no ill effects?

> > > I also see a fair amount of:
> > >
> > > Dom48 freeing in-use page 2991 (pseudophys 100a4): count=2 type=e8000000
> >
> > That's fine. Debug builds are a bit chatty for live migration...
>
> Both of these:
>
> (XEN) mm.c:590:d0 Error getting mfn a005e (pfn 4c35) from L1 entry
> 00000000a005e705 for dom2
> (XEN) mm.c:566:d0 Non-privileged (3) attempt to map I/O space 00000000
>
> are also present in a non-debug build. Would you take a patch to make
> both of them XENLOG_INFO? It's not good that we get console noise for
> normal operation (presuming the I/O space one /is/ normal operation!).

I think we need to understand this one first.

Thanks,
Ian
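For context, here is a minimal, self-contained sketch of the sentinel trick discussed in the exchange above. It is not the real canonicalize_pagetable() from the Xen save code; the function names, masks, and toy m2p table are purely illustrative. It only shows the idea: when an L1 entry's MFN has no PFN translation, write a recognisable poison PFN instead of 0, so that if the restore side later complains about a frame, the poison value in its error message proves the entry came from this fallback path.

```c
/*
 * Illustrative sketch only -- not the actual Xen save-side code.
 * Demonstrates substituting a poison PFN for untranslatable MFNs
 * while canonicalizing page-table entries for a saved image.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT     12
#define PTE_ADDR_MASK  (((1ULL << 52) - 1) & ~((1ULL << PAGE_SHIFT) - 1))
#define INVALID_PFN    (~0ULL)
#define POISON_PFN     0xdeadbULL   /* renders as ...deadb000 inside the PTE */

/* Toy machine-to-physical table standing in for the real m2p map. */
static const uint64_t toy_m2p[] = { 0x100, 0x101, INVALID_PFN, 0x103 };

static uint64_t mfn_to_pfn(uint64_t mfn)
{
    if (mfn >= sizeof(toy_m2p) / sizeof(toy_m2p[0]))
        return INVALID_PFN;
    return toy_m2p[mfn];
}

/* Rewrite one present PTE from machine frame number to pseudo-phys frame. */
static uint64_t canonicalize_pte(uint64_t pte)
{
    uint64_t mfn, pfn;

    if (!(pte & 1))                     /* not present: leave untouched */
        return pte;

    mfn = (pte & PTE_ADDR_MASK) >> PAGE_SHIFT;
    pfn = mfn_to_pfn(mfn);

    if (pfn == INVALID_PFN)
        pfn = POISON_PFN;               /* was 0 in the code under discussion */

    return (pte & ~PTE_ADDR_MASK) | (pfn << PAGE_SHIFT);
}

int main(void)
{
    /* MFN 2 has no translation, so its entry comes back poisoned. */
    printf("mapped:   %#llx\n",
           (unsigned long long)canonicalize_pte((1ULL << PAGE_SHIFT) | 0x63));
    printf("unmapped: %#llx\n",
           (unsigned long long)canonicalize_pte((2ULL << PAGE_SHIFT) | 0x63));
    return 0;
}
```

The point of the poison value is simply that 0xdeadb000 is unmistakable: if the restore side's error message quotes it, the offending entry demonstrably came from this fallback rather than from a genuine zero mapping.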