Re: Re: Re: [Xen-devel] a problem about xen4.0 remus
On Fri, Jul 09, 2010 at 08:53:03PM +0800, taojiang628 wrote:
>
> Hello:
> My domain0's kernel is linux-2.6.18-xen.hg. I want to know: does the
> domU kernel also have to be linux-2.6.18?
>

If you're using a PV domU, then yes, you need to use linux-2.6.18-xen.
Xen HVM guests don't have that requirement.

-- Pasi

> 2010-07-09
>
> --------------------------------------------------------------------------
>
> taojiang628
>
> --------------------------------------------------------------------------
>
> From: Pasi Kärkkäinen
> Sent: 2010-07-09 13:16:47
> To: taojiang628; xen-devel
> CC:
> Subject: Re: Re: [Xen-devel] a problem about xen4.0 remus
>
> On Thu, Jul 08, 2010 at 06:05:07PM -0700, Brendan Cully wrote:
> > I have no idea where that kernel comes from or how old it is. It may
> > be missing event-channel suspend support. Try building the 2.6.18 tree
> > hosted at xenbits: http://xenbits.xen.org/
> >
>
> 2.6.18-128.el5xen is the Red Hat RHEL 5.3 default kernel..
> It doesn't have the latest Xen bits.. when was that added to
> linux-2.6.18-xen.hg?
>
> -- Pasi
>
> > On Friday, 09 July 2010 at 08:59, taojiang628 wrote:
> > >
> > > My guest kernel is 2.6.18-128.el5xen. So what should I do about
> > > this problem? Thank you!
> > >
> > > 2010-07-09
> > >
> > > taojiang628
> > >
> > > From: Brendan Cully
> > > Sent: 2010-07-07 01:44:14
> > > To: taojiang628
> > > CC: xen-devel
> > > Subject: Re: [Xen-devel] a problem about xen4.0 remus
> > >
> > > On Tuesday, 06 July 2010 at 15:20, taojiang628 wrote:
> > > > hello:
> > > > I have a problem with Remus. I am using xen-4.0-test.hg +
> > > > linux-2.6.18-xen.hg. Does anyone know what I should do about this
> > > > problem? Thank you!
> > > >
> > > > [root@localhost ~]# remus -i 100 centos5.4 192.168.10.190
> > > > Disk is not replicated: tap:aio:/root/xen/domains/centos-5.4/disk.img,xvda,w
> > > > modprobe -q ifb
> > > > WARNING: suspend event channel unavailable, falling back to slow xenstore signalling
> > > > Had 0 unexplained entries in p2m table
> > > > 1: sent 65252, skipped 284, delta 23066ms, dom0 24%, target 0%, sent 92Mb/s, dirtied 0Mb/s 365 pages
> > > > 2: sent 365, skipped 0, delta 128ms, dom0 29%, target 0%, sent 93Mb/s, dirtied 10Mb/s 41 pages
> > > > 3: sent 41, skipped 0, Start last iteration
> > > > PROF: suspending at 1278311972.985159
> > > > installing buffer on imq0... done.
> > > > SUSPEND shinfo 0000027b
> > > > delta 55ms, dom0 94%, target 5%, sent 24Mb/s, dirtied 69Mb/s 117 pages
> > > > 4: sent 117, skipped 0, delta 3ms, dom0 100%, target 0%, sent 1277Mb/s, dirtied 1277Mb/s 117 pages
> > > > Total pages sent= 65775 (0.97x)
> > > > (of which 0 were fixups)
> > > > All memory is saved
> > > > PROF: resumed at 1278311973.044589
> > > > PROF: flushed memory at 1278311973.048302
> > > > PROF: suspending at 1278311973.101436
> > > > timeout polling fd
> > > > ERROR Internal error: Suspend request failed
> > > > ERROR Internal error: Domain appears not to have suspended
> > > > Save exit rc=1
> > > > Exception exceptions.KeyError: 'imq0' in <bound method BufferedNIC.__del__ of <xen.remus.device.BufferedNIC object at 0xb7971a4c>> ignored
> > > > [root@localhost ~]#
> > >
> > > What kernel are you using for your guest? The event channel warning
> > > suggests it's not 2.6.18. If you can, use the Xen 2.6.18 kernel for
> > > your guest as well. Also, take a look at the instructions here:
> > > http://nss.cs.ubc.ca/remus/doc.html
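Pasi's answer above applies only to PV guests, so the first things to verify
are which mode the guest runs in and which kernel it actually boots. A minimal
sketch using the classic xm toolstack (the domain name centos5.4 is taken from
the log above; the exact output format varies by Xen version):

    # In dom0: PV guests show a "(linux ...)" or bootloader image section
    # in the domain s-expression, HVM guests show "(image (hvm ...))".
    xm list --long centos5.4 | grep -E 'hvm|bootloader|kernel'

    # Inside the guest: confirm the kernel actually running.
    uname -r    # prints 2.6.18-128.el5xen here, i.e. the RHEL kernel,
                # not the xenbits linux-2.6.18-xen.hg tree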
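Brendan's suggestion of building the 2.6.18 tree from xenbits would look
roughly like this. The clone URL is assumed from the repository name mentioned
in the thread, and the config steps are only illustrative; start from a
known-good config with the Xen options enabled where possible:

    # Fetch the Mercurial tree referred to above.
    hg clone http://xenbits.xen.org/linux-2.6.18-xen.hg
    cd linux-2.6.18-xen.hg

    # Start from a config with CONFIG_XEN enabled, then build and install.
    make menuconfig          # or reuse an existing .config + "make oldconfig"
    make -j4
    make modules_install install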
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel