RE: [Xen-devel] crash on starting new domain
> -----Original Message-----
> From: Ian Pratt [mailto:m+Ian.Pratt@xxxxxxxxxxxx]
> Sent: Tuesday, 5 April 2005 16:58
> To: James Harper; xen-devel@xxxxxxxxxxxxxxxxxxx
> Cc: ian.pratt@xxxxxxxxxxxx; ian.pratt@xxxxxxxxxxxx
> Subject: RE: [Xen-devel] crash on starting new domain
>
> > Hmmm... they were built while running 2.6.9, but for 2.6.10.
> > The iscsi Makefile uses a lot of calls to uname but I'm
> > pretty sure I got all the places where it is used. It runs
> > fine up until the point where the new domain starts.
>
> Are you sure they were built with ARCH=xen ?

Ah... possibly not. That will only matter if they use privileged
instructions won't it? I'll recompile just to be sure and try again.

> > Could there be some interaction between xen vbd support and
> > iscsi? It's always worked in the past but I've jumped forward
> > a few versions of everything. Maybe I'll try not using disk
> > in xenu and make a barebones initrd..
>
> There's been no significant changes to the vbd code in 2.0.5.
> 2.0-testing (proto 2.0.6) has some new stuff that I doubt has been
> tested on iSCSI, but is probably OK.

I just ran it again and got a slightly different oops, but again just
when xenu is starting up its filesystem:

TCP: Hash tables configured (established 8192 bind 16384)
NET: Registered protocol family 1
NET: Registered protocol family 17
EXT3-fs: INFO: recovery required on readonly filesystem.
EXT3-fs: write access will be enabled during recovery.
Segmentation fault
xen1:~#

Apr  5 17:08:56 xen1 kernel: br1: port 2(vif3.0) entering learning state
Apr  5 17:08:56 xen1 kernel: Unable to handle kernel paging request at virtual address c7c78000
Apr  5 17:08:56 xen1 kernel:  printing eip:
Apr  5 17:08:56 xen1 kernel: c01423bf
Apr  5 17:08:56 xen1 kernel: *pde = ma 0141d067 pa 0001d067
Apr  5 17:08:56 xen1 kernel: *pte = ma 00000000 pa 55555000
Apr  5 17:08:56 xen1 kernel:  [handle_mm_fault+448/480] handle_mm_fault+0x1c0/0x1e0
Apr  5 17:08:56 xen1 kernel:  [do_page_fault+412/1683] do_page_fault+0x19c/0x693
Apr  5 17:08:56 xen1 kernel:  [tty_write+527/624] tty_write+0x20f/0x270
Apr  5 17:08:56 xen1 kernel:  [write_chan+0/544] write_chan+0x0/0x220
Apr  5 17:08:56 xen1 kernel:  [sys_recv+51/64] sys_recv+0x33/0x40
Apr  5 17:08:56 xen1 kernel:  [sys_socketcall+356/608] sys_socketcall+0x164/0x260
Apr  5 17:08:56 xen1 kernel:  [sys_write+81/128] sys_write+0x51/0x80
Apr  5 17:08:56 xen1 kernel:  [page_fault+59/64] page_fault+0x3b/0x40
Apr  5 17:08:56 xen1 kernel: Oops: 0002 [#1]
Apr  5 17:08:56 xen1 kernel: Modules linked in: nfsd exportfs lockd sunrpc tlan 8021q loop ext3 jbd mbcache crc32c libcrc32c iscsi_sfnet scsi_transport_iscsi dm_mod sd_mod scsi_mod e1000 eepro100
Apr  5 17:08:56 xen1 kernel: CPU:    0
Apr  5 17:08:56 xen1 kernel: EIP:    0061:[do_wp_page+207/1024]    Not tainted VLI
Apr  5 17:08:56 xen1 kernel: EFLAGS: 00011287   (2.6.10-xen0)
Apr  5 17:08:56 xen1 kernel: EIP is at do_wp_page+0xcf/0x400
Apr  5 17:08:56 xen1 kernel: eax: c1002020   ebx: c10f8f00   ecx: 00000400   edx: c1000000
Apr  5 17:08:56 xen1 kernel: esi: c0c66000   edi: c7c78000   ebp: c1018cc0   esp: c28ffe94
Apr  5 17:08:56 xen1 kernel: ds: 0069   es: 0069   ss: 0069
Apr  5 17:08:56 xen1 kernel: Process python (pid: 3896, threadinfo=c28fe000 task=c4f8f5c0)
Apr  5 17:08:56 xen1 kernel: Stack: c4abf5b8 c5fa6840 c28fff34 00000400 00000000 c10f8f00 c4abf5b8 c63870e0
Apr  5 17:08:56 xen1 kernel:        403f3000 00000001 c0143580 c63870e0 c4abf5b8 403f3000 c07bffcc c2648400
Apr  5 17:08:56 xen1 kernel:        02066065 00000000 c63870e0 c638710c 00000007 c4abf5b8 c011340c c63870e0
Apr  5 17:08:56 xen1 kernel: Call Trace:
Apr  5 17:08:56 xen1 kernel:  [handle_mm_fault+448/480] handle_mm_fault+0x1c0/0x1e0
Apr  5 17:08:56 xen1 kernel:  [do_page_fault+412/1683] do_page_fault+0x19c/0x693
Apr  5 17:08:56 xen1 kernel:  [tty_write+527/624] tty_write+0x20f/0x270

Thanks

James

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
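For readers of the archive, a minimal sketch of the rebuild suggested in the quoted exchange above: compile the out-of-tree iscsi modules against the xen-patched 2.6.10 tree with ARCH=xen, instead of letting the Makefile derive the target from `uname -r` (which still reports the running 2.6.9 kernel). The directory names and the KSRC variable below are illustrative assumptions, not paths from the actual setup:

    # Hypothetical path to the ARCH=xen 2.6.10 source tree.
    KSRC=/usr/src/linux-2.6.10-xen0
    # Hypothetical location of the iscsi module source.
    cd /usr/src/linux-iscsi
    # kbuild external-module build; ARCH=xen makes the modules compile
    # against the Xen headers rather than the native i386 ones.
    make -C "$KSRC" ARCH=xen M="$PWD" clean
    make -C "$KSRC" ARCH=xen M="$PWD" modules
    make -C "$KSRC" ARCH=xen M="$PWD" modules_install
    # Refresh module dependencies for the target kernel, not the running one.
    depmod -a 2.6.10-xen0

If the iscsi Makefile insists on calling uname internally, overriding whatever kernel-version variable it defines on the make command line achieves the same end.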