
[Xen-ia64-devel] crash while unpacking tar.bz/tar.gz archives



Hi,

I've been running several domUs on my Tiger4 system under heavy load for the last week. Everything worked fine as long as I was not unpacking tar archives.

The whole machine crashes when I try to extract tar, tar.bz2 or tar.gz files, although it seems that not every single attempt brings the machine down.

This happens when I do it in dom0 as well as in a domU.
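
In case it helps anyone reproduce this: the load is essentially just repeated extraction of a large archive. A minimal sketch of that kind of workload (the archive path below is only an example, not the actual file I use) would be:

    import shutil
    import tarfile
    import tempfile

    ARCHIVE = "/tmp/test.tar.bz2"  # example path: any large bzip2-compressed tarball

    # Extract the archive several times to generate sustained block I/O,
    # removing the extracted tree again before the next pass.
    for i in range(10):
        dest = tempfile.mkdtemp(prefix="untar-test-")
        with tarfile.open(ARCHIVE, "r:bz2") as tar:
            tar.extractall(path=dest)
        shutil.rmtree(dest)
        print("pass %d done" % (i + 1))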

I'm getting the following output on the console:

######################## snip ###########################
INIT: Entering runlevel: 3
* Starting syslog-ng ... [ ok ]
* Mounting network filesystems ... [ ok ]
* Setting clock via the NTP client 'ntpdate' ... [ ok ]
* Starting ntpd ... [ ok ]
* Starting sshd ... [ ok ]
* Starting vixie-cron ... [ ok ]
* Starting local ... [ ok ]
Nothing to flush.
Waiting for peth0 to negotiate link....
(XEN) mm.c:708:d0 vcpu 0 iip 0xa000000100626ba0: bad mpa d 3 0x1ee40000 (=> 0x20000000)
(XEN) mm.c:708:d0 vcpu 0 iip 0xa00000010061bb00: bad mpa d 5 0x4f98000 (=> 0x20000000)
(XEN) mm.c:708:d0 vcpu 0 iip 0xa0000001004fbe20: bad mpa d 19 0x36bc000 (=> 0x10000000)
(XEN) mm.c:708:d0 vcpu 0 iip 0xe000000000000810: bad mpa d 0 0x3ffff00000008 (=> 0x2005c000)
blkback.20.xvda[11592]: Oops 11003706212352 [1]
Modules linked in:

Pid: 11592, CPU 0, comm:      blkback.20.xvda
psr : 0000121008522030 ifs : 8000002fbe1c4408 ip  : [<a0000001004f2ef1>]    Not tainted
ip is at __copy_user+0x891/0x960
unat: 0000000000000000 pfs : 400000000000040b rsc : 0000000000000007
rnat: 00000001c0c16000 bsps: 0000000000000008 pr  : 0000000000005541
ldrs: 0000000000000000 ccv : 0000000000000000 fpsr: 0009804c8a70433f
csd : 0000000000000000 ssd : 0000000000000000
b0  : a0000001004f7770 b6  : a0000001004f2de0 b7  : a0000001004f7820
f6  : 000000000000000000000 f7  : 1003e0000000000000200
f8  : 1003e0000000000000078 f9  : 000000000000000000000
f10 : 000000000000000000000 f11 : 000000000000000000000
r1  : a00000010110fe70 r2  : ffffffff00000020 r3  : ffffffff00000820
r8  : 0000000000002000 r9  : 00000000000000ff r10 : 0000000000000000
r11 : 00000000000049c1 r12 : e00000001b87fd90 r13 : e00000001b878000
r14 : 000000003eb7a000 r15 : ffffffff0000000f r16 : 0000000000002000
r17 : 000000003eb7a000 r18 : 000000003eb7a800 r19 : ffffffff0000100f
r20 : 000000003eb7b000 r21 : 0000000000000020 r22 : 0000000000000100
r23 : 0000000000001000 r24 : 0000000000000000 r25 : a000000100f10628
r26 : 0000000000000001 r27 : 8000000000000001 r28 : 000000003eb78000
r29 : a0000001004f2de0 r30 : 0000000000000000 r31 : 400000000000040b

Call Trace:
[<a00000010001d640>] show_stack+0x40/0xa0
                               sp=e00000001b87f940 bsp=e00000001b879468
[<a00000010001e2a0>] show_regs+0x840/0x880
                               sp=e00000001b87fb10 bsp=e00000001b879410
[<a000000100042a00>] die+0x1c0/0x380
                               sp=e00000001b87fb10 bsp=e00000001b8793c0
[<a000000100066c30>] ia64_do_page_fault+0x870/0x9a0
                               sp=e00000001b87fb30 bsp=e00000001b879370
[<a0000001000693e0>] xen_leave_kernel+0x0/0x3e0
                               sp=e00000001b87fbc0 bsp=e00000001b879370
[<a0000001004f2ef0>] __copy_user+0x890/0x960
                               sp=e00000001b87fd90 bsp=e00000001b879330
[<a0000001004f7770>] sync_single+0xf0/0x1a0
                               sp=e00000001b87fd90 bsp=e00000001b8792f0
[<a0000001004f7b00>] swiotlb_sync_sg_for_device+0x2e0/0x320
                               sp=e00000001b87fd90 bsp=e00000001b879290
[<a000000100777f40>] mbox_post_cmd+0x2c0/0x3a0
                               sp=e00000001b87fd90 bsp=e00000001b879238
[<a000000100778210>] megaraid_mbox_runpendq+0x1f0/0x280
                               sp=e00000001b87fd90 bsp=e00000001b8791e0
[<a00000010077a650>] megaraid_queue_command+0x19b0/0x19e0
                               sp=e00000001b87fd90 bsp=e00000001b879178
[<a00000010072a5b0>] scsi_dispatch_cmd+0x530/0x680
                               sp=e00000001b87fdb0 bsp=e00000001b879140
[<a0000001007370b0>] scsi_request_fn+0x7b0/0xb20
                               sp=e00000001b87fdb0 bsp=e00000001b8790f0
[<a0000001004c87b0>] __generic_unplug_device+0x90/0xc0
                               sp=e00000001b87fdb0 bsp=e00000001b8790d0
[<a0000001004caf50>] generic_unplug_device+0x30/0x140
                               sp=e00000001b87fdb0 bsp=e00000001b8790a8
[<a00000010085d220>] dm_table_unplug_all+0xa0/0x100
                               sp=e00000001b87fdb0 bsp=e00000001b879080
[<a000000100858d00>] dm_unplug_all+0x40/0x80
                               sp=e00000001b87fdb0 bsp=e00000001b879060
[<a0000001006c1410>] unplug_queue+0x70/0xc0
                               sp=e00000001b87fdb0 bsp=e00000001b879040
[<a0000001006c29e0>] blkif_schedule+0x9e0/0xbc0
                               sp=e00000001b87fdb0 bsp=e00000001b878fd0
[<a0000001000beb80>] kthread+0x180/0x200
                               sp=e00000001b87fe20 bsp=e00000001b878f98
[<a00000010001bb10>] kernel_thread_helper+0x30/0x60
                               sp=e00000001b87fe30 bsp=e00000001b878f70
[<a0000001000110e0>] start_kernel_thread+0x20/0x40
                               sp=e00000001b87fe30 bsp=e00000001b878f70

######################## snap ###########################

Does anybody have an idea what's wrong?

--
Jan Werner
Network Administrator

Wazap AG
Karl-Liebknecht-Str. 5
D-10178 Berlin, Germany

Tel           +49 (0)30 278744-2811
Fax           +49 (0)30 278744-29

Email         jan.werner@xxxxxxxx
URL           http://wazap.com

Winner of the prestigious 2007 Red Herring Europe 100 award.

Amtsgericht:  Berlin-Charlottenburg
Vorstand:     Andreas Rührig (Vors.), Timo Meyer, Alexander Piutti,
              Philip Gienandt
Aufsichtsrat: Martin Sinner (Vors.), Frank Böhnke, Florian Seubert,
              Markus Jorquera Imbernón, Philippe Collombel




_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

