[Xen-devel] frequent ballooning results in QEMU exit
We created a 64-bit SLES11 SP1 guest, then used a script to change its memory periodically (once per second) with the mem-set command: set 1G, set 2G, set 1G, set 2G, and so on (a sketch of such a loop appears at the end of this message). After a few minutes, QEMU exited due to a SIGBUS error. Below is the call trace captured by gdb:

Program received signal SIGBUS, Bus error.
0x00007f94f74773d7 in memcpy () from /lib64/libc.so.6
(gdb) bt
#0  0x00007f94f74773d7 in memcpy () from /lib64/libc.so.6
#1  0x00007f94fa67016d in address_space_rw (as=<optimized out>, addr=2042531840, buf=0x7fffa36accf8 "", len=4, is_write=true)
    at /usr/include/bits/string3.h:52
#2  0x00007f94fa747cf0 in rw_phys_req_item (rw=<optimized out>, val=<optimized out>, i=<optimized out>, req=<optimized out>, addr=<optimized out>)
    at /opt/new/tools/qemu-xen-dir/xen-all.c:709
#3  write_phys_req_item (val=<optimized out>, i=<optimized out>, req=<optimized out>, addr=<optimized out>)
    at /opt/new/tools/qemu-xen-dir/xen-all.c:720
#4  cpu_ioreq_pio (req=<optimized out>) at /opt/new/tools/qemu-xen-dir/xen-all.c:736
#5  handle_ioreq (req=0x7f94fa464000) at /opt/new/tools/qemu-xen-dir/xen-all.c:793
#6  0x00007f94fa748abe in cpu_handle_ioreq (opaque=0x7f94fb39d3f0) at /opt/new/tools/qemu-xen-dir/xen-all.c:868
#7  0x00007f94fa5e3262 in qemu_iohandler_poll (readfds=0x7f94faeea7a0 <rfds>, writefds=0x7f94faeea820 <wfds>, xfds=<optimized out>, ret=<optimized out>)
    at iohandler.c:125
#8  0x00007f94fa5ec51d in main_loop_wait (nonblocking=<optimized out>) at main-loop.c:418
#9  0x00007f94fa6616dc in main_loop () at vl.c:1770
#10 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:3999

It looks like something is wrong with the mapcache: the memcpy faults on an address obtained from the mapcache, presumably because pages ballooned out of the guest were freed by Xen while the mapcache still held a live mapping to them (a minimal stand-alone demonstration of this failure mode also appears at the end of this message).

Any ideas about this issue? Thanks!

--weidong
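A minimal sketch of the ballooning loop described above, assuming the xl toolstack (xm mem-set behaves similarly; the domain name "guest" is a placeholder):

    #!/bin/sh
    # Alternate the guest's memory target between 1 GiB and 2 GiB,
    # once per second.  "guest" is a placeholder domain name.
    while true; do
        xl mem-set guest 1024m
        sleep 1
        xl mem-set guest 2048m
        sleep 1
    done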
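The SIGBUS is consistent with writing through a mapping whose backing pages have gone away. The stand-alone program below is not QEMU code; it is a sketch of the suspected failure mode that substitutes a truncated file for ballooned-out guest memory, and it dies with the same signal:

    /* demo.c: memcpy() into a live mapping whose backing store is gone.
     * Analogous to the mapcache case: the virtual mapping is still
     * valid, but the pages behind it have been released. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        char path[] = "/tmp/mapdemo-XXXXXX";
        int fd = mkstemp(path);
        long pagesz = sysconf(_SC_PAGESIZE);

        unlink(path);
        ftruncate(fd, pagesz);

        char *map = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(map, "ok", 2);    /* fine: the backing page exists */

        ftruncate(fd, 0);        /* drop the backing page under the mapping */
        memcpy(map, "boom", 4);  /* SIGBUS, as in the backtrace above */

        return 0;                /* never reached */
    }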