
[Xen-users] kswapd0: page allocation failure. order:0, mode:0x20


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Luke S. Crawford" <lsc@xxxxxxxxx>
  • Date: Fri, 23 Feb 2007 12:48:46 -0800 (PST)
  • Delivery-date: Fri, 23 Feb 2007 12:48:21 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

One of my Xen Dom0 hosts crashed last night. Looking at the serial console logs I see many errors like the one below. The Dom0 has 256M of RAM and plenty of swap, as you can see below.

I was having OOM-killer errors earlier, so I set
vm.min_free_kbytes=8192
in /etc/sysctl.conf, and in grub I set
dom0_mem=256M
(roughly as sketched below), and that seemed to help, until yesterday.
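For completeness, here's roughly what those two changes look like on my box. The grub stanza is abbreviated, and the exact xen.gz / kernel / initrd file names and root device are just placeholders from my install, so adjust for yours:

    # /etc/sysctl.conf -- keep more low memory free for atomic allocations
    vm.min_free_kbytes = 8192
    # (run "sysctl -p" to apply it without a reboot)

    # /boot/grub/menu.lst -- note that dom0_mem goes on the hypervisor
    # (xen.gz) line, not on the Dom0 kernel "module" line
    title Xen 3.0.2-3 / Debian 3.1
        kernel /boot/xen-3.0.2-3.gz dom0_mem=256M
        module /boot/vmlinuz-2.6.16.13-xen root=/dev/sda1 ro console=tty0
        module /boot/initrd.img-2.6.16.13-xen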

This might be more of a "running a heavily used bridge on a low-memory Linux box" problem than a Xen problem, but the two are somewhat related; I was hoping someone might have some experience they could share.

I'm running the XenSource 3.0.2-3 Xen hypervisor with the default vmlinuz-2.6.16.13-xen Dom0 kernel built from the same source. The userland is Debian 3.1.

console output follows:

kswapd0: page allocation failure. order:0, mode:0x20
 [<c01436a6>] __alloc_pages+0x20a/0x2fa
 [<c015b0a0>] kmem_getpages+0x34/0x95
 [<c013d144>] handle_IRQ_event+0x49/0x9c
 [<c015bf00>] cache_grow+0xd1/0x1c3
 [<c015c154>] cache_alloc_refill+0x162/0x205
 [<c015c404>] kmem_cache_alloc+0x84/0x88
 [<c02f0f55>] alloc_skb_from_cache+0x53/0x103
 [<c02257f5>] __dev_alloc_skb+0x4d/0x77
 [<d1231b87>] e1000_alloc_rx_buffers+0x264/0x462 [e1000]
 [<c02f1003>] alloc_skb_from_cache+0x101/0x103
 [<d1230e2e>] e1000_clean_rx_irq+0x2c1/0x6e5 [e1000]
 [<d123071b>] e1000_intr+0x6f/0xfc [e1000]
 [<c013d144>] handle_IRQ_event+0x49/0x9c
 [<c013d217>] __do_IRQ+0x80/0xe9
 [<c01067f2>] do_IRQ+0x1a/0x25
 [<c022395e>] evtchn_do_upcall+0x8f/0xc4
 [<c02f68bf>] dev_queue_xmit+0x21e/0x329
 [<c0104f38>] hypervisor_callback+0x2c/0x34
 [<d1268130>] ipt_do_table+0x108/0x324 [ip_tables]
 [<d12a71d7>] br_nf_forward_finish+0x0/0x121 [bridge]
 [<d10fa037>] ipt_hook+0x37/0x3b [iptable_filter]
 [<c030f7f3>] nf_iterate+0x6f/0x87
 [<d12a71d7>] br_nf_forward_finish+0x0/0x121 [bridge]
 [<d12a71d7>] br_nf_forward_finish+0x0/0x121 [bridge]
 [<c030f876>] nf_hook_slow+0x6b/0x10d
 [<d12a71d7>] br_nf_forward_finish+0x0/0x121 [bridge]
 [<d12a741a>] br_nf_forward_ip+0x122/0x187 [bridge]
 [<d12a71d7>] br_nf_forward_finish+0x0/0x121 [bridge]
 [<d12a1ec5>] br_forward_finish+0x0/0x67 [bridge]
 [<c030f7f3>] nf_iterate+0x6f/0x87
 [<d12a1ec5>] br_forward_finish+0x0/0x67 [bridge]
 [<d12a1ec5>] br_forward_finish+0x0/0x67 [bridge]
 [<c030f876>] nf_hook_slow+0x6b/0x10d
 [<d12a1ec5>] br_forward_finish+0x0/0x67 [bridge]
 [<d12a2012>] __br_forward+0x73/0x7a [bridge]
 [<d12a1ec5>] br_forward_finish+0x0/0x67 [bridge]
 [<d12a2118>] br_flood+0x75/0x109 [bridge]
 [<d12a1f9f>] __br_forward+0x0/0x7a [bridge]
 [<d12a21fe>] br_flood_forward+0x27/0x2d [bridge]
 [<d12a1f9f>] __br_forward+0x0/0x7a [bridge]
 [<d12a2d63>] br_handle_frame_finish+0xfd/0x15c [bridge]
 [<d12a6691>] br_nf_pre_routing_finish+0xf9/0x363 [bridge]
 [<d12a2c66>] br_handle_frame_finish+0x0/0x15c [bridge]
 [<c030f7f3>] nf_iterate+0x6f/0x87
 [<d12a6598>] br_nf_pre_routing_finish+0x0/0x363 [bridge]
 [<d12a6598>] br_nf_pre_routing_finish+0x0/0x363 [bridge]
 [<c030f876>] nf_hook_slow+0x6b/0x10d
 [<d12a6598>] br_nf_pre_routing_finish+0x0/0x363 [bridge]
 [<d12a2c66>] br_handle_frame_finish+0x0/0x15c [bridge]
 [<d12a6ec5>] br_nf_pre_routing+0x253/0x4f9 [bridge]
 [<d12a6598>] br_nf_pre_routing_finish+0x0/0x363 [bridge]
 [<c030f7f3>] nf_iterate+0x6f/0x87
 [<d12a2c66>] br_handle_frame_finish+0x0/0x15c [bridge]
 [<d12a2c66>] br_handle_frame_finish+0x0/0x15c [bridge]
 [<c030f876>] nf_hook_slow+0x6b/0x10d
 [<d12a2c66>] br_handle_frame_finish+0x0/0x15c [bridge]
 [<d12a2f9e>] br_handle_frame+0x1dc/0x216 [bridge]
 [<d12a2c66>] br_handle_frame_finish+0x0/0x15c [bridge]
 [<c02f6fc7>] netif_receive_skb+0x192/0x303
 [<c022ef08>] netif_idx_release+0x31/0x4d
 [<c02f7202>] process_backlog+0xca/0x177
 [<c02f7384>] net_rx_action+0xd5/0x203
 [<c0122cfa>] __do_softirq+0xe6/0x109
 [<c0122d99>] do_softirq+0x7c/0x7e
 [<c01067f7>] do_IRQ+0x1f/0x25
 [<c022395e>] evtchn_do_upcall+0x8f/0xc4
 [<c0104f38>] hypervisor_callback+0x2c/0x34
 [<c014007b>] filemap_populate+0x40/0x168
 [<c0178369>] prune_dcache+0x46/0x11e
 [<c017876a>] shrink_dcache_memory+0x1f/0x45
 [<c0147860>] shrink_slab+0x18b/0x1e5
 [<c0148ddc>] balance_pgdat+0x2c0/0x3a1
 [<c013262d>] prepare_to_wait+0x20/0x69
 [<c0148fa4>] kswapd+0xe7/0x10e
 [<c013274a>] autoremove_wake_function+0x0/0x57
 [<c013274a>] autoremove_wake_function+0x0/0x57
 [<c0148ebd>] kswapd+0x0/0x10e
 [<c0102ef9>] kernel_thread_helper+0x5/0xb
Mem-info:
DMA per-cpu:
cpu 0 hot: high 90, batch 15 used:14
cpu 0 cold: high 30, batch 7 used:28
cpu 1 hot: high 90, batch 15 used:19
cpu 1 cold: high 30, batch 7 used:5
DMA32 per-cpu: empty
Normal per-cpu: empty
HighMem per-cpu: empty
Free pages:        3052kB (0kB HighMem)
Active:7051 inactive:9227 dirty:0 writeback:0 unstable:0 free:763 slab:13802 
mapped:4180 pagetables:88
DMA free:3052kB min:8192kB low:10240kB high:12288kB active:28204kB 
inactive:36908kB present:270336kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB 
pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Normal free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB 
pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
HighMem free:0kB min:128kB low:128kB high:128kB active:0kB inactive:0kB 
present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 1*8kB 0*16kB 1*32kB 1*64kB 1*128kB 1*256kB 1*512kB 0*1024kB 1*2048kB 
0*4096kB = 3052kB
DMA32: empty
Normal: empty
HighMem: empty
Swap cache: add 0, delete 0, find 0/0, race 0+0
Free swap  = 2097144kB
Total swap = 2097144kB
Free swap:       2097144kB
67584 pages of RAM
0 pages of HIGHMEM
18292 reserved pages
11792 pages shared
0 pages swap cached
0 pages dirty
0 pages writeback
4180 pages mapped
13802 pages slab
88 pages pagetables



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

