
[Xen-users] Attempt to allocate order 5 skbuff. Increase MAX_SKBUFF_ORDER


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Steven Timm <timm@xxxxxxxx>
  • Date: Fri, 01 May 2009 14:06:33 -0500 (CDT)
  • Delivery-date: Fri, 01 May 2009 12:45:11 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>


Running the redhat-clone Xen 3.0.3 (really a patched 3.1.2) with
kernel-xen-2.6.18-128.1.6.el5xen on both dom0 (64-bit) and domU (32-bit).
The domU in question is a Squid server.  I actually have two such domUs
on different physical hardware with the same setup, and both are having
the same trouble.

In /var/log/messages there is:

2009-05-01T11:00:23-05:00 s_sys@xxxxxxxxxxxxxx kernel: Attempt to allocate order 5 skbuff. Increase MAX_SKBUFF_ORDER.

This error is happening continuously, hundreds of times per minute.
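
For what it's worth, my own back-of-envelope reading (not something out of
the kernel documentation): if the order here has the usual power-of-two
meaning, an order-5 skbuff needs 2^5 = 32 contiguous 4 KB pages, i.e. a
single 128 KB chunk, which gets hard to find once memory is fragmented.
A small C sketch of that arithmetic, just to make the numbers concrete:

/* Back-of-envelope check only; assumes the usual 2^order-pages meaning
 * of the allocation order and 4 KB pages, not taken from the Xen or
 * kernel sources. */
#include <stdio.h>

int main(void)
{
    const unsigned long page_kb = 4;             /* 4 KB pages on x86 */
    for (unsigned int order = 0; order <= 5; order++) {
        unsigned long pages = 1UL << order;      /* 2^order contiguous pages */
        printf("order %u: %lu pages = %lu KB contiguous\n",
               order, pages, pages * page_kb);
    }
    return 0;
}

So order 5 works out to 32 pages = 128 KB of physically contiguous memory
per skbuff.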

Under the previous kernel I was running, which was the 2.6.18-xen
kernel from the xen.org xen 3.1.0 tarball, I had other problems--

 squid: page allocation failure. order:5, mode:0x20
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c015b2b6>] __alloc_pages+0x216/0x300
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c01772cd>] kmem_getpages+0x3d/0xe0
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c017824c>] cache_grow+0xdc/0x200
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c017851c>] cache_alloc_refill+0x1ac/0x200
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c01789fd>] __kmalloc+0xad/0xc0
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c029d4e0>] __alloc_skb+0x50/0x110
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c029dd5a>] skb_copy+0x2
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c02be1cc>] skb_make_writable+0x3c/0xd0
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<ee52e718>] manip_pkt+0x28/0x100 [ip_nat]
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<ee52e867>] ip_nat_packet+0x77/0xa0 [ip_nat]
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<ee526550>] ip_nat_fn+0x90/0x230 [iptable_nat]
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<ee5202bb>] ipt_do_table+0x28b/0x350 [ip_tables]
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<ee5267fb>] ip_nat_out+0x5b/0xe0 [iptable_nat]
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c02c7470>] ip_finish_output+0x0/0x1f0
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c02be055>] nf_iterate+0x55/0x90
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c02c7470>] ip_finish_output+0x0/0x1f0
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c02c7470>] ip_finish_output+0x0/0x1f0
2009-04-28T05:17:53-05:00 s_sys@xxxxxxxxxxxxxx kernel: [<c02be0f6>] nf_hook_slow+0x66/0x100

and then a dump of the kernel memory info.

xm top shows this domU using up to 100% of CPU.
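
If I am reading the trace above right, the big allocation comes from
skb_make_writable()/skb_copy() in the NAT output path, which copies the
whole packet into one buffer, so a large skb needs one big contiguous
chunk. One thing I plan to check (my guess, not something I have
confirmed) is whether the domU is simply fragmented rather than out of
memory: /proc/buddyinfo lists the free blocks per allocation order, and
a quick sketch to dump it would be:

/* Dump /proc/buddyinfo: on each line, the Nth count (starting at 0)
 * is the number of free 2^N-page blocks in that zone, so zeros from
 * the order-5 column onward mean no 128 KB chunks are left even when
 * plenty of memory is free overall. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/buddyinfo", "r");
    char line[512];

    if (!f) {
        perror("fopen /proc/buddyinfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}

If the order-5 and higher counts sit at or near zero while free memory
otherwise looks healthy, that would fit the symptom.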

Any idea what is wrong?  Some googling found a Bugzilla ticket about
this in Fedora 6, but it was closed with instructions to reopen it
if it happened again in Fedora 7.

Steve Timm


------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@xxxxxxxx  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Assistant Group Leader.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

