[Xen-bugs] [Bug 530] memory leak eventually causes oom-killer
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=530

------- Additional Comments From andy@xxxxxxxxxxxxxx 2006-02-20 10:39 -------

(In reply to comment #7)
> Are there any guests running when this happens? If so, are they using
> file-backed VBDs (i.e. disk=file:/path/to/disk.img...)?

3 guests running from exported LVM LVs.

I have been running slabtop for a while, here's the current output:

 Active / Total Objects (% used)    : 2069669 / 2120761 (97.6%)
 Active / Total Slabs (% used)      : 37776 / 37782 (100.0%)
 Active / Total Caches (% used)     : 80 / 125 (64.0%)
 Active / Total Size (% used)       : 136355.18K / 144898.06K (94.1%)
 Minimum / Average / Maximum Object : 0.01K / 0.07K / 128.00K

    OBJS  ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
 2021784 2021783  99%    0.06K  33144       61    132576K size-64
   22650    8604  37%    0.05K    302       75      1208K buffer_head
   19684    3909  19%    0.14K    703       28      2812K dentry_cache
   15072    5655  37%    0.47K   1884        8      7536K ext3_inode_cache
    6554    4117  62%    0.02K     29      226       116K dm_io
    6328    4117  65%    0.02K     28      226       112K dm_tio
    4384    2922  66%    0.25K    274       16      1096K ip_conntrack
    3317    3249  97%    0.04K     31      107       124K sysfs_dir_cache
    2387    2242  93%    0.12K     77       31       308K size-96
    2352    1364  57%    0.27K    168       14       672K radix_tree_node
    2261    1982  87%    0.03K     19      119        76K size-32
    1305    1074  82%    0.09K     29       45       116K vm_area_struct
    1130     367  32%    0.02K      5      226        20K biovec-1
    1020     948  92%    0.25K     68       15       272K ip_dst_cache
     972     938  96%    0.32K     81       12       324K inode_cache
     868     846  97%    0.12K     28       31       112K size-128

--
Configure bugmail: http://bugzilla.xensource.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

_______________________________________________
Xen-bugs mailing list
Xen-bugs@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-bugs
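[Editor's note: for readers triaging similar reports, the per-cache rows above can be parsed mechanically to spot the runaway cache. A minimal sketch, assuming the 8-column slabtop layout shown here and 4 KiB pages (so CACHE SIZE should equal SLABS x 4 KiB); the sample text and helper are illustrative, not part of the original report:]

```python
# Sketch: parse slabtop per-cache rows and find the dominant cache.
# Assumes the 8-column layout (OBJS ACTIVE USE "OBJ SIZE" SLABS
# OBJ/SLAB "CACHE SIZE" NAME) and 4 KiB kernel pages.

SLABTOP_SAMPLE = """\
 2021784 2021783  99%    0.06K  33144       61    132576K size-64
   22650    8604  37%    0.05K    302       75      1208K buffer_head
   19684    3909  19%    0.14K    703       28      2812K dentry_cache
"""

def parse_slabtop(text):
    """Return {cache_name: fields} parsed from slabtop row text."""
    caches = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) != 8:
            continue  # skip headers and summary lines
        objs, active, _use, obj_size, slabs, _per_slab, cache_size, name = parts
        caches[name] = {
            "objs": int(objs),
            "active": int(active),
            "obj_kib": float(obj_size.rstrip("K")),
            "slabs": int(slabs),
            "cache_kib": int(cache_size.rstrip("K")),
        }
    return caches

caches = parse_slabtop(SLABTOP_SAMPLE)

# The biggest cache by footprint is the leak suspect; in this report it
# is size-64: ~2 million live 64-byte objects across 33144 slabs.
suspect = max(caches, key=lambda n: caches[n]["cache_kib"])
print(suspect, caches[suspect]["cache_kib"], "KiB")

# Sanity check against the 4 KiB page assumption:
# 33144 slabs * 4 KiB/page = 132576 KiB, matching CACHE SIZE.
assert caches["size-64"]["cache_kib"] == caches["size-64"]["slabs"] * 4
```

The same parser works on live `/proc/slabinfo`-derived output; a size-64 cache that grows monotonically while its USE stays near 100% is the classic signature of a kernel allocation leak like the one reported here.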