
Re: [Xen-users] [Xen-devel] unexpected Out Of Memory (OOM)



On Thu, Aug 08, 2013 at 01:43:08PM +0200, Olivier Bonvalet wrote:
> 
> 
> > On Thursday, August 08, 2013 at 11:18 +0100, Ian Campbell wrote:
> > On Thu, 2013-08-08 at 12:10 +0200, Olivier Bonvalet wrote:
> > > 
> > > On Thursday, August 08, 2013 at 09:58 +0100, Ian Campbell wrote:
> > > > On Wed, 2013-08-07 at 23:37 +0200, Olivier Bonvalet wrote:
> > > > > So I recompiled the kernel with the kmemleak feature. I get this
> > > > > kind of list, but I'm not sure it's useful:
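
(For reference, kmemleak is driven through debugfs. A minimal sketch,
assuming CONFIG_DEBUG_KMEMLEAK=y and debugfs mounted at the usual
/sys/kernel/debug:)

    # force a scan now instead of waiting for the periodic scan thread
    echo scan > /sys/kernel/debug/kmemleak
    # dump the suspected leaks, with allocation backtraces
    cat /sys/kernel/debug/kmemleak
    # clear the current list so the next scan shows only new leaks
    echo clear > /sys/kernel/debug/kmemleak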
> > > > 
> > > > These look to me like valid things to be allocating at boot time, and
> > > > even if they are leaked there isn't enough here to exhaust 8GB by a long
> > > > way.
> > > > 
> > > > It'd be worth monitoring to see if it grows at all or if anything
> > > > interesting shows up after running for a while with the leak.
> > > > 
> > > > Likewise it'd be worth keeping an eye on the process list and slabtop
> > > > and seeing if anything appears to be growing without bound.
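
(A minimal way to do that, as a sketch: take timestamped snapshots of
slabtop and the process list on an interval, then diff them later. File
names and interval here are arbitrary; the diff further down was produced
from two such snapshots.)

    # snapshot slab caches and processes every 10 minutes for later diffing
    while true; do
        t=$(date +%Y%m%d-%H%M%S)
        slabtop --once --sort=c > slab.$t    # --once prints one report and exits
        ps -e --sort=-rss > ps.$t            # process list, biggest RSS first
        sleep 600
    done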
> > > > 
> > > > Other than that I'm afraid I don't have many smart ideas.
> > > > 
> > > > Ian.
> > > > 
> > > > 
> > > 
> > > OK, then I'm going crazy: when I start the kernel with kmemleak=on,
> > > there is in fact no memory leak. Memory usage stays near 300MB.
> > > 
> > > Then I reboot into the same kernel, without kmemleak=on, and memory
> > > usage jumps to 600MB and keeps growing.
> > > 
> > > Olivier
> > > 
> > > PS: I retried several times to confirm this.
> > 
> > *boggles*
> > 
> > Ian.
> > 
> > 
> 
> 
> So I retried the slabtop test, after letting more memory leak, to get
> better visibility:
> 
> --- a 2013-08-08 12:29:48.437966407 +0200
> +++ c 2013-08-08 13:33:41.213711305 +0200
> @@ -1,23 +1,23 @@
> - Active / Total Objects (% used)    : 186382 / 189232 (98.5%)
> - Active / Total Slabs (% used)      : 6600 / 6600 (100.0%)
> - Active / Total Caches (% used)     : 100 / 151 (66.2%)
> - Active / Total Size (% used)       : 111474.55K / 113631.58K (98.1%)
> - Minimum / Average / Maximum Object : 0.33K / 0.60K / 8.32K
> + Active / Total Objects (% used)    : 2033635 / 2037851 (99.8%)
> + Active / Total Slabs (% used)      : 70560 / 70560 (100.0%)
> + Active / Total Caches (% used)     : 101 / 151 (66.9%)
> + Active / Total Size (% used)       : 1289959.44K / 1292725.98K (99.8%)
> + Minimum / Average / Maximum Object : 0.33K / 0.63K / 8.32K
>  
>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
> - 55048  55038  99%    0.56K   1966       28     31456K filp
> - 29536  29528  99%    0.50K    923       32     14768K cred_jar
> - 22909  22909 100%    0.51K    739       31     11824K dentry
> +831572 831552  99%    0.56K  29699       28    475184K filp
> +501664 501635  99%    0.50K  15677       32    250832K cred_jar
> +172453 172432  99%    0.51K   5563       31     89008K dentry
> +150920 150906  99%    0.91K   4312       35    137984K proc_inode_cache
> + 54686  54652  99%    0.43K   1478       37     23648K task_delay_info
> + 54656  54651  99%    1.98K   3416       16    109312K task_struct
> + 54652  54651  99%    1.19K   2102       26     67264K task_xstate
> + 54648  54644  99%    0.44K   1518       36     24288K pid
> + 54648  54645  99%    1.38K   2376       23     76032K signal_cache
> + 38200  38188  99%    0.38K   1910       20     15280K kmalloc-64
>   11803  11774  99%    0.43K    319       37      5104K sysfs_dir_cache
> -  7350   7327  99%    0.91K    210       35      6720K proc_inode_cache
> -  5520   5465  99%    0.38K    276       20      2208K anon_vma_chain
> -  5216   5137  98%    0.50K    163       32      2608K vm_area_struct
> -  3984   3978  99%    0.33K    166       24      1328K kmalloc-8
> -  3811   3798  99%    0.84K    103       37      3296K inode_cache
> -  3384   3359  99%    0.44K     94       36      1504K pid
> -  3381   3362  99%    1.38K    147       23      4704K signal_cache
> -  3380   3366  99%    1.19K    130       26      4160K task_xstate
> -  3376   3366  99%    1.98K    211       16      6752K task_struct
> -  3367   3367 100%    0.43K     91       37      1456K task_delay_info
> -  2886   2864  99%    0.42K     78       37      1248K buffer_head
> -  2720   2714  99%    0.93K     80       34      2560K shmem_inode_cache
> +  7920   7676  96%    0.38K    396       20      3168K anon_vma_chain
> +  7808   7227  92%    0.50K    244       32      3904K vm_area_struct
> +  5624   5581  99%    0.42K    152       37      2432K buffer_head
> +  4316   4308  99%    1.22K    166       26      5312K ext4_inode_cache
> +  3984   3977  99%    0.33K    166       24      1328K kmalloc-8
> 
> So in one hour, "filp" and "cred_jar" ate a lot of memory.
> 

filp is the slab cache for struct file, i.e. one object per open file.
cred_jar is the slab cache that stores credentials (struct cred).
Neither of these is Xen-specific.
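
If filp keeps growing, a rough way to see which processes are holding
files open is to count the fd entries under /proc (a quick sketch, not
exact for descriptors shared between processes):

    # count open file descriptors per process, largest first
    for p in /proc/[0-9]*; do
        n=$(ls "$p/fd" 2>/dev/null | wc -l)
        echo "$n ${p##*/} $(cat "$p/comm" 2>/dev/null)"
    done | sort -rn | head -20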

The dentry cache grows too, but that's not Xen-specific either.
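
The dentry numbers can also be watched directly, e.g.:

    # nr_dentry / nr_unused plus the raw slab-level counts
    cat /proc/sys/fs/dentry-state
    grep '^dentry' /proc/slabinfo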


Wei.

> But I have no idea what it is...
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users