
RE: [Xen-users] Memory Allocation



Here are the disk stats:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.13    0.00    0.21    0.06    0.04   98.58

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00    38.33    0.00   25.33     0.00     0.23    18.53     0.12    4.16   0.58   1.47
sdb1              0.00    38.33    0.00   25.33     0.00     0.23    18.53     0.12    4.16   0.58   1.47

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.15    0.00    0.18    0.60    0.04   98.02

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     9.30    0.00   18.27     0.00     0.11    12.22     1.52   83.13   2.84   5.18
sda1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00     9.30    0.00   18.27     0.00     0.11    12.22     1.52   83.13   2.84   5.18
sda3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00    19.93    1.33   22.26     0.00     0.19    17.24     0.85   36.68   2.70   6.38
sdb1              0.00    19.93    1.00   22.26     0.00     0.19    17.49     0.85   37.20   2.74   6.38


sdb1 is the iSCSI volume. I used "free -o -m" to show how much memory was
consumed at that point. It is possible the VMs had started paging, and that
caused the performance problem I saw. Either way, I really just need to
understand how memory assignment/allocation works in Xen so I can work out
where my memory is going, because by my math (40 VMs at a 1024 MB ceiling is
only about 40 GB) I should be getting far more than 40 machines on a server
with 147 GB of RAM.
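
I can also pull the hypervisor's own view of the allocations if that helps
(a sketch, assuming the standard Xen 3.3 xm toolstack):

vmc1n2:~ # xm info | grep -i memory    # total_memory/free_memory as the hypervisor sees them, in MiB
vmc1n2:~ # xm list                     # per-domain Mem(MiB) column, including Domain-0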

Here are a few memory snapshots from that same moment. As you can see,
different commands report different memory totals. Where did my 147 GB go?

vmc1n2:~ # free
             total       used       free     shared    buffers     cached
Mem:     131402752   19008992  112393760          0     401988   14859640
-/+ buffers/cache:    3747364  127655388
Swap:      2618584          0    2618584

vmc1n2:~ # free -o -m
             total       used       free     shared    buffers     cached
Mem:        128323      18563     109759          0        392      14511
Swap:         2557          0       2557
vmc1n2:~ #

xentop - 12:10:57   Xen 3.3.1_18546_20-0.1
30 domains: 1 running, 28 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 150984932k total, 148800660k used, 2184272k free    CPUs: 16 @ 2394MHz
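
If I am reading the two views right (assuming free above only reports dom0's
own allocation, since it runs inside dom0), the numbers roughly reconcile:

xentop total:  150984932 KiB ~= 144.0 GiB  (whole host)
free total:    131402752 KiB ~= 125.3 GiB  (dom0's current allocation)
difference:     19582180 KiB ~=  18.7 GiB  (domUs plus hypervisor overhead)
29 domUs x 512 MiB = 14848 MiB ~= 14.5 GiB, plus per-domain overhead

Does that mean dom0 is still holding the ~125 GiB balance, so new guests run
out once the ~2 GiB that xentop shows free is gone?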



-----Original Message-----
From: Fajar A. Nugraha [mailto:fajar@xxxxxxxxx] 
Sent: Monday, February 15, 2010 11:25 AM
To: Joseph Coleman
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Memory Allocation

On Mon, Feb 15, 2010 at 11:42 PM, Joseph Coleman
<joe.coleman@xxxxxxxxxxxxxxxxxx> wrote:
> All of our VMs are Windows-based and assigned 512 MB, with the ability to
> go up to 1024 MB.

Can HVM domUs balloon up memory? Not the last time I checked. Have you tried it?
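
For ballooning to do anything on an HVM guest, Windows would also need
PV/balloon drivers installed. The "512 MB up to 1024 MB" assignment
presumably comes from a config pair like this (a sketch of standard xm
config syntax, not your actual file):

memory = 512     # initial allocation in MiB
maxmem = 1024    # ceiling the guest could balloon up to, in MiB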

> I seem to be maxing out
> at 39 to 40 machines, and then performance is horrible and it appears I may
> be out of RAM.

And how did you determine they are out of RAM?

I'm more inclined to think that disk I/O is your problem. Try "iostat
-mx 3" on dom0.

-- 
Fajar

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

