Re: [Xen-users] out of memory with xen

As you can see, this is strictly cache... you could be right, of course, but
this is MORE than HALF of the memory! Is there a way to get the kernel to try
to flush the cache as much as possible (non-destructively, if possible)?

# xm list
Name                ID Mem(MiB) VCPUs State   Time(s)
Domain-0             0     4851     4 r-----  89515.8
xxxxxxxx             7      140     1 -b----  16382.0
xxxxxxxx            26      140     1 -b----    462.1
xxxxxxxx             3      268     1 -b----  23890.8
xxxxxxxx             4     1038     1 r----- 878503.7
xxxxxxxx            22      268     1 -b----   5406.6
xxxxxxxx            20      525     1 -b----   5537.4
xxxxxxxx             5      140     1 -b----  23039.8
xxxxxxxx            14      268     1 -b----  10236.9
xxxxxxxx             6      268     1 -b----  18234.1

# free
             total       used       free     shared    buffers     cached
Mem:       4967424    4947340      20084          0     153064    4262848
-/+ buffers/cache:     531428    4435996
Swap:      2104464        136    2104328

Sincerely,

Maarten

On Wednesday 27 June 2007 14:37, Petersson, Mats wrote:
> > -----Original Message-----
> > From: Maarten Vanraes [mailto:maarten@xxxxx]
> > Sent: 27 June 2007 13:28
> > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > Cc: Petersson, Mats; Arie Goldfeld
> > Subject: Re: [Xen-users] out of memory with xen
> >
> > Well, if that is so, how can I free the cache?
> >
> > I've read that an Out Of Memory kernel event should try to free as
> > much cache as possible...
>
> It may be that it's not cache that occupies the memory, perhaps?
>
> The "balloon" method that Xen uses for assigning one domain's memory to
> another domain works this way (simplistically):
> - On the domain giving up memory, it allocates memory (in a kernel
>   "balloon driver").
> - On the receiving domain, that memory is added to its list of available
>   memory.
>
> If it takes too long to find free memory, it will give up (in fact, for
> the release version of Xen, I believe that if it "fails to get the
> requested memory" it will stop allocating, even if it's still able to
> allocate more memory - there's a patch in unstable that allows it to
> continue as long as it's making progress, even if it's slower than
> "instant").
>
> The problem here is that the cache may be "dirty", meaning that data in
> the cache has to be written to the disk before it can be re-used for
> other purposes. This in turn means that if you suddenly need a large
> amount of space, it can take several seconds to clean up the dirty pages
> in the cache and allow them to be used.
>
> The other problem with giving Dom0 a huge amount of memory and then
> "ballooning" it off to other domains is that certain data structures in
> the kernel are allocated as a proportion of total memory, so some static
> data structure may be, for example, 3% of total memory. There are
> several different allocations made in this way. So if you try to shrink
> Dom0 by a large amount, it may run out of memory, even though it would
> be perfectly fine to run Dom0 at the smaller size from the start
> (because then these proportional data structures would not take up as
> large a share).
>
> --
> Mats
>
> > Sincerely,
> >
> > Maarten
> >
> > On Wednesday 27 June 2007 11:36, Petersson, Mats wrote:
> > > > -----Original Message-----
> > > > From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> > > > [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> > > > Maarten Vanraes
> > > > Sent: 27 June 2007 10:28
> > > > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > > > Cc: Arie Goldfeld
> > > > Subject: Re: [Xen-users] out of memory with xen
> > > >
> > > > Won't the caching require more memory anyway, and won't it start
> > > > to swap and ultimately start to kill processes?
> > >
> > > No, because the caches are used from "free memory", so if you don't
> > > start out with much memory, the cache won't grow so large.
> > > Essentially, the principle is that if you have some memory that
> > > isn't being used for "anything", it can be used as cache.
> > >
> > > --
> > > Mats
> > >
> > > > Sincerely,
> > > >
> > > > On Wednesday 27 June 2007 10:04, Arie Goldfeld wrote:
> > > > > You could use the dom0_mem grub parameter to restrict the size
> > > > > of RAM dom0 occupies; it looks something like this:
> > > > >
> > > > > kernel /xen.gz dom0_mem=400000
> > > > >
> > > > > On 6/27/07, Maarten Vanraes <maarten@xxxxx> wrote:
> > > > > > Our Xen server has 8GB RAM.
> > > > > >
> > > > > > dom0 is not doing anything, but has cached about 4.9GB of the
> > > > > > RAM, which results in failure to create new hosts.
> > > > > >
> > > > > > I'm using file-based storage, and I've read in the mailing
> > > > > > list archive about the problem being /dev/loop being used
> > > > > > internally in Xen.
> > > > > >
> > > > > > Is there a way to flush this cache? Is there already a fix
> > > > > > for the extreme caching of these devices?
> > > > > >
> > > > > > I see that LVM is a possible workaround, but I'd rather not
> > > > > > do this right now. Plus it is not so easy to convert 500GB to
> > > > > > LVM...
> > > > > >
> > > > > > Any solutions yet?
> > > > > >
> > > > > > Sincerely,
> > > > > > --
> > > > > > Maarten Vanraes
> > > > > > BA NV: IT & Security
> > > >
> > > > --
> > > > Maarten Vanraes
> > > > BA NV: IT & Security
> >
> > --
> > Maarten Vanraes
> > BA NV: IT & Security

--
Maarten Vanraes
BA NV: IT & Security

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
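
On Maarten's question about flushing the cache non-destructively: a minimal
sketch, assuming the dom0 kernel is 2.6.16 or later so that
/proc/sys/vm/drop_caches is available (the thread does not say which kernel
is in use):

  # Write dirty pages out first; drop_caches only discards clean pages.
  sync

  # 1 = page cache, 2 = dentries and inodes, 3 = both.
  echo 3 > /proc/sys/vm/drop_caches

  # Verify how much was reclaimed.
  free -m

This only discards clean, already-written data, so it is non-destructive, but
as Mats notes it does nothing for memory that is genuinely allocated rather
than cached.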
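
A sketch of the two ways of keeping dom0 small that come up in the thread
(Arie's dom0_mem boot parameter and the ballooning Mats describes), assuming
a Xen 3.x system with the xm toolstack; the 2048 MiB figure is only an
illustrative value:

  # GRUB entry: cap dom0's memory at boot, as in Arie's example
  # (his value is in KB; most Xen versions also accept a size suffix).
  kernel /xen.gz dom0_mem=2048M

  # At runtime: balloon dom0 down to 2048 MiB. This is the mechanism Mats
  # describes, so it can be slow if much of the page cache is dirty.
  xm mem-set Domain-0 2048

  # Confirm the new allocation.
  xm list

Mats's caveat applies here: a dom0 booted large and then shrunk a long way
can run short on its proportionally sized kernel structures, so booting with
dom0_mem already set low is the safer of the two.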
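
For the loop-device caching that started the thread, one possible alternative
to converting the 500GB of file-backed storage to LVM is to attach the images
through blktap instead of the loop driver, assuming a Xen 3.x dom0 with
blktap support; the path and device name below are made up for illustration:

  # domU config, loop-backed: reads go through dom0's page cache.
  disk = [ 'file:/srv/xen/guest.img,xvda,w' ]

  # domU config, blktap: tap:aio uses direct I/O, so the image data is not
  # held in the dom0 page cache that is filling up here.
  disk = [ 'tap:aio:/srv/xen/guest.img,xvda,w' ]

This targets the same problem as the LVM (phy:) workaround without having to
convert the existing images.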