Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus
hey, thanks for the tip, I've already got memory hotplug activated. Now it works fine with 7 domains, but none of them uses more than 256MB... I'd like to test the ballooning with more than 2GB of memory, but at the moment I don't have a live machine which needs that much memory... but with maxmem and hotplug, this defines the maximum, right?

greetings
Torben Viets

Dan Magenheimer wrote:

memory = 256
maxmem = 8192

By the way, I'm not sure if you knew this, but the above two lines don't work as you might want. The maxmem is ignored. The domain is launched (in this example) with 256MB of memory and (at least without hot-plug memory support in the guest) memory can only be decreased from there, not increased. So to run a guest which adjusts between 256MB and 8192MB of memory, you must launch it with 8192MB and balloon it down to 256MB. If Xen does not have 8192MB free at launch, launching the domain will fail.

Dan
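A minimal sketch of the launch-big-then-balloon-down approach Dan describes above, reusing the domain name and sizes from the vm.cfg quoted further down in this thread; the exact commands assume the Xen 3.x xm toolstack and are only an illustration:

# in the domain config, start the guest at the largest size it may ever need
memory = 8192

# once the guest is up, balloon it down to its normal working size
xm mem-set test.work.de 256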
-----Original Message-----
From: Torben Viets [mailto:viets@xxxxxxx]
Sent: Friday, May 16, 2008 10:51 AM
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Cc: dan.magenheimer@xxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus

Dan Magenheimer wrote:

(Your reply came to me but not to the list... not sure why. So I've attached your full reply below.)

thanks, hope this time it works...

ah ok, that is my failure, I need a bigger swap disk ;)

Yes, definitely. If you are creating the swap disk on an ext3 filesystem, you might try using sparse files. They won't take up much disk space unless/until they get swapped to. There might be some performance ramifications though. (My testing has been with the swap disk as a logical volume, so I can't try sparse.)

Ok, our plan is to have a high-availability Xen farm. We're starting with two Sun X2200s, each with 16GB RAM. The reason we'd like to use selfballooning is peak traffic on a server: normally a server needs about 256MB, but when it needs more, it shouldn't be a problem to give it 4GB. The idea is not to overbook the memory, but to have the ability to get rid of memory failures caused by peaks.

Exactly what it is intended for! I'd be interested in how it works for guests with memory=4096 and higher. All of my testing so far has been on a machine with only 2GB of physical memory, so I can test lots of guests but no large guests.

I'll test it on Monday, now I'm going into my weekend ;) but I think that I wasn't able to get more than 2GB RAM allocated; I will test it again on Monday.

PS: In my first mail I attached my whole signature; I've removed it because I get enough spam ;)

Thanks
Torben Viets

Thanks,
Dan

-----Original Message-----
From: viets@xxxxxxx [mailto:viets@xxxxxxx]
Sent: Friday, May 16, 2008 9:49 AM
To: dan.magenheimer@xxxxxxxxxx; xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus

Dan Magenheimer wrote:

thanks for the patch, I was waiting for this feature.

Thanks very much for the testing and feedback! Could you comment on what you plan to use it for? (Keir hasn't accepted it yet, so I am looking for user support ;-)

Ok, our plan is to have a high-availability Xen farm. We're starting with two Sun X2200s, each with 16GB RAM. The reason we'd like to use selfballooning is peak traffic on a server: normally a server needs about 256MB, but when it needs more, it shouldn't be a problem to give it 4GB. The idea is not to overbook the memory, but to have the ability to get rid of memory failures caused by peaks.

First question: Do you have a swap (virtual) disk configured and, if so, how big is it? (Use "swapon -s"; the size shows in KB.) Selfballooning shouldn't be run in a domain with no swap disk. Also, how big is your "memory=" in your vm.cfg file?

#kernel = "/boot/xen-3.2.0/vmlinuz-2.6.18.8-xenU"
#kernel = "/boot/vmlinuz-2.6.18.8-xenU"
kernel = "/boot/vmlinuz-selfballooning"
memory = 256
maxmem = 8192
vcpu = 4
name = "test.work.de"
vif = [ 'bridge=xenvlan323' ]
disk = [ 'phy:/dev/sda,hda,w', 'file:/var/swap.img,hdb,w' ]
root = "/dev/hda ro"
extra = 'xencons=tty'

swap_size = 256M

I'm not able to reproduce your dd failure at all, even with bs=2047M (dd doesn't permit larger values for bs). Your program (I called it "mallocmem") does eventually fail for me, but not until i==88. However, I have a 2GB swap disk configured.

ah ok, that is my failure, I need a bigger swap disk ;)

I think both tests are really measuring the total virtual memory space configured, i.e. the sum of physical memory (minus kernel overhead) and configured swap space. I think you will find that both will fail similarly with ballooning off, and even on a physical system, just at different points in virtual memory usage. Indeed, by adding additional output to mallocmem, I can see that it fails exactly when it attempts to malloc memory larger than the CommitLimit value in /proc/meminfo. I expect the same is true for the dd test. Note that CommitLimit DOES go down when memory is ballooned out from a guest.

So your test does point out to me that I should include a warning in the documentation not only that a swap disk should be configured, but also that the swap disk should be configured larger for a guest if selfballooning will be turned on.

Thanks, Dan

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of viets@xxxxxxx
Sent: Friday, May 16, 2008 3:36 AM
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] [PATCH] balloon: selfballooning and post memory info via xenbus

Hello,

thanks for the patch, I was waiting for this feature. I've tried this patch and I've seen that if I malloc a great amount of memory at once, it fails, but if I malloc a small size first and then resize it slowly, it works.

This is the highly sophisticated (:p) program I use to test the ballooning:
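The test program itself was not preserved in this copy of the archive. Purely as a sketch of the behaviour described above (an allocation that is grown step by step and touched until the commit limit is exceeded), it might have looked something like the following; the 64MB step size, the realloc-based growth, and the output format are assumptions, not the original code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define STEP (64UL * 1024 * 1024)  /* grow by 64MB per iteration; the step size is arbitrary */

int main(void)
{
    char *buf = NULL;
    size_t size = 0;
    int i;

    for (i = 0; ; i++) {
        char *tmp = realloc(buf, size + STEP);
        if (tmp == NULL) {
            /* as Dan observed, failure happens once the request exceeds
             * CommitLimit in /proc/meminfo */
            printf("allocation failed at i=%d (%lu MB)\n", i,
                   (unsigned long)(size >> 20));
            break;
        }
        buf = tmp;
        size += STEP;
        /* touch the newly added pages so the memory is really committed */
        memset(buf + size - STEP, 1, STEP);
        printf("i=%d: %lu MB allocated\n", i, (unsigned long)(size >> 20));
    }
    free(buf);
    return 0;
}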
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel