
Re: [Xen-devel] What's the effect of EXTRA_MEM_RATIO



Sorry about the delayed response, but I've again run into this magic number 10.

While reading and doing more work on this topic I found a two-year-old commit which gives some clue: https://github.com/torvalds/linux/commit/d312ae878b6aed3912e1acaaf5d0b2a9d08a4f11

It says that the reserved low memory defaults to 1/32 of total RAM, so I think an EXTRA_MEM_RATIO of up to 32 should be fine, but it gives no clue about the number 10.

In particular, the commit https://github.com/torvalds/linux/commit/698bb8d14a5b577b6841acaccdf5095d3b7c7389 says that 10x "seems like a reasonable balance", but could I send a pull request to raise it to 16 or 20?
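
To get a feel for what raising the ratio would actually change, I wrote the small user-space sketch below. It just evaluates the clamp from arch/x86/xen/setup.c that Konrad quoted in the exchange further down (extra_pages = min(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)), extra_pages)) for a few ratio values. The 2GB startup / 16GB maxmem figures are only assumptions matching my test DomU, and it deliberately ignores everything else xen_memory_setup() does (alignment, released pages, the max_pages query), so it will not reproduce the e820 maps below exactly:

/* Rough user-space sketch of the clamp in arch/x86/xen/setup.c:
 *   extra_pages = min(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
 *                     extra_pages);
 * Assumes a 64-bit DomU booting with 2GB and a 16GB maxmem; everything
 * else xen_memory_setup() does is ignored.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t min_u64(uint64_t a, uint64_t b) { return a < b ? a : b; }

int main(void)
{
    const uint64_t page = 4096;
    uint64_t base_pages  = (2ULL  << 30) / page;   /* startup allocation   */
    uint64_t extra_pages = (14ULL << 30) / page;   /* maxmem minus startup */
    int ratios[] = { 1, 10, 16, 20, 32 };

    for (unsigned i = 0; i < sizeof(ratios) / sizeof(ratios[0]); i++) {
        /* On 64-bit MAXMEM is huge, so min(max_pfn, PFN_DOWN(MAXMEM))
         * is simply max_pfn, i.e. the startup allocation. */
        uint64_t capped = min_u64((uint64_t)ratios[i] * base_pages, extra_pages);
        printf("EXTRA_MEM_RATIO=%2d -> extra memory capped at %2llu GB\n",
               ratios[i], (unsigned long long)(capped * page >> 30));
    }
    return 0;
}

If that arithmetic is right, then for a DomU that starts with 2GB any ratio of 7 or more already allows the full 14GB of extra space, so going from 10 to 16 or 20 would mostly grow the struct page bookkeeping rather than buy more balloon headroom for a guest of this shape.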

Any ideas?


On Mon, Jun 3, 2013 at 11:20 PM, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
On Mon, Jun 03, 2013 at 09:58:36PM +0530, Rushikesh Jadhav wrote:
> On Mon, Jun 3, 2013 at 5:40 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@xxxxxxxxxx> wrote:
>
> > On Sun, Jun 02, 2013 at 02:57:11AM +0530, Rushikesh Jadhav wrote:
> > > Hi guys,
> > >
> > > I'm fairly new to Xen development and am trying to understand ballooning.
> >
> > OK.
> > >
> > > While compiling a DomU kernel I'm trying to understand the e820 memory map
> > > w.r.t. Xen,
> > >
> > > I have modified EXTRA_MEM_RATIO in arch/x86/xen/setup.c to 1 and can see
> > > that the guest cannot balloon up more than 2GB. Below is the memory map of
> > > the DomU with max mem set to 16GB.
> > >
> > > for EXTRA_MEM_RATIO  = 1
> > >
> > > BIOS-provided physical RAM map:
> > >  Xen: 0000000000000000 - 00000000000a0000 (usable)
> > >  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> > >  Xen: 0000000000100000 - 0000000080000000 (usable)
> > >  Xen: 0000000080000000 - 0000000400000000 (unusable)
> > > NX (Execute Disable) protection: active
> > > DMI not present or invalid.
> > > e820 update range: 0000000000000000 - 0000000000010000 (usable) ==>
> > > (reserved)
> > > e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
> > > No AGP bridge found
> > > last_pfn = 0x80000 max_arch_pfn = 0x400000000
> > > initial memory mapped : 0 - 0436c000
> > > Base memory trampoline at [ffff88000009b000] 9b000 size 20480
> > > init_memory_mapping: 0000000000000000-0000000080000000
> > >  0000000000 - 0080000000 page 4k
> > > kernel direct mapping tables up to 80000000 @ bfd000-1000000
> > > xen: setting RW the range fd6000 - 1000000
> > >
> > >
> > > For EXTRA_MEM_RATIO = 10 the map is as below, and the guest can balloon up
> > > to 16GB.
> > >
> >
> > Right, that is the default value.
> >
>
> What are the good or bad effects of making it 20?
> I found that increasing this number causes base memory to fill up (by many
> MBs) and widens the range from base to max memory.

That sounds about right. I would suggest you look at the free Linux
kernel book, specifically the section that deals with 'struct page',
lowmem and highmem. That should explain what is consuming the lowmem.
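
As a rough back-of-the-envelope sketch (the ~64 bytes per struct page is an approximation for x86-64 and depends on the kernel configuration), the bookkeeping cost grows with every page frame the kernel might ever own, not just the pages it currently has:

/* Rough estimate of the 'struct page' cost of covering more memory.
 * Every 4KiB page frame the kernel may ever own needs a struct page,
 * allocated up front from the memory the guest boots with; the 64-byte
 * size is an approximation for x86-64 and varies with kernel config.
 */
#include <stdio.h>

int main(void)
{
    const unsigned long long page_size   = 4096; /* bytes per page frame */
    const unsigned long long struct_page = 64;   /* approx bytes, x86-64 */
    const unsigned long long covered_gb[] = { 2, 16, 32 };

    for (unsigned i = 0; i < sizeof(covered_gb) / sizeof(covered_gb[0]); i++) {
        unsigned long long frames   = (covered_gb[i] << 30) / page_size;
        unsigned long long overhead = frames * struct_page;
        printf("covering %2llu GB of possible RAM -> ~%llu MB of struct page\n",
               covered_gb[i], overhead >> 20);
    }
    return 0;
}

So the many MBs of base memory that disappear when the ratio (and with it the covered range) goes up are largely this fixed per-frame metadata, which on 32-bit kernels has to fit in lowmem.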

>
>
> >
> > > BIOS-provided physical RAM map:
> > >  Xen: 0000000000000000 - 00000000000a0000 (usable)
> > >  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> > >  Xen: 0000000000100000 - 0000000400000000 (usable)
> > > NX (Execute Disable) protection: active
> > > DMI not present or invalid.
> > > e820 update range: 0000000000000000 - 0000000000010000 (usable) ==>
> > > (reserved)
> > > e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
> > > No AGP bridge found
> > > last_pfn = 0x400000 max_arch_pfn = 0x400000000
> > > last_pfn = 0x100000 max_arch_pfn = 0x400000000
> > > initial memory mapped : 0 - 0436c000
> > > Base memory trampoline at [ffff88000009b000] 9b000 size 20480
> > > init_memory_mapping: 0000000000000000-0000000100000000
> > >  0000000000 - 0100000000 page 4k
> > > kernel direct mapping tables up to 100000000 @ 7fb000-1000000
> > > xen: setting RW the range fd6000 - 1000000
> > > init_memory_mapping: 0000000100000000-0000000400000000
> > >  0100000000 - 0400000000 page 4k
> > > kernel direct mapping tables up to 400000000 @ 601ef000-62200000
> > > xen: setting RW the range 619fb000 - 62200000
> > >
> > >
> > >
> > > Can someone please help me understand its behavior and importance ?
> >
> > Here is the explanation from the code:
> >
> > 384         /*
> > 385          * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
> > 386          * factor the base size.  On non-highmem systems, the base
> > 387          * size is the full initial memory allocation; on highmem it
> > 388          * is limited to the max size of lowmem, so that it doesn't
> > 389          * get completely filled.
> > 390          *
> >
>
> "highmem is limited to the max size of lowmem"
> Does it mean "1/3", the maximum possible memory, or the startup memory?

For my answer to make sense I would steer you toward looking at what
highmem and lowmem are. That should give you an idea of the memory
limitations 32-bit kernels have.
> In what cases can it get completely filled?

Yes.
>
>
> > 391          * In principle there could be a problem in lowmem systems if
> > 392          * the initial memory is also very large with respect to
> > 393          * lowmem, but we won't try to deal with that here.
> > 394          */
> > 395         extra_pages = min(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
> > 396                           extra_pages);
> >
> > I am unclear on what exactly you want to learn. The hypercalls, or how
> > the ballooning happens? If so, I would recommend you work backwards - look
> > at the balloon driver itself, how it decreases/increases the memory, and
> > what data structures it uses to figure out how much memory it can use. Then
> > you can go back to setup.c to get an idea of how the E820 is being created.
> >
> >
> Thanks. I'll check more from drivers/xen/balloon.c
>
>
> >
> > >
> > > Thanks.
> >


 

