
Re: [Xen-devel] [PATCH] xen: increase static dmesg buffer to 64K



>>> On 17.07.11 at 17:43, Olaf Hering <olaf@xxxxxxxxx> wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@xxxxxxxxx>
> # Date 1310917380 -7200
> # Node ID c6cade90d47f32e19f529930ba9f9acfa69f065f
> # Parent  31dd84463eece20bd01c7aee22b52a0c06c67545
> xen: increase static dmesg buffer to 64K
> 
> On large systems the boot messages overflow the static 16K dmesg
> buffer, and early messages are lost. Increase the size to 64K to
> capture all lines on systems without a serial console.

Please don't - on small systems it's a waste, and on even larger
systems it still won't help. If anything, the dynamic allocation may
need to happen earlier. As you probably saw, console_init_postirq()
already sizes the buffer depending on the number of CPUs in the
system.
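
For illustration only, a minimal standalone sketch of CPU-count-based
ring sizing (this is not Xen's actual console_init_postirq(); the
2K-per-CPU growth factor, the helper names, and the use of calloc()
are assumptions):

    /* Minimal standalone sketch: grow the console ring once the CPU
     * count is known, keeping whatever the static boot-time buffer has
     * already captured.  The earlier this runs, the fewer early
     * messages are lost on large systems. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define STATIC_CONRING_SIZE 16384

    static char static_conring[STATIC_CONRING_SIZE];
    static char *conring = static_conring;
    static unsigned int conring_size = STATIC_CONRING_SIZE;

    /* Round up to a power of two so (i & (size - 1)) indexing works. */
    static unsigned int round_up_pow2(unsigned int x)
    {
        unsigned int p = 1;
        while (p < x)
            p <<= 1;
        return p;
    }

    static void conring_resize(unsigned int num_cpus)
    {
        unsigned int new_size =
            round_up_pow2(STATIC_CONRING_SIZE + num_cpus * 2048);
        char *new_ring;

        if (new_size <= conring_size)
            return;
        new_ring = calloc(1, new_size);
        if (!new_ring)
            return;                  /* keep using the static ring */

        /* A real implementation must copy relative to the producer
         * index so a ring that already wrapped keeps its oldest data. */
        memcpy(new_ring, conring, conring_size);
        conring = new_ring;
        conring_size = new_size;
    }

    int main(void)
    {
        conring_resize(256);         /* e.g. a 256-CPU system */
        printf("conring_size = %u bytes\n", conring_size);
        return 0;
    }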

Additionally, I think we greatly reduced the number of per-CPU
messages printed by default. So another thing to do would be to look
into completely suppressing all per-CPU messages by default if these
are still causing trouble.
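
A standalone sketch of what such default-off gating could look like
(the flag name and the percpu_printk() wrapper are purely illustrative,
not existing Xen code):

    /* Illustrative only: per-CPU boot chatter is silent unless a flag,
     * e.g. one set from the command line, turns it on. */
    #include <stdarg.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool opt_percpu_info;          /* default off */

    static void percpu_printk(const char *fmt, ...)
    {
        va_list args;

        if (!opt_percpu_info)
            return;                       /* suppressed by default */
        va_start(args, fmt);
        vprintf(fmt, args);
        va_end(args);
    }

    int main(void)
    {
        for (int cpu = 0; cpu < 4; cpu++)
            percpu_printk("CPU%d: booted\n", cpu);  /* silent by default */
        return 0;
    }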

Jan

> Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
> 
> diff -r 31dd84463eec -r c6cade90d47f xen/drivers/char/console.c
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -53,7 +53,7 @@ boolean_param("console_timestamps", opt_
>  static uint32_t __initdata opt_conring_size;
>  size_param("conring_size", opt_conring_size);
>  
> -#define _CONRING_SIZE 16384
> +#define _CONRING_SIZE (64 * 1024)
>  #define CONRING_IDX_MASK(i) ((i)&(conring_size-1))
>  static char __initdata _conring[_CONRING_SIZE];
>  static char *__read_mostly conring = _conring;
> 
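
One detail worth noting in the quoted hunk: CONRING_IDX_MASK() masks
with conring_size - 1, so the ring size must remain a power of two,
which both 16384 and 64 * 1024 satisfy. A tiny standalone illustration
of that masking (RING_SIZE and RING_IDX_MASK are illustrative names,
not the Xen ones):

    /* Power-of-two masking makes (i & (size - 1)) equivalent to
     * i % size, which is what the ring indexing relies on. */
    #include <assert.h>
    #include <stdio.h>

    #define RING_SIZE        (64 * 1024)      /* must be a power of two */
    #define RING_IDX_MASK(i) ((i) & (RING_SIZE - 1))

    int main(void)
    {
        /* A power of two has exactly one bit set. */
        assert((RING_SIZE & (RING_SIZE - 1)) == 0);

        unsigned int idx = RING_SIZE + 123;   /* one full wrap plus 123 */
        printf("%u -> %u\n", idx, RING_IDX_MASK(idx));  /* 65659 -> 123 */
        return 0;
    }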




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

