
[Xen-devel] Re: [PATCH 1 of 3] xentrace: correct formula to calculate t_info_pages



Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

On Wed, 2011-03-30 at 19:04 +0100, Olaf Hering wrote:
> # HG changeset patch
> # User Olaf Hering <olaf@xxxxxxxxx>
> # Date 1301423840 -7200
> # Node ID 8a2ce5e49b2c5f2e013734b5d53eae37572f4101
> # Parent  45eeeb6d0481efaab2a59941e1b8e061aead37d4
> xentrace: correct formula to calculate t_info_pages
> 
> The current formula for calculating t_info_pages, carried over from
> the initial code, is slightly incorrect and may allocate more pages
> than needed.  Each cpu's list of pages/mfns is stored as uint32_t
> values, placed at an offset within tinfo.
> 
> Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
> 
> ---
>  xen/common/trace.c |    7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff -r 45eeeb6d0481 -r 8a2ce5e49b2c xen/common/trace.c
> --- a/xen/common/trace.c      Tue Mar 29 16:34:01 2011 +0100
> +++ b/xen/common/trace.c      Tue Mar 29 20:37:20 2011 +0200
> @@ -110,7 +110,7 @@
>  {
>      struct t_buf dummy;
>      typeof(dummy.prod) size;
> -    unsigned int t_info_words, t_info_bytes;
> +    unsigned int t_info_words;
>  
>      /* force maximum value for an unsigned type */
>      size = -1;
> @@ -125,9 +125,8 @@
>          pages = size;
>      }
>  
> -    t_info_words = num_online_cpus() * pages + t_info_first_offset;
> -    t_info_bytes = t_info_words * sizeof(uint32_t);
> -    t_info_pages = PFN_UP(t_info_bytes);
> +    t_info_words = num_online_cpus() * pages * sizeof(uint32_t);
> +    t_info_pages = PFN_UP(t_info_first_offset + t_info_words);
>      printk(XENLOG_INFO "xentrace: requesting %u t_info pages "
>             "for %u trace pages on %u cpus\n",
>             t_info_pages, pages, num_online_cpus());
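
For illustration only (not part of the patch): a small standalone C
sketch that evaluates the old and the new formula side by side.  The
input values, PAGE_SIZE and PFN_UP below are made-up stand-ins for the
Xen definitions, chosen so the difference is visible.

    /* Sketch only: compare the old and new t_info_pages calculations. */
    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u
    #define PFN_UP(x) (((x) + PAGE_SIZE - 1) / PAGE_SIZE)  /* round up to pages */

    int main(void)
    {
        unsigned int cpus = 16, pages = 127;     /* example configuration */
        unsigned int t_info_first_offset = 64;   /* example offset value */

        /* Old formula: the offset is scaled by sizeof(uint32_t) as well. */
        unsigned int old_words = cpus * pages + t_info_first_offset;
        unsigned int old_pages = PFN_UP(old_words * sizeof(uint32_t));

        /* New formula: only the per-cpu mfn lists are scaled to bytes,
         * the offset is added once. */
        unsigned int new_bytes = cpus * pages * sizeof(uint32_t);
        unsigned int new_pages = PFN_UP(t_info_first_offset + new_bytes);

        printf("old: %u t_info pages, new: %u t_info pages\n",
               old_pages, new_pages);            /* old: 3, new: 2 */
        return 0;
    }

With these example numbers the old formula rounds up to one page more
than the new one, which is the over-allocation the changeset
description refers to.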



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel