
[Xen-changelog] [xen-unstable] xentrace: fix type of offset to avoid out-of-bounds access



# HG changeset patch
# User Olaf Hering <olaf@xxxxxxxxx>
# Date 1306409730 -3600
# Node ID 3057b531d905fe82dcd8e490e6616bdbbcb59063
# Parent  dd0eb070ee44835324084a343140c87c6b08265c
xentrace: fix type of offset to avoid out-of-bounds access

Update the type of the local offset variable to match the type of the
field in which its value is stored. Also update the type of
t_info_first_offset, because it has the same limited range.

Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
---


diff -r dd0eb070ee44 -r 3057b531d905 xen/common/trace.c
--- a/xen/common/trace.c        Thu May 26 12:34:44 2011 +0100
+++ b/xen/common/trace.c        Thu May 26 12:35:30 2011 +0100
@@ -106,7 +106,7 @@
  * The t_info layout is fixed and cant be changed without breaking xentrace.
  * Initialize t_info_pages based on number of trace pages.
  */
-static int calculate_tbuf_size(unsigned int pages, uint32_t t_info_first_offset)
+static int calculate_tbuf_size(unsigned int pages, uint16_t t_info_first_offset)
 {
     struct t_buf dummy_size;
     typeof(dummy_size.prod) max_size;
@@ -170,8 +170,8 @@
     int i, cpu, order;
     /* Start after a fixed-size array of NR_CPUS */
     uint32_t *t_info_mfn_list;
-    uint32_t t_info_first_offset;
-    int offset;
+    uint16_t t_info_first_offset;
+    uint16_t offset;
 
     if ( t_info )
         return -EBUSY;
@@ -179,7 +179,7 @@
     if ( pages == 0 )
         return -EINVAL;
 
-    /* Calculate offset in u32 of first mfn */
+    /* Calculate offset in units of u32 of first mfn */
     t_info_first_offset = calc_tinfo_first_offset();
 
     pages = calculate_tbuf_size(pages, t_info_first_offset);
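
For context: per the patch, these offsets are stored in uint16_t
fields of the t_info structure, counted in units of u32. A wider
local variable can therefore compute a value that silently wraps when
it is written back, so a range check done against the wider type no
longer matches what was actually stored. Below is a minimal
standalone C sketch of that narrowing (hypothetical names, not code
from the patch):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for t_info: per-cpu MFN offsets are
     * stored as uint16_t, in units of uint32_t words. */
    struct t_info_like {
        uint16_t mfn_offset[4];
    };

    int main(void)
    {
        struct t_info_like info = { { 0 } };
        uint32_t offset = 70000;   /* exceeds UINT16_MAX (65535) */

        /* The store silently truncates: 70000 mod 65536 == 4464.
         * Any later lookup that trusts the stored offset points at
         * the wrong place, the kind of mismatch that ends in
         * out-of-bounds accesses. */
        info.mfn_offset[0] = (uint16_t)offset;

        printf("computed %" PRIu32 ", stored %" PRIu16 "\n",
               offset, info.mfn_offset[0]);
        return 0;
    }

Declaring the local as uint16_t, as the patch does, keeps the
computation, the range check in calculate_tbuf_size(), and the stored
value in the same domain.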
