
Re: [Xen-devel] [PATCH] xentrace: fix off-by-one in calculate_tbuf_size



>>> On 01.03.13 at 16:24, Olaf Hering <olaf@xxxxxxxxx> wrote:
> xentrace: fix off-by-one in calculate_tbuf_size
> 
> Commit "xentrace: reduce trace buffer size to something mfn_offset can
> reach" contains an off-by-one bug. max_mfn_offset needs to be reduced by
> exactly the value of t_info_first_offset.
> 
> If the system has two cpus and the number of requested trace pages is
> very large, the final number of trace pages plus the offset will not
> fit into a short. As a result, the variable offset in alloc_trace_bufs()
> will wrap while allocating buffers for the second cpu. Later,
> share_xen_page_with_privileged_guests() will be called with a wrong page
> and the ASSERT in that function triggers. If the ASSERT is ignored by
> running a non-debug hypervisor, the asserts in xentrace itself trigger
> instead, because "cons" is not aligned: the very last trace page for
> the second cpu is a random mfn.
> 
> Thanks to Jan for the quick analysis.
> 
> 
> Signed-off-by: Olaf Hering <olaf@xxxxxxxxx>
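
For illustration, here is a minimal sketch of that wrap outside of Xen. It
assumes, per the description above, that the offset variable is 16 bits
wide; the first offset of 4 and the per-cpu page count of 65532 are
hypothetical example values, not taken from the patch (65532 happens to be
what the pre-patch bound would permit for two cpus with that first offset).

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical values: the offset variable is assumed to be 16 bits
     * wide, and t_info_first_offset is assumed to be 4. */
    uint16_t offset = 4;             /* assumed t_info_first_offset */
    unsigned int pages = 65532;      /* "very large" per-cpu trace page count */
    unsigned int cpu;

    for ( cpu = 0; cpu < 2; cpu++ )
    {
        printf("cpu %u: mfn array starts at offset %u\n",
               cpu, (unsigned int)offset);
        offset += pages;             /* after cpu 0: 4 + 65532 wraps to 0 */
    }
    return 0;
}

With a wrapped offset the second cpu's mfn list is recorded at the wrong
place, which is how share_xen_page_with_privileged_guests() ends up being
handed a wrong page.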

George?

Thanks, Jan

> diff -r 3eb62c576a1a -r 13c43170cd5f xen/common/trace.c
> --- a/xen/common/trace.c
> +++ b/xen/common/trace.c
> @@ -133,7 +133,7 @@ static int calculate_tbuf_size(unsigned 
>       * The array of mfns for the highest cpu can start at the maximum value
>       * mfn_offset can hold. So reduce the number of cpus and also the mfn_offset.
>       */
> -    max_mfn_offset -= t_info_first_offset - 1;
> +    max_mfn_offset -= t_info_first_offset;
>      max_cpus--;
>      if ( max_cpus )
>          max_mfn_offset /= max_cpus;
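
To see why reducing by t_info_first_offset - 1 is one short, here is a
sketch of just the mfn_offset bound from calculate_tbuf_size(), before and
after the patch, with the same hypothetical first offset of 4 and two
online cpus (the other clamps in the real function are omitted). The second
cpu's mfn array starts at t_info_first_offset + pages, and that start must
still fit into 16 bits.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical values: t_info_first_offset assumed to be 4, two
     * online cpus, and a 16-bit mfn_offset field. */
    unsigned int t_info_first_offset = 4;
    unsigned int max_cpus = 2;
    unsigned int max_mfn_offset = (uint16_t)-1;          /* 0xffff */

    /* Pre-patch bound: one page too many. */
    unsigned int old_pages =
        (max_mfn_offset - (t_info_first_offset - 1)) / (max_cpus - 1);
    /* Post-patch bound. */
    unsigned int new_pages =
        (max_mfn_offset - t_info_first_offset) / (max_cpus - 1);

    printf("pre-patch : %u pages, second cpu offset %u (> 0xffff, wraps)\n",
           old_pages, t_info_first_offset + old_pages);
    printf("post-patch: %u pages, second cpu offset %u (fits)\n",
           new_pages, t_info_first_offset + new_pages);
    return 0;
}

With the pre-patch bound the second cpu's offset comes out as 65536, one
past what a 16-bit field can hold; with the patch it is exactly 65535.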