
Re: [Xen-devel] [PATCH] xentrace: handle sparse cpu ids correctly in xen trace buffer handling



>>> On 10.09.18 at 13:34, <jgross@xxxxxxxx> wrote:
> On 30/08/18 11:28, Juergen Gross wrote:
>> On 30/08/18 10:26, Jan Beulich wrote:
>>>>>> On 30.08.18 at 09:52, <jgross@xxxxxxxx> wrote:
>>>> @@ -202,7 +202,7 @@ static int alloc_trace_bufs(unsigned int pages)
>>>>       * Allocate buffers for all of the cpus.
>>>>       * If any fails, deallocate what you have so far and exit.
>>>>       */
>>>> -    for_each_online_cpu(cpu)
>>>> +    for_each_present_cpu(cpu)
>>>>      {
>>>>          offset = t_info_first_offset + (cpu * pages);
>>>>          t_info->mfn_offset[cpu] = offset;
>>>
>>> Doesn't this go a little too far? Why would you allocate buffers for CPUs
>>> which can never be brought online? There ought to be a middle ground,
>>> where online-able CPUs have buffers allocated, but non-online-able ones
>>> won't. On larger systems I guess the difference may be quite noticeable.
>> 
>> According to the comments in include/xen/cpumask.h, cpu_present_map
>> represents the populated cpus.
>> 
>> I know that currently there is no support for onlining a parked cpu
>> again, but if onlining of parked cpus is ever added, having to remember
>> to adjust the Xentrace buffer allocation would almost certainly
>> introduce a bug.
>> 
>> Xentrace is used for testing purposes only. So IMHO allocating some more
>> memory is acceptable.
> 
> Are you fine with my reasoning, or do you still want me to avoid buffer
> allocation for offline cpus?

I don't object to it, but I'm also not overly happy. IOW - I'd like to leave
it to George as the maintainer of the code (who in turn might leave it to
you).

Jan
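
For context, below is a minimal, self-contained sketch of the loop the quoted
hunk changes. It is a toy model rather than Xen code: the real
for_each_online_cpu()/for_each_present_cpu() macros and the
cpu_online_map/cpu_present_map definitions live in xen/include/xen/cpumask.h,
while the for_each_cpu_in()/cpu_isset() helpers, the mask values, NR_CPUS,
pages, t_info_first_offset and the mfn_offset array here are invented purely
to show why iterating the present map matters when online cpu ids are sparse
(a parked cpu leaves a hole in the online map, but its id still needs a valid
buffer offset).

#include <stdio.h>

#define NR_CPUS 8

/* Toy CPU maps: cpu 3 is populated ("present") but parked/offline. */
static const unsigned long cpu_online_map  = 0xf7UL; /* cpus 0-2, 4-7 */
static const unsigned long cpu_present_map = 0xffUL; /* cpus 0-7      */

#define cpu_isset(cpu, map)  (((map) >> (cpu)) & 1UL)

/* Stand-in for Xen's for_each_online_cpu()/for_each_present_cpu(). */
#define for_each_cpu_in(cpu, map)                \
    for ( (cpu) = 0; (cpu) < NR_CPUS; (cpu)++ )  \
        if ( cpu_isset(cpu, map) )

int main(void)
{
    const unsigned int pages = 4, t_info_first_offset = 2;
    unsigned int cpu, offset;
    unsigned int mfn_offset[NR_CPUS] = { 0 };

    /*
     * Mirrors the patched loop: every populated cpu gets a slot, so
     * mfn_offset[cpu] stays valid even for ids that are currently
     * offline (e.g. a parked cpu that might be onlined later).
     */
    for_each_cpu_in(cpu, cpu_present_map)
    {
        offset = t_info_first_offset + (cpu * pages);
        mfn_offset[cpu] = offset;
    }

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        printf("cpu%u: online=%lu present=%lu mfn_offset=%u\n",
               cpu, cpu_isset(cpu, cpu_online_map),
               cpu_isset(cpu, cpu_present_map), mfn_offset[cpu]);

    return 0;
}

With the online map, the parked cpu's slot would be skipped and left at zero;
iterating the present map keeps the per-cpu offsets valid for every populated
id, which is the trade-off (a little extra memory) discussed above.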


