
Re: [Xen-devel] [PATCH 7/7] xen: sched_rt: print useful affinity info when dumping



>>> On 17.03.15 at 12:10, <george.dunlap@xxxxxxxxxxxxx> wrote:
> On 03/16/2015 08:30 PM, Meng Xu wrote:
>> 2015-03-16 13:05 GMT-04:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
>>> This change also takes the chance to add a scratch
>>> cpumask, to avoid having to create one more
>>> cpumask_var_t on the stack of the dumping routine.
>> 
>> Actually, I have a question about the merits of this design. On a
>> machine with many CPUs, we end up allocating a scratch cpumask for
>> each CPU. Is this really better than having a cpumask_var_t on the
>> stack of the dumping routine, given that the dumping routine is not
>> on a hot path?
> 
> The reason for taking this off the stack is that the hypervisor stack is
> a fairly limited resource -- IIRC it's only 8k (for each cpu).  If the
> call stack gets too deep, the hypervisor will triple-fault.  Keeping
> really large variables like cpumasks off the stack is key to making sure
> we don't get close to that.

Actually, here you're talking about cpumask_t-s on the stack.
cpumask_var_t-s aren't a problem stack-size-wise, but they are an
issue because they need to be dynamically allocated (and that
allocation can fail) when the hypervisor is built for a large
enough number of CPUs.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

