Re: [Xen-devel] [PATCH 7/7] xen: sched_rt: print useful affinity info when dumping
>>> This change also takes the chance to add a scratch
>>> cpumask, to avoid having to create one more
>>> cpumask_var_t on the stack of the dumping routine.
>>
>> Actually, I have a question about the strength of this design. When we
>> have a machine with many cpus, we will end up allocating a
>> cpumask for each cpu. Is this better than having a cpumask_var_t on
>> the stack of the dumping routine, since the dumping routine is not in
>> the hot path?
>
> The reason for taking this off the stack is that the hypervisor stack is
> a fairly limited resource -- IIRC it's only 8k (for each cpu). If the
> call stack gets too deep, the hypervisor will triple-fault. Keeping
> really large variables like cpumasks off the stack is key to making sure
> we don't get close to that.

I see. I hadn't realized the hypervisor stack was that limited. That
makes sense. Thank you very much for the clarification! :-)

Best,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
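
For readers following the thread, below is a minimal, hypothetical sketch
(plain standalone C, not the actual sched_rt patch) of the pattern being
discussed: keep one scratch bitmap per CPU in static storage so the dump
path never has to place an NR_CPUS-sized bitmap on its own stack. All
names here (NR_CPUS_DEMO, MASK_LONGS, scratch_mask, dump_affinity) are
illustrative assumptions and do not correspond to identifiers in Xen.

    /*
     * Sketch: per-CPU scratch bitmap instead of a large on-stack variable.
     *
     * The alternative -- declaring "unsigned long mask[MASK_LONGS];"
     * inside the dump routine -- would consume NR_CPUS/8 bytes of the
     * limited (roughly 8 KiB in the hypervisor) per-CPU stack.
     */
    #include <stdio.h>
    #include <string.h>

    #define NR_CPUS_DEMO  256                               /* demo value only */
    #define MASK_LONGS    (NR_CPUS_DEMO / (8 * sizeof(unsigned long)))

    /* One scratch mask per CPU, in static storage rather than on the stack. */
    static unsigned long scratch_mask[NR_CPUS_DEMO][MASK_LONGS];

    /* A dump routine borrows the scratch mask of the CPU it runs on. */
    static void dump_affinity(unsigned int this_cpu, const unsigned long *affinity)
    {
        unsigned long *scratch = scratch_mask[this_cpu];
        size_t i;

        /* Work on a copy without growing the call stack. */
        memcpy(scratch, affinity, sizeof(scratch_mask[this_cpu]));

        /* Print the mask, most significant word first. */
        for ( i = 0; i < MASK_LONGS; i++ )
            printf("%016lx%s", scratch[MASK_LONGS - 1 - i],
                   (i == MASK_LONGS - 1) ? "\n" : ",");
    }

    int main(void)
    {
        unsigned long affinity[MASK_LONGS] = { 0xffUL };    /* CPUs 0-7 set */

        dump_affinity(0, affinity);
        return 0;
    }

The trade-off mirrors the one raised in the question above: the per-CPU
scratch masks cost a fixed amount of static (or boot-time allocated)
memory even though the dump path is cold, but that cost is paid once and
predictably, whereas an on-stack bitmap risks pushing a deep call chain
past the small per-CPU hypervisor stack.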