Re: [Xen-devel] [PATCH 3/3] x86: add accessors for scratch cpu mask
On 12.02.2020 17:49, Roger Pau Monne wrote:
> @@ -223,7 +223,10 @@ static void _clear_irq_vector(struct irq_desc *desc)
>      trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
>
>      if ( likely(!desc->arch.move_in_progress) )
> +    {
> +        put_scratch_cpumask();
>          return;
> +    }

I'm not overly happy to see a need introduced to do cleanup like this
one, but at least missing a path is a debug-build problem only.

> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -57,6 +57,30 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
>  DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, scratch_cpumask);
>  static cpumask_t scratch_cpu0mask;
>
> +#ifndef NDEBUG
> +cpumask_t *scratch_cpumask(const char *fn)

Please don't pass an argument that you can deduce, and then provide
even more meaningful data:

> +{
> +    static DEFINE_PER_CPU(const char *, scratch_cpumask_use);
> +
> +    /*
> +     * Scratch cpumask cannot be used in IRQ context, or else we would
> +     * have to make sure all users have interrupts disabled while using
> +     * the scratch mask.
> +     */
> +    BUG_ON(in_irq());
> +
> +    if ( fn && unlikely(this_cpu(scratch_cpumask_use)) )
> +    {
> +        printk("%s: scratch CPU mask already in use by %s\n",
> +               fn, this_cpu(scratch_cpumask_use));

Use __builtin_return_address(0) here, which will allow identifying
which of perhaps multiple uses in a function is the offending one.

Also, why in smpboot.c instead of in smp.c? This isn't a boot or
CPU-hot-online related function afaict.

Finally, it would seem nice if multiple uses by the same caller could
be permitted:

    for ( ... )
    {
        if ( ... )
        {
            mask = get_scratch_cpumask();
            ...
        }
        else
        {
            /* no use of get_scratch_cpumask() */
            ...
        }
    }
    put_scratch_cpumask();

Jan
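
For illustration only, below is a minimal sketch of what the debug
accessor could look like with the suggestions above folded in. The
bool "use" parameter, the wrapper macros, the BUG() on misuse and the
same-call-site tolerance are assumptions made for this sketch, not the
code from the submitted patch; the sketch reuses the per-CPU
scratch_cpumask variable defined in the hunk quoted above.

    #ifndef NDEBUG
    /* Sketch only: get/put are folded into one helper selected by "use". */
    cpumask_t *scratch_cpumask(bool use)
    {
        /* Track the owner by return address instead of a name string. */
        static DEFINE_PER_CPU(void *, scratch_cpumask_use);
        void *caller = __builtin_return_address(0);

        /*
         * Scratch cpumask cannot be used in IRQ context, or else we would
         * have to make sure all users have interrupts disabled while using
         * the scratch mask.
         */
        BUG_ON(in_irq());

        /*
         * Complain only if a *different* call site still owns the mask;
         * repeated gets from one and the same call site (e.g. inside a
         * loop, as in the pattern above) are tolerated.
         */
        if ( use && unlikely(this_cpu(scratch_cpumask_use)) &&
             this_cpu(scratch_cpumask_use) != caller )
        {
            printk("scratch CPU mask already in use by %pS\n",
                   this_cpu(scratch_cpumask_use));
            BUG();
        }

        this_cpu(scratch_cpumask_use) = use ? caller : NULL;

        return use ? this_cpu(scratch_cpumask) : NULL;
    }

    /* Wrappers as they might appear in a header (e.g. asm/smp.h). */
    #define get_scratch_cpumask() scratch_cpumask(true)
    #define put_scratch_cpumask() ((void)scratch_cpumask(false))
    #endif

Note that comparing against the recorded return address only tolerates
repeated gets from the same call site, as in the loop pattern quoted
above; a second get from a different call site in the same function
would still trip the check.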