
[Xen-devel] Re: next->vcpu_dirty_cpumask checking at the top of context_switch()



How big an NR_CPUS are we talking about? Is the overhead measurable, or is
this a premature micro-optimisation?

 -- Keir

On 16/04/2009 16:16, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

> In an attempt to create a patch to remove some of the cpumask copying
> (in order to reduce stack usage when NR_CPUS is huge) one of the obvious
> things to do was to change function parameters to pointer-to-cpumask.
> However, doing so for flush_area_mask() creates the unintended side
> effect of triggering the WARN_ON() at the top of send_IPI_mask_flat(),
> apparently because next->vcpu_dirty_cpumask can occasionally change
> between the call site of flush_tlb_mask() in context_switch() and that
> low-level routine.
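> 
> To make the change concrete, the signature difference looks like this
> (paraphrasing the patch I'm working on, not quoting it exactly):
> 
>     /* Before: the mask is copied by value at each call site. */
>     void flush_area_mask(cpumask_t mask, const void *va, unsigned int flags);
> 
>     /* After: only a pointer crosses the call, so the mask pointed to can
>      * change between the caller's check and its eventual use down in
>      * send_IPI_mask_flat(). */
>     void flush_area_mask(const cpumask_t *mask, const void *va,
>                          unsigned int flags);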
> 
> That by itself certainly is not a problem; what puzzles me are the
> redundant !cpus_empty() checks prior to the call to flush_tlb_mask(), as
> well as the fact that, if I'm hitting a possible timing window here, I
> can't see why it shouldn't be possible to hit the (albeit much smaller)
> window between the second !cpus_empty() check and the point where the
> cpumask gets fully copied to the stack as flush_tlb_mask()'s argument.
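> 
> For reference, the sequence in context_switch() that I'm looking at is
> roughly this (quoting from memory, so details may be off):
> 
>     unsigned int cpu = smp_processor_id();
>     cpumask_t dirty_mask = next->vcpu_dirty_cpumask;  /* stack snapshot */
> 
>     if ( unlikely(!cpu_isset(cpu, dirty_mask) && !cpus_empty(dirty_mask)) )
>     {
>         /* Other cpus call __sync_lazy_execstate from the flush IPI handler. */
>         if ( !cpus_empty(next->vcpu_dirty_cpumask) )   /* the second check */
>             flush_tlb_mask(next->vcpu_dirty_cpumask);  /* re-reads the live mask */
>     }
> 
> Since next->vcpu_dirty_cpumask can change at any point after the snapshot
> into dirty_mask, the second check narrows the window but cannot close it.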
> 
> The bottom-line question is: can't the second !cpus_empty() check go away
> altogether, and shouldn't the argument passed to flush_tlb_mask() be
> dirty_mask instead of next->vcpu_dirty_cpumask?
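> 
> That is, something like the following (a sketch of what I have in mind,
> not a tested patch):
> 
>     if ( unlikely(!cpu_isset(cpu, dirty_mask) && !cpus_empty(dirty_mask)) )
>         /* Operate purely on the stack snapshot taken above. */
>         flush_tlb_mask(dirty_mask);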
> 
> Thanks for any insights,
> Jan
> 


