[Xen-devel] Ping: Re: [PATCH 3/3] x86: introduce and use scratch CPU mask



>>> On 09.12.16 at 09:59, <JBeulich@xxxxxxxx> wrote:
>>>> On 08.12.16 at 18:51, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 08/12/16 16:02, Jan Beulich wrote:
>>> variable. With an IRQ happening at the deepest point of the stack, and
>>> with send_guest_pirq() being called from there (leading to vcpu_kick()
>>> -> ... -> csched_vcpu_wake() -> __runq_tickle() ->
>>> cpumask_raise_softirq(), the last two of which also have CPU mask
>>> variables on their stacks), this has been observed to cause a stack
>> 
>> "stacks), has been".
> 
> Hmm, that looks strange to me: Wouldn't the dropping of "this"
> also require the "With" at the start of the sentence to be dropped
> (and isn't the sentence okay with both left in place)?
> 
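Coming back to the stack-usage point quoted at the top, here is a
rough standalone sketch (plain C, not Xen code; the NR_CPUS value
and the function names are stand-ins) of how on-stack cpumask_t
variables add up on such a nested call path:

  #include <stdio.h>

  /* Stand-in for a cpumask_t at NR_CPUS=4096: 64 longs = 512 bytes. */
  #define NR_CPUS 4096
  typedef struct {
      unsigned long bits[NR_CPUS / (8 * sizeof(unsigned long))];
  } cpumask_t;

  static void leaf(void)            /* think cpumask_raise_softirq() */
  {
      cpumask_t mask;               /* 512 bytes of this stack frame */
      printf("leaf mask:   %zu bytes\n", sizeof mask);
  }

  static void caller(void)          /* think __runq_tickle() */
  {
      cpumask_t mask;               /* another 512 bytes */
      printf("caller mask: %zu bytes\n", sizeof mask);
      leaf();
  }

  int main(void)
  {
      caller();                     /* two nested masks: ~1KiB of stack */
      return 0;
  }

Two or three such frames, under an IRQ arriving at an already deep
point of the stack, are enough to matter on a fixed-size per-CPU
stack.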
>>> @@ -2509,20 +2510,21 @@ static int __get_page_type(struct page_i
>>>                   * may be unnecessary (e.g., page was GDT/LDT) but those 
>>>                   * circumstances should be very rare.
>>>                   */
>>> -                cpumask_t mask;
>>> +                cpumask_t *mask = this_cpu(scratch_cpumask);
>> 
>> This indirection looks suspicious.  Why do you have a per_cpu pointer to
>> a cpumask, with a dynamically allocated mask?
>> 
>> It would be smaller and more efficient overall to have a full cpumask
>> allocated in the per-cpu area, and use it via
> 
> Well, as you can see from the smpboot.c context of the
> modifications, that's how other masks are dealt with too.
> The reasoning is that pre-allocating 512 bytes for a CPU mask
> is quite wasteful when, on the running system, perhaps only
> the low few bytes will ever be used.
> 
> Overall I'm getting the impression from your comments that you
> simply didn't recognize the use of cpumask_t vs cpumask_var_t
> in the various places.
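To make the cpumask_t vs cpumask_var_t distinction concrete, a
minimal Xen-style sketch (simplified; scratch_mask_alloc() and
scratch_mask_use() are illustrative stand-ins, not the actual hook
names, and error/teardown paths are omitted):

  #include <xen/cpumask.h>
  #include <xen/errno.h>
  #include <xen/percpu.h>

  /* cpumask_var_t in the per-CPU area: the runtime allocation is
   * sized to the number of CPU IDs actually in use, not the full
   * NR_CPUS/8 (512 bytes at NR_CPUS=4096) a static cpumask_t takes. */
  DEFINE_PER_CPU(cpumask_var_t, scratch_cpumask);

  static int scratch_mask_alloc(unsigned int cpu)
  {
      if ( !zalloc_cpumask_var(&per_cpu(scratch_cpumask, cpu)) )
          return -ENOMEM;
      return 0;
  }

  /* Use site, as in the __get_page_type() hunk quoted above: borrow
   * the per-CPU mask instead of putting a cpumask_t on the stack. */
  static void scratch_mask_use(void)
  {
      cpumask_t *mask = this_cpu(scratch_cpumask);

      cpumask_copy(mask, &cpu_online_map);
      /* ... compute with *mask; no 512-byte hit to this frame ... */
  }

The trade-off is a per-CPU pointer plus a runtime allocation sized
to the running system, instead of a statically sized NR_CPUS mask.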

This one has stalled, too.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

