
Re: [Xen-devel] [PATCH v8 for-xen-4.5 2/2] dpci: Replace tasklet with an softirq (v8)



On 27/10/14 09:32, Jan Beulich wrote:
>>>> On 24.10.14 at 22:55, <konrad.wilk@xxxxxxxxxx> wrote:
>> On Fri, Oct 24, 2014 at 11:09:59AM +0100, Jan Beulich wrote:
>>>>>> On 24.10.14 at 03:58, <konrad.wilk@xxxxxxxxxx> wrote:
>>>> On Thu, Oct 23, 2014 at 10:36:07AM +0100, Jan Beulich wrote:
>>>>>>>> On 21.10.14 at 19:19, <konrad.wilk@xxxxxxxxxx> wrote:
>>>> Was not sure whether you prefer 'true'
>>>> or 'false' instead of numbers - decided on numbers since most of the
>>>> code-base uses numbers.
>>> So far we don't use "true" and "false" in hypervisor code at all (or if
>>> you spotted any such use, it slipped in by mistake). We ought to
>>> consider switching to bool/true/false though.
>> The dec_lzma2 had it.
> With its private.h having
>
> #define false 0
> #define true 1
>
>>> Apart from the indentation being wrong (case labels having the same
>>> indentation as the switch()'s opening brace), this doesn't seem to be a
>>> proper equivalent of the former code: There you cleared ->dom when
>>> STATE_RUN regardless of STATE_SCHED. Leaving out the comments
>>> I'd suggest
>>>
>>>     switch ( cmpxchg(&pirq_dpci->state, STATE_SCHED, 0) )
>>>     {
>>>     case STATE_SCHED:
>>>         put_domain(d);
>>>     case STATE_RUN: case STATE_SCHED|STATE_RUN:
>> .. which made me realize that testing of values as opposed to bit
>> positions requires ditching the 'enum' and introducing STATE_SCHED_BIT,
>> STATE_SCHED, STATE_RUN_BIT, and STATE_RUN to complement each other when
>> checking for values or setting bits.
> No, keep the enum and just use 1 << STATE_... here.
>
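(For reference, the shape being suggested there, keeping the bit-position
enum and forming the values with 1 << STATE_... at the comparison sites,
would look roughly as below.  This is only a sketch: the function name
pt_pirq_softirq_cancel is made up for illustration, and the cleanup simply
mirrors the semantics described above rather than any particular version
of the patch.)

    /* Bit positions, for use with test_and_set_bit() and friends. */
    enum {
        STATE_SCHED,    /* softirq scheduled; holds a reference on ->dom */
        STATE_RUN       /* softirq handler currently running */
    };

    static void pt_pirq_softirq_cancel(struct hvm_pirq_dpci *pirq_dpci)
    {
        struct domain *d = pirq_dpci->dom;

        /*
         * cmpxchg() compares the whole value, so build masks from the bit
         * positions instead of introducing separate *_BIT constants.
         */
        switch ( cmpxchg(&pirq_dpci->state, 1 << STATE_SCHED, 0) )
        {
        case (1 << STATE_SCHED):
            /*
             * The softirq was scheduled but has not run; it never will
             * now, so drop the reference taken when STATE_SCHED was set.
             */
            put_domain(d);
            /* fallthrough */
        case (1 << STATE_RUN):
        case (1 << STATE_RUN) | (1 << STATE_SCHED):
            /* As in the former code, clear ->dom whenever STATE_RUN is set. */
            pirq_dpci->dom = NULL;
            break;
        }
    }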
>> I added some extra code so that I could reliably trigger the error paths
>> and got:
>>
>> (XEN) pt_pirq_softirq_active: is 0 (debug: 1)
>>
>> This is the first ever usage of pt_irq_create_bind and, for fun, the
>> code returns 'false'.
>>
>> (XEN) Assertion '!preempt_count()' failed at preempt.c:37
>> (XEN) ----[ Xen-4.5.0-rc  x86_64  debug=y  Not tainted ]----
>> (XEN) CPU:    9
>> (XEN) RIP:    e008:[<ffff82d08011d6db>] ASSERT_NOT_IN_ATOMIC+0x22/0x67
>> (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor
>> (XEN) rax: ffff82d080320d20   rbx: ffff8302fb126470   rcx: 0000000000000001
>> (XEN) rdx: 00000032b8d99300   rsi: 000000000000000a   rdi: ffff8303149670b8
>> (XEN) rbp: ffff8303390a7bd8   rsp: ffff8303390a7bd8   r8:  0000000000000000
>> (XEN) r9:  0000000000000000   r10: 00000000fffffffd   r11: ffff82d08023e5e0
>> (XEN) r12: ffff83025c126700   r13: ffff830314967000   r14: 0000000000000030
>> (XEN) r15: ffff83025c126728   cr0: 0000000080050033   cr4: 00000000000026f0
>> (XEN) cr3: 000000031b4b7000   cr2: 0000000000000000
>> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
>> (XEN) Xen stack trace from rsp=ffff8303390a7bd8:
>> (XEN)    ffff8303390a7c98 ffff82d080149335 ffff8303390a7ca8 ffff82d08016d0e6
>> (XEN)    0000000000000001 0000000000000206 00000003c0027c38 ffff8303390a7d88
>> (XEN)    00000072390a7c68 ffff830312e5f4d0 0000000000000000 0000001000000003
>> (XEN)    0000000000000246 ffff8303390a7c58 ffff82d08012ce66 ffff8303390a7e48
>> (XEN)    0000007f390a7c88 ffff8303149670b8 fffffffffffffffd fffffffffffffffd
>> (XEN)    00007f81af369004 ffff830314967000 ffff8303390a7e38 0000000000000008
>> (XEN)    ffff8303390a7d78 ffff82d080160131 ffff8303390a7cd8 ffff830302526f10
>> (XEN)    ffff8303390a7ce8 0000000000000073 ffff830330907324 ffff830330907300
>> (XEN)    ffff8303390a7d08 ffff82d08016dc79 ffff8303390a7cf8 ffff82d08013b469
>> (XEN)    ffff8303390a7d28 ffff82d08016e628 ffff8303390a7d28 ffff830314967000
>> (XEN)    000000000000007f 0000000000000206 ffff8303390a7dc8 ffff82d0801711dc
>> (XEN)    ffff8303390a7d88 ffff82d08011e203 0000000000000202 fffffffffffffffd
>> (XEN)    00007f81af369004 ffff830314967000 ffff8800331eb488 0000000000000008
>> (XEN)    ffff8303390a7ef8 ffff82d0801048e8 ffff830302526f10 ffff83025c126700
>> (XEN)    ffff8303390a7dc8 ffff830314967000 00000000ffffffff 0000000000000000
>> (XEN)    ffff8303390a7e98 ffff8303390a7e70 ffff8303390a7e48 ffff82d080184019
>> (XEN)    ffff8303390a7f18 ffffffff8158b0de ffff8303390a7e98 ffff8303149670b8
>> (XEN)    ffff83030000007f ffff82d080191105 000000730000f800 ffff8303390a7e74
>> (XEN)    ffff8300bf52e000 000000000000000d 00007f81af369004 ffff8300bf52e000
>> (XEN)    0000000a00000026 0000000000f70002 000000020000007f 00007f8100000002
>> (XEN) Xen call trace:
>> (XEN)    [<ffff82d08011d6db>] ASSERT_NOT_IN_ATOMIC+0x22/0x67
>> (XEN)    [<ffff82d080149335>] pt_irq_create_bind+0xf7/0x5c2
>> (XEN)    [<ffff82d080160131>] arch_do_domctl+0x1131/0x23e0
>> (XEN)    [<ffff82d0801048e8>] do_domctl+0x1934/0x1a9c
>> (XEN)    [<ffff82d08022c71b>] syscall_enter+0xeb/0x145
>>
>> which is entirely due to holding the 'domctl_lock_acquire',
>> 'rcu_read_lock', and 'pcidevs_lock' locks.
>>
>> It seems that the approach of waiting by kicking 'process_pending_softirq'
>> is not good, as other softirqs might want to grab any of those locks
>> at some point and be unhappy.
>>
>> Ugh.
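(To make the failure mode concrete: the call chain shown above is
do_domctl() -> arch_do_domctl() -> pt_irq_create_bind(), which runs with
the domctl lock held, inside an RCU read-side section and with
pcidevs_lock taken, which is why preempt_count() is non-zero there.  A
wait loop of the following shape therefore trips the assertion;
pt_pirq_softirq_wait() is a made-up name, pt_pirq_softirq_active() is the
predicate from the debug message above.)

    static void pt_pirq_softirq_wait(struct hvm_pirq_dpci *pirq_dpci)
    {
        /* Fires here: preempt_count() != 0 because of the locks above. */
        ASSERT_NOT_IN_ATOMIC();

        /* Spin until the softirq raised for this pirq has been drained. */
        while ( pt_pirq_softirq_active(pirq_dpci) )
            process_pending_softirqs();
    }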
> Ugly. I'd suggest keeping the ASSERT_NOT_IN_ATOMIC() in place
> but commented out, briefly explaining that it can't be used here,
> since the domctl lock is being held. That lock is not really going to be
> acquired by softirq handlers, but we also shouldn't adjust the
> generic macro to account for this one special case.
>
> Jan
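
(Presumably that would end up looking something like the below in
pt_irq_create_bind(); just a sketch of the comment being asked for, with
the lock details taken from the report above.)

    /*
     * The caller (do_domctl()) holds the domctl lock, and we are also
     * inside an RCU read-side section with pcidevs_lock taken, so
     * preempt_count() is non-zero and the check below would fire, even
     * though no softirq handler legitimately takes any of those locks.
     * Keep it purely as documentation:
     *
     * ASSERT_NOT_IN_ATOMIC();
     */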

The domctl lock is a hot lock, and looping with it held like this will
further starve other toolstack operations.

What is the likelihood that this loop will actually be exercised?  I am
guessing fairly rarely, although it looks more likely to happen for an
interrupt which is firing frequently.
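
Presumably the only way to hit it is to re-bind (or unbind) a pirq while a
previously raised dpci softirq for it is still pending or running, i.e.
when a check along these lines (a sketch only; the state bits are assumed
from the discussion above, not the actual implementation) still returns
true:

    /* A (re)bind only has to wait if the previous softirq is in flight. */
    static bool_t pt_pirq_softirq_active(const struct hvm_pirq_dpci *pirq_dpci)
    {
        return !!(pirq_dpci->state & ((1 << STATE_RUN) | (1 << STATE_SCHED)));
    }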

~Andrew

