
Re: [PATCH] x86/irq: do not release irq until all cleanup is done


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 15 Nov 2022 18:04:28 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 15 Nov 2022 17:04:48 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Nov 15, 2022 at 05:29:47PM +0100, Jan Beulich wrote:
> On 15.11.2022 13:35, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/irq.c
> > +++ b/xen/arch/x86/irq.c
> > @@ -220,27 +220,27 @@ static void _clear_irq_vector(struct irq_desc *desc)
> >          clear_bit(vector, desc->arch.used_vectors);
> >      }
> >  
> > -    desc->arch.used = IRQ_UNUSED;
> > -
> > -    trace_irq_mask(TRC_HW_IRQ_CLEAR_VECTOR, irq, vector, tmp_mask);
> 
> The use of tmp_mask here was ...
> 
> > +    if ( unlikely(desc->arch.move_in_progress) )
> > +    {
> > +        /* If we were in motion, also clear desc->arch.old_vector */
> > +        old_vector = desc->arch.old_vector;
> > +        cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
> 
> .. before the variable was updated here. Now you move it past this
> update (to the very end of the function). I wonder whether, to keep
> things simple in this change, it would be tolerable to leave the
> trace_irq_mask() invocation where it was, despite cleanup not having
> completed yet at that point. (The alternative would look to be to
> act directly on desc->arch.old_cpu_mask in the code you re-indent,
> leaving tmp_mask alone. I think that ought to be okay since nothing
> else should access that mask anymore. We could even avoid altering
> the field, by going from cpumask_and() to a cpu_online() check in
> the for_each_cpu() body.)

Hm, I was thinking we should print the old vector mask (since that's
the one still in use, as the migration hasn't happened yet), but then
we would also need to print the old vector instead of the new one, and
maybe that's too much of a change from the current behavior.

I think it's fine to emit the trace here, before IRQ_UNUSED gets set
(in fact it might be better, as the trace should then be more
coherent).

> >  
> > -    if ( likely(!desc->arch.move_in_progress) )
> > -        return;
> > +        for_each_cpu(cpu, tmp_mask)
> > +        {
> > +            ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
> > +            TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> > +            per_cpu(vector_irq, cpu)[old_vector] = ~irq;
> > +        }
> >  
> > -    /* If we were in motion, also clear desc->arch.old_vector */
> > -    old_vector = desc->arch.old_vector;
> > -    cpumask_and(tmp_mask, desc->arch.old_cpu_mask, &cpu_online_map);
> > +        release_old_vec(desc);
> >  
> > -    for_each_cpu(cpu, tmp_mask)
> > -    {
> > -        ASSERT(per_cpu(vector_irq, cpu)[old_vector] == irq);
> > -        TRACE_3D(TRC_HW_IRQ_MOVE_FINISH, irq, old_vector, cpu);
> > -        per_cpu(vector_irq, cpu)[old_vector] = ~irq;
> > +        desc->arch.move_in_progress = 0;
> >      }
> >  
> > -    release_old_vec(desc);
> > +    write_atomic(&desc->arch.used, IRQ_UNUSED);
> 
> Now that there is an ordering requirement between these last two writes,
> I think you want to add smp_wmb() after the first one (still inside the
> inner scope). write_atomic() is only a volatile asm() (which the compiler
> may move under certain conditions) and an access through a volatile
> pointer (which isn't ordered with non-volatile memory accesses), but it
> is specifically not a memory barrier.

Right, sorry - I always get confused and assume that a volatile asm
can't be reordered.  I was about to add the write barrier, but in the
end didn't because of the volatile in the asm.

I would like to request a backport for this, but I think it's now too
late for the patch to be accepted into 4.17.

Thanks, Roger.
