
Re: [PATCH] x86/flushtlb: remove flush_area check on system state


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 24 May 2022 09:32:13 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 24 May 2022 07:32:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

(Trying to send again, as I replied yesterday but the email never
reached xen-devel.)

On Mon, May 23, 2022 at 05:13:43PM +0200, Jan Beulich wrote:
> On 23.05.2022 16:37, Roger Pau Monné wrote:
> > On Wed, May 18, 2022 at 10:49:22AM +0200, Jan Beulich wrote:
> >> On 16.05.2022 16:31, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/include/asm/flushtlb.h
> >>> +++ b/xen/arch/x86/include/asm/flushtlb.h
> >>> @@ -146,7 +146,8 @@ void flush_area_mask(const cpumask_t *, const void *va, unsigned int flags);
> >>>  #define flush_mask(mask, flags) flush_area_mask(mask, NULL, flags)
> >>>  
> >>>  /* Flush all CPUs' TLBs/caches */
> >>> -#define flush_area_all(va, flags) flush_area_mask(&cpu_online_map, va, flags)
> >>> +#define flush_area(va, flags) \
> >>> +    flush_area_mask(&cpu_online_map, (const void *)(va), flags)
> >>
> >> I have to admit that I would prefer if we kept the "_all" name suffix,
> >> to continue to clearly express the scope of the flush. I'm also not
> >> really happy to see the cast being added globally now.
> > 
> > But there were no direct callers of flush_area_all(), so the name was
> > just relevant for its use in flush_area().  With that now gone I
> > don't see a need for a flush_area_all(), as flush_area_mask() is more
> > appropriate.
> 
> And flush_area_all() is shorthand for flush_area_mask(&cpu_online_map, ...).
> That's more clearly distinguished from flush_area_local() than simply
> flush_area(); the latter was okay-ish with its mm.c-only exposure, but imo
> isn't anymore when put in a header.

OK, so you would prefer that callers switch to flush_area_all() and
that flush_area() be dropped altogether.  I can do that.

> >>> --- a/xen/arch/x86/smp.c
> >>> +++ b/xen/arch/x86/smp.c
> >>> @@ -262,7 +262,8 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
> >>>  {
> >>>      unsigned int cpu = smp_processor_id();
> >>>  
> >>> -    ASSERT(local_irq_is_enabled());
> >>> +    /* Local flushes can be performed with interrupts disabled. */
> >>> +    ASSERT(local_irq_is_enabled() || cpumask_equal(mask, cpumask_of(cpu)));
> >>
> >> Further down we use cpumask_subset(mask, cpumask_of(cpu)),
> >> apparently to also cover the case where mask is empty. I think
> >> you want to do so here as well.
> > 
> > Hm, yes.  I guess that's cheaper than adding an extra:
> > 
> > if ( cpumask_empty(mask) )
> >     return;
> > 
> > check at the start of the function.
> > 
> >>>      if ( (flags & ~(FLUSH_VCPU_STATE | FLUSH_ORDER_MASK)) &&
> >>>           cpumask_test_cpu(cpu, mask) )
> >>
> >> I suppose we want a further precaution here: Despite the
> >> !cpumask_subset(mask, cpumask_of(cpu)) below I think we want to
> >> extend what c64bf2d2a625 ("x86: make CPU state flush requests
> >> explicit") and later changes (isolating uses of FLUSH_VCPU_STATE
> >> from other FLUSH_*) did and exclude the use of FLUSH_VCPU_STATE
> >> for the local CPU altogether.
> > 
> > If we really want to exclude the use of FLUSH_VCPU_STATE for the local
> > CPU, we might wish to add this as a separate ASSERT, so that such
> > checking doesn't depend on !local_irq_is_enabled():
> > 
> > ASSERT(local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu)));
> > ASSERT(!cpumask_subset(mask, cpumask_of(cpu)) || !(flags & FLUSH_VCPU_STATE));
> > 
> > 
> >> That's because if such somehow made
> >> it into the conditional below here, it would still involve an IPI.
> > 
> > Sorry, I'm confused by this: if the mask is empty there should be no
> > IPI involved at all?  And we shouldn't even get into the second
> > conditional in the function.
> 
> Should perhaps have made more explicit that "somehow" means a hypothetical
> way, perhaps even as a result of some further breakage somewhere.

Oh, OK, then I wasn't so confused after all :).  Given the lack of
further comments, I assume you are fine with adding a separate ASSERT
to cover the usage of FLUSH_VCPU_STATE.

Thanks, Roger.



 

