[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v2] x86/flushtlb: remove flush_area check on system state


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Tue, 24 May 2022 18:46:50 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 24 May 2022 16:47:07 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, May 24, 2022 at 05:27:35PM +0200, Jan Beulich wrote:
> On 24.05.2022 12:50, Roger Pau Monne wrote:
> > Booting with Shadow Stacks leads to the following assert on a debug
> > hypervisor:
> > 
> > Assertion 'local_irq_is_enabled()' failed at arch/x86/smp.c:265
> > ----[ Xen-4.17.0-10.24-d  x86_64  debug=y  Not tainted ]----
> > CPU:    0
> > RIP:    e008:[<ffff82d040345300>] flush_area_mask+0x40/0x13e
> > [...]
> > Xen call trace:
> >    [<ffff82d040345300>] R flush_area_mask+0x40/0x13e
> >    [<ffff82d040338a40>] F modify_xen_mappings+0xc5/0x958
> >    [<ffff82d0404474f9>] F arch/x86/alternative.c#_alternative_instructions+0xb7/0xb9
> >    [<ffff82d0404476cc>] F alternative_branches+0xf/0x12
> >    [<ffff82d04044e37d>] F __start_xen+0x1ef4/0x2776
> >    [<ffff82d040203344>] F __high_start+0x94/0xa0
> > 
> > 
> > This is due to SYS_STATE_smp_boot being set before calling
> > alternative_branches(), and the flush in modify_xen_mappings() then
> > using flush_area_all() with interrupts disabled.  Note that
> > alternative_branches() is called before APs are started, so the flush
> > must be a local one (and indeed the cpumask passed to
> > flush_area_mask() just contains one CPU).
> > 
> > Take the opportunity to simplify a bit the logic and introduce
> > flush_area_all() as an alias for flush_area_mask(&cpu_online_map...),
> 
> This is now stale - you don't introduce flush_area_all() here.
> Sadly nothing is said to justify the addition of a cast there,
> which - as said before - I think is a little risky (as many
> casts are), and hence would imo better be avoided.

So prior to this change there are no direct callers of
flush_area_all(): all callers go through flush_area(), which has the
cast.  Now that I remove flush_area() and switch its callers to use
flush_area_all() directly, it seems natural to also move the cast
there.  While I agree that casts are not desirable, I wouldn't
consider this change as adding one.  It merely moves the existing
cast, so callers get the same cast they used to.

> 
> > --- a/xen/arch/x86/smp.c
> > +++ b/xen/arch/x86/smp.c
> > @@ -262,7 +262,10 @@ void flush_area_mask(const cpumask_t *mask, const void *va, unsigned int flags)
> >  {
> >      unsigned int cpu = smp_processor_id();
> >  
> > -    ASSERT(local_irq_is_enabled());
> > +    /* Local flushes can be performed with interrupts disabled. */
> > +    ASSERT(local_irq_is_enabled() || cpumask_subset(mask, cpumask_of(cpu)));
> > +    /* Exclude use of FLUSH_VCPU_STATE for the local CPU. */
> > +    ASSERT(!cpumask_test_cpu(cpu, mask) || !(flags & FLUSH_VCPU_STATE));
> 
> What about FLUSH_FORCE_IPI? This won't work either with IRQs off,
> I'm afraid. Or wait - that flag's name doesn't really look to
> force the use of an IPI, it's still constrained to remote
> requests. I think this wants mentioning in one of the comments,
> not the least to also have grep match there then (right now grep
> output gives the impression as if the flag wasn't consumed
> anywhere).

Would you be fine with adding:

Note that FLUSH_FORCE_IPI doesn't need to be handled explicitly, as
its main purpose is to prevent the use of the hypervisor-assisted
flush when available, not to force the sending of an IPI in cases
where one wouldn't otherwise be sent.

Thanks, Roger.



 

