
[PATCH v3 2/3] pirq_cleanup_check() leaks


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 27 Jul 2023 09:38:29 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 27 Jul 2023 07:38:35 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Its original introduction had two issues: For one, the "common" part of
the checks (carried out in the macro) was inverted. And then, after
removal from the radix tree, the structure wasn't scheduled for freeing.
(All structures still left in the radix tree would be freed upon domain
destruction, though.)
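
A simplified sketch of the pre-patch state (function body abbreviated;
only the parts relevant to the two issues are shown, cf. the hunks
below):

    /* xen/include/xen/irq.h */
    void pirq_cleanup_check(struct pirq *, struct domain *);

    #define pirq_cleanup_check(pirq, d) \
        ((pirq)->evtchn ? pirq_cleanup_check(pirq, d) : (void)0)
        /* Inverted: cleanup is wanted once evtchn is 0, i.e. no longer bound. */

    /* xen/arch/x86/irq.c */
    void (pirq_cleanup_check)(struct pirq *pirq, struct domain *d)
    {
        /* ... checks of the remaining fields ... */

        if ( radix_tree_delete(&d->pirq_tree, pirq->pirq) != pirq )
            BUG();
        /* Removed from the tree, but never handed to free_pirq_struct(). */
    }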

For the freeing to be safe even if it didn't use RCU (i.e. to avoid use-
after-free), re-arrange the checks/operations in evtchn_close() such that
the pointer is no longer used after calling pirq_cleanup_check() (noting
that unmap_domain_pirq_emuirq() itself calls the function in the success
case).
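
To illustrate the hazard the re-arrangement avoids: had the freeing been
added while keeping the old ordering in evtchn_close(), the result would
have been roughly

    pirq->evtchn = 0;
    pirq_cleanup_check(pirq, d1);                    /* may now free pirq */
    if ( is_hvm_domain(d1) &&
         domain_pirq_to_irq(d1, pirq->pirq) > 0 )    /* use after free */
        unmap_domain_pirq_emuirq(d1, pirq->pirq);

i.e. pirq->pirq would be read after the structure may already have been
freed. With the new ordering, pirq_cleanup_check() is the last user of
the pointer, and it is skipped when unmap_domain_pirq_emuirq() succeeded,
since that path performs the cleanup itself.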

Fixes: c24536b636f2 ("replace d->nr_pirqs sized arrays with radix tree")
Fixes: 79858fee307c ("xen: fix hvm_domain_use_pirq's behavior")
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
This is fallout from me looking into whether the function and macro of
the same name could be suitably split, to please MISRA rule 5.5. The
idea was apparently that the check done in the macro is the "common"
part, and the actual function would be per-architecture. Pretty clearly
this could, if need be, also be achieved by naming the actual function
e.g. arch_pirq_cleanup_check().
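
Purely as an illustration of that alternative (not part of this patch),
the declarations might then look like

    /* xen/include/xen/irq.h */
    void arch_pirq_cleanup_check(struct pirq *, struct domain *);

    /* "Common" part of the check stays in the wrapper ... */
    #define pirq_cleanup_check(pirq, d) \
        (!(pirq)->evtchn ? arch_pirq_cleanup_check(pirq, d) : (void)0)

with the x86 function in irq.c renamed accordingly.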

Despite having tested this (to a certain degree), I'm wary of the
change, since the code has been this way for about 12 years. It feels
like I'm overlooking something crucial ...

The wrong check is also what explains why Arm got away without
implementing the function (prior to "restrict concept of pIRQ to x86"):
The compiler simply eliminated the two calls from event_channel.c.
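
That is, the relevant pattern (visible in the event_channel.c hunk
below) was

    pirq->evtchn = 0;
    pirq_cleanup_check(pirq, d1);
    /* Expands to ((pirq)->evtchn ? pirq_cleanup_check(pirq, d1) : (void)0);
     * the store just above makes the condition provably zero, so the call
     * to the out-of-line function (not implemented on Arm) was dropped as
     * dead code, and no undefined reference resulted at link time. */
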
---
v3: New.

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1349,6 +1349,7 @@ void (pirq_cleanup_check)(struct pirq *p
 
     if ( radix_tree_delete(&d->pirq_tree, pirq->pirq) != pirq )
         BUG();
+    free_pirq_struct(pirq);
 }
 
 /* Flush all ready EOIs from the top of this CPU's pending-EOI stack. */
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -711,9 +711,10 @@ int evtchn_close(struct domain *d1, int
             if ( !is_hvm_domain(d1) )
                 pirq_guest_unbind(d1, pirq);
             pirq->evtchn = 0;
-            pirq_cleanup_check(pirq, d1);
-            if ( is_hvm_domain(d1) && domain_pirq_to_irq(d1, pirq->pirq) > 0 )
-                unmap_domain_pirq_emuirq(d1, pirq->pirq);
+            if ( !is_hvm_domain(d1) ||
+                 domain_pirq_to_irq(d1, pirq->pirq) <= 0 ||
+                 unmap_domain_pirq_emuirq(d1, pirq->pirq) < 0 )
+                pirq_cleanup_check(pirq, d1);
         }
         unlink_pirq_port(chn1, d1->vcpu[chn1->notify_vcpu_id]);
         break;
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -158,7 +158,7 @@ extern struct pirq *pirq_get_info(struct
 void pirq_cleanup_check(struct pirq *, struct domain *);
 
 #define pirq_cleanup_check(pirq, d) \
-    ((pirq)->evtchn ? pirq_cleanup_check(pirq, d) : (void)0)
+    (!(pirq)->evtchn ? pirq_cleanup_check(pirq, d) : (void)0)
 
 extern void pirq_guest_eoi(struct pirq *);
 extern void desc_guest_eoi(struct irq_desc *, struct pirq *);
