
[Xen-devel] [RFC for-4.8 6/6] xen/arm: Avoid multiple dev class lookups in handle_node



From: "Edgar E. Iglesias" <edgar.iglesias@xxxxxxxxxx>

Avoid looking up the device class multiple times in handle_node() by
caching the result of the first lookup in a local variable.
No functional change intended.

Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>
---
 xen/arch/arm/domain_build.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 15b6dbe..65c2df7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1213,6 +1213,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
         { /* sentinel */ },
     };
     const struct device_desc *desc;
+    enum device_class dev_class;
     struct dt_device_node *child;
     int res;
     const char *name;
@@ -1235,12 +1236,13 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
     }
 
     desc = device_get_desc(node);
+    dev_class = desc ? desc->class : DEVICE_UNKNOWN;
 
     /*
      * Replace these nodes with our own. Note that the original may be
      * used_by DOMID_XEN so this check comes first.
      */
-    if ( device_get_class(node) == DEVICE_GIC )
+    if ( dev_class == DEVICE_GIC )
         return make_gic_node(d, kinfo->fdt, node);
     if ( dt_match_node(timer_matches, node) )
         return make_timer_node(d, kinfo->fdt, node);
@@ -1256,7 +1258,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
      * Even if the IOMMU device is not used by Xen, it should not be
      * passthrough to DOM0
      */
-    if ( device_get_class(node) == DEVICE_IOMMU )
+    if ( dev_class == DEVICE_IOMMU )
     {
         DPRINT(" IOMMU, skip it\n");
         return 0;
-- 
2.5.0


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
