
Re: [PATCH v4 05/11] xsm: refactor xsm_ops handling


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  • Date: Tue, 7 Sep 2021 09:44:41 -0400
  • Cc: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 07 Sep 2021 13:46:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>



On 9/6/21 2:31 PM, Andrew Cooper wrote:
On 03/09/2021 20:06, Daniel P. Smith wrote:
This renames `struct xsm_operations` to the shorter `struct xsm_ops` and
converts the global xsm_ops from a pointer to an explicit instance. As part of
this conversion, it reworks the XSM modules' init functions to return their
xsm_ops struct, which is copied into the global xsm_ops instance.

Signed-off-by: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>

Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

However, some suggestions...

diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 55483292c5..859af3fe9a 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -28,9 +28,17 @@
  #include <asm/setup.h>
  #endif
-#define XSM_FRAMEWORK_VERSION "1.0.0"
+#define XSM_FRAMEWORK_VERSION    "1.0.1"
-struct xsm_operations *xsm_ops;
+struct xsm_ops __read_mostly xsm_ops;
+
+enum xsm_ops_state {
+    XSM_OPS_UNREGISTERED,
+    XSM_OPS_REG_FAILED,
+    XSM_OPS_REGISTERED,
+};
+
+static enum xsm_ops_state xsm_ops_registered = XSM_OPS_UNREGISTERED;

__read_mostly, or can this be __initdata ?

Apologies, I think you may have suggested this before. It would be useful to be able to check this state later, but since I have only just introduced it and it is currently only used during init, it can be __initdata for now; if it gets exposed later, it can be moved to __read_mostly.
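
Something along these lines, perhaps (just a sketch, assuming the variable stays local to xsm_core.c and is never referenced after boot):

/* Only consulted from the boot-time registration path. */
static enum xsm_ops_state __initdata xsm_ops_registered = XSM_OPS_UNREGISTERED;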

Do you agree?

@@ -87,25 +88,35 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
      }
  #endif
-    if ( verify(&dummy_xsm_ops) )
+    if ( xsm_ops_registered != XSM_OPS_UNREGISTERED )
      {
-        printk(XENLOG_ERR "Could not verify dummy_xsm_ops structure\n");
+        printk(XENLOG_ERR
+               "Could not init XSM, xsm_ops register already attempted\n");
          return -EIO;
      }
-    xsm_ops = &dummy_xsm_ops;
-
      switch ( xsm_bootparam )
      {
      case XSM_BOOTPARAM_DUMMY:
+        xsm_ops_registered = XSM_OPS_REGISTERED;
          break;
    case XSM_BOOTPARAM_FLASK:
-        flask_init(policy_buffer, policy_size);
+        ops = flask_init(policy_buffer, policy_size);
+        if ( ops )
+        {
+            xsm_ops_registered = XSM_OPS_REGISTERED;
+            xsm_ops = *ops;
+        }
          break;
    case XSM_BOOTPARAM_SILO:
-        silo_init();
+        ops = silo_init();
+        if ( ops )
+        {
+            xsm_ops_registered = XSM_OPS_REGISTERED;
+            xsm_ops = *ops;
+        }

This if ( ops ) block can be deduplicated by moving it after the switch()
statement.  It's going to be common to everything except dummy.

Good call. I will rework to remove the duplication.
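
Roughly like this, perhaps (untested sketch on top of this patch, assuming ops keeps the type the patch already declares for it):

    const struct xsm_ops *ops = NULL;

    switch ( xsm_bootparam )
    {
    case XSM_BOOTPARAM_DUMMY:
        xsm_ops_registered = XSM_OPS_REGISTERED;
        break;

    case XSM_BOOTPARAM_FLASK:
        ops = flask_init(policy_buffer, policy_size);
        break;

    case XSM_BOOTPARAM_SILO:
        ops = silo_init();
        break;
    }

    /* Common to every module except dummy. */
    if ( ops )
    {
        xsm_ops_registered = XSM_OPS_REGISTERED;
        xsm_ops = *ops;
    }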

v/r,
dps




 

