
Re: [PATCH 11/12] xen/hypfs: add scheduling granularity entry to cpupool entries



On 17.11.20 17:49, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> @@ -1057,6 +1063,43 @@ static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
>>       return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
>>   }
>> +static int cpupool_gran_read(const struct hypfs_entry *entry,
>> +                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    const struct hypfs_dyndir_id *data;
>> +    struct cpupool *cpupool;

> const?

Yes.


>> +    const char *name = "";
>> +
>> +    data = hypfs_get_dyndata();
>> +    if ( !data )
>> +        return -ENOENT;
>> +
>> +    spin_lock(&cpupool_lock);
>> +
>> +    cpupool = __cpupool_find_by_id(data->id, true);
>> +    if ( cpupool )
>> +        name = sched_gran_get_name(cpupool->gran);
>> +
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    if ( !cpupool )

> May I suggest to check !*name here, to avoid giving the impression
> of ...

>> +        return -ENOENT;
>> +
>> +    return copy_to_guest(uaddr, name, strlen(name) + 1) ? -EFAULT : 0;

> ... success (but an empty name) in this admittedly unlikely event?

Fine with me.
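
With both adjustments folded in, the function would look something like
the sketch below (untested; same helpers and locking as in the hunk
above). Checking !*name rather than !cpupool also treats a pool that
somehow yields an empty granularity name as -ENOENT instead of
returning an empty string to the caller:

    static int cpupool_gran_read(const struct hypfs_entry *entry,
                                 XEN_GUEST_HANDLE_PARAM(void) uaddr)
    {
        const struct hypfs_dyndir_id *data;
        const struct cpupool *cpupool;
        const char *name = "";

        data = hypfs_get_dyndata();
        if ( !data )
            return -ENOENT;

        spin_lock(&cpupool_lock);

        cpupool = __cpupool_find_by_id(data->id, true);
        if ( cpupool )
            name = sched_gran_get_name(cpupool->gran);

        spin_unlock(&cpupool_lock);

        /* !*name covers both a missing pool and an unnamed granularity. */
        if ( !*name )
            return -ENOENT;

        return copy_to_guest(uaddr, name, strlen(name) + 1) ? -EFAULT : 0;
    }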


Juergen
