
Re: [PATCH 10/12] xen/hypfs: add cpupool directories



On 17.11.20 15:13, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> @@ -992,6 +994,78 @@ static struct notifier_block cpu_nfb = {
>>       .notifier_call = cpu_callback
>>   };
>> +#ifdef CONFIG_HYPFS
>> +static HYPFS_DIR_INIT(cpupool_pooldir, "id");
>
> This "id" string won't appear anywhere, will it? I would have
> expected it to act as the format string used when generating the
> names of the dynamic entries. That would e.g. allow CPU pools to
> have decimally numbered names while other entries could use hex
> ones, and, if so desired, also leading zeros.

I like that idea.
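Jan's suggestion could be sketched roughly as follows. This is a standalone illustration, not the Xen hypfs interface: dyndir_entry_name() is a made-up stand-in, and the idea is merely that the string handed to HYPFS_DIR_INIT() would serve as a printf-style template for naming dynamically generated entries.

```c
#include <stdio.h>

/*
 * Hypothetical sketch: the per-directory string (e.g. "%u" or "%04x")
 * acts as the format used to generate each dynamic entry's name.
 * dyndir_entry_name() is not a real Xen function.
 */
static int dyndir_entry_name(char *buf, size_t size,
                             const char *fmt, unsigned int id)
{
    /* fmt would come from the directory's HYPFS_DIR_INIT() string. */
    return snprintf(buf, size, fmt, id);
}
```

With "%u" a pool would show up as "7", while a directory initialized with "%04x" would name the same entry "0007" - each directory picks its own convention.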


>> +static int cpupool_dir_read(const struct hypfs_entry *entry,
>> +                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    int ret = 0;
>> +    struct cpupool **q;
>
> I was going to ask for const here, but the way for_each_cpupool()
> works looks to prohibit this. Nevertheless I wonder whether the
> extra level of indirection there wouldn't better be dropped. Of
> the users, only cpupool_destroy() looks to need it, so open-coding
> the loop there (or introducing an auxiliary variable) would allow
> improvements here and elsewhere. (Actually I notice there's also a
> similar use in cpupool_create(), but the general consideration
> remains.)

I'll have a look.
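The simplification Jan has in mind could look something like this. The types are minimal stand-ins (not the real struct cpupool), but the shape is the point: plain-pointer iteration becomes const-friendly, and only the unlink path - the cpupool_destroy() case - keeps the pointer-to-pointer, open-coded locally.

```c
#include <stddef.h>

/* Minimal stand-in for struct cpupool; only the list linkage matters. */
struct pool {
    int id;
    struct pool *next;
};

/* Iteration with a plain pointer: no double indirection, const works. */
static unsigned int count_pools(const struct pool *head)
{
    unsigned int n = 0;

    for ( const struct pool *p = head; p; p = p->next )
        n++;

    return n;
}

/*
 * Unlinking is the one user that genuinely needs the extra level of
 * indirection, so it keeps a pointer-to-pointer - but only here.
 */
static void unlink_pool(struct pool **head, const struct pool *victim)
{
    for ( struct pool **q = head; *q; q = &(*q)->next )
        if ( *q == victim )
        {
            *q = victim->next;
            break;
        }
}
```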


>> +    spin_lock(&cpupool_lock);
>> +
>> +    for_each_cpupool(q)
>> +    {
>> +        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, (*q)->cpupool_id,
>> +                                         !(*q)->next, &uaddr);
>> +        if ( ret )
>> +            break;
>> +    }
>> +
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    return ret;
>> +}
>> +
>> +static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
>> +{
>> +    struct cpupool **q;
>> +    unsigned int size = 0;
>> +
>> +    spin_lock(&cpupool_lock);
>> +
>> +    for_each_cpupool(q)
>> +        size += HYPFS_DIRENTRY_SIZE(snprintf(NULL, 0, "%d", (*q)->cpupool_id));
>
> Beyond the remark above I consider this problematic: if the pool
> ID were negative, the use of %d here would get things out of sync
> with the %u uses in hypfs.c. I guess exposing HYPFS_DIRENTRY_SIZE()
> isn't the right approach; you instead need another hypfs library
> function.

Fine with me.
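Such a library function might look roughly like the sketch below: the point is to keep the format string in exactly one place, so the sizing path cannot diverge from the read path the way %d vs. %u does above. Both the macro and the function name here are hypothetical stand-ins, not the real Xen hypfs interface.

```c
#include <stdio.h>

/*
 * Hypothetical stand-in for HYPFS_DIRENTRY_SIZE(): some fixed header
 * plus the NUL-terminated name.  The exact layout doesn't matter for
 * the argument being made.
 */
#define DIRENTRY_SIZE(name_len) \
    ((unsigned int)(2 * sizeof(unsigned int) + (name_len) + 1))

/*
 * Sketch of the suggested hypfs library function: compute the
 * directory-entry size for a dynamically generated id entry, using
 * the same "%u" the (hypothetical) read path would use, so an id
 * that doesn't fit in a plain int can't make the two sides disagree.
 */
static unsigned int dyndir_id_entry_size(unsigned int id)
{
    return DIRENTRY_SIZE(snprintf(NULL, 0, "%u", id));
}
```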


>> +static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
>> +                                                 const char *name,
>> +                                                 unsigned int name_len)
>> +{
>> +    unsigned long id;
>> +    const char *end;
>> +    struct cpupool *cpupool;
>> +
>> +    id = simple_strtoul(name, &end, 10);
>> +    if ( id > INT_MAX || end != name + name_len )
>
> What does this INT_MAX match up with? Afaics
> XEN_SYSCTL_CPUPOOL_OP_CREATE is fine to have an effectively
> negative pool ID passed in (the public interface struct uses
> uint32_t, but this gets converted to plain int first thing in
> the sysctl handler).

Oh, this wants to be fixed.
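A fixed parse might look like the following standalone sketch, which bounds the value against the full uint32_t range carried by the public interface instead of INT_MAX. It models simple_strtoul() with the standard strtoul() (which, unlike Xen's helper, reports overflow via errno); parse_pool_id() is a hypothetical name, not code from the patch.

```c
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Hypothetical sketch: accept any id that fits in an unsigned int
 * (matching the uint32_t in the public interface), and insist the
 * whole name was consumed.  Negative input like "-1" wraps to a
 * huge value in strtoul() and is rejected by the range check.
 */
static bool parse_pool_id(const char *name, size_t name_len,
                          unsigned int *id)
{
    char *end;
    unsigned long val;

    errno = 0;
    val = strtoul(name, &end, 10);
    if ( errno || val > UINT_MAX || end != name + name_len )
        return false;

    *id = val;
    return true;
}
```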


>> +        return ERR_PTR(-ENOENT);
>> +
>> +    spin_lock(&cpupool_lock);
>> +
>> +    cpupool = __cpupool_find_by_id(id, true);
>> +
>> +    spin_unlock(&cpupool_lock);
>> +
>> +    if ( !cpupool )
>> +        return ERR_PTR(-ENOENT);
>> +
>> +    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
>
> This one makes clear, at the latest, that cpupool_lock nests inside
> the hypfs one. I think this wants spelling out next to the
> definition of the former, as it implies restrictions on what can be
> done from inside cpupool-locked regions.
> hypfs_read_dyndir_id_entry(), for example, has to remain lock free
> for this reason.

Okay, I'll add a comment.
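The comment could read along these lines (the wording below is a suggestion, not text from the patch):

```c
/*
 * cpupool_lock nests inside the hypfs lock: hypfs callbacks take the
 * hypfs lock first and then cpupool_lock.  Code running with
 * cpupool_lock held must therefore not call into hypfs in a way that
 * could take the hypfs lock; hypfs_read_dyndir_id_entry(), for
 * example, has to remain lock free for this reason.
 */
static DEFINE_SPINLOCK(cpupool_lock);
```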


Juergen
