
Re: [PATCH v2 15/19] tools/xenstore: switch hashtable to use the talloc framework



On 20.12.22 22:50, Julien Grall wrote:
Hi Juergen,

On 13/12/2022 16:00, Juergen Gross wrote:
@@ -115,47 +117,32 @@ hashtable_expand(struct hashtable *h)
      if (h->primeindex == (prime_table_length - 1)) return 0;
      newsize = primes[++(h->primeindex)];
-    newtable = (struct entry **)calloc(newsize, sizeof(struct entry*));
-    if (NULL != newtable)
+    newtable = talloc_realloc(h, h->table, struct entry *, newsize);
+    if (!newtable)
      {
-        /* This algorithm is not 'stable'. ie. it reverses the list
-         * when it transfers entries between the tables */
-        for (i = 0; i < h->tablelength; i++) {
-            while (NULL != (e = h->table[i])) {
-                h->table[i] = e->next;
-                index = indexFor(newsize,e->h);
+        h->primeindex--;
+        return 0;
+    }
+
+    h->table = newtable;
+    memset(newtable + h->tablelength, 0,
+           (newsize - h->tablelength) * sizeof(*newtable));
+    for (i = 0; i < h->tablelength; i++) {

I understand this code is taken from the realloc path. However, isn't this algorithm also not stable? If so, then I think we should move the comment here.

I'm fine with that, even though I don't see how it would matter: there is no
guarantee regarding the order of entries for a given index anyway.
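
For illustration, the rehash loop with the comment carried over would read
roughly like below (a sketch only, reusing the names from the diff above;
the exact comment wording can of course be adjusted):

    /* This algorithm is not 'stable' either: entries moved to another
     * bucket are pushed onto that bucket's head, so their relative
     * order is not preserved. */
    for (i = 0; i < h->tablelength; i++) {
        for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
            index = indexFor(newsize, e->h);
            if (index == i)
            {
                /* Entry stays in this bucket, just advance. */
                pE = &(e->next);
            }
            else
            {
                /* Unlink from the old chain and push the entry onto the
                 * head of its new bucket. */
                *pE = e->next;
                e->next = newtable[index];
                newtable[index] = e;
            }
        }
    }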


+        for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
+            index = indexFor(newsize,e->h);

Missing space after ",".

Will fix.


+            if (index == i)
+            {
+                pE = &(e->next);
+            }
+            else
+            {
+                *pE = e->next;
                  e->next = newtable[index];
                  newtable[index] = e;
              }
          }
-        free(h->table);
-        h->table = newtable;
-    }
-    /* Plan B: realloc instead */
-    else
-    {
-        newtable = (struct entry **)
-                   realloc(h->table, newsize * sizeof(struct entry *));
-        if (NULL == newtable) { (h->primeindex)--; return 0; }
-        h->table = newtable;
-        memset(newtable + h->tablelength, 0,
-               (newsize - h->tablelength) * sizeof(*newtable));
-        for (i = 0; i < h->tablelength; i++) {
-            for (pE = &(newtable[i]), e = *pE; e != NULL; e = *pE) {
-                index = indexFor(newsize,e->h);
-                if (index == i)
-                {
-                    pE = &(e->next);
-                }
-                else
-                {
-                    *pE = e->next;
-                    e->next = newtable[index];
-                    newtable[index] = e;
-                }
-            }
-        }
      }
+
      h->tablelength = newsize;
      h->loadlimit   = (unsigned int)
          (((uint64_t)newsize * max_load_factor) / 100);
@@ -184,7 +171,7 @@ hashtable_insert(struct hashtable *h, void *k, void *v)
          * element may be ok. Next time we insert, we'll try expanding again.*/
          hashtable_expand(h);
      }
-    e = (struct entry *)calloc(1, sizeof(struct entry));
+    e = talloc_zero(h, struct entry);
      if (NULL == e) { --(h->entrycount); return 0; } /*oom*/
      e->h = hash(h,k);
      index = indexFor(h->tablelength,e->h);
@@ -238,8 +225,8 @@ hashtable_remove(struct hashtable *h, void *k)
              h->entrycount--;
              v = e->v;
              if (h->flags & HASHTABLE_FREE_KEY)
-                free(e->k);
-            free(e);
+                talloc_free(e->k);
+            talloc_free(e);
              return v;
          }
          pE = &(e->next);
@@ -280,25 +267,20 @@ void
  hashtable_destroy(struct hashtable *h)
  {
      unsigned int i;
-    struct entry *e, *f;
+    struct entry *e;
      struct entry **table = h->table;
      for (i = 0; i < h->tablelength; i++)
      {
-        e = table[i];
-        while (NULL != e)
+        for (e = table[i]; e; e = e->next)
          {
-            f = e;
-            e = e->next;
              if (h->flags & HASHTABLE_FREE_KEY)
-                free(f->k);
+                talloc_free(e->k);
              if (h->flags & HASHTABLE_FREE_VALUE)
-                free(f->v);
-            free(f);

AFAIU, the loop is reworked so you let talloc free each element together with its parent. The reworked loop is definitely cleaner, but now you end up with two separate loops over the elements.

There is a risk that the overall performance of hashtable_destroy() will be worse, as the data accessed in the first loop may not fit in the cache and will have to be reloaded in the second one.

Therefore, I think it would be better to keep the loop as-is.

What about a completely different approach? I could make the key and value
talloc children of e when _inserting_ the element, provided the related flag
is set. This would reduce hashtable_destroy() to a single talloc_free().
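
Roughly like this (a sketch only; it assumes the keys and values handed to
hashtable_insert() are themselves talloc allocations, which talloc_steal()
requires):

    /* In hashtable_insert(), after e->k and e->v have been set: */
    if (h->flags & HASHTABLE_FREE_KEY)
        talloc_steal(e, k);    /* key is freed together with its entry */
    if (h->flags & HASHTABLE_FREE_VALUE)
        talloc_steal(e, v);    /* same for the value */

    /* hashtable_destroy() then collapses to a single call: */
    void
    hashtable_destroy(struct hashtable *h)
    {
        /* Entries are children of h, keys/values children of their
         * entry, so this frees the whole hierarchy. */
        talloc_free(h);
    }

hashtable_remove() would need to talloc_steal() the value away from the
entry before freeing it, though, as the value is handed back to the caller.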


Juergen

Attachment: OpenPGP_0xB0DE9DD628BF132F.asc
Description: OpenPGP public key

Attachment: OpenPGP_signature
Description: OpenPGP digital signature


 

