
Re: [PATCH 10/11] tools/xenstore: drop use of tdb



On 26.06.23 15:10, Julien Grall wrote:
Hi Juergen,

On 26/06/2023 12:06, Juergen Gross wrote:
On 19.06.23 20:22, Julien Grall wrote:
Hi Juergen,

I haven't looked at the code in detail yet, but I have a few questions regarding the commit message.

On 30/05/2023 10:13, Juergen Gross wrote:
Today all Xenstore nodes are stored in a TDB database. This database
has several disadvantages:

- it uses a fixed-size hash table, resulting in high memory
   overhead for small installations with only very few VMs, and a rather
   large performance hit for systems with lots of VMs due to many
   collisions

Can you provide some concrete numbers and the setup you have in mind? This would help if someone in the future reports the opposite behaviour and we need to rework the logic.

The hash table size today is 7919 entries. This means that, e.g. in case
of a simple desktop use case with 2 or 3 VMs, probably far less than 10%
of the entries will be used (assuming roughly 100 nodes per VM). OTOH a
setup on a large server with 500 VMs would result in heavy collisions in
the hash lists, with 5-10 nodes per hash table entry.
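(For reference only, a quick back-of-the-envelope sketch of those numbers, not code from the patch series; it just evaluates the 7919 buckets against the rough estimate of ~100 nodes per VM:)

/* Illustrative load-factor estimate for the fixed-size TDB hash table. */
#include <stdio.h>

#define TDB_BUCKETS   7919   /* fixed hash table size mentioned above */
#define NODES_PER_VM   100   /* rough per-VM node count from the mail */

int main(void)
{
    unsigned int vms[] = { 3, 500 };

    for (unsigned int i = 0; i < sizeof(vms) / sizeof(vms[0]); i++) {
        unsigned int nodes = vms[i] * NODES_PER_VM;

        printf("%3u VMs: ~%u nodes -> average of %.2f nodes per bucket\n",
               vms[i], nodes, (double)nodes / TDB_BUCKETS);
    }
    return 0;
}

With 3 VMs at most about 4% of the buckets can be occupied, while with 500 VMs the chains average more than 6 nodes per bucket.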

Thanks! Can this be written down in the commit message?

Okay.


So drop using TDB and store the nodes directly in memory, making them
easily accessible. Use a hash-based lookup mechanism for fast lookup
of nodes by their full path.

For now only replace TDB, keeping the current access functions.
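(Conceptually, and only as a hedged sketch rather than the actual patch code, an in-memory node store keyed by the full path could look like the following; all names here, e.g. node_hash and struct node_entry, are invented for illustration:)

/* Minimal sketch of an in-memory node store hashed by full path.
 * Invented for illustration; it does not reflect the real xenstored
 * data structures introduced by this series.
 */
#include <stdlib.h>
#include <string.h>

#define NODE_HASH_BUCKETS 4096          /* hypothetical bucket count */

struct node_entry {
    struct node_entry *next;            /* chained on hash collision */
    char *path;                         /* full path, e.g. "/local/domain/1/name" */
    void *data;                         /* node payload (value, perms, children) */
};

static struct node_entry *node_hash[NODE_HASH_BUCKETS];

/* Simple djb2-style string hash over the full path. */
static unsigned int hash_path(const char *path)
{
    unsigned int h = 5381;

    while (*path)
        h = h * 33 + (unsigned char)*path++;

    return h % NODE_HASH_BUCKETS;
}

/* Look a node up by its full path; O(1) on average. */
struct node_entry *node_lookup(const char *path)
{
    struct node_entry *e;

    for (e = node_hash[hash_path(path)]; e; e = e->next)
        if (!strcmp(e->path, path))
            return e;

    return NULL;
}

/* Insert a node under its full path. */
int node_insert(const char *path, void *data)
{
    unsigned int bucket = hash_path(path);
    struct node_entry *e = malloc(sizeof(*e));

    if (!e)
        return -1;
    e->path = strdup(path);
    if (!e->path) {
        free(e);
        return -1;
    }
    e->data = data;
    e->next = node_hash[bucket];
    node_hash[bucket] = e;

    return 0;
}

The existing access functions could then keep their current signatures and call into such lookup/insert helpers instead of going through TDB.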

Do you plan to have the rest of the work upstreamed for 4.18? Also, if for some reason only this part is merged, will this have an impact on Xenstored memory usage/performance?

Memory usage should go down, especially after deleting lots of entries
(AFAIK TDB will never free the unused memory again; it will just keep it
for future use).

Memory fragmentation might go up, though.

Performance might be better, too, as there is no need to realloc() the
memory when adding nodes.

What you write seems quite hypothetical so far. Given this is not gated by an #ifdef, I think it would be good to have a clear idea of the impact of having only the partial rework.

I have checked it. Without my patches memory consumption is about 80k after
creating and then shutting down a guest. With my patches it is 18k.

Performance seems a little bit better with my patches, but this might be
related to a bad test (I just used xenstore-test, which doesn't operate on
many nodes in parallel).


Juergen
