
[Xen-users] xl save and compression



Hi,

I don't want to save the whole 7 GB of RAM of each VM to disk uncompressed.
Obviously. To save one Xen domU in 13 seconds, my disks would need a
throughput of 551 megabytes per second. They don't have that. If I'm
lucky they have a throughput of 100 megabytes per second. Without
compression it takes 90 seconds to save the state of the domU.

So here is what I ended up doing to save the state in 13 seconds instead
of 90:

xl save myDomU /dev/stdout \
        | buffer -b 64 \
        | lz4c -1 -f - myDomU.state.lz4
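
For completeness, restoring from that file should work along the same
lines. This is only a sketch (I'm assuming lz4c accepts /dev/stdout as
its output path and that xl restore is happy to read from /dev/stdin,
see the FIFO discussion below):

lz4c -d -f myDomU.state.lz4 /dev/stdout \
        | xl restore /dev/stdin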

There are several things you should be careful about here, and I'm not
sure whether any of this is officially supposed to work.

First of all, I couldn't find any documentation on whether it's safe
to use /dev/stdout or a FIFO as the target of xl save. If xl writes any
message to stdout, then myDomU.state.lz4 will be corrupted. So better
use a FIFO (the kind you create with mkfifo). But then again, where is
it written that xl save doesn't expect random access to the file it is
writing to? It's up to the Xen developers to fix this lack of
documentation. After all, xl save and xl restore seem to be happy to
write to and read from FIFOs.
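
For reference, the FIFO variant looks something like this (just a
sketch; the FIFO path is arbitrary):

mkfifo /tmp/myDomU.fifo
lz4c -1 -f /tmp/myDomU.fifo myDomU.state.lz4 &
xl save myDomU /tmp/myDomU.fifo
wait
rm /tmp/myDomU.fifo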

Then, why do I need to use the buffer tool? Well, because it's faster
that way. The problem is that FIFOs and pipes seem to have a buffer of
only 64 KB, and such a small buffer kills performance, especially if
either xl save is writing in large chunks or lz4c is reading in large
chunks. I think lz4c is to blame here, but I'm not quite sure.

Anyhow, without buffer it takes 20 seconds, with buffer it takes 13
seconds to write the whole 7 gigabytes of RAM to a compressed file.
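
I haven't tried tuning buffer's block size and total memory; something
like this might squeeze out a bit more (the values are just a guess,
see buffer(1) for the -s and -m options):

xl save myDomU /dev/stdout \
        | buffer -s 256k -m 32m \
        | lz4c -1 -f - myDomU.state.lz4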

lz4c is the command-line tool of LZ4, a real-time compression library
that provides high throughput at the expense of compression ratio. Get
it here:
  https://code.google.com/p/lz4/
There are other libraries like LZ4, as you can see in the diagram here:
  http://fastcompression.blogspot.co.il/p/compression-benchmark.html

I guess LZ4 itself is still pretty experimental. There seem to be more
mature solutions like QuickLZ, but I didn't check them.

Using gzip -1 instead of lz4c -1 increases the time from 13 to 47
seconds. So lz4c is definitely a better choice here. Don't try tools
like bzip2 or xz. They are way too slow.
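
The gzip test was essentially the same pipeline with gzip -1 swapped in,
something like this:

xl save myDomU /dev/stdout \
        | buffer -b 64 \
        | gzip -1 > myDomU.state.gz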

Let's talk about CPU usage. The tests above were run on a Core i7 950
system with hyperthreading enabled. libxl-save-helper uses 80% of a CPU
while lz4c uses about 60%. Unfortunately, the buffer tool, which forks
into two processes, uses about 40% per process (so 80% in total).
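
If you want to check those numbers on your own system, a quick snapshot
with top works, something like this (the grep pattern may need adjusting
to your process names):

top -b -n 1 | grep -E 'lz4c|buffer|save-helper'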

There's still room for improvement IMHO. Depending on who's to blame,
lz4c or libxl-save-helper, writing or reading data in smaller chunks
(say 16KB) could improve performance while eliminating the need for
using buffer.
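
One cheap way to test the chunk-size theory without patching anything is
to re-block the stream with dd (just a sketch; 16k matches the chunk
size mentioned above):

xl save myDomU /dev/stdout \
        | dd ibs=1M obs=16k 2>/dev/null \
        | lz4c -1 -f - myDomU.state.lz4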


Regards,
  Sven
