On Fri, 12 Oct 2012, Camillo Bruni wrote:
On 2012-10-12, at 15:02, David T. Lewis lewis@mail.msen.com wrote:
On Fri, Oct 12, 2012 at 11:21:33AM +0000, cog@googlecode.com wrote:
Status: Accepted
Owner: camillob...@gmail.com
Labels: Type-Defect Priority-Medium
New issue 99 by camillob...@gmail.com: Link LZ4 Compression http://code.google.com/p/cog/issues/detail?id=99
We should build our VM with lz4 support:
https://code.google.com/p/lz4/
Ideally we will compress all internal unused/static data. Another application would be to compress all the fuel-ized data as well.
I do not understand what this means. Is it a request for someone to write a plugin?
Link cog with the lz4 as a first step to write bindings with FFI/NativeBoost.
An external plugin sounds a lot better to me. The larger the VM binary is, the slower it will be on today's CPUs.
What is "internal unused/static data"?
I was referring to image-side data, sorry, my bad ;) Basically, everything that is not needed directly in the image could be serialized and compressed.
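For illustration, the "serialize what the image doesn't need, then compress it" idea can be sketched as a round trip. This is a minimal stand-in sketch, not the proposed implementation: Python's pickle stands in for Fuel and zlib stands in for LZ4, and the payload is hypothetical. The point is only that the scheme requires a lossless round trip and that repetitive image-side data compresses well.

```python
import pickle
import zlib

# Hypothetical payload standing in for rarely used, Fuel-serializable
# image-side data (repetitive metadata compresses well).
payload = {"selectors": ["printOn:", "do:", "ifTrue:ifFalse:"] * 500,
           "comment": "rarely used metadata"}

serialized = pickle.dumps(payload)      # stand-in for Fuel serialization
compressed = zlib.compress(serialized)  # stand-in for LZ4 compression

# The round trip must be lossless for this scheme to be viable.
restored = pickle.loads(zlib.decompress(compressed))
assert restored == payload

print(f"serialized: {len(serialized)} bytes, compressed: {len(compressed)} bytes")
```

With real Fuel + LZ4 the shape would be the same: serialize, compress, store; decompress and materialize on demand.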
Make it work, make it right, make it fast. You don't have the system yet, but you already want to make it fast? Once the system is ready (so you can test whether compression makes sense), try the compression via FFI (pick your favorite implementation); if it seems to give enough benefit, and the system is about to be used by a wide enough audience, then (and only then) consider adding it to the VM.
Plus, with a super-fast compression library at hand, decompression would essentially be a NOP.
I didn't see any benchmarks where (de)compression is done on small chunks of data (a few kilobytes at most, which is your intended use case). And even if the (de)compression doesn't make much difference in runtime, it will definitely raise CPU usage, which is unwelcome in some cases (e.g. mobile devices). It might result in lower overall CPU usage too, but the ~2x compression ratio makes me think that's unlikely.
Levente
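The small-chunk concern above is easy to check with a quick benchmark before committing to anything in the VM. A minimal sketch of the methodology, with zlib standing in for LZ4 (lz4 has no stdlib binding) and a hypothetical 4 KiB chunk:

```python
import os
import time
import zlib

# A hypothetical small chunk: half incompressible, half highly compressible,
# roughly mimicking mixed serialized data.
chunk = os.urandom(2048) + b"x" * 2048  # 4 KiB
N = 1000

start = time.perf_counter()
for _ in range(N):
    compressed = zlib.compress(chunk)
compress_s = time.perf_counter() - start

start = time.perf_counter()
for _ in range(N):
    zlib.decompress(compressed)
decompress_s = time.perf_counter() - start

print(f"ratio:      {len(chunk) / len(compressed):.2f}x")
print(f"compress:   {compress_s / N * 1e6:.1f} us/chunk")
print(f"decompress: {decompress_s / N * 1e6:.1f} us/chunk")
```

Substituting a real LZ4 binding for zlib here would answer both questions in the thread: whether per-chunk latency is acceptable and whether the achieved ratio justifies the extra CPU work.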