Friday, February 5, 2016

Compressing small data

Data compression is primarily associated with file compression. After all, the main objective is to save storage space, isn't it ?
With this background in mind, it's also logical to focus on bigger files : good compression achieved on a single large archive is worth the savings of countless smaller ones.

However, this is no longer where the bulk of compression happens. Today, compression is everywhere, embedded within systems, achieving its space and transmission savings without user intervention or even awareness. The key to these invisible gains is to remain below the end-user perception threshold. To achieve this objective, it's not possible to wait for some large amount of data to process. Instead, data is processed in small amounts.



This would be all well and good if it weren't for a simple observation : the smaller the amount of data to compress, the worse the compression ratio.
The reason is pretty simple : data compression works by finding redundancy within the processed source. When a new source starts, there is not yet any redundancy to build upon, and it takes time for any algorithm to achieve a meaningful outcome.

Therefore, since the issue comes from starting with a blank history, what about starting from an already populated one ?

Streaming to the rescue

A first solution is streaming : data is cut into smaller blocks, but each block can reference previously sent ones. And it works quite well. In spite of some minor losses at block borders, most of the compression opportunities of a single large data source are preserved, with the added advantage of processing, sending, and receiving tiny blocks on the fly, making the experience smooth.

However, this scenario only works with serial data, a communication channel for example, where order is known and preserved.

For a large category of applications, such as databases and storage, this cannot work : data must remain accessible in random fashion, with no "a priori" order known in advance. Reaching a specific block should not require decoding all preceding ones just to rebuild the dynamic context.

For such use cases, a common work-around is to create "not too small blocks". Say there are many records of a few hundred bytes each : group them in packs of at least 16 KB. This achieves a nice middle ground between a not-too-poor compression ratio and good enough random access capability.
This is still not ideal though, since a full block must be decompressed just to get a single random record out of it. Therefore, each application will settle for its own middle ground, using block sizes of 4 KB, 16 KB or even 128 KB, depending on usage pattern.
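As an illustration, here is a minimal sketch of this packing work-around, assuming zstd's one-shot ZSTD_compress() and a 16 KB target block (all other names are illustrative) :

    #include <string.h>
    #include "zstd.h"

    #define PACK_TARGET (16*1024)

    /* illustrative packing helper, not a zstd API */
    typedef struct { unsigned char buf[PACK_TARGET]; size_t filled; } RecordPack;

    /* returns 1 when the pack has no room left and should be flushed first */
    int pack_addRecord(RecordPack* p, const void* record, size_t recordSize)
    {
        if (p->filled + recordSize > PACK_TARGET) return 1;
        memcpy(p->buf + p->filled, record, recordSize);
        p->filled += recordSize;
        return 0;
    }

    /* the whole pack is compressed as a single unit : reading any one record
       later means decompressing the full block */
    size_t pack_compress(const RecordPack* p, void* dst, size_t dstCapacity)
    {
        return ZSTD_compress(dst, dstCapacity, p->buf, p->filled, 1);
    }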


Dictionary compression

Preserving record-level random access together with a good compression ratio is hard. But it's achievable, using a dictionary. To summarize, a dictionary is a kind of common prefix, shared by all compressed objects, which makes every compression and decompression operation start from the same populated history.

Dictionary compression has the great property of being compatible with random access. Even for communication scenarios, it can prove easier to manage at scale than "per-connection streaming" : instead of storing one different context per connection, there is always the same context to start from when compressing or decompressing any new data block.

A good dictionary can compress small records into tiny compressed blobs. Sometimes, the current record can be found "as is" entirely within the dictionary, reducing it to a single reference. More likely, some critical redundant elements will be detected (header, footer, keywords) leaving only variable ones to be described (ID fields, date, etc.).

For this situation to work properly, the dictionary needs to be tuned for the underlying structure of objects to compress. There is no such thing as a "universal dictionary". One must be created and used for a target data type.

Fortunately, this condition can be met quite often.
Just created some new protocol for a transaction engine or an online game ? It's likely based on a few common important messages and keywords (even binary ones). Have some event or log records ? There is likely a grammar for them (json, xml maybe). The same can be said of digital resources, be it html files, css stylesheets, javascript programs, etc.
If you know what you are going to compress, you can create a dictionary for it.

The key is, since it's not possible to create a meaningful "universal dictionary", one must create one dictionary per resource type.


Example of a structured JSON message

How to create a dictionary from a training set ? Well, even though one could be tempted to create one manually, by compacting all keywords and repeatable sequences into a file, this is a tedious task. Moreover, there is always a chance that the dictionary will have to be updated regularly due to changing conditions.
This is why, starting from v0.5, zstd offers a dictionary builder utility.

Using `dictBuilder`, it's possible to quickly create a dictionary from a list of samples. The process is relatively fast (a matter of seconds), which makes it possible to generate and update multiple dictionaries for multiple targets.
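Once a dictionary has been generated, using it is simply a matter of providing the same dictionary buffer to both the compression and decompression sides. Below is a minimal sketch, assuming the dictionary entry points of zstd's public header (the exact prototypes have evolved across versions, so treat the names as indicative) :

    #include "zstd.h"

    /* sketch : names follow zstd's current public API ; error handling reduced
       to the returned code, to be checked with ZSTD_isError() */
    size_t compressWithDict(void* dst, size_t dstCapacity,
                            const void* src, size_t srcSize,
                            const void* dict, size_t dictSize)
    {
        ZSTD_CCtx* const cctx = ZSTD_createCCtx();
        size_t const cSize = ZSTD_compress_usingDict(cctx, dst, dstCapacity,
                                                     src, srcSize,
                                                     dict, dictSize, 1);
        ZSTD_freeCCtx(cctx);
        return cSize;
    }

    size_t decompressWithDict(void* dst, size_t dstCapacity,
                              const void* cSrc, size_t cSrcSize,
                              const void* dict, size_t dictSize)
    {
        ZSTD_DCtx* const dctx = ZSTD_createDCtx();
        size_t const dSize = ZSTD_decompress_usingDict(dctx, dst, dstCapacity,
                                                       cSrc, cSrcSize,
                                                       dict, dictSize);
        ZSTD_freeDCtx(dctx);
        return dSize;
    }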

But what good can dictionary compression achieve ?
To answer this question, a few tests were run on some typical samples : a flow of JSON records from a probe, some Mercurial log events, and a collection of large JSON documents, provided by @KryzFr.

| Collection Name    | Direct compression | Dictionary compression | Gains  | Average unit | Range       |
|--------------------|--------------------|------------------------|--------|--------------|-------------|
| Small JSON records | x1.331 - x1.366    | x5.860 - x6.830        | ~ x4.7 | 300          | 200 - 400   |
| Mercurial events   | x2.322 - x2.538    | x3.377 - x4.462        | ~ x1.5 | 1.5 KB       | 20 - 200 KB |
| Large JSON docs    | x3.813 - x4.043    | x8.935 - x13.366       | ~ x2.8 | 6 KB         | 800 - 20 KB |

These compression gains are achieved without any speed loss, and even feature faster decompression. As one can see, it's no "small improvement" : this method can achieve transformative gains, especially for very small records.

Large documents will benefit proportionally less, since dictionary gains are mostly effective in the first few KB. Then there is enough history to build upon, and the compression algorithm can rely on it to compress the rest of the file.

Dictionary compression will work if there is some correlation within a family of small data samples (common keywords and structure). Hence, deploying one dictionary per type of data will provide the greatest benefits.

Anyway, if you are in a situation where compressing small data can be useful for your use case (databases and contextless communication scenarios come to mind, but there are likely others), you are welcome to have a look at this new open source tool and compression methodology, and to report your experience or feature requests.

Zstd is now getting closer to its v1.0 release; it's a good time to provide feedback so it can be integrated into the final specification.

Wednesday, October 14, 2015

Huffman revisited part 5 : combining multi-streams with multi-symbols

In the previous article, a method to create a fast multi-symbols Huffman decoder was described. That research used single-bitstream encoding, for simplicity. However, an earlier investigation proved that using multiple bitstreams was good for speed on modern OoO (Out of Order) cpus, such as Intel's Core. So it seems only logical to combine both ideas and see where it leads.

The previous multi-streams format produced an entangled output, where each stream contributes regularly to 1-in-4 symbols, as shown below :

Multi-Streams single-symbol entangled output pattern

This pattern is very predictable, therefore decoding operations can be done in no particular order, as each stream knows at which position to write its next symbol.
This critical property is lost with multi-symbols decoding operations :

Multi-Streams multi-symbols entangled output pattern (example)

It's no longer clear where next symbols must be written. Hence, parallel-streams decoding becomes synchronization-dependent, nullifying multi-streams speed advantage.

There are several solutions to this problem :
- On the decoder side, reproduce the regular output pattern, by breaking each multi-symbols sequence into several single-symbol write operations. It works, but costs performance, since a single decode now produces multiple writes (or worse, introduces an unpredictable branch) and each stream requires its own tracking pointer.
- On the encoder side, take into consideration the decoder's natural pattern, by grouping symbols exactly the same way they will be regenerated. This works too, and is the fastest method from a decoder perspective, but it introduces some non-negligible complexity on the encoder side.

Ultimately, none of these solutions looked particularly attractive. I was especially worried about introducing a "rigid format", specifically built for a single efficient way to decode. For example, taking into consideration the way symbols will be grouped during decoding ties the format to a specific table depth.
An algorithm created for a large number of platforms cannot accept such rigidity. Maybe some implementations will prefer single-symbol decoding, maybe other ones will select a custom amount of memory for decoding tables. Such flexibility must be possible.

Final choice was to remove entanglement. And the new output pattern becomes :

Multi-Streams multi-symbols segment output pattern (example)

With 4 separate segments being decoded in parallel, the design looks a lot like classical multi-threading, at micro-op level. Which would be a fair enough description.

It looks simpler, but from a coding perspective, it's not.
The first issue is that each segment has its own tracking pointer during decoding operation. It increases the number of required registers from 1 to 4. Not a huge deal when registers are plentiful, but that's not always the case (x86 32-bits mode notably).
The second, more important issue is that each segment gets decoded at its own speed, meaning some of them will be finished before others. Previously, entanglement ensured that all streams would finish together, with just a small tail to take care of. This is now more complex : we don't know which segment will finish first, and the "tail" sequence is now spread over multiple streams, of unpredictable length.

These complexities will cost a bit of performance, but we get serious benefits in exchange :
- Multi-streams operation is now optional : platforms may decide to decode segments serially, one after another, or 2 by 2, depending on their optimal capabilities.
- Single-symbol and multi-symbols decoding strategies are compatible.
- Decoding table depth can be any size, including "frugal" ones trading cpu operations for memory space.
In essence, it's open to a lot more trade-offs.

These new properties introduce a new API requirement : the regenerated size must be known exactly to start a decoding operation (previously, an upper bound was enough). This is required to compute where each segment starts, before even finishing the previous ones.

So, what kind of performance does this new design deliver ? Here is an example, based on generic samples :

Decoding speed, multi-streams, 32 KB blocks

The picture looks similar to the previous "single-stream" measurements, although featuring much higher speeds. The single-symbol variant wins when compression is very poor. Quite quickly though, the double-symbols variant dominates the region where Huffman compression makes most sense (underlined in red boxes). Quad-symbols performance catches up when the distribution becomes more favorable, and clearly dominates later on, but that's a region where Huffman is no longer an optimal choice for entropy compression.

Still, by providing speed in the range of 800-900 MB/s, the new multi-symbols decoder delivers noticeable improvements over the previous version. Job done ?

Let's dig a little deeper. You may have noticed that measurements were produced on block sizes of 32 KB, which is a nice "average" situation. However, in many compressors such as zstd, blocks of symbols are the product of (LZ) transformation, and their size can vary, a lot. Is the above conclusion still valid when block size changes ?

Let's test this hypothesis in both directions, by measuring 128 KB and 8 KB block sizes. Results become :

Decoding speed, multi-streams, 128 KB blocks

Decoding speed, multi-streams, 8 KB blocks

While the general picture may look similar, some differences can indeed be spotted.

First, 128 KB blocks are remarkably faster than 8 KB ones. This is a natural consequence of table construction times, which remain constant whatever the block size. Hence, their relative impact is inversely proportional to block size.
At 128 KB, symbol decoding dominates. It makes the quad-symbols version slightly better compared to double-symbols. Not necessarily enough, but still an alternative to consider when the right conditions are met.
At 8 KB, the reverse situation happens : quad-symbols is definitely out of the equation, due to its larger table construction time. Single-symbol relative performance is now better, taking the top spot when compression ratio is low enough.

With so many parameters, it can seem difficult to guess which version will perform best on a given compressed block, since it also depends on the content to decode. Fortunately, you won't have to.
huff0's solution is to propose a single decoder (HUF_decompress()) which makes such selection transparently. Given a set of heuristic values (table construction time, raw decoding speed, quantized compression ratio), it will automatically select which decoding algorithm it believes is a better fit for the job.
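For illustration, from the caller's perspective it looks like this (a minimal sketch using the HUF_decompress() prototype listed in the "Part 2" post further down; the header name is an assumption) :

    #include "huff0.h"   /* assumed header name ; adjust to the actual file layout */

    size_t decodeHuffBlock(void* dst, size_t originalSize,
                           const void* cSrc, size_t cSrcSize)
    {
        /* originalSize must be the exact regenerated size, not just an upper bound :
           it's what lets the decoder compute where each segment starts */
        return HUF_decompress(dst, originalSize, cSrc, cSrcSize);
    }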

Decoding speed, auto-mode, 32 KB blocks

Ultimately, it's just a matter of faster speed, since all versions are compatible and produce valid results. And should you dislike its default choices, you can still manually override which version you prefer or want to test.

As usual, the result of this investigation is made available as open source software, at github, under a BSD license. If you are used to previous versions of fse, pay attention that the directory and file structures have been changed a bit. In an attempt to provide a clearer interface, huff0 gets its own file and header from now on.

Huffman revisited, Part 4 : Multi-bytes decoding

 In most Huffman implementations I'm aware of, decoding symbols is achieved in a serial fashion, one-symbol-after-another.

Decoding fast is not that trivial, but it has been well studied already. Eventually, one symbol per decoding operation becomes the upper limit.

Consider how a fast Huffman decoder works : all possible bit combinations are pre-calculated into a table, of predefined maximum depth. For each bit combination, a simple table lookup provides the decoded symbol and the number of bits to consume.

Huffman Table lookup (example)

More complex schemes may break the decoding into 2 steps, most notably in an attempt to reduce look-up table sizes and still manage to decode symbols which exceed table depth. But it doesn't change the whole picture : that's still a number of operations to decode a single symbol.

In an attempt to extract more speed from decoding operation, I was curious to investigate if it would be possible to decode more than one symbol per lookup.

Intuitively, that sounds plausible. Consider some large Huffman decoding table : there is ample room for some bit sequences to represent 2 or more unequivocal symbols. For example, if one symbol is dominant, it only needs 1 bit. So, with only 2 bits, there is a 25% chance to get a sequence which means "decode 2 dominant symbols in a row", in a single decode operation.

This can be visualized on below example :

Example of small single-symbol decoding table

which can be transformed into :

Example of multi-symbols decoding table

In some ways, it can look reminiscent of Tunstall codes, since we basically try to fit as many symbols as possible into a given depth. But it's not : we don't guarantee reading the entire depth each time, the number of bits read is still variable, just more regular. And there is no "order 1 correlation" : probabilities remain the same per symbol, without depending on prior prefix.

Even with the above table available, there is still the question of using it efficiently. It does no good if a single decoding step becomes a lot more complex just to potentially decode multiple symbols. As an example of what not to do, a straightforward approach would be to start decoding the first symbol, then figure out if there is some place left for another one, proceed with the second symbol, then test for a 3rd one, etc. Each of these tests becomes an unpredictable branch, destroying performance in the process.

The breakthrough came by observing an LZ decompression process such as lz4 : it's insanely fast, because it decodes matches, i.e. sequences of symbols, as a single copy operation.
This is in essence what we should do here : copy a sequence of multiple symbols, and only then decide how many symbols there really are. It avoids branches.
On current-generation CPUs, copying 2 or 4 bytes is not much slower than copying a single byte, so the strategy is effective. Overwriting the same position is also not an issue, thanks to modern cache structures.

With this principle settled, it now requires an adapted lookup table structure to work with. I finally settled on these :
Huffman lookup cell structure

The double-symbols structure could seem unambitious : after all, it is only able to store up to 2 symbols into the `sequence` field. But in fact, tests will show it's a good trade-off, since most of the time, 2 symbols is what can reasonably be stored within a table lookup depth.

Some quick maths : the depth of a lookup table is necessarily limited, in order to fit into memory cache where access times are best. An Intel cpu's L1 data cache is typically 32 KB (potentially shared due to hyper-threading). Since no reasonable OS is single-threaded anymore, let's not use the entire cache : half seems good enough, that's 16 KB. Since a single cell for double-symbols is now 4 bytes (incidentally, the same size as the FSE decoder's), that means 4K cells, hence a maximum depth of 12 bits. Within 12 bits, it's unlikely to get more than 2 symbols at a time. But this conclusion entirely depends on alphabet distribution.
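As a hypothetical sketch of such a 4-byte double-symbols cell, and of the branchless way it gets used (illustrative names and layout, not huff0's actual declarations) :

    #include <string.h>

    /* illustrative cell layout, 4 bytes per cell as computed above */
    typedef struct {
        unsigned short sequence;   /* 1 or 2 symbols, stored back to back */
        unsigned char  nbBits;     /* total nb of bits consumed by the sequence */
        unsigned char  length;     /* nb of symbols actually decoded (1 or 2) */
    } DCell2;

    static unsigned char* decodeStep(unsigned char* op, const DCell2* dt,
                                     size_t index, unsigned* bitsConsumed)
    {
        memcpy(op, &dt[index].sequence, 2);  /* unconditional 2-byte copy ;        */
        *bitsConsumed += dt[index].nbBits;   /* overwriting op[1] is harmless when */
        return op + dt[index].length;        /* only 1 symbol was actually decoded */
    }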

This limitation must be balanced against the increased complexity of lookup table construction. The quad-symbols one is significantly slower, due to more fine-tuned decisions and the recursive nature of the algorithm, potentially defeating inlining optimizations. The graph below shows the relative speed of each construction algorithm (the right side, in grey, is provided for information, since if the target distribution falls into this category, Huffman entropy is no longer a recommended choice).



Lookup table construction speed

The important part is roughly underlined in red boxes, showing the areas which are relevant for some typical LZ symbols. The single-symbol lut construction is always significantly faster. To make sense, the slower table construction must be compensated by improved symbol decoding speed. Which, fortunately, is the case.

Decoding speed, at 32 KB block

As suspected, the "potentially faster" quad-symbols variant is hampered by its slower construction time. It manages to become competitive in the "length & offset" area, but since it costs 50% more memory, it needs to be unquestionably better to justify that cost. Which is the case as the alphabet distribution becomes more squeezed. By that time though, it becomes questionable whether Huffman is still a reasonable choice for the selected alphabet, since its compression power starts to wane significantly against more precise methods such as FSE.
The "double-symbols" variant, on the other hand, takes off relatively fast and dominates the distribution region where Huffman makes most sense, making it a prime contender for an upgrade.

By moving from a 260 MB/s baseline to a faster 350-450 MB/s region, the new decoding algorithm provides fairly noticeable gains, but we still have not reached the level of the previous multi-streams variant, which gets closer to 600 MB/s. The logical next step is to combine both ideas, creating a multi-streams multi-symbols variant. A challenge which proved more involved than it sounds. But that's for another post ...

Tuesday, August 25, 2015

Fuzz testing Zstandard

An advanced issue that any production-grade codec must face is the ability to deal with erroneous data.

Such a requirement tends to come at a second development stage, since it's already difficult enough to make an algorithm work under "normal conditions". Before reaching erroneous data, there is already a large number of valid edge cases to deal with properly.

Erroneous input is nonetheless important, not least because it can degenerate into a full program crash if not properly taken care of. At a more advanced level, it can even serve as an attack vector, trying to push some executable code into unauthorized memory segments. Even without reaching that point, just the perspective to make a system crash with the use of a predictable pattern is a good enough nuisance.

Dealing with such problems can be partially mitigated by stringent unit tests. But that's more easily said than done. Not only is it painful to build and maintain a thorough and hopefully complete list of unit tests for each function, it's also useless at predicting some unexpected behavior resulting from an improbable chain of events at different stages in the program.

Hence the idea to find such bugs at "system level" : the system's input is fed with a set of data, and the results are observed. If you create the test set manually, you will likely test some important, visible and expected use cases, which is still a pretty good start. But some less obvious interaction patterns will be missed.

That's where the realm of Fuzz Testing starts. The main idea is that randomness will do a better job at finding stupid forgotten edge cases, which are good candidates to crash a program. And it works pretty well. But how to set up "random" ?

In fact, even "random" must be defined within some limits. For example, if you only feed a lossless compression algorithm with some random input, it will simply not be able to compress it, meaning you will always test the same code path. 

The way I've dealt with this issue for lz4 and zstd is to create programs able to generate "random compressible data", with some programmable characteristics (compressibility, symbol variation, reproducible by seed). And it helped a lot to test the valid code paths.
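As a rough sketch of what such a generator can look like (illustrative code, not the actual data generator shipped with lz4/zstd), a seeded PRNG either copies a short match from recent output, tuning compressibility, or emits a literal from a deliberately restricted alphabet :

    #include <stddef.h>

    /* small deterministic PRNG, so every buffer is reproducible from its seed */
    static unsigned nextRand(unsigned* seed)
    {
        *seed = *seed * 2654435761u + 123456789u;
        return *seed >> 8;
    }

    void genCompressibleBuffer(unsigned char* buf, size_t size,
                               unsigned matchProbaPercent, unsigned seed)
    {
        size_t pos = 0;
        while (pos < size) {
            if (pos > 16 && (nextRand(&seed) % 100) < matchProbaPercent) {
                size_t length = 4 + (nextRand(&seed) % 12);   /* short match   */
                size_t offset = 1 + (nextRand(&seed) % 16);   /* nearby source */
                if (length > size - pos) length = size - pos;
                while (length--) { buf[pos] = buf[pos - offset]; pos++; }
            } else {
                buf[pos++] = (unsigned char)('A' + nextRand(&seed) % 20);  /* small alphabet */
            }
        }
    }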

The decompression side is more interested in resistance to invalid input. But even with random parameters, there is a need to target interesting properties to test. Typically, a valid decompression stage is first run, to serve as a model. Then some "credible" fail scenarios are built from it. The zstd fuzzer tool typically tests : truncated input, too small a destination buffer, and a noisy source created from a valid one with some random changes, in order to bypass overly simple screening stages.
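The "noisy source" scenario, for example, boils down to something like the following sketch (assuming zstd's public one-shot API; the real fuzzer tool is more thorough). The expected outcome on corrupted input is an error code, never a crash or an out-of-bounds access :

    #include <stdlib.h>
    #include "zstd.h"

    /* sketch, not the actual zstd fuzzer */
    void noisyRoundTrip(const void* src, size_t srcSize, unsigned seed)
    {
        size_t const cBound = ZSTD_compressBound(srcSize);
        unsigned char* const cBuf = malloc(cBound);
        unsigned char* const dBuf = malloc(srcSize);
        size_t const cSize = ZSTD_compress(cBuf, cBound, src, srcSize, 1);
        if (!ZSTD_isError(cSize)) {
            size_t i, result;
            srand(seed);
            for (i = 0; i < 4; i++)   /* flip a few bytes within the valid frame */
                cBuf[rand() % cSize] ^= (unsigned char)(1 + rand() % 255);
            result = ZSTD_decompress(dBuf, srcSize, cBuf, cSize);
            (void)ZSTD_isError(result);   /* an error code is a perfectly fine outcome */
        }
        free(cBuf); free(dBuf);
    }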

All these tests were extremely useful to strengthen the reliability of the code. But the fact that "random" was in fact defined within some limits makes it clear that some other code paths, outside the limits of "random", may still fail if properly triggered.

But how to find them ? As stated earlier, brute force is not a good approach. There are too many similar cases which would be trivially reduced to a single code path. For example, the compressed format of zstd includes an initial 4-bytes identifier. Dumb random input would therefore have a 1 in 4 billion chance to pass such early screening, leaving little energy to test the rest of the code.

For a long time, I believed it was necessary to know one's code in detail to create a useful fuzzer tool. Thanks to a kind notification from Vitaly Magerya, it seems this is no longer the only solution. I discovered earlier today American Fuzzy Lop. No, not the rabbit; the test tool, by Michał Zalewski.

It's relatively easy to setup (for Unix programmers). Build, install and usage follow clean conventions, and the Readme is a fairly good read, easy to follow. With just a few initial test cases to provide, a special compilation stage and a command line, the tool is ready to go.

American Fuzzy Lop, testing zstd decoder

It displays a simple live board in text mode, which successfully captures the mind. One can see, or rather guess, how the genetic algorithm tries to create new use cases. It basically starts from the initially provided set of tests, and creates new ones by modifying them using simple transformations. It analyzes the results, which are relatively precise thanks to the special instrumentation installed in the target binary during the compilation stage. From them, it deduces the triggered code paths and whether it has found a new one. Then it generates new test cases built on top of "promising" previous ones, restarts, ad infinitum.

This is simple and brilliant. Most importantly, it is generic, meaning no special knowledge of zstd was required for it to thoroughly test the algorithm and its associated source code.

There are obviously limits, for example the amount of memory that can be spent on each test. Therefore, successfully resisting for hours the tricky tests created by this fuzzer tool is not the same as "bug free", but it's a damn good step in that direction, and would at least deserve the term "robust".

Anyway, the result of all these tests, using internal and external fuzzer tools, is a first release of Zstandard. It's not yet "format stable", meaning specifically that the current format is not guaranteed to remain unmodified in the future (that stage is planned to be reached in early 2016). But it's already quite robust. So if you wanted to test the algorithm in your application, now seems a good time, even in a production environment.

[Edit] : If you're interested in fuzz testing, I recommend reading an excellent follow-up by Maciej Adamczyk, which goes into great detail on how to do your own fuzz testing for your project.

Wednesday, August 19, 2015

Accessing unaligned memory

Thanks to Herman Brule, I recently received access to real ARM hardware systems, in order to test C code and tune it for performance. It proved a great experience, with lots of lessons learned.

It started with the finding that xxhash speed was rubbish on ARM systems. To investigate, 2 systems were benchmarked : first an ARMv6-J, then an ARMv7-A.

This was an unwelcome surprise, and among the multiple potential reasons, accessing unaligned data turned out to be the most critical one.

Since my latest blog entry on this issue, I had converted unaligned-access code to the QEMU-promoted solution using `memcpy()`. Compared with the earlier method (`pack` statement), the `memcpy()` version has a big advantage : it's highly portable. It's also supposed to be correctly optimized by the compiler, ending up as a trivial `unaligned load` instruction on CPU architectures which support this feature.

Well, supposed to is really the right word. It turns out this is not true in a number of cases. Initially, direct benchmark tests were my main investigation tool, but I was then pointed towards the godbolt online assembly generator, which became an invaluable asset to properly understand what was going on at assembly level.

Thanks to these new tools, the issue could be summarized into a selection between 3 possibilities to access unaligned memory (all three are sketched in code after the list) :

1. Using `memcpy()` : this is the most portable and safe one.
It's also efficient in a large number of situations. For example, on all tested targets, clang translates `memcpy()` into a single `load` instruction when hardware supports it. gcc is also good on most targets tested (x86, x64, arm64, ppc), with just 32-bit arm standing out.
The issue here is that your mileage will vary depending on the specific compiler / target combination. And it's difficult, if not impossible, to test and check all possible combinations. But at least, `memcpy()` is a good generic backup, a safe harbour to compare against.

2. `pack` statement : the problem is that it's a compiler-specific extension. It tends to be present in most compilers, but with multiple different, and incompatible, semantics. Therefore, it's a pain for portability and maintenance.

That being said, in a number of cases where `memcpy()` doesn't produce optimal code, `pack` tends to do a better job. So it's possible to special-case these situations, and leave the rest to `memcpy()`.

The most important use case was gcc with ARMv7, basically the most important 32-bits ARM version nowadays (included in the current crop of smartphones and tablets).
Here, using `pack` for unaligned memory improved performance from 120 MB/s to 765 MB/s compared to `memcpy()`. That's definitely too large a difference to be missed.

Unfortunately, on gcc with ARMv6, this solution was still as bad as `memcpy()`.

3. direct `u32` access : the only solution I could find for gcc on ARMv6.
This solution is not recommended, as it basically "lies" to the compiler by pretending data is properly aligned, thus generating a fast `load` instruction. It only works when the target cpu tolerates unaligned memory access in hardware, and when the compiler does not generate opcodes which are only compatible with strictly-aligned memory accesses.
This is exactly the situation of ARMv6.
Don't use it for ARMv7 though : although that architecture is compatible with unaligned loads, the compiler can also issue a load-multiple instruction, which is a strict-align-only opcode. The resulting binary would crash.

In this case too, the performance gain is too large to be neglected : for unaligned memory accesses, read speed went up from 75 MB/s to 390 MB/s compared to `memcpy()` or `pack`. That's more than 5 times faster.
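To make the comparison concrete, here is a minimal sketch of the three read methods (simplified; xxHash's actual dispatch logic and type names differ) :

    #include <stdint.h>
    #include <string.h>

    /* 1. memcpy() : portable and safe ; good compilers reduce it to a single load */
    static uint32_t read32_memcpy(const void* ptr)
    {
        uint32_t val;
        memcpy(&val, ptr, sizeof(val));
        return val;
    }

    /* 2. packed struct : compiler extension (gcc/clang spelling shown) ;
          other compilers use different, incompatible syntaxes */
    #if defined(__GNUC__)
    typedef struct { uint32_t v; } __attribute__((packed)) unalign32;
    static uint32_t read32_packed(const void* ptr)
    {
        return ((const unalign32*)ptr)->v;
    }
    #endif

    /* 3. direct cast : pretends the data is aligned ; formally undefined behavior,
          only tolerable on targets known to accept the generated opcode
          (the gcc + ARMv6 case discussed above) */
    static uint32_t read32_direct(const void* ptr)
    {
        return *(const uint32_t*)ptr;
    }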

So there you have it, a complex setup, which tries to select the best possible method depending on compiler and target. Current findings can be summarized as below :

Better unaligned read method :
------------------------------
| compiler  | x86/x64 | ARMv7  | ARMv6  | ARM64  |  PPC   |
|-----------|---------|--------|--------|--------|--------|
| GCC 4.8   | memcpy  | packed | direct | memcpy | memcpy |
| clang 3.6 | memcpy  | memcpy | memcpy | memcpy |   ?    |
| icc 13    | packed  | N/A    | N/A    | N/A    | N/A    |
The good news is that there is a safe default method, which tends to work well in a majority of situations. Now, it's only a matter of special-casing specific combinations to use an alternate method.

Of course, a better solution would be for all compilers, and gcc specifically, to properly translate `memcpy()` into efficient assembly for all targets. But that's wishful thinking, clearly outside of our hands. Even if it does improve some day, we nonetheless need an efficient solution now, for the current crop of compilers.

The new unaligned memory access design is currently available within xxHash source code on github, dev branch.

Summary of gains on tested platforms :
compiled with gcc v4.7.4
| program            | platform|  before  |  after   | 
|--------------------|---------|----------|----------|
| xxhash32 unaligned |  ARMv6  |  75 MB/s | 390 MB/s |
| xxhash32 unaligned |  ARMv7  | 122 MB/s | 765 MB/s |
| lz4 compression    |  ARMv6  |  13 MB/s |  18 MB/s |
| lz4 compression    |  ARMv7  |  33 MB/s |  49 MB/s |
[Edit] : apparently, this issue will help improve GCC

Thursday, July 30, 2015

Huffman revisited - Part 3 - Depth limited tree

Huffman tree
 A secondary issue that most real-world Huffman implementations must deal with is tree depth limitation.



Huffman construction doesn't limit the depth. If it did, it would no longer be "optimal". Granted, the maximum depth of a Huffman tree is bounded by the Fibonacci series, but that leaves ample room for larger depths than wanted.
Why limit Huffman tree depth ? Fast huffman decoders use lookup tables. It's possible to use multiple table levels to mitigate the memory cost, but a very fast decoder such as Huff0 goes for a single table, both for simplicity and speed. In which case the table size is a direct product of the tree depth.
For the benefit of speed and memory management, a limit had to be selected : it's 8 KB for the decoding table, which nicely fits into Intel's L1 cache, and leaves some room to combine it with other tables if need be. Since the latest decoding table uses 2 bytes per cell, that translates into 4K cells, hence a maximum tree depth of 12 bits.
12 bits for compressing literals is generally too little, at least according to optimal Huffman construction. Creating a depth-limited tree is therefore a practical issue to solve. The question is : how to achieve this objective with minimum impact on compression ratio, and how to do it fast ?
Depth-limited huffman trees have been studied since the 1960's, so there is ample literature available. What's more surprising is how complex the proposed solutions can be, and how many decades were necessary to converge towards an optimal solution.
Edit : in the paragraph below, n is the alphabet size, and D is the maximum tree depth.
It started with Karp, in 1961 (Minimum-redundancy coding for the discrete noiseless channel), proposing a solution in exponential time. Then Gilbert, in 1971 (Codes based on inaccurate source probabilities), still in exponential time. Hu and Tan, in 1972 (Path length of binary search trees), with a solution in O(n.D.2^D). Finally, a solution in polynomial time was proposed by Garey in 1974 (Optimal binary search trees with restricted maximal depth), but still O(n^2.D) time and using O(n^2.D) space. In 1987, Larmore proposed an improved solution using O(n^(3/2).D.log^(1/2)n) time and space (Height restricted optimal binary trees). The breakthrough happened in 1990 (A fast algorithm for optimal length-limited Huffman codes), when Larmore and Hirschberg proposed the Package-Merge algorithm, a completely different kind of solution using only O(n.D) time and O(n) space. It became a classic, and was refined a few times over the following decades, with the notable contribution of Mordecai Golin in 2008 (A Dynamic Programming Approach To Length-Limited Huffman Coding).

Most of these papers are plain difficult to read, and it's usually harder than necessary to develop a working solution just by reading them (at least, I couldn't. Honorable mention for Mordecai Golin, who proposes a relatively straightforward graph-traversal formulation. Alas, it was still too much CPU workload for my taste).
In practice, most fast Huffman implementations don't bother with them. Sure, when optimal compression is required, the Package-Merge algorithm is preferred, but in most circumstances, being optimal is not really the point. After all, Huffman is already a trade-off between optimal and speed. Following this logic, we don't want to sacrifice everything for an optimal solution, we just need a good enough one, fast and light.
That's why you'll find cheap heuristics in many huffman codes. A simple one : start with a classic Huffman tree, flatten all leaves beyond the maximum depth, then demote enough shallower leaves to maxBits to bring the total (Kraft sum) back to one. It's fast, it's certainly not optimal, but in practice, the difference is small and barely noticeable. Only when the tree depth is very constrained does it make a visible difference (see the great comments from Charles Bloom on his blog).
Nonetheless, for huff0, I was willing to find a solution a bit better than cheap heuristic, closer to optimal. 12 bits is not exactly "very constrained", so the pressure is not high, but it's still constrained enough that the depth-limited algorithm is going to be necessary in most circumstances. So better have a good one.
I started by making some simple observations : after completing a huffman tree, all symbols are sorted in decreasing count order. This means that the number of bits required to represent each symbol must follow a (non-strictly) increasing order. It also means the only thing I need to track is the border decision (from 5 to 6 bits, from 6 to 7 bits, etc.).
Example Huffman Distribution
So now, the algorithm will concentrate on moving the arrows.
The first part is the same as the cheap heuristic : flatten everything that needs more than maxBits. This creates a "debt" : a symbol requiring maxBits+1 bits creates a debt of 1/2=0.5 when pushed to maxBits. A symbol requiring maxBits+2 creates a debt of 3/4=0.75, and so on. What may not be totally obvious is that the sum of these fractional debts is necessarily an integer. This is a consequence of starting from a solved huffman tree, and can be proven by simple recurrence : if the huffman tree's natural length is maxBits+1, then the number of elements at maxBits+1 is necessarily even, otherwise the sum of probabilities can't be equal to one. The debt's sum is therefore necessarily a multiple of 2 * 0.5 = 1, hence an integer. Rinse and repeat for maxBits+2 and further depths.
So now we have a debt to repay. Each time you demote a symbol from maxBits-1 to maxBits, you repay 1 debt. Since the symbols are already sorted in decreasing frequency, it's easy to just grab the latest maxBits-1 ones and demote them to maxBits up to repaying the debt. This is in essence what the cheap heuristic does.
But one must note that demoting a symbol from maxBits-2 to maxBits-1 repays not 1 but 2 debts. Demoting from maxBits-3 to maxBits-2 repays 4 debts. And so on. So now the question becomes : is it preferable to demote a single maxBits-2 symbol or 2 maxBits-1 symbols ?
The answer to this question is trivial since we deal with an integer number of bits : just compare the sum of occurrences of the 2 maxBits-1 symbols with the occurrence of the maxBits-2 one. Whichever is smallest costs fewer bits to demote. Proceed.
This approach can be scaled. Need to repay 16 debts ? A single symbol at maxBits-5 might be enough, or 2 at maxBits-4. By recurrence, each maxBits-4 symbol might be better replaced by 2 maxBits-3 ones, and so on. The best solution will show up through a simple recursive algorithm.
Sometimes, it might be better to overshoot : if you have to repay a debt of 7, which formula is better ? 4+2+1, or 8-1 ? (the -1 can be achieved by promoting the best maxBits symbol to maxBits-1). In theory, you would have to compare both and select the better one. Doing so leads to an optimal algorithm. In practice though, the positive debt repay (4+2+1) is most likely the better one, since some rare twisted distribution is required for the overshoot solution to win.
The algorithm becomes a bit more complex when some bit ranks are missing. For example, you need to repay a debt of 2, but there is no symbol left at maxBits-2. In such a case, you can still default to maxBits-1, but maybe there is no symbol left there either. In which case, you are forced to overshoot (demote from maxBits-3, for example) and then promote enough elements to bring the debt back to zero.
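As a rough sketch of the flattening step plus the simplest repayment loop (illustrative code only, not huff0's actual HUF_setMaxHeight(); the cost comparisons and overshoot handling described above are left out) :

    #include <assert.h>

    /* illustrative sketch ; nbBits[] holds each symbol's code length, sorted by
       decreasing symbol frequency, so lengths are non-decreasing and the deepest
       leaves come last */
    static void limitTreeDepth_sketch(unsigned char* nbBits, int nbSymbols, int maxBits)
    {
        int const largestBits = nbBits[nbSymbols - 1];
        long long debt = 0;                      /* counted in 2^-largestBits units */
        int n = nbSymbols - 1;

        if (largestBits <= maxBits) return;      /* nothing to do */

        /* 1) flatten every leaf deeper than maxBits, accumulating the "debt" */
        while (n >= 0 && nbBits[n] > maxBits) {
            debt += (1LL << (largestBits - maxBits)) - (1LL << (largestBits - nbBits[n]));
            nbBits[n] = (unsigned char)maxBits;
            n--;
        }
        debt >>= (largestBits - maxBits);        /* now in 2^-maxBits units ;
                                                    necessarily an integer (see text) */

        /* 2) repay : demote the least frequent symbols still above maxBits.
              Demoting a symbol from depth d to d+1 repays 2^(maxBits-1-d) debts. */
        while (debt > 0) {
            while (n >= 0 && nbBits[n] >= maxBits) n--;
            assert(n >= 0);
            debt -= 1LL << (maxBits - 1 - nbBits[n]);
            nbBits[n] += 1;
        }
        /* note : with missing bit ranks, this loop may overshoot (debt < 0) ;
           the full algorithm then promotes some maxBits symbols back, as above */
    }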
On average, the fast variant of this algorithm remains very fast. Its CPU cost is unnoticeable, compared to the decoding cost itself, and the final compression ratio is barely affected (<0.1%) compared to unconstrained tree depth. So that's mission accomplished.
The fast variant of the algorithm is available as open source and can be grabbed at github, under the function name HUF_setMaxHeight().

Wednesday, July 29, 2015

Huffman revisited - Part 2 : the Decoder

Huffman tree
 The first attempt to decompress the Huffman bitStream created by a huff0 version modified to use the FSE bitStream ended in brutal disenchantment. While the decoding itself worked fine, the resulting speed was a mere 180 MB/s.
OK, in absolute terms it looks like a reasonable speed, but keep in mind this is far off the objective of beating FSE (which decodes at 475 MB/s on the same system), and even worse than the reference zlib huffman. Some generic attempts at improving speed barely changed this, moving up just above 190 MB/s.
This was a disappointment, and clear proof that the bitStream alone wasn't enough to explain FSE's speed. So what could produce such a large difference ?
Let's look at the code. The critical section of FSE decoding loop looks like this :
    DInfo = table[state];
    nbBits = DInfo.nbBits;
    symbol = DInfo.symbol;
    lowBits = FSE_readBits(bitD, nbBits);
    state = DInfo.newState + lowBits;
    return symbol;
while for Huff0, it would look like this :
    symbol = tableSymbols[state];
    nbBits = tableNbBits[symbol];
    lowBits = FSE_readBits(bitD, nbBits);
    state = ((state << nbBits) & mask) + lowBits;
    return symbol;
There are some similarities, but also some visible differences. First, Huff0 creates 2 decoding tables, one to determine the symbol being decoded, the other to determine how many bits are read. This is a good design for memory space : the larger table is tableSymbols, as its size primarily depends on 1<<maxNbBits. The second table, tableNbBits, is much smaller : its size only depends on nbSymbols. This construction allows using only 1 byte per cell, which compares favorably to 4 bytes per cell for FSE. This memory advantage can be used either as a net space saver, or as a way to boost accuracy, by increasing maxNbBits.
The cost for it is that there are 2 interdependent operations : first decode the state to get the symbol, then use the symbol to get nbBits.
This interdependence is likely the bottleneck. When trying to design high-performance computation loops, there are 3 major rules to keep in mind :
  • Ensure hot data is already in the cache.
  • Avoid badly predictable branches (predictable ones are fine)
  • For modern OoO (Out of Order) CPU : keep their multiple execution units busy by feeding them with independent (parallelizable) operations.
This list is given in priority order. It makes no sense to try optimizing your code for OoO operations if the CPU has to wait for data from main memory, as the latency cost is much higher than any CPU operation. If your code is full of badly predictable branches, resulting in branch flush penalties, this is also a much larger problem than having some idle execution units. So you can only get to the third set of optimization after properly solving the previous ones.
This is exactly the situation where Huff0 is, with a fully branchless bitstream and data tables entirely within L1 cache. So the next performance boost will likely be found into OoO operations.
In order to avoid the dependency between symbol first, then nbBits, let's try a different table design, where nbBits is stored directly alongside symbol, in the state table. This doubles the memory cost, hence reducing the memory advantage enjoyed by Huffman compared to FSE. But let's see where it goes :
    symbol = table[state].symbol;
    nbBits = table[state].nbBits;
    lowBits = FSE_readBits(bitD, nbBits);
    state = ((state << nbBits) & mask) + lowBits;
    return symbol;
This simple change alone is enough to boost the speed to 250 MB/s. Still quite far from the 475 MB/s enjoyed by FSE on the same system, but nonetheless a nice performance boost. More critically, it confirms that the diagnosis was correct : untangling operation dependencies frees up CPU OoO execution units, letting them do more work within each cycle.
So let's ramp up the concept. We have removed one operation dependency. Is there another one ?
Yes. When looking at the main decoding loop from a higher perspective, we can see there are 4 decoding operations per loop. But each decoding operation must wait for the previous one to be completed, because in order to know how to read the bitStream for symbol 2, we first need to know how many bits were consumed by symbol 1.
Compare with how FSE works : since state values are separated from the bitStream, it's possible to decode symbol1 and symbol2, and retrieve their respective nbBits, in any order, without any dependency. Only the later operations, retrieving lowBits from the bitStream to calculate the next state values, introduce some ordering dependency (and even this one can be partially unordered).
The main idea is this one : to decode faster, it's necessary to retrieve several symbols in parallel, without dependency. So let's create a compressed data flow which makes such an operation possible.
Re-using FSE principles "as is" to design a faster Huffman decoding is an obvious choice, but it predictably results in about the same speed. As stated previously, it's not interesting to design a new Huffman encoder/decoder if it just ends up being as fast as FSE. If that is the outcome, then let's simply use FSE instead.
Fortunately, we already know that compression can be faster. So let's concentrate on the decoding side. Since it seems impossible to decode the next symbol without first decoding the previous one from the same bitStream, let's design multiple bitStreams.
The new design is a bit more complex. The compression side is affected : in order to create multiple bitStreams, one solution is to scan the input data block multiple times. It proved efficient enough to not bother with a different design. On top of that, a jumptable is required at the beginning of the block, to let the decoder know where each bitStream starts.
Huff0 bitStream design
Within each bitStream, it's still necessary to decode the first symbol to read the second. But each bitStream is independent, so it's possible to decode up to 4 symbols in parallel.
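A simplified sketch of the resulting decoding loop, with hypothetical names and the bit-container refill omitted (the real huff0 code also interleaves refills and handles the tail) :

    #include <stddef.h>

    /* illustrative sketch, not huff0's actual loop */
    typedef struct { unsigned char symbol; unsigned char nbBits; } DCell;
    typedef struct { size_t container; unsigned bitsConsumed; } BitD;  /* refill logic omitted */

    static unsigned char decodeOne(BitD* bd, const DCell* dt, unsigned tableLog)
    {
        /* look at the next 'tableLog' bits of this stream */
        size_t const idx = (bd->container << bd->bitsConsumed) >> (8*sizeof(size_t) - tableLog);
        bd->bitsConsumed += dt[idx].nbBits;
        return dt[idx].symbol;
    }

    /* each stream regularly contributes 1-in-4 symbols ; the 4 decodes per round
       are independent, so the CPU's OoO execution units can run them in parallel */
    void decode4Streams(unsigned char* op, size_t dstSize,
                        BitD* bd1, BitD* bd2, BitD* bd3, BitD* bd4,
                        const DCell* dt, unsigned tableLog)
    {
        size_t n;
        for (n = 0; n + 4 <= dstSize; n += 4) {
            op[n+0] = decodeOne(bd1, dt, tableLog);
            op[n+1] = decodeOne(bd2, dt, tableLog);
            op[n+2] = decodeOne(bd3, dt, tableLog);
            op[n+3] = decodeOne(bd4, dt, tableLog);
        }
        /* the remaining (dstSize % 4) symbols are handled separately */
    }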
This proved a design win. The new huff0 decompresses at 600 MB/s while preserving the compression speed of 500 MB/s. This compares favorably to FSE or zlib's huffman, as detailed below :
| Algorithm | Compression | Decompression |
|-----------|-------------|---------------|
| huff0     | 500 MB/s    | 600 MB/s      |
| FSE       | 320 MB/s    | 475 MB/s      |
| zlib-h    | 250 MB/s    | 250 MB/s      |
With that part solved, it was possible to check that there is no visible compression difference between FSE and Huff0 on Literals data. To be more precise, compression is slightly worse, but header size is slightly better (huffman headers are simpler to describe). On average, both effects compensate.
The resulting code is open sourced and currently available at : https://github.com/Cyan4973/FiniteStateEntropy (dev branch)
The new API mimics its FSE counterparts, and provides only the higher-level (simpler) prototypes for now :
size_t HUF_compress (void* dst, size_t dstSize, 
               const void* src, size_t srcSize);
size_t HUF_decompress(void* dst,  size_t maxDstSize,
                const void* cSrc, size_t cSrcSize);
For the time being, both FSE and huff0 are available within the same library, and even within the same file. The reasoning is that they share the same bitStream code. Obviously, many design choices will have the opportunity to be challenged and improved in the near future.
Having created a new competitor to FSE, it was only logical to check how it would behave within Zstandard. It's almost a drop-in replacement for literal compression.
| Zstandard           | previous | Huff0 literals |
|---------------------|----------|----------------|
| compression speed   | 200 MB/s | 240 MB/s       |
| decompression speed | 540 MB/s | 620 MB/s       |
A nice speed boost with no impact on compression ratio. Overall, a fairly positive outcome.