Friday, February 28, 2014

FSE tricks - Memory efficient subrange maps

 With the last blog post, we were left with a description of the FSE compression algorithm. However, the associated code was a bit cryptic. So let's spend some time understanding it.

Here are the lines of code :
    nbBitsOut = symbolTT[symbol].minBitsOut;
    nbBitsOut -= (int)((symbolTT[symbol].maxState - *state) >> 31);
    FSE_addBits(bitC, *state, nbBitsOut);
    *state = stateTable[(*state>>nbBitsOut) + symbolTT[symbol].deltaFindState];
The first three lines are the easiest to understand.
As suggested in an earlier blog post, the first task is to determine the number of bits to flush. This is basically one of 2 values, n or n+1, depending on crossing a threshold.
The threshold is stored into symbolTT[symbol].maxState. The second line looks a bit complex, but it's just a convoluted way of saying :
    nbBitsOut += (int)(*state > symbolTT[symbol].maxState);
The >>31 trick transforms a negative number into -1. Such a trick is supposed to be applied automatically by the compiler, but since my tests showed a performance benefit in coding it explicitly, I did. (Edit : see this complementary information from Arseny Kapoulkine.)
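To make it concrete, here is a minimal stand-alone sketch of this branchless n / n+1 selection. The values are illustrative rather than taken from the FSE source, and it assumes an arithmetic right shift of negative ints, which mainstream compilers provide :

    #include <stdio.h>

    int main(void)
    {
        int minBitsOut = 9;     /* n : minimum bit count for this symbol */
        int maxState   = 5119;  /* hypothetical threshold for this symbol */
        int state      = 6000;  /* above the threshold : one extra bit */

        int nbBitsOut = minBitsOut;
        /* (maxState - state) is negative when state > maxState ;
           shifting the sign bit down gives -1, and subtracting -1 adds 1 */
        nbBitsOut -= (maxState - state) >> 31;

        printf("nbBitsOut = %d\n", nbBitsOut);   /* prints 10 */
        return 0;
    }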
The 3rd line just flushes the bits. So we are left with the more complex 4th line.

The 4th line is the one performing the conversion from newState to oldState (since we are encoding in the backward direction). Let's describe how it works.

A naive way to do this conversion would be to create conversion tables, one per symbol, providing the destination state for each origin state. It works; it's just wasteful of memory.
Consider for example a 4096-state table for a 256-symbol alphabet, with each state value using 2 bytes. It results in 4K * 256 * 2 = 2 MB of memory. This is way too large for any L1 cache, with immediate consequences on performance.
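As a sketch, this naive layout would look as follows (the names are illustrative, not the actual FSE source) :

    #define NB_STATES  4096
    #define NB_SYMBOLS 256

    /* one full destination-state map per symbol :
       256 * 4096 * 2 bytes = 2 MB, far larger than any L1 cache */
    static unsigned short nextState[NB_SYMBOLS][NB_STATES];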

So we need a trick to reduce that amount of memory.
Let's have another look at a sub-range map :

[Figure : sub-range map]
Remember, we have the same destination state for all origin state values within a given sub-range. So what seems clear here is that we can simply shift all origin state values right by 9 bits, and get a much smaller map with essentially the same information.

[Figure : reduced sub-range map]
It's simple yet extremely effective. We now have a much smaller 8-cell map for a symbol of probability 5/4096. The same trick can be applied to all other symbols, reducing the sum of all sub-range maps to a variable total between the number of states and twice the number of states.
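As a hedged sketch, for a symbol of probability 5/4096 with n=9 bits (names are illustrative) :

    /* all 512 origin states of a sub-range share a single cell,
       so 8 cells are enough instead of 4096 */
    static unsigned short smallMap[4096 >> 9];

    static unsigned short reducedLookup(unsigned int state)
    {
        return smallMap[state >> 9];   /* same destination as the full map */
    }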

But we can do even better. Notice that the blue sub-ranges occupy 2 slots each, while providing the same destination state.
Remember that the red area corresponds to n=9 bits, and the blue area corresponds to n+1=10 bits. What we have to do, then, is to shift each origin state by its own number of bits. Does it look complex ? Not really : we have already calculated this number of bits. We just have to use it now.

For this trick to work properly, we need to scale state values not from 0 to 4095, but from 4096 to 8191. Doing so results in the following sub-range map :

[Figure : sub-range map with state values scaled to 4096-8191]
A few important properties of this transformation :
- There are exactly as many cells as the probability of the symbol (here, 5). This is not a peculiarity of the example : it is guaranteed to always be the case.
- The first cell Id is the same as the symbol probability (in this example, 5). This is also guaranteed.
- The sub-ranges are now stored in order (from 1 to 5). This is desirable, as it simplifies the creation of the map : we just store the 5 destination states in order.
- Since each sub-range map now has the same size as its symbol's probability, and since the sum of all probabilities is equal to the size of the state table, the sum of all sub-range maps is exactly the size of the state table ! We can therefore store all sub-range maps into a single common table, whose size is the number of states.

Using again the previous example of a 4096-state table for a 256-symbol alphabet, with each state value using 2 bytes : the alphabet size no longer has any impact on memory allocation. It results in 4K * 2 = 8 KB of memory, which is much more manageable, and suitable for an L1 cache.

We now have all sub-range maps stored into a single common table. We just need to find within it the segment corresponding to the current symbol to be encoded. This is what symbolTT[symbol].deltaFindState does : it provides the offset at which the correct segment starts within the table.
Hence :
 *state = stateTable[(*state>>nbBitsOut) + symbolTT[symbol].deltaFindState];
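Putting it all together, here is a minimal self-contained sketch of the shared-table lookup, using the article's example (4096 states, one symbol of probability 5). All names and values besides stateTable and deltaFindState are illustrative; this is not the actual FSE source :

    #include <stdio.h>

    #define TABLE_LOG  12
    #define TABLE_SIZE (1 << TABLE_LOG)           /* 4096 states */

    static unsigned short stateTable[TABLE_SIZE]; /* all segments, concatenated */

    int main(void)
    {
        int P = 5;               /* symbol probability */
        int segmentStart = 100;  /* hypothetical offset of this symbol's
                                    5-cell segment within stateTable */
        int deltaFindState = segmentStart - P;

        /* store the 5 destination states in order, as described above */
        for (int i = 0; i < P; i++)
            stateTable[segmentStart + i] = (unsigned short)(TABLE_SIZE + i);

        /* states are scaled to [4096, 8191] ; for probability 5,
           state >> nbBits always lands in [P, 2P) = [5, 10) */
        int state  = 6000;       /* example current state */
        int nbBits = 10;         /* n+1 : 6000 >> 10 = 5 */

        int newState = stateTable[(state >> nbBits) + deltaFindState];
        printf("newState = %d\n", newState);   /* one of the 5 destinations */
        return 0;
    }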

This trick is extremely significant. In fact, it was the decisive factor in the decision to publish an open source FSE implementation.

11 comments:

  1. Thanks for the blog posts Yann. Which compilers are you testing with? Also, in my experience, I have found that compilers sometimes do a better job if you give them longer calculation strings. So instead of doing:
    a = (xxxxxxx);
    a -= (yyyyyyy);
    it is better to just do:
    a = (xxxxxxx) - (yyyyyyy);

    Also, is the FSE_symbolCompressionTransform struct cache-optimized? I suspect that making minBitsOut a U16 and changing the order of attributes in the struct to minBitsOut, maxState, deltaFindState would be better.

    ReplyDelete
    Replies
    1. typedef struct
       {
           int  deltaFindState;
           U16  maxState;
           BYTE minBitsOut;
       } FSE_symbolCompressionTransform;

      FSE_symbolCompressionTransform is an 8-byte structure. The C compiler is required to align it properly on int size, which means it will round its size up to 8.

      With your proposal, the size of the structure should not change : it will still be 8 bytes. What will change is where the "dummy" 8th byte stands.

      With BYTE minBitsOut first, the second byte will be the dummy one, in order to guarantee that U16 maxState is 16-bit aligned.

      Delete
    2. Yes, the structure size (with padding) will remain the same. The change of minBitsOut to U16 will permit the compiler to use U16 instructions, which in my experience are sometimes faster than BYTE ones.

      I also suggested reordering the struct to match the code's access patterns to it. It isn't an exact science, because the compiler also affects the access patterns, but it is a good start.

      For very performance-sensitive code, even the order in which you declare the variables of a function matters, because it affects how they are arranged on the stack.

      Delete
  2. Also (with a quick glance at the code), another possible optimization would be to have state1, state2 as ints (if you are sure that the pointer-diff range is capped) and have FSE_encodeByte directly return them (to avoid passing an 8-byte reference pointer into it). On the other hand, the compiler might be doing this optimization already.

    ReplyDelete
    Replies
    1. Due to the way FSE_addBits() works, it's necessary to cast the state value to size_t. The cast from ptrdiff_t to size_t is costless. On the other hand, the cast from int to size_t is costly on 64-bit systems. Hence a small performance hit.

      Delete
    2. To clarify my suggestion a little more.

      Instead of passing state as a reference to "FSE_encodeByte(ptrdiff_t* state...", my suggestion is to do this:

      state = FSE_encodeByte(state, ....)

      so you avoid dereferencing it. Sometimes, avoiding the dereference is faster.

      Delete
    3. I think I tried this idea in the past, and it resulted in a small speed loss. Although, since the code is now different, the result may be different this time. We could also be in a situation where it's better for one compiler, and worse for another.

      Anyway, I would like to invite you to test your suggestions, since the code is available. From experience, good ideas tend to come from people who actually test them and provide some figures to illustrate the results.

      Delete
  3. Have you also looked into using alloca (or Visual Studio's _alloca) to allocate some of the needed arrays on the stack?

    ReplyDelete
    Replies
    1. Actually, the FSE default implementation already allocates its tables on the stack. But it does not use alloca(), since that is not a recommended practice.

      Delete
  4. Minor correction: in order for the compiler to automatically convert the comparison to a signed shift, it has to know the ranges (i.e. for 32-bit integers it has to know that both values are in the [0..2^31-1] range). I wouldn't be surprised if compilers don't tend to implement this optimization, since the ranges are generally unknown, and some architectures have instructions to convert a condition result to 0/1 (e.g. setne).

    Thank you for publishing these posts! I enjoy reading them.

    ReplyDelete
  5. You're welcome. I added a link to your comment from the article.

    ReplyDelete