Saturday, July 19, 2014

xxHash wider : 64-bits

 The initial version of xxHash was created in a bid to find a companion error detection algorithm for the LZ4 decoder. The objective was set after discovering that the usual implementations of CRC32 were so slow that the overall process of decoding + error check would be dominated by the error check.
The bet was ultimately successful, and borrowed some of its success from MurmurHash, most notably its test tool SMHasher, the best of its kind to measure the quality of a hash algorithm. xxHash's speed advantage stems from its explicit use of ILP (Instruction Level Parallelism) to keep the multiple ALUs of modern CPU cores busy.

Fast forward to 2014 : the computing world has evolved a bit. Laptops, desktops and servers have massively transitioned to 64-bits, while 32-bits remains widely used, mostly within smartphones and tablets. 64-bits computing is now part of the daily experience, and it becomes more natural to create algorithms targeting primarily 64-bits systems.

An earlier demo of XXH64 quickly proved that moving to 64-bits achieves much better performance, just by virtue of wider memory accesses. For some time, however, I wondered if that was a "good enough" objective, and whether XXH64 should also offer some additional security properties. It took the persuasion of Mathias Westerdhal to push me to create XXH64 as a simpler derivative of XXH32, which was, I believe now, the right choice.

XXH64 is therefore a fairly straightforward application of the XXH methodology to 64-bits : an inner loop with 4 interleaved streams, a tail sequence to handle input sizes which are not multiples of 32, and a final avalanche to ensure all bits are properly randomized. The bulk of the work was done by Mathias, while I mostly provided some localized elements, such as prime constants, shift sequences, and some optimization for short inputs.
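The structure above can be sketched in a few lines. This is an illustration of the construction, not the reference implementation : the constants are the published XXH64 primes, but seeding, accumulator merging and the real tail handling are omitted.

```c
#include <stdint.h>

/* Published XXH64 prime constants */
#define PRIME64_1 11400714785074694791ULL
#define PRIME64_2 14029467366897019727ULL
#define PRIME64_3  1609587929392839161ULL

static uint64_t rotl64(uint64_t x, int r) { return (x << r) | (x >> (64 - r)); }

/* One step of the inner loop : each of the 4 interleaved accumulators
   absorbs 8 bytes of input this way, keeping several ALU chains busy
   in parallel (the ILP effect mentioned earlier). */
static uint64_t xxh64_round(uint64_t acc, uint64_t input)
{
    acc += input * PRIME64_2;
    acc  = rotl64(acc, 31);
    acc *= PRIME64_1;
    return acc;
}

/* Final avalanche : make sure every input bit influences every output bit. */
static uint64_t xxh64_avalanche(uint64_t h)
{
    h ^= h >> 33;  h *= PRIME64_2;
    h ^= h >> 29;  h *= PRIME64_3;
    h ^= h >> 32;
    return h;
}
```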

The quality of XXH64 is very good, but that conclusion was difficult to assess. A key problem with 64-bits algorithms is that they require generating and tracking too many results to properly measure collisions (you need 4 billion hashes for a 50% chance of getting 1 collision). So, basically, all tests must be perfect, ending with 0 collisions. Which is the case.
Since that's a bare minimum, and not a good enough objective to measure 64-bits quality, I also stared at the bias metric. The key idea is : any bit within the final hash must have a 50% chance of becoming 0 or 1. The bias metric finds the worst bit deviating from 50%. Results are good, with a typical worst deviation around 0.1%, equivalent to perfect cryptographic hashes such as SHA-1.
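As an illustration, the bias measurement can be sketched as below. The mixer is splitmix64's finalizer, used here as a short stand-in so the example stays self-contained ; substitute any 64-bits hash, such as XXH64, to reproduce the real measurement.

```c
#include <stdint.h>

/* Stand-in 64-bits mixer (splitmix64 finalizer), used in place of
   XXH64 just to keep the example short. */
static uint64_t mix64(uint64_t x)
{
    x += 0x9E3779B97F4A7C15ULL;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

/* Bias metric : hash N inputs, count how often each output bit is set,
   and report the worst deviation from the ideal 50%. */
static double worst_bias(uint64_t nbHashes)
{
    uint64_t count[64] = { 0 };
    for (uint64_t i = 0; i < nbHashes; i++) {
        uint64_t h = mix64(i);
        for (int b = 0; b < 64; b++)
            count[b] += (h >> b) & 1;
    }
    double worst = 0.0;
    for (int b = 0; b < 64; b++) {
        double p = (double)count[b] / (double)nbHashes;  /* proportion of 1s */
        double dev = p > 0.5 ? p - 0.5 : 0.5 - p;        /* deviation from 50% */
        if (dev > worst) worst = dev;
    }
    return worst;   /* e.g. 0.001 means the worst bit deviates by 0.1% */
}
```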

Since I was still not totally convinced, I also measured each 32-bits part of the 64-bits hash (high and low) as an individual 32-bits hash. The theory is : if the 64-bits hash is perfect, any 32-bits part of it must also be perfect. And the good thing is : with 32-bits, collisions can be properly measured. The results are also excellent, each 32-bits part scoring perfect marks in every possible metric.

But it was still not good enough. We could have 2 perfect 32-bits hashes coalesced together, yet each a repetition of the other, which of course would not make an excellent 64-bits hash. So I also measured the "Bit Independence Criterion", the ability to predict one bit from another. On this metric too, XXH64 got a perfect score, meaning no bit can be used as a possible predictor for another bit.
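The idea can be sketched as follows, reusing the same stand-in mixer in place of XXH64 ; the real BIC test in SMHasher is more thorough than this simple agreement count. The principle : if two output bits are independent, they should agree about 50% of the time.

```c
#include <stdint.h>

/* Stand-in 64-bits mixer (splitmix64 finalizer), as a placeholder for XXH64. */
static uint64_t mix64(uint64_t x)
{
    x += 0x9E3779B97F4A7C15ULL;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

/* Bit independence sketch : for each pair of output bits, count how
   often they agree ; independence means ~50% agreement. Returns the
   worst deviation from 50% across all pairs. */
static double worst_pair_bias(uint64_t nbHashes)
{
    static uint64_t agree[64][64];
    for (int i = 0; i < 64; i++)
        for (int j = 0; j < 64; j++) agree[i][j] = 0;
    for (uint64_t n = 0; n < nbHashes; n++) {
        uint64_t h = mix64(n);
        for (int i = 0; i < 64; i++)
            for (int j = i + 1; j < 64; j++)
                agree[i][j] += (((h >> i) ^ (h >> j)) & 1) == 0;
    }
    double worst = 0.0;
    for (int i = 0; i < 64; i++)
        for (int j = i + 1; j < 64; j++) {
            double p = (double)agree[i][j] / (double)nbHashes;
            double dev = p > 0.5 ? p - 0.5 : 0.5 - p;
            if (dev > worst) worst = dev;
        }
    return worst;
}
```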

So I believe we have gone as far as we could to measure the quality of this hash, and it looks good for production usage. The algorithm is delivered with a benchmark program, integrating a coherency checker, to ensure results remain the same across all possible architectures. It's automatically tested using the Travis continuous integration environment, including valgrind memory access verification.

Note that 64-bits hashes are really meant for 64-bits programs. They get roughly double speed thanks to the increased number of bits loaded per cycle. But if you try to fit such an algorithm into a 32-bits program, speed will plummet, because emulating 64-bits arithmetic on 32-bits hardware is quite costly.

SMHasher speed test, compiled using GCC 4.8.2 on Linux Mint 64-bits. The reference system uses a Core i5-3340M @ 2.7 GHz.

Version   Speed on 64-bits   Speed on 32-bits
XXH64     13.8 GB/s          1.9 GB/s
XXH32     6.8 GB/s           6.0 GB/s

Monday, July 7, 2014

Pointer overflow : an analysis of LZ4 literal attack

 Last week, when a blog announced to the wild that it was possible to overflow a pointer within LZ4, I immediately produced a fix within the next few hours to protect users, without checking how the code would naturally behave in such circumstances. After all, one assumption of 32-bits memory allocation was broken, so, as a rule of thumb, I accepted the idea that it must have broken something.

With the fix available, I was curious to have a deeper look at the technical behavior of the overflow. What follows is a continuation of an attack scenario presented here, which incidentally matches an observation I made a long time ago, while assessing the gravity of issue 52, and totally forgot last week. Since current code is protected against the overflow scenario, I will look at this issue from an "old version" perspective, using for example the relatively old r91 (March 2013). The behavior analyzed concerns the function LZ4_decompress_safe(), which is the one designed to be protected against malicious packets. Note that an unsafe version also exists, called LZ4_decompress_fast(), which is not protected against malicious packets, and therefore offers no such guarantee.
(Note : the safe function is also mapped to LZ4_uncompress_unknownOutputSize(), the unsafe one to LZ4_uncompress()).

A key claim is that it is possible to achieve Remote Code Execution on such older versions. An RCE attack requires delivering a crafted payload at a selected address in memory (Edit : see some relevant comments on this). The proposed attack is described here. A later version would add that it is possible to do the same with less payload if the destination buffer gets allocated within a high address region, but it ultimately uses the same logic. The present article starts from there.

We will suppose that the target OS has no memory protection in place, such as detection of illegal reading or writing, which would make the attack pointless.

At the start of the attack, we have the destination pointer op, which points into a valid buffer region. If the malicious payload wants to trick the library into writing into an unauthorized region, it seems good enough to cheat on the length of data to be copied. Unfortunately, this condition is checked : if the pointer gets beyond the valid destination buffer, the LZ4 decoder stops right there and outputs an error code.

For the attack to have a chance to succeed, the objective is to provide a length which is so large that it makes the pointer wrap beyond the last address 0xFFFFFFFF (note : this is only possible in 32-bits mode ; 64-bits address space is way too large to overflow). It requires a lot of input data, but we'll suppose that this condition is met too. This is then possible because the end of the literal area is calculated as a pointer called cpy :
cpy = op + length ;  // if length is large enough, we may have cpy < op because of overflow
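Since overflowing an actual pointer is undefined behavior in C, the wraparound is easier to illustrate with 32-bits unsigned arithmetic, which is what effectively happens on most 32-bits targets. The addresses below are made up for illustration :

```c
#include <stdint.h>

/* Model 32-bits addresses with uint32_t : unsigned arithmetic wraps
   modulo 2^32, which is what the cpy = op + length computation does
   in practice on a 32-bits target. */
static uint32_t literal_end(uint32_t op, uint32_t length)
{
    return op + length;   /* wraps : result may be smaller than op */
}
```

For instance, with op = 0xFFFF0000 and length = 0x00020000, cpy comes out as 0x00010000, below op : exactly the cpy < op situation discussed here.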

OK, so what happens next ? The article claims that it delivers a payload of one's choice at the cpy address, basically the code to execute.
What's going to happen is a bit more complex. To understand why, let's follow the code. The relevant lines, for r91, are copied below :

        // get runlength
        token = *ip++; /* ... calculate length ... */
        // copy literals
        cpy = op+length; /* ... check buffer limits ... */
        LZ4_WILDCOPY(ip, op, cpy); ip -= (op-cpy); op = cpy;

        // get offset
        LZ4_READ_LITTLEENDIAN_16(ref,cpy,ip); ip+=2;
        if unlikely(ref < (BYTE* const)dest) goto _output_error;   // Error : offset outside of destination buffer

        // get matchlength

Since cpy < op, the tests checking for the end of the output buffer will pass. We also suppose that the test for the input buffer passes, so we move to the next line.

        LZ4_WILDCOPY(ip, op, cpy);

This macro is not easy to understand at first sight. Basically, it will speculatively copy 8 bytes from ip (which is supposed to be valid, otherwise the decoder would have already stopped) to op, which is still valid. Yes, only 8 bytes, whatever the value of length. Why ? Because cpy < op, so after 8 bytes the copy loop just stops there.
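For reference, here is a simplified model of what LZ4_WILDCOPY does, reconstructed from the description above (the real macro uses architecture-specific copy code) :

```c
#include <string.h>

/* Simplified model of LZ4_WILDCOPY : speculatively copy 8 bytes at a
   time, overshooting the target, until dst reaches end. The loop
   condition is tested AFTER each copy : when end is below dst from
   the start (the overflow case), the body runs exactly once, so
   only 8 bytes get written. */
static void wildcopy(const unsigned char *src, unsigned char *dst,
                     const unsigned char *end)
{
    do {
        memcpy(dst, src, 8);
        dst += 8;
        src += 8;
    } while (dst < end);
}
```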

ip -= (op-cpy); op = cpy;

That's where it starts to become nasty. With op = cpy, the destination pointer is now at a forbidden area. Note that ip has changed too ! Basically, both ip and op have been translated by the same amount.

ip is now somewhere in memory. We don't know where exactly, but it's likely to be an unauthorized memory segment. So the next line of code matters :

LZ4_READ_LITTLEENDIAN_16(ref,cpy,ip); ip+=2;

In LZ4, a literal copy is necessarily followed by a match copy. This line calculates where the data to be copied is, and stores it into the pointer ref. At this point, some OSes will crash, due to unauthorized or unmapped access, but we'll take the assumption that the read silently passes.
By definition of the format, ref = op - distance;
op is currently the same as cpy, so it points to an unauthorized memory area.
distance has just been provided by ip. But ip, too, points into an unauthorized memory area. So that means we don't know what ip is reading : distance is basically a random 16-bits number.

So now, we have ref < op = cpy < validBuffer.
At this stage comes the next line of code :

if unlikely(ref < (BYTE* const)dest) goto _output_error;

Translated, it means that if ref < validBuffer, the decoder detects a problem, and immediately stops decoding. Which is the case here. As a consequence, the overflow finally gets detected here, and the decoder exits with an error code.

OK, so if the issue was already solved, why keep issue 52 open on the board ? Well, I vaguely remember that I almost believed for a time that the issue was solved, but then realized that this check wasn't good enough. Indeed, after some more scrutiny, here is one border case scenario which requires some additional conditions.

In this specific scenario, it is possible to make the detection fail with the help of a bit of luck : suppose that cpy is so small that its value is < 64K. This requires either being extremely lucky, or knowing the start address op before even generating the crafted payload. This is a very strong hypothesis, and suggests either the existence of another exploit, or a level of insider knowledge associated with predictable allocation practices. But for this exercise, we will nonetheless suppose just that.
Now, let's also suppose that we get lucky, and distance (which is not controlled by the attacker, so is basically a random number) is large enough to make ref underflow the memory address space. Now ref stands at a very high address, and therefore passes the detection test.

What follows is a match copy, whose size is controllable by the attacker thanks to the token, from 4 to 18 bytes (beyond that, the size is no longer controllable). Let's suppose we'll only copy 4 bytes, from ref, a very high address presumed unauthorized, to cpy, a very low address < 64K.
This is starting to spell trouble, since an unauthorized memory area gets modified.
Note however that the attacker still does not control what is being copied.
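As a reminder of where this 4-to-18 range comes from : the low nibble of the token encodes the match length, offset by the 4-byte minimum match, and the value 15 signals extra length bytes to be read from ip, which in this scenario the attacker does not control. The helper name below is made up for illustration :

```c
#define MINMATCH 4   /* LZ4 minimum match length */

/* Match length controllable through the token's low nibble :
   values 0..14 give lengths 4..18 directly ; value 15 means the
   length continues with extra bytes read from ip. */
static int controllable_match_length(unsigned char token)
{
    int ml = token & 0x0F;
    if (ml == 15) return -1;   /* continuation : no longer controllable here */
    return ml + MINMATCH;      /* 4 to 18 bytes */
}
```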

The next stage therefore is another literal copy. It seems it is the right moment to deliver the code to execute ?
Unfortunately, ip is currently lost, somewhere in memory. It will deliver a random token, followed by a random literal sequence.

As long as both ip & op remain small enough, they will evade detection, and the algorithm will continue to write random noise at the op position. But as soon as ref stops underflowing the memory address space, which is probable at each step and becomes necessary beyond 64K, it will get detected, and trigger an error code.

So, the conclusion is :
  • A blind overflow attempt of literal length is extremely likely to get detected
  • To remain undetected, the overflow must be accurate, and finish into the 0-64K memory segment (guaranteeing the success of this operation requires complementary knowledge)
  • It's not possible to start writing beyond address 64K without being detected.
  • Whatever gets written into this low memory segment is some random copy & literal operations from other parts of memory, which are not under the control of the attacker.
This does not mean the situation is safe : writing noise into the low-memory area can result in bad consequences, likely a process crash. Furthermore, the indirect protection requires reading a few bytes from an unauthorized memory area, which is also susceptible to crash the program. These are good reasons to update today to r119 for 32-bits applications.

Wednesday, July 2, 2014

Vulnerabilities disclosure - how it's supposed to work

Note : there is a follow-up to this story

 In the lifetime of LZ4, I've received tons of feedback, feature requests, warm thanks, bug disclosures, often with a bugfix freely offered.

This is the essence of open source : through disseminated usage and reports, the little piece of software gets exposed to more and more situations, becoming stronger, and reaching over time a level of perfection that most closed-source implementations can only dream of, given the limited capabilities of internal testing alone. The strength of the formula relies totally on such feedback, since it is impossible for a single organization to guess and test all possible situations.
It is an enlightening experience, and I can encourage every developer to follow a similar journey. It will open your eyes and perspective into a much larger linked world.

Among these issues, there were more than a few security bugfixes. I can't say how much I thank people for their welcome contributions in this critical area. Disclosures were received by email, or using the LZ4 issue board. Typical contributors were modest professional developers, providing their piece of help for free, on top of a larger and larger edifice, and getting a quick "thanks notice" in the version log, an honor they were not even requesting. It is this modesty and positive construction mindset which really touched me and drove me forward.

Such early security fixes deserve more praise today than the recent "integer overflow hype" that was debunked on this blog, and nonetheless received inappropriate media coverage. It's a sad state of affairs, since much more important issues got fixed far more politely and securely, out of a sincere desire to improve the situation. Over time, such security bugs became less and less common, and almost disappeared, basically proving the strength of the open source formula, an implementation becoming stronger after each correction.

When an implementation becomes widespread, the rules of security disclosure gradually change. That's because the number of systems and users potentially exposed becomes too large to play with. An excellent illustration of a proper reporting process is explained on the CERT (Computer Emergency Response Team) website. The process is both transparent and clean. The security auditing firm starts by contacting the upstream developer team, ensuring the bug is understood and a fix undertaken. It sets a typical disclosure delay of 45 days, to ensure the upstream organization gets an incentive to solve the problem fast enough (without the delay, some organisations would opt for not doing anything at all !). After that delay, a public disclosure can be sent. Hype is optional, but it's not a problem either, as long as users get correctly protected beforehand. This is quite commendable, and the proper way I understand security in the 21st century.

That's why, in retrospect, I was so angry last time about DonB's conduct. I initially reacted to the overblown vulnerability description, and only realized later that it was also a question of disclosure policy. With a usual level of communication, everything would have been fine, as always.
Unfortunately, this is not what happened. The auditor barely left a footnote on the issue board to request a severity raise on an item (which was accepted), and then simply cut contact, focusing instead on creating big headlines, overselling a security risk to the maximum number of media outlets. In doing so, he never looked back, never ensured that a fix was ongoing and being deployed to protect users before disclosure.

This behavior is totally atypical of a respectable security firm. In fact, it is in total contradiction with the core values of security auditing. It is considered insane to prioritize media coverage over public safety. I was thinking : "Damn, suppose he's right..."

Yes, let's just suppose it.
That means the security auditor decided to release to the public, hence to all electronic criminals worldwide, a critical vulnerability, "RCE level", the highest possible danger which allows remote execution on the target platform, without even bothering to check whether a fix was completed and being deployed to protect exposed users. This is so borderline, the scope of risk is so huge for society given the number of systems exposed, that I was incredulous. So much havoc, just for a headline in the media ? Is that even within the limits of the law ?
As a self-stated security expert in a security firm, the auditor was expected to show much more responsible behavior.

Fortunately, the risk was not as large as claimed. And since then, I've been tricked into believing it was a genuine mis-communication lapse, thanks to reassuring words from the auditor himself, acknowledging the communication problem and wishing to improve the situation for the future (and linking the nice words on Twitter to spread a positive image). Willing to calm the situation down, and considering that the previous vulnerability couldn't result in anything serious, I issued a much more neutral statement to close the story, naively expecting that, from then on, a normal level of communication would be restored, with future bugs following the usual report process, starting with an issue notification, and a discussion.

I was wrong.

In total contradiction with his own statements, DonB doubled down earlier today, broadcasting a new vulnerability directly to the wild, without a single notification to the upstream developer :

The new vulnerability could be correct this time. I have not been able to prove/disprove it myself, but have no reason to disbelieve it. Some specific hardware and debugging capabilities seem required to observe it though.

Apparently, the risk is valid for ARM platforms (maybe some specific versions, or a set of additional platform-specific rules, the exact scope of which I don't know). I doubt it is only hardware related ; I believe it must be OS-driven instead, or a combination of both.
The key point is the ability of a 32-bits system to allocate memory blocks at high addresses, specifically beyond 0x80000000. This situation never happened in earlier tests. Each 32-bits process was encapsulated by the OS into its own virtual address space, whose maximum size is 2 GB, irrespective of the total amount of RAM available (this was tested on 4 GB machines). With no counter-example at hand, it became an assumption. That assumption was key in the discussions assessing the gravity level, and remained undisputed up to now. Today's new information is that this situation can in fact be different for some combinations of OS and hardware, the precise list of which is not clearly established.
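As a hypothetical illustration, such a condition could be probed along these lines ; whether malloc() ever returns such addresses depends entirely on the OS memory layout and allocator, so this is a guess at a test method, not a validated one :

```c
#include <stdlib.h>
#include <stdint.h>

/* Probe : within a 32-bits process, does the allocator ever hand out
   blocks beyond address 0x80000000 ? The answer depends on the OS
   (e.g. a 2 GB user/kernel split forbids it), not on hardware alone. */
static int high_address_seen(int attempts, size_t blockSize)
{
    int seen = 0;
    for (int i = 0; i < attempts; i++) {
        void *p = malloc(blockSize);
        if (p == NULL) break;
        if ((uintptr_t)p >= 0x80000000u) seen = 1;
        /* intentionally not freed : freeing would let the allocator
           recycle the same low addresses on the next attempt */
    }
    return seen;
}
```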
[Edit] The vulnerability existence can now be tested, using the new fuzzer test available at
Should you have a system able to generate such a condition, you're very welcome to test the proposed fix on it. The quality of the final fix for this use case will depend on its ability to be tested.
The issue tracker is at this address :
A first quickfix is proposed there.

For the open source movement, a vulnerability disclosure is welcome, because over time it serves to improve the code, each fix making it an even better proposal to the world. That's a view I totally embrace.
Obviously, there are many ways to disclose a vulnerability. The CERT process is very clean, but even a simple mail or a note on the issue board is a good first step. It's just common sense. At the end of the day, the objective is to get the issue fixed and deployed quickly and efficiently, ensuring little to no risk for the users, by reducing the window of opportunity for malware hackers.

I can only regret that this latest disclosure shares none of these goodwill elements. By going straight to the public, without ever notifying the upstream developer, and spreading via social media amplification to ensure maximum exposure for every user and system already deployed, the security auditor transforms valuable information, for which he could have been praised, into a harmful act testing the limits of the law. That's plain stupid, and a lost opportunity.
Should the vulnerability be confirmed, and I have no reason to think otherwise, his article purports to provide a textbook for performing an exploit on millions of potential ARM machines throughout the world. It's not just childish, it is incredibly dangerous. And all this, for the sake of... pride.

Willingly putting public safety at risk for the sake of immature personal satisfaction has earned him a reputation within his own professional circle. Do you now want to entrust your critical security vulnerabilities to such an auditor ? Sure. We must hope today that there will be no accident as a result of the released exploit description, but as long as the patch is not deployed, we can't guarantee such a happy ending.

[Edit] : After further analysis, it seems the new overflow issue remains relatively difficult to exploit usefully. It has opened new fronts, but still requires some favorable conditions, outside of attacker control, such as allocation at the extreme end of the available memory address range. With most LZ4 programs using small blocks, overflow risk is a rarity, if not provably impossible depending on allocation rules. Relatively large data blocks remain a necessity for a good success perspective. Previously published conditions still apply to design an interesting attack scenario.
Still, with a fix available, updating your 32-bits applications to r119+ remains a recommended move.

[Edit 2] : End of the Linux kernel risk assessment. This potential overflow has no useful attack scenario for now. It is nonetheless getting fixed, to protect future applications. Knowing the current list of applications using LZ4 within the kernel, the only remaining attack scenario is a boot image modification. When such a scenario is possible, you've got a lot more to worry about : a non-guaranteed potential overflow under cooperative conditions pales in comparison with a direct modification of the boot image, typically inserting some worm code.


[Edit] : Sigh. And now, make way for an organized straw-man campaign, basically putting words in my mouth that I never said and strongly disagree with.
To put the record straight, here are a few affirmations I stand for, today like yesterday.
OK, so where is the problem ?
  • Broadcasting a vulnerability to the wild without providing even a single notification to the upstream organization cannot get close to the definition of ethical disclosure, under any possible metric
  • Conflating gravity levels and spreading meritless fear to the public to harvest some free advertisement is a despicable scare-mongering practice
The debate over Responsible Disclosure is not new. In fact, it is gaining strength, precisely because software becomes ubiquitous. Long gone are the days when a vulnerability would mostly put at risk some geek's computer primarily used to play games. This is no longer "for fun". Computing is the backbone of our most critical services, and with the Internet of Things, it's also going to be present in everyday devices, including medical pumps, surveillance cameras, water probes, etc.

In Responsible Disclosure, there is Disclosure, which is a good thing. There is also Responsible, which translates into a notification, and a fix delay. How long the delay should be is a difficult question, since it can be abused by either side. CERT says it's 45 days. I don't know. But I'm sure it's something above zero. There is also a huge difference between a calm notification on a public issue board, which remains nonetheless public, and a communication campaign designed to make a vulnerability known to a maximum number of people before a fix gets a chance to be produced and deployed.

In Europe, the law has picked its side, ruling in simple words that providing to the wild a manual to create a cyber attack is about as good as providing a plan for a bomb. However, only the most flagrant cases have met a judge, resulting in few convictions, mostly when the specific charge of "willful harm" was on the table.

I do believe that this debate will continue, and over time shift the norm towards Responsible Disclosure, which is common sense. I also believe that, while currently justice mostly gets involved when an offender explicitly targets a plaintiff, it will no longer remain acceptable to consider public safety a dispensable collateral victim, exposed by the fire of an advertisement exercise or a personal revenge.


As far as we are concerned, the goal of the game is to get a safer implementation of LZ4 available to the general public, trying nonetheless to keep the window of opportunity as small as possible. In the longer run, the episode will serve as another reinforcing stone, providing a security benefit to the open-sourced edifice. But in the short term, we suffer exposure.

The new status is as follows :
  • The vulnerability affects only 32-bits applications (64-bits are safe)
  • The new vulnerability affects systems allocating memory beyond address 0x80000000 (others are already safe with r118)
This last point is very difficult to assess. It seems Windows systems are safe, for example, but that still leaves a lot of other systems to check. The new fuzzer tool is now designed to test the existence of this vulnerability, and to check the efficiency of the last fix against this new exploit scenario.

You can get the new fuzzer tool and the proposed fix at :
The fix seems to provide good results so far, don't hesitate to test it on your target system, should it match the above conditions.

[Edit] : the fix is good to go to master, hello r119 !
[Edit 2] : since the second condition is relatively difficult to assess, the fix is recommended for any 32-bits application.
[Edit 3] : the kernel version will get fixed too, but in the meantime, don't worry too much : there is no known practical kernel attack scenario once we consider the programs using kernel functions today. The fix will basically protect future potential programs.


Saturday, June 28, 2014

Vulnerability assessment

[Edit] There is a new follow up to this story.

I've received an answer from Don Bailey. He blames the situation on a lack of communication. OK. In an attempt to bring the discussion to a more neutral level, this post is dedicated to providing a hopefully clear, concise and factual description of the situation. I hope it will help the reader to properly assess the risk level.

  • There was a vulnerability, it was fully described by Ludwig Strigeus and tracked for correction on the LZ4 issue board
  • The vulnerability is fixed. Update is recommended for 32-bits applications
  • The vulnerability requires uncommon conditions to be exploitable :
    • Input data supplied by untrusted source
    • Large block sizes, preferably ~16MB (beyond LZ4 file format specification)
    • 32-bits applications only
  • These conditions are not met by currently known programs using LZ4
    • For a list of them, get to the LZ4 homepage. It's correct to state that some yet unknown application may match these conditions. That's why the reference version has been fixed
    • Edit : One use case has been found, involving custom scripts to receive untrusted data from external sources using a Python version of LZ4 while not enforcing block size limits. The targeted Python version was using an unprotected version of the decoder. Conclusion remains the same : the Python version has been updated to cover such situation, and is a recommended upgrade
  • The Linux Kernel version, which is different and supplied by LG, has this vulnerability too
  • Linux Kernel programs (which are the only ones to access kernel functions) do not match the conditions that would make it exploitable
  • The Linux Kernel version is nonetheless being fixed too, to avoid any future risk for future versions of the kernel

Hope it helps.

Thursday, June 26, 2014

Debunking the LZ4 "20 years old bug" myth

[Edit] : There is a follow-up to this story.

[Edit 2] The post below was redacted right after the initial publication. Should you wish for a much shorter, and arguably more neutral, summary of the situation, please consider reading the follow-up.

 A recent post on a security blog has claimed that LZ4 is affected by a subtle bug which could result in remote code execution on basically any machine using the LZ4 algorithm. Given that LZ4 is installed on every modern Linux distro, including critically Android, is also part of modern file systems such as ZFS, shipped with FreeBSD and Illumos, is used within widely deployed databases such as MySQL, and powers big data nodes running Hadoop, it must be a pretty big deal.

The article then does a fairly good job of describing the conditions required to trigger the risk, but unfortunately, these explanations are scattered somewhere between the middle and the end of an overly long technical article, ensuring most readers will stop at the dramatic headline. The present article's objective is to counterweight those "high stakes" claims.

First, there is a problem of methodology. The author left a brief note on the LZ4 issue board, and then, without even listening to the detailed explanations you'll find below or engaging in any mitigation activity, went alone towards advertising it to the widest possible audience just a few days later, creating exposure risks for users. This falls far short of the professional behavior expected from a security firm.
[Edit] : Later comments from DonB state that this situation is due to him not receiving notifications about answers being provided for him on the board.

Second, the author claims to have found this subtle risk through careful code study of his own. This is not true. The risk was identified by Ludwig Strigeus, the creator of µTorrent, quite some time ago. Ludwig did a very fine job of describing the risk. Instead of trying to make a headline, he proposed a solution for it. After multiple partial fixes, the risk was finally plugged recently (just a few days before DonB's second "disclosure"). Why so much time, you may ask ?
Well, because there was no real-world risk.

Let's now get into the technical details.
In order to use the vulnerability, a number of conditions must be met.
The first, minor one is : it is necessary to work within a 32-bits environment. 64-bits is totally unaffected. That basically puts most server-side applications outside of the risk zone.

Second, the attacker needs to forge a special compressed block to overflow the 32-bits address space. This is possible... if the compressed block is something like 16 MB.
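A back-of-the-envelope calculation shows where the 16 MB figure comes from : in the LZ4 format, a literal length beyond 14 is encoded as 15 in the token plus continuation bytes worth up to 255 each, so every additional input byte raises the length by at most 255. The helper below is illustrative :

```c
#include <stdint.h>

/* Minimum input bytes needed to encode a literal length of
   targetLength : 1 token byte, then enough 255-valued
   continuation bytes to reach the target. */
static uint32_t bytes_needed_for_length(uint64_t targetLength)
{
    uint64_t remaining = targetLength - 15;        /* the token already encodes 15 */
    uint64_t contBytes = (remaining + 254) / 255;  /* 255 per continuation byte */
    return (uint32_t)(contBytes + 1);              /* + the token byte itself */
}
```

For targetLength = 2^32, this yields about 16.8 million bytes, hence the ~16 MB order of magnitude.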

There is just a little problem with that : the legacy LZ4 file format is limited to 8 MB blocks. Any larger value simply stops the decoding process. 8 MB is not enough to trigger the problem. The newer streaming format is even stricter, with a hard limit at 4 MB. As a consequence, it's not possible to exploit that vulnerability using the documented LZ4 file/streaming formats.

Well, you say, but what about programs which use their own, undocumented, format ?
Indeed, but the same condition applies. To be exposed to this risk, very large blocks of 16 MB or more must be read by the decoder.
Does that happen ?
Let's have a look at several high-profile programs using LZ4. ZFS ? Max block size is 128 KB. Lucene ? Typical index size is 16 KB. It could be a bit more, say 64 KB, that's still far short of the objective. zram maybe ? nope, 4 KB memory segments. Linux kernel ? The boot section has to decompress a multi-megabytes kernel into memory, surely it can match the limit ? Yes, it does, but it uses the LZ4 Legacy File format, which limits each block to 8 MB maximum. OK, maybe AAA games such as Guild Wars 2 ? nope, real-time protocol is the realm of short messages, the protocol itself doesn't allow anything beyond a few KB. And so on, and on.

At the end of the day, none of the known implementations of LZ4 is exposed to this risk.
Basically, most user programs employ LZ4 for small data packet structures, way below the critical limit. Programs which generate and distribute large compressed blocks (notably the lz4c posix compression utility, distributed within Linux distros) use the documented streaming format, which limits block size to 4 or 8 MB. Also remove from the list programs which never take "externally provided" data as input ; they can't be targeted either.

So sorry, but this is not the "new heartbleed" situation the author seems to dream of.

Nevertheless, it's a good move to close this risk, just in case, in the future, one implementation inadvertently wanders into the area of "custom compression format using large blocks of > 8 MB on 32-bits systems, receiving data from untrusted external sources". Granted, this scenario stands in the low probability range, but it's nonetheless good to plug it. Finding a solution without undesirable side-effects took some time though, but that's now done within the current LZ4 release available on GitHub and Google Code.

It's one thing to say there is a potential vulnerability that should be fixed, to ensure it does not become exploitable in the future. It's a totally different thing to pretend there is a dangerous RCE (Remote Code Execution) exploit currently active on the Internet, which is scary. DonB's article cleverly conflates the two, implying the second to create a flashy headline, while disseminating some minor disclaimer elements throughout the long article in order to claim, whenever necessary, to have only said the first. That's clever, but conflating severity levels to grab some free publicity is not respectable behavior from a security firm.

I'm also bitter at the misappropriation of the bug finding. The real identifier is Ludwig Strigeus ; let's make that clear. The long list of "credits" at the end of DonB's article is another reason for caution : it happens that I asked some of the influential names listed there, and they told me they had barely heard of the guy. Fame by association ? Sure, please thank Linus Torvalds for "coordinating and advising" on the issue.

Finally, I'm also angry because security matters, a lot. Triggering too many alarms to grab a bit of fame is a good way to weaken the power of future alarms. You can guess that when every headline claims a "new heartbleed" situation, no one will pay attention to the next real one, the one which will matter. And that's dangerous.

Anyway, should you feel nonetheless at risk now, please don't hesitate to update your LZ4 version. It's a good thing to do anyway, and as stated previously, the vulnerability was already patched.

Tuesday, May 20, 2014

Streaming API for LZ4

 For quite some time, the LZ4 streaming API project has been started and delayed, as other priorities stepped in the way. To be fair, one important delaying factor was the difficulty of defining a "clean enough" API : something that would be simple to use and understand, while also providing the level of fine-tuning required by advanced programmers (typically within embedded environments).

I feel quite close today to breaking this shell, with an interface definition which matches my idea of a "clean enough" API. So let's share some preliminary results.

Current streaming interface

The current streaming API exposes the following functions :

void* LZ4_create (const char* inputBuffer);
int   LZ4_compress_continue (void* LZ4_Data, const char* source, char* dest, int inputSize);
char* LZ4_slideInputBuffer (void* LZ4_Data);
int   LZ4_free (void* LZ4_Data);

Although it works as intended, this API introduces some critical limitations.

First, there is a need to create a data structure which tracks the behavior of the stream.
This is void* LZ4_data, which will be generated by LZ4_create().

It looks fine, but a first issue is that allocation happens inside LZ4_create(). In some circumstances, this may not be acceptable : sometimes allocation must be fully controlled by the host process. For example, standard functions such as malloc() may be unavailable. Two more functions were created for this use case :

int LZ4_sizeofStreamState(void);
int LZ4_resetStreamState(void* state, const char* inputBuffer);

It looks fine too, but is unfortunately still not usable for static allocation purposes (on the stack, for example), since the state size is only known at run time.

The second issue is the const char* inputBuffer argument, specified at creation stage, because it will be fixed during the entire compression process. It's not possible to jump between memory segments. This is a major limitation, preventing "in-place" compression of scattered memory segments.

LZ4_compress_continue() looks fine as a function definition, but what is less so is the documented requirement : up to 64KB of previously processed data must stand *right before* the data to be compressed. This is a major constraint, typically requiring data to be prepared in an intermediate buffer (which, in some circumstances, might prove an unaffordable burden).

Then there is LZ4_slideInputBuffer(), which is arguably not at the right level of responsibility. Its mission is to copy the previous 64KB of source data to the beginning of the input buffer. It only exists because the content of void* LZ4_data is not accessible.

No function is defined to simply load a dictionary : to achieve that goal, one has to compress a segment using LZ4_compress_continue(), throw away the result, and compress the next segment using the data accumulated so far.

To summarize, yes, it can work, but it's cumbersome. One has to accept the idea that data to compress must be prepared in an intermediate buffer where the above conditions can be met.
It's not convenient, and may sometimes not be possible, for example due to allocation limitations.
It also has a negative impact on performance, due to memory copy operations overhead.

New streaming interface definition

While defining the streaming API, I constantly struggled to find the right balance between a "fully automated" API, and a "user-controlled" one. The "fully automated API" would take in charge all kind of workload, generate data respecting the LZ4 Streaming Format with full headers and markers, and take care of allocation for required buffers. On the other hand, the "user-controlled" API would serve the needs for host programs which require full control over resource allocation.

I therefore settled with providing both.
The "user-controlled" version will be part of the LZ4 library. It's the simpler one, and only takes care of generating the compressed block format, chained or independent. It relies on the host process to pay attention to all memory operations (allocation, position and availability) and to provide its own custom format linking the successive blocks.

The "fully automated" API will be part of a new library, which will be called LZ4S, for "LZ4 Streaming". It is basically a user of the previous API.

In this blog post, we will therefore concentrate on the first API, since it is also the underlying foundation of the second one.

typedef struct { unsigned int table[LZ4_STREAMSIZE_U32]; } LZ4_stream_t;

int LZ4_compress_continue (void* LZ4_stream, const char* source, char* dest, int inputSize);

OK, let's focus on the most important differences with current API.

An LZ4_stream_t structure is exposed. It is provided to give a better control over memory management to the host program. For example, allocation can be static (stack, global) or dynamic, and use any user-defined allocation function.

Obviously, the host program is both in charge of allocating and freeing this memory space. It may also duplicate it "at will", which is a good idea for "static dictionary compression" scenarios.

Cleaning (or resetting) LZ4_stream_t is only a matter of memset()-ing it to zero. Doing so once, before first use, is a requirement.

LZ4_stream_t 's size is controlled by the value LZ4_STREAMSIZE_U32, which is checked at compile time thanks to a static assert piece of code, as already used within xxHash. It mostly depends on LZ4's hash table (whose typical size is about 16KB). Hash table size is a parameter controlled by the defined value LZ4_MEMORY_USAGE. Up to now, this define was present in lz4.c. To be coherent with the new interface, it will be moved to lz4.h instead. I don't foresee any issue with that.

LZ4_stream_t is described as a table of unsigned int. This is intentional. The real usage of this memory bank remains hidden inside lz4.c. This way, internal variables within the structure cannot be used as a kind of implicit interface contract, allowing future modifications to happen without breaking compatibility with existing programs.

Once the streaming structure is created, you can start to populate it by loading a static dictionary using :

int LZ4_loadDict (void* LZ4_stream, const char* source, int size);

This part is optional. Loading a dictionary flushes any prior data reference from LZ4_stream_t , if there is any, so you may also use this function to initialize an LZ4_stream_t structure by simply setting a size of 0.

LZ4_compress_continue() looks the same as previously, and is indeed compatible, but a major difference is that it no longer requires the next data block to be compressed "right after" the previous one. Previous and next data can be anywhere in memory. The LZ4_stream_t structure will keep track of it automatically.

One strong assumption of LZ4_compress_continue() is that previously compressed data is still available. Unfortunately, this might not be the case, and this function cannot guess that.
Should previously compressed data be no longer accessible, for example because you need the space for something else, or you can't guarantee its availability, it's possible to solve that situation :
  • You can "save" the relevant data segment from the previous block (typically the last 64KB ; it can also be less than that) into a "safe" place. At which point, you will need to indicate this change of position :
int LZ4_moveDict (void* LZ4_stream, char* safeBuffer, int dictSize);

Memory space available at char* safeBuffer must be at least dictSize, since it is the amount of data LZ4_moveDict() will copy there (Note : the maximum amount of data that will be copied is 64KB, even if dictSize is larger).


The current streaming decompression API is defined as follows :

int LZ4_decompress_safe_withPrefix64k (const char* source, char* dest, int compressedSize, int maxOutputSize);
int LZ4_decompress_fast_withPrefix64k (const char* source, char* dest, int originalSize);

Its documented requirement is that previously decompressed data must stand *right before* the memory address where the next data block is going to be decompressed. It works, but implies the creation of a temporary buffer to match the requirement.

The new streaming API gets rid of this limitation, allowing previous data to stand anywhere within accessible memory :

int LZ4_decompress_safe_usingDict (const char* source, char* dest, int compressedSize, int maxOutputSize, const char* dictStart, int dictSize);
int LZ4_decompress_fast_usingDict (const char* source, char* dest, int originalSize, const char* dictStart, int dictSize);

The dictionary definition is part of the argument list (const char* dictStart, int dictSize), therefore there is no need for a tracking structure, in contrast with compression.

A first implementation of this API is currently available in the "dev" branch of LZ4 on GitHub. It is still at an early stage, so it's probably the right time to provide your comments, should you have any.

[Edit] Added : LZ4_moveDict() as a potential replacement of LZ4_SetDictPos()
[Edit] Added : Paragraph on decompression
[Edit] Modified : move to LZ4_moveDict()
[Edit] Interface definition converges towards LZ4_compress_continue()
[Edit] "streaming" branch now merged into "dev" branch

Monday, April 7, 2014

Taking advantage of inequalities to provide better compression

 When starting investigation on ANS properties, in November 2013, I stumbled upon the fact that positions in the table are not equivalent. Basically, the low states are more "worthy" than higher states. Back then, it wasn't properly explained nor forecast by the theory.

On first look, it may seem like a nuisance : usual arithmetic coders offer clean, flat probabilities, and this uneven layout seems like added "noise" on top of an already complicated algorithm. On second thought, it looks like a potential opportunity, even if a complex one, to improve accuracy by taking advantage of such differences.
With many other issues to solve to get a working ANS decoder out, this issue was left for a later investigation. And here we are today.

The same conclusion was later demonstrated by Charles Bloom, within his series of blog posts on ANS. He produced a graphic which proved much clearer than the stats produced the year before.

Probability of each state value

Note that the "clean graph" is obtained through an average of multiple "synthetic" inputs, not real files, and requires the "precise distribution algorithm" for the layout of symbols. If you try to reproduce this result on your own, you are likely to obtain a graph which roughly produces the same perspective, but not with the same accuracy.

As stated by Charles, the graph follows quite closely a 1/X distribution. It's an important conclusion, because it hints towards a formula able to "guess" the best spot for a singleton probability. It seems that, as long as P is between sqrt(2) and sqrt(2)/2, we can find its optimal position in the table.

It's an interesting result that we'll try to exploit. I made multiple tests with different P values, looking for the optimal position in the table, and the result was relatively close to the 1/X formula (the best position is usually slightly different, but the overall difference in file size is extremely small).

  P    1/X position  Best position  File size difference
0.707      511            499            4 bytes
0.850      339            351            2 bytes
1.000      212            209            2 bytes
1.150      117            115            1 byte
1.300       44             43            2 bytes
1.414        0              0            0 bytes

So we've got a way here to improve accuracy of singleton probabilities (those for which the normalized frequency is 1). Can we use that ?

Unfortunately, we must also take into consideration the increased header cost of specifying the position accurately. If a symbol probability is normalized to 1 (a singleton), it means its number of occurrences is relatively small. Therefore, we can't spend 10 bits to provide the exact best position of a rare symbol in the table ; it would be counter-productive.

In order to experiment, I targeted the easy gains, and took a look at low-probability symbols, those for which realP < 1. Such cases are unfortunately quite common. They are also the most wasteful ones, since they require the minimum normalized frequency, which is 1 (even if realP = 0.01, we can't round that down to zero, otherwise the compression would be lossy).
One way to minimize their loss is to put them at the top of the table, where the probability is lowest (approximately sqrt(2)/2 ~= 0.707). The end effect is somewhat equivalent to what Charles Bloom did within his own tANS version, but in a more "directive" manner, since we are cherry-picking which symbols must go at the top of the table.
The impact on header size is going to be very moderate, since now we just have to distinguish between a "normal 1" and a "low 1".

Let's check if this method produces some worthy gains :

Table size  Low1     HC     Fast3   Gains    Fast1
4K         435286  435426  435398  0.026%  435723
2K         435677  436041  436072  0.091%  436783
1K         436691  437756  437875  0.271%  439181

Indeed, this is much better than anticipated. This new version handily beats the "perfect normalization algorithm", by taking advantage of unequal probabilities within ANS table. It is therefore now integrated within the open source version of FSE on Github.

A great property is that this improved table initialization algorithm is achieved with the same speed as before, so it's basically an improvement we get for free.

Another important property is that the gain improves as table size decreases. It's great : it means we can consider reducing the table size, hence its memory cost, while keeping the compression loss in check. Notice how we get better compression performance today than the earlier version (Fast1) produced using tables twice as big !

It's an important benefit, and it enables some critical RAM saving, essential to improve Zhuff performance.
But let's keep that part for a later blog post...