Open-source crypto is no better than closed-source crypto

TL;DR: This post makes the point that on average open-source crypto is not safer than closed-source crypto, based on the author’s experience. YMMV.

An idea behind Linus’s law is that open-source software (OSS) has fewer bugs than closed-source software (CSS) because more people have access to OSS and to its code. Consequently, more people would use the software, read its source code, find bugs therein, and report such bugs. This post discusses this idea in the context of crypto bugs, which are software or logic bugs in cryptography components.

I’d like to sort crypto bugs into four categories, based on how they are found and how hard they are to find, using examples from my recent talk about bugs in blockchains (which are often crypto bugs). These thoughts are mostly based on my experience reviewing crypto software over the past decade, for a variety of projects, whether tech start-ups, large firms, government organizations, or blockchain companies.

  • Usage bugs: These are bugs that you can find by using the application, without reading the code. For example, the benign SHA-512 out-of-bounds read from slides 45-49. For this kind of bug, OSS doesn’t necessarily win, given that 82% of projects on GitHub have fewer than 45 stars (a figure I just made up, but you get the idea). In CSS I haven’t noticed a major difference in quality between widely used and marginal applications, which suggests that usage bugs are sparse.
  • Primitive bugs: These are bugs caused by a wrong choice of crypto primitive or protocol. Reading the documentation is often sufficient to spot them, without even delving into the code. Primitive bugs weren’t uncommon ten years ago, but today everybody has an Internet connection and can learn which primitives to avoid. A recent example is the use of a collision-vulnerable hash function in IOTA signatures, as discussed on slides 24-25. In my experience, OSS has tended to have fewer primitive bugs in recent years, one reason being that CSS sometimes depends on other or legacy components and therefore can’t freely choose all of its primitives.
  • Misuse bugs: This is when the right primitive is used, but in an insecure way. For example, Lisk uses Ed25519 signatures and secure hash functions to generate addresses that are too short (see slides 17-20). Other common pitfalls include stream ciphers with collision-prone nonces, or an insufficient number of PBKDF2 iterations (both are sketched in code after this list). These bugs are most often found in code reviews and are relatively easy to find, if somewhat harder than primitive bugs. Restricting ourselves to misuse bugs, I don’t think I’ve seen more horrors per line of code in CSS than in OSS.
  • Hard bugs: To find such bugs, the crypto “pop culture” is insufficient. You generally need to understand a complex protocol or logic, and to have specific, yet not necessarily advanced, skills in domains such as mathematics or programming language internals. For example, the libzerocoin bug discussed on slides 14-16 isn’t obvious to mere code reviewers. Such bugs can be found accidentally, but they have a better chance of being found by experienced people, the kind of people who won’t work for free: during paid audits, when selling the bugs, or in exchange for bug bounties.
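
To make the misuse pitfalls concrete, here is a minimal Python sketch of the two examples named above. It assumes the third-party PyCryptodome package for ChaCha20; the key, nonce, salt, and messages are all made up for illustration:

```python
# A sketch of two misuse bugs: fine primitives used insecurely.
# Requires PyCryptodome (pip install pycryptodome) for ChaCha20.
import hashlib

from Crypto.Cipher import ChaCha20

# Pitfall 1: nonce reuse with a stream cipher. ChaCha20 is a solid
# primitive, but encrypting two messages under the same key and nonce
# XORs both with the same keystream.
key = bytes(32)   # demo-only key; real code would use a random secret key
nonce = bytes(8)  # BUG: fixed nonce, reused for every message

def encrypt(plaintext: bytes) -> bytes:
    return ChaCha20.new(key=key, nonce=nonce).encrypt(plaintext)

c1 = encrypt(b"attack at dawn!!")
c2 = encrypt(b"retreat at noon!")

# With a shared keystream, c1 XOR c2 == p1 XOR p2: knowing (or guessing)
# one plaintext reveals the other, without ever touching the key.
xor = bytes(a ^ b for a, b in zip(c1, c2))
recovered = bytes(a ^ b for a, b in zip(xor, b"attack at dawn!!"))
assert recovered == b"retreat at noon!"

# Pitfall 2: PBKDF2 with too few iterations. Both calls "use PBKDF2",
# but the first offers little protection against offline cracking;
# current guidance puts iteration counts in the hundreds of thousands.
weak   = hashlib.pbkdf2_hmac("sha256", b"hunter2", b"salt", 1_000)
better = hashlib.pbkdf2_hmac("sha256", b"hunter2", b"salt", 600_000)
```

In both cases the primitive itself is unbroken; the fixes are a fresh nonce per message (or an extended-nonce construction such as XChaCha20) and a higher iteration count.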

The upshot is that finding the hard bugs requires people who will diligently read the code, but there are too few such people and too much code around, hence bugs remain in both OSS and CSS. Paid security audits help but won’t find all the bugs, as they tend to be broader than they are deep. Even after audits by qualified people, some bugs will remain undetected, including in widely audited applications. Once again, this applies to both OSS and CSS.

None of this sounds very original, nor specific to crypto bugs as opposed to “normal” bugs. Maybe a difference is that crypto bugs tend to be harder to find than non-crypto ones, while their exploitation tends to be less complex: for example, you’ll rarely need to write complex shellcode and chain exploits in order to exploit a crypto bug. However, certain crypto bugs require major computational power to exploit (such as ROCA or SHA-1 collisions).

Of course, these observations are subject to major selection bias: they’re only based on the projects I’ve reviewed, and you can speculate that companies willing to share their source code for audits or other reasons do so because they’re more confident in their code’s quality (the opposite also happens, when companies ask for an audit because they know that their code sucks).

Some parting thoughts: I’ve only discussed the quality of the code, not the actual risk. You can, for example, argue that bugs in OSS are more likely to be exploited because the source code is out there, but that’s another debate. You can also argue that OSS tends to be reused by other OSS, which amplifies the impact of a given bug.

Thanks to Antony Vennard, Arrigo Triulzi, Jason Donenfeld, Nadim Kobeissi, Nathan Hamiel, and Samuel Neves for their feedback and edits.

One comment

  1. Many of your points are valid, but your article completely fails to address the most important issue.

    I want security and cryptography that I can trust, and that means I want to know that, for free, by default, both I and others are able to audit it. This might mean we take a quick look over the source code and then compile it. It might mean we hire a security expert to audit somebody else’s code so that we can feel comfortable using it. Most importantly, this trust also stems from the fact that if something malicious were in that code, there’s a fairly high chance that someone could spot it. If the code is closed source, then it’s possible that the only people who are allowed to read the code are people who might also not be allowed to reveal its malicious nature. Ultimately, for me, a very important part of the safety comes from how easy it would be to spot malicious code. Remember, companies often have interests that are opposed to their customers’ interests. Because of this, running any closed-source software (outside of a very safe sandbox) is extremely dangerous. Often we run closed-source software anyway simply because we have no viable alternative. However, if we ever have a choice, particularly in security matters, it makes much more sense to go with an open-source product.