RSA can be cracked by careless implementation, but does that mean it’s broken?
Let’s talk about RSA encryption. Last month we wrote about an exploit called Bleichenbacher’s CAT that could impact the RSA key exchange. Today we’re going to discuss how poorly configured RSA can lead to cracked public keys.
This actually isn’t new research; it stems from a 2012 research paper. But a blog post by William Kuszmaul on Algorithm Soup last week kicked the tires on it. You can find that post here; it’s very well written. William is an MIT PhD student who specializes in computer science and algorithms.
Or, if that sounds a bit daunting you can let me hash it out for you.
Either way, today we’ll be discussing the RSA cryptosystem, its weaknesses and why random number generators may not be so random after all.
Let’s hash it out.
Let’s start with RSA
RSA stands for Rivest-Shamir-Adleman, the surnames of its creators. As we’ve discussed before, public key cryptography was actually “discovered” twice, once by the UK’s GCHQ in 1973 – at the time it was deemed impractical – and then by Whitfield Diffie and Martin Hellman (with an assist from Ralph Merkle) in 1976.
At that point things were still fairly theoretical; nobody had been able to build the encryption around a one-way function, one that is easy to compute but nearly impossible to reverse.
Ron Rivest, Adi Shamir and Leonard Adleman set out to accomplish that in 1977. Rivest and Shamir were both computer scientists while Adleman was a mathematician. Eventually after several failed approaches (and having nearly lost all hope), they arrived at prime factorization as a method for one-way encryption.
[Prime numbers – for those who slept through high school math – are numbers greater than one that are only divisible by one and themselves (factors).]
Remember, public key encryption, one-way encryption, asymmetric encryption – whatever you feel like calling it – is what facilitates key exchange with SSL/TLS. We’ve given a high-level look at this before; today we’ll dive a little bit deeper.
When we mention prime factorization, what’s meant is that RSA uses two large random prime numbers (p and q) – we’re talking huge, over a thousand bits long each for a modern key – and multiplies them to create the public modulus.
So, p * q = n
And, n (together with a public exponent, e) = Public Key
Anyone can take the public key and use it to encrypt a piece of data. Typically in the context of SSL/TLS what’s being encrypted is the session key. However, without knowing the values of the two prime numbers, p and q, nobody else can decrypt the message.
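To make the arithmetic concrete, here’s a toy sketch of RSA in Python using the classic textbook primes 61 and 53. It’s an illustration of the math only – real keys use primes over a thousand bits long, generated by a CSPRNG, plus a padding scheme:

```python
# Toy RSA sketch with tiny textbook primes -- never do this in production.
p, q = 61, 53                 # the two secret primes
n = p * q                     # 3233: the public modulus, shared with the world
e = 17                        # public exponent
phi = (p - 1) * (q - 1)       # 3120: used to derive the private exponent
d = pow(e, -1, phi)           # 2753: modular inverse of e (Python 3.8+)

m = 65                        # the "message" (e.g. a session key)
c = pow(m, e, n)              # anyone can encrypt with the public key (n, e)
print(c)                      # 2790
print(pow(c, d, n))           # 65 -- only the holder of d can decrypt
```

Note that encryption needs only the public pair (n, e); recovering d requires knowing p and q, which is exactly why factoring n is the whole game.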
To give you a better idea of the computational hardness of RSA, factoring a single 232-digit (768-bit) number took a group of researchers over 1,500 years of computing time, distributed across hundreds of computers. You can also check out this video.
Typically, key length is what indicates computational hardness. Currently the standard is 2,048-bit RSA keys, up from 1,024, which was allowable until just a few years ago. Some organizations use 3,072-bit and 4,096-bit keys, but as RSA key sizes grow, the added security isn’t commensurate with the extra computational cost of using them.
So, what’s the problem with RSA?
Well, aside from the fact it scales poorly, the Achilles heel for RSA is that it relies on random number generators to determine the prime numbers (p and q).
A random number generator (RNG) is really just a device or program that generates a sequence of numbers that can’t be predicted any better than by random guessing. Hence the name.
From an RSA standpoint, RNGs have two big problems:
- They’re not really that random
- Almost everyone uses the same ones
I’m going to try to keep this high level because I am not a mathematician and I don’t want your eyes to glaze over and roll up into the back of your skull. [Update: we ended up covering RNGs about a month later, you can read about them here] So, here goes:
There are, for all intents and purposes, two ways to randomly generate numbers. The first requires you to map some naturally occurring event or phenomenon that is expected to be random. There’s actually a fairly new biological key generation method being tested at Penn State University that maps the movement of living cells against an overlaid grid and then uses that to generate a cryptographic key. Cloudflare does something similar with the lava lamps in its lobby.
Frankly, the first way is kind of fascinating but it’s also a rabbit hole that we’re going to avoid right now. The second method is called pseudorandom number generation and it relies on computational algorithms. For the sake of RSA encryption we call these cryptographically secure pseudorandom number generators (CSPRNGs) and they produce long sequences of what appear to be random results, which are actually determined entirely on the basis of a shorter value, called a seed.
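To see why the seed matters so much, here’s a minimal Python illustration. Python’s `random` module is a Mersenne Twister – explicitly not a CSPRNG – and it’s used here only to show the determinism; the `secrets` module shows how a CSPRNG sources its seed material instead:

```python
import random
import secrets

# Two generators seeded identically produce identical "random" output.
a = random.Random(2012)
b = random.Random(2012)
print([a.getrandbits(16) for _ in range(4)])
print([b.getrandbits(16) for _ in range(4)])   # exactly the same list

# A CSPRNG avoids predictable seeds by drawing on operating-system entropy:
seed_material = secrets.token_bytes(32)        # 256 unpredictable bits
```

Anyone who knows (or can guess) the seed can reproduce the entire output sequence, which in an RSA context means reproducing the primes.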
Here’s a list of some of the most commonly used CSPRNGs that have been standardized:
- FIPS 186-4
- NIST SP 800-90A Rev. 1
- ANSI X9.17-1985 Appendix C
- ANSI X9.31-1998 Appendix A.2.4
- ANSI X9.62-2005, Annex D
Let’s get back to seeds. When you hear politicians and law enforcement officials talk about encryption backdoors, one way to accomplish that is to disclose the seed that was used during key generation, which lets anyone who knows it determine the value of the keys far faster and more efficiently.
This lack of seed diversity goes back to the first issue RSA has: the prime numbers being generated aren’t really random.
RSA encryption provides less than 99.8% security
The 2012 research paper, titled “Ron was wrong, Whit is right” (alluding to Ron Rivest of RSA fame and Whitfield Diffie of Diffie-Hellman), sought to examine the “validity of the assumption that different random choices are made each time keys are generated.”
We’ve already covered the issues with random number generation not being so random, and that the same methods for RNG are widely used. The result is that many public keys share a prime.
But why is that a problem?
This is where the surprise comes in for anyone who isn’t a number theory expert. If two public keys share a prime, then that prime is a common divisor of both, and, surprisingly, finding the greatest common divisor of two numbers is far easier than factoring either of them: Euclid’s algorithm computes it in time polynomial in the number of digits. Once you have the common divisor, you can recover both private keys and decrypt messages sent to either public key.
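Here’s a small Python sketch of that attack, using tiny illustrative primes. Two moduli that happen to share a prime give it up to a single `gcd` call, and with it both keys:

```python
from math import gcd

# Two victims whose key generators happened to produce the same prime, 101.
n1 = 101 * 103      # first public modulus
n2 = 101 * 107      # second public modulus

shared = gcd(n1, n2)                # Euclid's algorithm: fast even on huge numbers
print(shared)                       # 101 -- the common prime falls right out
print(n1 // shared, n2 // shared)   # 103 107 -- both moduli fully factored
```

No factoring was ever needed: dividing each modulus by the shared prime reveals the other secret prime directly.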
What that means, is that you could, for instance, grab a copy of the session key being exchanged during the SSL/TLS handshake and eavesdrop on the entire connection.
Armed with this idea, the researchers scanned the web and collected 6.2 million actual public keys. They then computed the greatest common divisor between pairs of keys, cracking a key whenever it shared a prime factor with any other key. All in all, they were able to break 12,934 keys. In other words, if used carelessly, RSA encryption provides less than 99.8% security.
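A naive version of that scan is easy to sketch in Python: just run `gcd` over every pair of collected moduli. This is fine for a handful of keys, but at millions of keys the pair count explodes into the trillions:

```python
from math import gcd
from itertools import combinations

def find_shared_primes(moduli):
    """Naive pairwise scan: gcd every pair of public moduli."""
    cracked = {}
    for a, b in combinations(moduli, 2):
        g = gcd(a, b)
        if g > 1:            # a shared prime breaks both keys at once
            cracked[a] = g
            cracked[b] = g
    return cracked

# The first two keys reuse the prime 101; the third shares nothing.
print(find_shared_primes([101 * 103, 101 * 107, 109 * 113]))
# {10403: 101, 10807: 101}
```

With n keys this does roughly n²/2 GCDs, which is exactly the cost problem the paper’s faster algorithm had to avoid.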
That may sound negligible; it’s about two in every 1,000 keys.
But does that mean RSA is cracked? Not quite, just vulnerable. One thing the researchers downplayed in the 2012 paper, and the element that caught William’s attention, was the algorithm they used to factor and crack nearly 13,000 public keys.
According to the authors, they were able to run the entire computation in a matter of hours on a single core. But a back-of-the-envelope calculation suggests that computing GCDs between 36 trillion pairs of keys should take years, not hours.
This was accomplished with an ingenious, yet purposely opaque, algorithm and something called a triple-logarithmic factor. According to one paper, such an algorithm would take roughly 7.65 seconds per 1,000 keys, or about 13 hours for all 6.2 million keys. Another, slightly more elegant version of the algorithm can run 1,000 keys in 4.5 seconds, or about seven and a half hours for all 6.2 million.
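The paper keeps its exact method under wraps, but the standard trick for this kind of speedup is a batch GCD in the style of Daniel J. Bernstein: multiply all the moduli together with a product tree, then push the full product back down a remainder tree, so each key is checked against all the others at once. A rough Python sketch of that idea (an illustration of the technique, not the paper’s code):

```python
from math import gcd

def batch_gcd(moduli):
    """For each modulus n, return gcd(n, product of all the other moduli)."""
    # Product tree: leaves are the moduli, the root is their full product.
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        prev = tree[-1]
        tree.append([prev[i] * prev[i + 1] if i + 1 < len(prev) else prev[i]
                     for i in range(0, len(prev), 2)])
    # Remainder tree: reduce the full product down modulo each n^2.
    rems = tree[-1]
    for level in reversed(tree[:-1]):
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]
    # (P mod n^2) // n equals (P/n) mod n, exposing any factor n shares.
    return [gcd(r // n, n) for r, n in zip(rems, moduli)]

# 13 is shared by the first two moduli; the third shares nothing.
print(batch_gcd([11 * 13, 13 * 17, 19 * 23]))
# [13, 13, 1]
```

Because the trees reuse partial products instead of recomputing trillions of independent GCDs, the whole batch runs in quasi-linear time rather than quadratic, which is how hours become plausible where years were expected.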
Let’s double back to our original question. Is RSA cracked? No, but again, it’s vulnerable. The vast majority of the cracked keys were from devices like routers, or embedded applications like firewalls. This suggests that many manufacturers are likely using the same RNG, and possibly even the same seeding.
Ironically, it seems to be easier to break keys in batches than one by one. That means you need to ensure that none of your public keys share primes with related keys. Also, assess which CSPRNG you’re using and review whether it provides sufficient entropy. Entropy, for lack of a better definition, is a measure of randomness. The more random, the better.
Or, as we suggested in our last RSA-related blog post, you could always just switch to Elliptic Curve Diffie-Hellman.
As always, leave any comments or questions below…