Here’s what happens when your Private Key gets compromised

This is why we can’t have nice things…

One of the most common refrains from Certificate Authorities and the digital certificate industry in general is ‘never let anything happen to your private key.’ Unfortunately, just saying “bad things could happen” is a little vague and lacks punch. So here’s a real-world example that can serve as a cautionary tale for everyone.

Researchers have discovered a pair of malware families that were digitally signed using compromised credentials from Taiwanese tech companies, including D-Link, a multinational corporation that produces networking equipment.

How these cybercriminals were able to compromise the private keys is not yet known. What is known is that they signed malware with the keys.

What happens when you sign malware?

Let’s back up and discuss Code Signing for a moment before we jump into what happens when you sign something nefarious. Code Signing is now a standard practice wherein a software developer goes through validation by a trusted Certificate Authority and receives a certificate and Private Key that can be used to sign scripts and executables.

Now, there’s a little bit going on here, and the easiest way to explain things might be to discuss what happens when you don’t code sign first. Pretty much every device, OS and web browser has been hard-coded to trust as few sources as possible. This is done entirely in the name of security. When you write a piece of software and upload it to the internet without signing it, a warning will trigger in the browser for anyone attempting to download it. The warning will say that this download originates from an unknown source and its contents can’t be trusted.

That’s good. That’s how browsers are supposed to treat downloads of unknown origins. Now, when you Code Sign a piece of software, what you’re doing is adding a digital signature using the private key associated with your code signing certificate. Browsers don’t trust you, yourself, but if they can chain your digital signature back to a trusted root (that is, a certificate from one of those trusted CAs), they will trust you, because the CA, by virtue of issuing you the certificate, is vouching for you.

That was a mouthful, so let me put it another way: when you sign something properly, the browser can trace it back to a certificate it trusts, which grants you trust in turn.
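
To make that idea a little more concrete, here is a minimal sketch of what the sign-and-verify step boils down to, written in Python with the cryptography library. The key pair, the “software” bytes and the messages are hypothetical stand-ins, and a real verifier would also chain the signer’s certificate back to a trusted root rather than just holding the public key.

```python
# Minimal sketch of the sign/verify step behind code signing.
# Assumes the 'cryptography' package is installed; the key pair and the
# "software" bytes are hypothetical stand-ins for a publisher's CA-issued
# signing key and a real executable.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Publisher's key pair. In real code signing the private key belongs to a
# CA-issued certificate and should never leave secure storage.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

software = b"...contents of the script or executable being shipped..."

# Publisher side: sign the software with the private key.
signature = private_key.sign(software, padding.PKCS1v15(), hashes.SHA256())

# Verifier side (browser/OS): check the signature using the public key from
# the publisher's certificate. In practice the verifier also has to chain
# that certificate back to a trusted root CA before trusting it.
try:
    public_key.verify(signature, software, padding.PKCS1v15(), hashes.SHA256())
    print("Signature checks out: signed by the holder of the private key.")
except InvalidSignature:
    print("Signature invalid: warn the user or block the download.")
```

The important thing to notice is that the signature only proves possession of the private key. If that key is stolen, anything the thief signs verifies just as cleanly as the legitimate publisher’s software.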

Like I said, Code Signing is now expected. You need it to get apps into the Apple and Android app stores, and you need it for your software to be downloaded by all major browsers without warnings. There’s really no way around it. And in order to keep this practice as safe as possible, there is a validation process before issuance that is meant to weed out ne’er-do-wells and cybercriminals.

To boil all of that down to the simplest terms: Only trusted developers are supposed to be able to Code Sign because that digital signature grants instant trust.

You can probably see where this is going.

When a private key is compromised and a digital signature is applied to malware, it tricks the browser filters and antivirus programs that typically scan downloads. Now, instead of seeing that this script or executable comes from an unknown source, the browser thinks it comes from D-Link, which is trusted, and the browser lets the download commence. This is an incredibly effective attack vector.

Stolen Code Signing Certificate

What was signed with the Compromised Private Keys?

Two pieces of malware were signed with the compromised keys, as The Hacker News explains:

Security researchers from ESET have recently identified two malware families, previously associated with cyberespionage group BlackTech, that have been signed using valid digital certificates belonging to D-Link networking equipment manufacturer and another Taiwanese security company called Changing Information Technology.

The first malware is called Plead. JPCERT, a Japanese Computer Security Incident Response Team (CSIRT), did a full analysis of Plead in June. It is essentially a backdoor that can be used to steal information and spy on people. The second malware is a related password stealer that targets:

  • Google Chrome
  • Microsoft Internet Explorer
  • Microsoft Outlook
  • Mozilla Firefox

D-Link and Changing Information Technology were notified, and both had revoked their certificates by July 4th.

Stolen Code Signing

BlackTech is continuing to use the revoked certificates to sign malware anyway. That may sound dumb, but this issue shines a light on a flaw in many different antivirus solutions: they don’t check the code signing certificate’s validity.

What SHOULD happen, as is the case in the accompanying images, is that the antivirus program sees that the certificate that signed the code was revoked and either notifies the user or blocks the download. Even if the malware was timestamped (which it was, and which also happens to be best practice, by the way), there should still be a notification that the certificate was revoked early. Instead, many antivirus programs don’t check validity at all, which means that, yes, an expired or compromised certificate can still pose quite the threat.
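
Checking revocation isn’t exotic, either. As a rough illustration, here is roughly what asking the issuing CA’s OCSP responder looks like in Python with the cryptography and requests libraries. The certificate file names are placeholders, and a production check would also verify the signature on the OCSP response itself.

```python
# Rough sketch: ask the issuing CA's OCSP responder whether a code signing
# certificate has been revoked. File names are placeholders; a production
# check would also verify the signature on the OCSP response itself.
import requests
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

with open("signing_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer_cert.pem", "rb") as f:
    issuer = x509.load_pem_x509_certificate(f.read())

# The certificate's Authority Information Access extension points at the
# CA's OCSP responder.
aia = cert.extensions.get_extension_for_oid(
    ExtensionOID.AUTHORITY_INFORMATION_ACCESS
).value
ocsp_url = next(
    desc.access_location.value
    for desc in aia
    if desc.access_method == AuthorityInformationAccessOID.OCSP
)

# Build the OCSP request and POST it to the responder.
request = (
    ocsp.OCSPRequestBuilder()
    .add_certificate(cert, issuer, hashes.SHA1())
    .build()
)
reply = requests.post(
    ocsp_url,
    data=request.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
)

response = ocsp.load_der_ocsp_response(reply.content)
if response.certificate_status == ocsp.OCSPCertStatus.REVOKED:
    print("Certificate was revoked on", response.revocation_time)
    print("Do not trust anything signed with it after that date.")
else:
    print("Certificate status:", response.certificate_status)
```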

This isn’t even the first time a Taiwanese tech company has been victimized this way. The 2010 Stuxnet worm was signed using stolen certificates from Realtek and JMicron.

Let’s talk about preventing key compromise

Looping back to where this article began, your private key is critical. A compromised key can cause a litany of problems regardless of whether it’s for an SSL certificate, Code Signing or Personal Authentication; it doesn’t matter. Obviously, the effects can be catastrophic. Hopefully the signed malware families don’t end up causing big problems, but signing malware, which helps it get past antivirus programs and browser filters, is always a dangerous proposition.

So, here’s the best piece of advice we can give you:

Store your private key on an external hardware token

Right now, the idea of storing keys on a physical hardware token has largely been co-opted by the cryptocurrency industry, which calls them hardware wallets. In a way, the cryptocurrency industry is kind of an interesting test case for various kinds of private key storage, because it’s like the freaking wild west right now: all kinds of hackers, snake oil marketing, and everybody looking over their shoulder as anecdotal evidence of people losing fortunes mounts and mounts.

The cryptocurrency community (which is distinct from the crypto community) pushes a range of key storage solutions, everything from laminated paper wallets to keys engraved on the side of a physical bitcoin to “cutting edge” cold storage solutions that cost way too much. None of these are new.

The best solution has always been, and will continue to be, storing your key offline. How you do it is completely up to you (there are a few methods); just remember to keep it well guarded in an office safe or somewhere it isn’t easy for someone to pocket.
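
No code substitutes for physically locking the key away, but at a bare minimum the key should never sit on a disk in the clear while you’re getting it there. Here’s a small sketch in Python with the cryptography library; the passphrase and file name are placeholders, and a hardware token or HSM is still the stronger option because the key never has to exist as a file at all.

```python
# Small sketch: export a freshly generated private key encrypted under a
# passphrase before it is moved to offline storage. The passphrase and file
# name are placeholders; with a hardware token or HSM the key never has to
# exist as a file at all.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

encrypted_pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(
        b"use-a-long-unique-passphrase"
    ),
)

with open("signing_key_offline.pem", "wb") as f:
    f.write(encrypted_pem)
```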

Had D-Link and Changing Information Technology done this, compromising their keys would have involved an actual caper or a heist, as opposed to just some fancy hacking. And who has time for a caper nowadays?

Author

Patrick Nohe

Patrick started his career as a beat reporter and columnist for the Miami Herald before moving into the cybersecurity industry a few years ago. Patrick covers encryption, hashing, browser UI/UX and general cyber security in a way that’s relatable for everyone.