Fighting Deep Fakes with Digital Signatures

Talking to Entrust’s Chris Bailey about combating fake news, deep fakes and what role digital signatures might play.

Deep fakes will be the next generation of fake news. The warnings are already going off. Sure, it may be difficult to spot a well-perpetrated fake account or to determine the veracity of fake news, but just wait until the videos start rolling in.

Back in December, Motherboard first reported on the now-eponymous Reddit user named “deepfakes” who was producing convincing face-swap videos using a machine learning algorithm, publicly available photographs and open source code. You know how Carrie Fisher appears as a younger version of herself in the movie Rogue One? Another actor performed the scenes, and AI and machine learning were used to plaster Fisher’s likeness over the motion-capture actor.

Well, deep fakes are a lot like that, only they can potentially be made by anyone, and as they become more and more convincing, we will face a real crisis where what we see online could be a complete fabrication and we will be none the wiser.

Just think of the potential applications. Are you mad at a celebrity? You could create a video of them using racial epithets, and how would anyone know it’s fake without some form of digital forensic investigation?

The bigger concern is national security. As the past couple of years have clearly demonstrated, the US is under constant cyber attack from the likes of North Korea, China and Russia. We’ve already seen highly sophisticated misinformation campaigns perpetrated by nation-state actors. Google and Facebook are deleting more and more fake accounts by the day. It’s only a matter of time before deep fakes make their way into these groups’ arsenals.

The Lawfare blog lays out just a few of the potential scenarios that could spell disaster for the US:

  • Fake videos where public officials commit crimes, use racial slurs or engage in sexual acts.
  • Fake videos could be produced placing politicians in locations they never were, saying things they never said.
  • Fake videos could place public figures in meetings with foreign enemies, spies or criminals.
  • Fake videos could show soldiers committing war crimes and other atrocities.
  • Fake videos could stir racial tensions by showing a white police officer killing a black man.
  • Fake audio could “reveal” criminal behavior by candidates or public figures.
  • Fake audio could capture public officials discussing plans to intervene in other countries.
  • Fake audio or video could be used to mislead the public into thinking a disaster has or will occur.

As Motherboard originally put it, “we’re all #$%&ed.”

Chris Bailey, VP of Strategy and Business Development at Entrust

Personally, I thought that was a little pessimistic, and wondered if maybe digital signatures and other forms of authentication could combat the rise of fake news and deep fakes. So I decided to catch up with Chris Bailey, the co-founder and former CTO of GeoTrust and the current VP of Strategy and Business Development at Entrust, to pick his brain about deep fakes, fake news and whether or not digital signatures and authentication could play a role in staving off the threat they pose.

The following interview has been lightly edited for flow.

Patrick: Having news outlets or the producers of legitimate video sign the video with a digital signature so that you could actually determine its authenticity – is that a feasible approach?

Chris: Everything you said is technically possible. And there are some organizations that are starting to set up projects – I think one is called the Trust Project. The problem is you can approach it from one of two ways: you could deliver it from a trusted channel, a trusted website or whatever – a Facebook page – or you can deliver it via the trusted content itself – the article itself. Probably both; if you were to really get down to it, you would want to do both at the end of the day. Have the Washington Post, which is an [Extended Validation] site, be trusted. And maybe have some kind of credential that says they have a duty to do some level of due diligence in their reporting, and then if they didn’t do that – and maybe that’s measured by an independent organization – they would lose some type of seal or, for lack of a better word, some type of indicator. And then if they wanted to syndicate that information, then that information should at least be signed. But they also might want to sign it for the long-term integrity of the document, to make sure that it’s not manipulated in any way.
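To make the content-signing half of that concrete, here is a minimal sketch of what detached signing and verification could look like, using Python’s cryptography library with Ed25519 keys. The placeholder content bytes and the idea of shipping a detached signature alongside the article or video are illustrative assumptions on our part, not a description of any existing news-industry standard.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The outlet generates a signing key pair once; the public key would be
# published out-of-band (e.g., tied to the outlet's EV-validated identity).
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

# Placeholder for the raw bytes of the article or video being syndicated.
content = b"<raw bytes of the article or video file>"

# The detached signature travels with the content wherever it goes.
signature = publisher_key.sign(content)

# Anyone holding the outlet's public key can check long-term integrity;
# verify() raises InvalidSignature if even one byte has been manipulated.
try:
    public_key.verify(signature, content)
    print("Content is intact and was signed by the publisher.")
except InvalidSignature:
    print("Content was altered after signing (or the signature is bogus).")
```

Signing the content itself, rather than relying only on the trusted channel, is what would let syndicated copies be checked long after they leave the original site.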

Patrick: For that to be feasible, what would need to happen? I guess there would need to be some level of browser buy-in, and obviously you would need to have the outlets buy in – what else would need to happen for that to be feasible?

Chris: Technically, to get the ecosystem working you’ve got to get a group of people together – and from a political standpoint it has to be a non-partisan type of objective – so you’ve got to set the ground rules and then you’ve got to have applications actually start to recognize this. And once you start to actually build this up and have major news outlets start to use it, then the applications are almost forced into accepting it. So it’s the chicken-and-egg paradox: you probably have to have the news outlets start to use it first.

Patrick: And that would start to force browser buy-in and other buy-in from other entities?

Chris: I think so, because if something is published and, you know, for example it’s not real, and it starts to catch on virally across multiple websites, you’ve got to be able to track it, and I don’t think the systems that are in place today can really do that. The only way you can really do that – well, let’s just say that every good security system has a proactive and a reactive layer. So identity is a proactive layer. And a reactive layer is like detecting spam or phishing or viruses – things like that. But the proactive layer is kind of like a whitelist.
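A proactive “whitelist” layer in the sense Chris describes might amount to a registry of outlets whose signing keys have been vetted out-of-band: content only counts as identified if its signature verifies against a registered key, and everything else falls through to the reactive checks. The registry shape below is purely illustrative and continues the hypothetical Ed25519 setup from the earlier sketch.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Hypothetical proactive layer: outlets whose keys were vetted out-of-band.
trusted_publishers: dict[str, Ed25519PublicKey] = {}

def is_identified(outlet: str, content: bytes, signature: bytes) -> bool:
    """True only if the content verifies against a whitelisted key."""
    key = trusted_publishers.get(outlet)
    if key is None:
        # Unknown outlet: defer to the reactive layers (spam, phishing
        # and virus detection) mentioned above.
        return False
    try:
        key.verify(signature, content)
        return True
    except InvalidSignature:
        return False
```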

Patrick: Doubling back to something you said earlier – and I know this is maybe getting a little outside your scope of expertise, but I’m curious about your opinion on this – you mentioned that it would have to be non-partisan. Who do you think would be best suited to make that call with regard to authentication?

Chris: I don’t know. We actually are looking into it to see what we can do to help. So we’re trying to figure out who could do that – we have a standards team looking at whether it works for us. Before we go out and start working on the technology, we’re trying to figure out who could set up the potential ground rules for this, because I think that’s going to be a big part of it.

Patrick: This is a relatively new threat. I think that digital signatures in particular could have a big role in combating it – it’s just a matter of feasibility and whether that’s something the industry is already looking at.

Chris: The difference between digital signing and this is that you need something more to wrap around it, some type of context that says this is a special digital signing that means this: it’s not just authentic content, but it’s [from a legitimate outlet]. As EV is to identity, this would probably be something more than that. It would mean that there’s some type of standard happening, some kind of due diligence that the organization goes through, maybe akin to a privacy policy and having an audit on that.
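One way to read “something more to wrap around” the signature is to sign a small manifest that binds the content hash to identity and due-diligence claims, rather than signing the raw bytes alone. The field names below are invented for illustration; any real scheme would need a vetted, standardized format and an independent auditor actually backing the due-diligence claim.

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()

# Placeholder for the raw bytes of the published video.
video = b"<raw video bytes>"

# Hypothetical manifest: binds the content hash to the outlet's identity
# and an invented due-diligence claim that an independent organization
# could audit - roughly how EV binds a vetted identity to a website.
manifest = {
    "content_sha256": hashlib.sha256(video).hexdigest(),
    "publisher": "Example News Outlet",
    "diligence_policy": "editorial-standards-v1",
}

# Canonical JSON so signer and verifier serialize identical bytes.
payload = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
signature = publisher_key.sign(payload)
```

A verifier that trusts the manifest signature then trusts both the integrity of the bytes and the context around them, which is the “more than EV” piece Chris is gesturing at.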

Many thanks to Chris for taking a few minutes out of his busy day to talk with us!

As always, leave any questions or comments below.

Author

Patrick Nohe

Patrick started his career as a beat reporter and columnist for the Miami Herald before moving into the cybersecurity industry a few years ago. Patrick covers encryption, hashing, browser UI/UX and general cyber security in a way that’s relatable for everyone.