Spyware and Deepfakes

You may have read about NSO Group’s Pegasus software, designed to compromise mobile devices running iOS or Android. There’s an aspect that much of the media coverage has missed: because Pegasus lets attackers listen to and record voice data, such as phone calls, any public figure compromised by the tool could very likely have their voice falsified. Voice-cloning text-to-speech tools can be trained on recordings of a person’s speech, allowing a user to type a sentence on their computer and synthesize it in that person’s voice.

Unless the public can validate that a digital asset comes from its supposed source, we should be skeptical of any content attributed to a known compromised public figure. To a lesser degree, we should treat all the digital content we consume the same way unless we can validate its source. For the time being, that means being skeptical of almost any content.

Our company provides a solution that lets the public determine how trustworthy any digital asset is. We’ll also give everyone a reliable way to dispute falsified media.
