Daniel Schauer

The Dangers of Personalized Service for Huge Clients in Banking

Computers can speak

Today, there is a class of customers in banking that gets services and offerings vastly different from what is offered to the ordinary consumer. The disparity can feel like the gap between buying a ticket on American Airlines and chartering a private jet. Private Banking provides an unprecedented level of service for these clients, assigning each customer relationship to a specific banker who is intimately familiar with every one of a limited number of clients. In practice, that banker recognizes the voice of each customer in their portfolio. They also know their customers' e-mail addresses, spouses' names, children's names and ages, anniversary dates, and more. However, because of that familiarity, steps like asking security questions may get skipped while the banker thinks, "Why would I annoy Mike by asking for the last four of his Social Security number when I obviously know him by the number that called me and the sound of his voice?"

Customers in this class are used to interacting with their bank the way 911 calls are made in works of fiction. They simply call their banker, say "Hi," and describe the exact transaction they called to execute; once the banker confirms the intended action, they say "Thanks," hang up, and move on to the next meeting or call on their calendar.

This arrangement is highly susceptible to a simple text-to-speech impersonation of a high-end client; rumors circulate about real-world cases that no institution wants to disclose. Worse, the very nature of these clients means they often have public personas that would provide more than enough training material to synthesize their voices.
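To make concrete how low the barrier is, here is a minimal sketch of voice cloning using the open-source Coqui TTS library. The library choice, model, and file names are illustrative assumptions, not details from any real incident; modern cloning models need only a short reference clip, not hours of audio.

```python
# Minimal voice-cloning sketch (illustrative; Coqui TTS is one of
# several open-source options). XTTS v2 mimics a speaker from a short
# reference clip rather than requiring hours of training audio.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "client_interview.wav" is a hypothetical stand-in for any public
# recording of the target, e.g. a conference talk or earnings call.
tts.tts_to_file(
    text="Hi, it's Mike. Please move fifty thousand into the brokerage account.",
    speaker_wav="client_interview.wav",
    language="en",
    file_path="spoofed_request.wav",
)
```

A banker relying on a familiar voice and a familiar caller ID has no defense against output like this, which is exactly why the skipped security questions matter.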

Sweet dreams!

Consider submitting a contact form if you’re interested in helping DeepFake Stop implement a solution as soon as possible… before this problem gets much worse.

Daniel Schauer

Social Engineering with Text-to-Speech

In our current world of social audio apps like Clubhouse and online meeting apps like Zoom, it is entirely plausible that enough audio of your voice could be captured to train a text-to-speech algorithm to mimic your voice.

You’re sitting down to a nice meal with your significant other on your one-night-a-week date night (when you can actually afford the requisite babysitter for the kids at home). Just as you land in your chair, you get an urgent text from your boss. It looks like there’s been some sort of security issue. They need you to reset your password as soon as possible using your company’s password reset tool at the link they’ve provided. Your phone recognizes the number because your boss is in your Contacts, and you can see their photo and your prior SMS history with them.

The waiter comes to your table and asks what you and your partner would like to drink with your meal. Taking a few moments to review the wine list together, you decide to try out a modestly priced bottle. Your partner mentions that they made their wine choice based on the winery because it was where you had your first date together years ago. Suddenly, your phone starts ringing. Examining your phone’s screen, you see your boss’ name and photo from your Contacts; given that text message you got a moment ago, you get the distinct impression that you need to answer this call and do so while mouthing an apology to your partner.

You recognize the voice on the other end of the line, and your boss sounds irritated. “Didn’t you get the earlier text message?!? Why haven’t you reset your password yet?” The questions make you feel quite uneasy. Your boss explains that you need to reset your password because the company’s ongoing protection and prevention scans found your current login and password on the dark web. Almost everyone at the company was affected, so the company quickly stood up an externally facing web application that lets employees reset their passwords. That’s where the link texted to you earlier points; your boss needs you to go there and reset your password right now.

A little unorthodox, but you recognized the voice and the number that texted and called you… what are the chances that you click that link? A whole lot better than if some random number had texted or called, or if a voice you didn’t recognize had been on the other end of that phone call.

The ability to “spoof” the number of a phone call or text message so that it appears to come from a different number is well documented. However, in our current world of social audio apps like Clubhouse and online meeting apps like Zoom, it is entirely plausible that enough audio of your voice could be captured to train a text-to-speech algorithm to mimic your voice (or your boss’ voice). This new addition to the old attack vector of spoofed phone numbers significantly elevates the risk of successful social engineering attacks leveraging fake audio.
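Notice what is and isn’t forgeable in this scenario: the caller ID and the voice can both be faked, but the registered domain behind the link cannot. As a rough illustration (not something from the original post), here is a minimal sketch of the kind of domain check an employee, or a mail/SMS filter, could apply before trusting a “password reset” link; the company domain is hypothetical.

```python
# Illustrative link sanity check; "example-corp.com" is a hypothetical
# company domain standing in for your employer's real one.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-corp.com"}

def looks_trustworthy(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Accept the exact domain or a subdomain of it; reject lookalikes
    # such as "example-corp.com.reset-portal.net".
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(looks_trustworthy("https://sso.example-corp.com/reset"))                # True
print(looks_trustworthy("https://example-corp.com.reset-portal.net/reset"))  # False
```

Out-of-band confirmation, e.g., calling your boss back on a number you dialed yourself, remains the simpler and stronger habit.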

Daniel Schauer

Spyware and Deepfakes

You may have read about NSO Group’s Pegasus software, designed to compromise mobile devices running iOS or Android. There’s an aspect that many in the media have missed: since Pegasus allows attackers to listen to and record voice data, such as phone calls, any public figure compromised by the tool could very likely have their voice falsified. Text-to-speech tools can easily be trained on recorded speech, allowing a user of the software to type a sentence on their computer and make it sound as if it were spoken in a specific person’s voice.

Unless the public can validate that a digital asset comes from its supposed source, we should be skeptical of any content attributed to any of the known compromised public figures. To a lesser degree, we should do the same for any digital content we consume (unless we can validate its source). For the time being, that means we should be skeptical of almost any content.
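One well-understood building block for that kind of validation is a cryptographic signature over the asset’s bytes. The toy sketch below is a generic illustration of the idea (assumed for this post, not a description of DeepFake Stop’s product), using Ed25519 keys from Python’s cryptography package.

```python
# Toy provenance check: the creator signs a media file's bytes, and
# anyone holding the published public key can verify the signature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held by the content's creator
public_key = private_key.public_key()       # published for consumers

asset = b"...raw bytes of an audio or video file..."
signature = private_key.sign(asset)

# A consumer verifies the asset against the claimed source's key.
try:
    public_key.verify(signature, asset)
    print("Asset matches its claimed source.")
except InvalidSignature:
    print("Asset was altered or did not come from the claimed source.")
```

A signature only proves the bytes came from whoever holds the key; a real provenance system still has to bind that key to a verified identity and survive re-encoding, which is where the harder work lies.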

Our company provides a solution allowing the public to determine how trustworthy any digital asset is. We’ll also give everyone a reliable way to dispute falsified media.

Daniel Schauer

Consensus: Deepfakes are a Problem

Over 80% of businesses agree that deepfakes are a problem

According to a survey by Attestiv (a startup in the data authentication space), described by VentureBeat in an article published May 24, 2021, over 80% of respondents said that deepfake media poses a potential risk to their organization.

  • 80% of respondents said that deepfake media poses a risk

  • But less than a third of respondents said they’ve actually taken any steps to fight deepfake media

    • 46% of respondents said their organization lacks a plan to fight deepfakes, or that they personally lack knowledge of a plan

    • 25% of respondents claim their organizations plan to take some action in the future

These are some startling figures, given the level of fear around the potential of deepfakes to ruin the public’s trust in digital media, and the personal havoc that such false media can cause for victims.

Fortunately, a solution is coming! Sign Up if you want to be among the first protected by it.
