Quick summary:
- AI-generated deepfakes have reached the point where the average human can no longer tell fake voices and faces from real ones, creating a direct threat to business operations
- Fake job candidates make up an estimated 25%-40% of all applicants, with nation-state actors like North Korea placing trained operatives inside companies through remote hiring
- Businesses need to shift from guarding the back door to screening the front door, treating every digital interaction as potentially compromised
Matt Moynahan used to think deepfakes were funny. Early AI-generated videos showed Will Smith eating spaghetti, people with six or seven fingers, Donald Trump flying an F-16 fighter jet. Everyone laughed because context made the fakes obvious.
While people were laughing, however, the technology kept improving. The extra fingers are gone. The voices are perfect. And Moynahan, a 30-year cybersecurity veteran and CEO of GetReal, now admits even his trained eye struggles to tell the difference.
“Everything about cyber starts out as funny until it isn’t,” Moynahan tells host Greg Matusky on The Disruption Is Now. “And we’re at that point.”
The conversation lays out where businesses are most exposed, how nation-states exploit remote work, and why the human eye alone can no longer protect you.
Key takeaways
The “believability index” makes deepfakes far more dangerous than phishing
Phishing emails created a $12 billion email security industry. Deepfake video and audio threaten something much larger because of how human trust works.
Moynahan calls it the “believability index.” People trust written text at a baseline level. Trust jumps when they hear a voice. It jumps again when they see a face on a video call.
That’s why TikTok influencers move products and why video calls feel more credible than emails. AI has caught up to near-perfect mimicry at the exact moment business runs on remote-first, video-first communication. When a deepfake CEO appears on a call asking an employee to click a link or authorize a wire transfer, people react to authority in the immediacy of the request. Instinct overrides skepticism.
Up to 40% of job candidates are fake, and HR isn’t built to catch them
Depending on which research firm you ask, between 25% and 40% of all candidates applying for jobs are fraudulent.
Some use fake resumes and stolen identities. Others use AI to alter their appearance or voice during video interviews. In some cases, a different person appears on camera behind a digitally altered face to take a coding test on the candidate's behalf. In others, people modify their appearance just enough to hold down a second or third job without being recognized. And some are real, qualified humans working for hostile governments.
North Korea has trained a significant portion of its workforce in computer science and IT administration. These operatives apply for remote jobs, pass interviews, and do the work. Their paychecks funnel back to the regime. If discovered, they may steal intellectual property or hold the company hostage with ransomware on their way out.
Unfortunately, HR has never functioned as a security checkpoint. Background checks often happen late, after a candidate is already selected.
That’s why companies need to move that check to what Moynahan calls “the pre-ground,” the stage before engagement begins, because “we’ve been guarding the back door for so long that people are walking right through a remote-first process.”
There is now more fake content online than real content
Moynahan points to an earlier inflection: before Steve Jobs died, more photos had reportedly been taken with the iPhone than with all cameras in the prior history of the world. Generative AI has done the same for content.
“We’re at a point now where there’s more fake content than real content because of the amount of content coming out of generative AI,” he says.
That changes the question businesses should ask. Instead of hunting for fakes, they need to verify what’s real. “Finding a deepfake is one thing, but actually authenticating the real is just as important,” Moynahan says.
GetReal approaches this by sitting inside the communication stream and monitoring traffic continuously using what the company calls forensic signal intelligence. The complication is that even legitimate calls use AI to strip background noise and optimize lighting. “We have to find the bad AI in the good AI,” Moynahan says. “It’s very complicated.”
Every business process with a human in it is a potential attack vector
Moynahan lays out three business processes that need protection now.
First, recruiting — the most active threat. Second, day-to-day video and voice interactions where a deepfake boss or board member can trigger immediate action. Third, account recovery and credential resets, where AI can mimic the voice of a real employee or customer calling IT.
“I don’t care whether you’re pressing a button to wire money or pressing a button to fire a missile. Everybody has a vested interest in making sure that the content they’re engaging with and the data they’re engaging with is accurate, trustworthy, and correct.”
To illustrate how real the problem is, he points to the call itself. He and Matusky have never met in person. “Until then, you’re just a streaming piece of media to me,” Moynahan says. “The traffic’s coming over the internet, just like Netflix traffic, coming into my office, and here we are having the conversation.”
The world will boomerang back to authenticity
Moynahan sees a cultural correction ahead. The flood of synthetic content, collapsing trust, and deepfake-powered fraud will produce a backlash. People and businesses will gravitate toward verified, authentic interactions.
“I think you’re gonna see the world sort of do a little bit of a boomerang and come back to authenticity,” he predicts. “I think what used to be open social networks and just everyone, I think it’s gonna come back to engaging with people you trust.”
For businesses, investing in digital integrity now creates an advantage later. The old tricks for spotting fakes — looking for extra fingers, checking shadows, asking someone who claims to be in Europe to show a European electrical outlet — are already obsolete.
Technology must fill the gap. “The average human can’t see or hear the difference with AI-generated content,” Moynahan says. “So, tech companies like GetReal have to step in and provide that assistance.”
Key moments:
- How the early humor of deepfakes masked a growing threat (3:12)
- The “believability index” and what it means for business trust (4:43)
- The three enterprise processes most vulnerable to deepfakes (7:57)
- GetReal’s four-question framework for filtering communication traffic (9:43)
- Why even a 30-year cybersecurity veteran can’t spot today’s fakes (12:00)
- How North Korea trained IT workers to infiltrate remote-first companies (15:14)
- The arms race between detection techniques and adversary adaptation (17:48)
- Why verifying the real matters more than finding the fake (21:30)
Q&A with Matt Moynahan, CEO of GetReal
Q: What's the simplest way to understand what GetReal does?
A: Think of a water tap. The water coming out is the data powering your business — you want to drink it, consume it, make good decisions. Now imagine that water is poisoned.
We sit in the traffic of an enterprise like a filtration system and ask four questions about everything flowing through. Is this person real or synthetic? If they're real, are they who they claim to be? If they're not, what should we do — kill the call, flag it, notify users? And finally, who's behind the attack?
Q: Do AI and synthetic content automatically mean danger?
A: Not at all. There will be plenty of legitimate uses for agentic AI, so synthetic doesn't equal malicious. But the same technology that powers a helpful AI assistant can also be used to submit fake evidence in an insurance claim, collect someone else's Social Security benefits, or let a foreign adversary sit in on a sensitive call pretending to be an ally.
The technology is neutral. The intent behind it is what matters.
Q: Why do deepfake impersonation attacks work so well inside companies?
A: People react to authority in the immediacy of a request. When your CEO appears on a video call and tells you to click a link or authorize a transfer, your instinct is to comply. The same goes for a board member, a manager, even a customer calling to reset credentials.
AI can now mimic the voice of real employees and real customers, and you'd never know. That's what makes it so effective. It preys on the trust and hierarchy that already exist inside every organization.

