The Piano Problem Explains Why 80% of Enterprise AI Keeps Failing

Nov 7, 2025 | AI

Quick summary:

  • 80% of enterprise AI deployments fail because companies hand out tools without training, frameworks, or connection to business knowledge
  • Shadow AI occurs when employees bypass corporate systems to plug business information into consumer apps like ChatGPT, creating security risks and operational chaos
  • 77% of enterprise leaders believe competitors are ahead of them in AI adoption, even as 86% report using LLMs somewhere in daily operations
  • Subject matter experts must drive AI implementation, not IT departments or data scientists working in isolation

Jed Dougherty kept seeing the same mistake. Companies would buy 50,000 Microsoft Copilot licenses, hand them out to employees, and then wonder why nothing changed.

Business leaders blamed the technology. But the problem wasn’t the technology. It was the strategy (or lack thereof).

As SVP of AI and Platform at Dataiku, Dougherty spends his days helping enterprises actually use AI instead of just buying it. Speaking to host Greg Matusky on The Disruption Is Now from the Money 20/20 show floor, he broke down why most implementations fail, how shadow AI creates chaos behind IT’s back, and why the people closest to the work need to build the agents that support it.

Watch now:

Key takeaways: 

The piano problem illustrates why most businesses struggle with AI

A RAND study found that over 80% of all AI deployments at scale have failed. Dougherty argues most of that number comes from companies buying thousands of licenses and assuming employees will figure it out themselves.

“You can’t just throw these tools at people,” Dougherty says. “You need to give them a framework. You need to give them new training. You need to give them assistance on the best way to use these things, in examples.”

The analogy holds. Giving someone a piano doesn’t make them Paul McCartney. Giving someone ChatGPT doesn’t make them an AI expert. Organizations need centralized teams leading rollouts, providing training, showing examples, and helping people understand how these tools connect to their actual workflows.

Shadow AI is flooding enterprise stacks right now

Every organization has shadow AI happening right now.

Employees are taking proprietary information, customer data, and strategic documents and feeding them into consumer applications on their phones.

This happened because generative AI arrived backwards compared to previous technology waves. Earlier machine learning systems operated invisibly in the background through data scientists, but ChatGPT’s unprecedented consumer adoption — faster than TikTok or Facebook — meant employees were already using AI tools before their companies could establish policies or controls.

So, companies now have no choice but to provide governed alternatives, or the shadow AI continues unchecked. Without centralized systems, there are no shared efficiencies, no accumulated value, and no way to track what sensitive information is leaking into uncontrolled environments.

The fix requires an AI gateway between large language models (LLMs) and end users that detects toxicity, catches personally identifiable information, blocks prompt injection attacks, masks sensitive data, and maintains full auditability. Without that layer, the enterprise has no visibility into what employees are doing or how they’re using AI.
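The gateway idea can be sketched in a few lines. This is a minimal, hypothetical illustration of prompt screening before a request reaches an LLM; the regex patterns, injection markers, and function names are assumptions for demonstration, not any vendor's actual implementation (a production gateway would use dedicated PII and injection classifiers, not keyword matching).

```python
import re
import time

# Hypothetical patterns; a real gateway would use trained PII and
# injection classifiers rather than simple regexes like these.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")

AUDIT_LOG = []  # in production: durable, append-only storage for auditors

def gateway_filter(user_id: str, prompt: str) -> tuple[bool, str]:
    """Screen a prompt before it reaches the LLM: block injection, mask PII."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        AUDIT_LOG.append({"user": user_id, "action": "blocked", "ts": time.time()})
        return False, ""
    masked = prompt
    for label, pattern in PII_PATTERNS.items():
        masked = pattern.sub(f"[{label.upper()} REDACTED]", masked)
    AUDIT_LOG.append({"user": user_id, "action": "forwarded", "ts": time.time()})
    return True, masked

ok, safe_prompt = gateway_filter("u42", "Summarize the deal with jane@acme.com")
# safe_prompt now reads "Summarize the deal with [EMAIL REDACTED]"
```

Every call lands in the audit log regardless of outcome, which is what gives the enterprise the visibility Dougherty describes.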

AI governance needs an HR department for your agents

Think of AI agents as employees. Each agent calls multiple tools and performs specific functions within larger agentic systems. Organizations need to track agent performance the same way HR tracks human performance.

“I want to be able to track the quality of what that agent is doing, the work that they’re doing,” Dougherty explains. “And I want to be able to promote them if they’re doing better, or I want to be able to remove them if they’re doing poorly.”

This governance extends beyond day-to-day monitoring. When rolling out an agent across an entire organization, companies need documentation showing exactly who approved it, which steps validated it, how it handles financial transactions in compliance with existing rules, and whether audit trails exist for regulatory review.

The framework operates at two levels. Immediate governance happens through AI gateways that detect problems in real time. Workflow governance tracks who approved agents, how they were tested, what compliance requirements they meet, and whether documentation exists for regulators, whether that means the FCC or auditors enforcing the EU AI Act.

Organizations can think of it as managing a new augmented workforce. Just as modifying an HR setup in Workday requires governance, managing AI agents requires equivalent systems.
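The "HR for agents" idea maps naturally onto a small data model. The sketch below is illustrative only; the record fields, score thresholds, and agent names are assumptions, not Dataiku's actual system. It combines both governance levels: who approved an agent (workflow governance) and how it is performing (day-to-day review).

```python
from dataclasses import dataclass, field

# Illustrative "HR for agents" record; thresholds are assumed values.
@dataclass
class AgentRecord:
    name: str
    approved_by: str                # workflow governance: who signed off
    validation_steps: list          # how the agent was tested before rollout
    scores: list = field(default_factory=list)  # per-task quality ratings, 0-1

    def review(self) -> str:
        """Promote, retain, or remove based on average quality, like an HR review."""
        if not self.scores:
            return "retain"
        avg = sum(self.scores) / len(self.scores)
        if avg >= 0.9:
            return "promote"
        if avg < 0.6:
            return "remove"
        return "retain"

roster = {
    "invoice-matcher": AgentRecord("invoice-matcher", "cfo-office",
                                   ["unit tests", "compliance sign-off"], [0.95, 0.92]),
    "lead-scorer": AgentRecord("lead-scorer", "sales-ops",
                               ["unit tests"], [0.50, 0.55]),
}
decisions = {name: rec.review() for name, rec in roster.items()}
# decisions == {"invoice-matcher": "promote", "lead-scorer": "remove"}
```

Because `approved_by` and `validation_steps` live on the same record as the quality scores, an auditor can answer both "who let this agent into production?" and "how is it doing?" from one place.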

Subject matter experts must build the agents, not developers

The biggest implementation mistake is assuming IT departments or data scientists should lead AI adoption. They understand technology. They don’t understand the workflows, compliance requirements, regulatory constraints, and business logic that determine whether an AI system truly works.

Dougherty’s firm surveyed 800 global leaders across industries. While 86% said they were using LLMs somewhere in daily operations, 77% believed their competitors were ahead of them in the AI race.

That gap exists because organizations keep treating AI like previous technology rollouts. They centralize control with technical teams who don’t know the business. They build systems in isolation from the people who will use them. They miss the nuance that makes automation valuable.

“If you have somebody who understands the workflow, who understands the industry, who understands the compliance, the regulatory, and all the other issues that impact decision making, you come much closer to a final usable model,” Dougherty says.

In a 50,000-person company, 45,000 people probably know how to use ChatGPT. If organizations provide interfaces through which those employees can connect business tools into chains, workers will augment their own jobs, work faster, produce better results, and generate new ideas that can be distributed across the organization.

The other 5,000 people split into two groups. About 4,000 data scientists and analysts build tools everyone else uses. The final 1,000 serve as the regulatory body, watching what gets built and tracking what’s happening.

Trust but verify beats distrust and verify

Right now, AI systems aren’t consistent enough to trust completely. They make mistakes. They present incorrect information with absolute confidence. Organizations need verification processes that catch errors before they compound through agentic systems.

“Trust but verify is a place I think we can get to right now,” Dougherty says. “I distrust and verify, but I think in the relatively near future we can have a trust but verify situation.”

Whatever organizations build should help with that verification process. Systems need to trace entire agent thought processes and provide descriptions when giving answers back. They need to reference exactly where agents pulled information from.

Consider a bank transfer. The agent should provide receipts that link to source systems and confirm transactions actually happened. This also means giving agents the correct tools. Don’t ask your agent to be a prediction model. Give your agent the ability to talk to a prediction model, then prove it called the model and show its work.

When agents have proper tool access and verification processes, organizations can be much more sure that statements and actions are real.
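The receipt pattern can be sketched as a thin wrapper around every tool call. This is a minimal, hypothetical example; the wrapper name, the stand-in `transfer` function, and the receipt fields are assumptions made for illustration, not a real banking API.

```python
import hashlib
import json
import time

# Hypothetical wrapper: every tool call an agent makes is recorded as a
# "receipt" linking the answer back to the source system that produced it.
def call_tool_with_receipt(tool_name, tool_fn, **kwargs):
    result = tool_fn(**kwargs)
    receipt = {
        "tool": tool_name,
        "args": kwargs,
        "result": result,
        "ts": time.time(),
    }
    # Content hash gives the receipt a tamper-evident identifier.
    receipt["id"] = hashlib.sha256(
        json.dumps({k: receipt[k] for k in ("tool", "args", "result")},
                   sort_keys=True, default=str).encode()
    ).hexdigest()[:12]
    return result, receipt

# Stand-in for a real banking system; the agent calls it instead of guessing.
def transfer(amount, to_account):
    return {"status": "confirmed", "confirmation_no": "TX-1001"}

result, receipt = call_tool_with_receipt("bank.transfer", transfer,
                                         amount=250.0, to_account="ACCT-9")
# The receipt proves the agent actually invoked the tool and shows its work.
```

The point is the one Dougherty makes: the agent doesn't claim the transfer happened, it produces evidence that the source system confirmed it.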

Key moments: 

  • Why Gen AI came from consumers to enterprises instead of the other way around (2:52)
  • How shadow AI creates chaos when employees bypass corporate systems (3:56)
  • The governance layer that sits between LLMs and end users (9:25)
  • AI governance as HR for your agents (11:08)
  • Why 77% of leaders think competitors are ahead in AI adoption (13:42)
  • The chatbot trap that limits what organizations think AI can do (14:30)
  • Why subject matter experts must build agents, not IT departments (17:00)
  • The RAND study showing 80% of AI deployments fail at scale (17:50)
  • The piano problem explaining failed implementations (18:25)
  • Trust but verify versus distrust and verify (20:17)
  • The technical wall that prevented smart people from building things (22:17)
  • Why AI empowers rather than eliminates workers (23:49)

Q&A with Jed Dougherty, SVP of AI and Platform at Dataiku

Q: What exactly is shadow AI and why is it dangerous?

A: Shadow AI is when every person in your organization is plugging your business information into their phones and into consumer apps.

They’ll all have a different operating procedure. There will be no efficiencies among people. There will be no shared value.

And if you’re a business, you have no choice but to provide an alternative to those consumer applications. Or every person in your organization will be doing shadow AI.

Q: How should organizations think about AI governance?

A: I almost think of it as HR for AI.

Let’s say I have an agentic system. That agentic system is going to be made up of multiple agents, and those agents are going to be calling multiple tools. We can think of each agent as maybe a human being or maybe a department.

And I want to be able to track the quality of what that agent is doing, the work that they’re doing, and I want to be able to promote them if they’re doing better, or I want to be able to remove them if they’re doing poorly.

Q: Why do so many enterprise AI deployments fail?

A: I think a huge portion of that number is people buying 50,000 Microsoft Copilot licenses and saying, hey, everybody's not doing anything different. We gave them a Copilot license, without any training or any control or any suggestions of what to do.

You can’t just throw these tools at people. You need to give them a framework. You need to give them new training. You need to give them assistance on the best way to use these things, in examples.

Q: Should AI implementation happen on-premises or in the cloud?

A: I think it depends on the business. There are certainly some aspects like, let’s say some aspects of banking, for example, where they will never be fully in the cloud. You’ll have some things that always need to stay on-prem. But I think most of the time you can be completely secure and completely governed and completely safe with using the cloud based models, as long as you have that AI gateway managed by your organization in between those models and your end users.

Q: What has surprised you most about AI adoption since ChatGPT launched?

A: What I’ve been consistently surprised about is how empowering this is for very smart people who previously had a technical wall in front of them that did not allow them to do really, really cool things. The wall was code. The wall was math. And it’s just inspiring to watch people in my life who I never thought would build a website or create an app or be able to transfer their idea from, oh, I really need to find a developer into an amazing, fully fledged production line system.

Q: How should organizations balance AI replacing versus augmenting jobs?

A: I think jobs will shift. People will be doing different things. But there’s a pessimistic view that the modern white collar worker is essentially a GUI process that interacts in between different pieces of computer software. I prefer to believe that people are inventive endlessly and will instead augment the way they work and provide a lot less of that busywork, but then have the opportunity to work on much more interesting things.

