The MIT NANDA "GenAI Divide: State of AI in Business 2025" report is misleading

MIT’s 95% AI failure rate is wrong

The MIT NANDA report claiming 95% of AI projects fail is making the rounds on LinkedIn. Everyone’s sharing it like it’s gospel.

Has anyone actually read the methodology?

They interviewed 52 organizations. That’s their entire sample. We’ve worked with over 250 companies at Paid, and the reality looks completely different.

The flaw in their math

MIT counts any pilot that doesn’t reach “full production deployment” as a failure. By that definition, your spike to test if Redis handles your load pattern is a failure. Your POC to validate an API integration is a failure. Your two-week experiment with vector databases is a failure.

This is like saying 95% of git branches fail because they don’t get merged to main.

From our data across 250+ companies, here’s what’s actually happening:

  • 30% are learning exercises – POCs to understand feasibility. They achieved their goal.
  • 25% are vendor evaluations – Testing if a tool fits before committing. Working as intended.
  • 20% work perfectly but reveal the problem wasn’t worth solving. That’s discovery, not failure.
  • 25% are actual failures – Wrong approach, bad implementation, or solving the wrong problem.

The real failure rate is 25-30% for intentional production deployments. Still high, but nowhere near 95%.
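To make the definitional point concrete, here’s a minimal Python sketch using the category shares above. The cohort of 100 pilots, none of which counts as a full production deployment, is my simplifying assumption for illustration, not data from MIT or from Paid.

```python
# Sketch of the denominator argument, using the category shares above.
# Assumption (mine, for illustration): a cohort of 100 pilots, none of which
# counts as a "full production deployment" under MIT's definition.
pilots = {
    "learning exercise": 30,              # POCs run to understand feasibility
    "vendor evaluation": 25,              # testing fit before committing
    "worked, problem not worth it": 20,   # discovery, not failure
    "actual failure": 25,                 # wrong approach, bad build, wrong problem
}
total = sum(pilots.values())

# MIT-style metric: anything short of full production deployment is a "failure",
# so this whole cohort fails by definition, regardless of whether each pilot
# achieved its stated goal.
print(f"MIT-style 'failure' rate: {total / total:.0%}")                 # 100%

# Counting only the pilots that genuinely went wrong gives the lower bound
# of the 25-30% figure quoted above.
print(f"Actual failure rate: {pilots['actual failure'] / total:.0%}")   # 25%
```

The gap between the two numbers comes entirely from the definition of “failure”, which is the whole point.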

That’s not to say POCs don’t fail – they do. And we think we know how to solve that too, even though POCs are a nightmare.

What MIT got right

Buried on page 19, MIT mentions something crucial: External partnerships succeed 67% of the time. Internal builds succeed 33% of the time.

This matches our data exactly. But they bury the lede!

The companies failing at 95% rates are trying to build everything internally.

They think they can weekend-hack their way to production AI.

They’re learning what every vendor already knows: the last 20% of the problem takes 80% of the effort.

External vendors have already hit every edge case. They’ve dealt with multilingual inputs crashing their parsers, CSVs with weird separators or multi-line fields, PDFs that report the wrong number of pages… They’ve seen classification models that work perfectly until someone uploads a weird-ass Excel file.

Your internal team will discover these one painful customer complaint at a time.
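To make one of those edge cases concrete, here’s a toy Python sketch of a CSV that quietly breaks a naive parser: a semicolon separator plus a newline inside a quoted field. The sample data is invented, and csv.Sniffer from the standard library is just one way to cope with it, not how any particular vendor handles it.

```python
import csv
import io

# Invented sample of one edge case from the list above: a semicolon-separated
# CSV with an embedded newline inside a quoted field.
raw = 'name;comment\n"Müller";"multi-line\nnote"\n'

# A naive comma-split "parser" mangles both the separator and the quoted newline.
naive_rows = [line.split(",") for line in raw.splitlines()]
print(naive_rows)   # [['name;comment'], ['"Müller";"multi-line'], ['note"']]

# Sniffing the dialect first picks up the ';' separator, and csv.reader
# reassembles the quoted field that the line-by-line split destroyed.
dialect = csv.Sniffer().sniff(raw)
rows = list(csv.reader(io.StringIO(raw), dialect=dialect))
print(rows)         # [['name', 'comment'], ['Müller', 'multi-line\nnote']]
```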

The real problem

MIT’s report concludes that AI doesn’t deliver value. The actual conclusion should be: 95% of companies don’t know how to evaluate, buy, or build AI solutions.

They’re measuring the wrong thing, using flawed methodology, and drawing conclusions that miss the entire point.

The companies succeeding with AI aren’t the ones with the biggest budgets or the best engineers. They’re the ones who understand that most “failures” are just expensive education on the path to finding what actually works.

Stop reading “95% failure rate” as “AI doesn’t work.”
Start reading it as “95% of companies don’t know what they’re doing.”

There’s a difference.


MIT report: https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

What’s your experience? Are you seeing 95% failure rates, or is your sample telling a different story?

