Why So Many AI Projects in Education Quietly Fail
I’ve seen a lot of AI projects fail in education.
Almost none of them failed because the model wasn’t good enough.
They failed because AI was built as a feature, not as part of a system that actually reduced user pain.
This distinction sounds subtle, but it explains much of what is happening across EdTech right now.
When AI Is Used to Hide a Broken Experience
In one EdTech role, leadership was excited about the idea of a navigation chatbot. Users were struggling to find things in the product, and the chatbot was framed as a fast way to help them get unstuck.
On the surface, that sounded reasonable.
But the real issue was not navigation.
It was that the UX itself was broken.
The chatbot was being used to paper over a structural problem instead of fixing it. Rather than asking why users were confused in the first place, the solution became an intelligent layer on top of confusion.
I’ve seen this pattern repeatedly. AI becomes a way to avoid harder product decisions. It is easier to add a conversational interface than to rethink information architecture, flows, and mental models. But when AI is used this way, adoption rarely follows.
Users do not experience the product as “helpful.” They experience it as fragile.
Technically Impressive, Practically Unused
Before I joined that organization, several AI services had already been built by talented engineers. On paper, they were impressive. Clean architecture. Solid models. Real effort.
The question that kept coming up internally was simple and frustrating:
Why isn't anyone using them?
The uncomfortable answer was not about engineering quality. It was about relevance and connection.
Some tools did not solve a big enough pain. Others solved a real problem, but in isolation. They were not embedded into the workflows where teachers already spent their time. There was no natural moment where the tool became indispensable.
From the user’s perspective, these services felt optional. Optional tools do not get adopted in classrooms where time and attention are already stretched thin.
Where AI Actually Worked
The projects where AI did work looked very different.
In a recent role, we partnered deeply with UX Research and focused relentlessly on one question: why.
Why was lesson planning so hard?
Why did teachers feel behind even when they were working nonstop?
Eventually we landed on the real issue. Teachers did not know what skills their students had actually mastered. Planning was guesswork.
Once that became clear, AI stopped being a feature and became a system. Signals from student activity were collected. Those signals were synthesized into mastery insights. And those insights were surfaced in a way teachers could act on immediately.
Nothing about this was flashy. But it respected how teachers make decisions.
Adoption followed because the system reduced cognitive load. It replaced uncertainty with clarity. AI did not add work. It removed friction.
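For readers who want that shape made concrete, here is a minimal Python sketch of the signal-to-insight-to-action pipeline. Everything in it is a placeholder I invented for illustration, not the real system: the names (ActivitySignal, mastery_by_skill, students_needing_reteach), the fraction-correct scoring, and the 0.8 mastery threshold are all assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class ActivitySignal:
    """One observed piece of student work (hypothetical shape)."""
    student_id: str
    skill: str
    correct: bool


def mastery_by_skill(signals, min_attempts=3):
    """Synthesize raw activity signals into per-student mastery estimates.

    Returns {(student_id, skill): fraction_correct}, skipping pairs
    with too little evidence to support a claim either way.
    """
    attempts = defaultdict(list)
    for signal in signals:
        attempts[(signal.student_id, signal.skill)].append(signal.correct)
    return {
        key: sum(results) / len(results)
        for key, results in attempts.items()
        if len(results) >= min_attempts
    }


def students_needing_reteach(mastery, threshold=0.8):
    """Surface the actionable view: which skill needs review, and for whom."""
    by_skill = defaultdict(list)
    for (student_id, skill), score in mastery.items():
        if score < threshold:
            by_skill[skill].append(student_id)
    return dict(by_skill)


if __name__ == "__main__":
    signals = [
        ActivitySignal("s1", "fractions", True),
        ActivitySignal("s1", "fractions", False),
        ActivitySignal("s1", "fractions", False),
        ActivitySignal("s2", "fractions", True),
        ActivitySignal("s2", "fractions", True),
        ActivitySignal("s2", "fractions", True),
    ]
    print(students_needing_reteach(mastery_by_skill(signals)))
    # {'fractions': ['s1']}
```

The specifics here do not matter; the shape does. Raw signals go in, synthesis in the middle is gated on having enough evidence, and the output is organized around the decision a teacher actually has to make.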
The Reality of Classrooms and Offline Work
Another hard lesson came from classrooms themselves.
Most meaningful student work is still offline. When you visit schools and talk to teachers, you see clipboards, paper, pencils, and handwritten notes everywhere. Digital signals are only part of the picture.
We explored ways to capture offline work. Tablets. Scanning assignments. QR codes. Styluses. On paper, these ideas looked promising.
Teachers told us something different.
They already move quickly with clipboards. They can flip to the right page instantly. Pens are fast, reliable, and disposable. Tablets require two hands. Typing slows them down. Styluses get lost. Input friction matters.
The insight here was not about technology limitations. It was about flow.
If AI breaks classroom flow, it does not matter how intelligent it is. Teachers will not adopt tools that make them slower in the moments that matter most.
The Pressure to Add AI Anyway
Across organizations, I have felt consistent pressure to ask the wrong question.
“Where can we add AI?”
That question usually comes from good intentions. Leadership sees investment flowing into AI. They read about advances daily. There is a fear of falling behind.
But the better question is harder and less exciting.
“What is the most painful problem our users are facing, and should AI even be part of the solution?”
When teams start with technology instead of pain, they end up with scattered experiments. Engineers get frustrated by low usage. Product teams struggle to measure impact. Users quietly ignore the tools.
AI becomes theater instead of leverage.
Feature Thinking vs System Thinking
This is the pattern I keep seeing.
Most AI in education is built as a feature, when it needs to be built as a system.
A feature answers the question:
“What can this model do?”
A system answers a different question:
“What decision does this help a human make, faster or better, in their real context?”
Systems:
- Are rooted in actual user pain
- Respect existing workflows
- Reduce cognitive load instead of adding it
- Accept human reality instead of fighting it
When AI is designed this way, adoption is not something you have to force. It happens naturally.
A Closing Thought
AI has enormous potential in education. But the bar is higher than technical novelty. Classrooms are complex, human environments. Tools that succeed there must earn their place.
I am increasingly convinced that the most important AI decisions in EdTech are not about models at all. They are about design, flow, and empathy.
I’m curious how others have seen this play out. Especially where AI did work, and why.
