
Less certainty, more awareness: AI shows the way that schools dare not take

When a machine says “I don’t know”, it’s time to reinvent education.

By Dr. Tawhid CHTIOUI, President and Founder of aivancity, the Grande École of AI and Data

Just imagine. You put a question to a state-of-the-art artificial intelligence, loaded with artificial neurons, gorged on planetary data, more connected than your teenager on a Saturday night, and it answers you, without blushing: “I don’t know.”

Not a loading error. Not a server failure. No: a deliberate act of algorithmic modesty.

On the other side of the screen, you’re left speechless. As if ChatGPT had suddenly discovered existential doubt.

This scene is not science fiction. It’s the result of a very real innovation by Themis AI, an MIT start-up that has just endowed generative AIs with an unexpected superpower: doubt. An AI that hesitates. An AI that chooses not to answer. An AI that’s just like us… when we’re honest. A programmed act of lucidity.

For years, we’ve been training machines to answer everything, to talk more than think, to assert more than question. We’ve entrusted them with the mission of enlightening the world… while forbidding them to say they don’t see clearly.

In this great digital comedy, teachers have long been the first to be relegated to the role of extras. Their voices have been replaced by computer-generated voices, their doubts by automated certainties, their fruitful slowness by instant reactivity.

But now AI itself has taken a step to the side. It has discovered that knowledge without conscience is the ruin of cognition. It’s giving doubt its rightful place.

What if 21st-century education no longer meant answering all the questions, but learning to ask the right ones? No longer piling up knowledge, but cultivating a rarer, more subversive, more fertile skill: knowing how to say “I don’t know. And that’s why I’m looking.”

At the heart of this mini-revolution lies a simple but radical gesture: teach an artificial intelligence to recognize that it doesn’t know. That’s exactly what Themis AI, the start-up founded by MIT researchers, is doing. Their idea? Add a “boundary awareness layer” to large language models. A sort of filter that measures, in real time, the AI’s level of confidence in its own response. And if the doubt is too great, the machine responds: “I don’t know.”
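
To make the gesture concrete, here is a minimal sketch, in Python, of what such a confidence filter could look like, assuming a simple self-consistency heuristic: sample the model several times and answer only when the samples agree. The function names, the threshold value and the toy model below are illustrative assumptions, not Themis AI’s actual method.

from collections import Counter
import random

# A sketch only: estimate confidence by sampling several answers and
# measuring how much they agree, then abstain below a threshold. The
# sampling count, the threshold and the toy model are illustrative
# assumptions, not Themis AI's actual method.
def answer_with_doubt(generate, question, n_samples=5, threshold=0.75):
    """Return the majority answer only if agreement clears the threshold."""
    samples = [generate(question) for _ in range(n_samples)]
    best_answer, count = Counter(samples).most_common(1)[0]
    confidence = count / n_samples  # crude proxy for the model's certainty
    if confidence < threshold:
        return "I don't know."      # the programmed act of lucidity
    return best_answer

# Toy stand-in for a language model: sure about one question, not the other.
def toy_model(question):
    if question == "What is the capital of France?":
        return "Paris"
    return random.choice(["42", "Love", "Cheese", "Blue", "Seven"])

print(answer_with_doubt(toy_model, "What is the capital of France?"))  # Paris
print(answer_with_doubt(toy_model, "What is the meaning of life?"))    # almost always: I don't know.

In a real system the confidence estimate would come from a genuine uncertainty model rather than a vote count, but the principle is the same: below a certain level of certainty, the honest output is “I don’t know.”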

Is this the machine balking? No. This is a technical feat coupled with an ethical turning point. Because it’s not by perfecting the model’s performance that we build trust; it’s by giving it the ability to doubt itself.

By reducing hallucinations in professional use by 64%, Themis AI has not only improved a tool: it has redefined the maturity criteria for artificial intelligence.

But what’s most fascinating is that this technological innovation… is just catching up with what the school should have done a long time ago.

Because the truth is, doubt is not a weakness. It’s a method. A posture. A skill.

And if an AI can learn to doubt, why does our education system persist in valuing self-assurance over lucidity? In prioritizing quick answers over slow questions?

For centuries, the authority of the teacher rested on one decisive advantage: he knew. He held the content, controlled the knowledge, and had the authority to correct those who strayed. He was the guardian of the cognitive temple.

Then came the Internet, then Wikipedia, then ChatGPT… And suddenly, this role of official distributor of knowledge became as obsolete as a cassette player. The encyclopedia teacher? Outdated.
The search engine prof? Decommissioned.

But this apparent downgrading is in fact a tremendous opportunity for reinvention. For while AI is capable of reciting, rephrasing and even writing with disturbing brilliance, it still lacks the ability to accompany a learner, to contextualize, to doubt, to nuance and to help someone grow.

Yet this is precisely what a 21st-century teacher must do:
no longer transmit answers, but orchestrate pathways.
The teacher becomes an architect of skills.

Their role is no longer to know everything, but to help others know better. Not to impose a truth, but to build spaces for exploration. Not to fear AI, but to use it as a pedagogical mirror, revealing fuzzy areas, blind spots, things left unsaid. In other words, where AI produces content, the teacher produces meaning.

And in a world saturated with mass-generated information, meaning has become a far more precious commodity than raw knowledge. An AI can write a perfect essay. But it can’t explain why that question is worth asking.

So, no, the professor isn’t dead. He’s just changed his costume. He’s swapped the smock of the know-it-all for the hard hat of the skills builder. And his best tool? Doubt…

Had Socrates designed this machine, he would probably have started by taking away its voice.

Then he would have taught it to ask questions. Not questions to get answers. Questions to cast doubt on the answer. Questions to explore the shadows around certainty. Questions like holes in knowledge, where thought can breathe.

Because Socrates, it’s easy to forget, never taught anything. He never gave a lecture, never corrected a paper, never gave a mark out of ten. All he did was talk, tirelessly. And when pressed, he would reply: “All I know is that I know nothing.”

Today, he would fail the agrégation, France’s competitive teaching exam…

But his spirit is back with a vengeance, and through an unexpected door: that of machines.

Themis AI’s artificial humility module does nothing more than institute an algorithmic version of Socratic doubt. It doesn’t seek to make AI more knowledgeable, but more lucid. Less arrogant. More reliable, precisely because it knows it can make mistakes.

What if we taught our students to detect the blind spots in a line of reasoning, even when it’s well presented, well written, even signed ChatGPT? What if we rehabilitated uncertainty as a learning tool, not as a weakness to be corrected?

Because in the end, intelligence isn’t the art of answering quickly, it’s the ability to stay in the question without panicking.

So yes: AI doubts, so it thinks. And if it starts to think like Socrates, maybe it’s time for human education to do the same.

In recent days, headlines have thundered like premature autopsies: AI, we are told, is making us stupid.

The latest salvo? An MIT study of 54 participants aged 18 to 39, divided into three groups: ChatGPT users, Google Search users and a “brain only” group. The result: the ChatGPT group showed significantly lower brain activity, less creativity, weaker memory and essays judged “soulless” and “uniform”[i]. A similar finding points to an atrophy of critical thinking in those who put their “trust” in AI.

It would seem that artificial intelligence is a kind of cognitive microwave: practical, fast, but one that kills the taste for cooking. Except there’s a slight problem with this line of reasoning: it’s not AI that makes you stupid, it’s the way you use it.

Give a three-year-old a drill and he won’t build you a tree house. Give ChatGPT to an educational system formatted for answers, grading scales and essays, and you’ll get… answer clones.

But if you give that same AI to minds trained to doubt, question and deconstruct, then it becomes a formidable intellectual adversary. A training ground. A sparring partner for reasoning.

It’s not the tool that makes the level, it’s the level that makes the tool.

So, yes, AI can make you lose your neurons.
But only if you’ve been taught to obey, not explore.
Only if you’ve been led to believe that “getting the right answer” is more important than “understanding why that answer makes sense”.

In other words: it’s not artificial intelligence that threatens us. It’s artificial pedagogy.

And if we want to avoid becoming processor-assisted intellectual zombies, maybe it’s time to put doubt, humility and friction back into our learning. To stop pretending that there’s always a right answer. To stop grading speed instead of depth. To transform error into method, ignorance into a starting point, uncertainty into a compass.

What if we rethought education – not just schooling, but education in the broadest sense of the term – as the gymnastics of uncertainty?

No longer a parade of well-aligned correct answers, but a laboratory of the unsure, a playground for testing, formulating, getting lost and starting again.

Because while AIs know how to generate text, they still don’t know how to generate critical thinking. They have no intuition, no fertile doubts, no inner voice that says: “Hmm, really?”
And if schools are to prepare tomorrow’s citizens, they would do well to teach what machines will never do: think against oneself.

And if we want to prepare minds capable of resisting the algorithmic illusion, we’ll need more than committed teachers: we’ll also need enlightened parents.

Yes, educating for doubt starts at home. When we stop answering every question with assurance. When we say to a child: “I don’t know, but we can look it up together.” When we value curiosity more than certainty, searching more than reciting.

So what can we do about it? Here’s a radically sensible little program to educate – at school and at home – about uncertainty:

Transforming an admission of ignorance into a rite of initiation.

At school, start every lesson with a question that no one can answer right away, not even with ChatGPT. Create “I don’t know yet” badges rather than “you’re wrong” marks.

At home, when a child asks a question, resist the instinct to explain everything. Sometimes answer: “Good question… do you want to look it up together?” Show them that even grown-ups don’t know everything, and that it’s okay. It’s even exciting.

What if two opposing statements could be… both useful?

At school, develop the art of “yes, but”, of “it depends”, of “look at it from another angle”. Make doubt a skill, not a bug. Develop a love of nuance, complexity and grey areas.

At home, stop answering “is it true or false?” with absolute “true” or “false”. Sometimes say, “It’s more complicated than that.” Read stories together with no obvious good guys or bad guys. Show that the world is not binary, and that this is what makes it interesting.

Mistakes are not faults. They are the rough draft of intelligence.

At school, rather than “correcting”, let’s analyze discrepancies, failed hypotheses, poorly formulated intuitions. Show that thinking is built on deviations, not certainties. Intelligence without error is intelligence without learning.

At home, when a child makes a mistake, avoid “You see, you’re not thinking!” and prefer “Interesting… what did you mean?”, helping them walk through their reasoning and correct it themselves, without shame. Create a climate where error is not a fault, but a springboard.

At school, instead of banning ChatGPT, let’s learn to ask it real questions. To spot its hallucinations, assess its confidence, debate its limits. Let’s make machines objects of study, not oracles.

At home, explore AI with the children, not behind their backs. Ask them: “What do you think?”, then “Do you agree?” Learn together to check, to doubt, to dig deeper. Show them that just because something is well formulated doesn’t mean it’s right.

The student of the future doesn’t need to know everything. They need to know how to formulate a problem, pose a hypothesis, test an intuition, dismantle a line of reasoning. In short, to become an investigator of reality, a craftsman of doubt, an architect of truth in motion.

At school, replace knowledge tests with mini-investigations. Learn to search, to formulate a hypothesis, to articulate a doubt. Make the student a detective of reality, not a repository of knowledge.

At home, encourage curiosity. When a child says “I don’t understand”, don’t give them the answer; give them a hint. Help them look in several sources. Teach them that understanding is a path, not a box to be ticked.

Because in a world saturated with automated certainties, the real cognitive luxury is not knowing. To search, for a long time. To doubt, intelligently. And to understand, finally, that it’s not the answer that counts. It’s how you approach it.

What if the greatest educational revolution of our century wasn’t technological, but philosophical? What if the most valuable thing AI could teach us was not speed, productivity or performance… but modesty?

Yes, we have to say it: we have taught answering too much, and seeking too little. We’ve built systems where those who doubt are seen as slow, weak and lost. Where error is punished, uncertainty stigmatized, nuance frowned upon.

But doubt is the breath of intelligence, the necessary pause between information and understanding, the breath before the leap.

And now an AI reminds us of this. An algorithm finally dares to say what so many teachers no longer dare to: “I don’t know.”

The AI’s coming-out isn’t that it thinks: it’s that it doesn’t know everything, and that’s perhaps the smartest thing it’s ever said.

Perhaps this is the beginning of something big. Not the end of knowledge, but a rebirth.
A shift from vertical education, where knowledge is poured into open skulls, to circular, dialogical, Socratic education.

So to all those who teach, train and support: don’t be afraid of not knowing.

Be afraid, rather, of pretending to know. Make doubt a tool, humility a style, uncertainty a pedagogy. Not to give up on understanding, but to relearn how to think.

Because in the face of machines that claim to know everything, the most human, the freest, the most powerful act is perhaps, simply… to say: “I don’t know. And that’s why I’m alive.”

Selected as one of the 25 most influential global figures in AI and data by Keyrus (January 2025), Tawhid CHTIOUI is an international expert, speaker and serial entrepreneur in higher education and training. He is President and Founder of aivancity, the Grande École of Artificial Intelligence and Data. He holds a PhD in Management Sciences from Paris Dauphine University, completed a leadership development program in higher education at Harvard University, and has held scientific and management positions in various business schools in France and abroad. Tawhid CHTIOUI is a Chevalier (2016) and Officier (2022) of the Ordre des Palmes Académiques, and has received several international awards, including the “Top 100 Leaders in Education Award” from the Global Forum on Education & Learning, “The Name in Science & Education Award” from the Socrates Committee Oxford Debate University of the Future, the Top 10 Most Inspiring People in Education 2022 from CIO VIEWS, and the “Trophée de la Pédagogie” 2024 from Eduniversal.

[i] https://www.medialab-factory.com/ia-intelligence-artificielle-rend-plus-bete-ou-augmente-creativite/