By Dr. Tawhid CHTIOUI, Founding President of aivancity, the leading school for AI and data
1. The Comforting Illusion of Technical Training
By the end of 2025, a strange consensus had taken hold. Faced with the sudden emergence of artificial intelligence in every sphere of society, the world decided to provide training on a massive scale, quickly, and free of charge… as if to ward off anxiety through education.
Training in AI has become essential. But one question continues to be carefully avoided: training in what?
AI has never been more accessible. With just a few clicks, anyone can now learn to interact with models, generate text, code, and images, and automate complex tasks. Yet our collective understanding of what AI is actually doing to our societies has never been more fragile. We are learning to wield a power we don’t fully grasp.
The scene borders on caricature. Major platforms are opening up their courses. Universities are making their curricula available. Microsoft is launching a global AI Learning Challenge and providing free access to dozens of modules focused on skills, productivity, tool usage, and professional certifications. Google is rolling out Google Skills, a platform offering thousands of free resources on AI, centered on machine learning, APIs, cloud applications, and workflow optimization. Stanford is opening its flagship deep learning and NLP courses to the public: a bold academic move, but one whose scope remains largely technical, with societal dimensions appearing only marginally. MIT is widely distributing its machine learning and AI courses via OpenCourseWare; a few modules address ethics, but they remain peripheral to the technical core of the offering.
Everywhere, the same course titles keep popping up, like a mantra: AI for Everyone, Prompt Engineering, Machine Learning Basics, Build with AI. The underlying message is powerful, almost reassuring: understanding AI means knowing how to use it—click, configure, optimize. Produce faster, code more efficiently, automate more.
Thus, we learn to communicate with models, but not whether to delegate decisions to them. We learn to optimize algorithms, but not to question the power dynamics they reshape. We learn to produce faster, but not to understand what is quietly shifting: work, responsibility, sovereignty, trust.
This reductionism is not insignificant. It is convenient. Because technology is reassuring: it is measurable, teachable, and certifiable. It creates the illusion of control. It allows us to believe that AI is merely a new tool—sophisticated, to be sure, but fundamentally neutral. As if learning to use electricity were enough to understand industrial society…
Yet artificial intelligence is not merely a technical advancement. It represents a subtle anthropological shift—a gradual transfer of decision-making, judgment, and mediation to systems that we design but no longer fully control.
By focusing solely on practical training, we are creating societies that are skilled but blind. Organizations that are efficient but dependent. Individuals who are competent but helpless in the face of the consequences of what they are handling.
Therefore, the real question is not an educational one. It is political, cultural, and civilizational: can we seriously claim to prepare a society for AI by merely teaching it to click on models, without ever teaching it to reflect on the world that these models are transforming?
2. The Fundamental Misunderstanding: Mastering the Technology Does Not Mean Understanding AI
At the heart of the current debate on artificial intelligence lies a fundamental misunderstanding: a subtle yet far-reaching semantic shift—we have confused technical mastery with true understanding.
Knowing how to train a model, write an effective prompt, or integrate an API has, in short, become synonymous with “understanding AI.” As if learning a programming language were enough to grasp what programming does to the world. As if one could understand finance by learning to use a spreadsheet, or democracy by knowing how to vote.
This confusion is reassuring. It allows us to frame a profound revolution as a matter of skills. It shifts collective anxiety toward a simple solution: more training, faster, and with a greater focus on technical skills. But it rests on a dangerous illusion: the notion that AI is just another technology—complex, to be sure, but fundamentally neutral and manageable.
But artificial intelligence is not a standalone tool. It is an invisible decision-making infrastructure. It does more than simply perform tasks; it guides, organizes, prioritizes, and makes recommendations. It influences what we see, what we believe, what we choose, and what we reject. It functions less like a machine and more like a constant filter of reality.
Understanding AI, then, isn’t just about knowing how it works, but about understanding what it brings about.
It shifts power to those who own the models, the data, and the infrastructure. It shifts responsibility from individuals to systems, and then from systems to organizations that are difficult to identify. It shifts the nature of work, automating not only execution but also analysis, evaluation, and sometimes judgment itself.
A purely technical education leaves these aspects out of the picture. It teaches people to optimize without questioning, to implement without evaluating, and to speed things up without considering the direction. It produces experts capable of operating systems that they are not always able to question.
Even more worrying: this approach fosters a form of collective abdication of responsibility. If AI is viewed as merely a tool, then its effects are dismissed as side effects; its biases, as technical imperfections; and its social impacts, as unavoidable externalities. Technology absorbs the criticism, and critical thinking recedes.
But a society cannot delegate its understanding of the world to the engineers who build its infrastructure. It cannot leave it to statistical models alone to redefine the rules of the economic, social, and political game.
Teaching AI without teaching its purpose, its limitations, and its consequences is like producing expertise without a compass.
The question, then, is not whether we need more technical skills.
We certainly do.
The question is a far more daunting one: what becomes of a society that knows how to use systems but no longer knows how to think?
3. Artificial Intelligence as a Total Social Phenomenon
(or why training in AI in 2026 can no longer be just a technical matter)
Artificial intelligence is not merely transforming our tools. It is transforming the very conditions of human decision-making. It is affecting work, the economy, governance, democracy, and the environment—simultaneously, silently, and systematically. It is not an industry; it is an environment. It is precisely for this reason that AI constitutes a total social phenomenon. And it is precisely for this reason that training limited to technical skills is structurally insufficient.
Take ethics, for example. It has never been invoked so frequently. Charters, principles, and declarations abound. And yet, the gap between rhetoric and reality is vast: 82% of companies claim to have responsible AI principles, but only 27% have implemented concrete operational processes to apply them (MIT Sloan Management Review, 2024). In other words, ethics remains largely declarative. Training people in AI without training them in the engineering of responsibility means equipping them to build powerful systems devoid of effective safeguards.
Governance is another major blind spot. Yet when it is structured, cross-functional, and fully embraced, it reduces incidents related to bias, compliance, or uncontrolled use by 40% (McKinsey, Global AI Survey). But governance cannot be taught in a tutorial. It requires considering the human responsibility behind every algorithmic decision and connecting technology, business, risk, and strategy. Without this, AI becomes an organizational black box—high-performing, but uncontrollable.
Transparency and explainability highlight the same divide. More than 60% of European citizens believe that AI systems should be able to explain their decisions (OECD, Trust in AI Report). Not because they demand total transparency, but because trust requires a basic level of understanding. Yet explaining an algorithmic decision is not a coding issue; it is a matter of accountability, education, and proportionality to risk. Here again, purely technical training misses the point.
Data, the foundation of all AI, reveals another illusion. An estimated 55% of AI projects fail due to data quality or governance issues (Gartner, AI Project Failure Analysis). This is not an algorithmic failure; it is an organizational one. By 2026, the issue will no longer be accumulating ever more data, but selecting, documenting, and governing it, and directly addressing the challenges of sovereignty and strategic dependence, particularly in Europe.
Algorithmic biases, however, are no longer a matter of suspicion but of fact: more than 65% of the AI systems evaluated exhibit measurable biases that affect certain social groups (Nature Machine Intelligence). And yet, no algorithm alone can correct inequalities that stem from social and political choices. To teach AI without teaching judgment is to believe that justice can be automated.
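To make "measurable bias" concrete, here is a minimal sketch in Python of one common fairness metric, the demographic parity gap: the difference in positive-decision rates between groups. The decisions and group labels below are hypothetical, invented purely for illustration; this is one metric among many, not a verdict on fairness.

```python
# Minimal, hypothetical sketch of one way bias is measured.
# The decisions and group labels below are invented for illustration.

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates across groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan approvals (1 = approved) for two demographic groups.
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

# Group A is approved 60% of the time, group B 40%: a gap of 0.20.
print(f"{demographic_parity_gap(decisions, groups):.2f}")
```

A number like this can be computed, tracked, and audited; what it cannot do is decide whether the disparity is acceptable. That judgment remains human, which is precisely the point.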
As AI becomes integrated into critical processes, it also creates a new attack surface. Risks related to data manipulation and adversarial attacks are among the major threats identified for 2026 (NIST, AI Risk Management Framework). Here again, AI security cannot be considered in isolation from the overall resilience of organizations. It is not an isolated skill; it is a culture.
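To give a sense of what an adversarial attack actually is, here is a deliberately toy sketch assuming a simple linear scorer; the weights, input, and step size are all hypothetical, and real attacks target far more complex models. But the mechanism, a small targeted change to the input that flips a decision, is the same idea behind methods such as the fast gradient sign method.

```python
import numpy as np

# Toy, hypothetical illustration of an adversarial perturbation:
# a tiny, targeted change to the input shifts the model's decision.

w = np.array([0.8, -0.5, 0.3])   # weights of a toy linear scorer
x = np.array([0.5, 0.2, 0.4])    # a legitimate input

# For a linear scorer, the gradient of the score w.r.t. x is just w,
# so a bounded worst-case perturbation follows the sign of w.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

# The score drops from 0.42 to 0.10: a nearly identical input yields a
# different outcome if the decision threshold sits in between.
print(f"{w @ x:.2f} -> {w @ x_adv:.2f}")
```

The lesson is the author's own: defending against such manipulation is not a property of one model but of the organization around it.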
And then there’s the angle we often prefer to avoid: the environment. Energy consumption by data centers could double by the end of 2026, largely due to generative AI (International Energy Agency). Teaching AI without teaching energy efficiency amounts to normalizing a headlong technological race whose environmental cost is already evident.
Finally, at the heart of all these issues lies the most sensitive question: human responsibility. The more efficient systems appear to be, the more individuals tend to over-delegate their decisions (Harvard Business School, studies on automation bias). Automation thus becomes a form of passive abdication. Training people in AI without teaching them the right to disagree, to take back control, and to exercise human vigilance is to create societies where decision-making shifts without responsibility following suit.
Regulation, particularly at the European level, establishes a minimum standard. More than 80% of high-risk AI applications will be subject to stricter requirements by 2026 (European Commission, AI Act Impact Assessment). But compliance is not a vision. It does not replace culture, understanding, or the ability to view AI as a system that involves society as a whole.
Training in AI in 2026 is not simply about imparting isolated skills. It is about learning to navigate a world where technology, economics, politics, ethics, and the environment are now inextricably linked. Any training program that ignores this complexity does not prepare people for the future; it dangerously oversimplifies it.
4. AI Education in 2026: A Shift in the Educational Paradigm
At this point, one thing is clear: the educational divide is no longer technical. It is cultural, political, and economic. It does not pit those who can code against those who cannot, but rather those who understand what AI is transforming against those who see it merely as yet another tool to master.
Training in AI in 2026 can no longer simply mean piling up technical skills, no matter how sophisticated they may be. What is needed is a paradigm shift: moving from a focus on usage to a focus on understanding, governance, and accountability.
Three key principles should now form the foundation of any serious AI training program.
- First, understand.
Understanding what AI actually does to organizations, professions, and institutions. Understanding how it redefines the value of work, shifts decision-making centers, alters power dynamics, and accelerates certain economic trends while undermining others. Understanding that AI is not merely a productivity gain, but a profound transformation of social structures.
- Then, govern.
Training in AI means training in decision-making: when to automate, why to do so, how far to go, and when to stop. Governing AI requires knowing how to balance performance and risk, innovation and sustainability, efficiency and equity. This involves establishing frameworks, defining clear human responsibilities, and accepting that certain decisions cannot—or should not—be delegated.
- Finally, take responsibility.
Recognize the human, social, environmental, and democratic impacts of technological choices. Recognize that all automation has unintended consequences. Recognize that AI is never neutral, because it embodies choices, priorities, and values. Training in AI, therefore, means training in responsibility—not just legal responsibility, but moral and political responsibility as well.
In this context, technology does not disappear. It simply changes its role. It becomes a means—an indispensable one, but never an end in itself. A language to be mastered, not a goal to be worshipped.
It was precisely this conviction that led to the creation of aivancity. Not as just another engineering school, nor as a specialized business school, but as a hybrid institution, designed from the outset to address this systemic transformation. AI is not treated as an isolated discipline, but as a cross-cutting field situated at the intersection of technology, business, and society.
This choice is not a marketing strategy. It stems from a simple observation: training only technical experts, without equipping them to understand the economic, social, and ethical implications of what they design, amounts to creating expertise without a compass. Conversely, discussing AI without mastering its technical foundations leads to abstract thinking that is disconnected from operational realities. Hybridization is therefore not a compromise; it is a necessity.
This is also where the role of schools, universities, and academic institutions becomes indispensable. Platforms are good at teaching tools. They excel at disseminating practical, quick, and standardized skills. But they do not foster awareness or the ability to make long-term decisions.
The academic world, for its part, can no longer be content with merely providing advanced technical skills. It must assume an intellectual responsibility: that of putting things into perspective, making connections, and exploring complexity. It must embrace a long-term vision capable of transcending technological cycles and passing fads. Above all, it must bridge the gaps between what is too often kept separate: technology, the economy, society, ethics, and politics.
Training in AI can no longer be limited to training computer scientists. Training in AI means training leaders capable of making decisions. Managers capable of making judgments. Lawyers capable of understanding what they regulate. Designers capable of anticipating uses and effects. Engineers capable of questioning what they build. And, more broadly, citizens capable of not merely passively accepting the systems they use.
This is not a marginal goal. It is a democratic necessity. For a society that delegates its understanding of AI to a handful of technical experts effectively relinquishes a portion of its intellectual sovereignty. Educating people about AI in a different way does not slow down progress. It gives it direction, legitimacy, and a chance to endure.
5. Conclusion: Building a Community, Not Just a User Base
The question is no longer whether we will provide widespread training in artificial intelligence. That is already happening. The real question is far more daunting: what exactly are we training people to do?
If training in AI means learning to use increasingly powerful tools without questioning how they affect our decisions, our organizations, and our democracies, then we will create societies that are efficient but vulnerable; fast but dependent; innovative but lacking direction.
Conversely, approaching AI as a total social phenomenon means embracing complexity. It means recognizing that technology can no longer be separated from responsibility, that performance can no longer be conceived without sustainability, and that automation can no longer advance without a conscious effort to reclaim human control. It means understanding that AI is not merely a matter of skills, but a societal choice.
2026 marks a turning point. On one hand, there is the temptation to take the easy route: to train people quickly, on a large scale, and in technical skills, and to believe that this will be enough. On the other hand, there is a more challenging but more fruitful goal: to cultivate minds capable of understanding, governing, and embracing AI in all its scope.
This choice is not a technological one. It is educational, political, and civilizational. Because, ultimately, the issue is not whether we will be able to use artificial intelligence.
The question is whether we will be able to remain collectively intelligent as we roll it out.
References
- European Commission. (2024). Impact assessment accompanying the proposal for a regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act). https://digital-strategy.ec.europa.eu
- Gartner. (2024). Why AI projects fail: Data quality, governance, and organizational readiness. https://www.gartner.com
- Harvard Business School. (2023). Automation bias and human decision-making in AI-supported environments. Working paper. https://www.hbs.edu
- International Energy Agency. (2024). Electricity 2024: Data centers, artificial intelligence, and energy demand. https://www.iea.org
- McKinsey & Company. (2024). The state of AI: Global AI survey. https://www.mckinsey.com
- MIT Sloan Management Review. (2024). From principles to practice: Operationalizing responsible AI. https://sloanreview.mit.edu
- National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0). https://www.nist.gov
- Nature Machine Intelligence. (2023). A meta-analysis of algorithmic bias across machine learning systems, 5(4), 312–326. https://www.nature.com
- Organization for Economic Co-operation and Development. (2023). Trust in artificial intelligence. https://www.oecd.org
- World Economic Forum. (2023). The future of jobs report. https://www.weforum.org
- Microsoft. (2025, September 4). New commitments to advance AI skills and education. https://blogs.microsoft.com/on-the-issues/2025/09/04/new-white-house-commitments/
- Microsoft. (2025). AI Learning Challenge & LinkedIn Learning AI pathways. https://www.linkedin.com/learning
- Google. (2025, October 21). Introducing Google Skills: Free learning paths for AI and digital skills. https://blog.google/outreach-initiatives/education/google-skills/
- Google Cloud. (2025). Machine learning and generative AI training catalog. https://cloud.google.com/training
- Stanford University. (2025). Artificial intelligence and machine learning courses (open access). https://online.stanford.edu
- Stanford University. (2025). CS230: Deep Learning; CS224N: Natural Language Processing with Deep Learning. https://www.youtube.com/@stanfordonline
- Massachusetts Institute of Technology. (2025). MIT OpenCourseWare: Artificial intelligence and machine learning. https://ocw.mit.edu
- Massachusetts Institute of Technology. (2025). Ethics of AI bias; Ethics for engineers: Artificial intelligence. https://openlearning.mit.edu

