
Regulating without stifling innovation: the dilemma facing emerging countries as AI rapidly expands

By Dr. Tawhid CHTIOUI, President and Founder of aivancity, the Grande Ecole of AI and Data

Introduction

Artificial intelligence occupies a central place in today’s global dynamics, carrying with it revolutionary potential in terms of economic innovation, social progress and improvements to everyday life. However, this technological revolution presents major legal and ethical challenges, particularly for emerging countries, which must not only catch up technologically, but also build regulatory frameworks adapted to their local realities.

Strict regulations such as the European AI Act, the US Executive Order on AI and the Canadian Artificial Intelligence and Data Act (AIDA) illustrate advanced, precisely drafted models designed to protect citizens from AI abuses. These frameworks establish strong requirements for transparency, accountability and control. However, transposing these models directly to emerging countries raises important questions: rigid Western rules could stifle innovation and may not be fully applicable given significant cultural, economic and institutional differences.

This dilemma requires emerging countries to devise hybrid, innovative approaches that balance citizen protection with flexible, pragmatic AI adoption. But how can emerging countries reconcile effective regulation with the promotion of innovation? What specific features need to be taken into account when defining an appropriate legal framework? And how can the economic, social and ethical impacts of the massive introduction of AI in these countries be anticipated and managed?

A legal framework under construction: between rigor and flexibility

Currently, the majority of emerging countries are still exploring the best approaches to regulating artificial intelligence, oscillating between regulatory caution and the encouragement of innovation.

In Africa, the African Union took an important step forward by adopting, in July 2024, a continental strategy focused on ethical and responsible regulation, while insisting on the need for an approach adapted to the local realities of member countries.

This strategy includes a specific strand dedicated to AI regulation and safety, while favoring a flexible approach that enables member countries to gradually build their own regulatory frameworks tailored to their specific contexts.

The document highlights several dimensions essential to the successful integration of AI in Africa. In particular, it stresses the need to develop local human capital, so as to build a skilled workforce capable not only of using but also of designing AI technologies that correspond to African needs and particularities. It also underlines the importance of improving digital infrastructure, such as internet connectivity and data centers, to ensure sovereign management of local data, and it encourages boosting the AI-related economy, notably through support for innovative startups and the creation of a favorable climate for technological investment. Finally, it calls for sustainable regional and international partnerships that will allow African countries to benefit from skills sharing and technology transfer, while guaranteeing regular monitoring and evaluation of progress.

The strategy thus represents a balanced and ambitious model, fully integrating ethical, economic, cultural and social issues to ensure that artificial intelligence makes a positive and lasting contribution to the development of the African continent.

On a national level, legal frameworks relating to artificial intelligence often remain embryonic in many African countries. Côte d’Ivoire, for example, is stepping up its efforts against digital disinformation in the run-up to the presidential elections, but does not yet have a comprehensive, structured AI framework.

In Senegal, the political shift of 2024, with the election of President Bassirou Diomaye Faye, marked a new direction in technological development. The country abandoned the logic of the Plan Sénégal Émergent (PSE) in favor of the “Sénégal Horizon 2050” vision, centered on a structural, inclusive and sovereign transformation. In February 2025, the Senegalese authorities launched a new digital strategy entitled “New Technological Deal”, of which artificial intelligence is one of the pillars.

This strategy aims to integrate AI into public policies in a cross-cutting way, linking it with national priorities: education, healthcare, agriculture, governance and entrepreneurship. It also calls for the development of an AI-specific legal framework, as well as comprehensive reform of data protection, digital law and cybersecurity. Particular emphasis is placed on technological sovereignty, the creation of local skills and the promotion of African solutions based on Senegalese linguistic and social realities.

Although still in the development phase, this strategy signals a strong determination not merely to undergo the digital transition but to steer it ethically, inclusively and in a way tailored to the country’s needs. Senegal thus aims to become a structuring player in the regional governance of AI in West Africa.

In Egypt, although initiatives such as the creation of the National Council for Artificial Intelligence and the adoption of the Egyptian Charter for Responsible AI have been put in place to promote the ethical use of AI, the country does not yet have a national legal framework specific to AI. The Charter, adopted in 2023, aims to ensure the ethical use, deployment and management of AI systems in Egypt, incorporating principles such as fairness, transparency, human-centricity, accountability and safety. However, the absence of specific AI legislation compromises the effective application of these principles.

These examples illustrate the efforts undertaken by African countries such as Senegal and Egypt to integrate AI into their development strategies. However, the lack of specific and comprehensive legal frameworks on AI highlights the need for these nations to strengthen their legal infrastructures to ensure the ethical and responsible use of artificial intelligence.

Morocco, meanwhile, is taking a proactive, balanced approach: although there is as yet no legislation specifically dedicated to AI, the country is building on several existing frameworks, such as Law 09-08 on the protection of personal data and Law 05-20 on the cybersecurity of digital infrastructures. In May 2024, the Moroccan Minister of Justice announced the preparation of an ambitious bill aimed specifically at framing AI and its uses, taking into account the potential challenges and threats linked to these technologies. The draft would comprise 17 articles covering personal data, governance (with the creation of a national committee dedicated to supervising AI systems) and compliance (cybersecurity and privacy).

In addition, Morocco has strengthened its international commitment with the establishment, in November 2023, of an International Center for Artificial Intelligence under the aegis of UNESCO, to promote AI in Africa through applied research, training and local capacity building.

These initiatives clearly demonstrate Morocco’s commitment to developing an integrated public policy on AI, combining technological innovation, responsible governance and respect for citizens’ fundamental rights.

In Asia, India is adopting a flexible, evolutionary approach, favoring sector-specific and progressive regulation. To date, India has no single framework law dedicated exclusively to AI, preferring a pragmatic assemblage of policies, guidelines and regulations tailored to local contexts and priority sectors. As early as 2018, the Indian government think tank NITI Aayog set out an ambitious “National Strategy for Artificial Intelligence” aimed at positioning India as a global leader in key areas such as healthcare, agriculture, education, smart cities and mobility.

Since 2022, this sector strategy has been accompanied by a significant update of the overall regulatory framework through the draft “Digital India Act”, designed to regulate new technologies, including AI. In parallel, the Digital Personal Data Protection Act, passed by the Indian parliament in August 2023, provides for important complementary regulations to come. In March 2024, the Indian government went a step further by requiring AI providers to obtain prior approval before deploying experimental models, in order to prevent discriminatory bias and protect electoral integrity in the run-up to the general elections. Finally, India’s active participation in the Global Partnership on Artificial Intelligence (GPAI; OECD, 2025) reflects a desire to be part of an international dynamic while preserving the room for national innovation necessary for its technological development.

In December 2024, Malaysia inaugurated a National Artificial Intelligence Office dedicated to AI policymaking and regulation. This initiative aims to centralize AI-related efforts, provide strategic planning, encourage research and development, and ensure regulatory oversight. Initial objectives include establishing a code of ethics for AI, creating a regulatory framework and implementing a five-year technology action plan to 2030. At the same time, Malaysia has formed strategic partnerships with major companies such as Amazon, Google and Microsoft, which have invested in the country’s data centers, cloud and AI projects.

Singapore, which plays a leading role in developing governance and ethical guidelines for AI within the Association of Southeast Asian Nations (ASEAN), is actively collaborating with member countries to develop an AI application guide for the region’s public and private sectors. In 2023, Singapore updated its National Artificial Intelligence Strategy, originally launched in 2019, to reflect technological developments and national priorities.

These examples illustrate the diversity of approaches adopted by emerging countries to regulate and promote the use of artificial intelligence according to their national contexts and strategic priorities: approaches that are distinct yet converge toward balanced AI regulation. These countries are actively seeking the right balance between the responsible adoption of legal frameworks inspired by international standards and careful respect for local cultural, economic and institutional specificities. This constantly evolving approach is essential if they are to fully exploit the opportunities offered by AI while effectively protecting their citizens against possible abuses.

Emerging countries’ major challenges in regulating AI

Emerging countries face several major challenges in their approach to AI regulation.

First, their limited institutional capacity is a major obstacle: judicial and regulatory institutions, often underfunded and with little technical training, struggle to develop and enforce appropriate regulatory frameworks, opening the way to potential abuses such as algorithmic bias, abusive surveillance or invasions of privacy. Yet this constraint can also be an opportunity to create hybrid, pragmatic and innovative models, far removed from sometimes overly rigid Western regulations, as in India, which favors sector-specific regulation differentiated according to the risks associated with each AI application.

In addition, the rapid rise of automation poses major social challenges, particularly in terms of labor law. Key sectors such as services, finance and manufacturing could be profoundly transformed, threatening the jobs of the most vulnerable populations. Particular attention needs to be paid to the digital sector, where click workers, responsible in particular for data labeling, often operate in precarious conditions and without adequate legal protection.

Finally, the issue of digital sovereignty is emerging as a fundamental strategic challenge. Emerging countries’ dependence on foreign technological solutions limits their control over critical infrastructures and sensitive data. To remedy this vulnerability, investment in autonomous local infrastructures, such as the Moroccan project for sovereign data centers and adapted linguistic models, is a promising way forward. These initiatives not only strengthen economic resilience, but also ensure better protection for citizens and their data, laying the foundations for effective digital sovereignty.

Conclusion: What kind of regulation for tomorrow?

Hybrid, balanced and adapted regulation is the preferred path for emerging countries, combining protection, innovation and regulatory flexibility. This approach must, however, be accompanied by in-depth reflection on the ethical issues inherent in AI. A robust ethical framework is essential to guide AI innovation, based on universal principles such as transparency, fairness, explainability, loyalty and reliability. AI also requires clear, independent control and audit mechanisms to identify and quickly correct any drift. Emerging countries can draw on UNESCO’s international recommendations and GPAI practices, adapting them to their own socio-economic specificities to establish their own ethical charters.

Several critical questions remain open: how can emerging countries ensure that the data used to train AI models truly reflects their cultural and linguistic realities? How can they avoid increased dependence on technological solutions developed mainly in Western contexts? What institutional and international mechanisms should be considered to foster more equitable and inclusive cooperation in the field of AI?

These questions shape future debates on AI, calling for more participative and inclusive global governance, in which emerging countries play an active role in defining international standards and developing technologies tailored to their specific needs. The challenge is clear: to build together a digital future that is truly equitable and beneficial for all.

References

African Union (2024). Continental strategy for artificial intelligence.
OECD (2025). Global Partnership on Artificial Intelligence (GPAI/PMIA): towards responsible artificial intelligence.
UNESCO (2023). Recommendation on the ethics of artificial intelligence.
ISED Canada (2024). Artificial Intelligence and Data Act (AIDA; LIAD in French).
Africa Cybersecurity Magazine
CNDP Maroc (2024). Draft law on AI regulation in Morocco.
NITI Aayog (2023). Principles for responsible AI in India.
ILO (2023). The future of work in the face of automation in emerging countries.
Médias24 (2024). Morocco considers a legal framework for artificial intelligence.
Le Matin (2023). UNESCO International Center for Artificial Intelligence in Morocco.
AISigil (2023). National strategy for artificial intelligence in India.
Trésor (2023). Digital India Act and Data Protection Act in India.
Digital Century (2024). Regulation of AI models by Indian authorities.
