Artificial Intelligence: Between Innovation and Regulatory Framework
Companies involved in generative artificial intelligence finally have a reference document to help them prepare for the requirements of the AI Act. Originally expected in May, the code of practice for so-called "general-purpose" AI models was finally published by the European Commission on Thursday, July 10. The slight delay does not diminish its importance: the document, developed by experts, clarifies, point by point, the new rules that companies will be required to follow starting August 2.
This applies to all entities developing or deploying AI models, including the developers of flagship systems such as ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google DeepMind), and Copilot (Microsoft). For these organizations, compliance with the AI Act is becoming an essential step in their European deployment strategy.
Adopted in 2024 and phased in starting in 2025, the AI Act is the world’s first comprehensive legislative framework designed to regulate the use of artificial intelligence. Led by the European Commission, this landmark legislation aims to foster an environment of trust around AI by protecting fundamental rights while promoting innovation.
In June 2025, the European Union published an implementation guide for businesses [1]. The goal: to help them understand their obligations, achieve compliance, and deploy AI systems that are responsible and compliant with European law.
A risk-based approach: the cornerstone of the European framework
The AI Act is based on a classification of AI systems according to their risk level:
- Unacceptable risk: prohibited applications (e.g., social scoring, behavioral manipulation, real-time facial recognition without a legal basis).
- High risk: AI in sensitive sectors (healthcare, education, employment, justice, law enforcement), subject to strict requirements (auditing, documentation, human oversight).
- Limited risk: conversational systems, chatbots, or deepfakes simply need to indicate that they are artificial.
- Minimal risk: video games, product recommendations, and the like; their use is unrestricted, subject to GDPR compliance.
This classification allows for a phased implementation, proportionate to the potential impact of AI systems [2].
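The four-tier classification above is essentially a decision rule mapping use cases to obligations. The sketch below illustrates it in Python; the use-case labels and tier mapping are illustrative assumptions drawn from the examples in this article, not an official Commission taxonomy.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers, as described above."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements (audit, documentation, oversight)"
    LIMITED = "transparency obligations"
    MINIMAL = "unrestricted, subject to GDPR"


# Hypothetical mapping of example use cases to risk tiers,
# following the article's classification (illustrative only).
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "behavioral manipulation": RiskTier.UNACCEPTABLE,
    "hiring decision support": RiskTier.HIGH,
    "medical diagnosis aid": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "product recommendation": RiskTier.MINIMAL,
}


def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known example use case."""
    return USE_CASE_TIERS[use_case]
```

In practice, classification requires legal analysis of the system's actual purpose and context, not a lookup table; the point here is only that obligations scale with the assigned tier.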
A Practical Guide for Businesses
The implementation guide published by the European Commission offers practical tools to help companies comply with the AI Act:
- Compliance checklists based on risk level,
- Technical and impact documentation templates,
- Industry-specific recommendations (healthcare, HR, marketing, manufacturing),
- Examples of best practices and anonymized use cases.
Particular attention is paid to risk assessment, the principle of transparency, data traceability, and human involvement in the decision-making process.
Which companies are affected?
This applies to all companies that design, deploy, or integrate AI systems as part of their business operations within the European Union, including:
- AI system providers,
- business users,
- technology integrators,
- non-European companies targeting the EU market.
Special programs are in place for SMEs and startups to ensure that innovation is not hindered: support from innovation hubs, simplified guidelines, and technical and legal assistance.
Which skills are needed?
The enactment of the AI Act is transforming AI governance in the corporate sector:
- Legal professionals and data protection officers (DPOs) will be responsible for ensuring compliance with the law.
- Data scientists and developers will need to produce systems that are well-documented, auditable, and traceable.
- Risk and compliance officers will oversee the impact assessment.
New professions are emerging: algorithm auditor, AI impact assessor, and advisor on the ethical alignment of AI systems.
The Ethical and Strategic Implications of the European Framework
The AI Act is more than just a legal framework; it embodies a political and ethical vision of artificial intelligence, one in which AI is:
- explainable,
- non-discriminatory,
- controllable by humans,
- respectful of fundamental rights.
This system of trust can become a competitive advantage for European companies: those that adapt to it quickly will have a seal of ethical credibility in a global market seeking points of reference.
Is the AI Act the beginning of an era of active regulation of artificial intelligence?
With the AI Act, Europe is becoming the first region to establish a clear and binding framework for the use of artificial intelligence. This regulatory ambition is accompanied by a commitment to engage in dialogue with businesses, researchers, and civil society.
Other countries (Canada, Brazil, Japan) are following suit. The AI Act could thus become a global standard. But its success will depend on its practical implementation: it is up to each organization to make it their own, so that AI becomes a tool for trust and progress, not for inequality [3].
References
1. European Commission. (2025). Practical guidance for compliance with the AI Act.
https://digital-strategy.ec.europa.eu/
2. AI Watch. (2024). Understanding the EU’s risk-based approach to AI regulation.
https://ai-watch.ec.europa.eu/
3. Future of Life Institute. (2025). How the AI Act is shaping international governance.
https://futureoflife.org/

