Artificial intelligence between innovation and regulatory framework
Companies involved in generative artificial intelligence finally have a reference document to help them prepare for the requirements of the AI Act. Initially expected in May, the code of good practice dedicated to so-called "general-purpose" models was finally published by the European Commission on Thursday, July 10. This slight delay in no way detracts from its importance: the document, drawn up by experts, clarifies point by point the new rules that companies will have to comply with from August 2.
This concerns all organizations developing or deploying AI models, including the companies behind the sector's flagship products, such as ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google DeepMind) and Copilot (Microsoft). For these organizations, compliance with the AI Act is becoming an essential step in their deployment strategy in Europe.
Adopted in 2024 and phased in from 2025, the AI Act is the world’s first comprehensive legislative framework to regulate the use of artificial intelligence. Spearheaded by the European Commission, this landmark text aims to create an environment of trust around AI, protecting fundamental rights while fostering innovation.
In June 2025, the European Union published an application guide for companies [1]. Its objective: to help them understand their obligations, achieve compliance and deploy responsible AI systems that comply with European law.
A risk-based approach: the heart of the European system
The AI Act is based on a typology of AI systems classified according to their level of risk:
- Unacceptable risk: prohibited applications (e.g. social scoring, behavioral manipulation, real-time facial recognition without a legal basis).
- High risk: AI in sensitive areas (health, education, employment, justice, law enforcement), subject to strict requirements (audit, documentation, human supervision).
- Limited risk: conversational systems, chatbots and deepfakes, which are simply required to disclose their artificial nature to users.
- Minimal risk: video games, product recommendations and similar applications, which may be used freely subject to GDPR compliance.
This classification allows for a graduated application, proportionate to the potential impact of AI systems [2].
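The tiered logic above can be sketched as a simple data structure. This is an illustrative mapping only: the tier names come from the AI Act, but the example use cases and the `obligations` helper are hypothetical, and real classification requires legal analysis of each system's purpose and context.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements: audit, documentation, human supervision"
    LIMITED = "transparency obligations"
    MINIMAL = "free use, subject to GDPR"

# Illustrative examples matching the categories described in the article;
# not an official classification.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "video game NPC behavior": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return a one-line summary of the obligations for a known use case."""
    tier = EXAMPLE_TIERS[use_case]
    return f"{use_case}: {tier.name} risk -> {tier.value}"
```

For instance, `obligations("customer-service chatbot")` yields the limited-risk transparency obligations, while a high-risk use case points to the stricter audit and oversight regime.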
Operational instructions for companies
The application guide published by the European Commission offers practical tools to help companies comply with the AI Act:
- Risk-based compliance checklists,
- Technical and impact documentation templates,
- Sector-specific recommendations (healthcare, HR, marketing, industry),
- Examples of best practices and anonymized use cases.
Particular attention is paid to risk assessment, the principle of transparency, data traceability and human intervention in the decision-making loop.
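Those four pillars (risk assessment, transparency, data traceability, human oversight) can be pictured as fields of a minimal documentation record. The structure below is a hypothetical sketch, not the Commission's official template; every field name is an assumption chosen to mirror the article's list.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """Illustrative technical-documentation record for an AI system.
    Field names are hypothetical, not drawn from the official guide."""
    name: str
    intended_purpose: str
    risk_tier: str                    # e.g. "high", "limited"
    training_data_sources: list[str]  # data traceability
    human_oversight: str              # who can intervene, and how
    last_risk_assessment: date
    transparency_notice: bool = False # are users told they face an AI?

    def compliance_gaps(self) -> list[str]:
        """Flag obvious gaps against the pillars listed above."""
        gaps = []
        if self.risk_tier == "high" and not self.human_oversight:
            gaps.append("high-risk system lacks human oversight")
        if not self.transparency_notice:
            gaps.append("no transparency notice for users")
        return gaps
```

A record like this, kept up to date per system, gives compliance teams a single place to check each pillar before an audit.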
Which companies are concerned?
All companies that design, deploy or integrate AI systems as part of their activities within the European Union are affected, including:
- AI system suppliers,
- professional users,
- technology integrators,
- non-European companies targeting the EU market.
Special arrangements have been made for SMEs and start-ups, so as not to hold back innovation: support from innovation hubs, simplified guides, technical and legal assistance.
Which skills to mobilize?
The entry into force of the AI Act transforms the governance of AI in business:
- Lawyers and Data Protection Officers (DPOs) will have to ensure compliance with the law.
- Data scientists and developers will have to produce systems that are documented, auditable and traceable.
- Risk and compliance managers will oversee the impact assessment.
Emerging professions are taking shape: algorithm auditor, AI impact assessor, advisor on the ethical alignment of AI systems.
The ethical and strategic challenges of the European framework
The AI Act is more than just a legal framework: it embodies a political and ethical vision of artificial intelligence. That of an AI that is:
- explainable,
- non-discriminatory,
- controllable by humans,
- respectful of fundamental rights.
This system of trust can become a competitive advantage for European companies: those who adapt quickly will have a label of ethical credibility in a global market in search of benchmarks.
Is the AI Act the beginning of an era of active regulation of artificial intelligence?
With the AI Act, Europe becomes the first area to define a precise and binding framework for the use of artificial intelligence. This regulatory ambition is accompanied by a commitment to dialogue with companies, researchers and civil society.
Other countries (Canada, Brazil, Japan) are following suit. The AI Act could thus become a global standard. But its success will depend on its practical application: it is up to each organization to make it its own, so that AI becomes a tool for trust and progress, rather than a source of imbalance [3].
References
1. European Commission. (2025). Practical guidance for AI Act compliance.
https://digital-strategy.ec.europa.eu/
2. AI Watch. (2024). Understanding the EU risk-based approach to AI regulation.
https://ai-watch.ec.europa.eu/
3. Future of Life Institute. (2025). How AI Act is shaping international governance.
https://futureoflife.org/