Regulation & Legal Framework

AI Act: Europe Sets Clear Guidelines for Artificial Intelligence in Business

Companies working with generative artificial intelligence finally have a reference document to help them prepare for the requirements of the AI Act. Originally expected in May, the code of practice for so-called "general-purpose" models was published by the European Commission on Thursday, July 10. The slight delay does not diminish its importance: developed by experts, the document clarifies, point by point, the new rules that companies will be required to follow starting August 2.

This applies to all entities developing or deploying general-purpose AI models, including the companies behind major systems such as ChatGPT (OpenAI), Claude (Anthropic), Gemini (Google DeepMind), and Copilot (Microsoft). For these organizations, compliance with the AI Act is becoming an essential step in their European deployment strategy.

Adopted in 2024 and phased in starting in 2025, the AI Act is the world’s first comprehensive legislative framework designed to regulate the use of artificial intelligence. Led by the European Commission, this landmark legislation aims to foster an environment of trust around AI by protecting fundamental rights while promoting innovation.

In June 2025, the European Union published an implementation guide for businesses [1]. The goal: to help them understand their obligations, achieve compliance, and deploy responsible AI systems in line with European law.

The AI Act is based on a classification of AI systems according to their risk level:

  • Unacceptable risk: prohibited applications (e.g., social scoring, behavioral manipulation, real-time facial recognition without a legal basis).
  • High risk: AI in sensitive sectors (healthcare, education, employment, justice, law enforcement), subject to strict requirements (auditing, documentation, human oversight).
  • Limited risk: conversational systems, chatbots, and deepfakes, which must simply disclose their artificial nature.
  • Minimal risk: video games, product recommendations, and similar applications, with unrestricted use subject to GDPR compliance.

This classification allows for a phased implementation, proportionate to the potential impact of AI systems [2].

The implementation guide published by the European Commission offers practical tools to help companies comply with the AI Act:

  • Compliance checklists based on risk level,
  • Technical and impact documentation templates,
  • Industry-specific recommendations (healthcare, HR, marketing, manufacturing),
  • Examples of best practices and anonymized use cases.

Particular attention is paid to risk assessment, the principle of transparency, data traceability, and human involvement in the decision-making process.

This applies to all companies that design, deploy, or integrate AI systems as part of their business operations within the European Union, including:

  • AI system providers,
  • business users,
  • technology integrators,
  • non-European companies targeting the EU market.

Special programs are in place for SMEs and startups to ensure that innovation is not hindered: support from innovation hubs, simplified guidelines, and technical and legal assistance.

The enactment of the AI Act is transforming AI governance in the corporate sector:

  • Legal professionals and data protection officers (DPOs) will be responsible for ensuring compliance with the law.
  • Data scientists and developers will need to produce systems that are well-documented, auditable, and traceable.
  • Risk and compliance officers will oversee the impact assessment.

New professions are emerging: algorithm auditor, AI impact assessor, and advisor on the ethical alignment of AI systems.

The AI Act is more than just a legal framework; it embodies a political and ethical vision of artificial intelligence, one in which AI is:

  • explainable,
  • non-discriminatory,
  • controllable by humans,
  • respectful of fundamental rights.

This system of trust can become a competitive advantage for European companies: those that adapt to it quickly will have a seal of ethical credibility in a global market seeking points of reference.

With the AI Act, Europe is becoming the first region to establish a clear and binding framework for the use of artificial intelligence. This regulatory ambition is accompanied by a commitment to engage in dialogue with businesses, researchers, and civil society.

Other countries (Canada, Brazil, Japan) are following suit. The AI Act could thus become a global standard. But its success will depend on its practical implementation: it is up to each organization to make it their own, so that AI becomes a tool for trust and progress, not for inequality [3].

[1] European Commission (2025). Practical guidance for compliance with the AI Act. https://digital-strategy.ec.europa.eu/

[2] AI Watch (2024). Understanding the EU's risk-based approach to AI regulation. https://ai-watch.ec.europa.eu/

[3] Future of Life Institute (2025). How the AI Act is shaping international governance. https://futureoflife.org/

