By Prof. Nathalie Devillier, Doctor of International Law | Professor of AI Law and Ethics at aivancity
To achieve trustworthy AI, seven fundamental ethical principles must be applied and evaluated throughout the lifecycle of an AI system. These requirements are interconnected, equally important and mutually supportive. They are not exhaustive, but they cover systemic, individual and societal aspects.
These seven principles are reflected in the European AI Regulation (the AI Act) and are drawn from the European Commission's 2019 Ethics Guidelines for Trustworthy AI.
- Human agency and oversight. This principle states that AI systems should support human autonomy and decision-making, not undermine them. AI must enable individuals to maintain adequate control and supervision over systems, respecting fundamental rights and supporting a democratic and equitable society. This implies that users can understand the system, challenge its decisions and intervene if necessary. Approaches such as "human-in-the-loop" or "human-on-the-loop" are recommended, in which the human remains in command.
For example, in the development of autonomous vehicles, the human driver's ability to regain control at any time (in dangerous road conditions, say, or when the AI fails) embodies this principle; a minimal oversight gate is sketched below.
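To make the three postures concrete, here is a minimal sketch of such a gate, assuming a hypothetical planner interface (`propose_action`, `Proposal`) invented for illustration and not taken from any real autonomous-driving stack:

```python
# Minimal human-oversight sketch: the AI proposes, an explicit human
# command always wins, and low-confidence proposals are escalated
# instead of executed. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str        # e.g. "keep_lane", "brake"
    confidence: float  # model's self-reported confidence in [0, 1]

def propose_action(sensor_data: dict) -> Proposal:
    # Placeholder for the AI planner; a real system would run a model here.
    return Proposal(action="keep_lane", confidence=0.62)

def decide(sensor_data: dict, human_override: str | None,
           confidence_floor: float = 0.8) -> str:
    """The human remains in command: an override always wins, and
    uncertain proposals are escalated rather than acted on."""
    proposal = propose_action(sensor_data)
    if human_override is not None:           # human-in-command
        return human_override
    if proposal.confidence < confidence_floor:
        return "escalate_to_human"           # human-in-the-loop fallback
    return proposal.action                   # human-on-the-loop: monitored autonomy

print(decide({}, human_override=None))       # -> "escalate_to_human"
print(decide({}, human_override="brake"))    # -> "brake"
```

The design choice worth noting is that uncertainty routes to a person by default; autonomy is the conditional branch, not the other way around.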
- Technical robustness and safety. A trustworthy AI system must be resilient to attack, secure, reliable, reproducible and accurate. It must be able to cope with errors and failures, as well as with malicious or unintended use. Accurate predictions and reliable results are essential to avoid negative impacts.
For example, an AI system used to manage a power plant must be designed to withstand cyberattacks and technical failures, in order to guarantee safe operation and a stable supply.
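One small, concrete facet of robustness is failing safe on implausible inputs rather than acting on them. The sketch below illustrates that pattern only; the sensor function, thresholds and control law are all hypothetical:

```python
# Illustrative fail-safe pattern: validate inputs and fall back to a
# documented safe default instead of acting on anomalous data.
def read_turbine_temperature() -> float:
    return 642.0  # stand-in for a real sensor read

def safe_setpoint(temp_c: float) -> float:
    # Reject physically implausible readings (sensor fault or tampering)
    # rather than letting them drive the controller.
    if not (0.0 <= temp_c <= 1200.0):
        raise ValueError(f"implausible reading: {temp_c} C")
    return min(temp_c * 1.05, 900.0)  # toy control law with a hard cap

try:
    setpoint = safe_setpoint(read_turbine_temperature())
except ValueError:
    setpoint = 500.0  # safe default; in practice, operators are alerted
print(setpoint)
```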
- Privacy and data governance. This principle requires respect for privacy, data quality and integrity, and secure access to data. Data used by AI must be relevant, accurate, valid and reliable. Users must be informed about the collection and use of their data, and be able to access and correct it.
For example, applying the General Data Protection Regulation (GDPR) to an urban surveillance AI system, with its requirements of data minimization, anonymization and transparency about how data is used, is a direct illustration of this principle.
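In code, minimization often means keeping only the fields a stated purpose requires and weakening identifiers. Here is a sketch in the spirit of the surveillance example; the field names, coarsening levels and salt handling are assumptions, and a real deployment would need proper key management and retention rules:

```python
# Data-minimization / pseudonymization sketch (illustrative schema).
import hashlib

RAW_RECORD = {
    "camera_id": "CAM-042",
    "plate": "AB-123-CD",           # direct identifier
    "timestamp": "2024-05-01T10:32:00Z",
    "gps": (48.8566, 2.3522),       # precise location
    "operator_notes": "red sedan",  # not needed for a traffic-count purpose
}

def minimize(record: dict, salt: bytes) -> dict:
    """Keep only purpose-relevant fields; replace the direct identifier
    with a salted hash (pseudonymization, not full anonymity)."""
    pseudo_id = hashlib.sha256(salt + record["plate"].encode()).hexdigest()[:16]
    lat, lon = record["gps"]
    return {
        "camera_id": record["camera_id"],
        "pseudo_id": pseudo_id,
        "timestamp": record["timestamp"][:13] + ":00Z",  # coarsen to the hour
        "zone": (round(lat, 2), round(lon, 2)),          # coarsen location
    }

print(minimize(RAW_RECORD, salt=b"rotate-me-regularly"))
```

Note that `operator_notes` is simply dropped: the cleanest minimization is not collecting or retaining a field at all.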
- Transparency. This principle covers traceability, explainability and communication. AI systems must make it possible to understand how their decisions are made and which factors are taken into account. Even for complex systems, some form of explainability is necessary to understand their behavior. Clear communication about the system's capabilities and limitations is also essential.
A bank loan algorithm that provides a clear, detailed explanation of the reasons for a refusal, rather than a bare "no", illustrates transparency.
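For simple scoring models, such explanations can be generated directly from per-feature contributions. The following is a toy sketch of that idea; the weights, features and threshold are invented, and real credit models are subject to far stricter validation:

```python
# Toy "reason codes" for a linear loan score: report the factors that
# pushed the score down instead of returning a bare refusal.
WEIGHTS = {"income_ratio": 2.0, "late_payments": -1.5, "years_employed": 0.5}
THRESHOLD = 1.0

def explain_decision(applicant: dict) -> tuple[bool, list[str]]:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= THRESHOLD
    # List negative contributions, most harmful first.
    reasons = [f"{f} lowered your score by {-c:.2f}"
               for f, c in sorted(contributions.items(), key=lambda kv: kv[1])
               if c < 0]
    return approved, reasons

ok, reasons = explain_decision(
    {"income_ratio": 0.4, "late_payments": 3, "years_employed": 2})
print("approved" if ok else "refused", reasons)
# -> refused ['late_payments lowered your score by 4.50']
```

For opaque models, post-hoc attribution techniques play the same role, but the principle is identical: the applicant receives the factors, not just the verdict.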
- Diversity, non-discrimination and fairness. This principle aims to ensure that AI systems are developed and used without unfair bias, are accessible to all and inclusive (including of people with disabilities), and involve stakeholder participation. The aim is to prevent and reduce bias in data and algorithms so as to ensure fair, non-discriminatory results.
An example of this principle in action is the revision of an AI recruitment tool found to discriminate against women because of biases in its training data, in order to make it fairer for all candidates.
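Detecting such a problem often starts with a simple audit of outcomes by group. Here is a minimal sketch of one common check, demographic parity; the data is toy data and the 0.8 "four-fifths" threshold is a rule of thumb, not a legal standard:

```python
# Minimal bias audit: compare selection rates across groups.
from collections import defaultdict

decisions = [  # (group, hired) pairs; illustrative data only
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, hires = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    hires[group] += hired

rates = {g: hires[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # rule-of-thumb alarm threshold
    print("Warning: selection rates diverge; review data and features.")
```

Passing such a check does not prove fairness, but failing it is a clear signal that the training data or features need revision, as in the recruitment example above.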
- Societal and environmental well-being. AI systems must make a positive contribution to societal well-being, sustainability and environmental protection, while respecting social and democratic values. This includes minimizing negative impacts and taking into account the environmental footprint of AI, such as its energy consumption.
For example, using AI to optimize urban traffic flows and reduce congestion, thereby cutting pollution and improving quality of life in cities, contributes to societal and environmental well-being.
- Accountability. This principle implies auditability, minimization of negative impacts and clear communication of responsibilities and remedies. Mechanisms must be in place to ensure responsibility and accountability for AI systems and their outcomes. Traceability of AI operations and the possibility of external audits are crucial. Roles and responsibilities must be defined, and recourse mechanisms (e.g. for complaints) must be available.
For example, in the event of a serious error by a medical diagnostic AI system, the ability to audit the algorithm, its training data and its decisions, and to identify clearly who is responsible (developer, hospital, doctor) and how patients can obtain redress, illustrates this principle.
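The traceability that makes such an audit possible has to be built in from the start. Below is a sketch of a decision log in that spirit; the schema is hypothetical, and a real system would use tamper-evident storage rather than an in-memory list:

```python
# Sketch of decision traceability for audits: each prediction is logged
# with an input hash, the model version and the accountable role.
import hashlib, json, time

AUDIT_LOG: list[dict] = []

def log_decision(model_version: str, inputs: dict, output: str,
                 responsible: str) -> None:
    AUDIT_LOG.append({
        "ts": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "responsible": responsible,  # who answers for this decision
    })

log_decision("diag-model-1.3.0", {"scan_id": "S-77"}, "benign",
             "attending physician / hospital X")
print(json.dumps(AUDIT_LOG, indent=2))
```

Hashing the inputs lets an auditor verify later that a logged decision corresponds to a specific record without the log itself retaining sensitive data, which also serves the privacy principle above.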
These principles are essential to ensure that AI is developed and used ethically and for the benefit of society.
How do we put these principles into practice?
When implementing these ethical requirements, conflicts may arise between principles, making certain trade-offs unavoidable. Such decisions must be taken in a reasoned, transparent way, based on current technical knowledge, and must assess the risks to fundamental rights.
If no ethically acceptable trade-off is possible, the system must not be used as it stands. Decisions must be documented, regularly re-evaluated, and those responsible must be held to account.
In the event of an unfair negative impact, accessible redress mechanisms must be provided, with particular attention to vulnerable people.