Need for a model of total hybridization between Artificial Intelligence, Business, and Management

Although artificial intelligence dates back to the 1950s, it is currently experiencing such rapid growth that machines can now outperform humans in certain areas.

Artificial intelligence can legitimately replace humans in tedious and time-consuming tasks, freeing them to devote themselves to more rewarding activities. The challenge therefore lies in training young and old alike to meet these new demands.

Beyond its technical challenges (web crawling, data mining, data science, machine learning, deep learning, etc.), the hybridization of AI and management is a necessity if we are to confront the sweeping technological changes, the challenges of scientific progress, and the societal constraints shaping tomorrow's world. Full integration of content, pedagogy, and research across AI and the social sciences will enable the emergence of a new model capable of meeting the major scientific, industrial, and societal challenges. Our school project does not seek to compete with engineering schools or business schools, both long established and recognized in France, but to invest in the field that lies between these two pillars of excellence, as successfully demonstrated by certain world-class institutions such as Stanford University or the École Polytechnique Fédérale de Lausanne (EPFL).

"The teaching [of AI ethics] is virtually absent from the curricula of engineering schools and university IT courses, even though the volume and complexity of the ethical issues these future graduates will face are constantly growing." (Villani Report, 2018)

aivancity: a model of total hybridization between artificial intelligence, business, and ethics

Tawhid Chtioui, Founding President and Dean of aivancity School for Technology, Business & Society Paris-Cachan, presents the challenges of artificial intelligence and the need for a hybrid educational model.

Artificial Intelligence based on trust and responsibility

In AI, the issue of trust is obviously essential. When we take a medication without knowing its chemical formula, we trust the manufacturer regarding the medication's effect in treating our condition.

In the same way, we trust recommendation algorithms: the first results of a search engine, Netflix's suggestions for movies to watch or Amazon's suggestions for products to buy, the route suggested by your GPS, etc.

But what are the reasons and objectives behind these recommendations? Are they "customer-oriented" recommendations, designed to satisfy the customer, or "service-oriented" recommendations, designed solely to serve the company's own needs (clearing stock, promoting a product, etc.)? And which functions are being optimized? A video search engine, for instance, can be optimized for the amount of time the user spends in front of the screen, which leads the AI to offer increasingly addictive content (violence, rumors, fake news, etc.).
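The point about optimization functions can be made concrete with a minimal sketch. Everything here is invented for illustration (the video titles, scores, and the `rank` helper are not from any real recommender): the same candidate videos are ranked under two different objectives, and the choice of objective alone decides what reaches the top.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    expected_watch_minutes: float  # proxy the platform can measure directly
    user_satisfaction: float       # 0..1, what the viewer actually values

# Hypothetical candidate pool with made-up scores.
CANDIDATES = [
    Video("Calm documentary", expected_watch_minutes=12.0, user_satisfaction=0.9),
    Video("Outrage clip", expected_watch_minutes=45.0, user_satisfaction=0.3),
    Video("How-to tutorial", expected_watch_minutes=20.0, user_satisfaction=0.8),
]

def rank(videos, objective):
    """Return titles sorted by the given objective function, best first."""
    return [v.title for v in sorted(videos, key=objective, reverse=True)]

# Optimizing for screen time pushes the addictive clip to the top...
by_watch_time = rank(CANDIDATES, lambda v: v.expected_watch_minutes)
# ...while optimizing for viewer satisfaction ranks it last.
by_satisfaction = rank(CANDIDATES, lambda v: v.user_satisfaction)
```

The code is identical in both calls; only the objective passed to `rank` changes. This is the sense in which an ethics question ("what should the system maximize?") is inseparable from the technical design.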

It is therefore essential to integrate these ethical issues, understood in the broadest sense, into AI training programs: the sociology of work (click workers), data (GDPR), the robustness of algorithms, their explainability and their biases, but also customer relations, governance (social credit, etc.), and philosophy (free will, volition, etc.).

aivancity's degree programs are designed to strike a balance between three components: 50% technological skills, 25% business and management, and 25% AI ethics. These skills are taught through educational content (lectures, applications, AI clinics, personal development) in which the three components are fully integrated. All of our professional training programs likewise systematically incorporate the business and ethical implications of the technological aspects of AI and/or data covered. This hybrid approach is the hallmark of aivancity School of AI & Data for Business & Society.