Last week's Paris Peace Forum was an opportunity to discuss the most pressing threats to global stability today and to look ahead to those of tomorrow. Artificial intelligence (AI) was one of the featured topics. Without going as far as the dystopian futures depicted by Hollywood, the list of potential threats is long: disinformation destabilizing democracies, disruption of financial markets, large-scale hacking...
This is not the first time that humanity has been confronted with a major technological leap. Lessons must be learned from history: the Industrial Revolution, which lifted mankind out of poverty only to bring it to the verge of the greatest ecological catastrophe of all time; and nuclear power, which was both a tremendous energy opportunity and the deadliest weapon of the 20th century. To avoid such catastrophic scenarios, we must look to the responsible governance of AI. Unfortunately, current national governance mechanisms are both too rigid and difficult to apply. Regulators react to observed damage ("ex post"), and by the time they do, the problem has already moved elsewhere. A new approach is needed.
First and foremost, governance frameworks must adopt a holistic approach that integrates all interconnected technologies, such as quantum computing, augmented and virtual reality, blockchain, and many others. They must also enable the anticipation ("ex ante") and acceptance of risks. As consensus is needed at the societal level, a multi-stakeholder approach is mandatory, including governments, civil society, technical experts, academics and financial backers.
'Governments must play a central role'
Currently, IT expertise and resources are largely concentrated within a few private-sector companies (notably the GAFAMs). In 2023, Apple led the field, acquiring more than 20 AI-related companies. It is worth noting that, unlike open-source models such as those released by Mistral AI, proprietary AI systems such as Google Bard and ChatGPT are harder to scrutinize. They are black boxes to which we provide input (queries, questions, tasks) and which generate output (text, images, actions, results…); the algorithmic logic in between remains opaque. It is therefore critical that all stakeholders weigh in on the governance of AI.
Governments must play a central role. They must ensure that standards are consistent and effective. At present, however, the EU and the USA have very different regulatory frameworks, the former proposing to regulate via the AI Act, and the latter adopting a largely non-interventionist approach. As a result, auditing, compliance monitoring, and, consequently, commercial technological exchanges between the two regional blocs risk being hindered.
Experiments with regulatory "sandboxes" (i.e. test environments operating under real conditions, but without risk of spillover into existing systems) could be envisioned. Initiatives are already underway in the UK and Spain. The US and EU could also build on the momentum of the EU-US Trade and Technology Council (TTC) to include proactive regulatory cooperation.
Favoring ethical investment criteria
The aim is not to draw up regulations that make it harder to harness the power of AI – a fear voiced by the more than one hundred business leaders who signed an open letter against the European AI regulation last June. On the contrary, the aim is to harmonize regulations, or at least to make them interoperable.
Academia must also play its part, starting with training leaders conversant in both technology and global affairs, and by ensuring that emerging governance frameworks are based on indisputable facts. This, moreover, requires governments to ensure that researchers have access to the algorithms and data powering AI.
Philanthropists and private backers, for their part, need to assess risks properly, right from the product design stage. The role of venture capitalists as investors and mentors of technology start-ups puts them in a privileged position to apply ethical investment criteria – virtually non-existent in emerging tech today – an approach often termed "Corporate Digital Responsibility."
'At the dawn of the AI revolution'
Last year, start-ups raised a record $621 billion from venture capitalists. With this kind of power in their hands, these investors and their influential board members can exert leverage and nudge developers. Momentum could also come from international bodies such as the International Sustainability Standards Board, chaired by Emmanuel Faber, which can steer the inclusion of ESG principles in the investment mandates of venture capitalists.
Finally, developers, like nuclear engineers before them, need to be aware of the ethical implications of their work. They could also contribute concretely by drawing up ethical charters applicable to each economic sector affected by AI – journalism, finance and healthcare are among the first fields concerned. The principles recently adopted by the G7 may be a good starting point.
We are only at the dawn of the AI revolution. This upheaval has the potential to be for the common good, provided that systems are ethical from their very conception. This is a critical, even existential, precautionary principle.
Arancha Gonzalez is the former Spanish Minister of Foreign Affairs, European Union and Cooperation (2020-2021) and Dean of the Paris School of International Affairs at the French political science university Sciences Po since 2022.
Constance Bommelaer de Leusse is the Executive Director of Sciences Po's Project Liberty Institute, which aims to support research to ensure ethical governance is embedded and prioritized in the development of new technology.