The governance of AI: the ‘Brussels effect’ of a pan-European framework

While it is widely recognized that regulation is needed, the risk of over-regulation remains a pressing concern

May 20, 2021

5 Min Read

Artificial Intelligence (AI) is transforming our way of life and its applications are continuously expanding.

The development of AI shows massive potential but also poses critical unanswered questions.

Earlier this year, a panel of distinguished speakers gathered under the auspices of Workday and Forum Europe to discuss the governance of AI and the development of a global ecosystem of trust.

Panelists first discussed Europe’s attempts to set up an AI regulatory framework and the risks and challenges this entails, as well as opportunities for cooperation with third countries and multilateral fora.

The need for a pan-European AI framework

The key theme on which the conference found consensus is the need for a European regulatory framework that would allow the safe use of AI. At the EU level, emphasis has been placed on the desirability of a harmonized approach to AI regulation and governance, especially as regards the upcoming Digital Services Act (DSA). The Commission is relying on a pyramid-like categorisation of AI applications, covering both B2B and B2C uses, whereby high-risk applications sit at the top and the bulk consists of low-risk B2B cases that do not require additional regulation. It was also highlighted that EU standards on public-sector AI procurement would be needed, based on the seven requirements set out in the Ethics Guidelines for Trustworthy Artificial Intelligence presented by the High-Level Expert Group on AI. Among these, transparency and human oversight emerged as particularly crucial for this panel.

Jim Shaughnessy from Workday highlighted that the company’s AI whitepaper aligns with the European Commission’s approach, proposing a ‘Trustworthy by Design Regulatory Framework’ based on transparency, governance, accountability and enforcement. Given the growing pervasiveness of AI technology, he continued, a coherent and trustworthy framework for AI developers would prevent fragmentation and ensure trust. MEP Anna-Michelle Asimakopoulou added that such a framework should be human-centric and based on European values. The objective of regulation should be “trustworthiness rather than mere trust”, claimed Andrea Renda from the Centre for European Policy Studies. Rather than regulating every kind of AI, he added, the EU should focus on AI oriented towards the common good.

The risk of over-regulation

While it is widely recognized that regulation is needed, the risk of over-regulation remains a pressing concern. Indeed, the speakers underlined that some aspects of AI are already regulated by existing legislation. Privacy, for instance, is comprehensively dealt with in the GDPR, which also defines ‘risk’. Similarly, several sector-specific regulations already address transparency. Cecilia Bonefeld-Dahl from DigitalEurope argued that while the EU should not shy away from high-risk AI, it should be careful not to over-regulate. On this matter, the European Commission’s representative Kim Jørgensen reassured the audience that regulation and innovation are not contradictory, but that to enable the latter, consumers need to trust AI.

Enforcement

Future AI regulation – unlike traditional standard-setting – will require enforcement instruments and mechanisms that are flexible enough to address the evolving nature of algorithms. For instance, high-risk AI applications will arguably need continuous regulatory revision. That would require an expert-led, multi-stakeholder body constantly updating the understanding of AI-related risks and practices. This would be easier at the EU level, but more challenging multilaterally. Overall, effective enforcement will require close cooperation between the public and private sectors.

Cooperation with third countries

A final theme that ran through the conference was the opportunity for cooperation with third countries. Specifically, MEP Anna-Michelle Asimakopoulou stated that like-minded countries need to agree on standards and answer critical questions together. The US elections have created new room for transatlantic cooperation, and it is therefore fundamental for the EU to act now. Reaching an international alliance, noted Renda, is as desirable as it is complicated. Interpretations of risk and legal understandings differ, with the EU significantly more focused on ex ante regulation than the US.

Discussing AI governance and future transatlantic cooperation, the panel noted that data privacy would be a viable starting point for the EU to build a new shared agenda with the US. At the same time, the risk of regulatory fragmentation across the EU would pose challenges to smooth transatlantic cooperation. However, the Commission is confident that the ongoing EU digital transformation under von der Leyen will ensure enough coherence to engage productively with Washington. The goal would be a GDPR-like ‘Brussels effect’, whereby the world would look to the EU as a model when it comes to shaping the digital future and designing the relevant regulation.

Finally, at the multilateral level, a number of existing institutional frameworks were discussed as potential grounds for future AI governance, including the World Trade Organization and, as far as security and defense applications are concerned, NATO. The common element across these fora, panelists agreed, should be a determination not to stifle innovation. To that end, Bonefeld-Dahl noted, existing rules such as laws on discrimination and consumer protection should be effectively enforced.

All in all, the governance of AI is a complex matter that is gaining momentum at the European as well as the global level. The panel showed consensus on the importance of ensuring a trustworthy framework for European innovation, although opinions on the likelihood and shape of transatlantic cooperation differ. The coming weeks will be key for the creation of new cooperation opportunities with the US administration, but will also be the stage for new EU legislative proposals.

Leopoldo focuses on a wide number of topics across the UK and Europe, analysing policies related to content regulation, digital trade, data privacy and telecommunications. Prior to joining Access Partnership, Leopoldo worked at the Italian Embassies in London and Prague and at the European Commission Representation to Italy.
