We take a look at some of the suggestions outlined in the White Paper on AI and address the need to define what precisely constitutes high-risk AI, and how regulation can empower, rather than inhibit, innovation

January 25, 2021

5 Min Read

The events of 2020 made it feel like many aspects of life had ground to a halt, and so it was comforting to witness artificial intelligence’s stubborn refusal to conform, with innovation in the field continuing apace.

As a result of the technology’s rapid advancement, the European Commission released its White Paper on AI in February 2020.

In it, the Commission proposed a regulatory framework in which the fundamental values of the EU, including human dignity and privacy, are protected, and outlined the importance of building an ‘ecosystem of trust’ in Europe with a ‘human-centric approach’.

This is a necessity if society is to reap the rewards promised by the evolution of AI. However, the breadth of AI poses a distinct challenge when structuring regulation, with a difficult balancing act of preserving fundamental values without unduly curtailing the freedom to innovate or reducing European competitiveness in the field.

Below, we look at some of the suggestions outlined in the White Paper, address the need to define precisely what constitutes high-risk AI, and consider how regulation can empower, rather than inhibit, innovation.

Defining complexity

When defining regulation for AI, it’s vital to understand that it’s not a single entity that can be painted with a broad brush. AI encompasses many kinds of systems and a wide range of technologies, from Edge-based AI (where processing doesn’t leave the device) to Cloud-based AI, and different deployments mean different things and pose different levels of risk.

While AI itself refuses to be pigeon-holed, there is broad agreement that ‘bad’ AI can cause real harm: loss of privacy, discrimination, and the removal of important decisions from citizens’ hands, reducing the control they have over their own lives. Visions of ‘The Terminator’ may spring to mind, but real life is more mundane and more pernicious: consider a biased recruitment programme that leads to discriminatory hiring.

As a result, more common ground is found in the idea that making intelligent decisions about AI requires a clear set of ethics. This is where the White Paper on AI – and existing regulation – comes in, outlining the European Commission’s desire to ensure that AI remains a force for good.

What is high-risk AI?

Existing European legislation applies to all businesses, citizens and authorities in Europe, and drives behaviour that meets the EU’s fundamental values. The White Paper, meanwhile, is a consultation document investigating how Europe could be a leader in AI without jeopardising those values.

The goal is to deliver trustworthy AI with a human-centric approach – achieved through ethics, respect for human autonomy, prevention of harm, fairness and explicability of AI, among other compliance guidelines.

In order to meet these standards, the White Paper promotes a risk-based approach to AI, with a focus on what constitutes ‘high-risk AI’. This, however, is difficult to define. The definition provided considers what is at stake (could the AI feasibly impact a person’s health, for example), as well as the sector and intended use of the AI in question.

Two cumulative criteria will be used to establish what constitutes high risk. The first: will the AI be deployed in a sector where significant risks are likely to occur – transport or healthcare, for example? The second: will the AI be used in such a manner that significant risks are expected to arise? This second criterion recognises that not every AI deployed in the transport or healthcare industry, for instance, will place people at risk.

This is an excellent start, though the inclusion of a third element may ensure better regulation. This third criterion could be whether the identified risk can be mitigated, either through, say, Edge deployments, where privacy and security risks are minimised, or via sectoral regulation, such as in automotive, where there is already a heavy compliance burden and an established liability regime.
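To make the cumulative logic concrete – purely as an illustrative sketch, not anything the White Paper itself prescribes – the two criteria plus the proposed third could be expressed as a simple classification rule. The sector list, field names and example deployment below are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical list of risk-prone sectors (the White Paper names
# healthcare and transport as examples of where risks may occur).
RISKY_SECTORS = {"healthcare", "transport"}

@dataclass
class AIDeployment:
    sector: str                    # where the AI is deployed
    significant_risk_in_use: bool  # criterion 2: does this particular use pose significant risk?
    risk_mitigated: bool           # proposed criterion 3: e.g. an Edge deployment,
                                   # or an existing sectoral liability regime

def is_high_risk(d: AIDeployment) -> bool:
    """Cumulative test: both White Paper criteria must hold, and – under the
    proposed third criterion – the risk must not already be mitigated."""
    in_risky_sector = d.sector in RISKY_SECTORS  # criterion 1
    return in_risky_sector and d.significant_risk_in_use and not d.risk_mitigated

# A hospital appointment-scheduling AI: risky sector, but a low-risk use.
print(is_high_risk(AIDeployment("healthcare",
                                significant_risk_in_use=False,
                                risk_mitigated=False)))  # False
```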

The White Paper also mentions that some areas will be automatically classed as high risk, such as the remote biometric identification of multiple people in public spaces. But caution is needed with this broad-brush approach: even this deployment is nuanced, and careful definition is required of, for example, what counts as a public space, how the data is processed, and what is done with the identification data.

Building a suitable framework

The White Paper goes on to propose a mandatory framework for high-risk AI. Its requirements include ensuring the AI’s robustness and accuracy throughout its lifecycle, maintaining adequate records, and putting human oversight in place to prevent AI from undermining human autonomy.

While many of these guidelines are vitally important in ensuring trustworthy AI, others may well have a negative impact on its continued development.

For instance, the framework states that AI training data must respect the EU’s fundamental values and must not be biased (which is fine), and suggests it could come from common data sets that are broad-based and cover all scenarios. When striving for innovation, however, common data sets may prove less effective than the private data sets companies typically pay for.

Also, while it seems completely reasonable to ask that records be maintained, including of the programming and the data used to train the AI, this could come with a price tag that would disadvantage EU companies in export markets.

With this in mind, it’s clear that the framework’s risk-based approach is welcome, but that further refinement is needed to ensure continued innovation.

An excellent start would be to consider whether the technology being used, such as an Edge-based deployment, is a mitigating factor when determining what constitutes high-risk AI – and, conversely, whether continuous deep learning AI, where machines are left to develop their own solutions, would make an otherwise borderline system high risk.

Put simply: there’s nuance and flexibility needed in the drafting to avoid stifling innovation, reducing EU competitiveness and limiting the true potential of this incredible technology.

John Patterson is VP and Associate General Counsel at Xperi Corporation. He is responsible for Xperi’s legal activities in EMEA. Prior to Xperi, John served in various international in-house legal roles, including at Hewlett Packard and TiVo. He is currently a member of the CLEPA Legal working group and is a solicitor qualified in the UK.

