AI and Europe


The EU is working on AI, at last! The first draft from the High-Level Expert Group on Artificial Intelligence (HLEG) is available. Their work focuses on ethics and general considerations about which kind of AI the EU should pursue. Here is a summary of their work.

First draft

In December 2018, the HLEG released a first draft on ethics in AI. Their main idea is that AI needs to be trustworthy to be accepted by citizens. The goal is to have “quality AIs” rather than uncontrolled systems.

In their first draft, they discussed possible ways to design trustworthy AI. The main principles they drew out are:

  • respect for human autonomy
  • prevention of harm
  • fairness
  • explicability

The first principle – respect for human autonomy – covers the idea that AI should not force or manipulate humans into doing something against their will or beyond their understanding, nor steal their right to choose and, of course, their jobs. For my part, I totally agree with this principle, but how realistic is it? For instance, will we really be able to design an AI that could do a job better than a human, yet use it only as a decision helper?

Prevention of harm is a bit more than the EU version of Asimov’s laws. It also means making AI robust, resistant to bugs and attacks. It further means avoiding the amplification of “asymmetric situations” such as employers/employees or governments/citizens. I think this last point is a major one, as complicated tools such as AI carry a strong risk of strengthening such imbalances. Education here is a key solution. But an open question remains: will players in a dominant position let things go that way? I would love them to…

Fairness is about avoiding bias in AI design and application. It covers designing models that take into account various ethnic, gender, political, and other backgrounds, as well as ensuring equal access to services and goods. In my opinion, the question of access to identical services and goods is straightforward and easy to manage. Removing unwanted bias from models will be another challenge…

Explicability is, by far, the most technical question. The idea behind it is that the decisions and results given by an AI must be understandable by humans (at least by experts in the field). In general, I don’t see any field where people would not love to have explicable results. It remains a tough challenge, especially in fields like healthcare, to achieve a reasonable level of explicability. I’m not sure we will have AI that is both efficient and understandable any time soon…

Stakeholder consultation

It has become common practice to consult stakeholders. It is also the spirit of the EU, or at least it is supposed to be. The HLEG followed that path. There were 506 contributions, which can be found here.

My own contribution can be found in this document, and I will reproduce it here soon.

What is coming?

In April 2019, the HLEG released an updated version of the draft on ethics. They are now inviting stakeholders to test the list of recommendations and give feedback. This feedback will allow them to evaluate the recommendations and revise them by 2020. This is only the beginning of the process…

Personal Thoughts

In general, I totally agree with the principles and main ideas of the HLEG. As a citizen as well as a stakeholder in AI, I prefer an AI with acceptable design constraints but far better respect for human beings. My first reason is moral: I want to create tools that help, not harm or make things worse for those who already struggle to make their way. The second is practical: I doubt citizens will adopt AI technology if they don’t trust it. I have had the opportunity to discuss with many non-stakeholders with various levels of knowledge about AI. As with GMOs, most of them would instinctively reject AI.

I do, however, have a concern about the EU’s speed in dealing with AI. I followed the French initiative on AI and now the EU one. Both are trying to build strong foundations, but both are slow. AI is a fast-paced field, and European countries were late to take an interest in it. In France, we have a tale about a tortoise beating a hare in a race because the hare was too confident and rested while the tortoise kept running continuously, but I don’t see the hares in the AI field resting on their first successes…
