“Ethics Guidelines for Trustworthy AI”: To Promote Human Dignity, Agency and Flourishing

On 18 December the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) published a draft version of the ‘Ethics Guidelines for Trustworthy AI’. They invited citizens, the general public as well as experts, to review this draft and to provide feedback via the European AI Alliance.

Picture by Ben Smith, “065/365: Show us your smile!”: https://flic.kr/p/5dkLUi (CC BY-SA 2.0)

First, I would like to congratulate the AI HLEG on this document. It’s clear, it’s accessible, it’s thorough, and it’s practical. Let me sum up all the things I find brilliant:

They use ‘Trustworthy’ as an overarching term. I think this is brilliant. No matter how you conceptualize AI–as ‘general AI’ or ‘narrow AI’, as ‘AI in autonomous systems’ or ‘AI as a tool to advance agency of humans’–we can all relate to the need for AI that is worthy of our trust. You want trustworthy AI similar to how you want a trustworthy car, a trustworthy drilling machine, a trustworthy babysitter, or a trustworthy partner.

They explain the relationships between rights, principles and values. Rights are relatively abstract and provide the “bedrock” for formulating ethical principles. In order to uphold these principles, we need values, which are more practical. Moreover, we need to translate rights, principles and values into requirements for developing AI systems. Putting rights, principles and values into these relationships provides clarity, which is sorely needed for a constructive discussion of ethics. They discuss the following rights, principles, values and requirements:

  • Rights: respect for human dignity, freedom of the individual, respect for democracy, justice and the rule of law, equality, non-discrimination and solidarity, including the rights of people in minorities, and citizen rights (based on: Charter of Fundamental Rights of the EU). 
  • Principles: Beneficence (Do good); Non-maleficence (Do no harm); Autonomy (Preserve human agency); Justice (Be fair); and Explicability (Operate transparently) (from: AI4People—An Ethical Framework for a Good AI Society); the last one, Explicability, is relatively new and specific to AI. 
  • Values: The Guidelines “do not aim to provide yet another list of core values”–since there are many useful lists available, like the lists from Asilomar, Montreal, IEEE and EGE (these lists are reviewed in: AI4People—An Ethical Framework for a Good AI Society).
  • Requirements: accountability; data governance; design for all; governance of AI autonomy (human oversight); non-discrimination; respect for (and enhancement of) human autonomy; respect for privacy; robustness; safety; and transparency. 

They structure their guidance in three parts, from the abstract to the practical: Guidance for ensuring ethical purpose; Guidance for realizing trustworthy AI; and Guidance for assessing trustworthy AI. Such a structure is very useful, and much needed, during the design process (purpose), implementation process (realizing) and evaluation process (assessing). We need to move from abstract to practical, and back, in an iterative fashion.

I think this is brilliant: to introduce ‘Trustworthy’ as an overarching term; to explain the relationships between rights, principles, values and requirements; and to provide guidance for iterating between design, implementation and evaluation.


Now, following the AI HLEG’s invitation to provide feedback, here are some concerns and thoughts for further improving these guidelines: 

Concern for Human Dignity

The AI HLEG asked for feedback on “Critical concerns raised by AI” (pp. 10-13). I would like to propose to add one concern: a concern for human dignity.

What do I mean by that? Well, you are familiar with the Turing Test. It aims to evaluate whether a computer can give a performance that we recognize as human-like intelligence so that we cannot distinguish it from a human. In a Turing Test the computer’s aim is to behave like an intelligent person.

Now imagine a Reverse Turing Test. In such a test you, as a human being, aim to adapt to the computer and its algorithms. You fix your eyes on your mobile phone’s screen and you mindlessly click ‘okay’, ‘view next’, ‘buy’–you do whatever the algorithm tells you to do. In a Reverse Turing Test your aim is to behave like a machine.

This concern is related to other concerns discussed by the AI HLEG: ‘Identification without Consent’ (when you mindlessly click ‘yes, I accept terms and conditions’), ‘covert AI systems’ (when a system treats you in a mechanical manner, with machine logic), and ‘Normative and Mass Citizen Scoring’ (when a system gathers all sorts of personal data and uses these for all sorts of purposes, in non-transparent ways).

Implementing too many AI systems, in too many spheres of life, and using these too much, is a threat to human dignity.

This concern was discussed, e.g., by Brett Frischmann and Evan Selinger (Re-engineering Humanity, 2018: 175-183; I took the idea of a Reverse Turing Test from them), by Sherry Turkle, who reminded us of the value of genuine human contact, both intrapersonal and interpersonal (Reclaiming Conversation, 2015), and by John Havens (Heartificial Intelligence, 2016), who advocated “embracing our humanity to maximize machines”: to design and use machines in ways that preserve and support human dignity.

Putting Human Agency First

Furthermore, I would like to propose an improvement and clarification in the formulation of two of the ‘Requirements of Trustworthy AI’ (pp. 13-18).

The AI HLEG discusses “Governance of AI Autonomy (Human oversight)” and “Respect for (& Enhancement of) Human Autonomy”. My proposal is to merge these requirements into one requirement, under the heading of, e.g., “Appropriate Allocation of Agency”, or: “Putting Human Agency First”.

Both requirements (“Governance of AI Autonomy” and “Respect for Human Autonomy”) are about distributing agency between people and an AI system. Put simplistically:

  • Moral agency resides in people, not in machines;
  • there are only 100 agency-percent-points to share (as it were);
  • and you can delegate some agency-points to a machine;
  • but then you will lose these (like in a zero-sum game).

The agency of humans and the agency of an AI system are on one and the same axis: on one side of this axis people have 90% of the autonomy and the AI system 10%; on the other side the AI system has 90% of the autonomy and people 10%. The choice is ours — and we will need to decide carefully, taking into account the various pros and cons of delegating agency to machines.

Merging these two requirements about autonomy is intended to clarify that human agency diminishes when we delegate agency to machines.

Underlying this intention is the belief that technology must never replace people or corrode human dignity. Rather, we need to put human agency first, and use technologies as tools. Here it needs to be acknowledged that tools are never neutral; the usage of any tool shapes the human experience and indeed the human condition (https://ppverbeek.wordpress.com/mediation-theory/) — this requires careful decision making, e.g., in the ways in which an AI-tool gathers data, presents or visualizes conclusions, provides suggestions, etc.

This idea is at the heart of the Capability Approach, which views technologies as tools to extend human capabilities: to create a just society in which people can flourish (see: Organizing Design-for-Wellbeing projects: Using the Capability Approach; copy for personal, academic use). This idea is also expressed in the “Statement on Artificial Intelligence, Robotics, and ‘Autonomous’ Systems” of the European Group on Ethics in Science and New Technologies, in which ‘Autonomous’ has quotation marks to indicate that a system cannot have moral autonomy. Finally, the principle of “an appropriate allocation of function between users and technology” is explicitly mentioned as a principle in the ISO 13407:1999 standard for Human-centred design processes for interactive systems (the updated ISO 9241-210:2010 standard puts this less explicitly).

Virtue Ethics for Human Flourishing

Moreover, the AI HLEG invites suggestions for technical or non-technical methods to achieve Trustworthy AI (p. 22). In line with the suggestions above (a concern for human dignity; and putting human agency first), I’d like to propose to add virtue ethics to the mix of non-technical methods.

In her book “Technology and the Virtues” (2016), Shannon Vallor advocated developing and using technologies in ways that promote human flourishing. She views technologies as tools that can help — or hinder — people to cultivate specific virtues. She argues that we need to cultivate specific technomoral virtues to guide the development and the usage of technologies, so that we can create societies in which people can flourish in the 21st century.

Please note that each society, in each specific era and area, needs to make its own list of the virtues that are needed for that society. The virtues that Aristotle proposed were for the citizens of ancient Athens. The virtues of Thomas Aquinas were for medieval Catholics. Vallor proposed the following virtues for our current global, technosocial context (op.cit.: 118–155):

Honesty (Respecting Truth), Self-control (Becoming the Author of Our Desires), Humility (Knowing What We Do Not Know), Justice (Upholding Rightness), Courage (Intelligent Fear and Hope), Empathy (Compassionate Concern for Others), Care (Loving Service to Others), Civility (Making Common Cause), Flexibility (Skillful Adaptation to Change), Perspective (Holding on to the Moral Whole), and Magnanimity (Moral Leadership and Nobility of Spirit).

(Shannon Vallor, Technology and the Virtues, 2016: pp 118-155)

Vallor argued that virtue ethics is an especially useful approach for discussing the development and usage of emerging technologies (op.cit.: 17–34): technologies that are under development and not yet crystallized. AI is an example of an emerging technology. Emerging technologies entail what Vallor calls “technosocial opacity” (op.cit.: 1–13); their usage, integration into practices, effects on stakeholders, and place in society are not yet clear. She argues that other well-known ethical traditions, like deontology or consequentialism, can have limitations when used for the development and usage of emerging technologies. In deontology, one aims to find general rules and duties that are universally applicable. In consequentialism, one aims to maximize positive effects and minimize negative effects for all stakeholders. For an emerging technology like AI, however, it is hard to find general rules and duties, or to calculate all possible effects for all stakeholders (op.cit.: 7–8).

Take, for example, autonomous cars — with lots of AI in them, and in the infrastructure around the cars. Yes, there are some cars driving around with some level of autonomy. But they are not fully autonomous and they are not widely used. Therefore we cannot yet have a good-enough understanding of the ways in which people use autonomous cars and of their place in society.

Autonomous cars may, e.g., incentivize people to make longer commutes: to travel 4 hours in the early morning (while sleeping behind the wheel) and travel 4 hours in the evening (while watching videos). This could disrupt family lives, corrode leisure time, social interactions and the social fabric of society, and have huge negative impacts on the environment — and on traffic congestion.

For such a case, it would be hard to know exactly which duties are involved or which general rules apply. Or it would be hard to anticipate and calculate all the positive and negative consequences for all stakeholders involved. A virtue ethics approach, however, would be useful here: to identify the virtues that are relevant in this specific case (to create a society in which people can flourish), and to provide recommendations to cultivate these virtues, including processes of self-examination and self-direction (op.cit.: 61–117).

Rather than putting different approaches in opposition to each other, to disqualify one, or to favour one at the expense of another, I’d like to propose to create a productive combination: to use deontology where and when we have clarity about general rules and duties; to use consequentialism where and when we are able to calculate positive and negative consequences; to use virtue ethics where we ask questions about what kind of society we want to create and how technology can support people’s flourishing.


It is my hope that these three suggestions–a concern for human dignity; putting human agency first; and applying virtue ethics–can help to further develop these Ethics Guidelines for Trustworthy AI.

Marc Steen (marcsteen.nl; marc.steen@tno.nl)


VWData in the news

Three key project team members (of the VWData project “Responsible Collection and Analysis of Personal Data for Justice and Security”) were recently interviewed for “JenV Data Magazine (1) 2018” of the Ministry of Justice and Security:

In addition, two op-eds were published in Dutch newspapers in June, in which Marc Steen (TNO) advocates making algorithms more transparent (the focus of the project) and taking a careful look at the role of Artificial Intelligence.

If you are interested in what we are doing, please feel free to contact: marc.steen@tno.nl.

 

Transparency of algorithms

In January 2018 we kicked off the VWData research programme, with Inald Lagendijk as coordinator: a research programme that brings together academia, government and industry, and that aims to develop technical and societal solutions for using big data and algorithms responsibly (VWData Flyer).

Transparency of algorithms in the context of justice and security

Ibo van de Poel and Paul Hayes of Delft University of Technology, Remco Boersma of the Dutch Ministry of Justice and Security, and I (Marc Steen, TNO) work on the project “Responsible Collection and Analysis of Personal Data for Justice and Security”. We focus on the usage of big data and algorithms in the context of justice and security, e.g., by judges and by police officers, which raises a range of questions about ethics and justice, e.g., about discrimination against specific groups of people.

Our objective is to make the usage of algorithms in the context of justice and security more transparent, so that their fairness, accuracy and confidentiality can be evaluated.

Clearly, one cannot maximize transparency in justice and security. Rather, transparency will need to be optimized; transparency will need to be balanced with security. The Ministry needs to be open and transparent ‘where possible’ and to provide security and safety ‘where needed’ (Informatiestrategie 2017-2022, pp 17, 23-24; and Informatieplan 2017, pp 15-19).


We will combine conceptual and practical research:

  • Conceptual: We will clarify what we mean by transparency, vis-à-vis other values, most notably fairness, accuracy, confidentiality, security and safety, and in terms of accountability, i.e. the ability to provide satisfactory accounts to diverse stakeholders, e.g., courts of justice, police officers and their managers, journalists and citizens;
  • Practical: We will conduct one case study, in close collaboration with the Ministry of Justice and Security’s ‘Living Lab Big Data’, and deliver a set of scenarios for optimizing transparency (the topic will be defined by the Ministry). This case study will also take into account the Ministry’s current data handling processes and policies.

Auditing algorithms for fairness, accuracy and confidentiality

In parallel, we will also be working on the development of a standard process for auditing algorithms (to ‘open the black box’); this process would help: 1) to decide which algorithms should be audited; and 2) to assess an algorithm’s fairness, accuracy and confidentiality. Sander Klous and Remko Helms (and others) will also be involved in this work.

Currently, many algorithms function like ‘black boxes’. They give answers but no explanations. This is bad news if you are refused a mortgage (‘algorithm says no’) or if the police arrest you (‘algorithm says yes’).

We foresee that it will be necessary, within the next two years, to audit algorithms, i.e. to assess an algorithm’s fairness, accuracy and confidentiality (or perhaps other terms, e.g., reliability or explainability) against a well-defined standard. The results of such an audit can help in various ways: 1) consumers/citizens can assess the algorithm’s fairness, accuracy and confidentiality, similar to how they can assess organic meat or fair-trade bananas; and 2) service providers, both public and private, can position their offer as ‘fair’ or ‘accurate’ or ‘confidential’.
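To give a flavour of what one step of such an assessment could look like in practice, here is a minimal, hypothetical sketch in Python: it computes an algorithm’s accuracy and one simple group-fairness measure, the difference in positive-decision rates between two groups (often called demographic parity). The data, the group labels and the function name are purely illustrative; an actual audit standard would of course be far richer.

```python
import numpy as np

def audit_decisions(y_true, y_pred, group):
    """Return accuracy and the demographic parity difference between two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accuracy = float(np.mean(y_true == y_pred))
    # Positive-decision rate per group (e.g., share of 'mortgage granted')
    rate_a = float(y_pred[group == "A"].mean())
    rate_b = float(y_pred[group == "B"].mean())
    return accuracy, abs(rate_a - rate_b)

# Toy data: true outcomes, the algorithm's decisions, and a group label per person
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

accuracy, parity_diff = audit_decisions(y_true, y_pred, group)
print(f"accuracy: {accuracy:.2f}, demographic parity difference: {parity_diff:.2f}")
# A large parity difference would be one reason to 'open the black box' further.
```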

We are aware of other initiatives, e.g., “This logo is like an ‘organic’ sticker for algorithms”.

Responsible Data Innovation

It has become clear that Big Data is not only an enabler of radical changes in technology and business, but also a source of radical changes in society and in people’s daily lives. And, as with many emerging technologies, Big Data offers opportunities, as well as challenges. This is the case, e.g., for Predictive Policing, Quantified Self and all sorts of other Big Data applications and services.


Pitching ‘Responsible Data Innovation’ in two minutes

Many discussions of Big Data depart from a legal perspective and address, e.g., what is legally permitted. As a complement, we will explore, in this blog, ‘Ethics in Big Data’, i.e. the various ethical issues at play in developing and deploying Big Data applications.

Let us illustrate what we mean by ‘Ethics in Big Data’ by giving some examples of questions and issues that can arise during the development and deployment of (Big) Data applications–issues that can impact society and can raise ethical questions:

  • Data Selection and Collection: the selection of sources to be included (or excluded), the ways in which missing data points are dealt with–and the ways in which this can, unintentionally, discriminate against certain (‘minority’) groups (see the sketch after this list)
  • Data Processing and Modelling: the usage of (implicit) assumptions, prior knowledge or (existing) categories to interpret or label data–which can, often unintentionally, propagate existing biases or unfairness
  • Data Presentation and Action: including, e.g., (unintentional) ‘framing’, and suggestions towards specific interpretations and actions–which can lead to questions about agency: who is in charge, the people or the data?
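To make the first of these bullets concrete, here is a small, hypothetical sketch (in Python, using pandas): a seemingly neutral cleaning step, dropping records with missing values, shrinks the share of a minority group in the data, simply because their data happens to be incomplete more often. All names and numbers are made up.

```python
import pandas as pd

# Ten fictional records; income happens to be missing more often for the minority group
records = pd.DataFrame({
    "group":  ["majority"] * 8 + ["minority"] * 2,
    "income": [30, 32, 28, 35, 31, 29, 33, 30, None, 27],
})

share_before = records["group"].value_counts(normalize=True)["minority"]
share_after = records.dropna()["group"].value_counts(normalize=True)["minority"]

print(f"share of minority group before cleaning: {share_before:.2f}")  # 0.20
print(f"share of minority group after cleaning:  {share_after:.2f}")   # 0.11
# A model trained on the 'cleaned' data sees the minority group even less often.
```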

Framework and workshop format

In order to enable people in the industry to engage with ethical questions like these, we developed a framework, which also serves as a practical workshop format. The framework consists of three rows (data selection and collection; data processing and modelling; data presentation and action) and three or more columns, with key ethical values–values that are key in a liberal, democratic society:

  • Autonomy and Freedom: people’s capability to form a conception of ‘the good life’ and the practical ability to realize this (‘positive freedom’), and to act without being obstructed by others (‘negative freedom’)
  • Fairness and Equality: the capability for people to be treated fairly or equally, e.g., regarding the distribution of goods and evils between people, and to share the consequences of disadvantageous situations
  • Transparency and Accountability: the capability of people to understand how organizations, both public and private, use their personal data, and the implications of Big Data applications for their personal and social lives.
  • Other values: Please note that this list of (ethical) values can be augmented; we can add other values, depending on the context of the application and the organization, e.g., values like: Privacy, Solidarity, Dignity, Authenticity.

This framework helps people to identify and discuss key ethical questions in a systematic manner, i.e. in the different cells of the table, e.g., questions concerning privacy, representation, agency, interpretation, uncertainty, and algorithmic fairness. Very practically, the framework can function as the basis for a workshop format.
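As a rough illustration of that structure, the sketch below (in Python) represents the framework as a grid of three rows by three columns and generates one discussion prompt per cell; the wording of the prompts is merely indicative of how a facilitator might open the discussion in a workshop.

```python
from itertools import product

stages = ["data selection and collection",
          "data processing and modelling",
          "data presentation and action"]
values = ["autonomy and freedom",
          "fairness and equality",
          "transparency and accountability"]

def workshop_prompts(application):
    """One discussion prompt per cell of the 3x3 framework."""
    return [f"How does {stage} in '{application}' affect {value}?"
            for stage, value in product(stages, values)]

# E.g., for the 'MyNPO' app; a facilitator would pick the 3 or so most relevant cells
for prompt in workshop_prompts("MyNPO")[:3]:
    print(prompt)
```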


Typically, a group of 4-6 people who are involved in the development and deployment of a specific Big Data application is invited to discuss a series of ethical questions (typically 3 out of the 9 cells are most relevant), to explore ways to deal with these questions, and to develop practical solutions. This workshop can be done in 90-120 minutes.

This framework is based on the classical idea of eudaimonia, which refers to people’s flourishing and wellbeing, both on the level of individuals and on the level of society.

We have done this workshop with people from NPO, who were working on ‘MyNPO’, an app that will offer personalized media content, using advanced data analysis of people’s behavior patterns, and with people from the Municipality of Rotterdam, who are exploring ways to analyse data on citizens to forecast future needs for social services. The results of doing the Responsible Data Innovation workshop are the following:

  • Clarity on which ethical issues are at play
  • Suggestions for dealing with these issues
  • Action points for furthering the development

Please contact Dr. Marc Steen of TNO (marc.steen@tno.nl) if you are interested in this framework.