
God, the Oracle, and the Nightclub Bouncer: Can human dignity be modelled in an AI-based decision support system for post-Covid health certification?

2nd November 2020 - 1st March 2021
Project team

Principal Investigator

Associate Professor in Law, University of Nottingham


Co-Investigator

Associate Professor in Data Analytics, Middlesex University


Co-Investigator

Research Associate in Health Informatics, King’s College London


Dr Carolina Fuentes

Co-Investigator

Lecturer, Cardiff University


Co-Investigator 

Postdoctoral Research Associate, University of Exeter


Dr Robin Renwick

Supporting Partner

Research Analyst, Trilateral Research Ltd.

Summary

This project explores how human dignity may be impacted by an AI-based decision support system for post-Covid health certification.

 

Drawing from law and moral philosophy, we relate the definition and substantive content of human dignity to two aspects: (1) recognition of the status of human beings as agents with autonomy and rational capacity to exercise judgement, reasoning, and choice; and (2) respectful treatment of human agents so that their capacity is not diminished or lost through interaction with or use of the technology.

 

We identify components, sub-components, and related concepts of human dignity to translate into algorithms. These algorithms are then used to design an agent-based behavioural simulation model of the health certification process and the AI-based decision support system. In a closed, computer-based environment, the simulation model runs scenarios that indicate the undermining or loss of human dignity (e.g. coercion, manipulation, deception, loss of autonomy).

 

Part of the challenge is to see whether human dignity can be represented as algorithms that determine behavioural changes, which can then serve as a proxy for understanding the impact of an AI-based decision support system.
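To make the idea concrete, one could model each dignity component as a per-agent quantity that scenario events degrade. The following is a minimal sketch under stated assumptions, not the project's actual model: the component names follow the summary above, while the numeric scoring, the event mechanism, and the threshold are invented for illustration.

```python
from dataclasses import dataclass, field

# Dignity components drawn from the project summary; the 0.0-1.0 scoring
# and the 0.5 threshold below are illustrative assumptions only.
DIGNITY_COMPONENTS = ("autonomy", "rational_capacity", "choice")

@dataclass
class Agent:
    name: str
    # Each component starts intact (1.0); scenario events push it toward 0.0.
    dignity: dict = field(
        default_factory=lambda: {c: 1.0 for c in DIGNITY_COMPONENTS}
    )

    def apply_event(self, component: str, severity: float) -> None:
        """Degrade one dignity component by a severity in [0, 1]."""
        self.dignity[component] = max(0.0, self.dignity[component] - severity)

    def dignity_undermined(self, threshold: float = 0.5) -> bool:
        """Proxy signal: any component has fallen below the threshold."""
        return any(v < threshold for v in self.dignity.values())

agent = Agent("individual")
agent.apply_event("autonomy", 0.6)   # e.g. a coercive interaction
print(agent.dignity_undermined())    # True
```

Behavioural change in the simulation could then be conditioned on this proxy signal, which is what lets dignity loss be observed rather than merely asserted.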

Key Findings

The project has identified several legal-philosophical components that constitute human dignity, foremost the status of human beings as autonomous agents with rational capacity. Respectful treatment that does not diminish these capacities, with decisions made in a way that puts the person's interests first, would constitute a system that treats human dignity fairly.

The team has developed an algorithmic design of a human dignity-aware decision support system (DSS), together with pre-conditions simulation for a human dignity-aware DSS, based on agent-based modelling.
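One way such pre-conditions might be expressed algorithmically is as a set of checks the DSS evaluates before acting on a decision about an individual. The condition names and the interaction record below are hypothetical illustrations, not the project's actual pre-conditions.

```python
# Hypothetical dignity pre-condition checks for a DSS interaction.
# Each check maps a named condition to whether the interaction satisfies it.

def violated_preconditions(interaction: dict) -> list:
    """Return the names of violated pre-conditions (empty list if none)."""
    checks = {
        "informed_consent": interaction.get("consent_given", False),
        "alternative_available": interaction.get("non_digital_route", False),
        "explanation_provided": interaction.get("decision_explained", False),
        "appeal_possible": interaction.get("appeal_route", False),
    }
    return [name for name, ok in checks.items() if not ok]

violations = violated_preconditions(
    {"consent_given": True, "appeal_route": True}
)
print(violations)  # ['alternative_available', 'explanation_provided']
```

A dignity-aware DSS could refuse to issue a decision, or flag it for human review, whenever this list is non-empty.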

Three use case scenarios were developed to represent real-life contexts in which an individual interacts with a human dignity-aware DSS to access a vaccine and obtain a vaccine credential. These scenarios represented:

  • Different options which may be available to an individual, and how an individual may act in relation to their interactions with the DSS and other agents.

  • The role of the DSS, the health certification authority, and the service provider.

  • Human dignity components which may be impacted when an individual interacts with the DSS.

  • Factors which may affect an individual’s interaction with the DSS (e.g., time, technical ability, human factors, AI).
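A single step of such a scenario can be sketched in agent-based terms: an individual selects among available routes to a credential, with a factor like technical ability shaping the choice. The option names and the choice rule below are illustrative assumptions, not the project's scenario definitions.

```python
import random

# Illustrative sketch of one scenario step: an individual chooses a route
# to a vaccine credential. Low technical ability (an assumed human factor)
# steers the agent away from app-only routes.

def choose_option(options, technical_ability, rng):
    """Pick a route; agents with low technical ability avoid app-only routes."""
    if technical_ability < 0.3:
        non_digital = [o for o in options if not o.endswith("_app")]
        if non_digital:
            options = non_digital
    return rng.choice(options)

rng = random.Random(0)  # seeded for a reproducible run
options = ["credential_app", "paper_certificate", "assisted_kiosk"]
picked = choose_option(options, technical_ability=0.2, rng=rng)
print(picked in {"paper_certificate", "assisted_kiosk"})  # True
```

Running many such steps across the three scenarios, with the DSS, health certification authority, and service provider as further agents, is what allows population-level behavioural effects to be observed.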

Impact

This project is the first attempt at designing and developing a human dignity-aware AI system that draws on source expertise in human dignity from law and moral philosophy and combines it with AI expertise in algorithm mapping and design and in simulation modelling. It creates a “human dignity-aware AI design” method that connects source expert knowledge on human dignity with guidelines to inform the development of future human dignity-aware AI systems.

 

By representing the AI system as part of a process in which it may be operating autonomously or semi-autonomously, alongside other agents, and where there may be intervening acts by other agents, an evaluation can be made as to:

(i) whether the system directly or indirectly impacts on an individual’s human dignity;

(ii) the causation of impact; and

(iii) legal responsibility and liability for harm, damage, or loss.
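Such an evaluation could, for example, be supported by an event log over a simulation run, from which one can read off which agent's act produced a dignity impact and whether the DSS caused it directly or an intervening act by another agent did. The event fields and the attribution rule below are illustrative assumptions.

```python
# Hypothetical event log from one simulation run. Each event records the
# acting agent, its action, and any dignity component it impacted.
events = [
    {"actor": "DSS", "action": "deny_entry", "impact": None},
    {"actor": "service_provider", "action": "refuse_service",
     "impact": "autonomy"},
]

def attribute_impact(log):
    """Return (impacted component, responsible actor, direct?) for the
    first dignity impact in the log, or None if there was no impact."""
    for event in log:
        if event["impact"]:
            direct = event["actor"] == "DSS"
            return event["impact"], event["actor"], direct
    return None

print(attribute_impact(events))  # ('autonomy', 'service_provider', False)
```

Here the impact on autonomy traces to the service provider's intervening act rather than to the DSS itself, which is the kind of distinction points (i)-(iii) require.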

This project is innovative because human dignity has not previously been explored as an indicator of whether a technology is ethically designed, developed, and deployed. Nor has human dignity been used to understand behavioural change induced by deployed technology.

 

If an AI-based decision support system is used for health status certification to determine the extent to which a person can enter public spaces and premises and access resources or services, then there is potential for that person’s human dignity to be undermined or lost.

Next Steps

The team will collate, evaluate, and write up the research findings for a high-impact peer-reviewed journal. The team also aims to seek further funding to build on the existing research and extend it to other domains.
