Professors Shannon Vallor (Baillie Gifford Chair in the Ethics of Data and AI, School of Philosophy, Psychology and Language Sciences) and Michael Rovatsos (Personal Chair of Artificial Intelligence, School of Informatics) brought their expertise together in a collaborative project between SHAPE and STEM colleagues, one that produced new and innovative ideas about answerability and responsibility gaps in AI decision-making.

Since AI is always about trying to understand and replicate how humans think, act, and work together, we've always looked to the SHAPE disciplines to understand the conceptual frameworks that they develop. And for our project in particular, the real innovation was that our SHAPE colleagues came up with new ideas for how we can fill responsibility gaps through this concept of answerability. This was something that was genuinely novel in the computing and AI fields.
Professor Michael Rovatsos
Our project built on the idea of answerability: a dialogical approach to moral and legal responsibility for AI that is grounded in the disciplines of philosophy, cognitive science and law. We brought that idea together with a parallel approach to designing AI systems for dialogue, grounded in computer science. The project’s success grew from the constant interplay between the team’s three disciplinary lenses: first, the moral and legal question of what answerability demands of those who build, use and regulate AI systems; second, the social question of what kinds of answers people actually need and expect in order to see AI systems as trustworthy; and finally, what our technical capabilities in AI design can do to help us live up to those demands and expectations.
Professor Shannon Vallor

A problem of responsibility gaps

As computer systems become increasingly involved in high-stakes autonomous operations – independently piloting vehicles, detecting fraudulent banking transactions, reading and diagnosing medical scans – it is vital that we can confidently assess and ensure their trustworthiness. Holding others responsible for their actions and decisions is crucial to building and maintaining trust. But how can responsibility be attributed for actions that AI systems take autonomously, without any human direction? This mismatch is known as a “responsibility gap”, because an AI system does not itself meet the conditions for being held accountable. Responsibility gaps are a growing societal problem as organisations increasingly rely on autonomous AI in everyday decision-making.

Bridging disciplinary gaps to tackle responsibility gaps: innovative collaboration

The Making Systems Answer project team of SHAPE and STEM colleagues came together because they were all, individually, thinking about how organisations can responsibly use AI in decision-making. The philosophical theory of responsibility as answerability had not yet been explored within the AI field, and this was where the team found their opportunity: to work together on an innovative response to an urgent societal problem – new answerability practices that might fill responsibility gaps in AI decision-making.

The team utilised a novel, people-centred approach to develop informed guidance and recommendations for organisations using AI in decision-making processes, as well as those designing and regulating these systems. Through practical recommendations that enhance an organisation’s ability to provide answers for what its systems do, the outputs of the project can improve trustworthiness and accountability in AI-driven decision-making. The project also designed and prototyped a dialogical AI tool that might help to fill in some of the responsibility gaps that have opened up between large, complex organisations and their most vulnerable stakeholders, who are increasingly exposed to harms from AI systems that can act without human direction. 

SHAPE and STEM collaborations lead to innovative, practical interventions

The Making Systems Answer multidisciplinary team used their combined expertise to shine a different light on a problem that the STEM colleagues had thought was well understood. Working across SHAPE and STEM disciplines, they collaboratively developed new interventions for the increasingly urgent and thorny social problem of AI responsibility gaps. Looking forward, the team would like to use their work, which they have turned into a practical handbook for AI answerability in organisations, to drive greater adoption of responsible AI practices in industry and government. They note that it can be a challenge for ideas from SHAPE disciplines to translate into everyday use at scale; but through SHAPE and STEM collaborations, these ideas can be translated not only into actionable insights, but also into better technologies and improved ways of governing them. For the team, this is a key strength of these partnerships, and it is how they envision SHAPE and STEM collaborations achieving even greater real-world impact.

Other team members:  

Dr Nadin Kokciyan (School of Informatics), Dr Nayha Sethi (School of Population Health Sciences), Prof Tillmann Vierkant (School of Philosophy, Psychology and Language Sciences), Dr Dilara Kekulluoglu (School of Informatics), and Dr Louise Hatherall (Usher Institute). 

Project Publications: 

Vallor, Shannon, Hatherall, Louise, Keküllüoğlu, Dilara, Kokciyan, Nadin, Rovatsos, Michael, Sethi, Nayha, & Vierkant, Tillmann (2025). Making Systems Answer: A Practitioner’s Handbook for Trustworthy Autonomous Systems. University of Edinburgh. https://doi.org/10.7488/ERA/MSA-001

Hatherall, Louise and Sethi, Nayha (2024). Regulating for Trustworthy Autonomous Systems: Exploring Stakeholder Perspectives on Answerability. Journal of Law and Society 51: 586–609. https://doi.org/10.1111/jols.12501

Hatherall, Louise and Sethi, Nayha (2025). Exploring Expert and Public Perceptions of Answerability and Trustworthy Autonomous Systems. Journal of Responsible Technology 21, 100106. https://doi.org/10.1016/j.jrt.2025.100106

Keküllüoğlu, Dilara, Hatherall, Louise, Sethi, Nayha, Vierkant, Tillmann, Vallor, Shannon, Kokciyan, Nadin, & Rovatsos, Michael (2023). Answerability by Design in Sociotechnical Systems. 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing, Minneapolis, Minnesota, USA. https://cscw-user-ai-auditing.github.io/media/papers/Answerability_by_Design.pdf

Vallor, Shannon and Vierkant, Tillmann (2024). Find the Gap: AI, Responsible Agency and Vulnerability. Minds & Machines 34, 20. https://doi.org/10.1007/s11023-024-09674-0
