Workshop: Defeasible Reasoning – Work at the Intersection of Ethics and AI

Date: Monday, November 13th & Tuesday, November 14th 2023

Location: the Seminar Room, Institute for Safe Autonomy, Room ISA/135, Campus East, University of York

This is a hybrid event. If you wish to attend via Zoom, please e-mail the organiser, Alan Thomas, at ap.thomas@york.ac.uk.

Zoom links will be sent out on Sunday the 12th of November.

To be eligible to participate virtually, you must be University faculty (including postgraduate researchers) or an employee of an NGO/company affiliated with the EPSRC project ‘Resilience of Autonomous Systems’.

Workshop Program

Day One

Monday November 13th

10:00 Opening remarks & welcome

10:15 – 11:00 Alan Thomas (University of York) 

‘Weightless Reasons’

Abstract

The project of implementing common-sense ethical knowledge in an autonomous system via a list of background default hedged principles runs into the problem of conflict when several such principles each seem to generate a reason. How does one derive an overall verdict? One natural idea is to carry over into this representation the intuitive idea that reasons have weights, either individually or in combination. This metaphor in search of an underlying explanation is assessed in the light of John Horty’s discussion of it. It is concluded that the idea cannot be saved and probably ought to be put out to pasture. This leaves a practical problem, which can be resolved by specifying an artificial set of priority relations between classes of reason. This does, however, represent only a second-grade level of moral understanding, with implications for the kinds of (low-stakes) contexts in which systems of this kind ought to be deployed.
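
As a concrete illustration of the practical fix the abstract gestures at, here is a minimal Python sketch of resolving a conflict between default reasons by a stipulated priority ordering over classes of reasons, rather than by summing weights. The reason classes, the priority ordering, and the traffic example are hypothetical placeholders of my own, not taken from the talk.

```python
# Minimal sketch: conflict resolution by priority over reason classes,
# not by adding up numerical weights. All names here are illustrative.
from dataclasses import dataclass

# Hypothetical priority ordering over reason classes (higher = stronger).
PRIORITY = {"safety": 3, "legality": 2, "courtesy": 1}

@dataclass
class Reason:
    description: str
    reason_class: str   # e.g. "safety", "courtesy"
    favours: str        # the option the reason counts in favour of

def verdict(reasons, option_a, option_b):
    """Return the option backed by the highest-priority reason class.

    Ties at the top level are reported as unresolved, since this scheme
    deliberately avoids 'adding up' weights."""
    best = {option_a: 0, option_b: 0}
    for r in reasons:
        rank = PRIORITY.get(r.reason_class, 0)
        best[r.favours] = max(best[r.favours], rank)
    if best[option_a] == best[option_b]:
        return None  # unresolved conflict
    return option_a if best[option_a] > best[option_b] else option_b

if __name__ == "__main__":
    rs = [
        Reason("braking avoids harm to a pedestrian", "safety", "brake"),
        Reason("proceeding avoids blocking traffic", "courtesy", "proceed"),
    ]
    print(verdict(rs, "brake", "proceed"))  # -> "brake"
```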

11:00 – 11:45 Michael Fisher (University of Manchester)

‘Responsibility, Irresponsibility, and Indifference in Agent-Based Autonomy’

Abstract

Autonomous systems make their own decisions (and potentially take their own actions) without human intervention. In building such systems, we use (“cognitive”, “rational”, “intelligent”) agents to capture this core decision-making in a transparent, explainable, and verifiable way. In this talk I will describe a formalisation of “responsibilities” in agent-based autonomy, addressing: how responsibilities can lead to agent action; how responsibilities can lead to agent inaction; and where these responsibilities come from.

This work is carried out as part of the “Computational Agent Responsibility” project: https://web.cs.manchester.ac.uk/~michael/Responsibility/

11:45 – 12 noon Tea/coffee Break

12 noon – 12:45 Aleks Knoks (University of Luxembourg)

Abstract

The philosophical literature that engages with foundational questions about the nature of normativity often appeals to normative reasons, or considerations that count in favor of or against actions. These reasons and their interaction are usually taken to determine the deontic status of actions—that is, whether a given action is permissible, forbidden, or required—and the interaction between reasons is standardly made sense of by appeal to the metaphor of (old-fashioned) weighing scales. The main goal of my talk is to present a new formal model of the way reasons and their interaction determine the deontic status of actions. The model draws on recent ideas from formal argumentation, and it is more faithful to the philosophical literature than its competitors. My hope is that it will prove useful in furthering philosophical debates about reasons, as well as relevant to the concerns of machine ethics.
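
For readers unfamiliar with formal argumentation, the following toy Python sketch shows one standard ingredient such models build on: computing which arguments survive under a grounded (sceptical) semantics and reading a deontic status off the survivors. The arguments, attack relation, and promise-keeping example are illustrative assumptions of mine, not the model presented in the talk.

```python
# Illustrative sketch only: grounded semantics over an abstract
# argumentation framework, with a deontic status read off at the end.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all defeated,
    and defeat arguments attacked by an accepted argument."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:      # every attacker already defeated
                accepted.add(a)
                changed = True
            elif attackers & accepted:     # attacked by an accepted argument
                defeated.add(a)
                changed = True
    return accepted

if __name__ == "__main__":
    # Hypothetical example: r1 favours keeping a promise, r2 opposes it,
    # and r3 undercuts r2.
    args = {"r1: promise made", "r2: costly to agent", "r3: cost is trivial"}
    attacks = {("r2: costly to agent", "r1: promise made"),
               ("r3: cost is trivial", "r2: costly to agent")}
    surviving = grounded_extension(args, attacks)
    status = "required" if "r1: promise made" in surviving else "not required"
    print(surviving, "->", status)
```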

12:45 – 2:00 pm Lunch

2:00 – 2:45 Bijan Parsia (University of Manchester)

Title TBC

2:45 – 4:15 Keynote Address (1.5 hours)

John Horty (University of Maryland) 

‘Knowledge Representation for Computational Normative Reasoning’

Abstract

I will talk about issues involved in designing a machine capable of acquiring, representing, and reasoning with the information needed to guide everyday normative reasoning – the kind of reasoning that robotic assistants would have to engage in just to help us with simple tasks. After reviewing some current top-down, bottom-up, and hybrid approaches, I will define a new hybrid approach that generalises ideas developed in the field of AI and law and in legal theory.

[Joint work with Ilaria Canavotto] 

4:00 – 4:15 Tea/coffee Break

4:15 – 5:00 Ilaria Canavotto (University of Maryland)

‘Explanation Through Legal Precedent-Based Reasoning’

Abstract

Computational models of legal precedent-based reasoning developed in the field of Artificial Intelligence and Law have recently been applied to the development of explainable Artificial Intelligence methods. The key idea behind these approaches is to interpret training data as a set of precedent cases; a model of legal precedent-based reasoning can then be used to generate an argument supporting a new decision (or prediction, or classification) on the basis of its similarity to a precedent case [3,4,5]. In this talk, which builds on [1,2], I will present a model of precedent-based reasoning that provides us with an alternative way of generating arguments supporting a new decision: instead of citing similar precedent cases, the model generates arguments based on how the base-level factors present in the new case support higher-level concepts. After presenting the model, I will discuss some open questions and work in progress.
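
To make the contrast concrete, here is a small Python sketch of the factor-hierarchy idea: an argument for a decision is generated from how base-level factors in the new case support higher-level concepts, rather than by citing a similar precedent case. The loan-decision domain, factor names, and concept hierarchy are hypothetical examples of mine, not drawn from the cited work.

```python
# Toy sketch: arguments from base-level factors via higher-level concepts.
# The domain and factor names are invented for illustration.

# Hypothetical factor hierarchy for a loan-decision domain.
CONCEPTS = {
    "reliable_borrower": {"steady_income", "no_missed_payments"},
    "high_risk": {"recent_default", "unstable_employment"},
}
# Which side of the decision each higher-level concept favours.
CONCEPT_SIDE = {"reliable_borrower": "grant", "high_risk": "deny"}

def argue(case_factors):
    """List the higher-level concepts supported by the case's base-level factors."""
    argument = []
    for concept, factors in CONCEPTS.items():
        present = factors & case_factors
        if present:
            argument.append(
                f"factors {sorted(present)} support '{concept}', "
                f"favouring '{CONCEPT_SIDE[concept]}'"
            )
    return argument

if __name__ == "__main__":
    new_case = {"steady_income", "no_missed_payments", "recent_default"}
    for line in argue(new_case):
        print(line)
```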

7:00 pm Conference Dinner

Ambiente Tapas, Fossgate

—————————————————-

Day Two

Tuesday November 14th

10:00 – 10:45 Bev Townsend & T T Arvind (University of York)

Title TBC

10:45 – 11:30  Radu Calinescu & Ana Cavalcanti (University of York)

‘Operationalisation of Social, Legal, Ethical, Empathetic and Cultural Requirements for AI Agents’

11:30 – 11:45 Tea/coffee Break

11:45 – 12:30 Pekka Väyrynen (University of Leeds)

‘What Explains Reasons for Action?’

Abstract

There are three kinds of view about what explains reason relations. When a consideration is a normative reason to do something: (1) nothing explains this; (2) something intrinsic to the reason relation explains this; (3) something extrinsic to the reason relation explains this. I’ll argue that “Nothing” fails and that “Intrinsic” works at best in certain special cases that “Extrinsic” may be able to accommodate. The upshot — that reason relations are generally explained by something extrinsic to them — limits the field of theoretical options.

12:30 – 2:00 pm Lunch

2:00 – 2:45 Louise Dennis (University of Manchester)

‘A Defeasible Logic Implementation of Ethical Reasoning’

Abstract

Not so many years ago, I had a conversation with John Horty about the implementation of ethical reasoning. I believe I said something to the effect that I would happily implement any ethical theory, and he pointed me in the direction of his work on defeasible logic. The result was a Prolog implementation of defeasible ethical reasoning that I did with Cristina Perea del Olmo. In particular, we take Horty’s ought operator and combine it with Antoniou et al.’s formalisation of defeasible logic in an executable system. This talk will discuss that implementation, its strengths in expressing ethical rules and statements in an accessible way, and also some of the challenges it faces in dealing with the sequencing of actions.
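
By way of illustration, here is a highly simplified Python sketch of the kind of reasoning involved: conflicting defeasible rules are adjudicated by a superiority relation, and the surviving conclusion is read as what the agent ought to do. The rule encoding, the team-defeat-style check, and the care-robot example are simplifying assumptions of mine; this is not the Prolog system described in the talk.

```python
# Simplified sketch of defeasible-rule conflict resolution via a superiority
# relation. Literals of the form "ought(X)" / "not(ought(X))" stand in for a
# deontic operator; all rule names and facts are invented for illustration.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    body: set    # antecedent literals; all must hold among the facts
    head: str    # conclusion literal, e.g. "ought(insist)"

def defeasibly_holds(literal, facts, rules, superior):
    """literal holds if some applicable rule concludes it and every applicable
    rule for the opposite conclusion is overridden by a superior pro rule."""
    opposite = literal[4:-1] if literal.startswith("not(") else f"not({literal})"
    applicable = [r for r in rules if r.body <= facts]
    pro = [r for r in applicable if r.head == literal]
    con = [r for r in applicable if r.head == opposite]
    return bool(pro) and all(
        any((p.name, c.name) in superior for p in pro) for c in con
    )

if __name__ == "__main__":
    facts = {"refuses_medication", "medication_critical"}
    rules = [
        Rule("r1", {"refuses_medication"}, "not(ought(insist))"),
        Rule("r2", {"medication_critical"}, "ought(insist)"),
    ]
    superior = {("r2", "r1")}   # the critical-medication rule overrides r1
    print(defeasibly_holds("ought(insist)", facts, rules, superior))       # True
    print(defeasibly_holds("not(ought(insist))", facts, rules, superior))  # False
```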

2:45 – 3:30 Marija Slavkovik (University of Bergen)

‘Telling Machines What Is Right: A Defeasible Logic Problem?’

Abstract

The talk considers the problem of passing on morally relevant information to an algorithmic decision-making system. There are two sub-problems involved: (i) identifying the morally relevant information and (ii) specifying that information so that it is usable by an algorithm. The talk outlines these two sub-problems and draws on my recent work to discuss the possible role of defeasible logic in supplying machines with morally relevant information.

The paper involved is: https://www.jair.org/index.php/jair/article/view/14368

3:30 – 3:45 Tea/coffee Break

3:45 – 4:30 Round Table: What Have We Learned?

4:30 Workshop closes


