Explainable AI Planning

An ICAPS 2018 Workshop
Delft, The Netherlands
June 25, 2018

As AI is increasingly adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent. This is partly to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in building trust as humans invest greater authority and responsibility in intelligent systems. Explainability poses challenges for many types of AI systems, including planning and scheduling (PS) systems. For example, how should a PS system justify that a plan or schedule is correct, or good, or respects supplied preferences? How can the PS system explain particular steps, ordering decisions, or resource choices? How can a PS system explain that no solution is possible, or what relaxations of the constraints would allow a solution? How can a PS system respond to questions like “what is the hard part?” or “why is this taking so long?” These are all difficult questions that can require analysis of plan or schedule structure, analysis of the goals, constraints, and preferences, and potentially hypothetical reasoning.

The intent of this workshop is to focus on these issues of explainability and transparency in planning and scheduling.

Topics of interest include but are not limited to:

  • representation, organization, and knowledge needed for explanation;
  • the creation of such content during plan generation and understanding;
  • generation and evaluation of explanations;
  • the way in which explanations are communicated to humans (e.g., plan summaries, answers to questions);
  • the role of knowledge and learning in explainable planners;
  • human vs AI models in explanations;
  • links between explainable planning and other disciplines (e.g., social science, argumentation);
  • use cases and applications of explainable planning.

Format

The workshop will include invited talks, presentations of accepted papers, and a panel discussion.

Submission Details

Authors may submit *long papers* (8 pages plus up to one page of references) or *short papers* (4 pages plus up to one page of references).

All papers should be typeset in the AAAI style, described at http://www.aaai.org/Publications/Author/author.php, with the AAAI copyright notice removed. Accepted papers will be published on the workshop website.

Papers must be submitted in PDF format via the EasyChair system (https://easychair.org/conferences/?conf=xaip18).

Important Dates

Paper submission: April 6, 2018
Notification: April 26, 2018
Camera-ready submission: May 25, 2018

Invited Speaker


David Aha, Naval Research Laboratory

Relating XAI (Explainable AI) to XAIP (Explainable Planning)

The DARPA Explainable AI (XAI) program is a high-profile effort, among many, whose objective is to encourage research on AI systems whose models and decisions are more accessible and transparent to users. Yet the common focus of DARPA XAI's 11 projects is machine learning; it could have been called XML rather than XAI. Still, it is raising awareness that AI researchers need to collaborate with social scientists, and others, on the design and evaluation of XAI systems. This also applies broadly to other XAI efforts, including those of interest to the ICAPS community. In this talk, I'll summarize the objectives and status of DARPA XAI, emphasizing some topics of interest to XAIP. I'll also discuss and relate some work on XAIP that appeared at the IJCAI-17 XAI Workshop, or will appear at the upcoming IJCAI/ECAI-18 XAI Workshop, which has a broad XAI focus (i.e., not limited to ML).

Program

Monday (June 25, 2018)
9:00
Welcome and introduction
9:10
Invited Talk: Relating XAI (Explainable AI) to XAIP (Explainable Planning)
David Aha
10:10
Human-Aware Planning Revisited: A Tale of Three Models
Tathagata Chakraborti, Sarath Sreedharan and Subbarao Kambhampati
10:30
Coffee break
11:00
Explaining Rebel Behavior in Goal Reasoning Agents
Dustin Dannenhauer, Michael Floyd, Daniele Magazzeni and David Aha
11:20
Action Selection for Transparent Planning
Aleck Macnally, Nir Lipovetzky, Miquel Ramírez and Adrian Pearce
11:40
Moral Permissibility of Action Plans
Felix Lindner, Robert Mattmüller and Bernhard Nebel
12:00
Explaining Agent Plans with Valuings
Michael Winikoff, Virginia Dignum and Frank Dignum
12:20
Lunch
13:40
Explicability as Minimizing Distance from Expected Behavior
Anagha Kulkarni, Yu Zhang, Tathagata Chakraborti and Subbarao Kambhampati
14:00
Generating Explanations for Mathematical Optimisation: Solution Framework and Case Study
Christina Burt, Katerina Klimova and Bernhard Primas
14:20
What was I planning to do?
Mark Roberts, Isaac Monteath, Raymond Sheh, David Aha, Piyabutra Jampathom, Keith Akins, Eric Sydow, Vikas Shivashankar and Claude Sammut
14:40
Plan Explanation Through Search in an Abstract Model Space: Extended Results
Sarath Sreedharan, Midhun Pookkottil Madhusoodanan, Siddharth Srivastava and Subbarao Kambhampati
15:00
Coffee break
15:30
Challenges in Explainable Planning for Space Operations
Simone Fratini and Nicola Policella
15:50
Improving Explanation and Effectiveness of Interactions among Autonomous Vehicles and Pedestrians
Sara Manzoni, Simone Fontana, Andrea Gorrini, Domenico G. Sorrenti and Stefania Bandini
16:10
Visualizations for an Explainable Planning Agent
Tathagata Chakraborti, Kshitij Fadnis, Kartik Talamadupula, Mishal Dholakia, Biplav Srivastava, Jeffrey O. Kephart and Rachel K. E. Bellamy
16:30
Towards Explanation-Supportive Knowledge Engineering for Planning
Mauro Vallati, Lee McCluskey and Lukas Chrpa

Proceedings

The workshop proceedings are available here.

Organizing Chairs

  • Susanne Biundo, Ulm University
  • Pat Langley, University of Auckland
  • Daniele Magazzeni, King’s College London
  • David Smith

Program Committee

  • Susanne Biundo (Ulm University)
  • John Bresina (NASA)
  • Tathagata Chakraborti (Arizona State University)
  • Dustin Dannenhauer (Naval Research Laboratory)
  • Jeremy Frank (NASA)
  • Pat Langley (University of Auckland)
  • Daniele Magazzeni (King’s College London)
  • Matthew Molineaux (Knexus Research Corporation)
  • Mark Roberts (Naval Research Laboratory)
  • David Smith
  • Siddharth Srivastava (Arizona State University)
  • Kartik Talamadupula (IBM)

Related Work

Below is a sample of related work. Additional suggestions are welcome.

  • D. Smith. Planning as an Iterative Process. AAAI 2012.
  • M. Fox, D. Long, D. Magazzeni. Explainable Planning. IJCAI Workshop on XAI, 2017.
  • P. Langley, B. Meadows, M. Sridharan, D. Choi. Explainable agency for intelligent autonomous systems. AAAI 2017.
  • T. Chakraborti, S. Sreedharan, Y. Zhang, S. Kambhampati. Plan explanations as model reconciliation: Moving beyond explanation as soliloquy. IJCAI 2017.
  • B. Seegebarth, F. Müller, B. Schattenberg, S. Biundo. Making hybrid plans more clear to human users - A formal approach for generating sound explanations. ICAPS 2012.
  • M. Floyd, D. Aha. Using explanations to provide transparency during trust-guided behavior adaptation. AI Communications, 2017.
  • S. Sohrabi, J. Baier, S. McIlraith. Preferred explanations: Theory and generation via planning. AAAI 2011.
  • S. Rosenthal, S. Selvaraj, M. Veloso. Verbalization: Narration of autonomous robot experience. IJCAI 2016.

© 2018 International Conference on Automated Planning and Scheduling