EXplainable AI Planning

An ICAPS 2018 Workshop
Delft, The Netherlands
June 25 or June 26, 2018

As AI is increasingly being adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent. Partly this is to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in the process of building trust as humans invest greater authority and responsibility in intelligent systems. Explainability poses challenges for many types of AI systems, including planning and scheduling (PS) systems. For example, how should a PS system justify that a plan or schedule is correct, or good, or respects supplied preferences? How can the PS system explain particular steps, ordering decisions, or resource choices? How can a PS system explain that no solution is possible, or what relaxations of the constraints would allow a solution? How can a PS system respond to questions like “what is the hard part?” or “why is this taking so long?”. These are all difficult questions that can require analysis of plan or schedule structure, analysis of the goals, constraints, and preferences, and potentially hypothetical reasoning.

The intent of this workshop is to focus on these issues of explainability and transparency in planning and scheduling.

Topics of interests include but are not limited to:

  • representation, organization, and knowledge needed for explanation;
  • the creation of such content during plan generation and understanding;
  • generation and evaluation of explanations;
  • the way in which explanations are communicated to humans (e.g., plan summaries, answers to questions);
  • the role of knowledge and learning in explainable planners;
  • human vs AI models in explanations;
  • links between explainable planning and other disciplines (e.g., social science, argumentation);
  • use cases and applications of explainable planning.


The workshop will include invited talks, presentations of accepted papers, and a panel discussion.

Submission Details

Authors may submit *long papers* (8 pages plus up to one page of references) or *short papers* (4 pages plus up to one page of references).

All papers should be typeset in the AAAI style, with the AAAI copyright notice removed. Accepted papers will be published on the workshop website.

Papers must be submitted in PDF format via the EasyChair system.

Important Dates

Paper submission: April 6, 2018
Notification: April 26, 2018
Camera-ready submission: May 25, 2018

Organizing Chairs

  • Susanne Biundo, Ulm University
  • Pat Langley, University of Auckland
  • Daniele Magazzeni, King’s College London
  • David Smith, NASA Ames Research Center

Related Work

Below is a sample of related work. Additional suggestions are welcome.

  • D. Smith. Planning as an Iterative Process. AAAI 2012.
  • M. Fox, D. Long, D. Magazzeni. Explainable Planning. IJCAI Workshop on XAI, 2017.
  • P. Langley, B. Meadows, M. Sridharan, D. Choi. Explainable agency for intelligent autonomous systems. AAAI 2017.
  • T. Chakraborti, S. Sreedharan, Y. Zhang, S. Kambhampati. Plan explanations as model reconciliation: Moving beyond explanation as soliloquy. IJCAI 2017.
  • B. Seegebarth, F. Müller, B. Schattenberg, S. Biundo. Making hybrid plans more clear to human users - A formal approach for generating sound explanations. ICAPS 2012.
  • M. Floyd, D. Aha. Using explanations to provide transparency during trust-guided behavior adaptation. AI Communications, 2017.
  • S. Sohrabi, J. Baier, S. McIlraith. Preferred explanations: Theory and generation via planning. AAAI 2011.
  • S. Rosenthal, S. Selvaraj, M. Veloso. Verbalization: Narration of autonomous robot experience. IJCAI 2016.
