Symposium on Natural Communication for Human-Robot Collaboration

AAAI Fall Symposium Series

November 9–11, 2017

Westin Arlington Gateway
Arlington, VA USA


As robots become increasingly integrated into work and living environments, there is a growing need for intuitive, natural ways for humans and robots to communicate and collaborate effectively. Natural human-robot communication has been studied by researchers from diverse communities, including Natural Language Processing, Computer Vision, Robotics, and Human-Computer Interaction. However, these communities tend to focus on challenges unique to their own disciplines, and there is relatively little collaboration across fields. This symposium aims to bring together researchers from different disciplines to brainstorm and discuss experimental ideas toward the common goal of natural human-robot interaction.

If you have any questions, please contact the organizers.

Call for Papers

We welcome exploratory ideas and cross-disciplinary work. We target a broad set of topics related to natural communication between humans and robots including, but not limited to, the following:

We invite participants to submit extended abstracts (max two pages, exclusive of one page for references) or full-length technical papers (max six pages, exclusive of one page for references) that describe recent or ongoing research. Papers should be in PDF format and adhere to the AAAI format. The AAAI Author Kit provides templates for LaTeX and Word. Note that papers will be reviewed in a single-blind manner.

Papers, abstracts, and supplementary materials can be submitted by logging into the conference management website located at:

Invited Speakers

We have an excellent lineup of invited speakers; their talks are listed in the program below.

Important Dates

July 28, 2017 Abstract/paper submission deadline (extended from July 21, 2017)
September 1, 2017 Notification of acceptance
October 13, 2017 Registration deadline
November 9–11, 2017 Fall Symposium Series


The symposium spans two-and-a-half days and will include invited and contributed talks, interactive sessions, and open discussions.

Fifteen minutes are allocated for each contributed talk, followed by five minutes for questions. Five minutes are allocated for each lightning talk, and each lightning session concludes with a joint five-minute question-and-answer period. Authors of lightning talk papers will then present their work during the subsequent poster session. We invite authors of contributed talk papers to participate in the poster session as well.

If you would like to propose questions for the panel, please add them here:

Thursday, November 9

09:00am–09:10am Welcoming Remarks
09:10am–10:30am Invited Talk
Ralph Hollis (Carnegie Mellon University), Physical Human-Robot Interaction with Dynamically Stable Mobile Robots
Contributed Talks
Junjie Hu, Desai Fan, Shuxin Yao, and Jean Oh, Natural Communication for Human-Robot Collaboration
Adrian Boteanu, Jacob Arkin, Siddharth Patki, Thomas Howard, and Hadas Kress-Gazit, Robot-Initiated Specification Repair through Grounded Language Interaction
10:30am–11:00am Coffee Break
11:00am–12:30pm Invited Talk
Alborz Geramifard (Amazon), TBD
Contributed Talks
Divesh Lala, Koji Inoue, Pierrick Milhorat, and Tatsuya Kawahara, Detection of Social Signals for Recognizing Engagement in Human-Robot Interaction
Dan Bohus, Sean Andrist, and Eric Horvitz, A Study in Scene Shaping: Adjusting F-formations in the Wild
12:30pm–02:00pm Lunch
02:00pm–03:30pm Lightning Talks and Poster Session
Casey R. Kennington and Sarah Plane, Symbol, Conversational, and Societal Grounding with a Toy Robot
Peggy Wu, Human Physical Movements for Kinematic Learning for Robots
Douglas Summers-Stay and Dandan Li, Analogical Reasoning with Knowledge-based Embeddings
Mara Brandt, Britta Wrede, Franz Kummert, and Lars Schillingmann, Confirmation Detection in Human-Agent Interaction Using Non-Lexical Speech Cues
Raj M Korpan, Susan Epstein, Anoop Aroor, and Gil Dekel, WHY: Natural Explanations from a Robot Navigator
03:30pm–04:00pm Coffee Break
04:00pm–04:45pm Invited Talk
Rohan Paul (Massachusetts Institute of Technology), Leveraging Visual-Linguistic Context for Grounding Natural Language Instructions
04:45pm–05:30pm Panel Discussion (Suggest questions here)

Friday, November 10

09:00am–09:45am Invited Talk
Ben Kuipers (University of Michigan), TBD
09:45am–10:30am Invited Talk
Joyce Chai (Michigan State University), TBD
10:30am–11:00am Coffee Break
11:00am–11:45am Invited Talk
Yejin Choi (University of Washington), TBD
11:45am–12:30pm Invited Talk
Dan Bohus (Microsoft Research), Engagement and Turn-Taking in Physically Situated Language Interaction
12:30pm–02:00pm Lunch
02:00pm–03:30pm Lightning Talks and Poster Session
Michael Wollowski, Carlotta Berry, Ryder Winck, Alan Jern, David Voltmer, Alan Chiu, and Yosi Shibberu, A Data-driven Approach Towards Human-robot Collaborative Problem Solving in a Shared Space
Qiaozi Gao, Lanbo She, and Joyce Chai, Interactive Learning of State Representations through Natural Language Instruction and Explanation
Stephanie Zhou, Alane Suhr, and Yoav Artzi, Visual Reasoning with Natural Language
Dipendra Misra and Yoav Artzi, Reinforcement Learning for Mapping Instructions to Actions with Reward Learning
Sz-Rung Shiang, Jean Oh, and Anatole Gershman, A Generalized Model for Multimodal Perception
Dianna Radpour and Vinay Ashokkumar, Non-Contextual Sarcasm Modeling with Neural Network Benchmarking
Nicole K Glabinski, Rohan Paul, and Nicholas Roy, Grounding Natural Language Instructions with Unknown Object References using Learned Visual Attributes
03:30pm–04:00pm Coffee Break
04:00pm–05:00pm Invited Talk
David Traum (University of Southern California), TBD
Contributed Talk
Megan Zimmerman and Jeremy Marvel, Smart Manufacturing and The Promotion of Artificially-Intelligent Human-Robot Collaborations in Small- and Medium-sized Enterprises
05:00pm–05:30pm Panel Discussion (Suggest questions here)

Saturday, November 11

09:00am–10:30am Invited Talk
Ray Mooney (University of Texas, Austin), Robots that Learn Grounded Language Through Interactive Dialog
Contributed Talks
Sergei Nirenburg and Peter Wood, Toward Human-Style Learning in Robots
Andrea F. Daniele, Thomas Howard, and Matthew R. Walter, Learning Articulated Object Models from Language and Vision
10:30am–11:00am Coffee Break
11:00am–12:30pm Invited Talk
Alex Rudnicky (Carnegie Mellon University), Blended Conversations
Contributed Talks
Claire N Bonial, Matthew Marge, Ron Artstein, Felix Gervits, Cory Hayes, Cassidy Henry, Susan Hill, Anton Leuski, Pooja Moolchandani, Kimberly Pollard, David Traum, Clare Voss, Ashley Foots, and Stephanie M Lukin, Laying Down the Yellow Brick Road: Development of a Wizard-of-Oz Interface for Collecting Human-Robot Dialogue
Nakul Gopalan, Edward Williams, Stefanie Tellex, and Mina Rhee, Learning to Parse Natural Language to Grounded Reward Functions with Weak Supervision
12:30pm–12:45pm Closing Remarks


Please register for the symposium through the AAAI 2017 Fall Symposium site. Note that the deadline for registration is October 13, 2017.


Organizing Committee

Jean Oh, Carnegie Mellon University
Matthew Walter, Toyota Technological Institute at Chicago
Zhou Yu, University of California, Davis

Program Committee

Jacob Arkin, University of Rochester
Yoav Artzi, Cornell University
Mohit Bansal, University of North Carolina
Yonatan Bisk, University of Southern California
Joyce Chai, Michigan State University
Anthony Cohn, University of Leeds
Sanja Fidler, University of Toronto
Alborz Geramifard, Amazon
Raia Hadsell, Google DeepMind
Thomas Howard, University of Rochester
Thomas Kollar, Amazon
Hadas Kress-Gazit, Cornell University
Matthew Marge, Army Research Laboratory
Cynthia Matuszek, University of Maryland, Baltimore County
Hongyuan Mei, Johns Hopkins University
Wolfgang Minker, University of Ulm
Dipendra Misra, Cornell University
Raymond Mooney, University of Texas, Austin
Rohan Paul, Massachusetts Institute of Technology
Peter Stone, University of Texas, Austin