Second International Workshop on Symbolic-Neural Learning (SNL-2018)

July 5-6, 2018
Nagoya Congress Center (Nagoya, Japan)

Language Grounded Activity Recognition and Planning

Tsuyoshi Okita (Kyushu Institute of Technology) and Sozo Inoue (Kyushu Institute of Technology)

Abstract:

The interface between a robot and a human, or between an artificial intelligence and a human, requires language grounding. From the top-down perspective of NLP, this interface maps abstract words onto physically concrete referents. From the bottom-up perspective of activity recognition, we need to assign a name to every activity in an unsupervised manner. As an example, consider a communication robot for weather forecasts: when an autonomous robot is asked to give a bunch of roses to Ms. Hanako if the weather forecast for the next day turns out to be correct, the robot needs to understand the meaning of this sentence and then carry out the task. This task involves language understanding, language grounding, activity planning, activity recognition, and multimodal sensors. We consider a deep learning approach to this problem.