Colloquium: Multimodal Dialogues with Autonomous Systems
1 June 2017
Autonomous systems such as self-driving cars and collaborative robots in Industrie 4.0 can form hybrid teams with humans and softbots. In anomalous situations, they must occasionally ask the people around them for help. A new generation of multiadaptive interaction platforms provides a comprehensive multimodal presentation of the current situation in real time, so that a smooth transfer of control back and forth between human agents and AI systems is ensured. We present the anatomy of our multiadaptive human-environment interaction platform SiAM-dp, which includes explicit models of the attentional and cognitive state of the human agents as well as a dynamic model of the cyber-physical environment, and which supports massive multimodality, including speech, gestures, and body postures, as well as multiscale and multiparty interaction. The platform is based on the principles of symmetric multimodality and bidirectional representations: all input modes are also available as output modes and vice versa, so that the system not only understands and represents the user’s multimodal input but also its own multimodal output. We illustrate our new approach with examples from advanced automotive, manufacturing, and retail applications.
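To make the principle of symmetric multimodality and bidirectional representations more concrete, the following minimal sketch shows one way a modality can expose both an input side and an output side while the dialogue manager keeps user input and system output in a single shared representation. All class and method names are hypothetical illustrations and are not taken from the SiAM-dp API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical illustration of symmetric multimodality: every modality offers
// both an input side (interpret) and an output side (render).
interface Modality {
    String interpret(String rawSignal); // input: raw signal -> modality-independent meaning
    String render(String meaning);      // output: meaning -> modality-specific realization
}

class SpeechModality implements Modality {
    public String interpret(String rawSignal) { return "meaning(" + rawSignal + ")"; }
    public String render(String meaning)      { return "synthesize: " + meaning; }
}

class GestureModality implements Modality {
    public String interpret(String rawSignal) { return "meaning(" + rawSignal + ")"; }
    public String render(String meaning)      { return "animate: " + meaning; }
}

class DialogueManager {
    // Bidirectional representation: one history holds both the user's input and
    // the system's own output, so the system can reason about what it has said.
    private final Deque<String> sharedHistory = new ArrayDeque<>();

    void handleInput(Modality m, String rawSignal) {
        sharedHistory.push(m.interpret(rawSignal));
    }

    String produceOutput(Modality m, String meaning) {
        sharedHistory.push(meaning); // system output is stored like user input
        return m.render(meaning);
    }

    public static void main(String[] args) {
        DialogueManager dm = new DialogueManager();
        dm.handleInput(new SpeechModality(), "please take over control");
        System.out.println(dm.produceOutput(new GestureModality(), "point at obstacle"));
    }
}
```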
Thursday, 1 June 2017, Universität Hamburg, Hörsaal ESA-H, Edmund-Siemers-Allee
Prof. Dr. Dr. h.c. mult. Wolfgang Wahlster, Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI)