Thematic Area B: Efficient crossmodal generalization and prediction
Whereas the projects in Area A focus on crossmodal learning and integration as dynamic processes, the projects in Area B will continue to investigate how crossmodal learning and integration affect generalization and prediction. Multimodal stimuli generally provide more information than their unimodal components, yet that extra information is useful only insofar as the stimuli can be integrated. Projects in Area B therefore investigate the processes by which crossmodal information can enhance generalization and prediction beyond what is possible with unimodal information alone. As in the other thematic areas, Area B combines work on the human brain with modelling and robotic approaches.
As in Area A, all five projects (B1 – B5) in Area B are continuation projects. They will continue to address central research questions: how biological brains learn and store crossmodal information for multisensory prediction (B1, B3, B4); how conflicts between the unimodal components of crossmodal predictions are resolved (B2, B4, B5); how humans incorporate signals from one modality to improve generalization and prediction in another (B1, B5); and how such insights can be transferred to statistical models to improve their predictions (B2) and to robots to improve their responses in complex, multisensory environments (B5).
Project B1 (Engel, D. Zhang) investigated the neural dynamics underlying crossmodal prediction of sensory events in the human brain during the first funding phase. In particular, it addressed how crossmodal prediction is related to oscillatory neural activity and to large-scale dynamic coupling between brain regions. Building on this work, the project will now focus on the modulation of neural mechanisms underlying multisensory temporal predictions. The work plan will combine studies in healthy participants with studies in patients with Parkinson’s disease (PD) who have impairments in updating stimulus predictions and in judging event timing (Honma et al., 2016). In healthy subjects, crossmodal predictions will be modulated by manipulation of temporal cues in the sensory inputs. Furthermore, neurostimulation approaches will be used to modulate the mechanisms underlying temporal predictions. To this end, the project will use transcranial alternating current stimulation (tACS) in healthy participants as well as deep brain stimulation (DBS) in PD patients. To account for temporal predictions and their modulation, the ensemble oscillator model developed in the first funding period will be extended.
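The ensemble oscillator model itself is beyond the scope of this summary, but the underlying idea of temporal prediction through entrained oscillators can be conveyed with a minimal sketch. In the following hypothetical example, a population of phase oscillators with dispersed preferred frequencies is phase-reset by a rhythmic stimulus train, and the ensemble's circular mean phase is read out to predict the time of the next event; all parameters are illustrative assumptions, not values from the project.

```python
import numpy as np

# Minimal, illustrative sketch of temporal prediction with an ensemble of
# phase oscillators (not the actual B1 model; all parameters are assumptions).
rng = np.random.default_rng(0)
n_osc = 100
stim_period = 0.8                                # s between rhythmic onsets
freqs = rng.normal(1 / stim_period, 0.2, n_osc)  # preferred frequencies (Hz)
phases = rng.uniform(0, 2 * np.pi, n_osc)        # random initial phases

dt, k = 0.001, 1.0                               # time step (s), reset strength
onsets = np.arange(0.0, 8.0, stim_period)        # rhythmic stimulus train

for t in np.arange(0.0, 7.6, dt):                # stop mid-cycle after last onset
    phases += dt * 2 * np.pi * freqs             # free-running phase advance
    if np.min(np.abs(t - onsets)) < dt / 2:      # a stimulus onset occurs now
        phases -= k * np.sin(phases)             # impulsive pull toward phase 0

# Predict the next event from the ensemble's circular mean phase.
mean_phase = np.angle(np.mean(np.exp(1j * phases))) % (2 * np.pi)
next_event_in = (2 * np.pi - mean_phase) / (2 * np.pi * freqs.mean())
print(f"predicted next onset in ~{next_event_in:.2f} s (true interval: 0.40 s)")
```

In this toy version, repeated phase resets keep the dispersed oscillators aligned to the stimulus rhythm, so the ensemble continues to track the expected event timing even between onsets; manipulating the reset strength or the temporal regularity of the onsets would correspond loosely to the cue manipulations described above.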
Project B2 (Zhu, Gläscher) will continue its studies on computational models (particularly Bayesian methods) for crossmodal integration and inference (Ganchev et al., 2010). In the first funding phase, the project focused on investigating how crossmodal integration interacts with learning, semantics, and social context. Experiments studied how attention shapes the multisensory integration of more than two stimuli from the auditory and visual domains. The results suggest that an incongruent, unattended stimulus interferes with the crossmodal integration of the two attended stimuli, dramatically increasing the error rate on such trials. In the second funding period, B2 aims to study the underlying mechanisms by conjoining two complementary types of machine learning models, namely probabilistic and symbolic models, in the challenging scenario of visual-text crossmodal learning and inference. The main hypothesis is that conjoining these two complementary types of methods will lead to interpretable crossmodal inference and prediction for visual-text data.
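A canonical starting point for Bayesian accounts of crossmodal integration is reliability-weighted cue combination, in which the maximum-likelihood fusion of two noisy Gaussian estimates weights each cue by its inverse variance. The following minimal sketch illustrates this generic principle; it is not the project's specific model, and the example values are arbitrary.

```python
import numpy as np

def fuse_gaussian_cues(mu_v, sigma_v, mu_a, sigma_a):
    """Reliability-weighted fusion of a visual and an auditory estimate.

    Under independent Gaussian noise, the maximum-likelihood combined
    estimate weights each cue by its inverse variance (its reliability),
    and the fused estimate is never less reliable than either cue alone.
    """
    w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)       # weight on vision
    mu = w_v * mu_v + (1 - w_v) * mu_a
    sigma = np.sqrt((sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2))
    return mu, sigma

# Example: a reliable visual cue dominates a noisy auditory one.
mu, sigma = fuse_gaussian_cues(mu_v=10.0, sigma_v=1.0, mu_a=14.0, sigma_a=3.0)
print(f"fused estimate: {mu:.2f} +/- {sigma:.2f}")     # -> 10.40 +/- 0.95
```

The planned probabilistic-symbolic hybrid goes well beyond this two-cue case, but the same logic, weighting each information source by its reliability, underlies the interference result described above: an unattended incongruent stimulus effectively injects a conflicting estimate into the fusion.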
Project B3 (Q. Fu, Rose) investigated how people implicitly predict upcoming stimuli based on crossmodal information in implicit category learning and implicit sequence learning. The project demonstrated crossmodal memory formation in the absence of explicit memory access, developed a computational model of implicit crossmodal memory formation, and evaluated the neural basis of implicit crossmodal memory as well as the generation of explicit knowledge during incidental learning. In the second phase, the project will examine the mechanisms relevant for the generalization and transfer of implicit memory. The general relevance of the observed processes will be tested in more realistic experimental settings using virtual reality (VR). Using EEG and fMRI, the project will test how the human brain generalizes acquired crossmodal memory to novel settings. Moreover, the transfer effect for perceptual as well as more abstract representations acquired during implicit learning will be examined. Another aim is to examine implicit learning and transfer effects for value-associated stimuli, which are known to strongly influence human learning (Sanz et al., 2018). Cultural differences in implicit learning and transfer will be assessed in a combined study conducted identically in Hamburg and Beijing (Murphy et al., 2017).
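As a generic illustration of how implicit sequence knowledge can be captured computationally (this is a sketch, not B3's model), the following toy learner accumulates first-order transition statistics over (modality, identity) states and predicts the most likely successor of the current stimulus. The example sequence is hypothetical.

```python
from collections import defaultdict

class TransitionLearner:
    """Toy first-order sequence learner (illustrative, not the B3 model).

    Counts transitions between successive stimuli and predicts the next
    stimulus as the most frequent successor of the current one. Crossmodal
    sequences are encoded by using (modality, identity) pairs as states.
    """
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, current):
        successors = self.counts[current]
        return max(successors, key=successors.get) if successors else None

# A structured audiovisual sequence: each tone predicts which image follows.
seq = [("aud", "low"), ("vis", "cat"), ("aud", "high"), ("vis", "dog")] * 20
learner = TransitionLearner()
learner.observe(seq)
print(learner.predict(("aud", "low")))    # -> ('vis', 'cat')
```

Transfer questions of the kind B3 studies can be framed in such a scheme by asking which parts of the learned statistics survive a change of surface features, for example when the same transition structure is presented with new stimuli or in a new (VR) setting.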
Project B4 (X. Liu, Nolte, Engel) will investigate the neural dynamics of top-down control over audiovisual congruency processing, focusing on how top-down control modulates crossmodal information integration, using electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI). Specifically, it is hypothesized that top-down control increases the neural coherence (EEG/MEG) and representational similarity (fMRI) of unimodal processing so as to facilitate the integration of congruent audiovisual information. Conversely, when visual and auditory information is incongruent, top-down control is expected to reduce neural coherence and representational similarity of unimodal processing in order to respond selectively to information from the target modality (Lin et al., 2018). Understanding how top-down control modulates the oscillations (temporal coherence) and activation patterns (spatial representation) of neural activity across the stimulus-response stages of crossmodal learning and conflict processing will help reveal crossmodal learning architectures. Additionally, several crossmodal congruency and rule-learning tasks will be used to directly examine the strategies by which humans integrate and selectively process crossmodal information.
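The two dependent measures named in the hypothesis can be illustrated with simulated data: spectral coherence between two signals that share an oscillatory component, and representational similarity computed by correlating two representational dissimilarity matrices (RDMs). All signals, sampling rates, and dimensions in the sketch below are toy assumptions, not project data or the project's analysis pipeline.

```python
import numpy as np
from scipy.signal import coherence
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# --- Spectral coherence (EEG/MEG): frequency-resolved coupling ---
fs = 250.0                                    # sampling rate (Hz), hypothetical
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)           # shared 10 Hz alpha-band component
x = shared + 0.5 * rng.standard_normal(t.size)
y = shared + 0.5 * rng.standard_normal(t.size)
f, cxy = coherence(x, y, fs=fs, nperseg=512)
print(f"coherence at ~10 Hz: {cxy[np.argmin(np.abs(f - 10))]:.2f}")

# --- Representational similarity (fMRI): compare dissimilarity structures ---
patterns_a = rng.standard_normal((6, 50))     # 6 conditions x 50 voxels, toy data
patterns_b = patterns_a + 0.3 * rng.standard_normal((6, 50))
rdm_a = pdist(patterns_a, metric="correlation")   # condition-pair dissimilarities
rdm_b = pdist(patterns_b, metric="correlation")
rho, _ = spearmanr(rdm_a, rdm_b)
print(f"RDM similarity (Spearman rho): {rho:.2f}")
```

In the hypothesized pattern, both quantities would rise under top-down control for congruent audiovisual input and fall for incongruent input, where selective processing of the target modality is advantageous.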
Project B5 (J. Zhang, Sun) will continue to explore methods for integrating crossmodal information into the learning and execution of fine motor operations and dexterous manipulation by robots, using designed robot experiments and learning tasks. In the first phase, crossmodal fusion for dexterous manipulation was studied. In the second funding phase, the project will move beyond sensor processing towards the crossmodal transfer of dexterous manipulation skills to new situations (Ramirez-Amaro et al., 2017; Devin et al., 2017). Being able to retain and reuse prior experience is arguably a fundamental aspect of intelligent behaviour. B5 plans to investigate crossmodal representations of dexterous manipulation skills, combining both low-level multisensory data and high-level abstractions produced by crossmodal imitation learning (Bacon et al., 2017). B5 will develop crossmodal transfer strategies to reuse the represented knowledge when learning new skills in new situations. Additionally, the transferred skills will be further refined using online learning techniques to succeed in dynamic environments.
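As a minimal sketch of the kind of architecture implied by crossmodal skill representation (the actual B5 system is not specified here, and all module names and dimensions below are hypothetical), the following toy policy encodes visual and tactile inputs separately, fuses them into a shared representation, and maps this to a motor command. Transfer to a new skill could then reuse the frozen encoders while retraining only the small action head.

```python
import torch
import torch.nn as nn

class CrossmodalPolicy(nn.Module):
    """Toy visuo-tactile fusion policy (a sketch, not the B5 architecture).

    Visual and tactile observations are encoded separately, concatenated
    into a shared crossmodal representation, and mapped to a motor command.
    """
    def __init__(self, vis_dim=64, tac_dim=16, act_dim=7):
        super().__init__()
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, 32), nn.ReLU())
        self.tac_enc = nn.Sequential(nn.Linear(tac_dim, 32), nn.ReLU())
        self.head = nn.Linear(64, act_dim)   # e.g. 7-DoF arm command, assumed

    def forward(self, vis, tac):
        z = torch.cat([self.vis_enc(vis), self.tac_enc(tac)], dim=-1)
        return self.head(z)

policy = CrossmodalPolicy()
action = policy(torch.randn(1, 64), torch.randn(1, 16))
print(action.shape)                          # torch.Size([1, 7])
```

Under this framing, the shared representation z is the "represented knowledge" that a transfer strategy would reuse, and online refinement in a dynamic environment would amount to continuing to update the action head (or the whole network) from fresh interaction data.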