Projects - Area A
Projects in Area A investigate the dynamics of the crossmodal learning process itself, attempting to discover what occurs and what should occur during learning when information arrives through multiple disparate modalities. The projects in Area A focus on research issues such as how crossmodal signals are integrated in humans and machines and how this integration can improve over time (A1–6), how network structure and prior knowledge affect the crossmodal-learning process (A1–2, 5–6), how crossmodal representations promote abstraction, which in turn enables more efficient learning (A2–3, 6), and how crossmodal learning can change over time (A1, 3–5). A brief overview of the projects in this area is provided next, together with a look at their primary interrelationships.
Project A1 (Röder, Hong) will investigate how humans detect whether to integrate inputs from multiple modalities and how they adapt these integration rules in light of new experience. Project A1 will use psychophysics, cognitive modelling, and electrophysiological techniques to gain significant new insights into the mechanisms of crossmodal integration and learning in humans, and will therefore have a considerable impact on the theoretical framework of integration initiative II-T (Theory) as well as on the models of integration initiative II-M (Models).
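To make the notion of an "integration rule" concrete, a standard reference point (offered here as background, not as A1's specific model) is the maximum-likelihood account of cue combination, in which a bimodal estimate weights the unimodal visual and auditory estimates by their relative reliabilities:

\[
\hat{s}_{VA} = w_V\,\hat{s}_V + w_A\,\hat{s}_A,
\qquad
w_V = \frac{\sigma_V^{-2}}{\sigma_V^{-2} + \sigma_A^{-2}},
\qquad
w_A = 1 - w_V,
\]

where \(\sigma_V\) and \(\sigma_A\) are the unimodal noise levels. Whether such a rule should be applied at all, that is, whether the two signals share a common cause, is itself an inference the observer must make; learning when and how strongly to integrate is exactly the kind of adaptation A1 examines.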
Project A2 (Hilgetag, Guan) seeks to gain fundamental insights into the dynamics of crossmodal learning using a powerful new methodology: a genetically encoded activity reporter. This non-invasive, in vivo cellular-resolution imaging system, recently developed by the project PIs, is capable of recording the simultaneous activity of over 20,000 neurons in multiple cortical areas of the mouse brain. The project will contribute both to the theoretical framework of II-T and to the models of II-M.
In project A3 (Hummel, Gerloff, Xue) it is the sub-optimally functioning, aging brain that will shed light on the workings of crossmodal learning. This project will examine the neural dynamics of crossmodal learning in healthy elderly individuals experiencing normal, age-related cognitive decline. The goal is not just to improve our understanding of multisensory learning in humans, but to use the knowledge gained to devise interventional strategies that can ameliorate loss of optimal function. The modelling studies in this project will be designed in cooperation with projects having expertise in neural modelling, especially A2 (Hilgetag, Guan), A5 (Wermter, Liu), and A6 (Hu, Weber). In addition, the project will make contributions to all three integration initiatives.
The goal of project A4 (C. Zhang, J. Zhang) is to investigate a novel approach to robot perception based on recent machine-learning methods for learning crossmodal features (Zhang, T. 2010; Ngiam et al., 2011). This project will be essential for II-R and is thus intrinsically related to all the projects contributing to this integration initiative. It will also have a close, bidirectional relationship with project B5 (J. Zhang, Sun), which supplies the II-R robotics platform essential to both projects.
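As a rough illustration of the class of methods cited above, the following sketch trains a small bimodal autoencoder in the spirit of Ngiam et al. (2011): both modalities are encoded into a single shared code from which each modality is reconstructed. It assumes PyTorch, and all names and layer sizes (BimodalAutoencoder, VIS_DIM, AUD_DIM, SHARED_DIM) are hypothetical, not A4's actual design.

```python
# A minimal sketch, not project A4's actual model: shared-representation
# multimodal feature learning in the spirit of Ngiam et al. (2011).
# Assumes PyTorch; VIS_DIM, AUD_DIM, SHARED_DIM are illustrative sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

VIS_DIM, AUD_DIM, SHARED_DIM = 64, 32, 16

class BimodalAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_v = nn.Sequential(nn.Linear(VIS_DIM, 32), nn.ReLU())
        self.enc_a = nn.Sequential(nn.Linear(AUD_DIM, 32), nn.ReLU())
        self.shared = nn.Linear(32 + 32, SHARED_DIM)  # joint crossmodal code
        self.dec_v = nn.Linear(SHARED_DIM, VIS_DIM)
        self.dec_a = nn.Linear(SHARED_DIM, AUD_DIM)

    def forward(self, v, a):
        h = self.shared(torch.cat([self.enc_v(v), self.enc_a(a)], dim=-1))
        return self.dec_v(h), self.dec_a(h), h  # h is the shared feature

model = BimodalAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on stand-in random data: reconstruct both modalities
# from the joint code.
v, a = torch.randn(8, VIS_DIM), torch.randn(8, AUD_DIM)
rec_v, rec_a, _ = model(v, a)
loss = F.mse_loss(rec_v, v) + F.mse_loss(rec_a, a)
opt.zero_grad()
loss.backward()
opt.step()
```

A common refinement in this line of work is to zero out one modality at random during training, forcing the shared code to support crossmodal inference, e.g. reconstructing audio features from vision alone and vice versa.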
Project A5 (Wermter, Liu) focuses on the real-world evaluation of a neurocognitive model. Its goals are (a) to use novel neurocomputational techniques to improve existing models of the superior colliculus (SC) and linked cortical areas; (b) to implement a model of these cortico-collicular networks in a physical robot; and (c) to compare the model’s neural activity and the robot’s physical behaviour with those of biological systems.
In project A6 (Hu, Weber), deep neural networks will be extended with additional crossmodal learning mechanisms found in the brain. The primary goals are to improve machine-learning performance and to study the computational principles of the brain, particularly its mechanisms of crossmodal learning, integration, and representation. Furthermore, because A6 accepts complex, real-world visual and auditory data as input, its outputs, which are abstractions over these raw data, can be fed as input to the models of other projects, such as A5 (Wermter, Liu), C4 (Weber, Wermter, Liu), and C5 (Li, Menzel, Qu), which address the integration of vision and audition at different levels of the brain.
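One brain-inspired mechanism of the kind A6 might add, sketched purely as a hypothetical example (again in PyTorch; CrossmodalGate and all dimensions are invented for illustration), is multiplicative crossmodal gating: features from one modality modulate the other, and the fused vector is the sort of abstraction that could be handed to downstream models.

```python
# A hypothetical sketch, not A6's actual architecture: one brain-inspired
# crossmodal mechanism, multiplicative gating of visual features by audition.
# Assumes PyTorch; CrossmodalGate and all dimensions are invented here.
import torch
import torch.nn as nn

class CrossmodalGate(nn.Module):
    def __init__(self, vis_dim=128, aud_dim=40, out_dim=32):
        super().__init__()
        self.vis = nn.Linear(vis_dim, out_dim)
        self.gate = nn.Sequential(nn.Linear(aud_dim, out_dim), nn.Sigmoid())
        self.head = nn.Linear(out_dim, out_dim)

    def forward(self, v, a):
        # Audition modulates vision elementwise; the fused vector is the
        # kind of abstraction that could be passed to downstream models.
        return self.head(self.vis(v) * self.gate(a))

net = CrossmodalGate()
abstraction = net(torch.randn(1, 128), torch.randn(1, 40))
# downstream_model(abstraction)  # placeholder for, e.g., an A5-style model
```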
See also: Thematic Area B, Thematic Area C, Integration Initiatives