Research areas of the Cognitive Systems Lab
The Cognitive Systems Lab uses an interdisciplinary and integrative approach that combines highly realistic, interactive, and immersive virtual environments with high-resolution, multimodal physiological measurements and state-of-the-art AI-driven analysis tools.
We conduct research in the following four areas:
Decision making
Human-centered AI
Social face processing
Multisensory and multimodal object processing
ERP results of players engaged in our novel game-like paradigm, which incentivizes deceptive (i.e., lying) behavior.
How do we make high-risk decisions under pressure? How do we choose to cheat in a game?
This research line combines virtual reality, psychophysiological measures, and neuroimaging to chart and model human decision-making in high-risk situations.
We are currently recording several large-scale datasets that combine psychophysiological measures (including eye tracking), EEG, and detailed behavioral data - stay tuned!
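As a rough illustration of how such multimodal recordings can be brought together, the following sketch (not our actual pipeline) aligns EEG data with behavioral event markers into trial epochs using MNE-Python; the file names, CSV columns, and event codes are hypothetical placeholders.

import mne
import numpy as np
import pandas as pd

# Load a raw EEG recording (hypothetical BrainVision file) and band-pass filter it
raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)
raw.filter(l_freq=0.1, h_freq=40.0)

# Behavioral log with trial onsets (in seconds) and decision codes - hypothetical format
behavior = pd.read_csv("subject01_behavior.csv")
events = np.array([[int(t * raw.info["sfreq"]), 0, int(code)]
                   for t, code in zip(behavior["onset_s"], behavior["decision_code"])])

# Cut the continuous EEG into epochs around each decision (codes 1/2 are hypothetical)
epochs = mne.Epochs(raw, events=events, event_id={"risky": 1, "safe": 2},
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)
erp_risky = epochs["risky"].average()  # ERP for risky decisions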
Publications:
U. Ju, J. Kang and C. Wallraven. Testing intuitive decision-making in VR: personality traits predict decisions in an accident situation. In Proceedings of IEEE Conference on Virtual Reality (IEEE VR2016), IEEE, 2016.
U. Ju, J. Kang and C. Wallraven. To brake or not to brake? Personality traits predict decision-making in an accident situation. Frontiers in Psychology, 2019.
U. Ju, J. Kang and C. Wallraven. You or Me? Personality Traits Predict Sacrificial Decisions in an Accident Situation. IEEE Transactions on Visualization and Computer Graphics, 2019.
Y. Chen and C. Wallraven. Pop or not? EEG-correlates of risk-taking behavior in the balloon analogue risk task. In Proceedings of the 5th International Winter Conference on Brain Computer Interfaces, IEEE, 2017.
T. Kang, Y. Chen, S. Fazli and C. Wallraven. EEG-based prediction of successful memory formation during vocabulary learning. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(11): 2377-2389, 2020.
Y. Chen, S. Fazli and C. Wallraven. An EEG Dataset of Neural Signatures in a Competitive Two-Player Game Encouraging Deceptive Behavior. Scientific Data, 11(389), 2024.
J. Jang and C. Wallraven. Whom to Save? A Novel, Realistic Paradigm for Studying Human Decision-Making in Moral Dilemmas. In IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2024.
H. S. Ryu, U. Ju and C. Wallraven. Predicting Future Driving Decisions in an Accident Situation From Videos: A Combined Behavioral, Eye Gaze, and Computational Analysis. In IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2024.
J. Park and C. Wallraven. Investigating Subjective Anxiety Dynamics due to Personal Space Violations and COVID-19-related Stressors in a Social VR Simulation. In IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2024.
Results of extreme few-shot learning of facial landmarks via self-supervised pre-training with our 3fabrec architecture.
Here, we apply our experience in characterizing the human cognitive system to validate and improve existing AI approaches and to design new ones.
Publications:
C. Wallraven, B. Caputo, and A.B.A. Graf. Recognition with Local Features: the Kernel Recipe. In ICCV 2003, volume 2, pages 257–264. IEEE Press, 2003.
C. Wallraven, R. Fleming, D. Cunningham, J. Rigau, M. Feixas and M. Sbert. Categorizing art: Comparing humans and computers. Computers and Graphics, 33(4): 484-495, 2009.
J. Rigau, M. Feixas, M. Sbert and C. Wallraven. Toward Auvers Period: Evolution of van Gogh's Style. In Computational Aesthetics, 2010.
C. Herdtweck and C. Wallraven. Beyond the horizon: perceptual and computational estimates of horizon position. In Applied Perception, Graphics, and Visualization, 2010.
C. Herdtweck and C. Wallraven. Estimation of the horizon in photographed outdoor scenes by human and machine. PLoS One, 2013.
C. Wallraven and J. Freese. We remember what we like?: Aesthetic value and memorability for photos and artworks - a combined behavioral and computational study. Journal of Vision, 2015.
J. Fischer, D. Cunningham, D. Bartz, C. Wallraven, H. Bülthoff, and W. Strasser. Measuring the discernability of virtual objects in conventional and stylized augmented reality. In 12th Eurographics Symposium on Virtual Environments, pages 53–61, 05 2006.
J. Wu, R. Martin, P. Rosin, X. Sun, Y. Lai, Y. Liu and C. Wallraven. Use of non-photorealistic rendering and photometric stereo in making bas-reliefs from photographs. Graphical Models, 2014.
B. Browatzki and C. Wallraven. 3fabrec: Fast few-shot face alignment by reconstruction. In Proceedings of CVPR 2020 (oral presentation), 2020.
T. Kang, Y. Chen and C. Wallraven. I see artifacts: ICA-based EEG artifact removal does not improve deep network decoding across three BCI tasks. Journal of Neural Engineering, 21: 066036, 2024.
An extreme example of the "other-race" effect: a monkey face in which the eyes and mouth have been turned upside down (3rd column) does not look "weird", whereas a human face does (1st column).
How are faces recognized? Which features are important? How do we process facial movement into emotions and social signals? Does culture play a role? These are just some of the questions we address in the lab using a variety of approaches.
Current work in the lab focuses on how to parametrize affective signals and on the degree to which humans and face-recognition algorithms solve various face-processing tasks in similar ways (or not!).
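One simple way to quantify such human-model (dis)agreement, sketched here under assumed data formats rather than as our actual analysis, is to correlate a matrix of human pairwise similarity ratings with the cosine similarities of a face-recognition model's embeddings; the file names below are hypothetical placeholders.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

human_sim = np.load("human_similarity_ratings.npy")  # (n_faces, n_faces) rating matrix, hypothetical
embeddings = np.load("model_face_embeddings.npy")    # (n_faces, d) model embeddings, hypothetical

# Cosine similarity between every pair of model embeddings
model_sim = 1.0 - squareform(pdist(embeddings, metric="cosine"))

# Correlate the off-diagonal (upper-triangle) entries of both similarity matrices
iu = np.triu_indices_from(human_sim, k=1)
rho, p = spearmanr(human_sim[iu], model_sim[iu])
print(f"Human-model similarity correlation: rho={rho:.2f}, p={p:.3g}")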
Publications:
M. Nusseck, D. W. Cunningham, C. Wallraven, and H. Bülthoff. The contribution of different facial regions to the recognition of conversational expressions. Journal of Vision, 8(8):1:1-23, 06 2008.
D. W. Cunningham and C. Wallraven. Temporal information for the recognition of conversational expressions. Journal of Vision, 9(13):1-17, 12 2009.
C. Dahl, C. Wallraven, H. Bülthoff, and N. Logothetis. Humans and macaques employ similar face-processing strategies. Current Biology, 19(6):509-513, 03 2009.
C. Dahl, N. Logothetis, H. Bülthoff, and C. Wallraven. The Thatcher illusion in humans and monkeys. Proceedings of the Royal Society of London B, 277(1696):2973-2981, 10 2010.
A. Shin, S. Lee, H. Bülthoff, and C. Wallraven. A morphable 3D model of Korean faces. In Proceedings of SMC 2012, 10 2012.
K. Kaulard, D.W. Cunningham, H.H. Bülthoff, and C. Wallraven. The MPI facial expression database - a validated database of emotional and conversational facial expressions. PLoS One, 7(3):e32321, 01 2012.
A. Aubrey, D. Marshall, P. Rosin, J. Vandeventer, D. Cunningham, and C. Wallraven. Cardiff conversation database (CCDB): A database of natural dyadic conversations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), V&L Net Workshop, 06 2013.
C. Wallraven. Touching on face space: Comparing visual and haptic processing of face shapes. Psychonomic Bulletin & Review, 21(4): 995-1002, 2014.
C. Wallraven and L. Dopjans. Visual experience is necessary for efficient haptic face recognition. NeuroReport, 24(5): 254-258, 2013.
J. Kang, B. Ham, and C. Wallraven. Cannot avert the eyes: Reduced attentional blink toward others' emotional expressions in empathic people. Psychonomic Bulletin & Review, 24: 810-820, 2017.
D. Derya, J. Kang, D. Y. Kwon and C. Wallraven. Facial Expression Processing Is Not Affected by Parkinson's Disease, But by Age-Related Factors. Frontiers in Psychology, 10: 2458, 2019.
I. Bülthoff, W. Jung, R. G. M. Armann and C. Wallraven. Predominance of eyes and surface information for face race categorization. Scientific Reports, 11(1): 1927, 2022.
C. Malak and C. Wallraven. The importance of external features for categorizing ethnicity: Can Koreans identify Korean, Japanese, and Chinese faces? PsyArXiv, https://osf.io/preprints/psyarxiv/gxky7_v1, 2024.
Mixed reality setup in which participants touch 3D-printed objects (left) while seeing their exploration in virtual reality (right).
How are vision and touch combined to help us process the world around us?
A large series of experiments combining computer graphics with 3D printing and mixed reality technology shows that humans are not only visual but also haptic experts: we can process very complex shape information by touch alone! In addition, our brain is able to integrate information from both vision and touch to better process objects.
Current work in the lab focuses on how affective touch is processed and communicated. In addition, we are conducting experiments on how touch interacts with other cognitive tasks.
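A standard way to think about this kind of visual-haptic integration is the textbook maximum-likelihood (reliability-weighted) cue-combination model; the sketch below illustrates that model with hypothetical numbers and is not meant as our own model of the data.

def integrate(est_v, var_v, est_h, var_h):
    """Combine a visual and a haptic estimate, weighting each cue by its reliability (1/variance)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
    w_h = 1.0 - w_v
    combined = w_v * est_v + w_h * est_h
    combined_var = 1.0 / (1.0 / var_v + 1.0 / var_h)  # never larger than either single-cue variance
    return combined, combined_var

# Example: a precise visual size estimate dominates a noisier haptic one
print(integrate(est_v=5.0, var_v=0.5, est_h=6.0, var_h=2.0))  # -> (5.2, 0.4)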
Publications:
B. Browatzki, V. Tikhanoff, G. Metta, H. Bülthoff, and C. Wallraven. Active In-Hand Object Recognition on a Humanoid Robot. IEEE Transactions on Robotics, 30: 1260-1269, 2014.
C. Wallraven, L. Whittingstall, and H. H. Bülthoff. Learning to recognize face shapes through serial exploration. Experimental Brain Research, 226(4): 513-523, 2013.
N. Gaissert, H. H. Bülthoff, and C. Wallraven. Similarity and categorization: From vision to touch. Acta Psychologica, 138:219-230, 07 2011.
H. Lee and C. Wallraven. Exploiting object constancy: effects of active exploration and shape morphing on similarity judgments of novel objects. Experimental Brain Research, 225:277-289, 2013.
H. Lee Masson, J. Bulthé, H. Op De Beeck, and C. Wallraven. Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences. Cerebral Cortex, 2015. doi: 10.1093/cercor/bhv170.
H. Lee Masson, C. Wallraven, and L. Petit. Can touch this: cross-modal shape categorization performance is associated with micro-structural characteristics of white matter association pathways. Human Brain Mapping, 2017.
H. Kang, T. Kang and C. Wallraven. Putting vision and touch into conflict: results from a multimodal mixed reality setup. IEEE Transactions on Visualization and Computer Graphics, 29(12): 5224-5234, 2022.