Prof. Baoliang Lu
Project Description and Objectives
Emotion plays an essential role in natural communication among humans. In recent years, there has been a growing trend in affective brain-computer interface (aBCI) research to endow systems with the ability to detect, process, and respond to users’ emotional states. In this project, we focus on emotion recognition from EEG and eye movements using machine learning algorithms.
Since emotions are complex psycho-physiological phenomena associated with many nonverbal cues, it is difficult to build robust emotion recognition models from a single modality. Signals from different modalities capture different aspects of emotion, and their complementary information can be integrated to build models that are more robust than existing unimodal approaches. Multimodal deep learning is a recently developed approach that can extract shared representations across modalities. The complementary characteristics of EEG and eye movements for different emotions are also an interesting research topic, since the two signals reflect users’ internal cognitive states and external subconscious behaviors, respectively.
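As a rough illustration of feature-level multimodal fusion, the sketch below z-scores each modality, concatenates the features, and projects them onto a shared low-dimensional subspace. PCA here merely stands in for the shared-representation layer that a multimodal deep network would learn, and the feature dimensions and synthetic data are assumptions for illustration, not the project's actual datasets.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical feature matrices: 100 samples of EEG band-power features
# and eye-movement statistics (fixation, saccade, pupil measures).
# The sizes are illustrative assumptions only.
eeg = rng.normal(size=(100, 310))
eye = rng.normal(size=(100, 33))

def fuse_and_project(eeg, eye, k=16):
    """Feature-level fusion: z-score each modality, concatenate, and
    project onto the top-k principal directions as a simple stand-in
    for a learned shared representation."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

    fused = np.hstack([zscore(eeg), zscore(eye)])
    # PCA via SVD of the centered, fused feature matrix
    _, _, vt = np.linalg.svd(fused - fused.mean(axis=0), full_matrices=False)
    return fused @ vt[:k].T

shared = fuse_and_project(eeg, eye)
```

In a deep multimodal model, the linear projection would be replaced by modality-specific encoders feeding a joint hidden layer, but the overall flow (normalize per modality, fuse, compress to a shared space) is the same.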
Individual differences across subjects and the non-stationary nature of EEG signals limit the generalization of aBCIs in real-world applications. Transfer learning methods are a promising way to address this problem: they transfer knowledge from a source domain to a target domain when few or no labeled samples are available from the subjects of interest.
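One simple transfer learning baseline in this spirit is correlation alignment (CORAL), which re-colors source-subject features so their second-order statistics match the target subject's, without needing target labels. The sketch below uses synthetic data as a stand-in for EEG features; it is one illustrative method, not necessarily the one the project will adopt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for source- and target-subject EEG features
# with deliberately different covariance structures.
Xs = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))
Xt = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 8))

def coral(Xs, Xt, eps=1e-5):
    """Align source features to the target covariance (CORAL):
    whiten the source, then re-color with the target covariance."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)
    Ls = np.linalg.cholesky(Cs)   # Cs = Ls @ Ls.T
    Lt = np.linalg.cholesky(Ct)   # Ct = Lt @ Lt.T
    Xs_white = Xs @ np.linalg.inv(Ls).T   # covariance ~ identity
    return Xs_white @ Lt.T                # covariance ~ Ct

Xs_aligned = coral(Xs, Xt)
```

After alignment, a classifier trained on the source subject's (aligned) labeled features can be applied to the target subject, which is the basic pattern shared by the transfer methods the project will compare.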
The objectives of the project are to study several transfer learning methods and to systematically compare them for EEG-based emotion recognition.
Emotion experiments to collect EEG and eye-tracking data
EEG and eye movement data analysis
Learn and implement multimodal deep learning and transfer learning
Presentation and reports