My research focuses on brain encoding models that map between sensory and cognitive stimuli and the neural representations they evoke, using EEG and fMRI as primary measurement modalities. I develop computational methods to characterize functional connectivity, network dynamics, and cognitive and affective biomarkers in healthy and clinical populations.
Building on this foundation, I extend encoding and decoding frameworks to multimodal settings, integrating physiological signals (EEG, PPG) with language, speech, and audio representations to support personalized human-state estimation, neuroadaptive AI systems, and digital therapeutics.
My broader interests span computational neuroscience, brain–computer interfaces, and medical imaging AI.