Vighnesh Subramaniam
I am an incoming MIT EECS PhD student at CSAIL.
Before my PhD, I obtained my bachelor's (2023) and MEng (2024) in Computer Science at MIT. During this time, I was a research assistant in the Infolab, advised by Dr. Boris Katz and Dr. Andrei Barbu, working on problems in deep learning, multimodal processing, and computational neuroscience. During my MEng, I also had the opportunity to work with Dr. Shuang Li and Prof. Antonio Torralba on problems in generative modeling. I am extremely fortunate to be supported by the NSF Graduate Research Fellowship as well as the Robert J. Shillman (1974) Fund Fellowship.
Email /
Scholar /
GitHub /
Twitter
Research
My research interests include multimodal deep learning, particularly at the intersection of NLP and computer vision, and neural network interpretability, with a touch of neuroscience and cognitive science. I highlight some publications below.
Revealing Vision-Language Integration in the Brain with Multimodal Networks
Vighnesh Subramaniam,
Colin Conwell,
Christopher Wang,
Gabriel Kreiman,
Boris Katz,
Ignacio Cases,
Andrei Barbu
International Conference on Machine Learning, 2024
International Conference on Learning Representations: Workshop on Multimodal Representation Learning, 2023
Project Page /
arXiv /
Code
Multimodal deep networks of vision and language are used to localize sites of vision-language integration in the brain and identify architectural motifs that are most similar to computations in the brain.
BrainBERT: Self-supervised representation learning for intracranial recordings
Christopher Wang,
Vighnesh Subramaniam,
Adam Uri Yaari,
Gabriel Kreiman,
Boris Katz,
Ignacio Cases,
Andrei Barbu
International Conference on Learning Representations, 2023
arXiv /
Code
BrainBERT is a transformer-based model that learns self-supervised representations of intracranial recordings, improving linear decodability of signals from the brain.