Vighnesh Subramaniam

I am a first-year MIT EECS PhD student at CSAIL advised by Dr. Boris Katz and Dr. Andrei Barbu in the MIT Infolab.

Before my PhD, I obtained my bachelor's (2023) and MEng (2024) in Computer Science at MIT. During this time, I was a research assistant in the Infolab, working on problems in deep learning, multimodal processing, and computational neuroscience. During my MEng, I also had the opportunity to work with Dr. Shuang Li, Dr. Yilun Du, Dr. Igor Mordatch, and Prof. Antonio Torralba on problems related to generative modeling. I am extremely fortunate to be supported by the NSF Graduate Research Fellowship as well as the Robert J. Shillman (1974) Fund Fellowship.

Email  /  Scholar  /  Github  /  Twitter


Research

My research interests span multimodal deep learning, particularly at the intersection of NLP and computer vision, and neural network interpretability, with a touch of neuroscience/cognitive science. I highlight some publications below -- see my Scholar profile for a more complete list.

**NEW** Training the Untrainable: Introducing Inductive Bias via Representational Alignment
Vighnesh Subramaniam, David Mayo, Colin Conwell, Tomaso Poggio, Boris Katz, Brian Cheung, Andrei Barbu
Preprint
Neural Information Processing Systems: Workshop on Unifying Representations in Neural Models, 2024
Project Page / arXiv / Code

We investigate the relationship between a network's inductive biases and its representation space by designing methods that transfer the properties that make some networks trainable to networks that are difficult to train. We do this with a per-training-step alignment between the activations of the untrainable network and those of a trainable guide network. Our findings are surprising: some of these networks, like plain RNNs, become genuinely competitive!
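For a flavor of the idea, here is a minimal sketch of one training step with such an alignment penalty. The networks returning `(logits, activations)`, the MSE-based penalty, and the `align_weight` knob are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def training_step(target_net, guide_net, batch, optimizer, align_weight=1.0):
    """One training step: task loss plus an activation-alignment penalty.

    Assumes both networks return (logits, list_of_intermediate_activations);
    the actual alignment objective in the paper may differ.
    """
    x, y = batch
    logits, target_acts = target_net(x)   # hard-to-train network being guided
    with torch.no_grad():
        _, guide_acts = guide_net(x)      # trainable "guide" network (frozen here)

    task_loss = F.cross_entropy(logits, y)
    # Pull the target network's representations toward the guide's,
    # layer by layer, at every training step.
    align_loss = sum(F.mse_loss(t, g) for t, g in zip(target_acts, guide_acts))

    loss = task_loss + align_weight * align_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```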

**NEW** Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains
Vighnesh Subramaniam*, Yilun Du*, Joshua B. Tenenbaum, Antonio Torralba, Shuang Li, Igor Mordatch
International Conference on Learning Representations, 2025
Project Page / arXiv / Code

We designed a new self-improvement finetuning method for LLMs that preserves diversity. Our method builds on multiagent prompting frameworks like multiagent debate, but finetunes sets of generation and critic models that interact by proposing more accurate solutions and critiquing solutions more effectively. We see considerable improvements across several math reasoning tasks!
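The sketch below shows the shape of one such round under stated assumptions: `generate`, `finetune`, and `is_correct` are hypothetical stand-ins for the actual LLM stack, and the paper's real pipeline may differ. The key point it illustrates is that each agent is finetuned only on its own outputs, which keeps the population diverse.

```python
def multiagent_finetune_round(generators, critics, problems,
                              generate, finetune, is_correct):
    """One round of multiagent finetuning (illustrative sketch only).

    generators/critics: lists of models. generate(model, prompt) -> str,
    finetune(model, examples) -> model, and is_correct(problem, answer)
    are hypothetical helpers, not a real API.
    """
    gen_data = [[] for _ in generators]
    crit_data = [[] for _ in critics]

    for problem in problems:
        # Generation agents independently propose solutions.
        proposals = [generate(g, problem) for g in generators]
        # Critic agents see the pooled proposals and produce revisions.
        context = problem + "\n" + "\n".join(proposals)
        revisions = [generate(c, context) for c in critics]

        # Keep only verified outputs, attributed to the agent that made them.
        for i, sol in enumerate(proposals):
            if is_correct(problem, sol):
                gen_data[i].append((problem, sol))
        for i, rev in enumerate(revisions):
            if is_correct(problem, rev):
                crit_data[i].append((problem, rev))

    # Per-agent finetuning on each agent's own successful outputs
    # preserves diversity across the population.
    generators = [finetune(g, d) for g, d in zip(generators, gen_data)]
    critics = [finetune(c, d) for c, d in zip(critics, crit_data)]
    return generators, critics
```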

Brain Treebank: Large-scale intracranial recordings from naturalistic language stimuli
Christopher Wang*, Adam Uri Yaari*, Aaditya K Singh, Vighnesh Subramaniam, Dana Rosenfarb, Jan DeWitt, Pranav Misra, Joseph R. Madsen, Scellig Stone, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu
Neural Information Processing Systems Datasets and Benchmarks Track, 2024 (Oral, Top 1%)
Dataset / arXiv

The Brain Treebank is finally released! It is a large-scale dataset of intracranial recordings collected while subjects watch movies. We recorded from 10 subjects across 43 total hours; in total, subjects heard 36,000 sentences (205,000 words) while implanted with an average of 167 electrodes each.

Revealing Vision-Language Integration in the Brain with Multimodal Networks
Vighnesh Subramaniam, Colin Conwell, Christopher Wang, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu
International Conference on Machine Learning, 2024
International Conference on Learning Representations: Workshop on Multimodal Representation Learning, 2023
Project Page / arXiv / Code

We use multimodal deep networks of vision and language to localize sites of vision-language integration in the brain and to identify the architectural motifs most similar to computations in the brain.

BrainBERT: Self-supervised representation learning for intracranial recordings
Christopher Wang, Vighnesh Subramaniam, Adam Uri Yaari, Gabriel Kreiman, Boris Katz, Ignacio Cases, Andrei Barbu
International Conference on Learning Representations, 2023
arXiv / Code

BrainBERT is a transformer-based model that learns self-supervised representations of intracranial recordings, improving linear decodability of brain activity.


Website design credits to Jon Barron.