Recent News
- May 2025 Working at the Adobe Research Speech AI Team in SF!
- Jan 2025 Gave an invited talk at the UCSD MUSAIC Group on "Do Music Generation Models Encode Music Theory?"
- Nov 2024 Gave a talk on my paper "Do Music Generation Models Encode Music Theory?" at ISMIR 2024
- Oct 2024 Gave an invited talk at the Boston AI Music Meetup on "Do Music Generation Models Encode Music Theory?"
- Oct 2024 Gave an oral presentation (spotlight) at BayLearn 2024 on "Do Music Generation Models Encode Music Theory?"
- May 2024 Working at the Adobe Research Speech AI Team in Seattle!
- Apr 2024 Presented my poster on "Do Music Generation Models Encode Music Theory?" at New England NLP 2024
- Apr 2024 Presented our group poster at NYC Computer Vision Day 2024
About
I’m a CS PhD candidate at Brown advised by Professor Ellie Pavlick.
I am interested in building more interpretable and controllable multimodal foundation models that provide intuitive methods for empowering human creative expression.
I completed my BS and MEng in CS at MIT, where I worked with Professor Josh Tenenbaum on reasoning, creativity, and planning in LLMs, and with Professor Antonio Torralba on building multimodal benchmarks for embodied agents.
Recently, I’ve worked at Adobe Research on creating models that better understand paralinguistics (prosody, emotion) in speech. Previously, I held software, product, and research roles at Microsoft, IBM, and several startups.
Publications
- WACV 2026
- EMNLP 2025 Findings
- ISMIR 2024
- ICML Workshop on Theory of Mind in Communicating Agents 2023
- CogSci 2022