I am an AI researcher at Meta FAIR, where I develop novel deep neural network architectures and large-scale training methods. Over the past decade, my work on deep learning architectures and distributed training has been adopted across industry and academia.
Currently, I work on AI for scientific discovery, developing generative models, scaling methods, and datasets for atomic systems. My research also spans distributed training algorithms for large-scale systems and novel neural architectures. Previously, I led the FAIR Speech team, where we built multilingual and self-supervised speech models that were widely adopted in production at Meta.
I have also worked on accelerating medical imaging with deep learning models now used in clinical practice, contributed to early neural speech recognition systems such as Deep Speech 2, and pioneered multimodal fusion methods.
I hold a Master's in Language Technologies from Carnegie Mellon University, where I applied machine learning to study epidemic dynamics.
My research has been covered by outlets including the Wall Street Journal, CNBC, USA Today, Reuters, Fortune, TechCrunch, and MIT Technology Review.