Microsoft Research Lab,
"Vigyan", No. 9,
Lavelle Road, Bengaluru
Hi! I’m a Research Software Development Engineer (RSDE) at the Microsoft Research Lab (MSRI) in Bangalore, India, as part of Manik Varma’s group, currently exploring the world of text-based generative models in the context of eXtreme Classification. Previously, I was an interdisciplinary Direct-Ph.D. scholar at the Robert Bosch Center for Cyber-Physical Systems (RBCCPS) at the Indian Institute of Science, Bangalore, where I worked at the Spectrum Lab in the Department of Electrical Engineering under the supervision of Prof. Chandra Sekhar Seelamantula. Before that, I was at M.S. Ramaiah Institute of Technology (MSRIT), Bangalore, where I completed my Bachelor of Engineering in Electronics and Communications. Check out my full CV or publications. Amongst other things, I love photography and Misty, my doggo!!
My research interests are broadly in the areas of machine learning and generative modeling, with my Ph.D. focused on generative adversarial networks (GANs). My Ph.D. was on building theoretical foundations for analyzing GANs, leveraging insights from classical signal processing, and designing network architectures motivated by those findings. Recently, I’ve (like everyone else who worked with GANs) also been exploring score-based diffusion and normalizing-flow models!
Throughout my Ph.D., I was generously funded by the Microsoft Research Ph.D. Fellowship in 2018, the RBCCPS Ph.D. Fellowship in 2020-2021 and 2021-2022, and the Qualcomm Innovation Fellowship in 2019, 2021, 2022, and 2023!
Looking to collaborate or know more about my research?
Reach out to me at [FirstLetterOfFirstName][LastName] (at) microsoft (dot) com
| Sep 15, 2023 | Successfully defended my Ph.D. thesis. #PhDone! |
| Jun 30, 2023 | Selected as a Super-Winner for the Qualcomm Innovation Fellowship 2023 |
| May 11, 2023 | Successfully submitted my Ph.D. thesis!! |
| Apr 28, 2023 | Successfully completed my Ph.D. colloquium!! |
| Mar 1, 2023 | Our paper has been accepted at CVPR 2023! See you in Vancouver, Canada |
- NeurIPS 20: Teaching a GAN What Not to Learn. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
- NeurIPSW 22: Bridging the Gap Between Coulomb GAN and Gradient-regularized WGAN. In Proceedings of "The Symbiosis of Deep Learning and Differential Equations II" at NeurIPS Workshops, 2022.
- ICASSP 23: A Game of Snakes and GANs. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023.
- CVPR 23: Spider GANs: Leveraging Friendly Neighbors to Accelerate GAN Training. In the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
- JMLR: Euler-Lagrange Analysis of Generative Adversarial Networks. Journal of Machine Learning Research (JMLR), 2023.
- arXiv: GANs Settle Scores! arXiv preprint, arXiv:2306.01654, 2023.
- arXiv: Data Interpolants – That's What Discriminators in Higher-order Gradient-regularized GANs Are. arXiv preprint, arXiv:2306.01654, 2023.
- NeurIPSW 23: f-GANs Settle Scores! In the NeurIPS 2023 Workshop on Diffusion Models, 2023.
- NeurIPSW 23: ELeGANt: An Euler-Lagrange Analysis of Wasserstein Generative Adversarial Networks. In The Symbiosis of Deep Learning and Differential Equations III at NeurIPS Workshops, 2023.