Postdoc Highlight: Dravyansh Sharma

Dravyansh Sharma, a Ph.D. graduate from Carnegie Mellon University, is a postdoctoral researcher at Toyota Technological Institute at Chicago (TTIC) and the Institute for Data, Econometrics, Algorithms, and Learning (IDEAL). His postdoctoral work is hosted by Avrim Blum (TTIC) and Aravindan Vijayaraghavan (Northwestern University).

IDEAL is “a multi-institution and transdisciplinary institute involving the University of Illinois Chicago; Northwestern University; Toyota Technological Institute at Chicago; the University of Chicago; Loyola University Chicago and Illinois Institute of Technology, in partnership with members of the Learning Theory team at Google that focuses on key aspects of the theoretical foundations of data science,” according to the IDEAL website.

Sharma joined TTIC after discovering a fellowship opportunity through TTIC/IDEAL, though he was already familiar with the institution.

“I had already visited TTIC before when I came to do a talk and for workshops. I knew it was a fantastic place to work with a great research ecosystem with other universities.”


Advancing the Theory of Machine Learning

Sharma’s research focuses on machine learning theory, adversarial robustness, and game theory, driven by a fascination with both the power and responsibility of modern AI.

“Machine learning and AI are dominating so many aspects of modern life,” Sharma said. “What has always excited me is the brilliance it takes to design algorithms and to use deep mathematics to prove things about them. And today we can use AI to assist us with that!”

During his Ph.D., Sharma explored the theoretical foundations of machine learning, beginning with classical models such as linear regression and decision trees before moving to deep neural networks, systems that are widely used in practice but often lack rigorous theoretical foundations.

A central focus of his work is hyperparameter optimization: the process of tuning the many “knobs” that control how machine learning models are trained. Previously, Sharma worked at Google’s Mountain View office, where he saw firsthand how critical and challenging hyperparameter tuning can be in large-scale applied systems.

“In practice, tuning hyperparameters requires a tremendous amount of engineering effort,” he said. “But there hasn’t been a strong theoretical framework for understanding why certain choices work.”

That experience shaped his research agenda: to develop principled, mathematically grounded approaches to algorithm design, even for modern deep networks. Over the course of his Ph.D. and continuing into his postdoctoral work, Sharma developed new mathematical tools that allow researchers to give data-driven guarantees for key hyperparameters, such as learning rates, which are fundamental to the success of neural networks.
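For readers new to the idea, the sketch below (in Python, with all names and numbers purely illustrative) shows the simplest form a data-driven hyperparameter choice can take: try several candidate values of one "knob," here the learning rate, on held-out data and keep the best. This toy example is not Sharma's method, which is about proving formal guarantees for such choices; it only illustrates the kind of tuning his theory analyzes.

```python
# Toy illustration of data-driven hyperparameter tuning: pick the
# learning rate with the lowest validation loss for gradient descent
# on a small least-squares problem. All values here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=100)

# Hold out part of the data to evaluate each candidate learning rate.
X_train, X_val = X[:80], X[80:]
y_train, y_val = y[:80], y[80:]

def train(lr, steps=200):
    """Plain gradient descent on mean squared error."""
    w = np.zeros(5)
    for _ in range(steps):
        grad = 2 * X_train.T @ (X_train @ w - y_train) / len(y_train)
        w -= lr * grad
    return w

# A data-driven choice: evaluate each "knob" setting, keep the best.
candidates = [0.001, 0.01, 0.1, 0.3]
val_losses = {lr: np.mean((X_val @ train(lr) - y_val) ** 2) for lr in candidates}
best_lr = min(val_losses, key=val_losses.get)
print(f"validation losses: {val_losses}")
print(f"best learning rate: {best_lr}")
```

In practice the search space is vastly larger and evaluations far more expensive, which is why theoretical guarantees about which settings will work, of the kind Sharma's research provides, matter.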

“These were long-standing open questions. Deep networks work remarkably well in practice, but the theoretical understanding has lagged behind. It’s been exciting to reach a point where we have powerful, rigorous tools for designing models actually deployed in the real world.”

A major milestone in this work was a tutorial he helped deliver at NeurIPS, one of the leading conferences in machine learning, titled “New Frontiers of Hyperparameter Optimization: Recent advances and open challenges in theory and practice.”

“What I really appreciated about the tutorial was that it brought people into the same room who might not naturally interact,” Sharma said. “We want the theory to stay relevant to practice, and we also want practitioners to understand what the theory can and cannot currently guarantee.”

Beyond advancing theory, Sharma is motivated by the broader implications of his work. “The excitement is that we can unlock the potential of designing more powerful algorithms, possibly even using machine learning itself to design algorithms,” he said. “At the same time, we can provide stronger guarantees about how they behave.”


Teaching, Collaboration, and Community

At TTIC, Sharma has found a highly collaborative environment that supports ambitious research across institutions.

“There are so many opportunities to participate as a presenter, organizer, or collaborator,” he said. “It’s remarkably easy to reach across institutions. The research environment here is very vibrant and well connected.”

In addition to research, Sharma has embraced teaching. During his postdoc, he designed and taught Machine Learning for Algorithm Design (TTIC 31290) alongside his mentor, Avrim Blum. The course introduced students from TTIC and the University of Chicago to cutting-edge ideas at the intersection of machine learning and algorithm design, many drawn directly from Sharma’s own research.

“It was both a teaching and a learning experience for me. Designing a course around my own research and seeing students’ excitement about a new field I’ve been working on was incredibly rewarding.”


Life in Chicago and Looking Ahead

Outside of research and teaching, Sharma has enjoyed building a life in Chicago.

“There’s so much to love about Chicago. It’s so beautiful, and I love the ability to go to the lake whenever you want. You can do any activity in this city and find any kind of food you want. I personally enjoy indoor climbing and biking down the lakeshore.”

As he continues his postdoctoral work at TTIC and IDEAL, Sharma remains focused on strengthening the theoretical foundations of machine learning while ensuring those advances remain closely tied to real-world practice. His goal is to help shape a future where the AI systems that increasingly influence everyday life are not only powerful, but principled and well understood.