RAP Alumni Highlight - Steven Hanneke

In August of 2021, Professor Steve Hanneke finished his three-year term as a Research Assistant Professor at TTIC. Starting this fall, he will be joining the faculty at Purdue University as an Assistant Professor.

He first learned of TTIC as an undergraduate at the University of Illinois Urbana-Champaign. After visiting the Institute for a workshop, he felt that the environment was one where he could see himself growing as an early-career researcher. “TTIC is a big name in learning theory. There’s a long list of celebrities in the field who were part of the RAP program at some point in the past, or had positions at TTIC in some capacity,” said Prof. Hanneke.

As an RAP, his role has mostly revolved around research. Prof. Hanneke has spent the last three years developing a research agenda on subjects that interest him, and utilizing TTIC’s resources for travel and meeting with other researchers in his field. His main research area is machine learning theory, which is concerned with understanding what types of provable guarantees are possible for learning algorithms.

“My focus is on reducing the amount of data that’s required for machine learning. The most direct way to do this is to simply design better learning algorithms, and prove that they use fewer data to achieve the same accuracy guarantees as previous methods. Another approach explores variations on the whole protocol, which can reduce the amount of data. For example, the learning algorithm itself could be allowed to interactively participate in assembling the data set. This is called active learning. I’ve done a lot of work on showing how it’s possible to use active learning to dramatically reduce the amount of data needed for learning,” he said.
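The savings active learning can offer are easiest to see in a classic toy setting: learning a one-dimensional threshold classifier. The sketch below is purely illustrative (the data, threshold value, and function names are invented here, not drawn from Prof. Hanneke's papers): a passive learner would need a label for every point, while an active learner that chooses its own queries can binary-search for the threshold using only a logarithmic number of labels.

```python
# Illustrative sketch: active learning of a 1D threshold classifier.
# Labeling every point passively costs n queries; binary search by an
# active learner costs about log2(n) queries for the same threshold.

def make_oracle(threshold):
    """Return a labeling function: 1 if x >= threshold, else 0."""
    return lambda x: 1 if x >= threshold else 0

def active_learn_threshold(points, query_label):
    """Estimate the threshold by querying labels only where uncertain."""
    points = sorted(points)
    if query_label(points[0]) == 1:
        return points[0], 1            # every point is positive
    if query_label(points[-1]) == 0:
        return float("inf"), 2         # every point is negative
    lo, hi, queries = 0, len(points) - 1, 2
    while hi - lo > 1:                 # maintain: label(lo)=0, label(hi)=1
        mid = (lo + hi) // 2
        queries += 1
        if query_label(points[mid]) == 1:
            hi = mid
        else:
            lo = mid
    return points[hi], queries         # smallest point labeled 1

points = [i / 1000 for i in range(1000)]
oracle = make_oracle(0.637)            # hypothetical true threshold
est, n_queries = active_learn_threshold(points, oracle)
# 1000 unlabeled points, but only about a dozen label queries are made.
```

The point of the sketch is the exponential gap: the learner recovers the same threshold a passive learner would, while asking for only a tiny fraction of the labels.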

Essentially, Prof. Hanneke hopes to create simpler, faster machine learning algorithms. Most recently, he has been working on a project concerning adversarial examples with TTIC student Omar Montasser and Professor Nati Srebro. Many machine learning algorithms exhibit an odd behavior: once trained, they can correctly classify new examples of the same type they were trained on, but if someone takes one of those correctly classified examples and changes it slightly, the algorithm suddenly classifies it incorrectly.

For example, if an algorithm is given an image, changing that image even in a way that is imperceptible to the human eye can entirely change how the algorithm perceives it. This type of situation could cause real-world havoc: imagine a self-driving car coming across a stop sign with a sticker on it. Suddenly, the car could see it as a speed limit sign, and drive right through.
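The mechanism behind this fragility can be sketched with a toy linear classifier (the weights and input below are invented for illustration, not taken from any specific system): when an input sits close to the decision boundary, a tiny nudge in the direction that hurts the classifier most, here a fast-gradient-sign-style step, flips the prediction while barely changing the input.

```python
# Illustrative sketch of an adversarial perturbation on a linear classifier.
# All numbers here are hypothetical, chosen so the input sits near the boundary.

def predict(w, x, b=0.0):
    """Linear classifier: sign of the dot product plus bias."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

w = [0.5, -0.3, 0.8]   # hypothetical learned weights
x = [1.0, 2.0, 0.2]    # an input the classifier labels +1, with a small margin

# Nudge each coordinate by a tiny epsilon against the sign of its weight:
# the worst-case perturbation of size eps (per coordinate) for a linear model.
eps = 0.05
x_adv = [xi - eps * (1 if wi >= 0 else -1) for wi, xi in zip(w, x)]
# x_adv differs from x by at most 0.05 in each coordinate, yet the
# classifier's prediction flips from +1 to -1.
```

Deep networks are far more complex than this toy model, but the same geometry, inputs lying near a decision boundary that small targeted perturbations can cross, is what adversarially robust algorithms must contend with.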

His goal is to design algorithms that are less vulnerable to this type of problem. Unfortunately, many algorithms currently in practical use are far from foolproof, but Prof. Hanneke and his collaborators have been working on designing new theoretically inspired algorithms that are provably robust against situations like this.

Prof. Hanneke has enjoyed his time working with students and faculty at TTIC, and having the opportunity to work in a unique environment where his ideas could immediately be appreciated by others around him. RAPs at the Institute can work as independently as they would like, but there is also ample opportunity to collaborate, to receive support in their research, and to grow professionally by learning from senior colleagues.

“It has certainly been an experience that I will cherish. I got an opportunity to work with graduate students, which is something I had not done previously to much extent. This was an incredibly rewarding and fruitful collaboration. I learned a lot from them, and I hope that they learned things from me. I’ll take this experience with me to Purdue when I am advising graduate students in that role,” said Prof. Hanneke.

This fall at Purdue, he will be teaching a course on learning theory. Prof. Hanneke is looking forward to designing this course, and hopes to cover some of the interesting open questions in learning theory.