Jitendra Malik

Wednesday, October 11, 2017 at 11:00 am

Jitendra Malik - Deep Visual Understanding from Deep Learning

Video

Arthur J. Chick Professor of Electrical Engineering and Computer Sciences
University of California, Berkeley

Abstract: Deep learning and neural networks, coupled with high-performance computing and big data, have led to remarkable advances in computer vision. For example, we now have a good capability to detect and localize people or objects. But we are still quite short of “visual understanding”. I’ll sketch some of our recent progress towards this grand goal. One direction is to explore the role of feedback or recurrence in visual processing. Another is to unify geometric and semantic reasoning for understanding the 3D structure of a scene. Most importantly, vision in a biological setting, and for many robotics applications, is not an end in itself but a means to guide manipulation and locomotion. I will show results on learning to perform manipulation tasks by experimentation, as well as on a cognitive mapping and planning architecture for mobile robotics.

Bio: Jitendra Malik is Arthur J. Chick Professor and Department Chair of Electrical Engineering and Computer Sciences at UC Berkeley. Over the past 30 years, Prof. Malik’s research group has worked on many different topics in computer vision. Several well-known concepts and algorithms arose in this research, such as anisotropic diffusion, normalized cuts, high dynamic range imaging, shape contexts, and R-CNN. Prof. Malik received the Distinguished Researcher in Computer Vision Award from the IEEE PAMI-TC, the K.S. Fu Prize from the International Association for Pattern Recognition, and the Allen Newell Award from ACM and AAAI. He has been elected to the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences. He earned a B.Tech in Electrical Engineering from the Indian Institute of Technology, Kanpur, in 1980 and a PhD in Computer Science from Stanford University in 1985.

Host: Greg Shakhnarovich

Dan Jurafsky

Monday, October 23, 2017 at 3:00 pm

Dan Jurafsky - Automatically Extracting Social Meaning from Language

Video

Professor of Linguistics and Computer Science
Stanford University

Abstract: I describe three lines of research from our lab on computationally extracting social meaning from language, meaning that takes into account social relationships between people.  We study interactions between police and community members in traffic stops recorded in body-worn camera footage, using language to measure interaction quality, study the role of race, and draw suggestions for going forward in this fraught area.  We computationally model the language of scientific papers and the networks of scientists to better understand the role of interdisciplinarity in scientific innovation and the implications for the history of artificial intelligence.  And we show how understanding of framing and socio-economic variables can be extracted from the language of food:  menus, reviews, and advertising. Together, these studies highlight the importance of social context for interpreting the latent meanings behind the words we use.

Bio: Dan Jurafsky is Professor and Chair of Linguistics and Professor of Computer Science at Stanford University. His research has focused on the extraction of meaning, intention, and affect from text and speech, on the processing of Chinese, and on applying natural language processing to the cognitive and social sciences. Dan is also passionate about NLP education; he is the co-author of the widely-used textbook “Speech and Language Processing” and co-taught the first massive open online class on natural language processing. The recipient of a 2002 MacArthur Fellowship, Dan is also a 2015 James Beard Award Nominee for his book, “The Language of Food: A Linguist Reads the Menu”.

Host: Karen Livescu

Naftali Tishby

Wednesday, April 18, 2018 at 10:30 am

Naftali Tishby - Information Theory of Deep Learning 

Professor of Computer Science and incumbent of the Ruth and Stan Flinkman Chair for Brain Research at the Edmond and Lily Safra Center for Brain Science (ELSC)
Hebrew University of Jerusalem

Abstract: I will present a novel, comprehensive theory of large-scale learning with deep neural networks, based on the correspondence between deep learning and the Information Bottleneck framework. The theory rests on the following components: (1) Rethinking learning theory: I will prove a new generalization bound, the input-compression bound, which shows that compression of the input variable is far more important for generalization than the dimension of the hypothesis class, an ill-defined notion for deep learning. (2) I will then prove that for large-scale deep neural networks, the mutual information between the last hidden layer and the input and output variables provides a complete characterization of the sample complexity and accuracy of the network. This establishes the Information Bottleneck bound as the optimal trade-off between sample complexity and accuracy for ANY learning algorithm. (3) I will then show how stochastic gradient descent, as used in deep learning, actually achieves this optimal bound. In that sense, deep learning is a method for solving the Information Bottleneck problem for large-scale supervised learning problems. The theory gives concrete predictions for the structure of the layers of deep neural networks, and design principles for such networks, which turn out to depend solely on the joint distribution of the input and output and on the sample size.
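For reference, the Information Bottleneck trade-off that the abstract builds on can be stated in one line. This is the standard formulation from Tishby, Pereira, and Bialek's original paper, quoted here as background rather than taken from the talk itself:

% Information Bottleneck Lagrangian: T is a stochastic representation
% of the input X, Y is the target variable, and beta >= 0 sets the
% trade-off between compressing X and preserving information about Y.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)

In the abstract's terms, I(X;T) plays the role of the compression (sample-complexity) term and I(T;Y) the accuracy term; the claim is that the layer representations found by stochastic gradient descent approach this optimal trade-off curve.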

Bio: Dr. Naftali Tishby is a professor of Computer Science and the incumbent of the Ruth and Stan Flinkman Chair for Brain Research at the Edmond and Lily Safra Center for Brain Science (ELSC) at the Hebrew University of Jerusalem. He is one of the leaders of machine learning research and computational neuroscience in Israel, and his numerous former students hold key academic and industrial research positions all over the world. Prof. Tishby was the founding chair of the new computer engineering program, and a director of the Leibniz Research Center in Computer Science, at the Hebrew University. He received his PhD in theoretical physics from the Hebrew University in 1985 and was a research staff member at MIT and Bell Labs from 1985 to 1991. Prof. Tishby was also a visiting professor at Princeton NECI, the University of Pennsylvania, UCSB, and IBM Research. His current research is at the interface between computer science, statistical physics, and computational neuroscience. He pioneered various applications of statistical physics and information theory in computational learning theory. More recently, he has been working on the foundations of biological information processing and the connections between dynamics and information. With his colleagues, he has introduced new theoretical frameworks for optimal adaptation and efficient information representation in biology, such as the Information Bottleneck method and the Minimum Information principle for neural coding.

Host: Nathan Srebro

Cynthia Dwork

Monday, May 7, 2018 at 10:30 am

Cynthia Dwork - What’s Fair?

Gordon McKay Professor of Computer Science
Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study
Affiliated Faculty at Harvard Law School
Harvard University

Abstract: Data, algorithms, and systems have biases embedded within them reflecting designers’ explicit and implicit choices, historical biases, and societal priorities. They form, literally and inexorably, a codification of values. “Unfairness” of algorithms – for tasks ranging from advertising to recidivism prediction – has attracted considerable attention in the popular press. The talk will discuss recent work in the nascent mathematically rigorous study of fairness in classification and scoring.
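As a concrete taste of what a mathematically rigorous fairness definition looks like, one canonical example from Dwork's earlier work with Hardt, Pitassi, Reingold, and Zemel ("Fairness Through Awareness", ITCS 2012) is individual fairness, stated as a Lipschitz condition. The abstract does not say the talk uses exactly this formulation, so treat it as representative background:

% Individual fairness as a Lipschitz condition: a randomized
% classifier M maps each individual to a distribution over outcomes.
% Given a task-specific similarity metric d on individuals and a
% distance D on outcome distributions, M is individually fair if
D\big(M(x), M(y)\big) \;\le\; d(x, y) \quad \text{for all individuals } x, y.

Informally: individuals who are similar with respect to the task at hand must receive similar distributions over outcomes.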

Bio: Cynthia Dwork is the Gordon McKay Professor of Computer Science at the Paulson School of Engineering and Applied Sciences, the Radcliffe Alumnae Professor at the Radcliffe Institute for Advanced Study, and an Affiliated Faculty Member at Harvard Law School. She has done seminal work in distributed computing, cryptography, and privacy-preserving data analysis. Her most recent foci include stability in adaptive data analysis (especially via differential privacy) and fairness in classification.

Host: Nathan Srebro

Location:

All talks will be held at TTIC in room #526/530, located at 6045 South Kenwood Avenue (intersection of 61st Street and Kenwood Avenue).

Parking: Street parking, or the free lot at the corner of 60th Street and Stony Island Avenue.

For questions and comments contact Nathan Srebro.


Previous Distinguished Lecture Series