Fulton-Watson-Copp Chair and Professor, Computer Science Dept., University of Illinois Urbana-Champaign
Abstract: Intrinsic images are maps of surface properties. A classical problem is to recover an intrinsic image, typically a map of surface lightness, from an image. The topic has mostly dropped from view, likely for three reasons: training data is mostly synthetic; evaluation is somewhat uncertain; and clear applications for the resulting albedo are missing. The decline of this topic has a consequence: mostly, we don't understand and can't mitigate the effects of lighting.
I will show the results of simple experiments that suggest that very good modern depth and normal predictors are strongly sensitive to lighting – if you relight a scene in a reasonable way, the reported depth will change. This is intolerable. To fix this problem, we need to be able to produce many different lightings of the same scene. I will describe a method to do so. First, one learns a method to estimate albedo from images without any labelled training data (which turns out to perform well under traditional evaluations). Then, one forces an image generator to produce many different images that have the same albedo – with care, these are relightings of the same scene. Finally, a GAN inverter allows us to apply the process to real images. I will show some interim results suggesting that learned relightings might genuinely improve estimates of depth, normal and albedo.
Bio: I am the Fulton-Watson-Copp Chair in Computer Science at the University of Illinois Urbana-Champaign, a position I have held since 2014; I moved there from U.C. Berkeley, where I was also a full professor. I have published over 170 papers on computer vision, computer graphics, and machine learning. I have served as program co-chair or general co-chair for vision conferences on many occasions. I received an IEEE Technical Achievement Award in 2005 for my research. I became an IEEE Fellow in 2009 and an ACM Fellow in 2014. My textbook, "Computer Vision: A Modern Approach" (joint with J. Ponce and published by Prentice Hall), was widely adopted as a course text. My recent textbook, "Probability and Statistics for Computer Science", is in the top quartile of Springer computer science chapter downloads. A further textbook, "Applied Machine Learning", has just appeared in print. I have served two terms as Editor-in-Chief of IEEE TPAMI. I serve on a number of scientific advisory boards.
Host: Greg Shakhnarovich
Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=60b3f587-be8f-428a-8706-afbd012ab9bc
Professor, Johns Hopkins University, ACL Fellow
Abstract: Large autoregressive language models have been amazingly successful. Nonetheless, should they be integrated with older AI techniques such as explicit knowledge representation, planning, and inference? I'll discuss three possible reasons.
As possible directions, I'll outline some costly but interesting extensions to the standard autoregressive language models: neural FSTs, lookahead models, and nested latent-variable models. Much of this work is still in progress, so the focus will be on designs rather than results. Collaborators include Chu-Cheng Lin, Weiting (Steven) Tan, Li (Leo) Du, Zhichu (Brian) Lu, and Hongyuan Mei.
Bio: Jason Eisner is a Professor of Computer Science at Johns Hopkins University, as well as Director of Research at Microsoft Semantic Machines. He is a Fellow of the Association for Computational Linguistics. At Johns Hopkins, he is also affiliated with the Center for Language and Speech Processing, the Mathematical Institute for Data Science, and the Cognitive Science Department. His goal is to develop the probabilistic modeling, inference, and learning techniques needed for a unified model of all kinds of linguistic structure. His 150+ papers have presented various algorithms for parsing, machine translation, and weighted finite-state machines; formalizations, algorithms, theorems, and empirical results in computational phonology; and unsupervised or semi-supervised learning methods for syntax, morphology, and word-sense disambiguation. He is also the lead designer of Dyna, a declarative programming language that provides an infrastructure for AI algorithms. He has received two school-wide awards for excellence in teaching, as well as recent Best Paper Awards at ACL 2017, EMNLP 2019, and NAACL 2021 and an Outstanding Paper Award at ACL 2022.
Host: Karen Livescu
Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=6355939c-49d5-496a-9d9e-af8e0117f336
Professor, Paul G. Allen School of Computer Science & Engineering, University of Washington, MacArthur Fellow, ACL Fellow, Distinguished Research Fellow at Institute for Ethics in AI at Oxford
Host: Nati Srebro
Registration to attend virtually: TBA
Professor, The Weizmann Institute of Science
Host: Avrim Blum
Registration to attend virtually: TBA
All talks will be held at TTIC in room #530, located at 6045 South Kenwood Avenue (intersection of 61st Street and Kenwood Avenue).
Parking: Street parking, or the free lot on the corner of 60th Street and Stony Island Avenue.
For questions and comments contact Nati Srebro.