David Forsyth

Wednesday, May 3, 2023 at 11:30am

David Forsyth - Intrinsic images, lighting and relighting without any labeling

Fulton-Watson-Copp Chair and Professor, Computer Science Dept., University of Illinois Urbana-Champaign

Abstract: Intrinsic images are maps of surface properties. A classical problem is to recover an intrinsic image, typically a map of surface lightness, from an image. The topic has mostly dropped from view, likely for three reasons: training data is mostly synthetic; evaluation is somewhat uncertain; and clear applications for the resulting albedo are missing. The decline of this topic has a consequence – mostly, we don’t understand and can’t mitigate the effects of lighting.

I will show the results of simple experiments that suggest that very good modern depth and normal predictors are strongly sensitive to lighting – if you relight a scene in a reasonable way, the reported depth will change. This is intolerable. To fix this problem, we need to be able to produce many different lightings of the same scene. I will describe a method to do so. First, one learns a method to estimate albedo from images without any labeled training data (which turns out to perform well under traditional evaluations). Then, one forces an image generator to produce many different images that have the same albedo – with care, these are relightings of the same scene. Finally, a GAN inverter allows us to apply the process to real images. I will show some interim results suggesting that learned relightings might genuinely improve estimates of depth, normal and albedo.

Bio: I am currently the Fulton-Watson-Copp Chair in Computer Science at the University of Illinois Urbana-Champaign, a position I have held since 2014; before that I was a full professor at U.C. Berkeley. I have published over 170 papers on computer vision, computer graphics, and machine learning. I have served as program co-chair or general co-chair for vision conferences on many occasions. I received an IEEE Technical Achievement Award in 2005 for my research. I became an IEEE Fellow in 2009 and an ACM Fellow in 2014. My textbook, “Computer Vision: A Modern Approach” (joint with J. Ponce and published by Prentice Hall), was widely adopted as a course text. My recent textbook, “Probability and Statistics for Computer Science”, is in the top quartile of Springer computer science chapter downloads. A further textbook, “Applied Machine Learning”, has just appeared in print. I have served two terms as Editor-in-Chief of IEEE TPAMI. I serve on a number of scientific advisory boards.

Host: Greg Shakhnarovich

Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=60b3f587-be8f-428a-8706-afbd012ab9bc

Jason Eisner

Wednesday, May 10, 2023 at 11:30am

Jason Eisner - Putting Planning and Reasoning Inside Language Models

Professor, Johns Hopkins University, ACL Fellow

Abstract: Large autoregressive language models have been amazingly successful. Nonetheless, should they be integrated with older AI techniques such as explicit knowledge representation, planning, and inference? I’ll discuss three possible reasons for doing so:

  1. Capacity: Current autoregressive models lack the computational capacity to attack combinatorially hard problems.
  2. Modularization: Results could be improved by consulting up-to-date domain knowledge, domain-specific theories, and systematic reasoning.
  3. Interpretability: Ideally, a generated answer should be able to discuss its underlying reasoning and the certainty of its conclusions.

As possible directions, I’ll outline some costly but interesting extensions to the standard autoregressive language models – neural FSTs, lookahead models, and nested latent-variable models. Much of this work is still in progress, so the focus will be on designs rather than results. Collaborators include Chu-Cheng Lin, Weiting (Steven) Tan, Li (Leo) Du, Zhichu (Brian) Lu, and Hongyuan Mei.

Bio: Jason Eisner is a Professor of Computer Science at Johns Hopkins University, as well as Director of Research at Microsoft Semantic Machines. He is a Fellow of the Association for Computational Linguistics. At Johns Hopkins, he is also affiliated with the Center for Language and Speech Processing, the Mathematical Institute for Data Science, and the Cognitive Science Department. His goal is to develop the probabilistic modeling, inference, and learning techniques needed for a unified model of all kinds of linguistic structure. His 150+ papers have presented various algorithms for parsing, machine translation, and weighted finite-state machines; formalizations, algorithms, theorems, and empirical results in computational phonology; and unsupervised or semi-supervised learning methods for syntax, morphology, and word-sense disambiguation. He is also the lead designer of Dyna, a declarative programming language that provides an infrastructure for AI algorithms. He has received two school-wide awards for excellence in teaching, as well as recent Best Paper Awards at ACL 2017, EMNLP 2019, and NAACL 2021 and an Outstanding Paper Award at ACL 2022.

Host: Karen Livescu

Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=6355939c-49d5-496a-9d9e-af8e0117f336

Yejin Choi

Monday, October 16, 2023

Yejin Choi - Possible Impossibilities and Impossible Possibilities

Professor, Paul G. Allen School of Computer Science & Engineering, University of Washington, MacArthur Fellow, ACL Fellow, Distinguished Research Fellow at Institute for Ethics in AI at Oxford

Abstract: In this talk, I will question if there can be possible impossibilities of large language models (i.e., the fundamental limits of transformers, if any) and the impossible possibilities of language models (i.e., seemingly impossible alternative paths beyond scale, if at all).

Bio: Yejin Choi is Wissner-Slivka Professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a senior director at AI2 overseeing the project Mosaic and a Distinguished Research Fellow at the Institute for Ethics in AI at the University of Oxford. Her research investigates if (and how) AI systems can learn commonsense knowledge and reasoning, if machines can (and should) learn moral reasoning, and various other problems in NLP, AI, and Vision including neuro-symbolic integration, language grounding with vision and interactions, and AI for social good. She is a co-recipient of 2 Test of Time Awards (at ACL 2021 and ICCV 2021), 7 Best/Outstanding Paper Awards (at ACL 2023, NAACL 2022, ICML 2022, NeurIPS 2021, AAAI 2019, and ICCV 2013), the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI’s 10 to Watch in 2016.

Host: Nati Srebro

Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=bbae3879-b4c4-4cb0-b9ae-b0820139b35c

Irit Dinur

Friday, November 3, 2023

Irit Dinur - Expanders in Higher Dimensions

Professor, The Weizmann Institute of Science

Abstract: Expander graphs have been studied in many areas of mathematics and in computer science with versatile applications, including coding theory, networking, computational complexity and geometry. High-dimensional expanders are a generalization that has been studied in recent years and their promise is beginning to bear fruit. In the talk, I will survey some powerful local to global properties of high-dimensional expanders, and describe several interesting applications, ranging from convergence of random walks to construction of locally testable codes that prove the $c^3$ conjecture (namely, codes with {\bf c}onstant rate, {\bf c}onstant distance, and {\bf c}onstant locality).

Bio: Irit Dinur is a professor of computer science at the Weizmann Institute of Science. Her research is in the foundations of computer science and in combinatorics, especially probabilistically checkable proofs, hardness of approximation, and, most recently, high-dimensional expanders.

Host: Avrim Blum

Registration to attend virtually: https://uchicago.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=bc47b8f8-e129-4496-bb44-b07801315ad7


All talks will be held at TTIC in room #530 located at 6045 South Kenwood Avenue (intersection of 61st street and Kenwood Avenue)

Parking: Street parking, or in the free lot on the corner of 60th St. and Stony Island Avenue.

For questions and comments contact Nati Srebro.
