
Sam Buchanan

Research Assistant Professor
Toyota Technological Institute at Chicago

[email protected]
6045 South Kenwood Ave, 411
Chicago, IL 60637

I am a Research Assistant Professor at TTIC. I completed my Ph.D. in Electrical Engineering at Columbia University in 2022, working with John Wright, and my B.S. in Electrical Engineering at the University of Kansas.

I study the mathematics of representation learning from the perspective of signals and data. I’m interested in questions that span theory and practice: What structural properties of modern data play a role in the success or failure of deep learning? How can we design better deep architectures by exploiting these structures? I’m especially interested in applications to visual data.

Upcoming Events

  • Tutorials: I will give tutorial lectures on designing deep network architectures that pursue low-dimensional structures in data, including our recent white-box transformers work, at ICASSP 2024 in Seoul (Apr 2024) and at CVPR 2024 in Seattle (Jun 2024).

Recent Highlights

  • 1st Conference on Parsimony and Learning: I co-organized the inaugural Conference on Parsimony and Learning (CPAL), which took place at the University of Hong Kong from January 3–6, 2024. Thanks to all authors, speakers, organizers, and especially to the local team at HKU, whose hard work made the conference a success! Stay tuned for CPAL 2025. (Jan 2024)

Recent Updates

  • Talk: Gave my annual Research at TTIC talk about TILTED! Here is the video recording.

  • Talk: Gave a talk at the Redwood Seminar. (Feb 2024)

  • Publication: Learned proximal networks, a methodology for parameterizing, learning, and evaluating expressive priors for data-driven inverse problem solvers with convergence guarantees, will appear in ICLR 2024. The camera-ready version of the manuscript can be found here. (Jan 2024)

  • Publication: CRATE-MAE will appear in ICLR 2024. At the heart of this work is a connection between denoising and compression, which we use to derive a corresponding decoder architecture for the “white-box” transformer CRATE encoder. The camera-ready version of the manuscript can be found here. (Jan 2024)

  • Publication: We presented CRATE at NeurIPS 2023, and as an oral at the XAI in Action workshop. (Dec 2023)

  • Preprint Release: The full version of the CRATE story is now on arXiv. CRATE is a “white-box” (yet scalable) transformer architecture where each layer is derived from the principles of compression and sparsification of the input data distribution. This white-box derivation leads CRATE’s representations to have surprising emergent segmentation properties in vision applications without any complex self-supervised pretraining. (Nov 2023)

  • Publication: We presented TILTED at ICCV 2023. TILTED improves visual quality, compactness, and interpretability for hybrid neural field 3D representations by incorporating geometry into the latent features. Find the full version on arXiv. (Oct 2023)

Past Updates