Introducing the NeurIPS 2024 Tutorials
by Andrew M. Dai, Irene Chen, Gal Chechik
We are excited to present the list of tutorials selected for the NeurIPS 2024 conference! We look forward to a program that we hope will engage attendees with topics ranging from experimental design for AI researchers and meta-generation algorithms for LLMs to cross-disciplinary insights into alignment. In this post, we describe this year's program and our selection process.
Program
There will be 14 tutorials this year. All of them will be conducted in person to encourage active participation and some of them include panels to allow for a diverse range of discussion. The tutorials selected this year with their speakers are:
Flow Matching for Generative Modeling
Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu
Cross-disciplinary insights into alignment in humans and machines
Gillian Hadfield, Joel Z. Leibo, Dylan Hadfield-Menell
Experimental Design and Analysis for AI Researchers
Katherine Hermann, Jennifer Hu, Mike Mozer
Generating Programmatic Solutions: Algorithms and Applications of Programmatic Reinforcement Learning and Code Generation
Shao-Hua Sun, Levi Lelis, Xinyun Chen
Evaluating Large Language Models – Principles, Approaches, and Applications
Irina Sigler, Yuan (Emily) Xue, Bo Li
Beyond Decoding: Meta-Generation Algorithms for Large Language Models
Sean Welleck, Hailey Schoelkopf, Matthew Finlayson
Opening the Language Model Pipeline: A Tutorial on Data Preparation, Model Training, and Adaptation
Akshita Bhagia, Nathan Lambert, Kyle Lo
Dynamic Sparsity in Machine Learning: Routing Information through Neural Pathways
Edoardo M. Ponti, André F. T. Martins
Advancing Data Selection for Foundation Models: From Heuristics to Principled Methods
Jiachen T. Wang, Ruoxi Jia, Ludwig Schmidt
Watermarking for Large Language Models
Xuandong Zhao, Yu-Xiang Wang, Lei Li
Sandbox for the Blackbox: How LLMs Learn Structured Data?
Ashok Vardhan Makkuva, Bingbin Liu, Jason D. Lee
Out-of-Distribution Generalization: Shortcuts, Spuriousness, and Stability
Yoav Wald, Aahlad Puli, Maggie Makar
Causality for Large Language Models
Zhijing Jin, Sergio Garrido
Meaningful Evaluations of ML Privacy Techniques: What Are We Waiting For
Mimee Xu, Fazl Barez, Dmitrii Usynin
Selection process
We received 57 unique tutorial proposals this year, roughly half of them related to large language models. Each submission was reviewed by at least one tutorial chair, and the accepted tutorials received at least two reviews. Review assignments were made based on area of expertise and to avoid conflicts of interest. Each chair gave a score from 1 to 5 in three categories: speaker/panelist quality; topic (interest, importance, and timeliness); and appropriateness (suitability for the NeurIPS audience).
Common reasons for lower scores included:
- Topics that were too niche for the general NeurIPS audience.
- Significant overlap with stronger proposals.
- Insufficient diversity among the speakers.
- Too much expectation of familiarity with specific software or hardware.
- Too much overlap with past tutorials.
Choosing the final tutorials from the high-scoring proposals was a challenging task, and we appreciate all the work that went into them. We will be doing run-throughs with the selected tutorials to offer advice and feedback.
Watch for a post-conference retrospective blog post with reflections on the tutorials. Please feel free to reach out to us at tutorial-chairs@neurips.cc if you have any suggestions.