Announcing NeurIPS 2023 Invited Talks
Presenting Eleven Speakers for Six Keynotes and One Panel
By Amir Globerson, Kate Saenko, Moritz Hardt, Sergey Levine, and Comms Chair Sahra Ghalebikesabi
The NeurIPS Program Chairs are delighted to announce this year’s keynote speakers and schedule – all times listed are in CST, and all invited talks will be held in Hall F. There are two invited sessions on each day of the main program, Tuesday through Thursday, plus one on Monday during the opening of the conference. All keynotes will be recorded and made available to the public in late January.
Schedule-at-a-Glance:
- Monday – Björn Ommer at 5:25 – 6:15 pm 11 Dec
- Tuesday – Lora Aroyo at 8:30 – 9:20 am 12 Dec
- Tuesday – Linda Smith at 2:15 – 3:05 pm 12 Dec
- Wednesday – Jelani Nelson at 8:30 – 9:20 am 13 Dec
- Wednesday – Beyond Scaling Panel at 2:15 – 3:15 pm 13 Dec
- Thursday – Christopher Ré at 8:30 – 9:20 am 14 Dec
- Thursday – Susan Murphy at 2:15 – 3:05 pm 14 Dec
Alexander Rush will be moderating the Wednesday afternoon panel discussion, “Beyond Scaling,” with Aakanksha Chowdhery, Angela Fan, Percy Liang and Jie Tang. For more information about who is presenting and what the different invited talks will be about, read on for session abstract details and speaker bios.
Invited Speakers
Monday – Björn Ommer at 5:25 – 6:15 pm 11 Dec
On Monday at 5:25 pm CST, the invited talk track will kick off with the opening plenary, presented by Björn Ommer, who will discuss “NextGenAI: The Delusion of Scaling and the Future of Generative AI.”
Abstract: The ultimate goal of computer vision and learning is models that can understand our (visual) world. Recently, learning such representations of our surroundings has been revolutionized by deep generative models. As this paradigm becomes the core foundation for diverse novel approaches and practical applications, it is profoundly changing the way we interact with, program, and solve problems with computers. However, most of the progress has come from sizing up models – to the point where the necessary resources have started to have profoundly detrimental effects on future (academic) research, industry, and society.
This talk will contrast the most commonly used generative models to date and highlight the very specific limitations they have despite their enormous potential. We will then investigate mitigation strategies such as Stable Diffusion and recent follow-up work that significantly enhance efficiency and help democratize AI. Subsequently, the talk will discuss lessons learned from this odyssey through model space, the interesting perspectives this casts on the future of generative modeling, and its implications for society.
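As a back-of-the-envelope illustration of why latent-space approaches like Stable Diffusion are so much cheaper than pixel-space models (my own numbers, not from the talk; the 8x downsampling factor is the one commonly cited for latent diffusion):

```python
# Illustrative cost comparison: pixel-space vs. latent-space generation.
# All numbers here are assumptions for the sake of the example.
H = W = 512                          # target image resolution
f = 8                                # autoencoder downsampling factor (assumed)
pixel_tokens = H * W                 # spatial positions in pixel space
latent_tokens = (H // f) * (W // f)  # positions after compression

# Self-attention cost grows quadratically with the number of positions,
# so working in the latent space saves roughly this factor per step:
print(pixel_tokens**2 // latent_tokens**2)  # -> 4096
```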
Bio: Björn Ommer is a full professor at the University of Munich, where he heads the Computer Vision & Learning Group. Before that, he was a full professor in the department of mathematics and computer science at Heidelberg University and a co-director of its Interdisciplinary Center for Scientific Computing. He received his diploma in computer science from the University of Bonn, his PhD from ETH Zurich, and was a postdoc at UC Berkeley.
Björn serves as an associate editor for IEEE T-PAMI. His research interests include semantic scene understanding and retrieval, generative AI and visual synthesis, self-supervised metric and representation learning, and explainable AI. Moreover, he is applying this basic research in interdisciplinary projects within neuroscience and the digital humanities. His group has published a series of generative approaches, including “VQGAN” and “Stable Diffusion”, which are now democratizing the creation of visual content and have already opened up an abundance of new directions in research, industry, the media, and beyond.
Tuesday – Lora Aroyo at 8:30 – 9:20 am 12 Dec
At 8:30 am CST on Tuesday morning, Lora Aroyo will present “The Many Faces of Responsible AI.”
Abstract: Conventional machine learning paradigms often rely on binary distinctions between positive and negative examples, disregarding the nuanced subjectivity that permeates real-world tasks and content. This simplistic dichotomy has served us well so far, but because it obscures the inherent diversity in human perspectives and opinions, as well as the inherent ambiguity of content and tasks, it limits how well model performance can align with real-world expectations. This becomes even more critical when we study the impact and potential multifaceted risks associated with the adoption of emerging generative AI capabilities across different cultures and geographies. To address this, we argue that to achieve robust and responsible AI systems we need to shift our focus away from a single point of truth and weave a diversity of perspectives into the data used by AI systems, ensuring the trust, safety and reliability of model outputs.
In this talk, I present a number of data-centric use cases that illustrate the inherent ambiguity of content and the natural diversity of human perspectives, which cause unavoidable disagreement that needs to be treated as signal and not noise. This leads to a call to action to establish culturally aware and society-centered research on the impact of data quality and data diversity on training and evaluating ML models, and on fostering responsible AI deployment in diverse sociocultural contexts.
Bio: I am a research scientist at Google Research NYC, where I work on Data Excellence for AI. My team, DEER (Data Excellence for Evaluating Responsibly), is part of the Responsible AI (RAI) organization. Our work focuses on developing metrics and methodologies to measure the quality of human-labeled or machine-generated data, with a specific focus on gathering and evaluating adversarial data for safety evaluation of generative AI systems. I received an MSc in Computer Science from Sofia University, Bulgaria, and a PhD from the University of Twente, The Netherlands.
I am currently serving as a co-chair of the steering committee for the AAAI HCOMP conference series, and I am a member of the DataPerf working group at MLCommons for benchmarking data-centric AI. Check out our data-centric challenge Adversarial Nibbler, supported by Kaggle, Hugging Face and MLCommons. Prior to joining Google, I was a computer science professor heading the User-Centric Data Science research group at the VU University Amsterdam. Our team invented the CrowdTruth crowdsourcing method jointly with the Watson team at IBM. This method has been applied in various domains such as digital humanities, medicine and online multimedia. I also guided the human-in-the-loop strategies as Chief Scientist at the NY-based startup Tagasauris. Some of my prior community contributions include serving as president of the User Modeling Society, program co-chair of The Web Conference 2023, and member of the ACM SIGCHI conferences board. For a list of my publications, please see my profile on Google Scholar.
Tuesday – Linda Smith at 2:15 – 3:05 pm 12 Dec
At 2:15 pm CST on Tuesday, Linda Smith will present “Coherence statistics, self-generated experience and why young humans are much smarter than current AI.”
Abstract: The world presents massive amounts of data for learning. However, much of that data is latent, only made manifest by physical action on a physical world. The structure of those actions and the data they reveal is tightly constrained by the continuity of time and space. In this talk, I will present evidence that the statistics of infant and child daily-life experiences at multiple timescales have a natural coherence structure that yields rapid learning and innovative generalization from sparse data and one-time experiences. A common simplifying assumption in AI is that the quantity of the data is all that matters: if enough data is amassed and aggregated, it will contain the latent structure necessary for optimal performance. The findings on the statistics of human egocentric experience suggest that this assumption may be seriously off the mark.
Bio: Linda B. Smith, Distinguished Professor at Indiana University Bloomington, is an internationally recognized leader in cognitive science and cognitive development. Taking a complex systems perspective, she seeks to understand the interdependencies among perceptual, motor and cognitive developments during the first three years of post-natal life. Using wearable sensors, including head-mounted cameras and motion sensors, she studies how the young learner’s own behavior creates the statistical structure of the learning environments, with a current focus on developmentally changing visual statistics at the scale of everyday life and their role in motor, perceptual, and language development. The work has led to novel insights extended through collaborations in artificial intelligence and education. The work also motivates her current efforts on defining and promoting a precision (or individualized) developmental science, one that determines the multiple causes and interacting factors that create children’s individual developmental pathways. Smith received her PhD from the University of Pennsylvania in 1977 and immediately joined the faculty at Indiana University. Her work has been continuously funded by the National Science Foundation and/or the National Institutes of Health since 1978. She won the David E. Rumelhart Prize for Theoretical Contributions to Cognitive Science, the American Psychological Association Award for Distinguished Scientific Contributions, the William James Fellow Award from the American Psychological Society, the Norman Anderson Lifetime Achievement Award, and the Koffka Medal. She is an elected member of the National Academy of Sciences and the American Academy of Arts and Sciences.
Wednesday – Jelani Nelson at 8:30 – 9:20 am 13 Dec
At 8:30 am CST on Wednesday morning, Jelani Nelson will present “Sketching: core tools, learning-augmentation, and adaptive robustness.”
Abstract: ‘Sketches’ of data are memory-compressed summarizations that still allow answering useful queries, and as a tool they have found use in algorithm design, optimization, machine learning, and more. This talk will give an overview of some core sketching tools and how they work, including recent advances. We will also discuss a couple of newly active areas of research, such as augmenting sketching algorithms with learned oracles in a way that provides provably enhanced performance guarantees, and designing robust sketches that maintain correctness even in the face of adaptive adversaries.
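To make the flavor of these tools concrete, here is a minimal Count-Min sketch in Python, one of the classic sketching data structures of the kind such a survey covers. This is an illustrative sketch of the idea, not material from the talk; the salted built-in hash below stands in for proper pairwise-independent hash functions:

```python
import numpy as np

class CountMinSketch:
    """Approximate frequency counts in memory independent of stream length.

    Queries only ever overestimate; with width ~ e/eps and depth ~ ln(1/delta),
    the error is at most eps * total_count with probability 1 - delta.
    """
    def __init__(self, width=272, depth=5, seed=0):
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        # Random salts standing in for depth independent hash functions.
        self.salts = np.random.default_rng(seed).integers(0, 2**31, size=depth)

    def _hash(self, item, row):
        return hash((int(self.salts[row]), item)) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row, self._hash(item, row)] += count

    def query(self, item):
        # Each row can only overcount (collisions add), so take the minimum.
        return min(int(self.table[r, self._hash(item, r)]) for r in range(self.depth))

# Usage: count items in a stream using O(width * depth) memory.
cms = CountMinSketch()
for w in ["a", "b", "a", "c", "a"]:
    cms.update(w)
print(cms.query("a"))  # >= 3; equals 3 unless hash collisions occur
```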
Bio: Jelani Nelson came to Berkeley in 2019 from Harvard University. He is a professor in the Department of Electrical Engineering and Computer Sciences. He was an Alfred P. Sloan Research Fellow and received a 2017 Presidential Early Career Award for Scientists and Engineers. As he puts it: “I find it fulfilling to work on algorithmic problems that are both practically relevant and simultaneously mathematically beautiful.”
Research Focus: Sketching and streaming algorithms for big data, and dimensionality-reduction techniques for high-dimensional data.
Wednesday – Beyond Scaling Panel at 2:15 – 3:15 pm 13 Dec
Join us at 2:15 pm CST on Wednesday in Hall F for the “Beyond Scaling” panel, moderated by Alexander Rush and featuring Aakanksha Chowdhery, Angela Fan, Percy Liang, and Jie Tang.
Thursday – Christopher Ré at 8:30 – 9:20 am 14 Dec
At 8:30 am CST on Thursday, Christopher (Chris) Ré will discuss “Systems for Foundation Models, and Foundation Models for Systems.”
Abstract: I’m a simple creature. I fell in love with foundation models (FMs) because they radically improved data systems that I had been trying to build for a decade – and they are just awesome! This talk starts with my perspective about how FMs change the systems we build, focusing on what I call “death by a thousand cuts” problems. Roughly, these are problems in which each individual task looks easy, but the sheer variety and breadth of tasks make them hard.
The bulk of the talk is about understanding how to efficiently build foundation models. We describe trends in hardware accelerators from a perhaps unexpected viewpoint: database systems research. Databases have worried about optimizing IO – reads and writes within the memory hierarchy – since the 80s. In fact, optimizing IO led to Flash Attention for Transformers.
But are there more efficient architectures for foundation models than the Transformer? Maybe! I’ll describe a new class of architectures based on classical signal processing, exemplified by S4. These new architectures are asymptotically more efficient than Transformers for long sequences, have achieved state-of-the-art quality on benchmarks like Long Range Arena, and have been applied to images, text, DNA, audio, and video. S4 will allow us to make mathematically precise connections to RNNs and CNNs. I’ll also describe new twists, such as long filters, data-dependent convolutions, and gating, that power many of these amazing recent architectures, including RWKV, S5, Mega, Hyena, and RetNet, as well as recent work to understand their fundamental limitations, to hopefully make even more awesome foundation models! A GitHub repository containing material from the talk is under construction at https://github.com/HazyResearch/aisys-building-blocks. Please feel free to add to it!
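As an aside (my own illustration, not material from the talk), the core efficiency trick shared by these long-convolution architectures is easy to state in code: convolving a length-L sequence with a filter as long as the sequence costs O(L^2) done directly, but only O(L log L) via the FFT:

```python
import numpy as np

def fft_long_conv(u, k):
    """Causal convolution of a length-L signal u with a length-L filter k.

    Computed in O(L log L) via the FFT instead of O(L^2) directly -- the
    workhorse behind S4/Hyena-style long-convolution layers.
    """
    L = u.shape[-1]
    n = 2 * L  # zero-pad so circular convolution matches linear convolution
    return np.fft.irfft(np.fft.rfft(u, n) * np.fft.rfft(k, n), n)[..., :L]

# Sanity check against direct convolution on toy data.
rng = np.random.default_rng(0)
u, k = rng.standard_normal(1024), rng.standard_normal(1024)
assert np.allclose(fft_long_conv(u, k), np.convolve(u, k)[:1024])
```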
Bio: Christopher (Chris) Ré is an associate professor in the Department of Computer Science at Stanford University. He is in the Stanford AI Lab and is affiliated with the Machine Learning Group and the Center for Research on Foundation Models. His recent work seeks to understand how software and hardware systems will change because of machine learning, along with a continuing, petulant drive to work on math problems. Research from his group has been incorporated into scientific and humanitarian efforts, such as the fight against human trafficking, as well as products from technology companies including Apple, Google, YouTube, and more. He has also cofounded companies, including Snorkel, SambaNova, and Together, and a venture firm called Factory.
His family still brags that he received the MacArthur Foundation Fellowship, but his closest friends are confident that it was a mistake. His research contributions have spanned database theory, database systems, and machine learning, and his work has won best paper at a premier venue in each area, respectively, at PODS 2012, SIGMOD 2014, and ICML 2016. Due to great collaborators, he received the NeurIPS 2020 test-of-time award and the PODS 2022 test-of-time award. Due to great students, he received best paper at MIDL 2022, best-paper runner-up at ICLR 2022 and ICML 2022, and best student-paper runner-up at UAI 2022.
Thursday – Susan Murphy at 2:15-3:05 pm 14 Dec
At 2:15 pm CST on Thursday, Susan Murphy will discuss “Online Reinforcement Learning in Digital Health Interventions.”
Abstract: In this talk, I will discuss first solutions to some of the challenges we face in developing online RL algorithms for use in digital health interventions targeting patients struggling with health problems such as substance misuse, hypertension, and bone marrow transplantation. Digital health raises a number of challenges for the RL community, including different sets of actions, each set intended to impact patients over a different time scale; the need to learn both within an implementation and between implementations of the RL algorithm; noisy environments; and a lack of mechanistic models. In all of these settings, the online algorithm must be stable and autonomous. Despite these challenges, RL can be successful with careful initialization, careful management of the bias/variance tradeoff, and close collaboration with health scientists. We can make an impact!
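To ground one of these points (this is my own toy illustration, not an algorithm from the talk, and every number in it is made up): perhaps the simplest instance of an online algorithm whose early behavior is stabilized by careful initialization is a Thompson-sampling bandit whose priors are fit to data from earlier studies rather than left flat:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-action decision: send an activity prompt (1) or not (0).
# The "true" response probabilities below exist only for this simulation.
true_p = np.array([0.35, 0.55])

# "Careful initialization": informative Beta priors (assumed to come from
# pilot data) instead of flat Beta(1, 1), stabilizing early decisions.
alpha = np.array([3.0, 6.0])
beta = np.array([7.0, 5.0])

for t in range(500):
    theta = rng.beta(alpha, beta)      # draw a plausible model from the posterior
    a = int(np.argmax(theta))          # act greedily w.r.t. the sampled model
    reward = rng.random() < true_p[a]  # observe a noisy binary outcome
    alpha[a] += reward                 # conjugate posterior update
    beta[a] += 1 - reward

print(alpha / (alpha + beta))  # posterior mean response rate per action
```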
Bio: Susan A. Murphy is Mallinckrodt Professor of Statistics and of Computer Science and Associate Faculty at the Kempner Institute, Harvard University. Her research focuses on improving sequential decision making in health, currently, online real-time learning algorithms for personalizing digital health interventions. She is a member of the US National Academy of Sciences and of the US National Academy of Medicine. In 2013 she was awarded a MacArthur Fellowship for her work on experimental designs to inform sequential decision making. She is a Fellow of the College on Problems in Drug Dependence, Past-President of the Institute of Mathematical Statistics, and a former editor of the Annals of Statistics.