NeurIPS 2025 Invited Speaker Topics
The NeurIPS 2025 organizing committee is excited to share details about the upcoming invited talks at this year’s conference. Our distinguished lineup includes six leading researchers who will explore critical questions at the frontiers of artificial intelligence, from the foundations of learning in deep networks to the societal implications of AI systems. For more on each talk, you can explore each speaker’s abstract.
At 8:30am on Dec 3, Richard Sutton, recipient of the 2024 Turing Award, will argue that as “AI has become a huge industry, to an extent it has lost its way,” and will propose a return to fundamental principles: “We need agents that learn continually. We need world models and planning. We need knowledge that is high-level and learnable.”
At 2:30pm on Dec 3, Zeynep Tufekci will argue that focusing on AGI or superintelligence misses more immediate concerns: “Artificial Good-Enough Intelligence can unleash chaos and destruction long before, or if ever, AGI is reached.” Tufekci emphasizes that generative AI’s revolutionary impact comes “from making what’s already possible and desired cheap, easy, fast, and large-scale,” and warns that “existing AI is good enough to blur or pulverize our existing mechanisms of proof of accuracy, effort, veracity, authenticity, sincerity, and even humanity.”
At 8:30am on Dec 4, Yejin Choi will deliver the Posner Lecture this year. Her talk will examine the changing landscape of natural language processing and commonsense AI, addressing fundamental questions about how AI systems understand and reason about the world and drawing on her extensive work in commonsense reasoning and language understanding. Choi notes, “Despite rapid progress on benchmarks, state-of-the-art models still exhibit ‘jagged intelligence,’ indicating that current scaling approaches may have limitations. Additionally, our scientific understanding of artificial intelligence hasn’t kept pace with engineering advances, and the current literature presents seemingly contradictory findings that can be difficult to reconcile.”
At 2:30pm on Dec 4, Melanie Mitchell will present her observations on today’s generative AI systems that have “exceeded human performance on many benchmarks meant to test humanlike cognitive capabilities,” yet “still struggle in unhumanlike ways on real-world tasks.” Mitchell will draw on experimental methods from developmental and comparative psychology to demonstrate new approaches for evaluating cognition in large language models, covering analogical reasoning, visual abstraction, and mathematical problem-solving.
At 8:30am on Dec 5, Kyunghyun Cho will reflect on problem finding in AI research. His talk draws on his experience working across machine learning algorithms, generative modeling, machine translation, medical imaging, and protein modeling. Cho notes that these “seemingly different problems turned out to be closely related to each other from both technical, social and personal perspectives.” He will share his thoughts on “what our own discipline, which is sometimes called computer science, data science, machine learning or artificial intelligence, is.”
At 2:30pm on Dec 5, Andrew Saxe will deliver the Breiman Lecture. Saxe will offer mathematical analyses revealing “how learning algorithms, data structure, initialization schemes, and architectural choices interact to produce hidden representations that afford complex generalization behaviors.” A key theme will be what Saxe describes as a “neural race: competing pathways within a deep network vie to explain the data, with an implicit bias toward shared representations.” He will demonstrate how these principles manifest across feedforward, recurrent, and linear attention networks.
These talks promise to spark important conversations about where AI research has been and where it should go next. Join us this December to hear from these exceptional speakers and engage with the broader NeurIPS community!