2025 Invited Speaker Lineup Announced
The NeurIPS 2025 organizing committee is pleased to announce the invited speakers for this year's conference. The lineup includes six researchers whose areas span theoretical machine learning, reinforcement learning, AI and society, cognitive science, natural language processing, and deep learning applications.
Kyunghyun Cho
Glen de Vries Professor of Health Statistics, NYU; Executive Director of Frontier Research, Prescient Design, Genentech
Website: kyunghyuncho.me
Cho’s work spans machine learning and natural language processing. He co-developed the Gated Recurrent Unit (GRU) architecture and has contributed to neural machine translation and sequence-to-sequence learning. He is a CIFAR Fellow of Learning in Machines & Brains and received the 2021 Samsung Ho-Am Prize in Engineering. He served as program chair for ICLR 2020, NeurIPS 2022, and ICML 2022.
Yejin Choi
Professor of Computer Science, Stanford University; Dieter Schwarz Foundation Senior Fellow, Stanford HAI; Distinguished Scientist, NVIDIA
Website: yejinc.github.io
Choi’s research focuses on natural language processing, with emphasis on commonsense reasoning and language understanding. She is a 2022 MacArthur Fellow and was named to Time’s Most Influential People in AI in 2023. She has received multiple Test of Time Awards from ACL and CVPR, and Best Paper Awards at venues including ACL, EMNLP, ICML, and NeurIPS. She previously held positions at the University of Washington and the Allen Institute for AI.
Melanie Mitchell
Professor, Santa Fe Institute
Website: melaniemitchell.me
Mitchell’s research areas include AI, cognitive science, and complex systems, with focus on conceptual abstraction and analogy-making in humans and AI systems. She authored “Complexity: A Guided Tour,” which won the 2010 Phi Beta Kappa Science Book Award, and “Artificial Intelligence: A Guide for Thinking Humans,” which was named one of the five best books on AI by both the New York Times and the Wall Street Journal. She received her PhD from the University of Michigan under Douglas Hofstadter, with whom she developed the Copycat cognitive architecture.
Andrew Saxe
Professor of Theoretical Neuroscience & Machine Learning, Gatsby Computational Neuroscience Unit and Sainsbury Wellcome Centre, UCL
Website: saxelab.org
Saxe’s research focuses on mathematical theories of learning in neural networks. He has developed exact solutions for learning dynamics in deep linear networks and studies connections between artificial and biological learning systems. He is a CIFAR Fellow of Learning in Machines & Brains and recipient of the 2019 Wellcome Trust Beit Prize. His work includes theoretical analyses of semantic development and the dynamics of representation learning.
Richard Sutton
Distinguished Research Scientist, Professor, University of Alberta; Chief Scientific Advisor, Amii
Website: incompleteideas.net
Sutton co-developed temporal difference learning and policy gradient methods in reinforcement learning. He received the 2024 Turing Award with Andrew Barto for foundational contributions to reinforcement learning. He is co-author of the textbook “Reinforcement Learning: An Introduction” and is a Fellow of the Royal Society and the Royal Society of Canada. His research focuses on computational principles underlying learning and decision-making.
Zeynep Tufekci
Henry G. Bryant Professor of Sociology and Public Affairs, Princeton University; New York Times Columnist
Website: zeynep.me
Tufekci examines the interplay of science, technology, and society through a sociological framework and a complex systems lens, focusing especially on digital, computational, and artificial intelligence technologies. She was a 2022 Pulitzer Prize finalist for commentary on the COVID-19 pandemic. Her book “Twitter and Tear Gas: The Power and Fragility of Networked Protest” examines the dynamics of social movements in the digital age. She is also a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University.