NeurIPS 2025 Socials: Join the Conversations!
As NeurIPS 2025 approaches, we’re thrilled to announce the eight accepted social events taking place in San Diego. These gatherings bring our community together to discuss, debate, and celebrate the latest advances in AI and machine learning. From interdisciplinary exchanges to focused thematic sessions, the NeurIPS social events serve as vital spaces for connection and dialogue, extending the intellectual and collaborative spirit of the conference beyond its technical program.
Accepted Socials at NeurIPS 2025
Learning Theory Alliance
https://let-all.com/neurips25.html
This social event, organized by the Learning Theory Alliance, will feature a fireside chat and ask-me-anything (AMA) with a senior community member followed by informal round-table discussions. The session aims to provide mentorship, share insights on research and career development, and foster connections among researchers in learning theory. Building on the success of last year’s NeurIPS social, “Theory in the Age of LLMs,” this event will strengthen engagement across the theoretical and applied machine learning communities and support the Alliance’s mission to cultivate an inclusive, collaborative, and globally connected learning theory community.
Day: Wednesday, December 3
When Errors Dream: Exploring Collective Creativity through AI Hallucination
When Errors Dream reframes AI hallucination as creative material. In a two-hour, festival-style open-space jam, attendees rotate through small groups to generate surprising AI outputs—text, image, or sound—and transform them into collaborative artworks and interactive experiences using digital and analog media. No prior skills are required; the emphasis is on the joy of making with machines, not polish. Designed to be drop-in friendly, the format scales to 150–200 participants through science-fair-style stations and quick exquisite-corpse creation cycles, fostering inclusive networking through shared play rather than one-directional talks.
Day: Thursday, December 4
The Role of AI in Scientific Peer Review
https://ai-scientific-peer-review.github.io
This social event will explore the role of Artificial Intelligence (AI) in addressing the current challenges and shaping the future of scientific peer review. We will examine how AI can be applied across the entire scholarly publishing process, from authoring to reviewing, editing, and even readership. The event will foster critical discussion on the ethical implications, potential benefits, and practical implementation of AI in this critical scientific process. Our goal is to bring together researchers, practitioners, and stakeholders from diverse fields in an interactive format to build community and explore actionable solutions for a more efficient, fair, and transparent peer review system.
Day: Wednesday, December 3
NeuroAI: From Neurons to Transformers
https://neuroai-social-websi-xk4i.bolt.host
As artificial intelligence grows ever more brain-like, the dialogue between neuroscience and machine learning has never been more important. NeuroAI: From Neurons to Transformers explores how ideas from biological cognition are inspiring next-generation AI, and how, in turn, large-scale foundation models are reshaping how we study and teach about the brain. The social invites participants to discuss neural computation, cognitive modeling, and educational technology—from brain-inspired architectures that mimic learning to AI-driven tools that personalize education and accelerate scientific discovery. In an inclusive setting, attendees will connect across disciplines (neuroscience, AI, EdTech, ethics, and beyond) to share ideas, debate implications, and imagine a future where understanding minds, human or artificial, transforms how we learn, create, and collaborate.
Day: Thursday, December 4
Agents Safety Panel
As AI systems become increasingly capable and widely deployed, ensuring their safety and reliability is more important than ever. Join us for a 30-minute panel discussion on the safety of agents from development to deployment, followed by a brief Q&A session. The rest of the event will consist of discussion and mingling among attendees. We will provide drinks and snacks. This event is co-organized by the Center for AI Safety (CAIS) and UK AI Security Institute (AISI).
Day: Wednesday, December 3
Nonprofits Working on Openness and Trust in AI
https://enterprise.wikimedia.com/blog/neurips-event
Join us for an in-person social event at NeurIPS 2025 to explore the intersections between generative AI data and open, trusted datasets. This session will feature representatives from the Wikimedia Foundation, MLCommons, and the AI Alliance, offering an opportunity to connect with nonprofits committed to using technology for academic and social missions. The event will begin with presentations from these organizations, highlighting their goals, projects, and research (e.g., Wikipedia, the AI Alliance's Open Trusted Data Initiative, MLCommons' Croissant data standard) and challenges with trust and responsible data usage in AI. Following the presentations, the session will transition into roundtable discussions focused on current initiatives and an open Q&A.
Day: Wednesday, December 3
Value Chain from Research to ROI
womeninai.co/wailabs
We aim to understand: How can research be made ready to accelerate into products and applied solutions? What kinds of environments, processes, and catalysts are needed to establish pathways from research to larger ecosystems of products and businesses? We will have a panel discussion on these topics, followed by a hands-on group activity in which attendees ideate sector-specific product applications from their own research. The event is structured to help researchers articulate their research's broader potential, emphasizing ecosystem thinking and the practical steps needed to scale research impact.
Day: Thursday, December 4
Evaluating Agentic Systems: Bridging Research Benchmarks and Real-World Impact
Agentic AI systems—LLM-driven agents capable of autonomous planning, tool use, and multi-step task execution—are rapidly advancing, yet methods for evaluating them remain underdeveloped. Traditional metrics for static or single-turn tasks fail to capture the complexity of open-ended, long-horizon interactions where goals evolve and behaviors emerge dynamically. This social aims to bridge research and industry perspectives on designing frameworks, simulation environments, and metrics that assess reliability, alignment, and safety in autonomous agents. Through lightning talks, panel discussions, and networking, the event fosters an interactive exchange on how to meaningfully evaluate and benchmark the next generation of agentic AI systems.
Day: Thursday, December 4
Social Co-Chairs
Ehsan Adeli (Stanford University)
Alessandra Tosi (Mind Foundry)
Saining Xie (New York University)