Announcing the NeurIPS 2024 Workshops
by Adil Salim, Bo Han, Manuel Gomez Rodriguez, and Rose Yu
We are excited to announce the list of NeurIPS 2024 workshops! We received 204 total submissions — a significant increase from last year. From this great batch of submissions, we have accepted 56 workshops that will take place on Dec. 14 & 15.
Given the exceptional quality of submissions this year, we wish we could have accepted many more, but logistical constraints did not allow it. We want to thank everyone who put tremendous effort into submitting a workshop proposal.
Review Process
We continued to use OpenReview as our submission platform this year, in line with the other NeurIPS submission tracks, given OpenReview's success in matching reviewers to proposals. Additional details about the selection process are provided below.
The requested proposal materials did not change much this year. We kept the main proposal limited to three pages and the organizer information to two pages, with unlimited references, and we specifically let reviewers know that they need not read beyond those page limits.
Compared to last year, we further increased the reviewer pool. We sent out over 418 invitations and recruited 189 reviewers, which resulted in at least two reviews for (almost) all 204 proposals. We thank all the reviewers for their timely and professional efforts; their quality reviews greatly assisted our decision-making and facilitated an exciting and well-informed workshop program this year.
Selection Process
In making our selections, we asked the reviewers to closely follow our Guidance for Workshop Proposals, which was also shared with the proposal authors. Workshop proposals must be reviewed somewhat differently from academic papers, so we asked reviewers to consider both scientific merit and broader impact in their assessments. We recognize that workshop reviews can be somewhat more subjective than academic paper reviews. To offer feedback to the proposal authors, we have decided to release the review comments.
Individual reviewer evaluations were important in the decision process, but they were not the only consideration. For example, we also strove for a good balance between research areas, and between applications and theory. Because interest is not uniform across research areas, some areas were more competitive than others: there were many strong proposals on large language models this year, and we could not accept them all. We aimed for a balance of topics that covers both mainstream and emerging areas.
The next step is your contributions! Several workshops have begun soliciting submissions, many using our suggested submission deadline of Aug 30, 2024. We typically let each workshop advertise its own call for papers (if it plans to include workshop papers). We will communicate additional deadlines to the workshop organizers to facilitate the successful planning of 56 exciting workshops. Stay tuned for more technical and contextual information coming soon!
NeurIPS 2024 Accepted Workshops
On to the best part: the preliminary list of accepted workshops for 2024!
- Intrinsically Motivated Open-ended Learning (IMOL)
- ML with New Compute Paradigms
- Federated Foundation Models
- Foundation Model Interventions
- Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design
- Open-World Agents: Synergizing Reasoning and Decision-Making in Open and Interactive Environments
- Audio Imagination: AI-Driven Speech, Music, and Sound Generation
- Foundation Models for Science: Progress, Opportunities, and Challenges
- AI for New Drug Modalities
- Statistical Frontiers in LLMs and Foundation Models
- Machine Learning in Structural Biology
- Table Representation and Generative Learning
- Data-driven and Differentiable Simulations, Surrogates, and Solvers
- Mathematical Reasoning and AI
- Red Teaming GenAI: What Can We Learn from Adversaries?
- Advancements In Medical Foundation Models: Explainability, Robustness, Security, and Beyond
- Causality and Large Models
- Large Foundation Models for Educational Assessment
- Machine Learning and Compression
- Machine Learning for Systems
- Scientific Methods for Understanding Neural Networks: Discovering, Validating, and Falsifying Theories of Deep Learning with Experiments
- Pluralistic Alignment
- Responsibly Building Next Generation of Multimodal Foundation Models
- Touch Processing: From Data to Knowledge
- GenAI for Health: Potential, Trust and Policy Compliance
- Symmetry and Geometry in Neural Representations
- Mathematics of Modern Machine Learning
- Tackling Climate Change with Machine Learning: Questioning Common ML Assumptions in the Context of Climate Impact
- Attributing Model Behavior at Scale
- Compositional Learning: Perspectives, Methods, and Paths Forward
- Time Series in the Age of Large Models
- Efficient Natural Language and Speech Processing: Highlighting New Architectures for Future Foundation Models
- Behavioral Machine Learning
- Interpretable AI: Past, Present and Future
- UniReps: Unifying Representations in Neural Models
- Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations
- Evaluating Evaluations: Examining Best Practices for Measuring Broader Impacts of Generative AI
- Multimodal Algorithmic Reasoning
- Self-Supervised Learning: Theory and Practice
- Language Gamification
- Fine-Tuning in Modern Machine Learning: Principles and Scalability
- Safe Generative AI
- Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning
- Optimization for Machine Learning
- New Frontiers in Adversarial Machine Learning
- Towards Safe & Trustworthy Agents
- Algorithmic Fairness through the lens of Metrics and Evaluation
- Machine Learning and the Physical Sciences
- Socially Responsible Language Modelling Research
- AI for Accelerated Materials Design
- System-2 Reasoning at Scale
- Causal Representation Learning
- Scalable Continual Learning for Lifelong Foundation Models
- NeuroAI: Fusing Neuroscience and AI for Intelligent Solutions
- Video-Language Models
- Generative AI and Creativity: A dialogue between machine learning researchers and creative professionals