Reflecting on the Inaugural NeurIPS Position Paper Track: A Pilot Year Journey
As we announce decisions for the first-ever NeurIPS Position Paper Track, we find ourselves reflecting on an extraordinary experimental year. When NeurIPS launched this pilot track, we ventured into uncharted territory. The community response was overwhelming and inspiring. Every submission represented not just research, but a vision for how position papers could contribute to broader conversations in our field.
We want to take a moment to share our reflections, including lessons we’ll take forward into the future. This blog post is specific to the position paper track, which is just one of several tracks at NeurIPS: the Main Program track and Datasets and Benchmarks track are also releasing blog posts reflecting on their processes. To all authors who trusted us with their ideas and hard work, thank you. To all Reviewers, Emergency Reviewers, and Area Chairs who dedicated their time, thank you, too!
The Numbers
We received nearly 700 initial submissions—a response that both humbled and energized us. After accounting for withdrawals and desk rejections, we had 496 viable candidates for full consideration. From this impressive pool, we selected 40 papers for presentation at the conference, an acceptance rate of approximately 8%. We know that this number is lower than in other tracks at the conference, but it reflects the specific affordances of position papers and the conference strategy for piloting this track.
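For readers curious about the arithmetic, here is a minimal sketch using the counts reported above (the "nearly 700" initial figure does not enter the calculation; the rate is taken over viable candidates):

```python
# Acceptance-rate arithmetic using the counts reported above.
viable = 496   # submissions remaining after withdrawals and desk rejections
accepted = 40  # papers selected for presentation

print(f"Acceptance rate: {accepted / viable:.1%}")  # -> Acceptance rate: 8.1%
```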
The NeurIPS 2025 Position Paper Track envisions giving each accepted author more focused attention: we are planning to provide accepted authors with new presentation formats and tailored communication guidance to help them further develop their message. We feel this better fits this year's goal for the track: widespread visibility that can serve as a springboard for public discussion. That focus did mean accepting only a limited number of papers.
If your paper received high scores but wasn't ultimately accepted for presentation at the conference, you're in good company. We had a wealth of great papers this year! Additionally, some excellent papers with high technical merit weren't the best fit for this inaugural community discussion—either requiring specialized expertise that might limit broader engagement or focusing more on technical implementations and methods than we envision for this year's track.
We encourage authors who received positive reviews to incorporate that feedback for submissions to other venues and platforms. These conversations are too important to end here.
Clarifying Our Process
We’ve received questions about our process, and we want to be transparent as we continue to adapt and learn from this pilot year. We’re grateful for all the suggestions authors provided in the author survey, which included proposals for new review mechanisms, specific feedback on the call for submissions, and thoughtful ideas about track themes. We will continue to digest your proposals and translate these ideas into lessons for future position paper tracks.
One thing we piloted this year that we want to provide extra context on is the adjudication process. From the outset, adjudication was designed to improve our review process and address procedural concerns about reviewer behavior—not to alter paper scores after the fact. That said, all author feedback, including adjudication requests, was available to Area Chairs and Program Chairs to inform metareviews and decision-making.
We have several ongoing investigations into reviewer conduct concerns, which will continue until they reach an appropriate resolution. We are corresponding individually with every author who requested adjudication.
Addressing Community Concerns
We’ve received thoughtful feedback that has highlighted some unintended consequences of our pilot approach. We want to address these directly:
Exclusivity and Limited Discussion: Our small acceptance pool, while intended to provide focused attention to selected papers, has understandably raised concerns about exclusivity and limited diversity of viewpoints. We stand by our decision to focus on fewer papers this year in order to devote more amplification and resources to each, and we will take the community feedback into account in future years.
Responses to Reviews: We experimented with changes to the review process, moving from rebuttals to an author survey and creating a space for adjudication where possible reviewer misbehavior could be raised. We were hoping to create more civil exchanges and to provide mechanisms for addressing review concerns within the more constrained timeline of the position paper track. Our decision was informed by a growing body of research showing that the rebuttal process can be biased and can expose reviewers to “peer review bullying.” Author survey responses, including how the paper would change based on reviewer feedback, were read and considered carefully in metareviews and the final decisions. In other words, we looked at everything when making the decisions.
Timeline Challenges: We apologize that our ambitious design was not matched by punctual logistical execution when we encountered storms along the way. We initially synchronized our deadlines with the other NeurIPS tracks. Unfortunately, we found ourselves needing a significant number of emergency reviewers and additional time to ensure every paper received a proper evaluation and a minimum of three reviews. This was a result of the novel format of the position paper track, uncertainty around reviewer expectations, and a greater number of reviewers than we had planned for who were unable to complete their work or became unresponsive.
We did not anticipate the scale of reviewer coordination required and failed to meet our original timeline commitments. When it became clear this would happen, we emailed authors suggesting that they prepare their papers for resubmission to other conferences before our decisions came out. In future iterations of the track, we will plan deadlines more strategically.
Lessons Learned
This pilot year has been a learning experience, including some challenging community feedback that prompted important reflection.
What We’ve Learned:
- We need to provide much clearer guidance about appropriate submissions and review expectations, including examples that people can reference;
- We need to overrecruit reviewers and assign them to papers at a higher rate than similar tracks, to ensure a minimum of three reviews for every paper by the deadline (see the back-of-envelope sketch after this list);
- The position paper format works best when facilitating broad, meaningful discussions.
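To make the second lesson concrete, here is a minimal back-of-envelope sketch of how over-assignment might be sized. The attrition rate and per-reviewer load below are purely illustrative assumptions, not actual track statistics:

```python
import math

# Hypothetical planning numbers -- illustrative assumptions, not track data.
papers = 496           # papers needing completed reviews
min_reviews = 3        # minimum completed reviews required per paper
attrition = 0.25       # assumed fraction of assignments never completed
load_per_reviewer = 4  # assumed number of assignments each reviewer takes

# Over-assign up front so that, in expectation, each paper still ends up
# with at least min_reviews completed reviews despite reviewer attrition,
# rather than scrambling for emergency reviewers after the deadline.
assignments_per_paper = math.ceil(min_reviews / (1 - attrition))
total_assignments = papers * assignments_per_paper
reviewers_needed = math.ceil(total_assignments / load_per_reviewer)

print(f"{assignments_per_paper} assignments per paper, "
      f"{total_assignments} total, ~{reviewers_needed} reviewers")
# -> 4 assignments per paper, 1984 total, ~496 reviewers
```

Since this only guarantees three reviews in expectation, a real plan would pad the reviewer pool further to absorb unlucky papers.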
Addressing Community Concerns:
- On Exclusivity: Our small acceptance pool raised concerns about limited diversity of viewpoints. While we accepted a small number of papers, we believe they touch on many of the topics we saw voiced by the community. We believe future position paper tracks should be resourced to target a higher acceptance rate so that more authors have the opportunity to present.
- On Review Process: Our track's different processes were difficult for some authors who followed prior venues' guidance for improving their paper's chances of acceptance, while other authors and reviewers took the changes at face value. Future chairs should be more creative in the strategies they use, not only to communicate their ideas but also to listen to feedback and constructive criticism.
- On Timeline Challenges: We needed more time to recruit, assign, and complete emergency reviews. This resulted in a final decision date that overlapped with other conferences. In future years, chairs should be more cognizant of our sister conferences' deadlines and provide authors with earlier notice on how to handle any conflict.
We regularly published blog posts and communicated via OpenReview about our decision-making standards and changes to timelines at important timepoints. We would love the community's advice on how to communicate better; our current channels (including blog posts!) may not be effectively reaching the audience for our messages.
Gratitude and Next Steps
The success of this experiment hinged on your willingness to engage with something new and trust us with your research. Whether your paper was selected or not, your contribution to this inaugural year has been meaningful and appreciated. Thank you.
The conversations that matter most in our field—about ethics, impact, and AI's future—don't end with a single conference track. They continue in labs, in journals, in online discussions, and in the countless ways our community engages with the challenges and opportunities ahead.
Thank you for making the first NeurIPS Position Paper Track possible! We hope you enjoy all of the discussions sparked by these papers at NeurIPS in December!