Alina Beygelzimer, Yann Dauphin, Percy Liang, and Jennifer Wortman Vaughan, NeurIPS 2021 Program Chairs.
The review process for NeurIPS 2021 starts soon! NeurIPS has a long history of experimentation, and this year we are making several changes that we hope will inform and improve the review process going forward. In this blog post, we lay out these changes and the rationale behind them.
First and foremost, we are excited to announce that NeurIPS is shifting the entire reviewing workflow to OpenReview! OpenReview is a flexible platform that allows heavy customization and will be easy to adapt as the needs of the conference evolve. It brings a number of infrastructural improvements, including persistent user profiles that can be self-managed, accountability in conflict-of-interest declarations, and improved modes of interaction between members of the program committee. We are hugely grateful to the OpenReview team, which is working hard in preparation for NeurIPS 2021, and to the CMT team, which has diligently supported NeurIPS for many years.
As in previous years, the review process will be confidential. Submissions under review will be visible only to assigned program committee members, and we will not solicit comments from the general public during the review process. After the notification deadline, accepted papers will be made public and open for non-anonymous public commenting. Their anonymous reviews, meta-reviews, and author responses will also be made public. All internal discussions will remain confidential both during and after the reviewing process.
By default, rejected submissions will not be made public. However, authors of rejected submissions will have two weeks after the notification deadline to opt in to make their de-anonymized papers public and open for commenting in OpenReview. Choosing to make the submission public will also open up the (anonymous) reviews, meta-reviews, and any discussion with the authors for these papers. While we don’t reap the full benefits of an open reviewing system — in particular, discouraging authors from submitting their work prematurely — this policy does give authors a mechanism to publicly flag and expose potential problems with the review process. We felt this was the best compromise as it spares junior researchers from discouraging and potentially harmful public criticism.
Based on the results of the NeurIPS 2020 experiment, which showed that roughly 6% of desk-rejected submissions would have been accepted had they gone through the full review process, and on the amount of time that area chairs devoted to desk rejections, we decided not to continue desk rejections this year. Papers may still be rejected without review if they violate the page limit or other submission requirements.
To minimize the chance of misunderstandings during the reviewing process, we will allow for a rolling discussion between authors and reviewers after initial reviews and author responses are submitted. During the rolling discussion phase, authors may respond to reviewer questions that arise. To give authors a chance to carefully think through their reviews and how they would like to respond, there will still be a demarcated response phase before discussions start. If new reviews are added during the discussion period (including ethics reviews), authors will have an opportunity to respond to those as well.
To discourage resubmissions without substantial changes, authors will be asked to declare whether a previous version of their submission has been rejected at any peer-reviewed venue. Like last year, authors of resubmissions must submit a description of the improvements they’ve made to the paper since the previous version. Within the machine learning community, there has been some inconclusive evidence of resubmission bias in reviews. In a small-scale study of novice reviewers, reviewers gave lower scores to papers labeled as resubmissions. On the other hand, COLT 2019 allowed papers submitted to STOC to be simultaneously submitted to COLT and withdrawn if accepted. While it was not a randomized experiment, there was no evidence of resubmission bias despite STOC reviews being shared with COLT reviewers for these submissions. To allow us to evaluate resubmission bias at scale, resubmission information will be visible to reviewers and area chairs only for a randomly chosen subset of submissions.
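The randomized design above could be sketched as follows. This is a hypothetical illustration only: the post does not specify the sampling fraction or procedure, and the function and parameter names here are invented for the sketch.

```python
import random

def assign_visibility(submission_ids, fraction=0.5, seed=2021):
    """Hypothetical sketch: randomly choose the subset of submissions for which
    resubmission information is shown to reviewers and area chairs.
    The fraction and seed are illustrative assumptions, not NeurIPS policy."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    visible = set(rng.sample(submission_ids, k=int(len(submission_ids) * fraction)))
    # Map every submission to a treatment flag: True = reviewers see the
    # resubmission declaration, False = it is hidden (the control group).
    return {sid: (sid in visible) for sid in submission_ids}

# Example: with 100 submissions and fraction=0.5, exactly 50 land in each group.
assignments = assign_visibility(list(range(100)))
```

Comparing review scores between the two groups would then give an unbiased estimate of resubmission bias, which the small-scale studies mentioned above could not.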
Of course, the success of the review process ultimately depends on the quality of reviews and the hard work of the NeurIPS community. Submissions have been growing at a rate of roughly 40% per year for the last four years, with over 9,400 full paper submissions in 2020. Taking into account the increases seen by ICML 2021 (around 10%) and ICLR 2021 (around 15%), we are preparing for the possibility of 12,000 submissions. Without the work of dedicated reviewers, area chairs, and senior area chairs, the conference simply would not be able to continue, so it is critically important that members of the community take on these roles when asked.
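The arithmetic behind the 12,000 figure can be made explicit. The numbers below come from the post itself; treating them as growth-rate projections is an illustrative reconstruction, not the chairs' actual planning method.

```python
# Submission-count projections from the figures quoted in the post.
base_2020 = 9_400  # full paper submissions to NeurIPS 2020

# Year-over-year growth observed at related 2021 conferences.
observed_growth = {"ICML 2021": 0.10, "ICLR 2021": 0.15}

# Historical NeurIPS growth of roughly 40% per year, as an upper bound.
historical_growth = 0.40

projections = {name: round(base_2020 * (1 + g)) for name, g in observed_growth.items()}
upper_bound = round(base_2020 * (1 + historical_growth))

# ICML-like growth gives ~10,340 and ICLR-like growth ~10,810 submissions,
# while the historical 40% rate would give ~13,160; planning for 12,000
# sits between the observed 2021 rates and the historical trend.
```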
To help reviewers who are new to the NeurIPS community, we are preparing a tutorial on writing reviews as a training resource. This tutorial will take the form of a short set of slides, and we encourage all new reviewers to read through it to understand what is expected of a NeurIPS reviewer.
We look forward to your submissions! If you’re planning to submit, make sure to read our last blog post introducing the NeurIPS paper checklist and the ethics review process, and stay tuned for more!