Last year, NeurIPS launched the new Datasets and Benchmarks track to serve as a venue for exceptional work focused on creating high-quality datasets, insightful benchmarks, and discussions on improving dataset development and data-oriented work more broadly. Further details about the motivation and setup are discussed in our earlier blog post here.
This year, we received 447 submissions on a breadth of topics, of which 163 have been accepted for publication, an acceptance rate of 36.5%. Please explore the list of accepted papers. The reviewing standards were again set very high, and the process involved a set of specific attention points, such as the impact and documentation quality of datasets, the reproducibility of benchmarks, ethics, and long-term accessibility.
We are immensely grateful for the tremendous contributions of the 92 area chairs, 1064 reviewers, and 39 ethics reviewers in making this endeavor a success. Unlike last year, we organized a single reviewing round, more closely following the main NeurIPS review cycle, albeit with a longer rebuttal period that allowed many submissions to be substantially improved.
Of the 163 accepted papers, about half introduce new datasets, while the other half present new benchmarks, covering a broad range of topics. Approximately 23% of the papers relate to computer vision; 8% to natural language processing; 7% to reinforcement learning and simulation environments; and 6% to multimodal data. The remainder covers various other topics, such as speech processing, explainable AI, and ethics. While these are rough estimates, we hope they provide a sense of the distribution of topics in this year’s track.
This year, the Datasets and Benchmarks track also truly became a standard component of the NeurIPS conference. Datasets and Benchmarks papers are blended with the main conference papers in the poster sessions, panels, and on the virtual conference site. They will still be easily discoverable via a virtual site highlight page and stickers in the poster session. We are also delighted that the NeurIPS board has agreed to publish a single NeurIPS proceedings this year. The Datasets and Benchmarks papers will appear in the same proceedings as the other NeurIPS papers, with an indication that they are affiliated with the Datasets and Benchmarks track to make them easy to find.
We are looking forward to another great edition of the NeurIPS Datasets and Benchmarks track, and hope to see you at the conference!