Douwe Kiela, Barbara Caputo, Marco Ciccone, NeurIPS 2021 Competitions Chairs
Today, we introduce the competitions that have been accepted at NeurIPS 2021. We selected a total of 22 competitions out of 46 very strong proposals, covering a wide range of areas and subdisciplines. We are super excited about this year’s program and the set of challenges it presents. The competitions are now officially open and ready for community participation. Please check the NeurIPS website for full competition details.
This year will mark the fifth year of NeurIPS having a dedicated competitions track. Whether this is the first time you’re hearing about NeurIPS competitions or you’re a competition grandmaster, we hope you’ll take some time to look over this year’s exciting set of competitions, and we wholeheartedly encourage you to participate! Competitions are lots of fun: they’re great for connecting with a community of like-minded researchers, excellent for learning and resume building, and will allow you to use your skills to have real-world impact on important and challenging problems.
The program includes exciting competitions ranging from zero-resource speech, to interactive grounded language understanding, to designing new catalysts for renewable energy, to name just a few. It spans a wide variety of domains, with some competitions focusing more on applications and others trying to unify fields or tackle technical challenges like understanding the fidelity of approximate inference. We hope this breadth means that anyone who wants to work on a competition can find something to their liking.
Without further ado, the list of accepted competitions is as follows:
BASALT: A MineRL Competition on Solving Human-Judged Tasks
Diamond: A MineRL Competition on Training Sample-Efficient Agents
Enhanced Zero-Resource Speech Challenge 2021: Language Modelling from Speech and Images
Evaluating Approximate Inference in Bayesian Deep Learning
HEAR 2021: Holistic Evaluation of Audio Representations
IGLU: Interactive Grounded Language Understanding in a Collaborative Environment
Image Similarity Challenge
Learning By Doing: Controlling a Dynamical System using Control Theory, Reinforcement Learning, or Causality
Machine Learning for Combinatorial Optimization
Machine Learning for Mechanical Ventilation Control
MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains
Multimodal Single-Cell Data Integration
Open Catalyst Challenge
Real Robot Challenge II
Reconnaissance Blind Chess
Shifts Challenge: Robustness and Uncertainty under Real-World Distributional Shift
The AI Driving Olympics
The BEETL 2021 Competition: Benchmarks for EEG Transfer Learning
The NetHack Challenge
Traffic4cast 2021 — Temporal and Spatial Few-Shot Transfer Learning in Traffic Map Movie Forecasting
VisDA21: Visual Domain Adaptation
WebQA Competition
For full details, see: https://neurips.cc/Conferences/2021/CompetitionTrack.
We congratulate all the authors on their accepted proposals, and we thank everyone who submitted a proposal for their fantastic work. We are also very grateful to our program committee for the high-quality reviews that helped us select this year’s competitions.
Alina Beygelzimer, Yann Dauphin, Percy Liang, and Jennifer Wortman Vaughan, NeurIPS 2021 Program Chairs.
The review process for NeurIPS 2021 starts soon! NeurIPS has a long history of experimentation, and this year we are making several changes that we hope will inform and improve the review process going forward. In this blog post, we lay out these changes and the rationale behind them.
First and foremost, we are excited to announce that NeurIPS is shifting the entire reviewing workflow to OpenReview! OpenReview is a flexible platform that allows heavy customization and will be easy to adapt as the needs of the conference evolve. It brings a number of infrastructural improvements, including persistent user profiles that can be self-managed, accountability in conflict-of-interest declarations, and improved modes of interaction between members of the program committee. We are hugely grateful to the OpenReview team, which is working hard in preparation for NeurIPS 2021, and to the CMT team, which diligently supported NeurIPS for many years.
As in previous years, the review process will be confidential. Submissions under review will be visible only to assigned program committee members, and we will not solicit comments from the general public during the review process. After the notification deadline, accepted papers will be made public and open for non-anonymous public commenting. Their anonymous reviews, meta-reviews, and author responses will also be made public. All internal discussions will remain confidential both during and after the reviewing process.
By default, rejected submissions will not be made public. However, authors of rejected submissions will have two weeks after the notification deadline to opt in to make their de-anonymized papers public and open for commenting in OpenReview. Choosing to make the submission public will also open up the (anonymous) reviews, meta-reviews, and any discussion with the authors for these papers. While we don’t reap the full benefits of an open reviewing system — in particular, discouraging authors from submitting their work prematurely — this policy does give authors a mechanism to publicly flag and expose potential problems with the review process. We felt this was the best compromise as it spares junior researchers from discouraging and potentially harmful public criticism.
Based on the results of the NeurIPS 2020 experiment, which showed that roughly 6% of desk-rejected submissions would have been accepted had they gone through the full review process, and on the amount of time that area chairs devoted to desk rejections, we decided not to continue desk rejections this year. Papers may still be rejected without review if they violate the page limit or other submission requirements.
To minimize the chance of misunderstandings during the reviewing process, we will allow for a rolling discussion between authors and reviewers after initial reviews and author responses are submitted. During the rolling discussion phase, authors may respond to reviewer questions that arise. To give authors a chance to carefully think through their reviews and how they would like to respond, there will still be a demarcated response phase before discussions start. If new reviews are added during the discussion period (including ethics reviews), authors will have an opportunity to respond to those as well.
To discourage resubmissions without substantial changes, authors will be asked to declare whether a previous version of their submission has been rejected at any peer-reviewed venue. Like last year, authors of resubmissions must submit a description of the improvements they’ve made to the paper since the previous version. Within the machine learning community, there has been some inconclusive evidence of resubmission bias in reviews. In a small-scale study of novice reviewers, reviewers gave lower scores to papers labeled as resubmissions. On the other hand, COLT 2019 allowed papers submitted to STOC to be simultaneously submitted to COLT and withdrawn if accepted at STOC. While it was not a randomized experiment, there was no evidence of resubmission bias despite STOC reviews being shared with COLT reviewers for these submissions. To allow us to evaluate resubmission bias at scale, resubmission information will be visible to reviewers and area chairs only for a randomly chosen subset of submissions.
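To make the experimental design concrete, here is a minimal sketch of how such a randomized reveal could be implemented. The 50% treatment fraction, the seed, and the paper IDs are our own illustrative assumptions, not the actual mechanism.

```python
import random

# Illustrative sketch of the randomized experiment described above.
# The treatment fraction (50%), seed, and paper IDs are assumptions.
random.seed(2021)
resubmissions = [f"paper_{i:04d}" for i in range(1, 101)]  # hypothetical IDs
treated = set(random.sample(resubmissions, k=len(resubmissions) // 2))

for paper in resubmissions:
    # Resubmission info is shown to reviewers only for the treated subset;
    # comparing review outcomes across the two arms estimates resubmission bias.
    reveal_resubmission_info = paper in treated
```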
Of course, the success of the review process ultimately depends on the quality of reviews and the hard work of the NeurIPS community. Submissions have been growing at a rate of roughly 40% per year for the last four years, with over 9,400 full paper submissions in 2020. Taking into account the increases seen by ICML 2021 (around 10%) and ICLR 2021 (around 15%), we are preparing for the possibility of 12,000 submissions. Without the work of dedicated reviewers, area chairs, and senior area chairs, the conference simply would not be able to continue, so it is critically important that members of the community take on these roles when asked.
To help reviewers who are new to the NeurIPS community, we are preparing a tutorial on writing reviews as a training resource. The tutorial will take the form of a short set of slides, and we encourage all new reviewers to read through it to understand what is expected of a NeurIPS reviewer.
We look forward to your submissions! If you’re planning to submit, make sure to read our last blog post introducing the NeurIPS paper checklist and the ethics review process, and stay tuned for more!
There are no good models without good data (Sambasivan et al. 2021). The vast majority of the NeurIPS community focuses on algorithm design, but often can’t easily find good datasets to evaluate their algorithms in a way that is maximally useful for the community and/or practitioners. Hence, many researchers resort to data that is conveniently available but not representative of real applications. For instance, many algorithms are only evaluated on toy problems, or on data that is plagued with bias, which can lead to biased models or misleading results, and subsequent public criticism of the field (Paullada et al. 2020).
Researchers are often incentivized to benchmark their methods on a handful of popular datasets that are well established in the field, with state-of-the-art results on these key benchmark datasets helping to secure a paper acceptance. Conversely, evaluations on lesser-known real-world datasets, and other benchmarking efforts to connect models to real-world impacts, are often harder to publish and are consequently devalued within the field.
In all, there are currently not enough incentives at NeurIPS to work and publish on data and benchmarks, as evidenced by the lack of papers on this topic. In recent NeurIPS conferences, very few (fewer than 5) accepted papers per year focus on proposing new datasets, and only about 10 focus on systematic benchmarking of algorithms across a wide range of datasets. This is partially due to publishing and reviewing guidelines that are meaningful for algorithmic papers but less so for dataset and benchmark papers. For instance, datasets often cannot be reviewed in a double-blind fashion, but they do require additional specific checks, such as a proper description of how the data was collected, whether it shows intrinsic bias, and whether it will remain accessible.
We therefore propose a new track at NeurIPS as an incubator to bootstrap publication on data and benchmarks. It will serve as a venue for publications, talks, and posters, as well as a forum for discussions on how to improve dataset development and data-oriented work more broadly. Submissions to the track will be part of the NeurIPS conference, presented alongside the main conference papers, and published in associated proceedings hosted on the NeurIPS website, next to the main conference proceedings. Submissions to this track will be reviewed according to a set of stringent criteria specifically designed for datasets and benchmarks. Alongside a scientific paper, authors must also submit supplementary materials that provide full detail on how the data was collected and organized, what kind of information it contains, how it should be used ethically and responsibly, and how it will be made available and maintained. Authors are free to describe this to the best of their ability. For instance, dataset papers could make use of dataset documentation frameworks, such as datasheets for datasets, dataset nutrition labels, data statements for NLP, and accountability frameworks. For benchmarks, best practices for reproducibility should be followed.
In addition, we welcome submissions that detail advanced practices in data collection and curation that are of general interest even if the data itself cannot be shared. Audits of existing datasets, or systematic analysis of existing systems on novel datasets that yield important new insight are also in scope. As part of this track, we aim to gather advice on best practices in constructing, documenting, and using datasets, including examples of known exemplary as well as problematic datasets, and create a website that makes this information easily accessible.
Different from other tracks, we will require single-blind review, since datasets cannot always be transferred to an anonymous platform. We leave the choice of hosting platform to the creators, but make it clear that publication comes with certain responsibilities, especially that the data remain accessible (possibly through a curated interface) and that the authors bear responsibility for its maintenance (e.g., resolving rights violations).
There are some existing related efforts in the broader community, such as dataset descriptors (e.g., Nature Scientific Data) or papers on the state of the AI field (e.g., the AI Index Report). However, dataset journals tend to focus purely on the data and less on its relation to machine learning, and projects such as the AI Index are very broad and do not focus on new experimental evaluations or technical improvements of such evaluations. This track will bring together and span these related efforts from a machine learning-centric perspective. We anticipate the output to be a rich body of publications around topics such as new datasets and benchmarks, novel analysis of datasets and data curation methods, evaluation and metrics, and societal impacts such as ethics considerations.
If you have exciting datasets, benchmarks, or ideas to share, we warmly welcome you to submit to this new track. To allow near-continuous submission, we will have two deadlines this year: the 4th of June and the 23rd of August 2021. Submissions will be reviewed through OpenReview to facilitate additional public discussion, and the most appreciated submissions will also be featured in an inaugural symposium at NeurIPS 2021. Please see the call for papers for further details.
We would like to thank Emily Denton, Isabelle Guyon, Neil Lawrence, Marc’Aurelio Ranzato, and Olga Russakovsky for their valued feedback on this blog post.
Alina Beygelzimer, Yann Dauphin, Percy Liang, and Jennifer Wortman Vaughan, NeurIPS 2021 Program Chairs
Welcome! This is the first in a series of blog posts that will take you behind the scenes to explore the organization, review process, and program for NeurIPS 2021.
As Program Chairs, we have spent the past few months immersing ourselves in conference planning. We’ve begun recruiting a program committee (please say yes if we reach out to you!), finalizing the details of the call for papers and review process (more on that in our next blog post…), planning a stellar line-up of invited speakers, and engaging with the broader NeurIPS community to understand what we can do to make this year’s conference stronger than ever. We are thrilled and honored to have this opportunity to serve our community and are excited about the year ahead.
And what an exciting time it is! Machine learning impacts nearly every aspect of our day-to-day lives, from the news we see and movies we watch to our healthcare and education, all the way to whether we are offered a job or given a loan. Submissions to NeurIPS have been growing at a rate of roughly 40% per year for the last five years, with over 9,400 full paper submissions in 2020. With this rapid growth and endless opportunity for impact, it is increasingly important that we as a field continually revisit and examine our norms, our values, and the effect that we want our research to have on the world.
In 2019, NeurIPS introduced a reproducibility program, consisting of a code submission policy, a community-wide reproducibility challenge, and the inclusion of a reproducibility checklist as part of the paper submission process. Last year, NeurIPS took another important step, introducing the inclusion of broader impact statements in submissions along with a new ethics review process. We were thrilled to see these advances made and believe they represent a huge step forward for the community.
NeurIPS has a long history of experimentation. In that tradition, we wondered whether there might be a way to build on innovations like the reproducibility checklist and broader impact statements, but expand the scope to include other facets of responsible machine learning research and increase integration with the paper-writing process. We read the author feedback from the NeurIPS 2020 survey, listened to the thoughtful perspectives presented at the NeurIPS 2020 broader impacts workshop, explored similar efforts taking place in other communities, and talked with researchers both within and outside the NeurIPS community who have thought long and hard about these issues. It became clear that authors want both more guidance around how to perform machine learning research responsibly and more flexibility in how they discuss this in their papers.
Taking this feedback into account, we landed on the idea of the NeurIPS Paper Checklist. The NeurIPS Paper Checklist is designed to encourage best practices for responsible machine learning research, taking into consideration reproducibility, transparency, research ethics, and societal impact. Our goal is to encourage authors to think about, hopefully address, but at least document the completeness, soundness, limitations, and potential negative societal impact of their work. We want to place minimal burden on authors, giving authors flexibility in how they choose to address the items in the checklist, while providing structure and guidance to help authors be attentive to knowledge gaps and surface issues that they might not have otherwise considered.
Most questions in the checklist are framed in terms of transparency. For example, “Did you describe the limitations of your work?” or “Did you include the code, data, and instructions needed to reproduce the main experimental results?” A response of “yes” is generally preferable to a response of “no,” but it’s fine to say “no” in some cases — this is expected and not grounds for rejection. Authors have the option of adding a short justification for each answer and a pointer to the relevant sections of their paper. While the questions are phrased in a binary way, there will of course be some gray areas, and we encourage authors to simply use their best judgement. Completing the checklist is required for all full paper submissions, but some questions are genuinely not applicable and can be marked “n/a” without much additional work.
In designing the checklist, one of our guiding principles was to increase integration with the paper-writing process and encourage authors to think through responsible research practices early on. Because of this, we decided to incorporate the checklist directly into the LaTeX template included in the style files. For initial full paper submissions, the questions and answers will show up in a standardized format at the end of the PDF, after the references. This will make it easier for authors to notice the checklist and prepare their answers while writing the paper, and will allow authors to link to particular sections of the paper directly in their checklist answers. It will also make it easier for reviewers to take the checklist into account. For accepted papers, authors are encouraged — though not required — to include the checklist as an appendix. The checklist itself will not count towards the page limit for either initial submissions or accepted papers.
Building on the broader impact statements required for NeurIPS 2020, the checklist prompts authors to reflect on the potential negative societal impact of their work. Examples might include potential malicious or unintended uses like disinformation or surveillance, environmental impact from training huge models, fairness considerations, privacy considerations, and security considerations. Whereas NeurIPS previously required a stand-alone broader impacts section, this year we are letting authors decide where to most naturally place a discussion of potential negative societal impacts in their paper. And while the broader impacts section previously did not count towards the page limit, it must now fit within the page limit. However, we have extended the page limit from 8 pages to 9 pages (with an additional page allowed after acceptance) and encourage authors to prioritize using this space to address societal impacts.
Reviewers will be given clear guidance on how they should take the checklist into account. This guidance will make explicit that it’s okay to answer no to some questions, and that authors should be rewarded, rather than punished, for being up front about the limitations and potential negative societal impact of their work.
Separate from the checklist, ethics reviews will continue this year. During the review process, papers may be flagged for ethics review and sent to an ethics review committee for comments. These comments will be considered by the primary Reviewers and Area Chair as part of their deliberation and visible to authors, who will have an opportunity to respond. Ethics reviewers do not have the authority to reject papers, but in extreme cases papers may be rejected by the Program Chairs (that is, us) on ethical grounds. We are working with this year’s General Chair, Marc’Aurelio Ranzato, and a small committee of experts to create a set of ethics review criteria that will be made public in advance of the paper submission deadline.
The NeurIPS Paper Checklist and processes around it were developed with input from dozens of researchers in the NeurIPS community as well as experts in AI ethics and responsible machine learning. We took inspiration (and in some cases, exact wording) from the machine learning reproducibility checklist, responsible AI documentation efforts including datasheets for datasets and model cards, ACM’s guidance on reporting negative impacts, and guidelines from other conferences including the NAACL ethics review questions. We iterated extensively on the contents of the checklist and piloted both the questions and style file with community volunteers, aiming to balance simplicity with thoroughness. We are immensely grateful to everyone who provided feedback, and especially those who took the time to try out the checklist on their own research papers. Still, we acknowledge that we’re trying something new and it won’t be perfect — we hope that future Program Chairs will continue to improve and evolve the checklist in subsequent years. We are lucky to be part of a community that embraces experimentation as we believe this is the way to make progress.
Douwe Kiela, Barbara Caputo, Marco Ciccone, NeurIPS 2021 Competitions Chairs
This year will mark the fifth year of NeurIPS having a dedicated competitions track!
The competition and demo chairs for 2021 are Douwe Kiela, Research Scientist at Facebook AI Research (FAIR), and Barbara Caputo, Full Professor at Politecnico di Torino, where she leads the Visual and Multimodal Applied Learning (VANDAL) group, with the help of Marco Ciccone, Ph.D. candidate at Politecnico di Milano.
Douwe brings his experience as co-organizer of last year’s Hateful Memes Challenge, an ambitious project with 100,000 USD in prize money and the aim of improving multimodal reasoning and understanding with a clear societal benefit. He works in natural language processing and multimodal machine learning. Last year, Douwe was very impressed by the broad range of competitions organized at NeurIPS and the very strong sense of community that emerges from collectively organizing and participating in a competition.
Barbara is an expert in computer vision, and she is on a mission to enable robots to learn autonomously about objects in an open-ended way. She brings her experience as an organizer, for three years, of the ImageCLEF challenge, the evaluation forum launched in 2003 for cross-language annotation and image retrieval. Barbara was greatly impressed by the quality of past competitions and believes in their crucial role in scaling machine learning algorithms, especially for acting safely in the real world.
Marco is thrilled to contribute to NeurIPS and to help Douwe and Barbara with the organization of the track. During his Ph.D., he actively participated in challenges on video object segmentation and adversarial robustness, which he considers fundamental testbeds for understanding the strengths and weaknesses of his research.
What’s changing?
Not much! Hugo and Katja did an amazing job last year, and we hope to continue building on that. We’d like to try to make sure that the whole world is represented in the challenges that we organize, especially promoting causes that use AI to help the most disadvantaged people in the world. Technological advances in our field have the potential to disproportionately hurt the most marginalized people in society — including people of color, people from working-class backgrounds, women, and LGBTQ people. We believe that these communities must be centered in the work we do as a research community. Hence, we very strongly encourage proposals from people with these identities or who are members of other marginalized communities, as well as proposals expressly designed to benefit these groups.
How does a competition work?
It’s quite simple, really: after the submission date, competition proposals are reviewed, and accepted competitions will be announced to the public in early May. Generally, the competitions will run from then until the end of October. Competition organizers supply any necessary data and/or additional resources, and take on the job of making the competition a success. During the conference, competitions will have dedicated sessions where winners, organizers, and other participants can discuss the competition outcomes.
Competitions are a great way to build a research community around a topic, to encourage people to work on things that you care about and to just generally have fun tackling interesting problems together.
Where can I find the call?
The call is available here. Submissions for proposals are now open on CMT. Don’t miss the deadline: 31st March 2021, 23:59 AoE!
NeurIPS 2020 was held online in December 2020. Attendees had reported mixed experiences with poster sessions from previous online conferences, and found mentor-mentee matching very valuable. We aimed to craft a smoother experience by re-designing poster sessions and making mentorship connections easier through a new matching platform. We wanted to lean into the unique opportunities afforded by the virtual format, which allows for blindingly fast browsing compared to tired feet, while preserving the serendipity of unplanned discovery of many colleagues wandering in a shared space.
Deciding on Gather Town for poster sessions
ICML and ICLR had held their poster sessions as Zoom calls for each poster. Feedback from attendees and presenters was mixed: presenters were often alone for two hours in Zoom calls that attendees were wary of joining for fear of being alone with the presenter, and attendees had no general snapshot of what other people were seeing at any given time.
The affinity workshops at ICML had experimented with Gather Town for poster sessions. Some of the feedback was very promising: having posters in a shared virtual space where people could walk their avatars around was well-received and solved some of the issues of Zoom calls. However, the set-up of poster rooms had been somewhat chaotic, and disconnection and attendance control issues disrupted the sessions. The scale of NeurIPS was massively larger than any poster session Gather Town had ever hosted, with nearly 2000 posters and registration projected to be above 20,000 people.
Early talks with Gather Town seemed promising: the set-up could be automated by using the API to populate poster rooms, access could be controlled through SSO (Single Sign On) so that only registered, logged in NeurIPS attendees could enter the towns, and new towns could be designed to solve many of the interaction problems that had surfaced from previous Gather Town uses, such as people not knowing how to find a given poster, or not seeing the boundaries for interaction zones for each poster. Scalability concerns could be addressed by setting up multiple poster rooms with each room having its own dedicated server space, to guarantee smooth running for a given capacity.
We decided on a hybrid solution using both Gather Town and Zoom calls. Each poster would get both a presentation spot in Gather Town and a dedicated Zoom call accessible from both the NeurIPS website and the poster town. Zoom was a battle-tested, scalable solution with a robust record that could handle heavy attendance in case of a crowded poster. If Gather Town was plagued with unforeseen issues on the day of the session, the Zoom calls could act as a familiar back-up.
Setting up the poster sessions before the conference
The first challenge was setting up the towns in advance of the conference. We worked from a target of smoothly accommodating up to 10,000 simultaneous attendees for the poster sessions. We did not expect simultaneous attendance to ever be that high, but we wanted to be prepared in case it was (it turns out simultaneous attendance did not go over 2,000). Towns in Gather Town can support 2,000 people, but Gather engineers recommended that we aim for 400 people per town for the smoothest interactions with heavy video and poster use. For each of the 7 poster sessions, Program Chairs painstakingly clustered the posters into 11 or 25 thematic clusters of up to 20 posters each; the morning (in US timezones) sessions were preferred by roughly twice as many people as the evening ones, and so received the larger number of clusters.
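For a rough sense of the sizing arithmetic, here is a back-of-the-envelope sketch using only the numbers quoted above:

```python
import math

# Back-of-the-envelope capacity sizing, using only the numbers quoted above.
target_attendees = 10_000  # planned simultaneous capacity for one session
per_town_comfort = 400     # Gather's recommendation under heavy video/poster load
posters_per_room = 20      # one thematic cluster per poster room

towns_needed = math.ceil(target_attendees / per_town_comfort)
print(towns_needed)  # 25 -- matching the 25 clusters used for the busier sessions
```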
Gather Town worked with us to produce a new custom poster room template that would address feedback from previous conferences, and a “hyperroom” that would allow attendees to go from one poster room to another. We can’t stress enough how valuable early usability tests were, allowing us to iterate and arrive at a better final custom poster room template: posters were clearly separated by markings on the carpet of the poster room, making it possible to see which attendees were listening to a poster and which were simply wandering; each poster had a clearly marked presenter spot so the presenter was easy to find; people could teleport directly to the poster of their choice from the NeurIPS website; and a coordinate system allowed people to locate a poster of interest once they were in a room.
The final custom poster-room template
We then wrote the code to automatically create and populate the poster sessions. The sparse documentation for the Gather Town API and the fact that most of us are not web programmers made this step far from frictionless, even though Gather Town engineers got us started by sharing a sample script for populating a poster room. Thankfully, this was alleviated by their constant availability to answer our questions on Slack.
We hope to make this stage a lot easier for organizers in the future by releasing our poster session setup code within the MiniConf repo, at https://github.com/Mini-Conf/Mini-Conf/tree/master/gather. This code creates poster sessions as sets of poster towns organized in a garden.
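For a flavor of what such a script does, here is a minimal sketch of populating one room over an HTTP map API. The endpoint names, parameters, and object schema below are our assumptions for illustration; the released repo above contains the code actually used.

```python
import requests

# Illustrative sketch only: endpoint names, parameters, and the object schema
# are assumptions; see the MiniConf repo above for the code actually used.
API = "https://gather.town/api"
API_KEY = "YOUR_API_KEY"             # hypothetical credentials
SPACE_ID = "spaceId\\PosterRoom1"    # hypothetical space identifier
MAP_ID = "poster-room"

def get_map():
    """Download the current map JSON for one poster room."""
    r = requests.get(f"{API}/getMap",
                     params={"apiKey": API_KEY, "spaceId": SPACE_ID, "mapId": MAP_ID})
    r.raise_for_status()
    return r.json()

def set_map(map_content):
    """Upload the modified map JSON back to the room."""
    r = requests.post(f"{API}/setMap",
                      json={"apiKey": API_KEY, "spaceId": SPACE_ID,
                            "mapId": MAP_ID, "mapContent": map_content})
    r.raise_for_status()

# Hypothetical poster metadata exported from the conference database.
posters = [{"uid": 42, "image_url": "https://example.org/poster_42.png", "x": 10, "y": 5}]

room = get_map()
for p in posters:
    # Attach each poster image to an interactable object at its assigned slot.
    room.setdefault("objects", []).append({
        "type": 1, "x": p["x"], "y": p["y"],
        "properties": {"image": p["image_url"], "posterUid": p["uid"]},
    })
set_map(room)
```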
Once the poster sessions were set up, we reached out to poster presenters to give them a chance to check how their posters looked in advance of the conference. The advance checks turned out to be very important: about 10% of posters got re-uploaded in the first iteration, and some posters needed to be re-uploaded several times before authors were satisfied with the result.
During and after the conference
Attendees who went to poster sessions gave overall positive feedback: according to a post-conference survey, 75% of people who attended poster sessions enjoyed the experience, while 8% didn’t. Attendees had lively interactions, reported being satisfied with the ability to instantly see which posters other attendees were visiting, and enjoyed randomly bumping into colleagues. Interactions were mostly smooth thanks to the generous capacity we had budgeted for. Gather Town engineers were available throughout the poster sessions to assist attendees with any problems, in a dedicated support booth we had set up for them in the poster session garden.
Garden from which all poster rooms could be accessed, with the Gather Town support booth
There were many fires that had us working down to the wire: sometimes because of our own mistakes trying to ship last-minute improvements, and sometimes due to people’s documented aversion to reading instructions on how to check and update their posters (even a one-liner with bolded words in capital letters) or to watching videos in advance about how to interact with the towns, with a stark preference for the wonder of just-in-time exploration and trial and error. But we discovered that communication plastered on the floor, close to where people were going to need the help, was an effective megaphone.
Empirically, over 10% of people who receive these instructions will not follow them and will email to ask why the update didn’t work.
Trying to inform people as quickly as possible when we messed up by pasting it on the floor
We also used Gather Town to set up a social space for attendees to hang out in. We were somewhat worried that the towns could be overwhelmed by the larger-than-expected registration numbers at NeurIPS, so we didn’t overly advertise the social space or schedule specific events there. We split the social space into several separate areas to ensure that it could support a large capacity, with a hangout space for smaller group discussions, a lobby, and a wellness garden with relaxation, workout, yoga, and cute animal videos, as well as access to the poster sessions. However, this splitting meant that attendees in the social space were too spread out and couldn’t see each other when they were in different areas. In the future, it might work better to build incrementally: start from a small, more compact space, and dynamically add more rooms if the first space is at capacity.
Map of the social space: Gather Cafe
Overall, we are happy with our decision to use Gather Town, and we have learned a lot from this attempt. Attendees have given us precious feedback and suggestions, for example expressing an appetite for slightly more structure in the unstructured interactions: scheduled social time in the social space so that people can expect others to be there at the same time, or dedicated space for people who welcome random chats with strangers. We think it could support even richer unplanned interactions in future conferences, and we will keep exploring how to balance the uncertainty of who is going to show up and how they will behave with the excitement of experimenting with new ways to help people connect.
Creating a mentor platform
This was a joint effort of our team and Marc Deisenroth, Emtiyaz Khan, Cheng Soon Ong, Adam White, and Olga Isupova to enable mentorship opportunities for researchers in machine learning, both as mentors and mentees, with a special focus on underrepresented minorities. We think that these mentor sessions are a beneficial outcome of the need to go online. At physical conferences, it was harder to find a small-group setting for meeting mid-career or senior members of research communities; in Mementor, it is much easier.
Scheduling a mentor session is easy. To limit misuse, the video link is only available 30 min before start.
After experiences with spontaneous mentorship sessions at ICLR and ICML, we thought of creating our own platform specifically to support this idea. We wanted to create a platform that could not only be reused from conference to conference but also host these gatherings throughout the year. For mentors, the platform should be an easy way to disseminate information about a mentor session, and for mentees, an easy way to stay informed. Scheduling a meeting requires some essential information and a topic — which can be specific, like “ML in Health Care”, or general, like “Ph.D. advice”. The scheduling is announced via email to all subscribers.
The web portal was released and tested during NeurIPS 2020. It is available at https://mementor.net.
Mentor sessions are announced via email and are shown on the landing page. Events can be exported to iCal.
Y-Lan Boureau, Facebook AI Research, and Hendrik Strobelt, MIT-IBM Watson AI Lab at IBM Research
NeurIPS 2020 Online Experience Chairs
The NeurIPS Foundation believes in the principles of ethics, fairness, and inclusivity, and is dedicated to providing a safe space where research can be shared, reviewed, and debated by the AI / ML community.
Having observed recent discussions taking place across social media, we feel the need to reiterate that, as a community, we must be mindful of the impact that statements and actions have on our peers, and future generations of AI / ML students and researchers.
It is incumbent upon NeurIPS and the AI / ML community as a whole to foster a collaborative, welcoming environment for all. Therefore, statements and actions contrary to the NeurIPS mission and its Code of Conduct cannot and will not be tolerated. For any conference attendee failing to abide by our Code of Conduct, NeurIPS reserves the right to rescind that person’s ability to participate in any future NeurIPS-organized events.
NeurIPS’ Code of Conduct clearly outlines prohibited behaviors and subsequent corrective actions in detail. As an organization dedicated to fostering open and productive dialog among the greater AI / ML community, NeurIPS gives accusations of bullying, harassment, and discrimination the utmost attention and care; such behaviors will not be tolerated.
In this blog post, we are excited to announce the various awards that are presented at NeurIPS 2020 and to share information about the selection processes for these awards.
NeurIPS 2020 Best Paper Awards
The winners of the NeurIPS 2020 Best Paper Awards are:
No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium by Andrea Celli (Politecnico di Milano), Alberto Marchesi (Politecnico di Milano), Gabriele Farina (Carnegie Mellon University), and Nicola Gatti (Politecnico di Milano). This paper will be presented on Tuesday, December 8th at 6:00 AM PST in the Learning Theory track.
Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method by Michal Derezinski (UC Berkeley), Rajiv Khanna (UC Berkeley), and Michael W. Mahoney (UC Berkeley). This paper will be presented on Wednesday, Dec 9th, at 6:00 PM PST in the Learning Theory track.
Language Models are Few-Shot Learners by Tom B. Brown (OpenAI), Benjamin Mann (OpenAI), Nick Ryder (OpenAI), Melanie Subbiah (OpenAI), Jared D. Kaplan (Johns Hopkins University), Prafulla Dhariwal (OpenAI), Arvind Neelakantan (OpenAI), Pranav Shyam (OpenAI), Girish Sastry (OpenAI), Amanda Askell (OpenAI), Sandhini Agarwal (OpenAI), Ariel Herbert-Voss (OpenAI), Gretchen M. Krueger (OpenAI), Tom Henighan (OpenAI), Rewon Child (OpenAI), Aditya Ramesh (OpenAI), Daniel Ziegler (OpenAI), Jeffrey Wu (OpenAI), Clemens Winter (OpenAI), Chris Hesse (OpenAI), Mark Chen (OpenAI), Eric Sigler (OpenAI), Mateusz Litwin (OpenAI), Scott Gray (OpenAI), Benjamin Chess (OpenAI), Jack Clark (OpenAI), Christopher Berner (OpenAI), Sam McCandlish (OpenAI), Alec Radford (OpenAI), Ilya Sutskever (OpenAI), and Dario Amodei (OpenAI). This paper will be presented on Monday December 7th at 6:00 PM PST in the Language/Audio Applications track.
In selecting winning papers, the committee used the following review criteria: Does the paper have the potential to endure? Does it provide new (and hopefully deep) insights? Is it creative and unexpected? Might it change the way people think in the future? Is it rigorous and elegant, without over-claiming its significance? Is it scientific and reproducible? Does it accurately describe the broader impact of the research?
To select the winners of the NeurIPS Best Paper Awards, the award committee went through a rigorous two-stage selection process:
In the first stage of the process, the 30 NeurIPS submissions with the highest review scores were read by two committee members. Committee members also read the corresponding paper reviews and rebuttal. Based on this investigation, the committee selected nine papers that stood out according to the reviewing criteria.
In the second stage of the process, all committee members read the nine papers on the shortlist and ranked them according to the review criteria. Next, the committee met virtually to discuss the highest-ranking papers and finalize the selection of award recipients.
In particular, the committee provided the following motivation for selecting the three winning papers:
No-Regret Learning Dynamics for Extensive-Form Correlated Equilibrium. Correlated equilibria (CE) are easy to compute and can attain a social welfare that is much higher than that of the better-known Nash equilibria. In normal form games, a surprising feature of CE is that they can be found by simple and decentralized algorithms minimizing a specific notion of regret (the so-called internal regret). This paper shows the existence of such regret-minimizing algorithms that converge to CE in a much larger class of games: namely, the extensive-form (or tree-form) games. This result solves a long-standing open problem at the interface of game theory, computer science, and economics and can have substantial impact on games that involve a mediator, for example, on efficient traffic routing via navigation apps.
Improved Guarantees and a Multiple-Descent Curve for Column Subset Selection and the Nyström Method. Selecting a small but representative subset of column vectors from a large matrix is a hard combinatorial problem, and a method based on cardinality-constrained determinantal point processes is known to give a practical approximate solution. This paper derives new upper and lower bounds for the approximation factor of the approximate solution over the best possible low-rank approximation, which can even capture the multiple-descent behavior with respect to the subset size. The paper further extends the analysis to obtaining guarantees for the Nyström method. Since these approximation techniques have been widely employed in machine learning, this paper is expected to have substantial impact and give new insight into, for example, kernel methods, feature selection, and the double-descent behavior of neural networks.
Language Models are Few-Shot Learners. Language models form the backbone of modern techniques for solving a range of problems in natural language processing. The paper shows that when such language models are scaled up to an unprecedented number of parameters, the language model itself can be used as a few-shot learner that achieves very competitive performance on many of these problems without any additional training. This is a very surprising result that is expected to have substantial impact in the field, and that is likely to withstand the test of time. In addition to the scientific contribution of the work, the paper also presents a very extensive and thoughtful exposition of the broader impact of the work, which may serve as an example to the NeurIPS community on how to think about the real-world impact of the research performed by the community.
Test of Time Award
We also continued the tradition of selecting a paper published at NeurIPS about a decade ago that was deemed to have had a particularly significant and lasting impact on our community. We are delighted to announce that the winner of the NeurIPS 2020 Test of Time Award is HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent, published in NeurIPS 2011 and authored by Feng Niu, Benjamin Recht, Christopher Re, and Stephen Wright.
This paper was the first to show how to parallelize the ubiquitously used Stochastic Gradient Descent algorithm without any locking mechanism while achieving strong performance guarantees. At the time, several researchers proposed ways to parallelize SGD, but they all required memory locking and synchronization across the different workers. This paper proposed a simple strategy for sparse problems called Hogwild!: have each worker concurrently run SGD on a different subset of the data and perform fully asynchronous updates in the shared memory hosting the parameters of the model. Through both theory and experiments, they demonstrated that Hogwild! achieves a near linear speedup with the number of processors on data satisfying appropriate sparsity conditions.
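For readers who want the idea in code, here is a minimal Hogwild!-style sketch (an illustrative toy of ours, not the authors’ implementation): several processes run SGD on a sparse least-squares problem and write to one shared parameter vector with no locks. The dimensions, sparsity level, and learning rate are assumptions chosen for the toy.

```python
import numpy as np
from multiprocessing import Process, RawArray

# Toy Hogwild!-style asynchronous SGD: workers share one parameter vector
# (no locks) and each applies sparse least-squares updates independently.
# Dimensions, sparsity, and learning rate are illustrative assumptions.
D, STEPS, LR, NNZ = 1_000, 20_000, 0.05, 5

def worker(shared_w, seed):
    rng = np.random.default_rng(seed)
    w = np.frombuffer(shared_w)  # zero-copy view onto the shared memory
    for _ in range(STEPS):
        idx = rng.choice(D, size=NNZ, replace=False)  # sparse active features
        x = rng.standard_normal(NNZ)
        y = x.sum()                      # target from a planted model w* = 1
        grad = (w[idx] @ x - y) * x      # gradient of 0.5 * (w.x - y)^2
        w[idx] -= LR * grad              # unsynchronized, lock-free write

if __name__ == "__main__":
    shared_w = RawArray("d", D)  # shared doubles, initialized to zero
    procs = [Process(target=worker, args=(shared_w, s)) for s in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    w = np.frombuffer(shared_w)
    print("mean weight (should approach 1):", w.mean())
```

The punchline is the last line of the worker loop: the write to shared memory happens without any mutex, and for sufficiently sparse data, collisions between workers are rare enough that convergence survives the occasional overwritten update.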
You can find more about the paper and its impact by attending the Test of Time talk on Wednesday December 9th at 6:00 AM PST in the Optimization track.
Selection process. We identified a list of 12 papers published at NeurIPS about a decade ago (NeurIPS 2009, NeurIPS 2010, NeurIPS 2011). These were the papers from those NeurIPS editions with the highest numbers of citations since their publication. We also collected data about the recent citation counts for each of these papers by aggregating the citations that these papers received in the past two years at NeurIPS, ICML, and ICLR. We then asked the whole senior program committee (64 SACs) to vote on up to three of these papers to help us pick an impactful paper about which the whole senior program committee was enthusiastic.
Reviewer Awards
Finally, but equally importantly, we again selected reviewer award winners. We selected the top 10% of reviewers, that is, 730 reviewers, to receive this award. We made the selection based on the average rating of the reviews they entered in the system (where the ratings were provided by the area chairs). We thank all of these reviewers for their outstanding work; as a small token of appreciation, they were given free registration.
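The selection rule is simple enough to state in a few lines; here is a sketch, with reviewer names and ratings entirely made up:

```python
import statistics

# Sketch of the award rule above: rank reviewers by the average area-chair
# rating of their reviews, then take the top 10%. All data here is made up.
ratings = {
    "reviewer_a": [5, 4, 5],
    "reviewer_b": [3, 4, 3],
    "reviewer_c": [5, 5, 4, 5],
    "reviewer_d": [4, 4],
}
avg = {r: statistics.mean(v) for r, v in ratings.items()}
cutoff = max(1, round(0.10 * len(avg)))
winners = sorted(avg, key=avg.get, reverse=True)[:cutoff]
print(winners)
```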
Congratulations to all awardees for their great research or service contribution to our thriving community!
Hsuan-Tien Lin, Maria Florina Balcan, Raia Hadsell and Marc’Aurelio Ranzato
The conference is here, and we’re very excited to bring you a fantastic experience! For the past couple of years, the conference has started to think more actively about how it can communicate both with the public and with the attendee community, and we wanted to give you a quick peek into how those efforts have evolved.
One of our major efforts this year has been this very blog! We hope it’s giving you all a chance to see into the workings of the organizing committee and to better understand the decisions it made and how they were made.
This is the first year that our vetted pool of journalists, who cover machine learning and artificial intelligence professionally, will be able to attend all parts of the conference, including the tutorials and the workshops. We’re very excited about the openness of the conference and our ability to communicate the reality of the research within our own community and to the wider world.
For the second year in a row, NeurIPS is contracting with a public relations firm. This year, our partner is the firm Interprose. A typical goal of PR firms is to help an organization build a brand and visibility with the public. That’s not NeurIPS’s goal. In the case of the conference, public visibility came suddenly and somewhat unexpectedly. Our reason for working with a professional PR firm is that they have experience monitoring and managing crises — situations in which the conference has attracted negative attention, perhaps due to a policy error, a mistake in judgment, or even a simple misunderstanding. NeurIPS’s mission is to support the broader research community in understanding “neural information processing” — the question of how simple computational units go about representing and transforming information. At some level, this topic is esoteric and academic. But, as we know, information processing is at the center of many core societal activities, and neural networks have grown dramatically in their influence. The conference is continually making efforts to improve how it serves the research community and its broader impacts. We’re very happy to be working closely with Interprose on this important objective.
We will also be hosting a Town Hall for the third year in a row, on the Thursday of the conference week. This is a great opportunity for the NeurIPS community to talk about the conference itself: how we can change and how we can be better. Throughout the week, we’d love to hear your questions. Please email them to Townhall@NeurIPS.cc. During the Town Hall, we will take questions that we received earlier and will also take questions live. We will hold two sessions of the Town Hall, one at 4 AM PST and one at 4 PM PST, in an attempt to make it as accessible as possible for all attendees, globally.
For the first time, our program chairs will be communicating with the attendees throughout the week — look for updates and other bulletins as the conference unfolds.
We are looking forward to an amazing week of communication and collaboration and discovery and we can’t wait to see you all, virtually at least, at this year’s NeurIPS Conference.