A choice of contribution types at NeurIPS 2026
The NeurIPS community benefits from a wide diversity of papers and ideas, which can arise and be developed in many different ways. To help cultivate the diversity of papers that belong at NeurIPS, this year the Main Track is asking authors to select a Contribution Type:
- General: We expect that most submissions will fall into this type.
- Theory: The main contribution is via theoretical analyses and proofs.
- Use-Inspired: The main contribution is in framing or designing approaches to meet the needs of a specific real-world application. (This often involves, for example, engaging with domain experts.)
- Concept & Feasibility: The main contribution is a highly novel idea with high potential reward, whose scope goes beyond what can be validated in a single paper. (The significance and originality bar for these contributions is high.)
- Negative Results: The main contribution is in understanding a negative result. (The significance and originality bar for these contributions is high.)
The review form used is the same across all Contribution Types, but the way that reviewing criteria are interpreted will differ across Types. We encourage authors to look at the reviewing guidelines for each Contribution Type in deciding which to select. (By contrast, the Evaluations & Datasets Track and Position Paper Track have different review forms from the Main Track.)
Note that because reviewers will base their recommendations on the Contribution Type, it cannot be changed after submission, either by the authors or by the reviewers/PCs. It is also, of course, not permitted to submit the same or highly similar papers to multiple Contribution Types or Tracks; such papers will be desk rejected.
To help authors select the right Track and Contribution Type for their paper, we include a number of example papers below.
Example papers
Main Track – General
Segment Anything (Kirillov et al. 2023)
This paper introduces a model (SAM) for segmenting images, and a dataset used for training it. The focus of the paper is on the particular model being developed, so the paper would be a better fit for the Main Track (General) than the Evaluations & Datasets Track. The paper does not motivate or evaluate SAM with respect to the specific needs of a real-world use case – therefore the Use-Inspired contribution type would not be a good match.
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (Frankle and Carbin 2019)
This paper empirically investigates the efficacy of training pruned subnetworks of a neural network. While this area of work is related to deep learning theory, the contributions are not theoretical and so Main Track (General) would be the best choice.
Main Track – Theory
Neural Tangent Kernel: Convergence and Generalization in Neural Networks (Jacot et al. 2018)
This paper formally connects neural network training to kernel methods with the introduction of the Neural Tangent Kernel (NTK). While the authors do conduct some illustrative experiments, the focus is on proving theoretical properties of the NTK and analyzing the implications of these results.
A Universal Law of Robustness via Isoperimetry (Bubeck and Sellke, 2021)
This paper is motivated by the significant overparameterization of deep learning models in comparison to the number of datapoints being fit. The authors prove theoretical results on the number of parameters a model needs to smoothly interpolate between datapoints, showing this is significantly more than the number required for interpolation alone, and consider how these results may align with behavior observed in practice.
Main Track – Use-Inspired
Estimating Canopy Height at Scale (Pauls et al. 2024)
This paper considers the application of ML models to the problem of canopy height estimation. While the core models are not novel, the authors contribute an analysis of what makes this problem and data special and introduce a novel loss function to account for these properties.
Probabilistic Emulation of a Global Climate Model with Spherical DYffusion (Rühling Cachay et al. 2024)
This paper considers the problem of emulating the Earth’s climate over long time horizons. The authors introduce Spherical DYffusion, an approach combining diffusion models with Spherical Fourier Neural Operators, in order to address several challenges particular to atmospheric data. Via extensive domain-informed experiments on a single, well-chosen dataset, the authors compare Spherical DYffusion to other models, both from machine learning and from physics.
Main Track – Concept & Feasibility
Capsule networks (Sabour et al., 2017)
This paper introduces a novel type of neural network architecture, capsule networks, which permit dynamic routing of information via a type of attention computed between different capsule units in the network. The paper shows that capsule networks have promising performance on a range of MNIST-based tasks, but explicitly does not attempt to explore the full range of properties they demonstrate on different tasks and datasets, nor to consider all ways that capsule networks may be instantiated. The authors draw parallels between the early stage of capsule network research and that of recurrent networks several years previously – emphasizing that such promising results in early-stage research suggest the value of more work in this area.
Stochastic Variational Inference (Hoffman et al. 2013)
This paper introduces a scalable approximate algorithm for posterior estimation that allows for analyzing large datasets through a Bayesian lens. This is an example of a paper with very broad applicability. The experiments obviously could not cover all Bayesian models, but they provided concrete evidence that the ability to approximately analyze larger datasets yields performance benefits over using traditional inference on smaller datasets.
Main Track – Negative Results
Inherent Trade-Offs in the Fair Determination of Risk Scores (Kleinberg et al. 2017)
This paper investigates the impossibility of satisfying multiple fairness criteria simultaneously. While it relies heavily on a theoretical framing, its main contribution is a rigorously demonstrated and surprising negative result, established without empirical evaluations.
Understanding deep learning requires rethinking generalization (Zhang et al., 2016)
At the time this paper was written, the common wisdom was that generalization in deep learning was similar to generalization in classical models. Through a combination of experimentation with carefully chosen semi-synthetic data and a theoretical analysis of a specific neural architecture, the authors make a compelling argument that the sources of generalization in classical models – limited expressivity and explicit regularization – cannot explain the generalization properties of neural networks. This was their core (negative) result. While they speculate that implicit regularization via SGD may be a critical element, the negative result is the main argument of the paper.
Evaluations & Datasets Track
ImageNet: A large-scale hierarchical image database (Deng et al., 2009)
This paper offers a dataset for computer vision applications with a demonstration of its value in three tasks. The primary contribution is the dataset itself.
Fairness Through Awareness (Dwork et al., 2011)
This paper gives a definition and implementation of individual fairness, with a secondary contribution that provides an algorithm to improve on this metric. The paper relies heavily on a theoretical framework. While the authors could make a case for other options (Main/General given the algorithm development, or Main/Theory), we believe the Evaluations & Datasets Track is most appropriate, given that the main contribution is the definition of a new fairness metric.
The Illusion of Readiness in Health AI (Gu et al., 2025)
This paper evaluates a use-inspired application in healthcare, providing negative results through experimentation. While it combines use-inspired and negative-result elements, its primary focus is on experimental evaluations.
Position Paper Track
Position: Bayesian Deep Learning is Needed in the Age of Large-Scale AI (Papamarkou et al., 2024)
This paper revisits the role of Bayesian deep learning in the modern AI landscape. It argues that uncertainty-aware approaches should play a central role. While it is close to a review paper due to its broad synthesis of existing methods, it advances a clear normative stance on the field’s future direction rather than merely summarizing prior work.
Position: Graph Learning Will Lose Relevance Due To Poor Benchmarks (Bechler-Speicher et al., 2025)
This paper examines the current state of graph learning, arguing that progress in the field is hindered by flawed benchmark datasets and evaluation practices. The position is that benchmarking and evaluation practices in graph learning must be fundamentally rethought. While it is close to an Evaluations & Datasets paper due to its introduction and analysis of concrete benchmarking setups, it focuses on a broader critique and agenda for the field, which is characteristic of a Position Paper.
Position: Probabilistic Modelling is Sufficient for Causal Inference (Mlodozeniec et al., 2025)
This paper challenges the common view that causal inference requires specialized frameworks such as structural causal models or do-calculus. The position is about the foundations and methodology of causal inference. While lying close to a Main Track paper, it does not introduce a concrete algorithmic pipeline as its primary contribution, but instead advances a conceptual stance on how causal inference should be framed.