NeurIPS Supports Authors with Google’s Paper Assistant Tool (PAT)
NeurIPS Program Chairs: Marc Deisenroth, Finale Doshi-Velez, Nika Haghtalab, David Rolnick, Jenna Wiens
Google Research: Rajesh Jayaram, Vincent Cohen-Addad, Drew Tyler, David Woodruff
Google Cloud: Jinsung Yoon, Mihir Parmar, Palash Goyal
Google Sponsors: Corinna Cortes, Vahab Mirrokni, Tomas Pfister, Burak Gokturk
NeurIPS Communication Chair: Jean Kossaifi
Following positive feedback from other venues, like STOC and ICML, NeurIPS is pleased to announce a new initiative, in partnership with Google, that will give authors access to Google's Paper Assistant Tool (PAT) and support them in improving their submissions to NeurIPS 2026.
This program offers authors a limited opportunity to receive free, automated, and actionable feedback on their manuscripts before the final deadline. The feedback is private to the authors and will not be used in the review process; reviewers, area chairs, and program committee members will not have access to it.
What is PAT?
PAT is a specialized, experimental tool powered by Google's Gemini models, utilizing a "reasoning"-focused pipeline similar to those that have achieved high performance on mathematical problem-solving benchmarks. The tool is designed to help authors identify issues that human reviewers might flag, including (but not limited to) experimental and methodological rigor, narrative clarity in English, and technical correctness.
In a pilot at the Annual ACM Symposium on Theory of Computing (STOC; see blog post), 94% of participants found the pre-submission feedback generated by an AI assistant to be helpful, and 85% reported that the feedback improved the clarity of their paper. Following this pilot, PAT was expanded and launched in partnership with ICML (blog), where it had a similarly positive reception. Notably, 35.4% of responding authors with theoretical results reported that the tool identified significant theory gaps that took more than an hour to fix, and 31% of responding authors with experimental results said the feedback prompted them to run new experiments.
While the program at STOC focused heavily on theoretical correctness, the ICML program was specifically tuned to the needs of the machine learning community. After considering the feedback received during the ICML program, including pain points raised by authors, the Google Research team augmented PAT by integrating components of the ScholarPeer system. As a result, compared to the ICML iteration, this version of PAT has:
- Improved search and tool capabilities: allowing for fact checking and reducing the hallucination rate.
- Improved comparison with related work: resulting from the ScholarPeer deep literature review agent. This empowers PAT to take into account related work in its analysis of the paper.
- Strengths / Weaknesses Analysis: the feedback produced by PAT will contain an analysis of the potential strengths and weaknesses of the paper. This will enable authors to improve their paper at a higher level, rather than just addressing individual errors.
As with the ICML and STOC programs, the goal is to help authors improve the quality of their papers, not to replace human peer review. By helping authors fix clarity issues and potential technical gaps before submission, we hope to ensure papers receive actionable feedback before they enter the review process.
Logistics and Eligibility
The program is entirely optional. It operates inside OpenReview, but completely outside the official review process. The program will run for a 7-day window, ending on the Abstract Deadline (May 4th, Anywhere on Earth).
To manage resources fairly, each eligible author is granted one virtual “voucher” to have a single paper run through the AI feedback system. In addition, each paper can be run through the system at most once. In alignment with the system ultimately used during ICML 2026, we define an eligible author as any author whose OpenReview Account was created prior to April 1, 2026.
To redeem this voucher, authors will select a checkbox labeled "Ready for LLM Feedback" on the OpenReview paper submission form to flag the manuscript for AI review. This feature will only work after a PDF has been successfully uploaded to the OpenReview server. The author who checks the box redeems their voucher for the paper once the edit to the submission is saved, and the version of the document at the time of the edit is then sent to the pipeline for automated feedback. If an ineligible author, or an author who has already used their voucher, attempts to select the "Ready for LLM Feedback" checkbox, an error message will appear and the paper will not be sent out for review. Eligible submissions will typically receive feedback within 12 hours of being submitted.
Submission Timing: To ensure system stability and incentivize early submissions, papers submitted earlier in the feedback window are guaranteed the full compute budget of the PAT pipeline. Submissions made very close to the deadline (within 1-2 days) may have their overall compute allocation throttled, depending on demand.
The technical staff will be able to provide limited, best-effort support on the program, such as answering questions or checking for failed paper delivery. Please direct such questions, as well as any feedback you may have on the program, to paper-assistant@google.com.
Privacy and Data Safety
We recognize the sensitivity of unpublished research. Trust is the cornerstone of this experiment, and we have implemented strict protocols to ensure author safety:
- Strict Separation from Peer Review: The AI Feedback is entirely independent of the NeurIPS review process. It is visible only to the authors. Reviewers, Area Chairs, and Program Chairs will have no access to this feedback. Furthermore, the PAT system will not be used in any part of the NeurIPS review process, including the NeurIPS 2026 AI Assisted Reviewing Experiment.
- Stateless Inference (No Training): Submissions will not be used to train, fine-tune, or improve Google’s models. The model operates in a stateless “inference-only” mode; it processes the text to generate feedback and retains no memory of the specific content for future learning.
- Data Destruction: To minimize data exposure, Google will employ a strict deletion policy. All PDFs and feedback submitted to Google are stored in a restricted access environment and are scheduled for permanent deletion within 7 days after the feedback is delivered and the program is completed.
- Restricted Access: No one has access to this data unless there is a technical difficulty, in which case Google staff (Rajesh Jayaram and Drew Tyler from the Google Research Organizing Committee) will only inspect the data (submission PDFs and generated feedback) with explicit author approval.
Caveats and Disclaimers
Like all LLMs, the models used by the PAT pipeline are not infallible. Authors should treat the generated feedback with the same critical eye they would apply to a human review.
- The model may occasionally flag correct statements as errors or miss actual flaws. It is the author’s responsibility to verify the validity of the feedback.
- Note that the model may make suggestions for the paper that you disagree with. This is not necessarily a bug; human reviewers may raise the same points. By considering why you disagree with a suggestion, you may be able to add justification to the paper that preempts such reviewer comments, thereby strengthening the paper.
Outcomes
Our primary objective is to empower authors to elevate the quality of their submissions. The tool acts as a high-precision filter, designed to help authors catch nuanced errors (in proofs, experimental setups, or reasoning) that human reviewers might miss or lack the time to detail.
We believe this tool represents a promising opportunity to test the use of AI to elevate the standard of our own scientific submissions. After the full paper deadline, an anonymous survey will be sent to authors who used PAT, requesting feedback so that Google can improve the tool. Submitting feedback via the survey is optional. We look forward to seeing the results of the program, which will be shared with the broader community via the NeurIPS blog once the program is over.
FAQ
PDF Size: Due to context limitations, very large PDFs containing, for example, multiple high-resolution images or plots may have their images stripped before processing. In that case, PAT runs only on the extracted text of the paper. In extreme cases, the pipeline may fail altogether. We recommend submitting PDFs no larger than 20MB (ideally under 10MB) to the PAT system to ensure the pipeline succeeds.
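Authors who want to pre-flight their file against these size limits can do so with a few lines of code. The sketch below is purely illustrative (the thresholds come from the recommendation above; the function name and verdicts are our own, not part of the PAT system):

```python
# Illustrative pre-flight check: verify a PDF stays within the
# recommended size limits before flagging it for PAT feedback.
# Thresholds follow the FAQ above; verdict strings are hypothetical.
import os

RECOMMENDED_MB = 10   # ideal upper bound suggested above
HARD_LIMIT_MB = 20    # beyond this, images may be stripped

def check_pdf_size(path: str) -> str:
    """Return a verdict for the PDF at `path` based on its size."""
    size_mb = os.path.getsize(path) / (1024 * 1024)
    if size_mb <= RECOMMENDED_MB:
        return "ok"
    if size_mb <= HARD_LIMIT_MB:
        return "large: consider compressing images"
    return "too large: images may be stripped or the pipeline may fail"
```

If the file comes back "large", recompressing embedded figures (e.g. with `ps2pdf` or an image optimizer) before uploading is usually enough.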
Turnaround Time: We expect that feedback will be posted within 1-2 hours of submission to the PAT system. During times of high demand, such as closer to the deadline, the latency may be longer. We strive to have all feedback posted within 12 hours of the submission to PAT.
How do I know my paper was successfully submitted?: After selecting the "Ready for LLM Feedback" checkbox, your PDF should be picked up by the PAT system within a few minutes. At that point, a private comment (visible only to authors) will be posted to your paper notifying you that it has entered the PAT processing queue.
Contact (for feedback and best-effort assistance): paper-assistant@google.com