UChicago Researchers Build a Tool to Help Fix Peer Review
Academic peer review is under strain. In recent years, the volume of submissions to academic journals and conferences has grown faster than the pool of qualified reviewers, leading to a cycle where overloaded reviewers produce noisier evaluations, acceptance becomes more random, and submission volumes climb ever higher. Now, AI writing tools are accelerating the problem by lowering the cost of producing papers. Chenhao Tan, Faculty Co-Director of Novel Intelligence at the DSI and Associate Professor of Computer Science and Data Science, thinks AI can also be part of the fix.
Tan and his team at Chicago Human+AI (CHAI) Lab have built OpenAIReview, an open-source, AI-assisted reviewing tool designed to give any researcher access to high-quality paper feedback for less than the cost of a coffee. The tool uses a progressive approach, processing a paper sequentially and maintaining a running summary of its key claims, definitions, and equations. This allows it to catch inconsistencies across a paper rather than treating each section in isolation, covering the locations of 87% of the issues flagged by comparable tools.
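To make the progressive idea concrete, here is a minimal Python sketch of how a reviewer might thread a running summary through a paper section by section. This is an illustration, not OpenAIReview's actual implementation: the `llm` callable, the line-prefixed response format, and the `RunningSummary` structure are all assumptions introduced for this example.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class RunningSummary:
    """Accumulates the paper's key claims, definitions, and equations."""
    claims: List[str] = field(default_factory=list)
    definitions: List[str] = field(default_factory=list)
    equations: List[str] = field(default_factory=list)

    def render(self) -> str:
        return (
            "Claims so far: " + "; ".join(self.claims) + "\n"
            "Definitions so far: " + "; ".join(self.definitions) + "\n"
            "Equations so far: " + "; ".join(self.equations)
        )

def review_progressively(
    sections: List[str],
    llm: Callable[[str], str],  # hypothetical: any text-in, text-out model call
) -> List[str]:
    """Review a paper one section at a time, carrying the running summary
    forward so each new section is checked against everything before it."""
    summary = RunningSummary()
    issues: List[str] = []
    for index, section in enumerate(sections, start=1):
        prompt = (
            "You are reviewing a paper section by section.\n"
            f"Running summary of the paper so far:\n{summary.render()}\n\n"
            f"Section {index}:\n{section}\n\n"
            "List any inconsistencies with the summary, one per line, "
            "prefixed 'ISSUE:'. Then list new claims, definitions, and "
            "equations, prefixed 'CLAIM:', 'DEF:', or 'EQ:'."
        )
        # Parse the assumed line-prefixed format: flagged issues are
        # collected, and new material is folded into the running summary.
        for line in llm(prompt).splitlines():
            if line.startswith("ISSUE:"):
                issues.append(f"Section {index}: {line[6:].strip()}")
            elif line.startswith("CLAIM:"):
                summary.claims.append(line[6:].strip())
            elif line.startswith("DEF:"):
                summary.definitions.append(line[4:].strip())
            elif line.startswith("EQ:"):
                summary.equations.append(line[3:].strip())
    return issues
```

Because every section is checked against the accumulated summary, a contradiction between, say, a definition in Section 2 and an equation in Section 5 can surface even though the two never appear side by side in the text.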
A central motivation for the project was accessibility. Closed, proprietary review tools are typically only available to researchers at well-resourced institutions, their inner workings opaque. OpenAIReview is built to be transparent and customizable: researchers can inspect the prompts, adapt the tool to their field’s specific methodological standards, and use it throughout the writing process rather than as a one-time check.
The team draws a distinction between reviewing for quality, which offers formative feedback to help authors improve their work, and reviewing for gatekeeping, which determines acceptance or rejection. OpenAIReview is designed for the former, but Tan is cautious about fully automating gatekeeping decisions. He argues that accepting an article still requires reviewers to weigh factors like originality, significance, and broader impact, which at present resist automation and demand human judgment.
The team is working to refine the platform and is currently exploring evaluation methods that would allow the tool to verify claims in papers rather than relying solely on author-reported results.
OpenAIReview is live and accessible at openaireview.github.io. As AI continues to reshape how science is produced, the team sees open, accountable reviewing tools as essential infrastructure, and welcomes feedback from conference organizers, journal editors, and researchers who want to help build it.
This article first appeared on the Data Science Institute website.