Most Peer Reviewers Now Use AI and Research Publishing Policies Are Racing to Catch Up
Artificial intelligence has quietly but rapidly become part of the everyday workflow of academic peer review. A new global whitepaper released by Frontiers reveals that more than half of all peer reviewers (53%) are already using AI tools in some form while evaluating research papers. This finding signals a major shift in how science is reviewed and raises an important question for the research world: can publishing policies evolve fast enough to match reality?
The whitepaper, titled "Unlocking AI's Untapped Potential: Responsible Innovation in Research and Publishing," is based on survey responses from 1,645 active researchers across the world. Together, their responses paint a clear picture of a research community that has already embraced AI but is still waiting for clear, consistent guidance on how to use it responsibly.
AI Has Become a Normal Part of Peer Review
Peer review has traditionally been a human-centered process, relying on expert judgment, careful reading, and detailed critique. According to this new research, that picture has changed dramatically. Today, AI tools are commonly used by reviewers to help draft review reports, summarize complex manuscripts, and clarify key findings.
This does not mean AI is replacing reviewers. Instead, it is acting as a support tool that helps researchers manage growing workloads and increasingly complex submissions. Many reviewers report that AI improves efficiency and helps them communicate feedback more clearly. However, the whitepaper also notes that most current uses of AI remain fairly basic, focused on surface-level assistance rather than deeper analytical support.
Untapped Potential Beyond Summaries and Drafting
One of the most striking insights from the report is how much unused potential still exists. While AI is already saving time, researchers believe it could do far more. Respondents pointed to opportunities for AI to support research rigor, reproducibility, and deeper methodological analysisโareas that are critical to scientific quality but often limited by time and human capacity.
The authors of the whitepaper emphasize that these benefits will only materialize if AI use is paired with strong governance, transparency, and proper training. Without those safeguards, the risks of misuse, overreliance, or uneven adoption increase.
Early-Career Researchers Are Leading the Way
AI adoption is not evenly distributed across the research community. The survey found particularly high usage among early-career researchers, with an impressive 87% reporting that they already use AI tools. This likely reflects both comfort with new technologies and the pressure younger researchers face to publish, review, and communicate efficiently.
Geographically, adoption is especially strong in rapidly growing research regions. Researchers in China reported a 77% adoption rate, while researchers in Africa followed at 66%. These figures suggest that AI may be playing a role in leveling the playing field by helping researchers overcome resource constraints and heavy workloads.
A Clear Gap Between Practice and Policy
Despite widespread adoption, one issue stands out clearly: publishing policies have not kept pace with reality. Many reviewers are already using AI without consistent rules about disclosure, acceptable use, or accountability. This lack of alignment creates uncertainty for reviewers and editors alike.
Researchers surveyed for the whitepaper repeatedly expressed a desire for clear, consistent, and globally aligned policies. They want to know when AI use should be disclosed, how tools can be used ethically, and what standards publishers expect.
Policy Recommendations to Guide Responsible AI Use
In response to these findings, Frontiers has outlined a set of evidence-based policy recommendations designed to guide publishers, institutions, funders, and tool developers. These recommendations aim to align formal policy with actual researcher behavior while safeguarding research integrity.
Key recommendations include:
- Mandating transparency around AI use, ensuring that reviewers and authors clearly disclose when and how AI tools are involved.
- Embedding AI literacy and competency training throughout the research ecosystem so that users understand both the strengths and limitations of these tools.
- Strengthening integrity and oversight standards to prevent misuse and maintain trust in the peer-review process.
- Improving data provenance and auditability, making it easier to track how AI influences research outputs.
- Ensuring equitable access to trustworthy AI tools, so that researchers in all regions can benefit, not just those in well-funded institutions.
Together, these measures form a practical roadmap for responsible innovation.
Why Transparency and Trust Matter More Than Ever
Trust is the foundation of scientific publishing. As AI becomes more deeply embedded in research workflows, transparency becomes non-negotiable. Readers, editors, and policymakers need confidence that AI is enhancing quality rather than obscuring accountability.
The whitepaper stresses that AI should not be treated as a hidden shortcut but as a visible, well-regulated tool. Clear disclosure practices help protect the credibility of the scientific record and ensure that responsibility remains with human researchers.
How AI Is Changing the Research Ecosystem
Beyond peer review, AI is already influencing other stages of the research cycle, from literature discovery to data analysis and manuscript preparation. These developments suggest that peer review is just one part of a much larger transformation.
If managed responsibly, AI could help reduce reviewer fatigue, speed up publication timelines, and improve the clarity of scientific communication. At the same time, inconsistent policies or uneven access could deepen inequalities or introduce new risks.
The Bigger Picture for Research Publishing
This whitepaper serves as a call to action for the entire research ecosystem. Publishers, institutions, funders, and policymakers are encouraged to collaborate on shared standards, training pathways, and transparent communication strategies.
The message is not that AI is coming; it is already here. The real challenge now is ensuring that its use strengthens scientific rigor, supports global participation, and maintains trust in the research record.
Research Reference
Unlocking AI's untapped potential: Responsible innovation in research and publishing
https://www.frontiersin.org/documents/unlocking-ai-potential.pdf