In a study published in Science Advances on August 27, researchers used an artificial intelligence (AI) tool to screen approximately 15,000 open-access journal titles for indicators of dubious publishing practices, flagging more than 1,000 of what the authors call "questionable open-access journals." Such journals often charge publication fees without conducting rigorous peer review or maintaining quality checks.
The work aims to combat the proliferation of predatory publishers, which exploit the open-access model by charging authors to publish papers while bypassing essential editorial standards. The AI tool flagged journals that had not previously appeared on any watchlist, including some titles owned by established and reputable publishers. Collectively, the flagged journals have published hundreds of thousands of research papers that have garnered millions of citations.
According to Jennifer Byrne, a research integrity expert and cancer researcher at the University of Sydney, “there’s a whole group of problematic journals in plain sight that are functioning as supposedly respected journals that really don’t deserve that qualification.” This statement emphasizes the pressing need for improved monitoring and evaluation of open-access publications.
The AI tool, currently available in a closed beta version, allows organizations that index journals and publishers to review their portfolios. Daniel Acuña, a computer scientist at the University of Colorado Boulder and co-author of the study, highlights that while the AI tool is effective, it is not infallible. “A human expert should be part of the vetting process,” he asserts, underscoring the importance of thorough evaluations before any action is taken regarding flagged journals.
The AI tool analyzes information from journal websites and the papers they publish, searching for red flags such as unusually short turnaround times between submission and publication and elevated rates of self-citation. It also checks whether editorial board members are affiliated with reputable research institutions and whether the journal is transparent about licensing and fees.
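As a rough illustration only (not the authors' actual model, which is not published in the article), the red flags described above could be encoded as simple features and tallied. The field names and thresholds below are hypothetical assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class JournalRecord:
    """Minimal journal profile; fields mirror the red flags described above."""
    median_days_to_publication: float      # submission-to-publication turnaround
    self_citation_rate: float              # fraction of citations pointing back to the journal
    affiliated_board_share: float          # share of editors with verifiable institutional ties
    fees_and_licensing_disclosed: bool     # transparency about charges and licensing

def red_flag_score(j: JournalRecord) -> int:
    """Count simple red flags; thresholds are illustrative, not from the study."""
    flags = 0
    if j.median_days_to_publication < 30:   # suspiciously fast turnaround
        flags += 1
    if j.self_citation_rate > 0.3:          # elevated self-citation
        flags += 1
    if j.affiliated_board_share < 0.5:      # few verifiable editorial-board members
        flags += 1
    if not j.fees_and_licensing_disclosed:  # opaque fees or licensing
        flags += 1
    return flags

suspect = JournalRecord(14, 0.45, 0.2, False)
print(red_flag_score(suspect))  # trips all four checks → 4
```

A real classifier would weight and combine many more signals than this, but the tally conveys the kind of evidence the tool looks for.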
The criteria used to train this AI tool are informed by best-practice guidance from the Directory of Open Access Journals (DOAJ), a respected index of open-access journals managed by the non-profit DOAJ Foundation in Roskilde, Denmark. Cenyu Shen, the DOAJ’s deputy head of editorial quality, notes a concerning increase in problematic journals and indicates that their tactics are becoming more sophisticated. “We are observing more instances where questionable publishers acquire legitimate journals or where paper mills purchase journals to publish low-quality work,” she explains. Paper mills are businesses that sell fake papers and authorships, further complicating the landscape of academic publishing.
The DOAJ conducts its quality assessments mainly through manual checks initiated upon receiving complaints. In 2024, the directory investigated 473 journals, reflecting a 40% increase compared to 2021. Shen reports that the time spent on these investigations grew significantly, totaling 837 hours, which highlights the increasing workload in maintaining journal quality. Utilizing AI tools could expedite some of these assessments, making it easier to identify problematic journals.
The researchers trained their AI model on a dataset of 12,869 journals currently indexed in the DOAJ as legitimate, alongside 2,536 journals identified as violating quality standards. When applied to 15,191 open-access journals from the public database Unpaywall, the AI identified 1,437 journals as questionable. However, the team acknowledged that approximately 345 of these were mistakenly flagged, including discontinued titles and journals from small publishers. Notably, the tool also failed to flag an additional 1,782 questionable journals, indicating room for improvement in its accuracy.
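Treating the roughly 345 mistaken flags as false positives and the 1,782 missed titles as false negatives, the figures above imply rough precision and recall estimates (a back-of-the-envelope sketch, using the approximate counts as exact):

```python
flagged = 1437          # journals the tool marked questionable
false_positives = 345   # mistakenly flagged (e.g. discontinued titles, small publishers)
false_negatives = 1782  # questionable journals the tool failed to flag

true_positives = flagged - false_positives                     # 1092 correctly flagged
precision = true_positives / flagged                           # share of flags that were right
recall = true_positives / (true_positives + false_negatives)   # share of questionable journals caught

print(f"precision ≈ {precision:.2f}, recall ≈ {recall:.2f}")   # ≈ 0.76 and ≈ 0.38
```

The asymmetry is the point of the caveat in the study: about three-quarters of the tool's flags hold up, but it still misses a majority of questionable titles, which is why human vetting remains essential.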
Using AI to identify problematic open-access journals is a significant step towards strengthening the integrity of academic publishing. As the landscape continues to evolve, ongoing vigilance and human expertise will remain essential to keeping the quality of published research uncompromised.