The proposal to use blockchain technology as a means of preventing medical plagiarism began with an irony. Back in 2014, when Benjamin Gregory Carlisle, a PhD student at McGill University, posted the idea of treating medical research protocols like bitcoin on his blog “The Grey Literature”, he was not expecting any attention.
Two years later, Carlisle received a message informing him that an F1000Research paper written by Greg Irving of the University of Cambridge and John Holden of Garswood Surgery, UK, bore a strong resemblance to his blog post.
Carlisle notified the journal immediately, highlighting phrases and paragraphs that were identical, but the journal did not retract the paper. The journal has an open review policy, meaning the identities of the peer-reviewers were made known. Nevertheless, none of them seemed to have verified the method proposed in the paper, nor did they cross-check whether the paper shared content with earlier publications.
No editor, peer-reviewer, or author received any penalty in the process. A second edition of the same paper was published shortly afterwards, giving a brief reference to Carlisle and making only a half-hearted effort to reword some of the sentences. Eventually, the case was brought to the Committee on Publication Ethics (COPE), but since Carlisle was not represented, nothing changed. When other publications, including The Economist, cited the method, they often credited Irving and Holden instead of Carlisle.
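The method at the heart of the dispute is, in essence, straightforward: publish a cryptographic hash of a study protocol (for example, inside a bitcoin transaction) so that the protocol is timestamped publicly without its contents being revealed. A minimal sketch of the hashing step in Python (illustrative only; the function name is invented here, and neither Carlisle's blog post nor the F1000Research paper ships this code):

```python
import hashlib

def protocol_fingerprint(protocol_text: str) -> str:
    """Return a SHA-256 digest of a research protocol.

    Publishing this digest (e.g. in a blockchain transaction)
    timestamps the protocol: anyone holding the original text can
    later recompute the hash and prove the document is unchanged,
    while the digest itself reveals nothing about the contents.
    """
    return hashlib.sha256(protocol_text.encode("utf-8")).hexdigest()

original = "Protocol v1: randomized, double-blind, placebo-controlled trial"
digest = protocol_fingerprint(original)

# The same text always yields the same digest...
assert protocol_fingerprint(original) == digest
# ...while even a one-character edit produces a different one.
assert protocol_fingerprint(original + " ") != digest
```

Because the hash is one-way, a researcher can prove priority over a protocol years later without having disclosed it at the time.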
Detecting plagiarism as part of the peer-review process
Peer-reviewing is a chore for both submitters and reviewers, and only about 20% of scientists are willing to take up the responsibility. As such, artificial intelligence seems well suited to fill a gap that we humans are reluctant to cover.
This June, publishing giant Elsevier began to adopt StatReviewer, a piece of software that checks statistical methods. The peer-review platform ScholarOne is also partnering with UNSILO of Denmark to use machine learning to analyze manuscripts and summarize them by extracting their main concepts. This, it is believed, will provide a more rounded overview of a journal paper than simply going through the keywords supplied by researchers. At the same time, the machine will compare phrases against a pool of other published work to detect plagiarism.
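The phrase-comparison step can be illustrated with a simple word n-gram overlap check. This is a toy sketch, not UNSILO's or ScholarOne's actual algorithm; production systems use far more robust matching:

```python
def ngrams(text: str, n: int = 3) -> set:
    """Lowercase word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate: str, source: str, n: int = 3) -> float:
    """Fraction of the candidate's n-grams that also occur in the source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return len(cand & ngrams(source, n)) / len(cand)

blog = "register a hash of the study protocol on the blockchain before enrolment"
paper = "we register a hash of the study protocol on the blockchain before enrolment begins"

# A score near 1.0 means nearly every phrase in the candidate
# already appears in the source, flagging the passage for review.
print(overlap_score(paper, blog))
```

Even this crude measure would have flagged verbatim-copied paragraphs like those Carlisle highlighted.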
Other tools, such as Penelope.ai, check whether a paper fits the prerequisites and format of a particular journal. A tool like statcheck assesses the quality of a paper by highlighting errors and statistical inconsistencies, something peer-reviewers might miss due to time constraints.
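statcheck's core idea is to recompute a reported p-value from the reported test statistic and flag mismatches. Below is a simplified sketch for a two-tailed z-test using only the Python standard library (the real statcheck is an R package covering t, F, chi-square, correlation, and z tests; the function names here are illustrative):

```python
import math

def p_from_z(z: float) -> float:
    """Two-tailed p-value for a z statistic, from the normal CDF."""
    return math.erfc(abs(z) / math.sqrt(2))

def check_report(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Return True if the stated p-value agrees with the z statistic."""
    return abs(p_from_z(z) - reported_p) <= tol

# "z = 1.96, p = .05" is internally consistent...
print(check_report(1.96, 0.05))  # True
# ...but "z = 1.96, p = .01" is not, and would be flagged.
print(check_report(1.96, 0.01))  # False
```

Since the check needs only the numbers already printed in a manuscript, it can scan thousands of papers far faster than any human reviewer.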
Change the present reviewing process before bringing in technology
Supporters of AI-powered peer review say these tools are meticulous at picking out papers that do not conform to CONSORT, a widely used reporting guideline for trial manuscripts, and at catching errors that human reviewers tend to overlook. However, it is challenging to get journals and authors to pay for additional services like these to have their papers checked.
Besides, AI is trained on papers published in the past, and some researchers fear that this trait alone may reinforce existing biases. As a result, it may not benefit revolutionary papers or ideas. Most researchers therefore believe that while the tools may help editors and peer-reviewers, they should not replace them.
Ultimately, none of these technologies changes the present way academic papers are reviewed. As Carlisle told retractionwatch.com after the incident, there were clear indications that Irving and Holden were not familiar with blockchain technology, yet their knowledge was never questioned.
Furthermore, the fact that Carlisle was denied representation when the case was brought to COPE is a clear indication that authority sides with journals and editors rather than writers. Hence, unless the present reviewing process can be changed, bringing in technology may not prevent similar incidents from happening again.