
A Science Journal Sting



Want to get your work published in a scientific journal? No problem if you have a few thousand dollars you are willing to part with.
These days a number of journals display the trappings of a legitimate journal, promising peer review and other services, but do not deliver. They perform no peer review and provide no services beyond posting papers and cashing checks for the publication fees. The number of publishers that appear to be engaged in this practice has increased dramatically, growing by an order of magnitude in 2012 alone. (1)

Network of bank accounts based mostly in the developing world

From humble and idealistic beginnings a decade ago, open-access journals have mushroomed into a global industry, driven by author publication fees rather than traditional subscriptions. Most of the players are murky. The identity and location of the journals' editors, as well as the financial workings of their publishers, are often purposefully obscured. Invoices for publication fees reveal a network of bank accounts based mostly in the developing world, reports John Bohannon. (2)

A striking picture emerges from the global distribution of open-access publishers, editors and bank accounts. Most of the publishing operations cloak their true geographic locations. Some examples: The American Journal of Medical and Dental Science is published in Pakistan, while the European Journal of Chemistry is published in Turkey. (2)

Inspired by the experience of a colleague in Nigeria who felt deceived by one such journal, whose business model involves charging authors fees ranging from $50 to more than $3,000, John Bohannon, a biologist at Harvard, submitted 304 versions of a wonder-drug paper to open-access journals. More than half of the journals accepted the paper, failing to notice its fatal flaws. (2) The paper, about a new cancer drug, included nonsensical graphs and an utter disregard for the scientific method. In addition, it was written by fake authors from a fake university in Africa, and as a final flourish, it was run through Google Translate into French and back into English. Collaborators at Harvard helped Bohannon make it convincingly boring. (3)

“Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless,” Bohannon wrote in the journal Science. And yet his informal sting operation revealed that 156 publishers completely missed the hints. (2)

Whether fee-charging open-access journals were actually keeping their promise to do peer review

Bohannon wanted to find out whether fee-charging open-access journals were actually keeping their promise to do peer review, a process in which scientists with some knowledge of a paper's topic volunteer to check it for scientific flaws. In the end, he concluded that 'a huge proportion' of the journals were not ensuring their papers were peer reviewed. He added that his experiment could be the tip of the iceberg, and that peer review at traditional journals, not just fee-based open-access journals, could be just as bad. “It could be the whole peer review system is just failing under the strain of the tens of thousands of journals that now exist.” (4) Some examples of the problem at 'prestigious' journals:
  • In a classic 1998 study, Fiona Godlee, editor of the prestigious British Medical Journal (BMJ), sent an article containing eight deliberate mistakes in study design, analysis and interpretation to more than 200 of the BMJ's regular reviewers. Not one picked out all the mistakes. On average they reported fewer than two; some did not spot any. (5)
  • Another experiment at BMJ showed that reviewers did no better when more clearly instructed on the problems they might encounter. They also seemed to get worse with experience. Charles McCulloch and Michael Callaham, of the University of California, San Francisco, looked at how 1,500 referees were rated by editors at leading journals over a 14-year period and found that 92% showed a slow but steady drop in their scores. (5)
  • The Economist adds, “As well as not spotting things they ought to spot, there is a lot that peer reviewers do not even try to check. They do not typically re-analyze the data presented from scratch, contenting themselves with a sense that the authors' analysis is properly conceived. And they cannot be expected to spot deliberate falsifications if they are carried out with a modicum of subtlety.” (5)
On another front, The Institute of Medicine estimates that only 4 percent of treatments and tests are backed by strong scientific evidence; more than half have very weak evidence or none. (6)

John Ioannidis reported that roughly one-third of studies published in reputable peer-reviewed journals didn't hold up. He looked at 45 highly cited studies published between 1990 and 2003 and found that subsequent research contradicted the results of seven of them, while another seven were found to have weaker results than originally published. In other words, 14 of the 45, about 31 percent, did not withstand the test of time. (7) This translates into a lot of medical misinformation. Ioannidis reviewed prestigious journals including The New England Journal of Medicine, The Journal of the American Medical Association (JAMA), and The Lancet, along with a number of others. Each article had been cited at least 1,000 times, all within a span of 13 years. These results are worse than they sound: Ioannidis was examining only the less than one-tenth of one percent of published medical research that makes it into the most prestigious journals. Throw in the presumably less careful work from lesser journals discussed earlier, and take into account the way results end up being spun and misinterpreted by university and industry PR departments and by journals themselves, and the rate of wrongness Ioannidis found could only worsen from there, notes David Freedman. (8)

All of this does not mean that medical studies are of no value or that health reports are always wrong. It simply serves as a warning that science is fluid, not static or absolute. It does suggest that every time you see a headline claiming that X causes cancer or that Y prevents it, some skepticism may be in order.

References
  1. “The occasional pamphlet: lessons from the faux journal investigation,” blogs.law.harvard.edu, October 15, 2013
  2. John Bohannon, “Who's afraid of peer review?”, Science, 342, 60, October 4, 2013
  3. “Fake research paper accepted into hundreds of online journals,” democraticunderground.com, October 4, 2013
  4. “Bogus science paper reveals peer review's flaws,” cbc.ca/news, October 14, 2013
  5. “Trouble at the lab,” The Economist, October 19, 2013
  6. Shannon Brownlee, Overtreated, (New York, Bloomsbury, 2007), 92
  7. John P. Ioannidis, “Contradicted and initially stronger effects in highly cited clinical research,” JAMA, 294(2), 218, July 2005
  8. David H. Freedman, Wrong, (New York, Little, Brown & Company, 2010), 64


Jack Dini

Jack Dini is the author of Challenging Environmental Mythology. He has also written for the American Council on Science and Health, Environment & Climate News, and Hawaii Reporter.

