This week (October 21-27) is Open Access Week, when universities, colleges, libraries, funding agencies, and other interested parties come together to ask questions about who owns the research they produce and/or pay for. If you’re reading this blog, you probably already know the stakes of these debates: the dominance of companies like Elsevier, which make enormous profits by keeping research behind a paywall, or the well-publicized prosecution and suicide of Aaron Swartz.
I have a passing interest in these topics, and a general belief in Open Access, so I’ve been following OA week — mostly by reading tweets with the hashtag #oaweek, and nodding in agreement. Yesterday, though, Amanda French’s live tweets of a presentation at George Mason’s Roy Rosenzweig Center for History and New Media caught my attention. I’ve written a bit about Wikipedia on this blog, and Jake Orlowitz was talking about the Wikipedia Library project. As Amanda points out, Wikipedia is the most-visited non-profit website in the world, making it a kind of poster child for Open Access. And as students in my “Writing about Wikipedia” class will tell you, everything on Wikipedia should be verifiable; in other words, it has to be sourced. The Wikipedia Library defines itself as “a place for active Wikipedia editors to gain access to the vital reliable sources that they need to do their work,” connecting Wikipedians to “libraries, open access resources, paywalled databases, and research experts.” When access to information is limited, the Wikipedia Library opens the door.
These tweets reminded me of two articles I’d come across recently, both relating Wikipedia to the sciences. The first is about UC San Francisco’s medical school, which is offering course credit for participating in WikiProject Medicine, one of the online encyclopedia’s many topic-focused collaborative endeavors. Julie Beck, covering the UCSF program in The Atlantic, quotes the course’s instructor Dr. Amin Azzam, who notes that people turn to Wikipedia for health advice “more than any other website. More than the National Institutes of Health, more than WebMD, more than Mayo Clinic. It’s more than many of those combined.” This is troubling, given that “the fraction of high-quality information on Wikipedia in the medicine-related topics is significantly lower than other domains of Wikipedia.” The motivation behind Azzam’s course is the conviction that this deficiency results from the medical community’s unwillingness to participate. Ideally, Azzam’s course and the resulting media coverage (Noam Cohen wrote about it in the New York Times as well) will increase participation.
These medical students will turn to peer-reviewed science journals, the kind of thing the Wikipedia Library provides access to. That’s the good news. But can we trust the information from those journals?
Writing for Science Magazine, John Bohannon describes sending an obviously terrible paper to hundreds of (purportedly) peer-reviewed science journals. The results were shocking (or exactly what you’d expect, depending on your feelings about such things):
Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper’s shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless … More than half of the journals accepted the paper, failing to notice its fatal flaws.
Humanists who remember the Sokal hoax might feel vindicated: the sciences can’t spot a fake paper, either. But that’s not Bohannon’s takeaway. There are very real problems in scientific publishing, and only some of them are related to open access. (As a side note, the fact that Bohannon targeted OA journals seems ancillary to me; would he have gotten the same results with paywalled journals? Maybe. And many of the journals he tested are hosted by services like Elsevier.) Professional pressures to publish drive scientists to extremes, with journals’ rejection rates sometimes at 90% or higher. Predatory journals see this as a market: many charge the author for publication, and one of the invoices Bohannon received with his acceptance letter was for a whopping $3,100.
Bohannon’s article is well worth reading for its own sake, but when I read it I thought immediately of Wikipedia, especially when he gets to the reactions from the journal editors. I wrote a few weeks ago about a short assignment in which my students messed with Wikipedia. Responses ranged from supportive to downright threatening. Bohannon, in a much more serious context and with a much more serious subject, got similar feedback. Some journals appreciated his test (especially those that rightly rejected the paper). Others didn’t. Here is how Malcolm Lader, editor-in-chief of one of the journals, responded to Bohannon: “An element of trust must necessarily exist in research … Your activities here detract from that trust.”
Bohannon’s whole point, of course, is that readers also trust the journals to submit the articles to peer review, and his activities revealed a good reason to detract from that trust.
Science journals obviously matter: peer review is the best system we have and we should be worried that it doesn’t work. But Wikipedia, the sixth most-visited website in the world, matters too. Endeavors like WikiProject Medicine and TooFEW hope to increase participation among groups who don’t historically contribute to the encyclopedia. But one thing Bohannon’s article emphasizes is that Wikipedia is in the real world: it’s only as good as the sources on which it’s based.
Of course, right now Wikipedia is worse than the sources on which it’s based: hence the need for more diverse editors, whether in terms of gender, nationality, or expertise. Unfortunately the trend seems to be running the other way: Wikipedia is getting more and more popular, but its pool of editors is getting smaller and more insulated. Might the poster child for Open Access actually reveal its limitations, rather than highlight its strengths?