
Peer Review Models: Crowdsourced Open Review

Crowdsourced peer review takes many forms, including pre-print review, modified open review, intelligent crowdsourcing, and post-publication review. This post defines these models and details examples of each.
Black and white photograph of two suited men standing in a room filled with noise machines.
Luigi Russolo and Ugo Piatti in their Milanese intonarumori workshop, ca. 1916. https://publicdomainreview.org/essay/luigi-russolos-cacophonous-futures/

This is a post in our series on peer review, following What is Peer Review and Who Are Peer-Reviewers?, Can AI Peer Review, Peer Review Models: Editorial and Anonymous Reviewing, and Open Peer Review Models.

In the crowdsourced peer-review model, authors are known to everyone, and reviewers can be anyone. Depending on the platform, reviewers may or may not be anonymous—in fact, crowdsourced review is the only form of open review that allows for anonymity. In another post, we'll discuss partially and fully open review models that rely on selected expert reviewers rather than opening the review process to anyone interested in commenting on a given manuscript.

Perhaps the best-known form of crowdsourced open review happens on pre-print servers, where authors post an article manuscript and request comments and critiques before sending it to a journal for peer review. The commenters in this case tend to be disciplinary experts and informed non-experts with an interest in the content of the work; commenting usually requires more time and insider knowledge than readers outside the work's expected audience are willing to give, so in actual implementation the process is much more peer-like than crowd-like.

This model got its start in the sciences, beginning with the establishment of arXiv in 1991, which focused on physics, mathematics, and computer science. By 2009, it was estimated that over 90% of journal articles in the high-energy physics field were posted and reviewed on arXiv prior to publication (Gentil-Beccot, Mele, & Brooks, 2009).  Other preprint servers in the sciences followed suit—albeit much later—including bioRxiv (2013), EarthArXiv (2017), ChemRxiv (2017), and medRxiv (2019).

Pre-print servers also gained popularity during the first Covid wave, when research could be released and disseminated rapidly while still being open to review and comment; we're seeing a similar surge in pre-print work around generative AI, another fast-moving target given how quickly the underlying technologies and computational methods are advancing. It's becoming more typical for journals—specifically those in the sciences, where scholars have already been using pre-print servers for some time—to use a pre-print server's review process as part of the review cycle (Sengupta, 2022). eLife and the European Geosciences Union, for example, use reviews conducted on their own preprint servers as the review mechanism for their journals. In addition, several journals partner with platforms like Review Commons, which provides journals with high-quality referee reports to speed the review and publishing process and minimize the labor for journal staff and reviewers.

In addition to pre-prints, there are other models of open peer review that mix traditional and open review. For example, as reported by Angus Whyte (2010), the Journal of Interactive Media in Education, founded the same year as Kairos (1996), used a modified form of open reviewing from its inception until at least 2010. Its model began with an internal review by reviewers who were editor-selected but publicly identified. Based on this initial review, the editor decided whether to move the work forward into the crowdsourced review stage. Feedback from the crowdsourced open review was used to generate revision or edit requests prior to publication, and the moderated comments from that review were published alongside the final version of the article. (As of 2025, however, the journal uses a more standard double-anonymous peer review model.)

A different model, dubbed "intelligent crowd sourcing," was piloted in 2016 by the chemistry journal Synlett. The journal's editor developed "a protected platform whereby many expert reviewers could anonymously read and comment on submissions, as well as on fellow reviewers' comments" (List, 2017). The journal invites around 80 reviewers to participate in the anonymous collaborative review over three to four days; one of the main benefits appears to be the speed with which reviewing takes place. In many ways, this is quite similar to the Kairos model of collaborative reviewing, although our model is not anonymous and we usually draw our reviewers solely from our editorial board. It is also an example of how crowdsourced review can, in some cases, remain anonymous.

In 2013, the journal F1000Research began offering post-publication review (incidentally, the journal was named for the 1,000 expert faculty reviewers it had on its roster when it was founded in 2000). The journal's website explains its process:

Articles are published rapidly as soon as they are accepted, after passing a series of prepublication checks to assess originality, readability, author eligibility, and compliance with F1000Research’s policies and ethical guidelines. Peer review by invited experts takes place openly after publication.

This post-publication model was specific to F1000 until fairly recently, but in the last few years other journals, such as eLife and Sci, have started to experiment with or switch to post-publication review.

On occasion, crowdsourcing has been used to review full book manuscripts as well. Two prominent examples in the humanities are Kathleen Fitzpatrick's Planned Obsolescence: Publishing, Technology, and the Future of the Academy and the edited collection Writing History in the Digital Age (Eds. Dougherty & Nawrotzki, 2013). Fitzpatrick posted her manuscript on MediaCommons and invited readers to provide feedback in late 2009; the work was later published as a print book by NYU Press in 2011. The University of Michigan Press similarly made Writing History in the Digital Age available for open review, and the feedback was used to revise the book before it was published in both print and open-access online editions.

Typically, the option of crowdsourcing entire book manuscripts is met with skepticism from university presses and authors alike, and sometimes for good reason—crowdsourcing reviews of long-form texts requires an immense amount of work on reviewers' parts, and it's rare for reviewers (who may or may not be anonymous) to give over their time to publicly review an entire book manuscript.

In the case of Kathleen Fitzpatrick's book with NYU Press, for instance, the press contracted reviewers in its typical fashion to do an offline, anonymous review while Fitzpatrick managed the online, crowdsourced comments. Given its early status as a "first" in open peer review for a book that was itself about open peer review, the open version garnered a decent amount of useful feedback from known scholars in related fields, but it didn't provide the same level of in-depth review as the contracted reviewers did. Fitzpatrick elaborated in the Conclusion to the printed version of the book:

On September 15, 2009, I emailed nine colleagues with whom I’d discussed the project in earlier stages, asking them to stop by, take a look, and leave some comments before I began announcing the project’s availability widely. Over the next week or so, two of those colleagues read and commented on the entire manuscript and three others read a chapter or two and left a few comments, and I responded to those comments where it seemed appropriate for me to do so. Finally, on September 28, we announced the open-review experiment widely, inviting comment and response from any interested reader. (p. 189)

We highly recommend reading the entire Conclusion, which compares the single-anonymous process (typical of book manuscript peer review) with the crowdsourced peer review process.

Fitzpatrick also noted something that we have long been attentive to at Kairos; she said:

the issues raised in comparing the online review process to the traditional reviews indicate that the system that needs the most careful engineering is less technical than it is social: for peer-to-peer review to succeed, we must find ways to build the commitment to a scholarly community that such a system will require. (p. 192)

Said another way, open peer review works best when there is a social infrastructure to support the building of such a scholarly community. See our chapter in Jim Ridolfo and Bill Hart-Davidson's book Rhetoric and the Digital Humanities, which details our take on what the social and scholarly infrastructures for digital publishing require, never mind the technical ones.

With open peer review, the author and editor need to negotiate who is responsible for mining the comments and choosing which ones to act on, a task usually reserved for editors handling anonymous reviews but one that may feel overwhelming to either editors or authors using open review. Perhaps for this reason alone, crowdsourced peer review may be better suited to article-length pieces than to book-length ones.

Oil painting of a bunch of white men in various states of verklemptness, just like typical scholars upset over open peer review.
Václav Brožík, The Prague Defenestration of 1618, 1890.

Perhaps one of the most well-known examples of crowdsourced peer review in a humanities journal was Shakespeare Quarterly's 2010 experiment in reviewing an entire special issue on Shakespeare and Performance on MediaCommons (the same platform Fitzpatrick used, and also managed, for her book's open peer review). In that review process, all of the articles in the special issue were posted freely online, where anyone was able to read and comment on them. Whether the process was a success is debated. When we speak to editors and scholars in the humanities, particularly in literary studies, about open peer review, their immediate reaction is often to cringe, because their memory of the Shakespeare Quarterly open peer-review experiment is highly negative. However, Fitzpatrick and Jennifer Howard, a writer for the Chronicle of Higher Education, assessed it positively at the time. Quoting Howard's column in the Chronicle, Fitzpatrick said:

Howard notes one participant’s sense [that] “the humanities’ subjective, conversational tendencies may make them well suited to open review — better suited, perhaps, than the sciences,” and yet, of course, the humanities have in general been very slow to such experimentation.

We agree that the humanities are ripe for more conversational, dialogic methods of open peer review, although creating such social structures requires dedication and planning, not simply turning on the review spigot, as PLOS One did in 2017 when it bombarded 7,000 reviewers with mass email invitations through its online system, asking them to review 110 manuscripts. That the journal was surprised when the "crowdsourced" reviews returned poor responses (Almquist et al., 2017) is laughable to those of us well versed in successful open peer review.

In our next post on open peer review, we will detail both partially open peer review and fully open peer review, which are distinct from crowdsourced versions of the process. 

Acknowledgements

Thank you to Rebecca Kennison for providing pinch-hit citation support for some of our pre-print examples! 

References

Almquist, Michael, von Allmen, Regula S., Carradice, Dan, Oosterling, Steven J., McFarlane, Kirsty, & Wijnhoven, Bas. (2017). A prospective study on an innovative online forum for peer reviewing of surgical science. PLOS One. https://doi.org/10.1371/journal.pone.0179031

Dougherty, Jack, & Nawrotzki, Kristen, Eds. (2013). Writing History in the Digital Age. University of Michigan Press.

Eyman, Douglas, & Ball, Cheryl E. (2015). Digital humanities scholarship and electronic publication. In Jim Ridolfo & William Hart-Davidson (Eds.), Rhetoric and the digital humanities. University of Chicago Press. 

Fitzpatrick, Kathleen. (2011). Planned obsolescence: Publishing, technology, and the future of the academy. NYU Press.

Fitzpatrick, Kathleen. (2010, July 27). MediaCommons, Shakespeare Quarterly, and open review. MediaCommons. https://kfitz.info/mediacommons-shakespeare-quarterly-and-open-review/

Gentil-Beccot, Anne, Mele, Salvatore, & Brooks, Travis. (2009). Citing and reading behaviours in high-energy physics. How a community stopped worrying about journals and learned to love repositories. arXiv. https://arxiv.org/abs/0906.5418

Howard, Jennifer. (2010, July 26). Leading humanities journal debuts ‘open’ peer review, and likes it. Chronicle of Higher Education. https://www.chronicle.com/article/leading-humanities-journal-debuts-open-peer-review-and-likes-it/

List, Benjamin. (2017). Crowd-based peer review can be good and fast. Nature, 546, 9. https://doi.org/10.1038/546009a

MediaCommons Press. (2011). Shakespeare Quarterly: Open review: “Shakespeare and Performance.” https://mcpress.media-commons.org/shakespearequarterlyperformance/

Sengupta, Aditi. (2022, December). How journals are innovating in peer review through preprints. ASAPBio. https://asapbio.org/how-journals-are-innovating-in-peer-review-through-preprints/

Whyte, Angus. (2010). Shakespeare Quarterly "open peer review" experiment. Digital Curation Centre, University of Edinburgh. https://www.dcc.ac.uk/news/shakespeare-quarterly-open-peer-review-experiment