This past March I wrote about an idea for peer review that emerged after 25 years of watching it not work. The idea was simple in principle: researchers have incentives to write but not to read, and until those incentives are aligned, reading will remain unsustainable and intractable. The pandemic amplified this inherent unsustainability, with research communities across academia facing the same pressure to publish, but less incentive to review than ever. This was personal for me too: as Editor-in-Chief of ACM Transactions on Computing Education, I was seeing record declines in the acceptance of review invitations, and record numbers of Associate Editor resignations. And the trend continues: my board is smaller than ever, finding three reviewers now takes months rather than weeks, and in many cases, reviews are of unacceptably low quality.
The response to the idea was mini-viral: scholars from across academia reached out to me, validating the idea, describing similar efforts in other disciplines, and pointing to the gaps in infrastructure for aligning incentives. I took the positive feedback as a sign that I should see if my own research community would be willing to try the idea. The proposal I wrote received hundreds of comments, most in favor, and some expressing fear that it would worsen inequities for scholars on the margins. But our community poll made clear that nearly all junior scholars and most senior scholars want to try this; it’s really only a minority of senior faculty who are skeptical.
And so I set out to try it. I asked for volunteers, and we built a team of folks in computing education research to design and build the infrastructure needed to make such a system possible. You can see our team on our GitHub repository:
https://github.com/reciprocalreviews
Here’s the progress we’ve made over the past few months:
We’ve set up our GitHub repository and a basic website.
We’ve created a public Discord for discussion and collaboration.
We’ve worked on a design specification, detailing requirements from the original proposal and public comments, as well as data types and architectural details (see the sketch after this list).
We’ve audited all of the relevant policies (ScholarOne, ACM, GDPR, ORCID, etc.) to identify policy constraints and requirements.
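To make the specification work a bit more concrete, here’s a rough sketch, in TypeScript, of the kinds of data types a review-token system might involve. To be clear, everything below — the entity names, the fields, the idea of a per-venue token ledger — is illustrative shorthand for this post, not our actual specification:

```typescript
// Illustrative only: a sketch of the kinds of entities a review-token
// system has to represent, not our actual data model.

type ORCID = string; // e.g. "0000-0002-1825-0097"

interface Scholar {
  orcid: ORCID;
  name: string;
}

interface Venue {
  id: string;             // e.g. a journal or conference identifier
  name: string;
  submissionCost: number; // tokens required to submit a manuscript
  reviewReward: number;   // tokens minted for a completed review
}

// Every change to a scholar's balance is an auditable transaction.
interface Transaction {
  scholar: ORCID;
  venue: string;
  amount: number;         // positive = earned, negative = spent
  reason: 'review-completed' | 'submission' | 'editor-adjustment';
  timestamp: Date;
}

// A scholar's balance at a venue is just the sum of their transactions
// there, which keeps the ledger transparent and recomputable.
function balance(ledger: Transaction[], scholar: ORCID, venue: string): number {
  return ledger
    .filter((t) => t.scholar === scholar && t.venue === venue)
    .reduce((sum, t) => sum + t.amount, 0);
}
```

Even a toy model like this surfaces real questions the specification has to answer, like whether tokens are per-venue or portable across venues, and who gets to adjust balances.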
Jérémie and I are taking on the bulk of the design and engineering work, meeting regularly to work through some of the challenging questions about trust, token minting, maintenance, and governance (the minting question is sketched below). Andrew continues work on policy, filling in gaps about data use in particular, since none of the intersecting organizations say much about how review data might be used.
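To give a flavor of why minting is one of the hard questions, here’s an equally hypothetical sketch of a naive minting rule, reusing the illustrative types above. Every condition in it — editor approval, a quality bar, one token per review — is a design choice with trust and governance implications, and none of these are decisions we’ve actually made:

```typescript
// A deliberately naive minting rule, reusing the ORCID and Transaction
// types from the earlier sketch. Hypothetical, not a design decision.

interface Review {
  reviewer: ORCID;
  venue: string;
  approvedByEditor: boolean; // who decides that a review "counts"?
  meetsQualityBar: boolean;  // who judges quality, and by what standard?
}

// Mint exactly one token when an editor marks a review complete and adequate.
function mintOnCompletion(ledger: Transaction[], review: Review): Transaction[] {
  if (!review.approvedByEditor || !review.meetsQualityBar) return ledger;
  return [
    ...ledger,
    {
      scholar: review.reviewer,
      venue: review.venue,
      amount: 1,
      reason: 'review-completed',
      timestamp: new Date(),
    },
  ];
}
```

Even this toy rule concentrates trust in editors; alternative rules just relocate the trust problem somewhere else, which is exactly the kind of tradeoff we’re meeting to work through.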
Of course, because this is all volunteer work, it’s going slowly. I only have a few hours a week to make progress on this. But we are determined to keep it moving, and to start building in the coming months. We don’t know precisely when we’ll have something ready for piloting, but we intend to take our time to get something right. After all, this is a rare opportunity to take action on a longstanding challenge in peer review with a community of people who want change.
If you’re willing to help in any way, reach out to me and we can find a role for you! We intend for this work to be open, participatory, and entirely in service of making our own community — and others — more equitable and sustainable. After all, peer review is something that we oversee and run, and so we can shape it into whatever we believe it needs to be. You’re an essential part of that.
I’ll try to write updates every few weeks, but if you ever have questions, don’t hesitate to reach out, and I’ll use them as a signal for what the community wants to know more about.
Thanks, and meanwhile, keep saying yes to review requests!