Ian Mulvany

November 12, 2022

eLife, peer review, and architectures of attention



(How I imagine peer review works - via Stable Diffusion) 

#blog/draft #publishing #stm #peer-review #elife 

eLife announced a new peer review model a few weeks back that will be fully rolled out by the end of January 2023. 

It’s received a lot of attention, so you may have heard about it already. This post outlines the model, some reactions to it, and also why I believe that most people writing about it have missed a really important perspective on it. 

I was part of the team that launched eLife ten years ago, so I was delighted to see this continued evolution of the journal. 

In case you don’t want to read the whole post, here is the key thing that I think most people have missed: this new model doesn’t solve the main problem that we have in research, which is finding people to review things. (For the sake of this argument I’ll put aside the efficiency issues.) I think the hard thing is attracting people’s attention. Over ten years eLife has built up that ability. Most journals don’t have that. Without getting people’s attention, radically changing how you do peer review is somewhat irrelevant. I’m not saying that what they are doing isn’t great, and I’m not saying that it can’t help a lot, but what I am saying is that a critical underlying issue is the architecture of how we attract attention, and eLife had already solved that, which in a way gave them the luxury of being able to experiment with aspects of the process. That’s it, that’s my argument. 

I’ll come back to that point at the end of this post. The architecture of attention, and distributed attention models, are things that I wrote about a long time ago, and I’ll come back to them again in the next few weeks when I start to look at what’s going on at Twitter. 

But back to eLife. 

The key part of their new process is that it removes the “accept/reject” step and publishes everything that is sent for review. If a paper is sent for peer review, it will be published either as a “Reviewed Preprint” (the preprint, the eLife assessment, and the public reviews) or, at the discretion of the author, as a Version of Record that is fully typeset and indexed in PubMed. I think this last step is the point at which the author pays the APC. 

More info on this here: 

eLife’s New Model: Changing the way you share your research | Inside eLife | eLife

From next year, we will no longer make accept/reject decisions at the end of the peer-review process; rather, all papers that have been peer-reviewed will be published on the eLife website as Reviewed Preprints, accompanied by an eLife assessment and public reviews. The authors will also be able to include a response to the assessment and reviews.

The decision on what to do next will then entirely be in the hands of the author; whether that’s to revise and resubmit, or to declare it as the final Version of Record.

It is worth watching the video (90 seconds) on YouTube: “The eLife model puts you in control”.

All papers will undergo this process from the end of January 2023. Some papers have already been through it, and you can see an example below. 
Example paper (the eLife assessment and public reviews are on the same page):
https://elifesciences.org/reviewed-preprints/81535
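
To make the flow concrete, here is a minimal sketch of the new process as a little state machine, assuming my reading above is right. The stage names and the next_stage function are my own illustration, not anything from eLife’s actual systems:

```python
from enum import Enum, auto

class Stage(Enum):
    SUBMITTED = auto()           # paper arrives; editors decide whether to review it
    DESK_REJECTED = auto()       # selectivity now lives here, at the "desk reject" step
    REVIEWED_PREPRINT = auto()   # preprint + eLife assessment + public reviews
    VERSION_OF_RECORD = auto()   # typeset and indexed in PubMed (APC presumably paid here)

def next_stage(stage: Stage, send_for_review: bool = False,
               declare_final: bool = False) -> Stage:
    """Advance a paper one step through the (simplified) new eLife flow.

    There is no accept/reject decision after review: everything that is
    sent for review becomes a Reviewed Preprint, and the author alone
    decides whether to revise again or declare the Version of Record.
    """
    if stage is Stage.SUBMITTED:
        return Stage.REVIEWED_PREPRINT if send_for_review else Stage.DESK_REJECTED
    if stage is Stage.REVIEWED_PREPRINT and declare_final:
        return Stage.VERSION_OF_RECORD
    return stage  # revising and resubmitting yields a new Reviewed Preprint version

# Example: a paper that is sent for review and then declared final.
paper = next_stage(Stage.SUBMITTED, send_for_review=True)   # -> REVIEWED_PREPRINT
paper = next_stage(paper, declare_final=True)               # -> VERSION_OF_RECORD
```

The point the sketch makes is that the only gate left in the process is the decision to send a paper for review; after that, progression is entirely in the author’s hands.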

Why is this interesting?


eLife talks about the waste of time and effort when papers get rejected, and also about how an “accept/reject” label places more emphasis on the journal a paper is published in than on what the paper itself says. 

Of course there is another aspect to this too: if you don’t reject any papers you can increase your throughput of published work, either increasing the volume of content that draws readership (if those papers are posted freely), or increasing revenues (if those papers have an APC associated with them). (Clarke and Esposito have a fantastic analysis of the economics of the approach here: https://www.ce-strategy.com/the-brief/journalesque/.) 

The great game with OA for the last few years has been to find ways to increase publication volume while maintaining acceptable standards. What eLife are doing here is a bit like a ninja move, in that they are routing around the quality questions altogether; or at least, that is almost what they are doing. 

The implication for all other OA publishers is clear: if we can emulate some of this approach, we can increase the number of papers that we publish, and improve revenue as a result. 

What has the reaction been like? 


It’s generated a lot of discussion on Twitter, and been very polarising. A lot of people are hailing this as the future and lauding the approach. Others are decrying it as heralding the destruction of eLife! Whatever happens, the level of debate shows that academic Twitter is highly invested in the peer review process. 

This is one good negative analysis: https://twitter.com/vectorgen/status/1583141454205300738. It suggests that the new eLife process could lead to “high impact papers” where ”correctness is not important”. The implication here is that it opens the door to junk science that can get traction in fake-news circles (they don’t care about correctness; it’s probably more of an issue for some poor postdoc or grad student if the result leads them to waste half a year). 

Another negative take is this one: Killing eLife's selectivity reputation hurts science | Times Higher Education (THE)
Critically, it contains the following passage: “We supported eLife, not just because its peer review process was the fairest of any scientific journal, but also because of the imprimatur and kudos that acceptance of a manuscript in eLife implied. Indeed, some of the papers we published in eLife were springboards for members of my team, helping them land elusive faculty appointments and launch independent laboratories. It is an uncomfortable, but nonetheless true, fact of life that the same work published in a lower-tier journal might not have provided the same career-defining boost.”

The author is basically complaining that removing selectivity removes the “career advantage” of publishing in a selective journal. 

On the positive side a lot of commentators are hailing the efficiency gain and the boldness of trying something new. 

When you have several tens of thousands of journals out there, it seems very fair to support something that is iterating on the core way that we implicitly expect peer review to work. After all, peer review in its current form was introduced relatively recently (in the 1960s) to help clear a physical backlog of papers submitted to Nature. Why should there not be further experimentation? 

What’s the catch? 


eLife has the luxury of playing around with this new model because it is not hard for eLife to find reviewers for its papers. It has spent ten years building up its reputation (heavily leveraged on the back of HHMI and the Wellcome Trust) to create a high-value brand. We invested heavily in this when we launched the journal. In the first couple of years our marketing budget was so large compared to the volume of papers we were publishing that I remember one board member casually suggesting at a board meeting that we should buy everyone who published in eLife a new iPhone with an eLife-branded cover, as it would have been cheaper than the marketing budget plans. We spent considerable sums paying cohorts of editors in order to ensure fast responses and fast turnaround times. The science that was published was fantastic and we created a great author experience, but it took a focussed and well-funded effort to get there. eLife has an outstanding panel of section editors, and it can feel confident that when it sends a paper for review, the paper will get reviewed. I think this is the thing that is really, really hard. 

So for me this whole thing is much less about the “accept/reject” aspect of the process, and much more about the ability to attract attention: initially of the reviewers, then of the readers, and then of the authors who submit to the journal. Most journals struggle to find reviewers, so they could not follow this policy as effectively as eLife, even if they wanted to. That’s not an argument that they shouldn’t try; rather, I’m suggesting that the attention issue is perhaps more primary than the efficiency issue. 

The other thing that eLife has is a large cohort of active editors who can make the decision on whether to send a paper for review. They attract the attention of those editors, again through brand. The selectivity does not go away; it just moves to another location in the publishing process, giving more power to the “desk reject” step. 

What other strategies are out there? 


If we just think about this attention issue, plenty of others have tried to crack this over the years, using different approaches. 

•  PLOS ONE introduced a model where reviewers were asked to judge papers on correctness, not impact. It helps with volume, but it is still hard to get some reviewers to step away from impact assessment. It is also hard to find reviewers for some papers, and a small tail of papers ties up most of the reviewing time. There remains a criticism that a lot of the papers published in PLOS ONE are of very low utility. 

•  MDPI have created an engine to drive rapid publication: a huge machine for pumping out papers. This allows the publisher to scale and to provide timely reviews, but much of this activity is regarded as predatory. There is a good analysis suggesting that it is both predatory and non-predatory: Is MDPI a predatory publisher? – Paolo Crosetto. By instrumenting every aspect of the process they have turned review into something like a stochastic step, a step that one feels they would quite like to eliminate. My bet is on MDPI to be the first to adopt the eLife model! (If they haven’t already.) 

•  F1000 pioneered the publish-then-review model. A paper is published immediately, and when it gets two positive reviews it is then indexed in PubMed. Many papers struggle to get the required number of positive reviews, and so never make it to indexing. 

•  Review Commons (Review Commons – Improve your paper and streamline publication through journal-independent peer-review) is creating a way to have a preprint reviewed, and then have the preprint and reviews submitted to a journal. eLife is one of the biggest contributors to this effort, and medRxiv is a key participant. 

•  Standard publishing houses have cascading strategies. This is a good strategy when you have a sufficient number of titles and a sufficiently aligned set of editorial boards, but launching new titles solely to have somewhere to cascade articles to is probably inefficient, and getting editorial boards to work collectively across journals carries a significant coordination cost. 

•  Elsevier aim to get there by being a scale player. They have just under 3,000 journal titles, and are well known in the industry for having very high production standards and processes. Being the largest player, if they need to introduce some new process change, they have the clout to make it happen. Whether their scale slows their adoption of new approaches is an open question. 



About Ian Mulvany

Hi, I'm Ian - I work on academic publishing systems. You can find out more about me at mulvany.net. I'm always interested in engaging with folk on these topics, so if you have made your way here, don't hesitate to reach out if there is anything you want to share, discuss, or ask for help with!