
What does open evaluation contribute to the scientific conversation?

Published in October 2023

The Covid crisis and, more specifically, advances in knowledge about the disease and how to respond to it have thrust preprints (scientific publications posted online before they have been officially evaluated and accepted by a scientific journal) into the media spotlight.

Given the polarised nature of the discussions and the importance of the public health issues at stake in the debate on the scientific reliability of preprints, the general public has not always been able to grasp the importance of the peer review process, also known as “evaluation”, in the way the scientific community operates.

The publication model in today's academic world is increasingly tightly regulated in most disciplines. The rules governing it are designed to promote the transparency of publications by subjecting them to a complex editorial circuit involving different people who are more or less independent of each other.

Traditional evaluation

Authors submit their draft articles to bodies known as "journals", which may be run by learned societies, research teams, university presses or independent commercial publishers of varying degrees of reach and power.

Journals are truly collective entities, organised into centres of responsibility, or committees, and they handle the two essential stages in transforming a text into a published scientific article: the peer-review stage and the editorial preparation stage (spelling corrections, standardisation, page layout, etc.).

The evaluation stage, which determines whether the article is accepted or rejected, is fundamental. Journals are responsible for putting in place the conditions to ensure that this evaluation guarantees an “objective” approach to the texts submitted to them, avoiding conflicts of interest and personal agendas.

This evaluation should also offer authors the opportunity to significantly improve their texts, not only to make them publishable, but also to use them to advance the body of scientific knowledge on a given subject.

For all these reasons, in a highly competitive and sometimes downright toxic academic environment, the preferred framework for expert evaluation is a double-blind peer review: the author does not know the names of the people evaluating their text, and vice versa. The expert is chosen from the academic world on the basis of their own work and experience on the subjects covered in the article.

This general anonymity, combined with the confidential nature of the content for evaluation (reviews are sent directly to the author but are not made public), is considered by a majority of the academic world to be the best guarantee of sound and effective evaluation. It is also at the heart of the “prestige economy” of scientific publications, which recent studies have shown enables publishing houses to exert a strong hold on scientific research.

New formats

Double-blind evaluation is not without its drawbacks. It did not become established in the scientific community until late, in the 1970s. It is not uncommon for it to lead to harsh, or even downright violent, criticism, a flaw facilitated by the anonymous nature of the exchanges in which it takes place. Authors and reviewers may feel frustrated at not being able to exchange ideas "as responsible adults".

Other forms of peer review are beginning to gain ground in the scientific community, based on two principles: transparency and distribution. Transparency, meaning that the author and reviewer know each other's names and can talk to each other. Distribution, meaning that the review process can be open not just to a small circle of experts commissioned by a journal, but to anyone who wants to delve into the article, reread it and comment on it.

These new forms of evaluation correspond to what is known as open peer review, which has developed platforms such as hypothes.is to implement these principles. The European e-learning platform FOSTER also offers a complete online course for training in open evaluation, a sign of the growing interest in this new way of evaluating.

Open peer review offers a number of advantages:

  • the versions of articles submitted to journals are of better quality to begin with (because the submitted version is made public, authors generally take more care with the content and form of their text);

  • the evaluations tend to be more benevolent;

  • the arguments and counter-arguments can be published, thereby contributing to public scientific debate.

But the ideal form of scientific conversation that seems to be taking shape here cannot be achieved without some drawbacks. For example, young researchers or those without permanent status may feel better protected by the anonymity of double-blind evaluation than by the exposure represented by the disclosure of their names, or even the public posting of their reviews online.

Even open evaluation requires safeguards to enable the scientific community to engage in dialogue and form a community, without putting the most vulnerable at risk.

Openness and benevolence

Other alternatives, within the more traditional framework of peer review, are also being put in place: for example, not forwarding evaluation reports as they are, but summarising them, editing out the most violent and the most debatable passages, and reformulating unnecessarily hurtful phrases.

This also helps to avoid contradictory instructions: "What do I do when the first evaluation tells me not to do what the second one asks me to do more of?" It takes the weight off the shoulders of the reviewers, who are no longer asked for an opinion "with a view to publication" (or not), but simply for an opinion on the text.

An evaluation that is designed to be open, and announced as such from the outset, promotes better teamwork. In this way, every contribution to scientific research can be acknowledged: the annual or ongoing publication in journals of the names of those who have contributed to evaluations provides recognition for this necessary but invisible step. It also facilitates the work of publishing professionals, who are responsible for preparing copies or versions for publication.

Approved texts that have been carefully worked on in advance, reread by several pairs of eyes and revised following constructive exchanges between peers are clearly better in terms of both content and form, with particular care generally having been taken with the form of the text given that it has already been circulated prior to publication. These various qualitative factors in favour of open peer review have been documented in a number of publications offering quantified analyses of the positive or negative impact of the different models, blind or open peer review.

These questions do not only apply to the publication of scientific articles. Proofreading and evaluation are now an everyday part of research activity. In addition to articles in journals, publications of chapters in books or complete books, applications for permanent positions, prizes, medals and other awards, contributions to conferences and congresses, and funding for travel or research projects may also be submitted for evaluation.

In turn, reviewers and those being reviewed are caught up in a system of rewards that eludes them: researchers undoubtedly need to rethink the qualities of dialogue, openness and benevolence that can enable the scientific community to operate more serenely.

Using these terms is not about making scientific practice a matter of “good feelings”, but about rethinking the interlocking relationships that make it possible; these relationships, looked at head-on rather than swept under the carpet, reveal the interdependencies between the different actors who make up scientific practice, instead of the forms of individualisation, “stardom” and hierarchisation. Journals are therefore a good place to experiment with forms of transparency and pluralisation.

In conclusion, it should be noted that the information presented here is intended to provoke a wide-ranging debate. It is based on our experience as stakeholders, at one time or another, in the review process. Some pioneering experiments (such as that of the journal Vertigo) already offer food for thought. Let's hope that the scientific community as a whole will take it on board and continue the debate.

This article is part of the series “Les belles histoires de la science ouverte”, published with the support of the French Ministry of Higher Education, Research and Innovation. To find out more, please visit Ouvrirlascience.fr.

Identity card of the article

Original title: Comment l'évaluation ouverte renouvelle-t-elle la conversation scientifique ?
Authors: Anne Baillot, Anthony Pecqueux, Cédric Poivret, Céline Barthonnat and Julie Giovacchini
Publisher: The Conversation France
Collection: The Conversation France
License:

This article is republished from The Conversation France under a Creative Commons license. Read the original article. An English version was created by Hancock & Hutton for Université Gustave Eiffel and was published by Reflexscience under the same license.

Date: October 17th, 2023
Languages: French and English
Keywords:

universities, scientific journals, research, human sciences, open science, Les belles histoires de la science ouverte