General comments
The qualitative responses show that journals can and should aspire to improve their processes: no journal scored highly in every respect. To take just one example of an opportunity for improvement from each of the Essential Areas:
- Integrity: 46% of journals do not refer reviewers to reporting
guidelines
- Ethics: 41% of journals for which it is relevant do not have a process
in place for checking for potential image manipulation
- Fairness: 75% of journals do not collect editor conflicts of interest
- Usefulness: 74% of journals have no practice around soliciting
feedback from reviewers
- Timeliness: 56% of journals do not inform authors when they might
experience delays
It is encouraging to see that the highest ratings were in the area of
Ethics, no doubt because this area has received the most attention and
is arguably where the stakes are highest; journals which have weak
ethics processes in place stand to lose a great deal if their poor
practice is exposed.
It was instructive, but not surprising, to note that there were
differences between journals’ self-rating and our rating. Journals
published by large publishing houses may not always be aware of systems
and processes which the publisher has implemented on a wide scale.
Publishers could invest more effort in helping journals stay aware of
these developments.
Some aspects of the peer review process are checked for in submission
systems, so higher compliance is expected for these. For example, it is
now standard practice across Wiley journals to use iThenticate to check
for overlapping text (Q27). It is also easier to perform well on certain
questions than on others. Ethics questions score highly, perhaps because
so much is at stake in having poor practice in this area, and because it
is a more regulated area, with guidelines and recommendations for good
practice.
From our analysis of the qualitative responses we identified a number of
obstacles to good practice:
- a lack of technical knowledge or awareness of the opportunity to adopt
good practice, for example not being familiar with readily available
technological solutions;
- a lack of consistency, for example asking authors to comply with
reporting guidelines but not asking reviewers to assess manuscripts
against these;
- a fear of additional workload, for example not offering authors the
opportunity to appeal against decisions; and
- a fear of exposure, for example having weaknesses in the peer review
process identified and called out.
Rather than an end in itself, we view the self-assessment as the
beginning of a journey. Journals can use the badges they receive as part
of the exercise to identify strengths and areas for improvement, and then
be guided by the hints and tips infographic
(http://secure.wiley.com/better-peer-review) to make adjustments
to their processes.
to their processes. Other suggestions for follow-up include discussion
of some of these practices at editorial meetings, for example in
exploring how to make editorial boards more diverse; providing informal
or formal training for editors and reviewers; and repeating the
self-assessment after, say, six months or one year, to assess progress.