Reviews and Commentary

2016: Reviewing for Radiology—Reporting Guidelines and Why We Use Them

Published Online: https://doi.org/10.1148/radiol.2016161204

Abstract

It is our hope that increased use and awareness of guideline criteria will allow manuscripts to be more complete at the time of submission and will aid our reviewers in better understanding, and thus critiquing, the methodology and results of the submissions they receive.

Online supplemental material is available for this article.

Introduction

Over the past several years, there has been increasing concern raised in the scientific and lay press regarding the ability of the peer review process to adequately filter out flawed science (1,2). In 2009, Chalmers and Glasziou estimated that 85% of biomedical research funding was being avoidably wasted (3). Such waste can occur when a research question is not clinically important, if important outcomes are not assessed, when appropriate design and methods are not used, when biases limit the generalizability of the results, when planned study outcomes are not assessed, and when findings cannot be adequately reproduced (3). An article by Ioannidis in 2005 nicely illustrated how the probability that a study is true depends on the pretest probability, statistical power, and bias (4). It is well recognized that peer reviewers (and editors) frequently fail to detect widespread methodological errors and reporting deficiencies (5–10). A 2011 United Kingdom government inquiry into the current peer review system concluded, “There is much that can be done to improve the quality of pre-publication peer review across the board and to better equip the key players to carry out their roles” (11).

Our reviewers are essential in our effort to publish only the highest quality research. Reviewers have expertise in the subject matter and in-depth knowledge of the relevant published literature, as well as expertise in research methods. Thus, they provide critical information to the editors regarding the novelty of the manuscript, the importance of its subject matter, and the quality, validity, and reproducibility of its science.

However, reviewers, like authors, may benefit from clear guidance as to what a journal expects from them and their expertise. The purpose of this editorial is to inform our reviewers regarding our adoption and increased attention to the use of reporting guidelines. Our hope is that by drawing attention to reporting guidelines, we can help authors enhance their study designs before research is initiated and before manuscripts are written. In addition, by increasing awareness of these guidelines among our reviewers, we hope to further enhance the peer review process.

Reporting Guidelines and Why We Use Them

Much has been written about how to improve the quality of scientific reporting. The use of guidelines has arisen as a method for improving the description of the study and the reporting of results (12). However, there is still room for improvement. In an assessment of journal use of reporting guidelines, a study published in 2012 reported that of the health research journal websites evaluated, only 41 of 116 provided online instructions to peer reviewers, and less than half of those (19 of 41, 46%) mentioned reporting guidelines (13). Initial evaluations of reporting guidelines found that their use is associated with modest improvements in the quality of reporting (14–16).

In our experience, many authors and reviewers are unaware of the guidelines and their potential utility. We are aware that some might think of these requirements as yet another bureaucratic “hoop to jump through.” Rather, it is our hope that by stressing adherence to reporting guidelines we will enhance the quality of reporting in the research that we publish. In recognition of the need for more complete reporting, we announced in January 2016 (17) that we will require authors to follow the reporting guidelines (where appropriate) and to submit the corresponding completed checklists with their submissions. Moreover, we began to forward the completed checklists to reviewers in the hope that knowledge of the reporting guidelines and of authors’ adherence to them will help inform their critiques.

Our journal has endorsed the use of the following guidelines by our authors (and reviewers) to help ensure more complete reporting of study methods and results:

  1. For studies dealing with diagnostic accuracy, Standards for Reporting of Diagnostic Accuracy (STARD) (18);

  2. For observational studies (such as cohort, case-control, or cross-sectional studies), Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines (19);

  3. For randomized controlled trials, Consolidated Standards of Reporting Trials (CONSORT) statement (20);

  4. For meta-analyses of randomized controlled trials, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (21).

STARD (18) and STROBE (19) are the guidelines most commonly applicable to the types of studies submitted to our journal: diagnostic accuracy studies (STARD) and observational studies (STROBE). These guidelines contain many of the same recommendations. Key features of each are the instructions to state specific objectives of the study, including hypotheses; to describe eligibility criteria; to clearly describe methods and outcomes; and to address potential sources of bias. These guidelines aid the reviewer in understanding that the results of diagnostic accuracy studies or the outcomes of population studies are in many ways driven by how the patient population was accrued.

The PRISMA statement describes how to perform and report systematic reviews and meta-analyses. Among the many items described in this guideline are the rationale for the systematic review in the context of what is already known, the need to clearly describe the search strategy utilized, and to describe methods for assessing the risk of bias in the studies that are synthesized in the report. The PRISMA guideline helps to highlight the effect of heterogeneity in the studies that are analyzed and point out the limitations of employing different reference standards when aggregating study data for combined analysis. When data are heterogeneous and methodologies differ, it can be difficult to meaningfully aggregate outcomes, and a systematic review might be more appropriate than a meta-analysis, as described in a recent publication in our journal (22).

Although randomized controlled trials are not frequently submitted to our journal, we recognize the importance of optimizing the reporting of these studies, as they are prospective and have the potential to provide a higher level of scientific evidence. Moreover, the patients who participate in these research projects deserve the utmost care in the reporting of studies of which they are a part. We expect clinical trials to be registered with prespecified data analysis. The CONSORT guideline gives information regarding the need to prespecify primary and secondary outcome measures, including how and when they were assessed. Of particular importance is how patients were randomized.

Each of these guidelines recommends use of a flow diagram so that the reader (and reviewer) can better understand how subjects (or publications, in the case of PRISMA) were included and excluded. These diagrams are extremely helpful in understanding study limitations, sources of bias, and generalizability of the results. Each of these guidelines provides checklists to help authors be complete in their description of methodology and reporting of results.

The EQUATOR Network (Enhancing the QUAlity and Transparency Of health Research) provides updated resources on checklists and guidelines on reporting medical research to help improve the quality of published health research (www.equator-network.org). We encourage our reviewers (and authors) to become familiar with this informative website.

On the EQUATOR website there are many guidelines available. We chose the above four for our initial foray into requiring use of reporting guidelines for submission and review, as they represent the most commonly used clinical research study designs in the research submitted to our journal. We realize that some very early exploratory clinical studies as well as basic laboratory or animal research do not fall under the descriptions for these guidelines. Thus, at this time we are not requiring checklists for these types of manuscripts. We encourage our authors to use the instructions for authors and checklists that we have in our author toolkit (pubs.rsna.org/page/radiology/pia) to enhance their organization of material sent for review.

It is our hope that increased use and awareness of guideline criteria will allow manuscripts to be more complete at the time of submission and will aid our reviewers in better understanding, and thus critiquing, the methodology and results of the submissions they receive. Moreover, we will of course continue to ask our reviewers to provide concrete suggestions for improving the description of methodology and the reporting of results, as well as suggestions for focusing the introduction and discussion. We ask that our reviewers ensure that the manuscript has up-to-date information and that the conclusions are not overstated (23). Because manuscripts are assessed for statistical methodology by statisticians, are edited for scientific content by the editors, and then are copyedited by experts, we encourage reviewers to focus on the quality and clarity of the scientific reporting and not on complex statistics, grammar, or word use unless these affect the ability of readers to understand the content of a manuscript.

Assessment of Novelty and Importance

Quality, validity, and reproducibility are crucial, but if a study does not address a novel, important, or interesting topic, it is unlikely to be accepted for publication in our journal.

Thus, no reporting guideline can replace the main issue at the heart of editorial decisions: Is the manuscript novel, important, and/or interesting? We count on our reviewers to guide us in these assessments.

Novelty is an important criterion in our acceptance of manuscripts. We try to publish manuscripts that provide new information and new concepts, that describe new technology, that define new diagnostic or therapeutic approaches, and that resolve existing controversies. While novelty is key for many accepted manuscripts, a study that is neither the first report nor the largest series of patients may still be important. We think of study importance as findings that potentially change practice. “News you can use” is a phrase often mentioned at our weekly editorial meetings. We also consider importance in manuscripts that help us understand biology and/or technology. For early or preclinical studies, importance can lie in the generation of a new hypothesis or the stimulation of further research. Other key considerations in our assessment are whether studies are interesting and/or informative. Included in this category are studies whose conclusions provide clear direction, add considerably to our available information and/or provide useful information, and/or address a topic of reader interest. Thus, the nature of the subject matter, the perceived interest of the readers, and the needs of the journal can drive some of our decisions regarding acceptance.

Ethics of Peer Review

Since peer review is a fundamental part of the functioning of our journal, it is crucial that all aspects of the handling of manuscripts be done in an ethical manner. In a 2002 report on integrity in scientific research, the National Research Council wrote, “Researchers should agree to be peer reviewers only when they can be impartial in their judgments and only when they have revealed their conflicts of interest….A delicate balance pervades the peer-review system, because the best reviewers are precisely those individuals who have the most to gain from ‘insider information’: They are doing similar work and they will be unable to ‘strike’ from memory and thought what they learn through the review process. Investigators serving as peer reviewers should treat submitted manuscripts … fairly and confidentially and avoid using them inappropriately” (24).

We ask our reviewers to be respectful of the studies and authors of the manuscripts they review and to remember that the information in the manuscripts they review is confidential and not to be shared with others. This respect includes not only the wording of recommendations for improvement but also the timely return of the review itself (a 2-week turnaround time is allotted per review). Such timely return allows us to maintain a reasonable time to first decision for the manuscript (average time to first decision was 30 days in 2015).

Reviewers are required to inform the editor of any biases or conflicts of interest they may have regarding the manuscript. If a reviewer considers that his or her knowledge of the likely authors of the manuscript may cause the review to be biased, then the reviewer should not review the manuscript. Rather, the reviewer should notify the editorial office so the manuscript can be reassigned.

The Rewards of Manuscript Review

We have a dedicated cadre of outstanding volunteer reviewers, and we greatly appreciate the time and effort they dedicate to our journal. The rewards of manuscript review are considerable:

  1. Being asked to be a peer reviewer is an acknowledgment of your expertise.

  2. Participation in the review process helps clinicians and scientists to be aware of new developments.

  3. Reviewers receive feedback in the form of written comments from other peer reviewers, which helps sharpen their skills.

  4. Each peer review is evaluated by the Editor for timeliness and quality. Editor’s Recognition Awards are given yearly on the basis of the number of manuscripts reviewed and the mean quality and timeliness scores of the reviews received. Reviewers who have been so recognized repeatedly are eligible to serve on our editorial board.

  5. Reviewers can potentially obtain up to 3 AMA PRA Category 1 CME credits for their completed reviews, and up to 15 hours of such credit can be obtained in a given calendar year.

  6. The time and effort put in by peer reviewers aids in selection of topics and enhancing the quality of manuscripts published by the journal.

Details of Review

Details of the essential components of manuscript review are provided in Appendix E1 (online) and are an update to the editorial provided by Dr Proto in 2007 (25). Although the information provided in Appendix E1 is intended for our peer reviewers, we hope authors will also find it of use in the preparation of their manuscripts. We have also created a reviewer toolkit (pubs.rsna.org/page/radiology/reviewers), which includes details on how to become a reviewer as well as links to a reviewer template and checklist that we invite reviewers to use to help format their critiques.

Conclusion

Peer review is an essential component of the final product that is our monthly journal. The reviewer’s expertise in the subject matter and research methods and his or her knowledge of the literature help ensure that the manuscript under evaluation contains up-to-date information and is appropriately focused on the new information contained therein. The use of scientific reporting guidelines should aid reviewers in assessing the standard methodologies we expect from our publications. We hope our Appendix with detailed guidelines and our enhanced focus on guidelines will be of help to both our reviewers and potential authors, and we welcome any feedback they would like to provide. Last, we thank our reviewers for their outstanding service both to the peer-review process and to Radiology.

References

  • 1. Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA 2005;294(2):218–228.
  • 2. Lehrer J. The truth wears off. New Yorker. Dec 13, 2010.
  • 3. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009;374(9683):86–89.
  • 4. Ioannidis JP. Why most published research findings are false. PLoS Med 2005;2(8):e124.
  • 5. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ 2008;336(7659):1472–1474.
  • 6. Vähänikkilä H, Tjäderhane L, Nieminen P. The statistical reporting quality of articles published in 2010 in five dental journals. Acta Odontol Scand 2015;73(1):76–80.
  • 7. House of Commons Science and Technology Committee. Peer review in scientific publications. http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf. Published 2011. Accessed March 31, 2016.
  • 8. Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008;3(8):e3081.
  • 9. Papathanasiou AA, Zintzaras E. Assessing the quality of reporting of observational studies in cancer. Ann Epidemiol 2010;20(1):67–73.
  • 10. Nieuwenhuis S, Forstmann BU, Wagenmakers EJ. Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci 2011;14(9):1105–1107.
  • 11. House of Commons Science and Technology Committee. Peer review in scientific publications. London, England: House of Commons, 2011.
  • 12. Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature 2014;505(7485):612–613.
  • 13. Hirst A, Altman DG. Are peer reviewers encouraged to use reporting guidelines? A survey of 116 health research journals. PLoS One 2012;7(4):e35621.
  • 14. Plint AC, Moher D, Morrison A, et al. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust 2006;185(5):263–267.
  • 15. Smidt N, Rutjes AW, van der Windt DA, et al. The quality of diagnostic accuracy studies since the STARD statement: has it improved? Neurology 2006;67(5):792–797. [Published correction appears in Neurology 2008;71(2):152.]
  • 16. Prady SL, Richmond SJ, Morton VM, Macpherson H. A systematic evaluation of the impact of STRICTA and CONSORT recommendations on quality of reporting for acupuncture trials. PLoS One 2008;3(2):e1577.
  • 17. Levine D, Kressel HY. Radiology 2016: the care and scientific rigor used to process and evaluate original research manuscripts for publication. Radiology 2016;278(1):6–10.
  • 18. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. Radiology 2015;277(3):826–832.
  • 19. Institute of Social and Preventive Medicine (ISPM). STROBE (2007) checklists, version 4 2007. http://www.strobe-statement.org/index.php?id=available-checklists. Accessed January 13, 2016.
  • 20. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c332.
  • 21. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med 2009;151(4):W65–W94.
  • 22. McInnes MD, Hibbert RM, Inácio JR, Schieda N. Focal nodular hyperplasia and hepatocellular adenoma: accuracy of gadoxetic acid-enhanced MR imaging—a systematic review. Radiology 2015;277(2):413–423. [Published correction appears in Radiology 2015;277(3):927.]
  • 23. Levine D, Bankier AA, Kressel HY. Spin in radiology research: let the data speak for themselves. Radiology 2013;267(2):324–325.
  • 24. Institute of Medicine and National Research Council. Integrity in scientific research: creating an environment that promotes responsible conduct. Washington, DC: The National Academies Press, 2002. doi:10.17226/10430.
  • 25. Proto AV. Radiology 2007: reviewing for Radiology. Radiology 2007;244(1):7–11.

Article History

Received May 25, 2016; final version accepted May 25.
Published online: July 11 2016
Published in print: Sept 2016