Lessons Learned from Peer Learning Conference in Cardiothoracic Radiology
Abstract
Medical errors may lead to patient harm and may also have a devastating effect on medical providers, who may suffer from guilt and the personal impact of a given error (second victim experience). While it is important to recognize and remedy errors, it should be done in a way that leads to long-standing practice improvement and focuses on systems-level opportunities rather than in a punitive fashion. Traditional peer review systems are score based and have some undesirable attributes. The authors discuss the differences between traditional peer review systems and peer learning approaches and offer practical suggestions for transitioning to peer learning conferences. Peer learning conferences focus on learning opportunities and embrace errors as an opportunity to learn. The authors also discuss various types and sources of errors relevant to the practice of radiology and how discussions in peer learning conferences can lead to widespread system improvement. In the authors’ experience, these strategies have resulted in practice improvement not only at a division level in radiology but in a broader multidisciplinary setting as well.
The online slide presentation from the RSNA Annual Meeting is available for this article.
©RSNA, 2022
SA-CME LEARNING OBJECTIVES
After completing this journal-based SA-CME activity, participants will be able to:
■ Explain the difference between peer review and peer learning and apply practical tips for transitioning to and organizing an effective peer learning conference (PLC).
■ Describe the spectrum of diagnostic errors and strategies used to reduce them.
■ Discuss the scope of opportunities that peer learning can open in a collegial environment by reviewing case examples from cardiothoracic radiology.
Introduction
Diagnostic errors contribute to 10% of patient deaths and 6%–17% of adverse events in hospitals (1). Recognizing and preventing different types of medical errors is important for optimal patient care. Historically, score-based second-reader peer review systems (such as RADPEER) have been used to ensure maintenance of performance standards within a radiology department (2). However, this type of system can have undesirable attributes and consequences, including subjectivity, sampling bias, underreporting of errors due to peer relationships, and fear of punitive action (3).
Recently, the Institute of Medicine has emphasized collaboration, teamwork, and culture and system improvement as keys to reducing diagnostic errors, and pioneering authors in the field have offered an alternative approach to peer review: peer learning (4). Peer learning through peer learning conferences (PLCs) provides an opportunity to accomplish quality improvement and provider-level education in a collegial, collaborative environment. This article focuses on comparing and contrasting traditional peer review and PLCs and offers practical tips for transitioning to a peer learning format.
Understanding the cognitive biases and contributing factors during these conferences can help identify the root causes of errors and guide targeted interventions to prevent future occurrences. We highlight case examples from our cardiothoracic PLC that help reinforce educational concepts and uncover opportunities for system-level improvement not only in the division but in a broader multidisciplinary setting as well.
Peer Review and Peer Learning
In the past 20 years, the dominant model for quality assurance in radiology has been the RADPEER program of the American College of Radiology (ACR), with several iterations (5–7). This is essentially a numeric scoring system based on peer-to-peer interpretation ratings. In this model, originally interpreted cases are randomly selected and reviewed by a peer radiologist. The cases are scored numerically (from 1 to 3) on the basis of the severity of the miss, allowing error rates to be computed for individuals (7).
This can help in identification of an outlier or underperforming radiologist, and performance improvement can then occur through various means such as remediation or restriction (5–7). A proposed benefit of traditional peer review is quantification of practitioner-specific error rates and compliance with organizational and board requirements (5). For instance, although not specifically designed for Ongoing Professional Practice Evaluation (OPPE), some authors have advocated that RADPEER data for individual radiologists can meet OPPE requirements (8).
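To make the arithmetic behind these practitioner-specific error rates concrete, the following minimal sketch (in Python, with entirely hypothetical review records) tabulates a per-radiologist discrepancy rate from scores on the three-point scale described above. Treating scores of 2 or 3 as discrepancies is our illustrative assumption, not a prescription of the RADPEER program.

```python
from collections import defaultdict

# Hypothetical RADPEER-style review records: (radiologist, score).
# Three-point scale: 1 = concur with interpretation;
# 2 and 3 = discrepancies of increasing significance.
reviews = [
    ("rad_A", 1), ("rad_A", 1), ("rad_A", 2),
    ("rad_B", 1), ("rad_B", 3), ("rad_B", 1), ("rad_B", 1),
]

counts = defaultdict(lambda: {"total": 0, "discrepant": 0})
for radiologist, score in reviews:
    counts[radiologist]["total"] += 1
    if score >= 2:  # assumption: scores of 2 or 3 count as discrepancies
        counts[radiologist]["discrepant"] += 1

for radiologist, c in sorted(counts.items()):
    rate = c["discrepant"] / c["total"]
    print(f"{radiologist}: {c['discrepant']}/{c['total']} discrepant ({rate:.0%})")
```

It is exactly this kind of individual-level tally, however it is computed, that the peer learning model moves away from.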
However, the scoring-based model is flawed owing to subjectivity, inaccuracy, sampling bias, and a false sense of security that may result from underreporting (3,9–12). The emotional toll of peer review systems on radiologists can be high, leading to feelings of anxiety, shame, and humiliation and fostering a culture of defensiveness. Also, the focus of these systems tends to be on individual radiologists rather than on specific areas of practice needing improvement, leading to resentment, poor attendance, and lack of compliance among radiologists (4). Harm in high-risk situations has been found to be more often related to organizational activities and workflow than to individual mistakes. Therefore, attributing errors to individuals is not effective if quality improvement is desired.
Maintaining a respectful, nonpunitive, collaborative culture improves performance in modern radiology practice (4). This environment encourages radiologists to embrace errors as learning opportunities rather than as a measure of one's performance. A growing body of literature demonstrates the educational value of sharing, studying, and discussing errors (5,6,11,16,17).
Although peer learning has several advantages, there are important challenges and limitations that need to be overcome for a robust learning experience. Transition to a successful peer learning program requires culture change, considerable time investment, and strong leadership support. It may also require information technology (IT) support or tools, buy-in from practicing radiologists, and incentives for participation.
Another potential limitation is the lack of a quantitative performance measure that can qualify for OPPE, Focused Professional Practice Evaluation (FPPE), or other organizational requirements. However, to enable learning, it is essential to separate peer learning from measurement of individual performance. Several other metrics can be used for OPPE, such as PLC attendance. Importantly, the focus should be on participation rather than performance evaluation; the latter can be accomplished with other metrics that are beyond the scope of this article (18).
Another important aspect to consider when transitioning to peer learning is to ensure active participation of all members. Despite its shortcomings, traditional peer review involves the entire group in case collection as well as grading of the “miss.” However, in PLCs the responsibilities of conference preparation and curating cases fall on a few specific people. Therefore, it becomes important to encourage all team members to provide cases and participate in the discussion.
In our experience, shifting to PLCs has led to more engaging discussions focused on quality improvement rather than the scoring-based discussions of peer review conferences. In turn, team members participate more actively, and discussions are collaborative rather than defensive. The differences between peer review and peer learning are summarized in Table 1.
Transitioning to Peer Learning
Several key aspects need to be carefully considered for successful transition to peer learning and for achieving the full educational and quality improvement potential of these conferences. In the following section, we describe some of our strategies.
Targeted Selection of Cases with Learning Opportunities and Inclusion of “Great Call” Cases
Instead of random audits of cases, targeted case selection with learning opportunities can be achieved through many sources, such as review of prior studies during interpretation, multidisciplinary conferences, surgery or pathology discrepancy reports, reinterpretation of outside studies, incident-reporting systems, and complaints to radiology leadership (4). Having several avenues for case selection helps enrich the diversity of cases presented at the conference.
Conference Organization
Given the time commitment, it is helpful to designate a leader supported by a committee to serve as the point of contact for case submissions, conference organization, and case preparation. Large multispecialty practices may benefit from several subspecialty committees, while smaller general radiology practices may be better served by a single committee or individual. In our experience, having two designated faculty members serve as coleaders for peer learning has helped maintain continuity of the conference when one leader is unavailable or away. We suggest meeting at regular intervals, with the frequency depending on the size and specialization of the group.
Also, care should be taken to arrange the PLC at a time allowing maximum participation and avoiding conflicts with other meetings, such as tumor boards or multidisciplinary meetings. We typically hold our PLCs during the noon hour and present cases using PowerPoint (Microsoft) in an anonymous way, protecting the privacy of the patient and the interpreting radiologist. This is important to maintain trust and to have engagement of the participants (21). Our PLC is also accredited for 1 hour of continuing medical education (CME), which is another way of incentivizing participation.
These meetings can be held in person and broadcast, so that members who are offsite can have the opportunity to participate. In fact, the virtual environment imposed by the need for physical distancing during the COVID-19 pandemic has opened new vistas, potentially extending these learning opportunities outside one’s institution. These kinds of multi-institutional conferences can further broaden horizons and help colleagues learn from each other (22). We also record the conferences, so that they can be available for future reference and for participants unable to attend.
Case Discussion
Case discussion is one of the most important elements of these conferences (23). The focus should be on determining potential contributing factors, rather than scoring errors.
Providing Feedback
In our PLCs, we strive to ensure confidentiality and provide feedback to the radiologist in an objective and professional manner. In some cases, this feedback is already provided by the radiologist who first came across the error. Otherwise, it is provided by the conference moderator. Important elements of feedback include the discrepancy itself and any takeaway points from the discussion. In addition, to maximize learning opportunities, the conference moderator sends out a postconference summary to the group, attaching relevant articles related to the topic discussed.
We also organize a separate meeting for trainees (fellows and residents) to preserve the distinct dynamics of each group and to cater to learning needs at various levels.
Causes and Classification of Diagnostic Errors
As discussed earlier, classifying errors is an important aspect of PLCs, with the potential to change practice and improve patient safety.
An error is a discrepancy that substantially differs from peer consensus. Errors in diagnostic radiology can be classified into three major categories: (a) perceptual (inability to identify the abnormality), (b) interpretive (incorrect interpretation), and (c) inadequate communication with the clinical providers.
Perceptual and interpretive errors are particularly subject to brain processing and cognitive bias (Fig 1). In the following sections, we discuss some key cognitive biases, which are highlighted in the subsequent case examples from our PLC. While this article is not meant to be an exhaustive review of various biases, we hope that a brief review will help the reader understand the concept and implementation strategies for quality improvement from peer learning activities. For a detailed review of errors, the reader is directed to excellent articles specifically dedicated to this topic (24–26).
Framing Bias
This is the tendency of the radiologist to be influenced by how the clinical question is presented, leading to different conclusions from the same data. Clinical information may be scant or even misleading on requisitions, which can have an effect on image interpretation. Masking the clinical history at the time of image interpretation and then reviewing the history can help avoid this error. Also, reviewing the electronic medical records is helpful when the provided clinical information may be inadequate.
Satisfaction of Search Bias
This happens when the visual search pattern is discontinued after identification of an abnormality that can explain the patient's symptoms. This error can be reduced by using a systematic or checklist approach and structured reporting.
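As a simple illustration of the checklist idea, the Python sketch below encodes a hypothetical systematic review checklist for chest CT; the specific items and function are our own illustrative choices, not a validated instrument.

```python
# Hypothetical systematic review checklist for chest CT. Iterating
# through every item before sign-off counters the urge to stop
# searching once one abnormality explains the symptoms.
CHEST_CT_CHECKLIST = [
    "lungs and airways",
    "pleura",
    "mediastinum and hila",
    "heart and pericardium",
    "great vessels",
    "chest wall and bones",
    "upper abdomen (included sections)",
    "lines, tubes, and devices",
    "scout/localizer images",
]

def remaining_areas(reviewed):
    """Return checklist areas not yet marked as reviewed."""
    return [area for area in CHEST_CT_CHECKLIST if area not in reviewed]

# Example: the reader stopped after finding one explanatory abnormality.
print("Still to review:", remaining_areas({"lungs and airways", "pleura"}))
```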
Anchoring Bias
This is failure to adjust an initial impression despite receiving additional information. Stepping back to look at the big picture when interpreting a study, being aware of this bias, and avoiding the tendency to anchor on a diagnosis early in the process can be helpful.
Alliterative Bias
This is the influence of previous imaging reports on a radiologist’s judgment. To avoid this error, one must consider reviewing prior reports only after rendering an interpretation.
Scout Neglect Bias
This happens when radiologists do not expect to find anything meaningful on scout images. This error can be reduced by always looking at scout images for any incidental findings.
Other Errors
Apart from cognitive biases, system-related factors such as understaffing, distractions in the reading room, IT issues, and technical artifacts can also contribute to errors. Different causes of diagnostic errors and practical tips to avoid them are summarized in Table 2.
In the following sections, we discuss specific case examples from our cardiothoracic division PLC and how they led to local quality improvement efforts.
Case 1: Retained Catheter Stylet Fragment
Case Description
An 80-year-old woman underwent chest radiography after placement of a peripherally inserted central catheter (PICC); radiography demonstrated a wire fragment overlying the right hilum (Fig 2A, 2B). This was correctly identified as a fragment of a wire but was attributed to an inferior vena cava (IVC) filter strut at original interpretation. The fragment was removed; after detailed investigation, it was found to be the tip of the stylet used during PICC insertion.
Error and Contributing Factors
There may have been a component of anchoring bias (the assumption that the wire fragment was a migrated IVC filter strut), despite the fragment appearing for the first time at post–PICC placement imaging. Also, the interpreting radiologists had no prior experience with migrated PICC stylet fragments; hence, this possibility was not considered.
Outcome and Practical Tips
Within a few months, additional cases came to light. It was found that the incidents involved placement by different PICC nurses. Discussions were organized with each of the individual nurses to understand their technique of placement and see if there was a common technical factor or particular step in placement that might be problematic, leading to the sudden increase in incidence of these migrated fragments. However, the discussions were unrevealing.
Radiography of various PICC components was then performed to try to determine exactly which component of the PICC system had migrated (Fig 2C, 2D). On detailed analysis, it became clear that the wire fragment corresponded to the tip of the PICC stylet. Note the small lucency at the distal part of the stylet wire (arrow in Fig 2D), which matched the migrated fragment (arrow in Fig 2B). All cases with the migrated PICC stylet fragment had the same lucency at the tip, consistent with a broken and migrated stylet tip.
A multidisciplinary review revealed that all such cases were related to a specific new PICC system that had been instituted and were not related to operator technique. This case series was presented at our PLC to educate the group about the appearance of these migrated fragments. At the same time, a review with nursing and correlation with PICC components helped determine the exact cause and subsequently led to abandonment of use of this particular PICC system at an institutional level.
Case 2: Missed Surgical Sponge
Case Description
A 40-year-old man underwent postoperative chest radiography after cardiac surgery. The radiograph was reported to show expected postoperative findings. However, a tonsil sponge in the mediastinum was missed (Fig 3A).
Error and Contributing Factors
The indication for the study did not specify a missing sponge (framing bias), and the overlying sternotomy wires in the field made it difficult to identify the sponge. Additionally, although a tonsil sponge has a distinctive appearance, unfamiliarity with its radiographic features posed a challenge in this case.
Outcome and Practical Tips
This case was reviewed and discussed at our PLC. An important challenge was the unfamiliarity with various types of surgical devices and sponges. This led to a practice improvement measure of creating a gallery of sponges and postsurgical devices in the picture archiving and communication system (PACS) for future reference (Fig 3B). It is well known that a significant proportion (88%) of retained surgical items occur in the setting of correct sponge and instrument counts (27). Hence, the importance of looking for retained surgical material in a postoperative setting—irrespective of the provided clinical history—was emphasized to avoid framing effect.
Case 3: Missed Follow-up Due to Communication Error
Case Description
A 72-year-old man underwent an annual aortic surveillance study. The report described a new 13 × 10-mm lobulated nodule in the right lower lobe (Fig 4A). Follow-up of the nodule was advised, and a table delineating the Fleischner Society follow-up guidelines was included. Chest CT 1 year later showed significant enlargement of the nodule (Fig 4B).
Error and Contributing Factors
This is an example of a communication error. No follow-up imaging was performed until the next aortic surveillance study. Although the nodule was detected and interpreted accurately, direct one-on-one communication between the radiologist and the referring provider was not documented. Additionally, the referring provider (a vascular surgeon in this case, who may not have been familiar with nodule management) was directed to a list of guidelines embedded in the report, rather than a specific action. The nodule turned out to be a lung cancer.
Outcome and Practical Tips
This case highlights the importance of closed-loop communication and providing a specific action plan in the radiology report. Communication errors are not uncommon and can occur owing to a breach at various levels: from the radiologist to the ordering physician, and from the clinician to the patient. In some instances, the ordering physician may not be the treating physician, and the incidental finding may be outside the scope of their practice, which can further complicate issues. This could then require the finding to be communicated by the ordering physician to the primary care physician or another specialist, depending on the specific case.
Also, from the radiologist's perspective it is important not only to communicate this finding but also to document it clearly in the patient's records. The ACR has developed practice guidelines for results communication to provide guidance for radiologists and clinicians (28).
This case led to identification of opportunities for system-level changes at our institution, which included developing an electronic way of flagging reports containing important incidental findings and implementing a nodule navigator system. A nodule navigator or coordinator can help facilitate interaction and communication with health care staff and providers and ensure that appropriate follow-up occurs for incidentally noted findings.
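As a rough sketch of how such electronic flagging might work, the Python example below scans free-text report content for phrases suggesting an actionable incidental finding and routes matches to a navigator worklist. The patterns and function name are hypothetical; a production system would be tuned to local report templates or, better, use structured report fields.

```python
import re

# Hypothetical phrases suggesting an actionable incidental finding.
FLAG_PATTERNS = [
    r"\bfollow[- ]?up\b.*\bnodule\b",
    r"\bFleischner\b",
    r"\bincidental(ly)?\b.*\b(mass|nodule|lesion)\b",
]

def needs_navigator_review(report_text: str) -> bool:
    """Flag a report for the nodule navigator's worklist if any
    pattern matches (case-insensitive)."""
    return any(re.search(p, report_text, flags=re.IGNORECASE)
               for p in FLAG_PATTERNS)

report = ("New 13 x 10 mm lobulated nodule in the right lower lobe. "
          "Follow-up per Fleischner Society guidelines is advised.")
if needs_navigator_review(report):
    print("Report flagged: route to nodule navigator worklist.")
```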
Case 4: Missed Hilar Mass—“Watching the Grass Grow”
Case Description
A 32-year-old man presented with cough and chest discomfort. Three consecutive chest radiographs, obtained over a period of 1 year and 3 months, were interpreted as normal by different radiologists. When viewed side by side, the radiographs showed a growing right perihilar mass that was missed at each interpretation (Fig 5A–5C).
Error and Contributing Factors
An alliterative error is demonstrated in this case. This is also an example of what we call a "watching the grass grow" error: small changes from one study to the next can be difficult to notice. The radiologists likely compared the study being interpreted only with the most recent prior study. Had they compared it with a more remote study, the change would have been obvious.
Outcome and Practical Tips
We presented this case at our PLC to raise awareness of this type of error. Reviewing all prior related studies during the interpretation process, comparison with the oldest study (not just the immediately previous study), and reviewing prior radiologists’ reports after individually interpreting the study (to avoid undue influence) are essential to avoid these errors.
Case 5: Missed Cardiac Metastasis at Chest CT
Case Description
A 74-year-old man with a history of renal cell carcinoma underwent chest and abdominal CT. The chest CT report described a left ventricular thrombus. A rim-enhancing ventricular septal lesion (later shown to be a metastasis at MRI) was missed at original chest CT interpretation (Fig 6A). Concurrent abdominal CT showed the septal lesion clearly (Fig 6B).
Error and Contributing Factors
This was a perceptual error due to various factors. It was difficult to perceive the enhancing metastasis adjacent to the blood pool within the ventricles on the chest CT portion of the study, although the finding was readily appreciable at abdominal CT given the phase of enhancement. Another contributing bias here was satisfaction of search. The visual search pattern was likely discontinued after identification of the left ventricular thrombus.
Outcome and Practical Tips
This case was used at our conference to highlight the importance of using a systematic or checklist approach to avoid satisfaction of search. This case was also an example of how some lesions in the chest (especially near the lower neck or upper abdomen) can sometimes be seen more clearly at concurrently performed CT of these regions (with a different enhancement phase). It is conceivable that such lesions could be missed by specialty-trained radiologists, who are more focused on their region of interest. Hence, it is helpful for cardiothoracic radiologists to pay attention to the limited sections of the chest included in the abdominal or neck portions of such studies.
Case 6: Missed Lung Cancer at Cardiac MRI
Case Description
A 72-year-old man with a history of ventricular arrhythmia underwent cardiac MRI to assess for myocardial scarring. The radiology report described the cardiac findings; however, a left lower lobe mass was missed on the localizer images (Fig 7A, 7B). The patient underwent coronary angiography weeks later, at which time the mass was demonstrated (Fig 7C).
Error and Contributing Factors
This is another example of perceptual error, probably due to failure to look at the scout images (scout neglect bias). The clinical question in this case was to assess for myocardial scarring, which also led to framing bias.
Outcome and Practical Tips
This case was used to remind radiologists to avoid framing bias and to look at scout images in different modalities including MRI (in which the number of images and sequences can make the task overwhelming). A suggestion was also made to incorporate a separate section for scout images in report templates and to change the search pattern so that incidental findings are evaluated first on localizer images before proceeding to answer the cardiac question.
Case 7: Learning to Incorporate Artificial Intelligence Software in Daily Workflow
Case Description
A 76-year-old woman with a history of colorectal cancer underwent several CT studies for disease surveillance. In one of the studies (Fig 8A), a 5-mm nodule was appropriately described in the report and also picked up by the artificial intelligence (AI) software. At a follow-up study 7 months later (Fig 8B), the nodule was larger but was missed, although the nodule was detected by the AI software at that time. In a third consecutive study 1 year later (Fig 8C), the growing nodule was missed by both the reading radiologist and the AI software. Finally, the growing nodule was detected (by the radiologist and the AI software) after another 9 months (Fig 8D), when it was clearly larger (1.9 × 1.6 cm).
Error and Contributing Factors
With evolving technology, it is important for radiologists to keep abreast of technological advancements and incorporate them into their workflow. There was an opportunity to identify the lesion, particularly at the first follow-up study: although the nodule was larger, it was not mentioned in the report, despite detection by the AI software and description in the preceding radiology report. A challenge was probably the medial location of the growing nodule, which can be a blind spot.
At the second follow-up study (Fig 8C), the nodule was missed by both the radiologist and the AI software, despite being larger still. Again, the medial location of the nodule can make it difficult to perceive. Additional challenges were the lack of mention in the immediately previous CT report and the lack of detection by the AI software.
Outcome and Practical Tips
This case demonstrates the importance of reviewing all prior imaging studies and is an example of how relying on the AI results (at the second follow-up study) or ignoring them (at the first follow-up study) can both lead to a missed significant finding. While AI can generate several false positives, it is important to evaluate all detected nodules before dismissing them. Opportunities for improvement in this case were to pay attention to prior reports (which described the nodule and might have prompted a second look) and to the AI results, which detected the nodule at the first follow-up study.
As a practice improvement measure, we instituted automatic insertion of AI output in our reporting templates, which would force the radiologist to account for the nodules picked up by the AI software. This automatically inserted output can be edited by the radiologist. Also, looking at the prior report before finalizing the interpretation can direct attention to important findings and minimize perceptual errors. However, we suggest looking at the prior report later in the interpretive process to avoid alliterative errors.
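A minimal sketch of what such automatic insertion could look like, assuming a simple hypothetical data model for the AI output (the field names and the "***" dictation placeholder are our own illustrative conventions, not those of any specific vendor):

```python
from dataclasses import dataclass

@dataclass
class AINodule:
    series: int
    image: int
    size_mm: float

def ai_findings_section(nodules):
    """Render AI detections as editable template lines; the '***'
    placeholder forces the radiologist to address each nodule."""
    if not nodules:
        return "AI-DETECTED NODULES: None."
    lines = ["AI-DETECTED NODULES (review and edit before signing):"]
    for n in nodules:
        lines.append(
            f"- {n.size_mm:.0f} mm nodule, series {n.series}, "
            f"image {n.image}: ***"
        )
    return "\n".join(lines)

print(ai_findings_section([AINodule(series=3, image=145, size_mm=7.0)]))
```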
Case 8: Misinterpreted Dilated Basivertebral Veins as Sclerotic Metastatic Disease
Case Description
A 69-year-old woman with a history of lung cancer, left upper lobectomy, and radiation therapy underwent chest CT for surveillance. The radiology report described several new patchy sclerotic foci in T1–T5, concerning for sclerotic metastases (Fig 9).
Error and Contributing Factors
This is an example of an interpretive error, probably due to inadequate knowledge of this entity. The "sclerotic" lesions (Fig 9A–9C) represent dilated vertebral venous lakes, which were opacified owing to chronic stenosis of the left brachiocephalic vein. These areas of enhancement are seen only when contrast material is injected from the same side as the stenotic vein.
Outcome and Practical Tips
This case was presented at the conference as a learning opportunity, and a search of the literature was performed, revealing similar cases of these "vanishing bone metastases" (29). This was also an opportunity to discuss other pitfalls that can occur with superior vena cava (SVC) syndrome, such as hepatic pseudolesions. These cases provide an open forum for discussion, allowing exchange of ideas and allowing colleagues to learn from each other in a nonjudgmental environment. With increasing specialization in specific areas, these forums are an important source of continuing education.
Case 9: Coronary Artery Fistula Diagnosed from a Suspicious Radiographic Finding—Great Call
Case Description
A 42-year-old woman presented with cough and chest pain and underwent chest radiography. The interpreting radiologist noted an opacity in the right infrahilar region overlapping the right cardiac border (Fig 10A); to confirm its vascular nature, the radiologist compared the radiograph with a prior abdominal CT study. The opacity was found to correspond to a pulmonary vein, a normal finding. However, during the review, the radiologist also noted prominent vascular enhancement in the left atrioventricular groove (Fig 10B) at the nongated study. This prompted consideration of a coronary artery fistula, and a dedicated coronary CT study was recommended, which allowed confirmation of the diagnosis (Fig 10C–10E).
Outcome and Practical Tips
Incorporating great call cases not only boosts the morale of the whole team but also allows opportunities for colleagues to learn from each other and incorporate best practices. In this case, avoiding framing bias and comparing the radiographic study with all available prior imaging studies (even nondedicated studies such as nongated abdominal CT in this case) helped the reader diagnose an unsuspected coronary artery fistula.
Case 10: Liver Lesion Detected and Characterized at Cardiac MRI—Great Call
Case Description
A 52-year-old woman with a history of lung adenocarcinoma underwent PET/CT for cancer surveillance; the report described focal fluorodeoxyglucose uptake in the region of the right atrioventricular groove of the heart, concerning for metastasis (Fig 11A, 11B). Cardiac MRI was performed to further evaluate for a possible cardiac mass. Although no cardiac mass was found (Fig 11C), the cardiothoracic radiologist noticed a left hepatic lesion while monitoring the cardiac MRI study. Specific sequences were performed, and the hepatic mass was characterized as metastatic disease (Fig 11D, 11E). At review of the PET/CT study, the apparent uptake at the atrioventricular groove was found to be secondary to misregistration of the previously undetected liver lesion.
Outcome and Practical Tips
This case shows the importance of awareness of such artifacts at PET/CT and highlights the importance of monitoring and modifying examination protocols as needed and keeping an open mind to avoid framing bias. The case was presented and discussed at the PLC to highlight its educational value.
Conclusion
Errors are an inherent part of the practice of medicine, including radiology. Transitioning from peer review to peer learning is desirable, as it provides many learning advantages. Peer learning focuses on collective improvement of performance rather than on logging individual errors. The conference moderator can play an essential role in fostering the environment of trust required for individual and group learning.
The shift from peer review to peer learning requires considerable time, effort, and resources but will be rewarded by improvement in radiologist performance and ultimately patient outcomes. Our division-level PLC opened up opportunities to learn and evolve as a team, encouraged a culture of trust between team members, and helped implement practice improvement initiatives.
Recipient of a Certificate of Merit award for an education exhibit at the 2020 RSNA Annual Meeting.
For this journal-based SA-CME activity, the author P.P.A. has provided disclosures (see end of article); all other authors, the editor, and the reviewers have disclosed no relevant relationships.
References
1. The frequency of diagnostic errors in outpatient care: estimations from three large observational studies involving US adult populations. BMJ Qual Saf 2014;23(9):727–731.
2. RADPEER scoring white paper. J Am Coll Radiol 2009;6(1):21–25.
3. Transitioning from Peer Review to Peer Learning: Report of the 2020 Peer Learning Summit. J Am Coll Radiol 2020;17(11):1499–1508.
4. Peer Feedback, Learning, and Improvement: Answering the Call of the Institute of Medicine Report on Diagnostic Error. Radiology 2017;283(1):231–241.
5. Radiology peer review as an opportunity to reduce errors and improve patient care. J Am Coll Radiol 2004;1(12):984–987.
6. RADPEER peer review: relevance, use, concerns, challenges, and direction forward. J Am Coll Radiol 2014;11(9):899–904.
7. ACR RADPEER Committee White Paper with 2016 Updates: Revised Scoring System, New Classifications, Self-Review, and Subspecialized Reports. J Am Coll Radiol 2017;14(8):1080–1086.
8. Getting the most out of RADPEER™. J Am Coll Radiol 2011;8(8):543–548.
9. Transitioning from peer review to peer learning for abdominal radiologists. Abdom Radiol (NY) 2016;41(3):416–428.
10. Survey of faculty perceptions regarding a peer review system. J Am Coll Radiol 2014;11(4):397–401.
11. Rethinking peer review: what aviation can teach radiology about performance improvement. Radiology 2011;259(3):626–632.
12. Quality assurance in radiology: peer review and peer feedback. Clin Radiol 2015;70(11):1158–1164.
13. The effect of multidisciplinary care teams on intensive care unit mortality. Arch Intern Med 2010;170(4):369–376.
14. Quality of care management decisions by multidisciplinary cancer teams: a systematic review. Ann Surg Oncol 2011;18(8):2116–2125.
15. Do team processes really have an effect on clinical performance? A systematic literature review. Br J Anaesth 2013;110(4):529–544.
16. Consensus-oriented group peer review: a new process to review radiologist work output. J Am Coll Radiol 2014;11(2):131–138.
17. Radiologist Peer Review by Group Consensus. J Am Coll Radiol 2016;13(6):656–662.
18. Focused Professional Performance Evaluation of a Radiologist: a Centers for Medicare and Medicaid Services and Joint Commission Requirement. Curr Probl Diagn Radiol 2016;45(2):87–93.
19. Peer Review: Lessons Learned in a Pediatric Radiology Department. Curr Probl Diagn Radiol 2016;45(2):139–148.
20. The complementary nature of peer review and quality assurance data collection. Radiology 2015;274(1):221–229.
21. Practical considerations when implementing peer learning conferences. Pediatr Radiol 2019;49(4):526–530.
22. Peer Learning Through Multi-Institutional Case Conferences: Abdominal and Cardiothoracic Radiology Experience. Acad Radiol 2021;28(2):255–260.
23. Practical Suggestions on How to Move from Peer Review to Peer Learning. AJR Am J Roentgenol 2018;210(3):578–582.
24. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction. RadioGraphics 2015;35(6):1668–1676.
25. Bias in Radiology: The How and Why of Misses and Misinterpretations. RadioGraphics 2018;38(1):236–247.
26. Fundamentals of Diagnostic Error in Imaging. RadioGraphics 2018;38(6):1845–1865.
27. Retained Surgical Items at Chest Imaging. RadioGraphics 2021;41(2):E10–E11.
28. ACR practice parameter for communication of diagnostic imaging findings. Reston, Va: American College of Radiology, 2020.
29. Vanishing bone metastases: a pitfall in the interpretation of contrast enhanced CT in patients with superior vena cava obstruction. Br J Radiol 2011;84(1005):e176–e178.
Article History
Received: Apr 11 2021
Revision requested: June 10 2021
Revision received: July 4 2021
Accepted: July 8 2021
Published online: Feb 11 2022
Published in print: Mar 2022