Strategies for Establishing a Comprehensive Quality and Performance Improvement Program in a Radiology Department
Abstract
To improve the safety and quality of the care that radiologists provide, and to allow radiologists and radiology personnel to remain competitive in an increasingly complex environment, it is essential that all imaging departments establish and maintain managed, comprehensive, and effective performance improvement programs. Although the structure and focus of these programs can vary, a number of common components exist, many of which are now widely mandated by organizations that regulate the field of radiology. Basic components include patient safety, process improvement, customer service, professional staff assessment, and education, each of which requires strategies for implementing continuous programs to monitor performance, analyzing and depicting data, implementing change, and meeting regulatory requirements. All of these components are part of a comprehensive quality management system in a large academic radiology department. For smaller departments or practices, the gradual introduction of one or more of these components is useful in ensuring the safety and quality of their services.
© RSNA, 2009
LEARNING OBJECTIVES
After reading this article and taking the test, the reader will be able to:
• List the principles of quality and performance improvement and key elements in ensuring the success of an improvement program.
• Discuss the specifics of applying quality management to radiology, including compliance with the various regulatory groups.
• Describe the key performance indicators and their applications in a radiology department.
Introduction
To ensure safety, accuracy, and high-quality care and to allow radiologists and radiology personnel to maintain a competitive edge in a complex environment, the introduction of comprehensive continuous quality management processes is essential. Many of these processes are now widely mandated by organizations that regulate our profession (Table 1). Whether it is the Joint Commission, the ACR, the ACGME, the ABR, or insurance payers, more and more quality-related demands are being made on our daily practice.
In this article, we define various terms that are commonly used in the field of quality management, review the principles of performance improvement, and describe essential “ingredients” of a quality and safety program. In addition, we suggest components that we believe should be part of a comprehensive program, including institutional leadership, process improvement, patient safety, staff assessment, customer and consumer relations, and education. For each component, we describe strategies for implementing continuous programs to monitor performance, analyzing and depicting data, implementing change, and meeting regulatory requirements. We also briefly discuss methods for improving organizational performance.
Defining the Terms
The field of quality management, including performance improvement and patient safety, is characterized by a host of confusing and overlapping terminologies (Table 2). In essence, a variety of processes can be introduced to monitor quality (quality control) and safety (risk management) under a departmental umbrella (quality assurance) that serves as a cog in the larger institutional culture of safety (total quality management). The ultimate goal is to continuously improve the effectiveness of what we do (performance improvement).
Principles of Performance Improvement
Performance improvement programs are more likely to be effective when they adhere to recognized principles. Appropriately selected and collected data should be actively, continuously, effectively, and visibly managed by a team of qualified and enthusiastic personnel. Metrics that are monitored should be mission driven and benchmarked against appropriate standards. Appropriate tools should be used for data analysis and trending. Proactive systems should exist to minimize risk, along with a mechanism for managing severe adverse and sentinel events. Such events should be managed promptly and effectively, thereby ensuring that action items are identified from a root cause analysis and that any implemented changes are continuously monitored to determine efficacy. All implemented changes should be monitored to determine progress toward relevant evidence-based benchmarks. Ultimately, all members of a department are responsible for improving quality and safety; however, the chairman and the quality director are responsible for the practical implementation of the performance improvement program. Participation must be easy rather than burdensome, involvement should be rewarded, and timely feedback should be provided for all reported events. Performance improvement in medicine differs from analyses made in commercial companies, where increased revenue is the major goal; in medicine, the benefit to the individual patient is the main goal, with less emphasis placed on economic implications.
Essential Ingredients of a Quality and Safety Program
The structure and components of a departmental performance improvement program vary depending on the size of the department and hospital, the nature of the practice and the services offered, and the institutional mission and culture of quality and safety. Examples of ingredients that we consider essential to the implementation of a successful quality management enterprise in a radiology department are shown in Table 3. Other important elements include a focus on the customer and the systems processes, an understanding of process variation, a willingness to experiment with implementation of ideas, and teamwork (6).
Suggested Components of a Comprehensive Quality Management Program
Institutional Leadership
Support and Commitments.—
Tangible and visible support from the hospital administration sends a strong message endorsing the priorities of patient quality and safety in an institution. This support may be financial, allowing the hiring of a departmental quality coordinator, or administrative, such as establishing and facilitating interdepartmental quality forums or adverse event reporting systems. Active participation by hospital leaders in environmental and safety “walkabouts” serves a dual role of demonstrating both a commitment to performance improvement and a willingness to understand relevant issues at ground level. When quality metrics are mandated by regulatory organizations or the hospital (eg, compliance with the Joint Commission’s National Patient Safety Goals [NPSGs] or a credential-based peer review process), monitoring of compliance can be facilitated by the hospital, since these demands are becoming more frequent and onerous. It is equally important that the institution support a program that monitors quality and performance in all departments, which allows fair review of adverse events, including interdepartmental review of complex cases, and holds all departments to the same standards. Most hospitals now have departments of healthcare quality and risk management divisions that provide such support.
Just Culture of Quality and Safety.—
David Marx (7) introduced the phrase “just culture” when he outlined principles for achieving an environment in which staff members feel comfortable disclosing errors, including their own. Whereas many traditional healthcare cultures hold individuals accountable for all errors, a just culture recognizes that individual practitioners are not accountable for system failings over which they have no control. A just culture recognizes that competent professionals make mistakes but does not tolerate disregard for risks to patients or misconduct. Such a culture should minimize fear among participants and should identify and introduce proactive rather than reactive monitoring processes.
Quality Management Team.—
A well-functioning, cohesive team is essential for the successful implementation and functioning of a quality and safety program. Members should not only be familiar with the processes and skilled with the tools necessary for managing the program, but should also be willing to devote time and energy to making the program a success. The team must be fully integrated into a hospital-wide system for facilitating exchange of interdepartmental quality assurance data and consideration of issues that bridge departments or clinical services. Members of the team should be familiar with processes and criteria for investigating serious adverse events. We have found it helpful to align the goals and mission of the hospital with those of the team, to enlist the acknowledgement and support of department and hospital leaders, and to have a clearly defined chain of accountability.
For a radiology quality management team, all stakeholders should be represented. Team members may include representatives from nursing, information technology, technical operations, marketing, and finance, as well as clinical representatives. We include a resident on our team who participates in the quality assurance elective. Depending on the process or the purpose of the team, it is likely that other stakeholders, providers, consumers, and even members of the hospital’s risk management and healthcare quality departments will participate as team members when necessary.
Process for Engaging Physicians.—
Radiologists may respond to mandated participation in quality-related processes, especially if credentialing depends on it. However, to achieve enthusiastic “buy-in” by radiologists and involve them as partners rather than customers, it is helpful to be familiar with processes that facilitate, encourage, and sustain participation (Fig 1). A process described by Reinertsen et al (8) illustrates how improved patient outcome relates to improved efficiency and reduced time wastage. In their system, messages and messengers should be carefully chosen, physician involvement should be visible, trust should be built within each quality initiative, and communication should be candid and open (8). However challenging such a process may be, an essential goal should be to demonstrate the positive benefits of participation.
Figure 1. Chart illustrates a process for engaging radiologists in quality and safety programs. Initially developed by members of the Institute of Healthcare Improvement (8), the chart shows strategies and the major areas that can be addressed when attempting to achieve buy-in by colleagues and other members of a department.
Process Improvement
Some departments, institutions, and regulatory organizations mandate that specific technical and clinical metrics be continuously monitored, analyzed, and reported. Such actions allow existing processes to be improved through increased output, efficiency, or effectiveness. A variety of analytic tools are used to achieve these goals, and the results are described as performance indicators (Figs 2–4). When analyzed against stated goals, the performance indicators help evaluate an organization’s progress toward those goals (Fig 3) (5,9). The main indicator categories used in many radiology departments include productivity, finance, patient safety, access to services, and customer satisfaction (Fig 2) (10,11). Within each category, different indices exist; for example, indicators of patient safety may include patient falls, contrast material reactions or extravasations, procedural complications, and radiation exposure. Such systems work best if metrics are clearly defined, easily measured, actively managed, and continuously monitored. In the following sections, we address mechanisms for collecting, selecting, and depicting relevant data.
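As a minimal illustration of analyzing indicators against stated goals, the sketch below tallies audit results for two patient safety indicators and flags those that fall short of target. The indicator names, counts, and targets are invented for illustration, not drawn from the data in this article.

```python
# Hypothetical sketch: summarize key performance indicators against goals.
# Indicator names, audit counts, and targets are illustrative only.

def compliance_report(indicators):
    """Return each indicator's compliance rate and whether it met its goal."""
    report = {}
    for name, (met, total, goal) in indicators.items():
        rate = met / total
        report[name] = {"rate": round(rate, 3), "goal": goal, "met_goal": rate >= goal}
    return report

patient_safety = {
    # indicator: (compliant cases, audited cases, goal)
    "time_out_documented": (96, 100, 0.95),
    "patient_id_verified": (88, 100, 0.95),
}

for name, row in compliance_report(patient_safety).items():
    status = "OK" if row["met_goal"] else "REVIEW"
    print(f"{name}: {row['rate']:.1%} (goal {row['goal']:.0%}) {status}")
```

In practice, the audited counts would come from chart reviews or an adverse event reporting system, and the goals from departmental or regulatory benchmarks.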
Figure 2. Key performance indicator domains. (a) Chart illustrates how indicators can be selected from various domains (y-axis) depending on an institution’s mission and clinical services. X-axis shows the number of indicators selected from each domain. (b) Chart illustrates percentage compliance (x-axis) with each of five distinct indicators (y-axis) selected from the domain of patient safety. These indicators are aligned with the NPSGs established by the Joint Commission and thus facilitate monitoring of compliance as required by the Commission. Comm = communication, Med recon = medicine reconciliation.
Figure 3. Performance indicators used in a radiology department. “Spider” (“radar”) chart illustrates the major categories of indicators that can be used to monitor technical and clinical performance in a radiology department. All data are plotted against goals (in this example, 100% of defined metrics) and are shown for the prior quarter (yellow) and the current quarter (purple) relative to these goals. Areas in which performance has lagged from previous levels are shown in yellow. Access CTC = access to screening computed tomographic (CT) colonography appointments, Access mammo = access to screening mammography appointments, ACLS = advanced cardiac life support, comm = communication, Extravs CT = extravasations seen at CT, No shows mammo = number of patients who do not show up for scheduled mammography appointments, Pt = patient, QA = quality assurance, RFA = radiofrequency ablation, RTAT = report turnaround time, rxns = reactions, Unread CT = CT scans not yet read by the radiologist.
Figure 4. Graph illustrates the performance indicators for an abdominal imaging section. Categories include the weekly number of residents and staff assigned to the section; change in the volume of fluoroscopy (Fluoro), ultrasonography (US), magnetic resonance imaging (MRI), and CT cases for the previous week; number of patient and physician complaints; access (in days) for CT colonographic studies; mean report turnaround time (RTAT) versus target for all section staff; quality assurance (QA) submissions; and percentage variance from the peer review target. Numbers along the x-axis represent the actual number of events recorded. Red bars indicate areas in which departmental targets were not reached; blue bars indicate areas in which targets were exceeded. Key performance indicators can be selected to meet the administrative and clinical demands of any section.



Data Collection.—
Patient charts can be randomly reviewed for hospital- and patient-level quality-of-care metrics. The most common audit is performed to monitor compliance with the Joint Commission’s NPSGs (Fig 5). It is important to define specific measures before evaluating charts and to be sensitive to the fact that hospital-level quality of care differs between institutions (12).
Figure 5. Graph and chart illustrate the monitoring of compliance with the Joint Commission’s NPSGs and several radiology-specific metrics. When note is made that an adverse event (eg, bleeding requiring hospital admission) occurred following an imaging-guided procedure (eg, ultrasonography-guided liver biopsy), the review process is used to link this information to an adverse event reporting system to ensure that the event has been or is now being recorded. The data are then further linked to the departmental procedural complication key performance indicator. Abdo = abdominal imaging, Angio = angiography, IR = interventional radiology.
Adverse event reporting systems should permit anonymous, nonpunitive reporting of adverse events, including near misses (13). In a radiology department, such events may be related to patient safety, technical quality problems, procedural complications, and diagnostic or interpretive errors. To improve participation, it is helpful to define specific criteria for reporting cases, provide feedback to the submitting staff member, and publicize the advantages and benefits of such a system. The system may reside within a department of radiology or be hospital wide, and it must be actively and regularly managed. This management should include fair and effective review of all adverse events, monitoring to identify trends, and appropriate follow-up of all action items resulting from submitted cases. In our experience, an additional useful component is the facilitation of interdepartmental reporting and exchange of cases. This process should preserve confidentiality, and all data should be afforded protection similar to that given to other peer review data.
Data Selection.—
The use of quality management indicators, particularly customer satisfaction surveys, is not a fully standardized and established process in many academic radiology departments in the United States (14). In radiology, the identity of the customer is not clear. Is it the patient, the referring physician, or the in-house referring department? All customers should be assessed for their satisfaction levels. Many hospitals are trying to align their stated missions with the dimensions of care highlighted in the Institute of Medicine report Crossing the Quality Chasm (15), which states that patient care should be safe, effective, efficient, patient centered, and timely, categories that are readily applicable to the field of radiology (Table 4). The nature of the data being collected should be selected to meet regulatory requirements. For example, to be compliant with the Joint Commission’s NPSGs, data describing compliance with these goals should be collected and analyzed.
Fundamental principles have been described for developing subjective and objective metrics to improve the quality of data (16). Subjective data may reflect the personal perceptions and experiences of our customers. Objective data may reflect regulatory compliance requirements, benchmarking processes, or alignment of measurements with institutional missions and goals. Data should be accessible, believable, relevant, consistent, free of error, easily interpretable, and objective (16).
Data Depiction: Scorecards and Dashboards.—
Electronic “dashboards” are usually designed to autopopulate selected clinical information through continuous interaction with a hospital information system network. In this way, members of the quality team are able to continuously monitor selected “vital signs” of a radiology department (Fig 6). The material depicted on these dashboards depends entirely on the level of sophistication, the capabilities of the software and its designer (Fig 7), and the priorities of the department (Figs 8, 9). Visual depiction of a department’s progress toward stated goals is a useful tool for providing feedback to and communicating with stakeholders. Many such tools exist, from data-depicting “scorecards” to radar or spider charts (Fig 3). What all these tools have in common is the depiction of the current status compared with a benchmark, prior data, or established goals. Measurement of service levels and access to our services is commonly performed (Fig 10). It must be recognized that a data “explosion” involving multiple indicators may be an obstacle to performance improvement. Only the most pertinent mission-driven indicators should be selected and used by the quality management team.
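As one concrete example of a dashboard “vital sign,” the sketch below computes report turnaround time from examination-completion and report-approval timestamps and flags the result against 20-hour target and 40-hour action limits, as in Figure 9. The timestamp format and the limits shown are assumptions for illustration, not an actual RIS interface.

```python
# Hypothetical sketch: report turnaround time (RTAT) from RIS-style timestamps.
# The timestamp format and the target/action limits are illustrative.
from datetime import datetime

TARGET_H, ACTION_H = 20, 40  # hours, per the control limits in Figure 9

def rtat_hours(completed, approved, fmt="%Y-%m-%d %H:%M"):
    """Hours from examination completion to report approval."""
    delta = datetime.strptime(approved, fmt) - datetime.strptime(completed, fmt)
    return delta.total_seconds() / 3600

def flag(hours):
    """Classify a turnaround time against the target and action limits."""
    if hours <= TARGET_H:
        return "on target"
    return "above target" if hours <= ACTION_H else "action required"

h = rtat_hours("2009-03-02 08:00", "2009-03-03 10:00")
print(f"RTAT = {h:.1f} h ({flag(h)})")  # RTAT = 26.0 h (above target)
```

A production dashboard would pull these timestamps continuously from the radiology information system and aggregate them per radiologist or section, as in Figure 9.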
Figure 6. Screen shot shows examples of radiology’s vital signs. The data can be actively monitored and managed and typically relate to case management and customer service. ED = emergency department, PACS = picture archiving and communication system, US = ultrasonography. Figure 7. Screen shot shows a dashboard that indicates the number of unread, nondictated, and unapproved cases per clinical section on a daily basis. “Drilling down” into each box reveals the case type and the responsible radiologist. Figure 8. Clinical quality indicators report. The report shows compliance with providing provisional emergency department (ED) interpretations and represents one of many options for depicting quality indicators. The indicator is defined, a rationale for selecting the indicator is given, a method for data collection and exclusion is provided, and a benchmark is included. LOS = length of stay, RIS = radiology information system. Figure 9. Graph illustrates the overall report turnaround time (in hours) for different faculty (y-axis) (names removed for peer review purposes) and shows the amount of time from completion of examination to dictation (blue), from dictation to transcription (red), and from transcription to approval (yellow). Vertical lines at 20 (target) and 40 (action) hours indicate the control limits. Figure 10. Service levels and access to services. As a central component of customer relations, levels of response to customers and their ability to gain access to our services should be continuously monitored. We continuously monitor time to next available appointment (TTNA) for certain studies—here, CT colonography—as well as report turnaround time, equipment capacities, conversations between schedulers and customers, and times to populate wet-read dashboards with provisional interpretations.




Patient Safety
Compliance with NPSGs.—
In keeping with the Joint Commission’s requirements, institutions are expected to develop systems for monitoring and ensuring compliance with the NPSGs. Each of these goals is applicable to radiologists. Specifically, staff working in a radiology department are expected to verify and document patient identity; a preprocedure pause or “time-out” must occur prior to every interventional procedure to verify the side, site, and nature of the procedure; and this time-out must be documented in the record. The new goals for 2009 require that we define and communicate a system for patients and their families to report concerns about safety, and encourage them to do so.
Systems for Communicating Results.—
Figure 11. Systems for communicating abnormal results. The common features of systems used to communicate abnormal results—whether urgent, critical, or even nonurgent but requiring follow-up—include a clear definition of what results to communicate, the capacity to identify and contact the appropriate responsible physician in a timely manner, documentation of the abnormal result, and a process for ensuring that appropriate follow-up takes place. HIS = hospital information system, RIS = radiology information system.
Environmental and Safety Walkabouts.—
A formal but unannounced walkabout is a very effective way of evaluating safety processes and staff awareness of these processes in a clinical department. Not only can useful data be collected (Fig 12) and procedures put in place to improve safety mechanisms, but the presence of an evaluating team with representation from senior hospital executives sends a message to all staff that safety is taken seriously. This message can be further reinforced if visible steps are taken to improve staff and patient safety in the workplace. From a regulatory perspective, the Joint Commission expects healthcare organizations to conduct such patient safety walkabouts. These walkabouts can focus on the environment of care or on patient care and safety, and can also serve as safety and security inspections to ensure that (a) OSHA (Occupational Safety and Health Administration) standards are upheld, (b) material safety data sheets are readily available, (c) all guests are registered and badged, (d) staff are aware of the locations and function of surveillance systems and alarms, and (e) staff are familiar with emergency codes.
Figure 12. Environmental and safety walkabout. In this example, two residents were found not to be wearing their radiation monitoring badges. These data are linked to a hospital program that monitors staff compliance with badge use. A threshold of 90% (red dashed line) has been set for compliance. IR = interventional radiology, PACS = picture archiving and communication system.
Risk Management Systems for Anticipating Problems.—
Regular analysis of reported and collected data should also be used to seek emerging trends. “Run charts” depict changes in a measured process over time and are used to identify patterns, including trends or shifts from the average. When lower and upper control limit lines are added, these so-called control charts can be used to predict when a process is trending out of control and requires evaluation and remediation. Techniques have also been developed to identify and manage problems before they occur. Arising from the proactive rather than the reactive school, failure mode and effects analysis applies statistical processes to an organization to identify trends or potential problems and to intervene before these problems become overt.
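The run-chart-to-control-chart step described above can be sketched as follows: compute the process mean and ±3σ control limits from a baseline period, then flag any subsequent points outside those limits. The weekly event counts are invented, and a real program would choose its limits and out-of-control rules per local policy.

```python
# Illustrative control chart arithmetic; the event counts are invented.
from statistics import mean, stdev

def control_limits(data, k=3):
    """Lower control limit, center line, and upper control limit (mean ± k·σ)."""
    m, s = mean(data), stdev(data)
    return m - k * s, m, m + k * s

def out_of_control(data, lcl, ucl):
    """Indices of points falling outside the control limits."""
    return [i for i, x in enumerate(data) if not lcl <= x <= ucl]

# Weekly contrast extravasation counts from a hypothetical baseline period
baseline = [2, 3, 1, 2, 4, 3, 2, 3, 2, 1]
lcl, center, ucl = control_limits(baseline)

recent = [2, 3, 9, 2]
print(out_of_control(recent, lcl, ucl))  # the week with 9 events is flagged
```

The flagged week would then trigger the evaluation and remediation steps described above rather than waiting for an annual review.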
Professional Assessment of Staff
Regulatory and credentialing organizations are starting to require that hospitals establish systems for assessing staff performance. The radiology community has responded to these requirements by developing peer review processes (Fig 13) and by mandating that all radiologists participate in a practice quality improvement project to maintain certification (4).
Figure 13. Screen shot illustrates how data from a selected performance indicator (“Peer Review of Radiologist Performance”) are collected and depicted as a dashboard.
Peer Review Process.—
Participation in peer review processes is no longer an option for many radiologists. Peer review is one means of evaluating radiologists’ performance and offers an opportunity to reduce errors and improve patient care (18). Peers who are being reviewed need not be radiologists, but may also be technologists or other members of a department. For example, random radiographs can be evaluated (hard copy or on a picture archiving and communication system) for defined quality indicators as a metric of technologists’ performance. Recordings of telephone conversations can be reviewed to evaluate customer service and staff performance in a scheduling office. Patient satisfaction surveys can be amended to include feedback about all staff, including nurses, front office staff, transport facilitators, physician extenders, technologists, and all clinical staff.
For peer review of physician performance, the challenge is to convince radiologists that the task of double reading or reinterpreting studies is beneficial rather than a bureaucratic requirement. The process of peer review is mandated by many regulatory groups, including the ABR and the ACR. As a component of Part IV of the ABR Maintenance of Certification program, performance must be evaluated, and peer review systems allow such evaluation (4). Many hospitals and regulatory organizations now require that all physicians on active staff participate in peer review of clinical performance. Some departments have linked participation in peer review with incentive systems.
Peer review of radiologists exists in many forms, ranging from national programs managed by the ACR to simpler systems developed within departments or institutions. Most radiologists are aware of the ACR RadPeer program (19), whereby agreement with interpretation of prior studies is ranked on a scale of 1 to 4 (Fig 14). Many departments have established peer review systems that are either similar to the ACR system or in which radiologists blindly interpret a certain number of preselected or randomly selected cases. What all systems have in common are anonymous reporting, fair analysis, minimal effect on work flow, ease of participation, a nonpunitive approach, and integration into the department’s quality assurance program (19). Ideally, such systems can be integrated at the national level, allowing benchmarking of a radiologist’s performance. Although radiologists may still be reluctant to participate, the increasing regulatory requirements will ultimately result in mandatory participation.
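The scoring arithmetic behind such a system can be sketched as follows, using invented RadPeer-style scores. Here scores of 3 or 4 are treated as discordant for illustration; actual category definitions, thresholds, and follow-up actions should follow the ACR program and departmental policy.

```python
# Hypothetical aggregation of RadPeer-style peer review scores (1 = agree,
# higher scores = increasing discrepancy). All scores below are invented.
from collections import Counter

def discordance_rate(scores, discordant=(3, 4)):
    """Fraction of reviewed cases scored as discordant."""
    counts = Counter(scores)
    total = sum(counts.values())
    return sum(counts[s] for s in discordant) / total if total else 0.0

reviews = {  # radiologist -> peer review scores (illustrative)
    "rad_A": [1, 1, 2, 1, 1, 3, 1, 1, 1, 1],
    "rad_B": [1, 2, 1, 1, 1, 1, 1, 1, 1, 1],
}

for rad, scores in reviews.items():
    print(f"{rad}: {discordance_rate(scores):.1%} discordant over {len(scores)} cases")
```

Anonymized rates like these could then be compared against a departmental or national benchmark, in keeping with the nonpunitive, benchmarked approach described above.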
Figure 14. Schematic illustrates options that can be used to assess the performance of a radiologist (scorecard), including an anonymous process for comparing performance with that of peers; means of evaluating diagnostic or procedural performance (chart, morbidity and mortality reviews); and a variety of metrics defined by departments (good citizenship criteria), the hospital administration (general competencies and peer evaluations), and the ABR (maintenance of certification).
Diagnostic and Procedural Competencies.—
Although peer review systems tend to focus more on diagnostic and interpretive reviews, systems for measuring procedural competencies must be developed. Such systems do exist for trainees but have not yet been fully developed at the staff level.
Physician Scorecard.—
Regulatory and credentialing organizations and an increasing number of hospitals are introducing the concept of comprehensive physician scorecards (Fig 14), often aligned with the ACGME general competencies that we use for monitoring the performance of residents and fellows (20).
Stakeholders: Customer and Consumer Relations
Who Are the Customers?—
It is important to know precisely who your customers are and to understand their opinions of your services. There may well be multiple simultaneous customers, including patients (external) or one of several groups of internal customers (eg, referring physicians, payers, radiology department staff). One approach is to identify who your customers are, determine their needs and expectations, and then meet and continuously try to exceed these expectations (21).
Customer and Stakeholder Surveys.—
Surveys of patients and referring physicians must be carefully analyzed. Patients can provide invaluable feedback regarding our services and can also be involved in the peer review and quality-review processes and in the 360° evaluation process. Patients and both physician and nonphysician members of radiology departments are all partners and customers in the processes of improving quality. A process should be put in place to manage and improve relations with all customers. Surveys can be targeted toward different customers and can focus on different dimensions of care. It is important that all customers be considered and surveyed for feedback regarding perceptions of safety and quality in a radiology department. Alderson (22) lists four major categories to consider when evaluating customer satisfaction: (a) knowing the factors on which customers base their evaluations of the quality of service, (b) knowing how to identify your customers, (c) knowing how to measure your customers’ satisfaction levels, and (d) understanding the need to balance interpersonal and technologic skills in practice.
Mystery Shoppers.—
The concept of “mystery shoppers” has gained popularity; such persons can be used to monitor access into our systems and to directly evaluate scheduling, access to next available appointment, and human interactions. Furthermore, these shoppers are very valuable for providing feedback regarding cleanliness, waiting room chatter, and comments from real patients.
Anonymous Suggestion Box.—
Although reporting mechanisms are helpful for data collection and for identifying opportunities and challenges, there will always be some resistance to reporting systems that do not protect staff confidentiality. For this reason, the ready availability of anonymous suggestion boxes provides one means of raising issues that completely protects the anonymity of the reporting person. One downside is that personal accusations can be made that may not be valid. We suggest that staff be told that such a reporting mechanism is for technical quality and safety matters only, and that any personal accusations should be dealt with through hospital-defined channels or through the human resources department.
Educational Components
Although not typically included in a list of essential components, an educational program is an especially useful adjunct in academic radiology departments, where ACGME and ABR requirements must be taught and monitored. This program should encompass staff training and self-improvement. An array of educational products can be used to stimulate staff engagement and participation, improve management of collected data, and meet regulatory requirements (eg, the ACGME outcomes project for residents or the ABR-mandated practice quality improvement project for maintenance of certification).
Morbidity and Mortality or Peer Review Conference.—
Radiology Resident Quality Assurance Elective Programs.—
Quality assurance elective programs or rotations are now being offered (25). These can be designed to meet ACGME outcomes project requirements, to train residents to undertake their practice quality improvement projects, and to support the ACGME practice-based learning competency training.
Grand Rounds.—
Quality assurance grand rounds are held by some radiology departments as an educational means of engaging staff in the principles and processes of quality and safety. We have held monthly quality assurance grand rounds for the last 3 years, with participation from all radiology sections. Indeed, by elevating the meeting to a grand rounds format, a message is sent about how highly a department’s leadership values quality assurance (23). These rounds can also be designed to meet requirements for earning category 1 risk management credits, serve to provide an annual overview of the departmental quality management structure, and help orient new residents and faculty to the processes for case reporting and peer review.
Recognition and Celebration of Improvements.—
Methods for Improving Organizational Performance
A variety of problem-solving quality management methodologies are used for improving the performance of an organization or department. These methodologies, which can be integrated into an organization’s total quality management program, range from the basic model for improvement developed by Associates in Process Improvement (26) to more complex models such as Six Sigma and the balanced scorecard. Six Sigma is a management philosophy that sets ambitious objectives and forces accountability by minimizing mistakes and maximizing value (27). In radiology departments, where we constantly strive for zero technical defects, Six Sigma methodologies are being used with increasing frequency (28).
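To illustrate the arithmetic behind Six Sigma performance levels, the sketch below converts a hypothetical defect count into defects per million opportunities (DPMO) and an approximate sigma level, using the conventional 1.5-sigma long-term shift. The defect counts and study volumes are invented for illustration, not measurements from any actual department.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Approximate short-term sigma level from DPMO, using the
    conventional 1.5-sigma shift between short- and long-term performance."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Hypothetical example: 12 repeat examinations out of 4,000 studies,
# counting one opportunity for a technical defect per study.
d = dpmo(defects=12, units=4000, opportunities_per_unit=1)
print(f"DPMO: {d:.0f}, sigma level: {sigma_level(d):.2f}")
```

True Six Sigma performance corresponds to 3.4 DPMO; the hypothetical department above, at 3,000 DPMO, operates at roughly a 4.2-sigma level, which quantifies how far it remains from the zero-defect goal.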
First developed by Robert Kaplan and David Norton at Harvard Business School in 1990 (29), the balanced scorecard is a key management tool that provides a framework for translating an organization’s vision into strategies that incorporate all quantitative and abstract measures of true importance to that organization. By focusing not only on financial outcomes but also on human issues, the balanced scorecard helps provide a more comprehensive view of a business, which in turn helps organizations act in their own best long-term interests; this is particularly relevant to the health care field. This strategic management system helps managers focus on performance metrics while balancing financial objectives with customer and employee perspectives, as well as internal processes. Implementing a balanced scorecard typically involves four processes: (a) translating the vision into operational goals, (b) communicating the vision and linking it to individual performance, (c) business planning, and (d) providing feedback, learning, and adjusting the strategy accordingly.
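The four scorecard perspectives can be sketched as a simple data structure. The perspectives below follow Kaplan and Norton, but the metrics, targets, and current values are hypothetical examples for a radiology department, not recommendations.

```python
# Each metric maps to (target, current, higher_is_better).
# All metric names and numbers are illustrative assumptions.
scorecard = {
    "financial": {"cost per examination (USD)": (150, 172, False)},
    "customer": {"patient satisfaction (1-5)": (4.5, 4.2, True)},
    "internal process": {"report turnaround time (h)": (24, 30, False)},
    "learning and growth": {"staff completing QA training (%)": (90, 85, True)},
}

def off_target(card):
    """Return (perspective, metric) pairs whose current value misses its target."""
    misses = []
    for perspective, metrics in card.items():
        for name, (target, current, higher_is_better) in metrics.items():
            met = current >= target if higher_is_better else current <= target
            if not met:
                misses.append((perspective, name))
    return misses

for perspective, metric in off_target(scorecard):
    print(f"Needs attention: {perspective} -> {metric}")
```

Reviewing such a summary side by side, rather than financial figures alone, is the essence of the "balance" the scorecard provides: a cost overrun is weighed against customer, process, and staff-development metrics at the same time.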
Conclusions
We have described the components necessary for establishing and maintaining a comprehensive quality management system for a large academic radiology department. For smaller departments or practices that are seeking to introduce similar programs, the gradual introduction of one or more of these components is a useful first step toward a more comprehensive quality and safety program.
References

- 1 http://www.jointcommission.org/PatientSafety/NationalPatientSafetyGoals/. Accessed April 7, 2008.
- 2 http://www.acr.org/sSecondaryMainMenucategories/quality_safety/guidelines. Accessed April 7, 2008.
- 3 http://www.acgme.org/outcome. Accessed April 7, 2008.
- 4 Strife JL, Kun LE, Becker GJ, Dunnick NR, Bosma J, Hattery RR. American Board of Radiology perspective on Maintenance of Certification: part IV—practice quality improvement for diagnostic radiology. RadioGraphics 2007;27:769–774.
- 5 Erturk SM, Ondategui-Parra S, Ros PR. Quality management in radiology: historical aspects and basic definitions. J Am Coll Radiol 2005;2:985–991.
- 6 Applegate KE. Continuous quality improvement for radiologists. Acad Radiol 2004;11:155–161.
- 7 Marx D. Patient safety and the “just culture”: a primer for health care executives. New York, NY: Columbia University, 2001. Available at: http://www.merstm.net/support/marx_primer.pdf. Accessed November 24, 2008.
- 8 Reinertsen JL, Gosfield AG, Rupp W, Whittington JW. Engaging physicians in a shared quality agenda. IHI Innovation Series white paper. Cambridge, Mass: Institute for Healthcare Improvement, 2007.
- 9 Otero HJ, Ondategui-Parra S, Nathanson EM, Erturk SM, Ros PR. Utilization management in radiology: basic concepts and applications. J Am Coll Radiol 2006;3:351–357.
- 10 Ondategui-Parra S, Gill IE, Bhagwat JG, et al. Clinical operations management in radiology. J Am Coll Radiol 2004;1:632–640.
- 11 Ondategui-Parra S, Bhagwat JG, Gill IE, Nathanson E, Seltzer S, Ros PR. Essential practice performance measurement. J Am Coll Radiol 2004;1:559–566.
- 12 Gibbs J, Clark K, Khuri S, Henderson W, Hur K, Daley J. Validating risk-adjusted surgical outcomes: chart review of process of care. Int J Qual Health Care 2001;13:187–196.
- 13 Kruskal JB, Yam CS, Sosna J, Hallett DT, Milliman YJ, Kressel HY. Implementation of online radiology quality assurance reporting system for performance improvement: initial evaluation. Radiology 2006;241:518–527.
- 14 Ondategui-Parra S, Bhagwat JG, Zou KH, et al. Practice management performance indicators in academic radiology departments. Radiology 2004;233:716–722.
- 15 Crossing the quality chasm: a new health system for the 21st century. Available at: http://www.iom.edu/Object.File/Master/27/184/Chasm-8pager.pdf. Accessed March 24, 2008.
- 16 Pipino LL, Lee YW, Wang RY. Data quality assessment. Commun ACM 2002;45:211–218.
- 17 Choksi VR, Marn CS, Bell Y, Carlos R. Efficiency of a semiautomated review process for notification of critical findings in diagnostic imaging. AJR Am J Roentgenol 2006;186:933–936.
- 18 Halsted MJ. Radiology peer review as an opportunity to reduce errors and improve patient care. J Am Coll Radiol 2004;1:984–987.
- 19 Borgstede JP, Lewis RS, Bhargavan M, Sunshine JH. RADPEER quality assurance program: a multifacility study of interpretive disagreement rates. J Am Coll Radiol 2004;1:59–65.
- 20 Donnelly LF, Strife J. Performance-based assessment of radiology faculty: a practical plan to promote improvement and meet JCAHO standards. AJR Am J Roentgenol 2005;184:1398–1401.
- 21 Adams HG, Arora S. Total quality in radiology: a guide to implementation. Boca Raton, Fla: St Lucie, 1994.
- 22 Alderson PO. Noninterpretive skills for radiology residents: customer service and satisfaction in radiology. AJR Am J Roentgenol 2000;175:319–323.
- 23 Kravet SJ, Howell E, Wright SM. Morbidity and mortality conference, grand rounds, and the ACGME’s core competencies. J Gen Intern Med 2006;21:1192–1194.
- 24 Talner LB, D’Agostino H. The department of radiology quality assessment meeting: an unexpected teaching bonus. Invest Radiol 1994;29:378–380.
- 25 Krajewski K, Siewert B, Yam S, Kressel HY, Kruskal JB. A quality assurance elective for radiology residents. Acad Radiol 2007;14:239–245.
- 26 Langley GL, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance. San Francisco, Calif: Jossey-Bass, 1996.
- 27 Sander W. Six Sigma: the breakthrough management strategy revolutionizing the world’s top corporations. Qual Prog 2000;33:106–107.
- 28 Kang JO, Kim MH, Hong SE, Jung JH, Song MJ. The application of the Six Sigma program for the quality management of the PACS. AJR Am J Roentgenol 2005;185:1361–1365.
- 29 Kaplan RS, Norton DP. The balanced scorecard: translating strategy into action. Boston, Mass: Harvard Business School Press, 1996.