Quality Initiatives

Quality Initiatives: Measuring and Managing the Procedural Competency of Radiologists

Published Online: https://doi.org/10.1148/rg.315105242

Abstract

Many regulatory and oversight groups require that the professional performance of radiologists be evaluated on an ongoing basis. Although the diagnostic accuracy of radiologists is routinely measured at most institutions by means of peer review processes, systems for evaluating procedural competency are not widely available. Consequently, technical skills are seldom, if ever, evaluated or managed. The key elements of a system for evaluating procedural competency include the following: (a) clear definition of all elements of a transparent evaluation process; (b) definition of standards for training and credentialing and options for maintenance of competency certification in interventional procedures; (c) collection and analysis of process and outcomes metrics; (d) multisource feedback on procedural, patient care, and safety skills; and (e) an effective, anonymous process for managing radiologists in whom deficiencies are identified. Although no ideal system for evaluating procedural competency currently exists, inclusion of these elements goes a long way toward facilitating the introduction of a simple process for providing appropriate feedback to procedural radiologists, acknowledging excellence, and identifying and managing deficiencies if they occur.

© RSNA, 2011

Introduction

The Institute of Medicine's reports highlighting the extent and impact of medical errors (1,2) have focused attention on implementing methods for improving the quality of care provided to patients. This has resulted in the introduction of regulatory processes aimed at minimizing rates of error occurrence and improving patient safety. Most recently, the Joint Commission has required that processes be implemented for both ongoing and focused professional practice evaluation of physicians (3).

Apart from peer review of diagnostic discrepancies, few widely accepted metrics exist for evaluating the professional performance of radiologists. To our knowledge, few data exist for measuring and managing procedural competency. The Society of Interventional Radiology has published clinical practice guidelines that include a classification system for complications by outcome (4). Although Donnelly and Strife (5) implemented a radiologist scorecard that records process and outcomes metrics for pediatric interventional radiologists, we are not familiar with reliable systems that permit the actual evaluation of procedural competency. Such evaluation is important, as procedures represent a major portion of the workload of interventional radiologists.

In this article, we describe the current rules governing assessment of a radiologist's procedural competency and the development of a program to accomplish this task. Specific topics discussed are the definition of professional competency, reasons for measuring it, regulatory requirements for measuring radiologist competency, measuring the procedural competency of radiologists, remediation programs, the emotional impact of performance measurement, and challenges to evaluating performance.

What Is Professional Competency and Why Should It Be Measured?

Professional competency is defined as the “habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and community being served” (6). “Clinical competence exists when a practitioner has sufficient knowledge and skills such that a procedure can be performed to obtain the intended outcomes without harm to the patient” (7).
Recent societal focus on appraisal systems, recertification, revalidation, and continuous professional development requires measuring the performance of healthcare providers to ensure maintenance of competency (8). Of even greater importance is the need to improve patient safety and the quality of care.

Apart from regulatory requirements and anticipated pay for performance programs, the competency of physicians should be evaluated to document that basic training requirements have been met, that outcomes metrics compare favorably with peer benchmarks, and that the physician possesses sufficient procedural skills to practice safely and effectively. The goal of such processes is not to be punitive but to ensure that standards of practice are achieved and adhered to and, when deficiencies are identified, that an acceptable and effective remediation program is instituted. Unless competency is measured and compared with that of peers, deficiencies and best practices will not be identified and opportunities for improvement will not exist.

From the physician's perspective, we need to be assessed to provide feedback and guide future learning, to foster self-reflection and self-remediation, and to promote access to additional training. From the institution's perspective, physicians should be assessed to promote faculty development, to guide a process of self-reflection, to identify staff requiring remediation, to express institutional values, to collect data for educational and research purposes, and to guide selection of appropriately skilled new staff.

Regulatory Requirements for Measuring Radiologist Competency

Currently, physician performance evaluation is an ad hoc, informal, reactive process. Neither physicians nor hospitals have adequately addressed performance problems. Management of identified problems appears to be haphazard. Department chairs lack the training and skills required to manage underperforming physicians. Hospitals receive little help from regulatory groups driving the assessments (9).

For radiologists practicing in the United States, the American College of Radiology (ACR), Joint Commission, American Board of Radiology (ABR), and Society of Interventional Radiology have established practice guidelines for ensuring ongoing performance evaluation and the maintenance of competency (10,11). Table 1 defines the components of regulatory oversight. Table 2 describes the requirements of the major organizations overseeing the practice of interventional or “procedural” radiology.

Table 1 Definitions of Major Terms Applicable to Training and Performance Evaluation

*CME = continuing medical education, MOC = maintenance of certification. Numbers in parentheses are references.

Table 2 Requirements of Regulatory Groups Overseeing the Certification, Accreditation, and Reimbursement of Radiologists

Note.—CMS = Centers for Medicare and Medicaid Services, CT = computed tomography, MR = magnetic resonance. Number in parentheses is a reference.

Healthcare organizations must meet defined performance criteria to be reimbursed. Despite the many existing requirements, there is wide variation in everyday practice. Furthermore, while groups such as the Residency Review Committee require that programs devise methods by which trainees’ procedures are supervised and evaluated, no support is provided in developing these metrics and no widely accepted metrics exist. The simple subjective determination that a physician can perform a procedure unsupervised suffices in most settings.

Measuring the Procedural Competency of Radiologists

The various peer review systems practiced by radiologists focus almost exclusively on the diagnostic accuracy of image interpretation; the technical skills required for procedural competency, along with a host of other nonclinical and educational metrics, go largely unevaluated.

The development of systems to measure competency should ideally take into account the cognitive and technical aspects of procedures (12). Examples of the cognitive components that can be evaluated when a physician is performing a procedure include knowledge of indications and contraindications related to the specific procedure, knowledge of specific complications and methods of recognizing and managing them, awareness of alternate therapeutic options, and the ability to effectively explain the risks and benefits of the procedure to the patient. The technical aspects that can be evaluated include knowledge of the different methods and approaches available for performing the procedure and possession of the skills needed to perform it properly (12).

The fundamental principles of an evaluation process are as follows: Any system aimed at evaluating physician performance should be formal, proactive, objective, reproducible, fair, and responsive (9). It is important to create a culture of physician acceptance of and response to quality data; such a culture can be facilitated by institutional adoption of specific and explicit performance standards of behavior and competence. For such an evaluation system to be effective, the roles and responsibilities of all personnel must be clearly defined before the process is implemented. Accountability must exist and be adhered to at every level.

There should be personalized monitoring and oversight of remedial programs, which should be made available for all underlying causes of substandard performance, including behavioral problems, psychiatric issues, substance abuse, and deficiencies in competency. Compliance with the evaluation process should be required, and the system must respond promptly, confidentially, and fairly to all detected deficiencies. A comprehensive physician evaluation process should consider broad categories such as knowledge, clinical decision-making judgment and technical skills, attitudes toward patients and team members, professional habits, and interpersonal skills. Clearly, there are many different ways of defining and evaluating each of these categories.

All too often, technical competence is determined by using deductive reasoning: if a physician has a low complication rate and good outcomes, he or she must be competent. The problem with this reasoning is that complications and outcomes may be defined by the physician or his or her peers. For this reason, we believe that a more comprehensive series of parameters, as described later in this article, provides data with which competency can be evaluated or continuously improved. Procedural competency can be evaluated according to five major categories (Table 3).

Table 3 Components of a Comprehensive Process for Evaluation of Procedural Competency

Defining the Evaluation Process

For an evaluation of procedural competency to be effective, all physicians must agree to participate in the process; in all likelihood, such participation will ultimately not be optional. The frequency of monitoring must be defined and adhered to, and an annual or semiannual confidential review of results should take place, with a prompt response plan to meet any deficiencies that may be identified. A routine, formal system of monitoring that uses validated measures and focuses on clinical performance is ideal (9).

Such a system should be objective (ie, data driven), fair, transparent, and unbiased. It should deal effectively and promptly with problems when they are identified. Whenever possible, evaluation systems need to be evidence based, with agreed-on standards for satisfactory performance evaluation. Adequate and representative sampling of data is important, and potential confounding factors must always be taken into consideration. Physician-specific data must be representative, easy, and feasible to collect.

Evaluation by physician peers is more likely to achieve physician buy-in; evaluation by nonphysician administrators or by related technical, nursing, or administrative staff is likely to be strongly opposed by the radiologists being evaluated. With peer-led evaluation, common processes can be developed for training, credentialing, defining metrics, and collecting observational data. Much debate will take place over who is ultimately responsible for providing the resources to support these undertakings and who should actually lead the process.

Training, Credentialing, and MOC: Defining Standards for Competency and Behavior

The professional development of a radiologist is a lifelong process. It begins early in residency and continues throughout one's career, thus offering many opportunities for evaluation. Landon et al (13) describe physician clinical performance assessment as “quantitative assessment of physician performance based on the rates at which their patients experience certain outcomes of care and/or the rates at which physicians adhere to evidence-based processes of care during their actual practice of medicine.” Such assessment could prove useful for ensuring radiologist competency for purposes of credentialing, board certification, and licensure.

There is wide variation in how competency is determined for the purpose of privileging, and professional societies often differ in their requirements for achieving and maintaining procedural competency. Often, this is simply defined as the number of supervised and independent procedures that a physician has performed. To improve performance, an institution should try to define a set of procedure-specific process and outcomes criteria, rather than allowing them to be defined in different ways at the division or even departmental level.

Development of procedural competency begins during residency training. The Accreditation Council for Graduate Medical Education (ACGME) has outlined residency training requirements that include procedural “involvement.” The specific requirements state that each resident “should provide patient care through safe, efficient, appropriately utilized, quality-controlled diagnostic and/or interventional radiology techniques” and “must have documented supervised experience in interventional procedures” (14).

For radiologists performing interventional procedures, the procedures include image-guided biopsy and drainage procedures, angioplasty, embolization, infusion procedures, and other percutaneous interventional procedures (14). The guidelines do not specify a specific number of procedures that must be performed by each resident, nor do they ensure competency. Residents are required to keep a procedure log indicating the type and date of the procedure and whether they actively participated in or observed it—no additional information is required.

Further procedural experience comes from fellowship training. Some current radiology fellowships, including vascular, pediatric, and neuroradiology training programs, offer postgraduate training examinations that may result in the awarding of a Certificate of Added Qualification (CAQ), which may be a requirement for obtaining hospital privileges or even reimbursement in the future. The CAQ test has now been replaced by a lifelong MOC process that includes a test component.

The interventional radiology CAQ test involves a knowledge-based assessment rather than direct observation of the physician performing a procedure. For this reason, the MOC process provides a more comprehensive and continuous educational role than simply taking a test. Other radiology subspecialties that may be procedure oriented, such as abdominal imaging, do not currently offer subspecialty certification and are unlikely to do so in the near future. This makes it challenging to standardize procedural training, since the mere documentation of performance of a given number of procedures does not ensure adequate training or competency.

One must also anticipate the continuous introduction of new procedures and technologies, raising the question of how radiologists who are no longer in training programs can be trained and mentored to perform them competently. Are brief training courses sufficient? Who will evaluate competency and how will it be done? Working in isolation or small groups in a private practice setting affords few practical opportunities for the practitioner to be evaluated, and options should be explored to facilitate the process and make it fair and beneficial to the radiologist.

After training, credentialing and privileging are the next steps purporting to evaluate professional ability. According to the ACGME definitions, issuance of privileging rights to a physician can be based on a combination of methods that ensure an adequate skill set. These include the healthcare institution immediately granting rights to practice based on paperwork completed by the physician, direct firsthand documentation by a supervising clinician, and direct proctoring by a qualified clinician (11). These may not all be practical options in small practices.

Finally, the ABR's MOC process is now used to ensure adequate proficiency of a physician. This process can be applied to determining diagnostic proficiency, but it does not allow evaluation of procedural proficiency. New requirements outline the number of hours and types of CME and MOC activities that must be completed. The Practice Quality Improvement component of the MOC does allow procedural outcomes metrics to be collected and analyzed, thus providing an indirect evaluation of a radiologist's technical skills.

Process and Outcomes Metrics

As radiology becomes more evidence based, radiologists will need to become more familiar with the concept of process and outcomes metrics, which can be defined for all procedures as part of the regular quality and safety surveillance program.

Process metrics as they relate to procedures are those components that document adherence to accepted processes surrounding the procedure.
These vary according to the procedure, but several common components include consideration of exclusion criteria, compliance with the National Patient Safety Goals and the Universal Protocol, and adherence to institutional requirements for hand hygiene, personal protective equipment, use of radiation monitoring devices, and reporting of adverse events. Standardized checklists may ensure compliance with and auditing of the so-called critical components of a procedure, including preprocedure, intraprocedure, and postprocedure evaluation. Table 4 lists examples of process metrics that are relevant to interventional procedures.

Table 4 Examples of Process Metrics Applicable to an Interventional Procedure

Note.—NPO = non per os (nothing by mouth), PFT = pulmonary function test, US = ultrasonography.
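A checklist-based audit of process metrics such as those in Table 4 lends itself to simple tabulation. The sketch below, in Python, computes per-item compliance rates across a set of audited cases; the checklist item names and the sample data are hypothetical, chosen only to illustrate the bookkeeping, not a validated instrument.

```python
# Per-item compliance audit for hypothetical process-metric checklist items.
# Each audited case records whether the item was documented (True/False).
from collections import defaultdict

CHECKLIST = ["consent_documented", "timeout_performed", "hand_hygiene", "npo_verified"]

def compliance_rates(audits):
    """audits: list of dicts mapping checklist item -> True/False.
    Returns the fraction of audited cases passing each item."""
    passed = defaultdict(int)
    for case in audits:
        for item in CHECKLIST:
            if case.get(item, False):
                passed[item] += 1
    n = len(audits)
    return {item: passed[item] / n for item in CHECKLIST}

audits = [
    {"consent_documented": True, "timeout_performed": True,
     "hand_hygiene": True, "npo_verified": False},
    {"consent_documented": True, "timeout_performed": False,
     "hand_hygiene": True, "npo_verified": True},
]
rates = compliance_rates(audits)  # eg, consent_documented -> 1.0
```

Items missing from a case's record are counted as noncompliant, which is the conservative choice for an audit.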

Outcomes metrics can be considered an evaluation of performance in practice, the fourth component of the MOC.
Table 5 provides examples of categories of outcomes metrics, each of which encompasses many specific measures. They provide a quantitative assessment of physician performance based on patients’ experience, outcomes of care, and rates of physician adherence to evidence-based processes of care (13).

Table 5 Examples of Categories of Outcomes Metrics Applicable to an Interventional Procedure

Outcomes measures typically include the major and minor complications defined for each procedure and any adverse events and related complications or near misses. Equally important, they assess whether the intended goals of the procedure were achieved. Additional outcomes metrics that depend entirely on the procedure can include patient feedback and complaints and compliments received from the patient or other providers. For example, if three radiologists were to perform US-guided fine-needle aspirations of a thyroid nodule, their performance could be compared in terms of length of the procedure, number of passes made, postprocedure patient satisfaction, presence or absence of complications, adequacy of the sample, final histologic diagnosis, and patient feedback metrics.
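The thyroid fine-needle aspiration comparison described above reduces to straightforward per-radiologist aggregation. The Python sketch below summarizes a few of the named metrics (procedure length, needle passes, complication rate, sample adequacy); all radiologist labels and case values are hypothetical and serve only to show the shape of the comparison.

```python
# Per-radiologist summary of hypothetical US-guided FNA outcome records.
from statistics import mean

cases = [
    # (radiologist, minutes, passes, complication, adequate_sample)
    ("A", 18, 2, False, True),
    ("A", 22, 3, False, True),
    ("B", 30, 4, True,  False),
    ("B", 25, 3, False, True),
]

def summarize(cases):
    out = {}
    for rad in sorted({c[0] for c in cases}):
        rows = [c for c in cases if c[0] == rad]
        out[rad] = {
            "mean_minutes": mean(r[1] for r in rows),
            "mean_passes": mean(r[2] for r in rows),
            "complication_rate": sum(r[3] for r in rows) / len(rows),
            "adequacy_rate": sum(r[4] for r in rows) / len(rows),
        }
    return out

summary = summarize(cases)  # eg, summary["A"]["mean_minutes"] -> 20
```

In practice these summaries would be risk adjusted and compared against peer benchmarks rather than read as raw rates.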

Nelson and colleagues (15) described four classes of outcomes metrics that form a “balanced compass” that generally applies to healthcare delivery management and reporting. These include medical outcomes (such as complications and achievement of therapeutic goals), patient functional status and perspective on outcome, service metrics (such as access and convenience), and economic metrics (such as cost outcomes).

A balance should be struck between the process and outcomes metrics selected to evaluate procedural competency. If data collection focuses on process metrics, the results will be skewed away from technical evaluation and may provide a false sense of procedural competency. Challenges associated with this method include the lack of agreed-on standards for satisfactory procedural performance, the absence of standardized and reliable methods of determining the accurate steps for performing a procedure among different healthcare facilities, and the necessity to adjust for differences in patient populations (eg, a healthier, more educated population may have fewer comorbidities and thus less risk of complications after even a simple procedure) (13). Therefore, developing reliable and valid outcome measures may be difficult, although such measures would help in evaluating the procedural competency of healthcare professionals.
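The sample-size caveat can be made concrete with an exact binomial tail probability: how likely is it to observe at least k complications in n procedures if the radiologist's true rate equals a peer benchmark p0? The numbers below are illustrative only; the point is that with the small case volumes typical of individual practitioners, even a seemingly elevated observed rate may not differ significantly from the benchmark.

```python
# Exact one-sided binomial tail: P(X >= k) for X ~ Binomial(n, p0),
# used to check an observed complication count against a peer benchmark.
from math import comb

def binom_tail(k, n, p0):
    """Probability of observing k or more events in n trials at rate p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# 3 complications in 20 procedures against a 5% benchmark: the observed rate
# is 15%, yet the tail probability (~0.075) shows how weak the evidence is
# at this sample size.
p = binom_tail(3, 20, 0.05)
```

This is why representative sampling and confounder adjustment, noted earlier as fundamental principles, matter before any individual is flagged.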

In radiology, the Society of Interventional Radiology is the organization involved in ensuring that procedures are performed for appropriate indications and with adequate technical skills to maximize safety and achieve excellent patient outcomes (16). The challenge for a credentialing-privileging committee is to make certain that the individual physician actually adheres to these evidence-based practices (17). Given differences in practice patterns, patient factors, and training, each institution (or better still, each national society such as the Society of Interventional Radiology) should develop and validate its own outcomes metrics for individual procedures wherever appropriate and where evidence-based criteria do not exist.

One challenge is to develop uniform methods of collecting the standardized data needed to assess process and outcomes metrics. Many hospitals have instituted mandatory compliance testing, such as of infection control and hand hygiene, as one way of assessing process metrics. Another method is chart review, which also allows indirect evaluation of procedural performance.

Multisource Feedback

The ACGME requires that a multisource feedback tool be used for evaluation of resident core competencies (14), but there is no similar requirement for attending physicians. A multisource feedback tool may use paper or Web-based questionnaires to collect specific information from supervising physicians who provide feedback about each resident (18). By constructively analyzing these data, any deficiencies in resident performance can be addressed.

Multisource feedback tools are being employed for practicing physicians in internal medicine (19,20), pediatrics (21,22), and emergency medicine (23), but there is no standard for use in radiology departments. Nevertheless, a recent study by Lockyer et al (18) showed the use of multisource feedback to be a valid and reliable method of evaluating radiologists.

When constructing a multisource feedback tool, the general competencies of the ACGME can be used to develop questions (Figs 1–4). In our experience, modifications can easily be introduced to align the evaluation process with the vision and mission of a department. For example, if customer service is a priority, specific questions can be posed to evaluate the focus of the radiologist on this area. It is important to solicit feedback on the fundamental principles of the evaluation process.

Figure 1 Sample screen shot from a Web-based general outline of a multisource feedback survey shows the section on clinical skills. For each listed attribute, the physician receives a rating of exceeds, meets, or below expectations (or not applicable).

Figure 2 Sample screen shot from a Web-based general outline of a multisource feedback survey shows the section on interpersonal skills. For each listed attribute, the physician receives a rating of always, usually, sometimes, rarely, or never.

Figure 3 Sample screen shot from a Web-based general outline of a multisource feedback survey shows the section on leadership. For each listed attribute, the physician receives a rating of exceeds, meets, or below expectations (or not applicable).

Figure 4 Sample screen shot from a Web-based general outline of a multisource feedback survey shows the section on feedback from referring physicians. For each listed attribute, the physician receives a rating of exceeds, meets, or below expectations (or not applicable). HIPAA = Health Insurance Portability and Accountability Act.

There should be a spectrum of direct reports, including section or division chiefs and supervisors, as well as reports from a sufficient number of coworkers and, whenever possible, from subordinates and even trainees so that the process can be kept anonymous. We collect data from peer radiologists, referring physicians, technologists, administrative assistants, nursing staff, trainees, and patients. The process should be anonymous; alternatively, the person being evaluated can participate in selecting or suggesting specific reviewers.
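Aggregating ratings while preserving anonymity is a mechanical step worth sketching. The Python example below maps the rating scale used in the sample surveys (exceeds, meets, below expectations) to numbers and reports a mean per reviewer group only when enough respondents exist to protect reviewer identity; the suppression threshold, group names, and responses are all hypothetical.

```python
# Anonymous aggregation of multisource feedback by reviewer group.
SCORE = {"exceeds": 3, "meets": 2, "below": 1}
MIN_GROUP = 3  # suppress groups smaller than this to protect reviewer identity

def group_means(responses):
    """responses: list of (reviewer_group, rating) tuples.
    Returns mean score per group, omitting groups below MIN_GROUP."""
    by_group = {}
    for group, rating in responses:
        by_group.setdefault(group, []).append(SCORE[rating])
    return {g: sum(v) / len(v) for g, v in by_group.items() if len(v) >= MIN_GROUP}

responses = [
    ("technologist", "exceeds"), ("technologist", "meets"), ("technologist", "meets"),
    ("trainee", "exceeds"),  # only one trainee responded: group is suppressed
]
means = group_means(responses)
```

Suppressing small groups trades some feedback granularity for the anonymity that the process depends on for candid responses.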

When evaluating procedural competency, the physician's cognitive knowledge, psychomotor skills, and visual recognition of abnormalities should be measured whenever possible. Cognitive knowledge may be evaluated with the test component of the MOC process, as well as feedback from trainees and colleagues. Psychomotor skills are best evaluated by means of direct observation, with data collected by using a multisource feedback tool. Visual recognition of abnormalities can be evaluated by means of the peer review process, with data collected by using a multisource feedback tool.

Regular evaluation and feedback by means of the multisource feedback process is also a valuable mechanism for avoiding serious problems with residents (24). Extrapolating from these data, the creation of a similar process for staff radiologists may be useful for evaluating the components of the core competencies, as specified by MOC requirements. A yearly assessment could ensure that a physician is continuing to demonstrate the characteristics necessary for successful procedural performance (Fig 5).

Figure 5 Steps in the introduction of a multisource feedback (MSF) tool. The analysis of feedback provided by reviewers includes formulating constructive suggestions for the feedback session, including acknowledging and reinforcing strengths and providing suggestions and guidance when opportunities for improvement exist. Since this is an ongoing process, the cycle can be repeated at predetermined intervals to ensure that improvement efforts are ongoing.

Evaluation with multisource feedback can be employed in conjunction with coaches or mentors to promote career development. In certain situations, participating physicians should expect to meet with a coach or designated person to review and analyze the responses and develop a mutually agreeable plan to enhance performance in areas that may require improvement. This is typically an ongoing process that facilitates personal growth and development.

Direct Observation and Evaluation of Technical and Procedural Skills

Diagnostic performance review by peers is well established for radiologists. Among several options for direct evaluation of procedural skills are simulation of the procedure, evaluation of a clinical procedure by a trained physician or team, and multisource feedback from peers observing procedures over time.

When suitable resources exist, simulation is a reliable method of evaluating performance. In this assessment, the staff radiologist performs a procedure on a mannequin or organ simulator, with performance proficiency evaluated by using a standardized checklist (25). Such opportunities are more likely to exist in academic institutions or at industry training facilities and are not likely to be readily accessible to most private practitioners.

Methods exist for directly observing and evaluating an interventional procedure (26). A reliable, acceptable, and practical checklist can be developed by using the modified Delphi method, in which multiple survey rounds are administered to a preselected group of experts. The basic steps in the process are illustrated in Figure 6.

Figure 6 Steps in the development of a tool for assessing a generic interventional procedure. The tool is developed by using a modified Delphi method followed by standard setting with the Angoff method (24). The Delphi method involves a series of repeated anonymous surveys of experts to help define the list of major or minor steps that should be sequentially followed when performing a procedure.
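The two standard-setting steps named here can be sketched computationally: a modified Delphi round retains a candidate checklist step when a supermajority of experts rates it essential, and an Angoff-style step averages experts' estimates of the probability that a borderline-competent radiologist completes each retained step to yield a cut score. The 80% retention threshold, step names, and expert votes below are illustrative assumptions, not values drawn from the cited methods.

```python
# Sketch of Delphi item retention followed by Angoff-style cut-score setting.
RETAIN_THRESHOLD = 0.8  # illustrative supermajority threshold

def delphi_retain(votes):
    """votes: dict step -> list of booleans (expert rated step essential).
    Returns the steps retained for the checklist."""
    return [s for s, v in votes.items() if sum(v) / len(v) >= RETAIN_THRESHOLD]

def angoff_cut_score(estimates):
    """estimates: dict step -> per-expert probabilities that a borderline
    candidate completes the step. Returns the expected number of completed
    steps, usable as a pass threshold."""
    return sum(sum(p) / len(p) for p in estimates.values())

votes = {
    "verify_consent": [True, True, True, True, True],   # 100%: retained
    "confirm_site":   [True, True, True, True, False],  # 80%: retained
    "optional_photo": [True, False, False, False, False],  # 20%: dropped
}
retained = delphi_retain(votes)
cut = angoff_cut_score({s: [0.9, 0.8] for s in retained})
```

In a real exercise the Delphi surveys are repeated anonymously over several rounds until the item list stabilizes; this sketch shows only a single round.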

In an academic or large practice environment, direct evaluation of procedural performance by a chief of service or designee may occasionally be required to ensure that a radiologist is adequately trained to perform a procedure. Such a process may be considered subjective and is unlikely to be widely welcomed by interventional radiologists no longer in training. Often included as a component of a Focused Professional Practice Evaluation, the process is typically performed by having a senior staff member assist in a procedure to directly determine at that time whether the primary radiologist is competent to independently perform its different components. However, similar information is frequently obtained as part of a multisource feedback process.

If multisource feedback is to be performed, one or more processes must be established to deal with responses indicating that remedial action may be required.

Remediation Programs

The remediation of perceived deficient procedural skills is a difficult challenge for both the individual radiologist and the department. In the absence of national guidelines and metrics, the practitioner is likely to challenge the assessment, making the entire process even more burdensome and demanding. Such a remedial process requires the development of an agreed-on plan of action that is documented in writing (usually placed in the physician's credentialing file) and that allows ongoing monitoring of progress. Typically, there is focused training, which may range from an additional year of fellowship to supervised performance of procedures on simulators or on patients.

Specific courses of action with appropriate counseling and treatment must be established to treat radiologists with short-term stresses, addictions, and mental or physical illness. Special attention must be given to radiologists whose procedural deficiencies are due to declining knowledge or skills. In these cases, remedial programs are simple to fashion but require buy-in and willing and ongoing participation by the radiologist. According to the specific issue being addressed, components of such a program may include a specific number of hours of focused and carefully selected CME opportunities, direct observation of cases, documentation of attendance at rehabilitation programs, emotional and psychological counseling, and random blood testing when necessary.

Emotional Impact of Performance Measurement

It is important to achieve buy-in from physicians when establishing a process to measure and manage performance, since this may mitigate or minimize any adverse emotional response. However, the unexpected detection of subpar performance may have a devastating impact and should be managed with sensitivity.

Confidentiality must be respected as much as is possible, especially during and subsequent to any remediation program. Each institution should develop and publicize to all physicians a set of guidelines about the medicolegal implications of a bad review, those cases in which data are required to be sent to regulatory agencies, and instances of severe deficiencies that could even result in loss of board certification, hospital privileges, credentialing, and licensure (13).

Challenges to Evaluating Performance

Many challenges exist in implementing an effective process for evaluating procedural competency. These include lack of acceptable evidence-based measures, problems defining thresholds for acceptable performance, sample size and statistical considerations, representativeness of data, difficulties correlating with clinical impact and outcome, and the cost and feasibility of collecting such enormous amounts of data (13). One also cannot underestimate the time, costs, and additional resources required to implement a successful system. Typically, little support is provided by institutions; given the current economic situation, the resources that may be required are likely to limit the development of ideal systems.

Performance appraisal can discourage teamwork and is often overly subjective and biased. The evaluator may be perceived as having complete and unjustified power over an employee, and different evaluators may apply inconsistent criteria and standards. The evaluation process may induce unnecessary and unintended anxiety and tends to encourage employees to pursue short-term rather than long-term goals.

Barriers to willing participation include the lack of national benchmarks for measuring and comparing radiologist performance, the administrative and bureaucratic requirements of undertaking such a process, and redundancy within the system. Moreover, it has not been demonstrated that this extensive work actually improves the clinical performance of radiologists. Finally, few (if any) remediation systems are in place to deal with the below-competent radiologist who requires additional training or other improvement programs.

Summary

Regulatory requirements now demand that physician performance be measured and that any deficiencies be appropriately managed. We describe a radiologist-specific system that incorporates multisource evaluation of procedural competency, assessment of physicians’ clinical performance, and direct evaluation of technical and procedural skills.

An effective evaluation process must include management of underperforming physicians, as well as steps to stimulate technical improvement and maintenance of competency.

Although the procedural competency of physicians is difficult to assess, such evaluation addresses a critical component of everyday radiology practice. Consequently, it is necessary that radiology departments implement a valid assessment method to ensure the highest possible level of patient care and safety.

J.R.S. consults for INTIO and is a stockholder in INTIO, Intelliject, MedicaSafe, and Caymus Medical; all other authors have no financial relationships to disclose.

References

  • 1 Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Washington, DC: National Academies Press, 1999.
  • 2 Committee on Quality Health Care in America, Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academies Press, 2001.
  • 3 Ongoing Professional Practice Evaluation (OPPE). The Joint Commission report. http://www.jointcommission.org/AccreditationPrograms/Hospitals/Standards/09_FAQs/MS/Ongoing_Professional_Practice_Evaluation.htm. Accessed November 16, 2010.
  • 4 Sacks D, McClenny TE, Cardella JF, Lewis CA. Society of Interventional Radiology clinical practice guidelines. J Vasc Interv Radiol 2003;14(9 Pt 2):S199–S202.
  • 5 Donnelly LF, Strife JL. Performance-based assessment of radiology faculty: a practical plan to promote improvement and meet JCAHO standards. AJR Am J Roentgenol 2005;184(5):1398–1401.
  • 6 Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA 2002;287(2):226–235.
  • 7 Miller MD. Office procedures: education, training, and proficiency of procedural skills. Prim Care 1997;24(2):231–240.
  • 8 Evans R, Elwyn G, Edwards A. Review of instruments for peer assessment of physicians. BMJ 2004;328(7450):1240.
  • 9 Leape LL, Fromson JA. Problem doctors: is there a system-level solution? Ann Intern Med 2006;144(2):107–115.
  • 10 Accreditation Council for Graduate Medical Education. Policies and procedures. February 7, 2011. http://www.acgme.org/acWebsite/about/ab_ACGMEPoliciesProcedures.pdf. Accessed November 16, 2010.
  • 11 Philadelphia Center for Risk Management, Health Resources and Services Administration, U.S. Department of Health and Human Services. Policy Information Notice 01–16: credentialing and privileging of health center practitioners. http://bphc.hrsa.gov/. Published 2001, revised 2006. Accessed November 16, 2010.
  • 12 Durning SJ, Cation LJ, Jackson JL. Are commonly used resident measurements associated with procedural skills in internal medicine residency training? J Gen Intern Med 2007;22(3):357–361.
  • 13 Landon BE, Normand SL, Blumenthal D, Daley J. Physician clinical performance assessment: prospects and barriers. JAMA 2003;290(9):1183–1189.
  • 14 Accreditation Council for Graduate Medical Education. ACGME program requirements for graduate medical education in diagnostic radiology. Effective: July 1, 2008. http://www.acgme.org/acWebsite/downloads/RRC_progReq/420_diagnostic_radiology_07012010.pdf. Accessed November 16, 2010.
  • 15 Nelson EC, Mohr JJ, Batalden PB, Plume SK. Improving health care. I. The clinical value compass. Jt Comm J Qual Improv 1996;22(4):243–258.
  • 16 Lewis CA, Sacks D, Cardella JF, McClenny TE; Standards Division of the Society of Interventional Radiology. Position statement: documenting physician experience for credentials for peripheral arterial procedures—what you need to know. A consensus statement developed by the Standards Division of the Society of Interventional Radiology. J Vasc Interv Radiol 2003;14(9 Pt 2):S373.
  • 17 Institute of Medicine. Envisioning the National Health Care Quality Report. Washington, DC: National Academies Press, 2001.
  • 18 Lockyer JM, Violato C, Fidler HM. Assessment of radiology physicians by a regulatory authority. Radiology 2008;247(3):771–778.
  • 19 Lipner RS, Blank LL, Leas BF, Fortna GS. The value of patient and peer ratings in recertification. Acad Med 2002;77(10 suppl):S64–S66.
  • 20 Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA 1993;269(13):1655–1660.
  • 21 Violato C, Lockyer J. Self and peer assessment of pediatricians, psychiatrists and medicine specialists: implications for self-directed learning. Adv Health Sci Educ Theory Pract 2006;11(3):235–244.
  • 22 Violato C, Lockyer JM, Fidler H. Assessment of pediatricians by a regulatory authority. Pediatrics 2006;117(3):796–802.
  • 23 Lockyer JM, Violato C, Fidler H. The assessment of emergency physicians by a regulatory authority. Acad Emerg Med 2006;13(12):1296–1303.
  • 24 Borus JF. Recognizing and managing residents’ problems and problem residents. Acad Radiol 1997;4(7):527–533.
  • 25 Mendiratta-Lala M, Williams TR, de Quadros N, Bonnett J, Mendiratta V. The use of a simulation center to improve resident proficiency in performing ultrasound-guided procedures. Acad Radiol 2010;17(4):535–540.
  • 26 Huang GC, Newman LR, Schwartzstein RM, et al. Procedural competence in internal medicine residents: validity of a central venous catheter insertion assessment instrument. Acad Med 2009;84(8):1127–1134.

Article History

Received: Dec 15 2010
Revision requested: Feb 15 2011
Revision received: May 1 2011
Accepted: May 12 2011
Published online: Sept 6 2011
Published in print: Sept 2011