Fostering a Healthy AI Ecosystem for Radiology: Conclusions of the 2018 RSNA Summit on AI in Radiology
Abstract
The 2018 RSNA Summit on AI in Radiology brought together a diverse group of stakeholders to identify and prioritize areas of need related to artificial intelligence in radiology. This article presents the proceedings of the summit with emphasis on RSNA’s role in leading, organizing, and catalyzing change during this important time in radiology.
Keywords: Impact of AI on education, Patient scheduling/no-show prediction, Resource allocation, Staffing
© RSNA, 2019
Summary
RSNA’s Summit on AI in Radiology identified several areas of opportunity for the organization to foster the adoption of artificial intelligence in radiology for the benefit of the patients we serve.
Key Points
■ A diverse group of stakeholders contributed to the 2018 RSNA Summit on Artificial Intelligence in Radiology.
■ RSNA has the opportunity to organize and catalyze artificial intelligence’s role in radiology.
Introduction
Artificial intelligence (AI) has made a resurgence in business and scientific disciplines (1), albeit with much hype. Stakeholders from academia, corporations, professional societies, and government agencies are attempting to organize and guide the development, validation, and dissemination of AI algorithms in health care (2,3). Several nascent companies are developing AI software to augment radiologist workflow and detection abilities. This set of stakeholders is vital to the progression and adoption of AI in radiology. The Radiological Society of North America (RSNA) is keenly interested in the constructive role that professional societies can play in fostering the development of AI applications to improve diagnostic imaging. To assess the needs and set goals and strategic directions, RSNA hosted a 2-day summit in October 2018 that brought together a diverse set of 55 stakeholders, including radiology practitioners, educators, and researchers; members of federal regulatory agencies and the National Institutes of Health; health information technology companies; and RSNA staff and volunteers. Attendees were asked to review critically the current state of AI tools applied to imaging, to determine the desired future states, to prioritize initiatives, and to identify ways in which the RSNA, in concert with other organizations, could best accelerate progress in AI for medical imaging. The core needs, emergent themes, and recommended directions that RSNA should pursue are summarized in this article.
Role of the RSNA
RSNA’s mission is to promote excellence in patient care and health care delivery through education, research, and technologic innovation. The RSNA has particular strengths and expertise as a neutral convener or facilitator that brings disparate stakeholders together and facilitates their work toward a common purpose. For example, as a convener for standards development between industry and domain experts, the RSNA can catalyze the development of standards that contribute to radiology AI development, education, and deployment.
Current AI Initiatives Led by RSNA
RSNA already provides numerous resources for research and education related to AI. The organization is probably best known for its Scientific Assembly and Annual Meeting, which offers numerous AI-focused refresher courses and scientific sessions. The Informatics area of the RSNA Learning Center contains a wide array of posters and other exhibits from which attendees can learn more about how AI is likely to affect radiology practice. The Crowds Cure Cancer exhibit enables radiologists to participate in the annotation process of a large image dataset created by the National Cancer Institute. The annotations will be used for real-world AI research. The RSNA meeting also features a concentrated showcase of AI vendors, where radiologists, scientists, and industry representatives can learn about the current state of the AI marketplace and the products that are becoming available to clinical practices.
The Deep Learning Classroom offers a hands-on introduction for anyone wanting to experience the thrills and challenges of AI algorithm development. Students bring their own laptops and work with instructors to solve actual AI imaging problems. Courses are available for all levels of experience throughout the week.
Outside the annual meeting, RSNA strongly supports AI research and provides numerous educational opportunities for anyone interested in learning more. The National Imaging Informatics Curriculum and Course, sponsored jointly with the Society for Imaging Informatics in Medicine, covers information technology and other noninterpretive skills. Although designed primarily for North American radiology residents, the course has attracted radiologists internationally and across the range of training to practice. The AI portion of the curriculum continues to grow with each week-long session. This course has educated 1200 residents from more than 100 training programs to date. For radiologists who have completed their training, RSNA partners with other radiology professional groups to offer Spotlight Courses around the world that help educate radiologists about AI.
For AI researchers and industry representatives, RSNA offers annual machine learning challenges, featuring a curated dataset that can be used to solve a key clinical imaging problem (4). The first year featured a Bone Age Challenge, which encouraged data scientists to build algorithms that predict skeletal maturity from pediatric hand radiographs (5,6). In 2018, the RSNA challenged data scientists to build algorithms that detect pneumonia on frontal chest radiographs (7). More than 1400 teams participated in the 2018 challenge.
The RSNA’s new Radiology: Artificial Intelligence journal has been established to present high-quality original research, as well as articles highlighting education and social, ethical, and legal issues related to AI in radiology. RSNA’s Research & Education Foundation provides more than $4 million in grant funding for radiology research each year. A growing number of funded grants use AI methods to accomplish their aims, including more than a dozen funded grants in 2018.
RSNA has developed a number of informatics resources that provide a foundation for development of AI in radiology. The RadLex vocabulary provides a common, organized terminology for radiology (8,9); the Logical Observation Identifiers Names and Codes/RSNA Radiology Playbook provides a uniform naming scheme for imaging procedures (10). RSNA has sponsored efforts to develop and promote structured reporting of radiology procedures (11,12). RSNA has supported development of common data elements to promote interoperability of information for radiology reporting, clinical decision support, and AI systems (13,14). These initiatives have allowed radiologists to communicate using common language and information frameworks.
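To make the common-data-element idea concrete, such an element can be pictured as a simple key-value record. The sketch below is purely illustrative: the field names and the placeholder RadLex code are assumptions for this example, not the actual RSNA/ACR common data element schema.

```python
# Hypothetical common data element for a pulmonary nodule measurement.
# Field names and the placeholder RadLex code ("RIDxxxx") are illustrative
# only, not the published common data element schema.
nodule_cde = {
    "element": "pulmonary_nodule_diameter",
    "radlex_code": "RIDxxxx",   # placeholder; real elements map to RadLex terms
    "value": 6.2,
    "unit": "mm",
    "laterality": "right",
}

print(nodule_cde["value"], nodule_cde["unit"])
```

Because every site emits the same named fields with explicit units and coded terminology, downstream reporting, decision-support, and AI systems can consume the data without site-specific parsing.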
RSNA’s ongoing activities as they relate to AI in radiology are listed in Table 1. Table 2 shows educational and research offerings that benefit various stakeholders.
Areas of Discussion and Themes
After discussion of RSNA’s current activities, the meeting focused on six topics:
Implementation of AI in clinical practice.—There is uncertainty about how to vet AI technology to assure it is ready for clinical practice. AI applications need to be integrated into clinical practice so they enhance current work processes.
Education of current and future radiologists about AI.—With much hype and misinformation surrounding AI and the perceived threat to radiologists’ jobs and relevancy, there is a need for multimodal education of radiologists and others in the AI ecosystem.
Advanced methodologies and technologies for research.—Development of AI-related infrastructure, data pipelines, and algorithm testing environments are important pillars to support rapid, yet scientifically rigorous, development and testing of AI algorithms in radiology. Research can focus on multiple data types, including images, natural language texts, and electronic health record data (15).
Improvement of data collection and curation.—Data collection and curation are significant rate-limiting steps in the development of AI algorithms. There are privacy and safety concerns regarding sharing, portability, and use of clinical data for AI research.
Ethical and legal aspects of AI.—AI algorithms can be shaped unwittingly by biases. Such bias highlights the role of radiologists to guide AI developers, users, and regulators and to hold them accountable to ensure algorithms are safe and free of bias. How courts will assign liability for patient harm when AI algorithms have informed the care process is unknown.
Use of AI to improve quality and business performance.—AI solutions can buttress the business intelligence and quality improvement pillars of health care enterprises, especially in radiology.
Four main questions emerged from the summit: (a) How should data be collected, curated, and shared for AI research? (b) How should AI education be delivered to multiple stakeholders in the radiology community? (c) How should AI be deployed in clinical practice as a means to improve quality and business performance? (d) What are the ethical and legal implications of AI in radiology?
To facilitate prioritization of areas, a poll was conducted at the end of the summit, and its results are presented in Table 3.
Data Collection and Curation
Data collection and curation were ranked as the top two priority areas by summit participants. AI algorithms require large volumes of data to train a deep learning system effectively. This requirement underscores the need for scalable and efficient systems for data discovery, collection, curation, transfer, and manipulation (16). This technology gap presents an opportunity for RSNA to facilitate development in partnership with industry leaders.
Many summit participants focused on concerns about security, privacy, and limited interoperability that prevent sharing of image data between health care organizations or between health care organizations and AI vendors. Stakeholders face important trade-offs when considering data sharing. Greater portability and shareability of data for algorithm development can benefit patients, but at a greater risk to patient privacy.
A potential solution to the data-sharing problem could be to develop “sandbox” environments that enable institutions and their vendor partners to develop and test algorithms jointly so data security could be maintained and health care data access could be controlled. This approach offers a tenable solution for institutions that are hesitant to release patient data because of security and privacy concerns. Such environments also could be used to annotate, organize, maintain, and present training data from disparate sources. Automated pipelines will be essential to coordinate, aggregate, and store data from various sources in an efficient, sound, and reproducible manner.
Training on limited single-institution datasets usually does not yield algorithms whose accuracy generalizes across sites and patient populations. New research suggests that multi-institutional studies could be conducted without moving data by transferring the weights from an algorithm trained at one institution to one or more additional institutions for sequential training. These distributed learning methods can improve data security without significantly compromising algorithm accuracy (17).
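The weight-transfer idea can be sketched with a minimal example. Here a toy logistic-regression model stands in for a deep network, and the two simulated "institutions," the synthetic task, and all function names are assumptions for illustration: only the learned weights move between sites, never the underlying data.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w=None, lr=0.1, epochs=200):
    """Logistic-regression training that can resume from transferred weights."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # full-batch gradient step
    return w

# Two "institutions" holding private samples of the same underlying task.
true_w = np.array([2.0, -1.0])
def make_site(n):
    X = rng.normal(size=(n, 2))
    y = (X @ true_w + rng.normal(scale=0.3, size=n) > 0).astype(float)
    return X, y

X_a, y_a = make_site(300)
X_b, y_b = make_site(300)

# Sequential (weight-transfer) training: the data never leave either site.
w = train(X_a, y_a)           # trained at institution A
w = train(X_b, y_b, w=w)      # training continued at institution B

X_test, y_test = make_site(500)
acc = np.mean(((X_test @ w) > 0) == y_test)
print(f"held-out accuracy after sequential training: {acc:.2f}")
```

The same pattern generalizes to deep networks: checkpointed weights are shipped between institutions (or averaged, in federated variants) while each site's images remain behind its own firewall.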
Another major bottleneck in the development of AI algorithms is the need for labeled or annotated imaging datasets, along with the need for reliable normalization of imaging data across imaging sites. Normalization of images and their respective annotations is particularly challenging when the data originate from disparate sources, such as multiple research sites. The annotation bottleneck is particularly evident if bounding boxes or hand-drawn segmentation by a domain expert is required (18).
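One common, simple approach to cross-site intensity normalization is per-image standardization. The sketch below uses synthetic "sites" and assumes only that the two scanners differ by an affine intensity transform (different gain and offset); real harmonization pipelines involve considerably more.

```python
import numpy as np

def zscore_normalize(img, eps=1e-8):
    """Per-image intensity standardization: zero mean, unit variance.
    A common first step when pooling scans whose scanners report
    intensities on different scales."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + eps)

# Two hypothetical "sites" imaging the same anatomy on different scales.
site_a = np.array([[100.0, 200.0], [300.0, 400.0]])
site_b = site_a / 50.0 + 7.0            # different gain and offset
a_n, b_n = zscore_normalize(site_a), zscore_normalize(site_b)
print(np.allclose(a_n, b_n))            # True: the scale difference is removed
```

After standardization the two sites' images are numerically identical, so an algorithm trained on one site's intensity scale is not confounded by the other's.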
There is a need for open standards to facilitate exchange of datasets, algorithms, and validation protocols for multi-institutional collaboration by academic and industry partners. The standards also should allow investigators to compare algorithm performance in different sites and circumstances.
Educating the Radiology Community about AI
Education about AI was ranked highly (Table 3). RSNA can lead the development of medical imaging–specific educational assets that could serve multiple stakeholders, including radiology practitioners (ie, radiologists, radiology nurses, radiologic technologists, physicists, administrators, trainees), informatics and information technology personnel, software developers, radiology educators, researchers, and regulators. This is an appropriate role for RSNA’s educational capabilities.
There is an apparent chasm in communication between radiologists and the data scientists who develop AI algorithms. To bridge the gap, RSNA could develop courses and multidisciplinary groups to stimulate collaboration. Educational materials that could teach imaging to data scientists and data science to radiologists could fulfill this need.
Multimodal AI education is the most effective way to facilitate knowledge transfer. In addition to journal articles, online webinars, and courses, RSNA could develop podcasts, symposia, workshops, hackathons, and educational fellowships, either on its own or in partnership with other organizations. Both active and passive learning sources developed jointly by radiologists and data scientists could have a significant impact on AI education (19).
These multimodal education materials could be guided by an AI curriculum developed for the radiology community. The curriculum could be located and maintained on the RSNA website as a free resource for members. An example of a joint curriculum resource is the National Imaging Informatics Curriculum and Course (20), a week-long online course jointly sponsored by the RSNA and the Society for Imaging Informatics in Medicine. The course currently is offered twice a year but could be expanded to adapt to the needs and feedback of its attendees. Such jointly sponsored courses also could be developed with other radiologic and data-science societies to offer greater opportunities for data scientists to interact with the radiology community.
AI Deployment in Clinical Practice, Quality Assurance, and Business Performance
AI applications in the clinical workspace will extend far beyond image interpretation and will have a profound impact throughout the imaging life cycle, including processes and events that precede image acquisition and interpretation. For example, AI can assist with clinical decision support and order entry, augment the protocoling and quality control process, produce scheduling efficiencies, facilitate change detection, and reduce contrast material and radiation dose (21). The implementation of AI technology in clinical practice falls into three categories: (a) clinical radiology practice, (b) quality assurance and auditing functions, and (c) evaluation of business performance with key performance indicators (KPIs) (22). We review each of these AI applications.
AI in clinical radiology practice.—The use of AI algorithms for clinical radiology practice is regulated by government organizations. The U.S. Food and Drug Administration (FDA) considers AI algorithms in health care “Software as a Medical Device” (23). Software as a Medical Device has a separate precertification pathway for FDA clearance, with its own definition, framework for risk stratification, quality management system, and clinical evaluation guidelines (24). The European Union (EU) requires the Conformité Européenne (CE) mark for a variety of medical and nonmedical innovations, including AI software. CE mark certification, like the FDA’s process, has stringent qualifying requirements, which can be difficult to navigate for novice developers. In addition to CE mark certification, the EU recently passed the General Data Protection Regulation (GDPR), a law that protects EU citizen data privacy (23). Laws such as GDPR put the power to share patient data into the hands of the patients themselves. The GDPR must be taken into account when designing data sharing and security systems but also when considering how specific EU citizens can use data. Similar laws may emerge in other countries, including the United States, as we gain more experience with AI development and use (25).
To help developers navigate the complexities of the FDA and CE mark processes, and the GDPR and any future legislation, the RSNA could educate radiologists and developers about the regulatory processes, as well as assist with connecting solutions (product) with a suitable need in the clinical space (market). Potential consumers of clinical AI algorithms need to remember that approval or certification by the FDA or CE does not guarantee a specific level of performance; it only signifies to the public that the product performed to a minimally acceptable level. Consumers and regulators must closely scrutinize postmarket monitoring of algorithm accuracy on real-world data. For example, the RSNA could assist by hosting registries for postmarket surveillance of AI algorithms.
Quality assurance and auditing features.—A major criticism of AI technology has been the “black-box” nature of the algorithms, whereby the inputs and outputs of a trained system are transparent to the user, but the decision processes in between remain hidden (1). Although the opacity of AI algorithms is unlikely to resolve completely, increased funding for Explainable AI is needed. Explainable AI methods enable the interrogation of AI models to determine which factors led to the model’s conclusion (26). This area of research has been vigorously supported by the Department of Defense and could be supported by other funding agencies that support AI research, including the RSNA.
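Occlusion sensitivity is one simple, model-agnostic example of such interrogation (distinct from the class-activation mapping of reference 26): mask each region of an image in turn and measure how much the model's score drops. The sketch below uses a toy stand-in for a trained classifier; the function names and toy model are assumptions for illustration.

```python
import numpy as np

def occlusion_map(model, image, patch=4, baseline=0.0):
    """Occlusion sensitivity: score drop when each patch-sized region is
    replaced by a baseline value. Large drops mark regions the model
    relied on for its output."""
    base_score = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

# Toy "model" whose score depends only on the top-left quadrant,
# standing in for a trained classifier's output probability.
def toy_model(img):
    return float(img[:8, :8].sum())

img = np.ones((16, 16))
heat = occlusion_map(toy_model, img, patch=8)
print(heat)   # large drop only where the "relevant" quadrant is masked
```

In a radiology setting, overlaying such a map on the image lets the reader check that a pneumonia classifier, for example, attended to the lung fields rather than to a chest tube or a laterality marker.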
Rigorous methods for evaluation, such as randomized controlled trials, can establish trust in an algorithm even when its methods are opaque, especially for AI products that claim to affect patient outcomes with improved diagnostic and detection capabilities. Making training datasets and algorithms freely available when possible also will improve trust. Some researchers advocate open-source publication of algorithms undergoing evaluation by randomized controlled trials to assure transparency. However, not all algorithms can be made available because the radiology software marketplace and industry ecosystem are predicated on the ability to develop proprietary algorithms.
Business performance.—The ability to measure business performance is crucial to organizational success. KPIs are financial and nonfinancial metrics that evaluate the success of a company or organization (27). Hundreds of KPIs exist in radiology (28) and can be divided into groups depending on their use and goals: (a) clinical performance (eg, complication rates, false-positive rates); (b) service-level performance (eg, missed appointments, patient wait time, report turnaround time); (c) utilization and productivity performance (eg, equipment idle time, repeat sequence rates, picture archiving and communication system downtime); and (d) financial performance (eg, net income, preauthorization effectiveness, claims rejection rate) (29).
AI tools can help to understand and optimize the business functions of radiology practices and aid radiology practice management by aiding in the development and evaluation of KPIs. For example, real-time AI evaluation of utilization and productivity KPIs (eg, equipment idle time, picture archiving and communication system downtime) could identify inefficiencies and anomalies that cause bottlenecks amenable to improvement. The veracity of the data being used to measure KPIs and to develop AI algorithms is essential to avoid downstream misleading or biased information (30), which ties to the need for better data collection and curation. RSNA’s role could include maintaining a database of radiology-specific KPIs and their evaluation methods with information on how AI algorithms could be used to measure these KPIs. Another role may be to offer a KPI benchmarking resource for comparison of best practices for business performance across institutions, aided by AI.
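A minimal sketch of automated KPI monitoring follows, assuming hypothetical daily equipment idle-time readings: flag any value whose z score exceeds a threshold. Production systems would use richer models, but the principle is the same.

```python
import numpy as np

def flag_anomalies(values, z_thresh=2.0):
    """Flag KPI readings more than z_thresh standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.abs(z) > z_thresh

# Hypothetical daily scanner idle-time readings (minutes); day 5 is anomalous.
idle_minutes = [42, 38, 45, 40, 41, 120, 39, 43]
flags = flag_anomalies(idle_minutes)
print([day for day, flagged in enumerate(flags) if flagged])
```

Flagged days can then be routed to a dashboard or a practice manager for root-cause review, turning a retrospective KPI report into a near-real-time alerting tool.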
Ethical and Legal Implications of AI in Radiology
RSNA could take on a leading role in providing clarity about the ethical use of AI in radiology. A clear differentiation should be made between augmentation versus replacement of radiologist workflow tasks, with the former being an assistive goal and the latter being an adversarial goal (31). Making this distinction could foster guidelines about accountability and risk attribution for AI technology and its developers, radiologists, regulators, health care organizations, and patients (32). RSNA has joined with other organizations in North America and Europe to develop principles for the ethical use of AI in radiology.
Ambiguity about whether AI technology is assistive or adversarial can make it difficult to assign responsibility and liability for undesirable clinical outcomes arising from the use of AI in radiology. This ambiguity raises many unanswered questions: Who is responsible for an unfavorable outcome if an AI algorithm makes a mistake? What is the radiologist’s role and level of responsibility if an AI algorithm misdiagnoses a disease without input from the radiologist?
AI algorithm developers and users have an obligation to avoid bias. How do we ensure that AI technology is developed and used to avoid inherent bias? The possibility of inadvertently developing AI algorithms with sociodemographic, racial, and gender biases is a particular concern (33).
Conclusion
The RSNA AI Summit identified several areas of opportunity for the Society to foster adoption of AI in radiology for the benefit of the patients we serve: (a) RSNA can serve as a neutral convener or facilitator, to bring disparate stakeholders in the radiology AI space together and direct them to serve a common purpose. (b) RSNA can be a convener for standards development between industry and domain experts. (c) RSNA can catalyze the development of standards that contribute to radiology AI data discovery, collection, curation, development, and deployment with emphasis on data safety, such as sandbox environments. (d) RSNA can lead in educating the radiology community about AI. (e) RSNA can guide AI deployment in clinical practice, quality assurance, and business performance. (f) RSNA can lead the evaluation of ethical and legal implications of AI in radiology. These priorities will serve as a guide for future RSNA initiatives.
Acknowledgments
The authors thank the participants of the Summit on Artificial Intelligence and the RSNA staff for valuable discussions and contributions that enabled the development of this article.
Author Contributions
Author contributions: Guarantors of integrity of entire study, F.H.C., C.P.L.; study concepts/study design or data acquisition or data analysis/interpretation, all authors; manuscript drafting or manuscript revision for important intellectual content, all authors; approval of final version of submitted manuscript, all authors; agrees to ensure any questions related to the work are appropriately resolved, all authors; literature research, all authors; manuscript editing, all authors.
Authors declared no funding for this work.
References
- 1. Peering into the black box of artificial intelligence: evaluation metrics of machine learning methods. AJR Am J Roentgenol 2019;212(1):38–43.
- 2. Current applications and future impact of machine learning in radiology. Radiology 2018;288(2):318–328.
- 3. Machine learning for medical imaging. RadioGraphics 2017;37(2):505–515.
- 4. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiol Artif Intell 2019;1(1):e180031.
- 5. The RSNA pediatric bone age machine learning challenge. Radiology 2019;290(2):498–503.
- 6. Performance of a deep-learning neural network model in assessing skeletal maturity on pediatric hand radiographs. Radiology 2018;287(1):313–322.
- 7. Augmenting the National Institutes of Health chest radiograph dataset with expert annotations of possible pneumonia. Radiol Artif Intell 2019;1(1):e180041.
- 8. RadLex: a new method for indexing online educational materials. RadioGraphics 2006;26(6):1595–1597.
- 9. Creating and curating a terminology for radiology: ontology modeling and analysis. J Digit Imaging 2008;21(4):355–362.
- 10. The LOINC RSNA radiology playbook: a unified terminology for radiology procedures. J Am Med Inform Assoc 2018;25(7):885–893.
- 11. Toward best practices in radiology reporting. Radiology 2009;252(3):852–856.
- 12. Reporting initiative of the Radiological Society of North America: progress and new directions. Radiology 2014;273(3):642–645.
- 13. Common data elements in radiology. Radiology 2017;283(3):837–844.
- 14. The ASNR-ACR-RSNA common data elements project: what will it do for the house of neuroradiology? AJNR Am J Neuroradiol 2019;40(1):14–18.
- 15. Doubly-robust estimation of effect of imaging resource utilization on discharge decisions in emergency departments. Conf Proc IEEE Eng Med Biol Soc 2018;2018:3256–3259.
- 16. Deep learning: a primer for radiologists. RadioGraphics 2017;37(7):2113–2131.
- 17. Distributed deep learning networks among institutions for medical imaging. J Am Med Inform Assoc 2018;25(8):945–954.
- 18. Deep learning in radiology: an overview of the concepts and a survey of the state of the art with focus on MRI. J Magn Reson Imaging doi: 10.1002/jmri.26534. Published online December 21, 2018.
- 19. The need for a machine learning curriculum for radiologists. J Am Coll Radiol 2018 doi: 10.1016/j.jacr.2018.10.008. Published online December 7, 2018.
- 20. National Imaging Informatics Curriculum and Course. https://bit.ly/2Uc0nZ2. Published 2018. Accessed February 15, 2019.
- 21. Machine learning in radiology: applications beyond image interpretation. J Am Coll Radiol 2018;15(2):350–359.
- 22. Leveraging technology to improve radiology workflow. Semin Musculoskelet Radiol 2018;22(5):528–539.
- 23. Artificial intelligence as a medical device in radiology: ethical and regulatory issues in Europe and the United States. Insights Imaging 2018;9(5):745–753.
- 24. The FDA and artificial intelligence in radiology: defining new boundaries. J Am Coll Radiol 2018 doi: 10.1016/j.jacr.2018.09.057. Published online December 4, 2018.
- 25. Artificial intelligence and the ‘Good Society’: the US, EU, and UK approach. Sci Eng Ethics 2018;24(2):505–528.
- 26. Learning deep features for discriminative localization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016; 2921–2929.
- 27. Quality initiatives: key performance indicators for measuring and improving radiology department performance. RadioGraphics 2010;30(3):571–580.
- 28. A design protocol to develop radiology dashboards. Acta Inform Med 2014;22(5):341–346.
- 29. Key performance indicators and the balanced scorecard. J Am Coll Radiol 2018;15(7):1000–1001.
- 30. Big data and machine learning-strategies for driving this bus: a summary of the 2016 Intersociety Summer Conference. J Am Coll Radiol 2017;14(6):811–817.
- 31. Will machine learning end the viability of radiology as a thriving medical specialty? Br J Radiol 2019;92(1094):20180416.
- 32. Ethics, artificial intelligence, and radiology. J Am Coll Radiol 2018;15(9):1317–1319.
- 33. Machine learning in medicine: addressing ethical challenges. PLoS Med 2018;15(11):e1002689.
Article History
Received: Feb 23 2019
Revision requested: Feb 28 2019
Revision received: Mar 1 2019
Accepted: Mar 4 2019
Published online: Mar 27 2019