Using Machine Learning to Reduce the Need for Contrast Agents in Breast MRI through Synthetic Images
Abstract
Background
Reducing the amount of contrast agent needed for contrast-enhanced breast MRI is desirable.
Purpose
To investigate if generative adversarial networks (GANs) can recover contrast-enhanced breast MRI scans from unenhanced images and virtual low-contrast-enhanced images.
Materials and Methods
In this retrospective study of breast MRI performed from January 2010 to December 2019, simulated low-contrast images were produced by adding virtual noise to the existing contrast-enhanced images. GANs were then trained to recover the contrast-enhanced images from the simulated low-contrast images (approach A) or from the unenhanced T1- and T2-weighted images (approach B). Two experienced radiologists were tasked with distinguishing between real and synthesized contrast-enhanced images using both approaches. Image appearance and conspicuity of enhancing lesions on the real versus synthesized contrast-enhanced images were independently compared and rated on a five-point Likert scale. P values were calculated by using bootstrapping.
Results
A total of 9751 breast MRI examinations from 5086 patients (mean age, 56 years ± 10 [SD]) were included. Readers who were blinded to the nature of the images could not distinguish real from synthetic contrast-enhanced images (average accuracy of differentiation: approach A, 52 of 100; approach B, 61 of 100). The test set included images with and without enhancing lesions (29 enhancing masses and 21 nonmass enhancement; 50 total). When readers who were not blinded compared the appearance of the real versus synthetic contrast-enhanced images side by side, approach A image ratings were significantly higher than those of approach B (mean rating, 4.6 ± 0.1 vs 3.0 ± 0.2; P < .001), with the noninferiority margin met by synthetic images from approach A (P < .001) but not B (P > .99).
Conclusion
Generative adversarial networks may be useful to enable breast MRI with reduced contrast agent dose.
© RSNA, 2023
Supplemental material is available for this article.
See also the editorial by Bahl in this issue.
Summary
Generative adversarial networks can help recover the full contrast information from simulated low-dose contrast-enhanced breast MRI examinations.
Key Results
■ Using a training set (n = 9551) of standardized contrast-enhanced breast MRI scans, generative adversarial networks (GANs) recovered image appearance and conspicuity of enhancing lesions from simulated low–contrast agent dose images, as verified on independent test sets (two sets of 100 images each).
■ Two experienced radiologists could not distinguish the synthetic from the real postcontrast images and were correct in 49 of 100 (P = .62) and 55 of 100 (P = .18) images, respectively.
■ When the same GANs were trained with unenhanced images, the conspicuity of enhancing lesions was significantly reduced when measured on a five-point Likert scale (mean rating, 4.9 vs 1.8; P < .001).
Introduction
Breast cancer is one of the most common types of cancer in women and continues to represent a major cause of cancer-related death (1). New European Society of Breast Imaging guidelines recommend contrast-enhanced MRI for breast cancer screening for all women aged 50–70 years with extremely dense breasts (2), and similar recommendations have been issued by the American College of Radiology (3).
Administration of a gadolinium-based contrast agent (GBCA) is required for breast MRI (4). While there is broad consensus that the functional information provided by depicting the angiogenic activity of breast cancer is what drives the sensitivity of breast imaging methods, especially in women with dense breasts, the need for contrast agents is an obstacle when imaging methods are used for screening (5). It has been demonstrated that in healthy individuals with normal kidney function, linear GBCAs can deposit in deep cerebral nuclei (6). Such deposition appears to be less likely with macrocyclic GBCAs (7). Although no long-term side effects of gadolinium deposition have been reported in the literature thus far, there is still an obvious clinical need to reduce the amount of gadolinium injected for diagnostic purposes (5,8,9).
Accordingly, there is growing interest in developing methods that help reduce the amount of GBCA needed for breast MRI. Although unenhanced breast MRI protocols have been described and tested, they have thus far not attained the same level of diagnostic accuracy as contrast-enhanced imaging (10). Recent reports suggest that doses as low as 20% of the recommended GBCA dose may be equally effective in the detection of breast lesions (8,11). Although images with such a low signal-to-noise ratio (SNR) may be useful for depicting certain breast cancers, such as large tumors or cancers already visible on unenhanced images, they may not be satisfactory for detecting the smaller enhancing lesions expected in a screening setting. Thus, a method that restores the SNR of low–contrast agent dose images is highly desirable.
One approach to achieve this is to use methods of machine learning to virtually restore the SNR of MRI scans obtained with lower doses of contrast agent. It has already been shown that deep learning has the potential to reduce gadolinium dose in brain MRI while avoiding substantial image quality degradation (12–14). It has also been postulated that postcontrast images in breast MRI could be reliably generated solely using precontrast images; however, this has only been examined using a small test set (15).
The purpose of our study was to investigate if generative adversarial networks (GANs) can recover contrast-enhanced breast MRI scans from unenhanced images and virtual low-contrast-enhanced images.
Materials and Methods
Patients and Study Design
Local institutional review board approval was obtained for this retrospective study that included breast MRI data acquired at the Department of Diagnostic and Interventional Radiology of University Hospital Aachen (Germany). Using the department’s database of patient medical records between January 2010 and December 2019, we included all breast MRI examinations consisting of at least a T2-weighted image and a dynamic series with images obtained before and after contrast agent injection; a total of 165 incomplete examinations were excluded. The final number of breast MRI examinations included in this study was 9751 performed in 5086 women. Of these women, 3803 presented for screening or follow-up after breast-conserving treatment, and 1283 presented for diagnostic assessment or preoperative staging. Images were anonymized and then partitioned into a training set of 9551 MRI examinations from 4886 women and a separate held-out test set of 200 examinations from 200 women (Fig 1). We randomly chose the test set from all patients who had an untreated enhancing lesion in one breast and no findings on the contralateral side to test the algorithms on breasts both with and without findings.

Figure 1: Study flow diagram shows the selection process for the breast MRI scans used in our study.
Image Acquisition
Dynamic contrast-enhanced MRI studies of the breast had been performed according to a standardized protocol (16) on a 1.5-T system (Achieva or Ingenia, Philips Medical Systems) by using a double-breast four-element surface coil (Invivo) with two paddles used to immobilize the breast in the craniocaudal direction (Noras). Protocol details can be found in Appendix S1.
Image Preprocessing and Training of the Machine Learning Algorithms
To simulate the situation in which a lower dose of GBCA is administered to the patient, the SNR of the existing examinations was reduced such that they corresponded to a contrast agent dose of 25% of the recommended body weight–adjusted dose. Details of the calculation and transformation of the images into a machine-readable format are given in Appendix S1.
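The exact noise calculation is described in Appendix S1; as an illustration only, a simulated low-dose subtraction image can be produced by scaling the enhancement signal to the target dose fraction and adding Gaussian noise. The function name and noise model below are our own simplifications, not the authors' exact procedure:

```python
import numpy as np

def simulate_low_dose(subtraction, dose_fraction=0.25, seed=0):
    """Simulate a reduced contrast agent dose on a subtraction image.

    The enhancement signal is scaled to `dose_fraction` of its original
    intensity, and Gaussian noise is added so that the effective SNR drops.
    Illustrative simplification; the study's exact calculation is in its
    Appendix S1.
    """
    rng = np.random.default_rng(seed)
    scaled = subtraction * dose_fraction
    # Illustrative noise level: proportional to the removed signal fraction.
    noise_sd = np.std(subtraction) * (1.0 - dose_fraction)
    return scaled + rng.normal(0.0, noise_sd, size=subtraction.shape)
```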
To synthesize images, we used the Pix2PixHD image-to-image translation framework described in reference 17. In approach A, the T1-weighted, T2-weighted, and simulated low-dose subtraction images served as input to the model (Fig 2, approach A). The model predicted what the subtraction image might have looked like. In approach B, the same procedure was used, but the model received only the T1- and T2-weighted precontrast images as input for predicting the postcontrast image (Fig 2, approach B). The full code of our setup is publicly available at https://github.com/mueller-franzes/BreastMRI-Pix2PixHD.
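The difference between the two approaches thus comes down to which sections are stacked as input channels for the generator. A minimal sketch (the helper name is hypothetical; the published repository contains the actual implementation):

```python
import numpy as np

def stack_model_inputs(t1, t2, low_dose_sub=None):
    """Stack 2D sections channel-wise as generator input.

    Approach A: T1, T2, and the simulated low-dose subtraction (3 channels).
    Approach B: T1 and T2 only (2 channels). In both cases the training
    target is the full-dose subtraction image.
    """
    channels = [t1, t2]
    if low_dose_sub is not None:
        channels.append(low_dose_sub)
    return np.stack(channels, axis=0)
```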

Figure 2: Diagram shows training setup for approaches A and B. Image volumes are first resampled and cropped to provide standardized image sections for the generative models. For approach A, inputs to the model are T1- and T2-weighted sections along with the simulated low-dose subtraction image. Based on these inputs, the model predicts the full-dose subtraction image. During training, this predicted image is compared with the acquired image, and a loss is calculated as the difference between these two images. The loss is used to adjust the weights in the model (ie, to train the model). Training for approach B proceeds in the same manner, but the model predicts the full-dose subtraction image based on the T1- and T2-weighted image sections only.
Image Evaluation
Three experiments were performed to compare the appearance of the synthesized postcontrast subtracted images generated by approaches A and B with the real postcontrast subtracted images (Table 1).
In experiment 1, two experienced radiologists (L.H. [radiologist 1] and E.D. [radiologist 2], each with at least 5 years of experience in breast MRI) were asked to review 100 patient images from the test set that included enhancing lesions (n = 50) or no enhancing lesions (n = 50). The data set included real postcontrast subtracted images and synthetic postcontrast images.
Images were depicted in random order. Readers were blinded to the nature of the presented image and were then asked, independently of each other, to determine whether a presented postcontrast subtraction image was real or synthetic. Readers did not have access to the clinical history or other imaging examinations and were not trained on this particular task before participating in the study.
In experiment 2, the same readers were unblinded, and real and synthetic images from the same 100 women used for experiment 1 were presented in a pairwise fashion, side by side. Readers were asked to decide, on a five-point scale (1 = entirely different, 5 = exactly equivalent), whether the synthetic postcontrast subtraction images were equivalent in terms of image contrast, image sharpness, and visual SNR to the respective real postcontrast subtraction images.
In experiment 3, the same radiologists assessed breast MRI scans side by side from a different set of patients with contrast-enhancing lesions (n = 100), none of which were included in experiment 1 or 2. Unblinded readers were asked to determine, again on a five-point scale (1 = lesion not visible, 5 = lesion conspicuity exactly equivalent), whether the enhancing lesion was depicted on the synthetic postcontrast subtracted image with equivalent conspicuity as on the real postcontrast subtracted image.
In all experiments, two runs were conducted: first with the synthetic images generated according to approach A (from virtual low-contrast images) and, after a washout period of 7 days, with the synthetic images generated according to approach B (from unenhanced images).
Statistical Analysis
Statistical analysis was carried out by two authors (G.M.F. and D.T., with 3 and 8 years, respectively, of experience in statistical analysis), and tests were considered to show statistically significant difference at P < .05. Bootstrapping (n = 10 000 drawings with replacement) was used to test if there was a significant difference versus random choice (accuracy = 0.5) in experiment 1.
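A sketch of such a resampling test, in our own simplified two-sided formulation (the authors' exact bootstrap procedure may differ):

```python
import numpy as np

def bootstrap_p_vs_chance(correct, n_boot=10_000, seed=0):
    """Two-sided bootstrap test of observed accuracy against chance (0.5).

    `correct` is a 0/1 array of per-image correctness. The accuracy is
    resampled with replacement, and the P value is the fraction of
    resampled accuracies falling at or beyond chance, doubled.
    """
    correct = np.asarray(correct, dtype=float)
    rng = np.random.default_rng(seed)
    boots = rng.choice(correct, size=(n_boot, correct.size), replace=True).mean(axis=1)
    if correct.mean() >= 0.5:
        p = 2 * np.mean(boots <= 0.5)
    else:
        p = 2 * np.mean(boots >= 0.5)
    return min(p, 1.0)
```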
Noninferiority of the synthesized versus real images in experiments 2 and 3 was tested by the null hypothesis that the mean rating was 4 (“almost equivalent”) or less using the one-sided Wilcoxon test. In addition, the one-sided Wilcoxon test was used to test if the ordinally scaled ratings of the synthetic images from approach A were higher than the ratings from approach B.
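This noninferiority test can be sketched as follows (our own formulation of the test described above; the authors' exact implementation may differ):

```python
import numpy as np
from scipy.stats import wilcoxon

def noninferiority_p(ratings, margin=4):
    """One-sided Wilcoxon signed-rank test against a noninferiority margin.

    H0: ratings are centered at the margin (4, "almost equivalent") or
    below. A small P value indicates ratings above the margin, supporting
    noninferiority of the synthetic images.
    """
    diffs = np.asarray(ratings, dtype=float) - margin
    return wilcoxon(diffs, alternative="greater").pvalue
```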
As an additional quantitative measure to compare the similarity or dissimilarity of the real and synthetic images from approaches A and B, we used the structural similarity index as proposed by Wang et al (18). The structural similarity index measures the degree of agreement between two images in terms of luminance (ie, the brightness of individual pixels), contrast (ie, the SD of brightness values), and structure (ie, the correlation of brightness values). For the interested reader, we provide detailed formulas for the calculation of each of these values and additional quantitative comparisons of similarity using mean absolute error, mean squared error, and peak SNR in Appendix S1 (Table S2).
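With scikit-image, the structural similarity index between a real and a synthetic image can be computed as shown below; the arrays here are random stand-ins for actual subtraction images:

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
real = rng.random((64, 64))                                 # stand-in "real" image
synthetic = real + rng.normal(0.0, 0.05, size=real.shape)   # near-identical copy

# Higher values indicate closer agreement in luminance, contrast, and structure.
score = structural_similarity(real, synthetic,
                              data_range=synthetic.max() - synthetic.min())
```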
Tests and metrics were implemented in Python version 3.8 using the SciPy (19) and scikit-image (20) libraries.
Results
Patient Characteristics
The study included 5086 women, with 4886 (mean age, 56 years ± 10 [SD]) in the training set and 200 (mean age, 55 years ± 12) in the test set (Table 2). A total of 1478 enhancing lesions were observed; of those, 70.2% (1037 of 1478) were benign (test set, n = 97) and 29.8% (441 of 1478) were malignant (test set, n = 53). A total of 910 lesions were enhancing masses (test set, n = 89), and 568 lesions were nonmass enhancement (test set, n = 61). The median size of enhancing masses was 15 mm (test set, 15 mm), with a range of 4–57 mm (test set, 5–55 mm); the median size of nonmass enhancement was 25 mm (test set, 25 mm), with a range of 11–114 mm (test set, 11–104 mm).
Reader Identification of Synthetic Images
Using images generated according to approach A (from virtual low-contrast images), radiologists 1 and 2 were able to correctly determine whether a presented image was real or synthetic for 49 of 100 images and 55 of 100, respectively. Thus, the rate at which readers were able to identify a synthetic image was 49% ± 5 (SD) for radiologist 1 and 55% ± 5 for radiologist 2 (Fig 3, approach A). Neither value differed significantly from random guessing (P = .62 and P = .18, respectively).

Figure 3: Blinded rating of breast MRI scans as real or synthetic. Square grids show the ground truth and blinded rating of 100 subtraction images as real or synthetic by two radiologists experienced in assessment of breast MRI scans. Synthetic images were generated by generative adversarial networks using either simulated low-contrast images and original MRI scans (approach A) or only unenhanced MRI scans (approach B). Correct ratings are those in the top left and bottom right boxes of each matrix. Together, these data show that the radiologists had trouble differentiating real from synthesized images (a completely random rating would result in an average accuracy of 0.5). SNR = signal-to-noise ratio.
Using images generated according to approach B (from unenhanced images), radiologists 1 and 2 were able to correctly determine whether an image was real or synthetic for 67 of 100 images (67% ± 5 [SD], P < .001) and 55 of 100 (55% ± 5, P = .18), respectively (Fig 3, approach B).
Direct Comparison of Image Appearance Side by Side
When directly comparing the synthetic images from approach A with the respective real images side by side, radiologists 1 and 2 rated the synthetic images as 4 or 5 for 96 of 100 images (mean rating, 4.5 ± 0.1 [SD]) and 100 of 100 images (mean rating, 4.7 ± 0.1), respectively (Fig 4, approach A). This confirmed noninferiority of the synthetic images according to the prespecified noninferiority criterion (pooled mean rating, 4.6 ± 0.1; P < .001).

Figure 4: Unblinded direct comparison of the overall appearance of breast MRI scans. Stacked bar graphs show assessment of overall appearance of real compared with synthetic images with and without enhancing lesions. Two radiologists experienced in assessment of breast MRI scans rated images on a scale from 1 (entirely different) to 5 (exactly equivalent), taking into consideration image contrast, image sharpness, and visual signal-to-noise ratio (SNR). Synthetic images were generated by generative adversarial networks using either simulated low-contrast images and original MRI scans (approach A) or only unenhanced MRI scans (approach B).
When performing the same experiment with the synthetic images generated by approach B, radiologists 1 and 2 rated the resulting synthetic images as 4 or higher for 54 of 100 images (mean rating, 3.3 ± 0.2 [SD]) and 40 of 100 images (mean rating, 2.8 ± 0.2), respectively (Fig 4, approach B). This did not meet the predefined noninferiority criterion (pooled mean rating, 3.0 ± 0.2; P > .99).
The quantitative comparison between the real and synthetic images yielded a significantly higher structural similarity index for approach A than approach B (approach A, 0.74 ± 0.10 [SD]; approach B, 0.56 ± 0.08; P < .001).
Lesion Conspicuity
Lesion conspicuity on the synthetic images generated by approach A was rated as 4 (almost exactly equivalent) or 5 (exactly equivalent) for 100 and 99 of the 100 images by radiologists 1 and 2, respectively (Fig 5, approach A). The average lesion conspicuity score was 4.9 ± 0.1 (SD) for radiologist 1 and 4.8 ± 0.1 for radiologist 2. This confirmed noninferiority of the synthetic images according to the noninferiority criterion (pooled mean rating, 4.9 ± 0.1; P < .001).

Figure 5: Unblinded comparison of the conspicuity of enhancing lesions on breast MRI scans. Stacked bar graphs show assessment of the conspicuity of enhancing lesions on real compared with synthetic images. Two radiologists experienced in the assessment of breast MRI scans rated images on a scale from 1 (lesion not visible) to 5 (lesion conspicuity exactly equivalent). Synthetic images were generated by generative adversarial networks using either simulated low-contrast images and original MRI scans (approach A) or only unenhanced MRI scans (approach B). SNR = signal-to-noise ratio.
Lesion conspicuity on the synthetic images generated by approach B was rated exactly equivalent or almost exactly equivalent for 11 and 12 of the 100 images by radiologists 1 and 2, respectively (Fig 5, approach B). The average lesion conspicuity score was 1.8 ± 0.1 (SD) for radiologist 1 and 1.7 ± 0.1 for radiologist 2. This did not meet the noninferiority criterion (pooled mean rating, 1.8 ± 0.1; P > .99).
Average ratings of lesion conspicuity on synthetic images from approach A (4.9 ± 0.1 [SD]) were significantly higher than those generated according to approach B (1.8 ± 0.1, P < .001).
Qualitatively, we found that contrast enhancement was synthesized by approach B if lesions manifested as space-occupying lesions that were visible on the non–contrast-enhanced images, in other words, as masses (Fig 6). This allowed the correct prediction of contrast enhancement on images that did indeed exhibit enhancement, but it also led to false-positive enhancement (Fig 6). In three patients, approach B "invented" a contrast-enhancing mass although there was no enhancement on the original images. This was not observed on images reconstructed with approach A. Moreover, approach B did not accurately synthesize contrast enhancement of lesions that did not manifest as space-occupying lesions (Fig 6). Approach A performed better and was able to correctly reconstruct the contrast enhancement in all of these patients.

Figure 6: Example breast MRI scans from the study. From left to right: axial T2-weighted images, axial T1-weighted images before contrast agent administration, real subtraction images (first postcontrast subtracted images), virtual low–signal-to-noise ratio (SNR) images, synthetic images generated with approach A, and synthetic images generated with approach B. Images in a 62-year-old woman (patient A) with invasive breast cancer that was stage pT1c, NST grade III, and triple-negative. Contrast enhancement (blue arrows) was accurately reconstructed by both approaches. Images in a 57-year-old woman (patient B) with invasive breast cancer that was stage pT1c, NST grade II, and luminal B. Approach B missed the contrast-enhancing lesion (red arrow). Images in a 64-year-old woman (patient C) who presented for follow-up after resection of invasive breast cancer 1 year prior. Contrast enhancement is both missed (red arrow, fibroadenoma) and falsely synthesized due to scar tissue (yellow arrow) by approach B. Images in a 44-year-old woman (patient D) who presented for screening. No contrast-enhancing lesions were seen. However, approach B synthesized contrast enhancement (yellow arrow).
Discussion
In this study, we demonstrated that generative adversarial networks (GANs) can use simulated low-contrast images to restore the appearance of contrast-enhanced breast MRI scans and that two experienced radiologists could not reliably distinguish synthetic from real images. When the generated and real images were compared side by side, lesion conspicuity on synthetic images was noninferior (mean rating on a five-point scale, 4.9 ± 0.1; P < .001). When GANs used only the precontrast images as inputs, experienced radiologists were more often able to distinguish synthetic from real images, and lesion conspicuity was inferior (mean rating, 1.8 ± 0.1; P > .99).
While using precontrast images alone to generate synthetic postcontrast images was not reliably possible in our study, other groups have shown promising results using unenhanced MRI scans, mostly including diffusion-weighted imaging in the unenhanced protocol. However, in analyzing the utility of unenhanced breast MRI, it is always important to consider the type of cohort included in the respective study. For instance, the study by Chung and Calabrese et al (21) included 96 patients who underwent breast MRI for staging (ie, all had biopsy-proven invasive cancers); thus, the cohort consisted almost entirely of patients with enhancing masses, and these masses were large, with a median size of 24 mm. Our study, by contrast, included mainly patients who underwent breast MRI for screening or for follow-up after breast-conserving surgery. Moreover, we included a more natural distribution of lesion types (ie, also nonmass enhancement). We believe that this is the more important difference, rather than the use or nonuse of diffusion-weighted imaging. Moreover, in the study by Chung and Calabrese et al, and as opposed to our approach A, details of the tumor enhancement patterns were sometimes lost, and false-negative diagnoses did occur.
Another study used diffusion-weighted images only in the radiologist reading sessions. Wang et al (15) used the same type of image data as in our approach to train neural networks to produce synthesized postcontrast breast MRI scans, but during the reading sessions, radiologists were also able to review diffusion-weighted images for lesion detection and characterization. In a test set of 21 patients, they found that synthesized postcontrast images had diagnostic value when read in conjunction with diffusion-weighted images. In their study, use of diffusion-weighted imaging helped avoid false-negative diagnoses due to false-negative synthetic images. Again, we believe that the major differences between this study and our findings may lie in the type of cohort and the choice of enhancing breast lesions in the test sets; when lesions are already visible on precontrast images, whether due to their size or their location within fatty breast tissue, it is predictable that unenhanced imaging will also provide sufficient accuracy in lesion detection. However, when lesions are small, as in our test set, and/or situated in dense fibroglandular tissue, this remains a challenging task.
We have shown that the SNR of MRI scans may be significantly improved by generative models that take the low-SNR image (plus additional unenhanced images) into account. Increasing the SNR of dynamic contrast-enhanced subtraction images is generally beneficial because there is an inherent trade-off between high temporal and high spatial resolution: high-resolution acquisition matrices must be acquired with only one signal average to maintain a sufficiently high temporal resolution. This is frequently associated with borderline SNR (22,23). Hence, a method that helps increase SNR would be of interest for all types of dynamic contrast-enhanced breast MRI, including ultrafast acquisition protocols.
When postcontrast images were exclusively reconstructed from the precontrast images (approach B), false-negative as well as false-positive enhancements occurred: The GAN predicts enhancement for space-occupying lesions that are visible on the non–contrast-enhanced images. Hence, false-positive findings are encountered when a mass is present that is due to a benign condition. Inversely, for the same reason, when a breast cancer is not associated with a mass, approach B produces false-negative findings. We did not encounter such false findings with approach A.
A strength of our study is the fact that the GANs worked well over a wide range of examinations within a 10-year period. Still, our study had several limitations.
First, we did not work with images obtained with a low contrast agent dose, but rather only simulated low-dose images. An image with virtually reduced SNR may not be a true representation of an image obtained with a reduced dose of a contrast agent; the patterns of contrast enhancement are most likely not a scaled version of those on high–contrast agent dose images (24). While it would be desirable to use true low–contrast agent dose breast MRI scans, these images cannot be used for clinical decision-making, and thus, acquiring enough images to appropriately train a GAN would be difficult. Second, the training set used in our study had a fixed noise level. While this was acceptable in our proof-of-principle study, application to real data would benefit from a broader noise distribution during training. Generalizability might be improved by including machine learning diffusion models, which have recently been shown to be capable of denoising images and generating high-quality images (25). Additionally, domain-agnostic generative models that are not trained to perform one specific task but have been pretrained on millions of photographs could be fine-tuned for synthetic breast MRI generation and potentially retain their more general capabilities, which depend less on the choice of one specific noise level (26). Third, the homogeneous noise model applied to images in this study to create virtual low–contrast agent dose images may not fully re-create the noise patterns observed in real MRI scans. Again, the use of real low-dose examinations for training could address this. Fourth, the simulated dose level of 25% was chosen empirically. Follow-up studies should investigate the minimum possible dose that still allows for accurate reconstruction.
In conclusion, our study shows that generative adversarial networks (GANs) can use simulated low-contrast contrast-enhanced images, but not unenhanced images, to recover the full image contrast in contrast-enhanced breast MRI, indicating that GANs may be useful to enable breast MRI with reduced contrast agent dose. Future studies will need to demonstrate the practical feasibility and work on robustness to bring the presented concept to clinical use.
Author Contributions
Author contributions: Guarantors of integrity of entire study, G.M.F., C.K., D.T.; study concepts/study design or data acquisition or data analysis/interpretation, all authors; manuscript drafting or manuscript revision for important intellectual content, all authors; approval of final version of submitted manuscript, all authors; agrees to ensure any questions related to the work are appropriately resolved, all authors; literature research, G.M.F., T.H., T.N., D.T.; clinical studies, L.H., S.N.; experimental studies, G.M.F., E.D., J.N.K., S.N., D.T.; statistical analysis, G.M.F., D.T.; and manuscript editing, G.M.F., S.T.A., F.K., V.S., J.N.K., S.N., T.N., C.K., D.T.
* C.K. and D.T. are co–senior authors.
Supported by the NVIDIA Applied Research Accelerator Program.
Data sharing: Data generated or analyzed during the study are available from the corresponding author upon request.
References
- 1. Breast cancer statistics, 2019. CA Cancer J Clin 2019;69(6):438–451.
- 2. Breast cancer screening in women with extremely dense breasts: recommendations of the European Society of Breast Imaging (EUSOBI). Eur Radiol 2022;32(6):4036–4045.
- 3. Breast cancer screening recommendations inclusive of all women at average risk: update from the ACR and Society of Breast Imaging. J Am Coll Radiol 2021;18(9):1280–1288.
- 4. Breast MRI: state of the art. Radiology 2019;292(3):520–536.
- 5. Evaluation of 3.0-T MRI brain signal after exposure to gadoterate meglumine in women with high breast cancer risk and screening breast MRI. Radiology 2019;293(3):523–530.
- 6. High signal intensity in globus pallidus and dentate nucleus on unenhanced T1-weighted MR images: evaluation of two linear gadolinium-based contrast agents. Radiology 2015;276(3):836–844.
- 7. Effects of gadolinium deposition in the brain on motor or behavioral function: a mouse model. Radiology 2021;301(2):409–416.
- 8. Low-dose imaging technique (LITE) MRI: initial experience in breast imaging. Br J Radiol 2019;92(1103):20190302.
- 9. Gadolinium deposition safety: seeking the patient’s perspective. AJNR Am J Neuroradiol 2020;41(6):944–946.
- 10. Diffusion-weighted MRI for unenhanced breast cancer screening. Radiology 2019;293(3):504–520.
- 11. Preliminary study: breast cancers can be well seen on 3T breast MRI with a half-dose of gadobutrol. Clin Imaging 2019;58:84–89.
- 12. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging 2018;48(2):330–340.
- 13. Deep learning-based methods may minimize GBCA dosage in brain MRI. Eur Radiol 2021;31(9):6419–6428.
- 14. A generic deep learning model for reduced gadolinium dose in contrast-enhanced brain MRI. Magn Reson Med 2021;86(3):1687–1700.
- 15. Synthesizing the first phase of dynamic sequences of breast MRI for enhanced lesion identification. Front Oncol 2021;11:792516.
- 16. Supplemental breast MR imaging screening of women with average risk of breast cancer. Radiology 2017;283(2):361–370.
- 17. High-resolution image synthesis and semantic manipulation with conditional GANs. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018; 8798–8807.
- 18. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004;13(4):600–612.
- 19. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods 2020;17(3):261–272. [Published correction appears in Nat Methods 2020;17(3):352.]
- 20. scikit-image: image processing in Python. PeerJ 2014;2:e453.
- 21. Deep learning to simulate contrast-enhanced breast MRI of invasive breast cancer. Radiology 2022. https://doi.org/10.1148/radiol.213199. Published online November 15, 2022.
- 22. Relationship of temporal resolution to diagnostic performance for dynamic contrast-enhanced MRI of the breast. J Magn Reson Imaging 2009;30(5):999–1004.
- 23. Comparison of magnetic properties of MRI contrast media solutions at different magnetic field strengths. Invest Radiol 2005;40(11):715–724.
- 24. Basic MR relaxation mechanisms and contrast agent design. J Magn Reson Imaging 2015;42(3):545–565.
- 25. High-resolution image synthesis with latent diffusion models. arXiv 2021. Posted December 20, 2021. Last revised April 13, 2022. Accessed September 1, 2022.
- 26. Medical domain knowledge in domain-agnostic generative AI. NPJ Digit Med 2022;5(1):90.
Article History
Received: Sept 2 2022
Revision requested: Oct 11 2022
Revision received: Jan 12 2023
Accepted: Feb 1 2023
Published online: Mar 21 2023