Ultra–Low-Dose 18F-Florbetaben Amyloid PET Imaging Using Deep Learning with Multi-Contrast MRI Inputs

Published Online: https://doi.org/10.1148/radiol.2018180940

Diagnostic-quality amyloid (fluorine 18 [18F]–florbetaben) PET images can be generated using deep learning methods from data acquired with a markedly reduced radiotracer dose.


Purpose

To reduce radiotracer requirements for amyloid PET/MRI without sacrificing diagnostic quality by using deep learning methods.

Materials and Methods

Forty data sets from 39 patients (mean age ± standard deviation [SD], 67 years ± 8), including 16 male patients and 23 female patients (mean age, 66 years ± 6 and 68 years ± 9, respectively), who underwent simultaneous amyloid (fluorine 18 [18F]–florbetaben) PET/MRI examinations were acquired from March 2016 through October 2017 and retrospectively analyzed. One hundredth of the raw list-mode PET data were randomly chosen to simulate a low-dose (1%) acquisition. Convolutional neural networks were implemented with low-dose PET and multiple MR images (PET-plus-MR model) or with low-dose PET alone (PET-only model) as inputs to predict full-dose PET images. The quality of the synthesized images was evaluated, and Bland-Altman plots were used to assess the agreement of regional standardized uptake value ratios (SUVRs) between image types. Two readers scored image quality on a five-point scale (5 = excellent) and determined amyloid status (positive or negative). Statistical analyses were carried out to assess differences in image quality metrics and reader agreement and to determine confidence intervals (CIs) for the reading results.
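The 1% low-dose simulation described above — randomly retaining one hundredth of the recorded list-mode events — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the event list here is a stand-in for real list-mode coincidence data, and all names are hypothetical.

```python
import random

def simulate_low_dose(events, fraction=0.01, seed=0):
    """Randomly retain a fraction of list-mode coincidence events
    to simulate a reduced-dose acquisition (here 1%)."""
    rng = random.Random(seed)
    n_keep = int(len(events) * fraction)
    # Sampling without replacement mimics an acquisition with
    # proportionally fewer recorded true counts.
    return rng.sample(events, n_keep)

full_dose_events = list(range(1_000_000))  # stand-in for recorded events
low_dose_events = simulate_low_dose(full_dose_events)
print(len(low_dose_events))  # 10000
```

Because the undersampled events are then reconstructed with the same pipeline as the full data, the resulting image has realistic low-count noise characteristics rather than artificially added noise.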


Results

The synthesized images (especially those from the PET-plus-MR model) showed marked improvement on all quality metrics compared with the low-dose image. All PET-plus-MR images scored 3 or higher, with proportions of images rated greater than 3 similar to those for the full-dose images (−10% difference [eight of 80 readings], 95% CI: −15%, −5%). Accuracy for amyloid status was high (71 of 80 readings [89%]) and similar to the intrareader reproducibility of full-dose images (73 of 80 [91%]). The PET-plus-MR model also had the smallest mean and variance of the SUVR difference relative to full-dose images.
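The Bland-Altman agreement between regional SUVRs from synthesized and full-dose images reduces to the mean difference (bias) and 95% limits of agreement over paired regional measurements. A minimal sketch, using hypothetical SUVR values (all numbers illustrative, not from the study):

```python
import statistics

def bland_altman(a, b):
    """Return the bias (mean paired difference) and 95% limits of
    agreement between two paired measurement sets, e.g., regional
    SUVRs from synthesized vs. full-dose PET."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical regional SUVRs (cerebellar reference) for one subject.
suvr_synth = [1.42, 1.10, 1.35, 1.21, 1.05]
suvr_full = [1.40, 1.12, 1.33, 1.25, 1.04]
bias, (lo, hi) = bland_altman(suvr_synth, suvr_full)
print(round(bias, 3))
```

A bias near zero with narrow limits of agreement, as reported for the PET-plus-MR model, indicates that the synthesized images preserve the quantitative uptake ratios used for amyloid assessment.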


Conclusion

Simultaneously acquired MRI and ultra–low-dose PET data can be used to synthesize full-dose–like amyloid PET images.

© RSNA, 2018

Online supplemental material is available for this article.

See also the editorial by Catana in this issue.



Article History

Received: Apr 20 2018
Revision requested: Jun 22 2018
Revision received: Oct 5 2018
Accepted: Oct 23 2018
Published online: Dec 11 2018
Published in print: Mar 2019