Comparison of Radiologists and Deep Learning for US Grading of Hepatic Steatosis
Screening for nonalcoholic fatty liver disease (NAFLD) is suboptimal due to the subjective interpretation of US images.
To evaluate the agreement and diagnostic performance of radiologists and a deep learning model in grading hepatic steatosis in NAFLD at US, with biopsy as the reference standard.
Materials and Methods
This retrospective study included patients with NAFLD and control patients without hepatic steatosis who underwent abdominal US and contemporaneous liver biopsy from September 2010 to October 2019. Six readers visually graded steatosis on US images twice, 2 weeks apart. Reader agreement was assessed with use of κ statistics. Three deep learning techniques applied to B-mode US images were used to classify dichotomized steatosis grades. Classification performance of human radiologists and the deep learning model for dichotomized steatosis grades (S0, S1, S2, and S3) was assessed with area under the receiver operating characteristic curve (AUC) on a separate test set.
The study included 199 patients (mean age, 53 years ± 13 [SD]; 101 men). On the test set (n = 52), radiologists had fair interreader agreement (0.34 [95% CI: 0.31, 0.37]) for classifying steatosis grades S0 versus S1 or higher, while AUCs were between 0.49 and 0.84 for radiologists and 0.85 (95% CI: 0.83, 0.87) for the deep learning model. For S0 or S1 versus S2 or S3, radiologists had fair interreader agreement (0.30 [95% CI: 0.27, 0.33]), while AUCs were between 0.57 and 0.76 for radiologists and 0.73 (95% CI: 0.71, 0.75) for the deep learning model. For S2 or lower versus S3, radiologists had fair interreader agreement (0.37 [95% CI: 0.33, 0.40]), while AUCs were between 0.52 and 0.81 for radiologists and 0.67 (95% CI: 0.64, 0.69) for the deep learning model.
Deep learning approaches applied to B-mode US images provided comparable performance with human readers for detection and grading of hepatic steatosis.
Published under a CC BY 4.0 license.
See also the editorial by Tuthill in this issue.
Deep learning methods applied to B-mode US images showed comparable performance with six human readers in grading hepatic steatosis and may be used as a valuable tool for screening patients.
■ In a retrospective study of 199 patients with a spectrum of nonalcoholic fatty liver disease, six radiologists visually graded hepatic steatosis on B-mode US images; interreader agreement was fair in detecting patients with any degree of steatosis (S0 vs S1 or higher, 0.34 [95% CI: 0.31, 0.37]), for differentiating no or mild steatosis from those with moderate or more severe steatosis (S0 or S1 vs S2 or S3, 0.30 [95% CI: 0.27, 0.33]), and for grading patients with severe steatosis against all other grades (S2 or lower vs S3, 0.37 [95% CI: 0.33, 0.40]).
■ Individual readers graded hepatic steatosis, with areas under the receiver operating characteristic curve (AUC) ranging from 0.49 (95% CI: 0.46, 0.51) to 0.84 (95% CI: 0.70, 0.98) for classifying S0 versus S1 or higher, from 0.57 (95% CI: 0.43, 0.71) to 0.76 (95% CI: 0.64, 0.87) for S0 or S1 versus S2 or S3, and from 0.52 (95% CI: 0.43, 0.60) to 0.81 (95% CI: 0.67, 0.95) for S2 or lower versus S3.
■ A deep learning model graded hepatic steatosis with AUCs of 0.85 (95% CI: 0.83, 0.87) for classifying S0 versus S1 or higher, 0.73 (95% CI: 0.71, 0.75) for S0 or S1 versus S2 or S3, and 0.67 (95% CI: 0.64, 0.69) for S2 or lower versus S3.
Nonalcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease, with an estimated global prevalence of 38% (1). NAFLD can progress to nonalcoholic steatohepatitis (NASH) in 16% of patients (1) and to cirrhosis in 3% of patients (2). NAFLD is associated with obesity and type 2 diabetes, which are globally increasing (3). The anticipated increase in NAFLD will have implications on health care systems, as the higher demand for imaging studies will require additional equipment, trained personnel, infrastructure, and likely assistance using artificial intelligence.
Hepatic steatosis is defined by the presence of vacuoles of fat within hepatocytes, which constitutes a hallmark histopathologic feature. Increased grades of steatosis are associated with worse outcomes in liver diseases (4). Currently, liver biopsy and MRI are respectively considered the historical and noninvasive reference standard techniques for assessment of hepatic steatosis (5–7). Both techniques have limitations associated with cost-effectiveness and availability, hampering their use for screening and monitoring of patients (8).
B-mode US is often used for screening and monitoring of hepatic steatosis due to its wide availability, low cost, and lack of radiation (9,10). However, B-mode US has some limitations. In the setting of chronic liver disease, it may be difficult to attribute the echogenicity of the liver to steatosis, fibrosis, or both, given that both conditions affect the brightness on gray-scale images (11). Additionally, image acquisition may be affected by technical sources of variability, such as scanner, probe, and settings (9). Moreover, grading of hepatic steatosis at US is reader-dependent (9), with high intraobserver and interobserver variability (10,12). Improving the ability to objectively grade hepatic steatosis on B-mode US images would provide a scalable option for the diagnosis and follow-up of hepatic steatosis.
The continuous development of artificial intelligence technologies, specifically advances in deep learning in the past decade, has shown potential for diverse applications (13–15). Deep learning techniques have been increasingly popular in medical imaging (16), aiming to automate computer vision tasks such as classification, detection, and segmentation. Deep learning has been recently investigated for steatosis detection and classification (17,18). However, to our knowledge, prior studies have not compared its performance with human readers, including fellowship-trained abdominal radiologists, on the same test data set.
The purpose of this study was to evaluate the classification agreement and diagnostic performance of radiologists and deep learning models applied to B-mode US images for grading hepatic steatosis in NAFLD, using biopsy as the reference standard.
Materials and Methods
The Centre Hospitalier de l’Université de Montréal institutional review board approved this retrospective, cross-sectional, case-control, diagnostic, single-site model creation study. The requirement for informed consent was waived, and authorization was obtained for data access.
Patients were identified in a convenience series among those who underwent B-mode abdominal US and liver biopsy within 1 year of each other for suspected or confirmed chronic liver disease. The study period spanned 9 years (September 2010 to October 2019). Patients were included if they had a histopathologic diagnosis of NAFLD, NASH, or NASH-related cirrhosis and excluded if they had any other cause of chronic liver disease. Control patients were included if no lesion was present or if they had only a minor nonspecific finding (such as cholestasis, dysplasia, nonspecific inflammation, or microgranulomas); accordingly, control patients had less than 5% of hepatocytes containing macrovesicular steatosis and no inflammation or fibrosis. To protect health information, patient identifiers were encrypted using salt and pepper cryptographic hashing, nominal information and headers were cropped from images, and pathology reports were anonymized using randomization algorithms.
The index test was B-mode abdominal US performed with seven different scanners: iU22 (Philips); Aplio 500 and i800 (Canon Medical Systems); Acuson S2000, S3000, and Sequoia (Siemens Healthineers); and LOGIQ E9 (GE HealthCare). Images were acquired according to the institutional clinical US protocol. Image analysts (S.I.C., Y.H.L., C.L.R., and L.P.B., medical students in training with 1–3 years of experience) reviewed images associated with selected pathology reports and excluded images in Doppler mode or dual-display view, as well as images with severe visualization limitations according to the Liver Imaging Reporting and Data System (19).
Liver biopsies were interpreted using the NASH score (20), including steatosis grade, ballooning grade, lobular inflammation grade, and fibrosis stage. Biopsies were performed via a transcutaneous approach with US guidance using 16- or 18-gauge core needles (21). Medical students (S.I.C. and Y.H.L.) extracted steatosis grades from pathology reports (22,23), which expressed steatosis as the percentage of hepatocytes containing fat macrovesicles according to the following ordinal scale: none (<5%, grade S0), mild (5%–33%, grade S1), moderate (34%–66%, grade S2), and severe (>66%, grade S3). In keeping with the radiology common data element for fat fraction, histologic scores were dichotomized as follows: S0 versus S1 or higher; S0 or S1 versus S2 or S3; and S2 or lower versus S3.
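The ordinal scale and the three dichotomizations above can be sketched as follows (a minimal illustration; the function names and dictionary keys are ours, not from the study):

```python
def steatosis_grade(fat_pct):
    """Map percentage of hepatocytes with fat macrovesicles to grade S0-S3."""
    if fat_pct < 5:
        return 0   # S0: none
    if fat_pct <= 33:
        return 1   # S1: mild
    if fat_pct <= 66:
        return 2   # S2: moderate
    return 3       # S3: severe

def dichotomize(grade):
    """Return the three binary labels used for classification in the study."""
    return {
        "S0 vs >=S1": int(grade >= 1),
        "<=S1 vs >=S2": int(grade >= 2),
        "<=S2 vs S3": int(grade == 3),
    }
```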
Reader Subset Selection
Considering all eligible patients, a subset was created for reader assessment by selecting those with at least one good quality image showing (a) the liver and kidney, (b) portal vein, and (c) hepatic vein. Image quality was considered good when there were no or minimal limitations (US visualization score A) or moderate limitations (US visualization score B) according to the Liver Imaging Reporting and Data System US (19). Not all patients in the database had appropriate images meeting the three requirements. The sampling maintained a similar proportion of steatosis grades across the selected patients. The final number of patients included in the study was based on feasibility and convenience for the readers.
To assess intra- and interobserver agreement and diagnostic performance of steatosis grading by humans, six readers independently graded steatosis. The six readers had increasing levels of expertise in abdominal sonography: junior resident, senior resident, medical fellow, junior, midcareer, and senior fellowship-trained abdominal radiologists (J.M., E.A., T.M.A., K.N.V., J.M.L., and J.S.B., respectively a 2nd-year resident, 5th-year resident, 1st-year abdominal fellow, and radiologists with 4, 13, and 29 years of experience). Before image interpretation, a radiologist (A.T., with 18 years of experience) provided instructions on how to grade liver steatosis according to features such as degree of ultrasound attenuation, echogenicity of liver parenchyma relative to the adjacent kidney, visibility and clarity of vessel border definition, and visualization of the diaphragm. The readers graded steatosis on an ordinal scale from S0 to S3. Steatosis grading was based on visual assessment of figure collages presenting three views per patient that included the liver, right kidney, right portal vein, and hepatic veins. To minimize recall bias, readers graded steatosis severity again 2 weeks later, but with figure collages of the same patients presented in a randomized order. During both reading sessions, the readers were blinded to the corresponding pathology reports as well as to the steatosis grading assigned in other readings.
Data Set Selection for Deep Learning
The entire data set was split into training, validation, and test sets. The independent test set contained 15% of the data, while the remainder was used for fivefold cross-validation. For each fold, 15% of the data were used for validation and the remaining 85% for training. The data were partitioned at the patient level to prevent data leakage, and the proportion of steatosis grades was similar in each set. Preprocessing consisted of cropping the display to remove all screen information and resizing images to a predetermined input size with use of bicubic interpolation.
To allow a direct comparison between the deep learning model and readers, a second split of the data set was created. Specifically, the test set was the same subset evaluated by the readers, while the remainder was used for training, without cross-validation. Data partitioning methods are presented in Figure S1.
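A patient-level partition of this kind can be sketched with scikit-learn's group-aware splitters (an illustrative sketch with synthetic patient IDs; the study's actual partitioning code may differ):

```python
import numpy as np
from sklearn.model_selection import GroupKFold, GroupShuffleSplit

# Toy setup: one row per image, grouped by a hypothetical patient ID so that
# all images from a given patient land in the same partition (no data leakage).
rng = np.random.default_rng(0)
patient_ids = rng.integers(0, 40, size=400)    # 400 images, 40 patients
labels = rng.integers(0, 2, size=400)          # dichotomized steatosis grade

# Hold out ~15% of patients as the independent test set.
holdout = GroupShuffleSplit(n_splits=1, test_size=0.15, random_state=0)
train_idx, test_idx = next(holdout.split(patient_ids, labels, groups=patient_ids))
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])

# Fivefold cross-validation on the remainder, still grouped by patient.
cv = GroupKFold(n_splits=5)
for fold_train, fold_val in cv.split(train_idx, groups=patient_ids[train_idx]):
    pass  # one fold's training/validation indices (into train_idx)
```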
Deep Learning Models
Training sets containing images and biopsy scores were used to train binary classifiers for predicting hepatic steatosis grades. To standardize the scale and distribution of the input data, the mean and SD were computed across all images in the training set. Standardization of all sets was performed by subtracting the mean and dividing by the SD of the training set. No data augmentation was used in this study.
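The standardization step can be illustrated as follows (synthetic arrays stand in for the US images; the point is that only the training set contributes the statistics):

```python
import numpy as np

# Synthetic stand-ins for B-mode images (gray-scale values 0-255).
rng = np.random.default_rng(1)
train_images = rng.integers(0, 256, size=(100, 128, 128)).astype(np.float32)
test_images = rng.integers(0, 256, size=(20, 128, 128)).astype(np.float32)

# Mean and SD are computed on the training set only...
mu = train_images.mean()
sigma = train_images.std()

# ...and reused to standardize every split, so no test-set statistics leak in.
train_std = (train_images - mu) / sigma
test_std = (test_images - mu) / sigma
```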
The VGG16 (24) architecture was used for the binary classification task, with transfer learning and dropout layers (Table S1). VGG16 is a widely adopted architecture for classification, featuring a moderate depth and availability of pretrained weights on the ImageNet data set (24). VGG16 performance was compared with that of ResNet-50 and Inception-v3.
During fivefold cross-validation, after each training epoch, the validation set provided an evaluation of the models. Hyperparameter tuning and configuration changes were evaluated in the cross-validation setting by varying input size, batch size, optimizer, learning rate, loss function, and dropout to determine the optimal settings (Table S2). To monitor training performance, the lowest validation loss was used, and the epoch with the lowest average loss across the five folds was selected. Subsequently, only the best-performing configuration was used for the test sets.
All layers of the network were allowed to update during training. Input size was 128 × 128, and batch size was 32. Stochastic gradient descent with a learning rate of 0.0001 was used as the optimizer. Binary cross-entropy was selected as the loss function, and the dropout rate was set at 0.5 on the fully connected layers before the softmax.
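Under these reported settings, the classifier could be assembled roughly as below (a sketch in tf.keras; the 256-unit width of the dense head is our assumption, as the exact head architecture is given in Table S1):

```python
import tensorflow as tf

def build_classifier(input_size=128, weights="imagenet"):
    """Binary steatosis classifier: VGG16 backbone fine-tuned end to end,
    with a dropout-regularized dense head (head width is an assumption)."""
    base = tf.keras.applications.VGG16(
        weights=weights, include_top=False,
        input_shape=(input_size, input_size, 3))
    base.trainable = True  # all layers allowed to update during training

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),            # dropout before the softmax
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4),
        loss="binary_crossentropy",              # on one-hot binary labels
        metrics=[tf.keras.metrics.AUC(name="auc")])
    return model
```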
Class activation mapping (25) was implemented to investigate the influence of areas of an image on the output of a trained classifier. A weighted activation map was generated for any given image, allowing for localizable deep features that can be interpreted to better understand the class identified by the trained model.
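Classic class activation mapping, as in reference 25, reduces to a channel-weighted sum of the last convolutional feature maps; a minimal sketch with synthetic arrays (not the study's implementation):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Classic CAM (ref 25): weight each channel of the last convolutional
    feature maps by the dense-layer weight of the target class, sum over
    channels, and normalize for overlay on the input image."""
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))
    cam = np.maximum(cam, 0)          # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # scale to [0, 1]
    return cam

# Illustrative shapes: VGG16's last block yields 4 x 4 x 512 maps at 128 x 128 input.
rng = np.random.default_rng(2)
fmaps = rng.random((4, 4, 512))
weights = rng.standard_normal(512)
cam = class_activation_map(fmaps, weights)
```

The low-resolution map is then upsampled to the image size and displayed as a heat map over the B-mode image.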
Software was developed in Python (version 3.7; Python Software Foundation) using NumPy (version 1.19.2), Pillow (version 6.2.0), Pandas (version 0.25.1), scikit-learn (version 1.0.2), Keras (version 2.2.4), and TensorFlow (version 1.14.0) libraries. The code is publicly available at https://github.com/LCTI-AnTang/binary_steatosis_classifier.
Statistical Analysis
The kappa coefficient, κ, was used to measure intra- and interreader agreement. Multireader Fleiss κ was used for interreader agreement between multiple readers, and Cohen κ was used for paired comparisons. A κ value less than 0.21 was considered poor agreement; 0.21–0.40, fair agreement; 0.41–0.60, moderate agreement; 0.61–0.80, substantial agreement; and 0.81–1.00, excellent agreement. Linear regression was used to investigate the relationship between the readers' years of experience and their accuracy and intrareader agreement.
Steatosis grade predictions of readers and deep learning models were compared with reference standard values using sensitivity, specificity, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC). AUC was the preferred metric for comparisons, as it is less sensitive to class imbalance and allows comparisons across different thresholds. Metrics for deep learning architectures were calculated at the cutoff point with the highest Youden index on the receiver operating characteristic curve. For the independent test set and the reader set, results were obtained for all images in single tests.
All statistical analyses were performed in Python using the SciPy package (version 1.3.1), and 95% CIs for AUC were calculated with the DeLong method. To compare the results of the deep learning model and the six readers, DeLong tests were conducted, pairing each reading session with the model. The power of each test was evaluated using the fixed sample size and the effect size of the comparison. Spearman rank correlation was used to investigate the potential confounding effects of fibrosis, inflammation, and ballooning on steatosis classification. The level of significance was set at P < .05 for all tests, and Bonferroni correction was applied for the comparisons between the deep learning model and the reading sessions.
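Several of these metrics can be illustrated briefly with scikit-learn (hypothetical labels and scores; Cohen κ for a paired comparison, AUC, and the Youden-index cutoff):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score, roc_curve

# Hypothetical dichotomized grades: biopsy reference and two reading sessions.
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])
read_1 = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 0])
read_2 = np.array([0, 1, 1, 1, 1, 0, 1, 0, 0, 0])

# Cohen kappa for a paired (intra- or interreader) comparison.
kappa = cohen_kappa_score(read_1, read_2)

# Model scores evaluated with AUC; operating point chosen by the Youden index.
scores = np.array([0.1, 0.3, 0.6, 0.8, 0.7, 0.4, 0.9, 0.2, 0.55, 0.15])
auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
youden = tpr - fpr                       # J = sensitivity + specificity - 1
best_threshold = thresholds[np.argmax(youden)]
```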
Results
Figure 1 shows the study flowchart. The initial database contained 3758 biopsy reports. Ninety-four patients without US examinations linked to the reports or with a delay longer than 1 year between the biopsy and US examination were excluded, and 3465 reports were excluded for reporting a pathologic abnormality other than NAFLD or NASH. Table 1 summarizes patient characteristics. Table S3 presents the steatosis distributions across the sets, and Table S4 describes the patient distributions across the US scanners. The data set for this study consisted of 7529 B-mode US images from 142 patients (mean age, 53 years ± 13 [SD]; 70 men) with hepatic steatosis and 57 control patients (54 years ± 13; 31 men). The data set included 57 patients and 966 images for grade S0, 87 patients and 3312 images for grade S1, 25 patients and 1683 images for grade S2, and 30 patients and 1568 images for grade S3. The majority of the images (5765 of 7529 [77%]) were collected using transducers C5–1, PVI-475BX, or PVT-375BT, which have frequency ranges from 1.0 to 6.2 MHz. Most images in the data set were acquired using frequencies of 2.5 MHz, 4.0 MHz, or 4.5 MHz. The average body mass index (calculated as patient weight in kilograms divided by patient height in meters squared) was 30.5 ± 7.8, and 44% of the patients (87 of 199) had mild steatosis. Figure 2 shows representative B-mode images of different steatosis grades. The subset selected for reader assessment contained 52 patients, including 12 patients with grade S0, 17 patients with grade S1, 11 patients with grade S2, and 12 patients with grade S3.
Intra- and Interreader Agreement
Table 2 shows intra- and interreader agreements for classification of visual grading of steatosis. Based on κ values, intrareader and interreader agreements were moderate (0.45 [95% CI: 0.32, 0.59]) and fair (0.34 [95% CI: 0.31, 0.37]), respectively, for classifying steatosis as S0 versus S1 or higher; moderate (0.56 [95% CI: 0.45, 0.66]) and fair (0.30 [95% CI: 0.27, 0.33]) for S0 or S1 versus S2 or S3; and moderate (0.44 [95% CI: 0.28, 0.60]) and fair (0.37 [95% CI: 0.33, 0.40]) for S2 or lower versus S3. Table S5 shows the κ values for all reading pairs. The impact of years of experience on intrareader agreement was weak (R2 = 0.09) (Appendix S1). Across all steatosis grades, the mean interreader κ was 0.25 (95% CI: 0.19, 0.32) for the two residents and 0.17 (95% CI: 0.10, 0.23) for the three radiologists. The mean interreader κ was 0.24 (95% CI: 0.21, 0.27) between the medical fellow and radiologists and 0.19 (95% CI: 0.15, 0.23) between the medical fellow and residents. There was no relationship between the level of training and interreader agreements.
Diagnostic Performance of Readers
Table 3 shows the diagnostic performance of the readers. For classifying S0 versus S1 or higher, readers had AUCs ranging from 0.49 (95% CI: 0.46, 0.51) to 0.84 (95% CI: 0.70, 0.98). For classifying S0 or S1 versus S2 or S3, readers had AUCs ranging from 0.57 (95% CI: 0.43, 0.71) to 0.76 (95% CI: 0.64, 0.87). For classifying S2 or lower versus S3, readers had AUCs ranging from 0.52 (95% CI: 0.43, 0.60) to 0.81 (95% CI: 0.67, 0.95). Table S6 and Figure S2 show detailed results for each reader. There was no strong relationship between the level of training and the accuracy of readers (R2 < 0.15).
Diagnostic Performance of Deep Learning
In this study, VGG16 outperformed ResNet-50 and Inception-v3 in terms of AUC. The deep learning model performance was assessed on the independent test set and on the reader set. Detailed results for both test splits are provided in Table 4. Figure 3 shows representative class activation maps generated using trained models on images with different outcomes on the three dichotomized classification tasks (Appendix S1), and receiver operating characteristic curves are given in Figure 4. Confusion matrices are presented in Figure S3.
Comparing the results of the reading sessions and deep learning (Tables 3, 4), in S0 versus S1 or higher, the model significantly outperformed 11 of 12 readings (P < .001 for all), except for the highest-performing reading (P = .84). In S0 or S1 versus S2 or S3, the model outperformed only the lowest-performing reading (P = .03) when not adjusted for multiple comparisons. When Bonferroni correction was applied for 12 comparisons, no statistical difference was observed. In S2 or lower versus S3, the model outperformed only one reading (P = .002; P ≥ .04 for all remaining comparisons). For all comparisons with significant difference, the power of the test was higher than 80%. Secondary analysis was performed to observe the influence of time delays between US examinations and biopsies and is presented in Table S7.
Univariable correlation coefficients demonstrated that steatosis classification was correlated with fibrosis stage in S2 or lower versus S3 (P < .001), but not for S0 versus S1 or higher (P = .40) or for S0 or S1 versus S2 or S3 (P = .61). Unweighted sum of inflammation and ballooning was correlated with steatosis classification for S2 or lower versus S3 (P = .01) and for S0 versus S1 or higher (P = .02) while not showing any correlation in S0 or S1 versus S2 or S3 (P = .48). The detailed results for confounding variables are presented in Table S8.
Discussion
This study evaluated intra- and interreader agreement and diagnostic performance among six readers with different levels of training and performed a head-to-head comparison with a deep learning model in grading hepatic steatosis on B-mode US images in patients with a spectrum of nonalcoholic fatty liver disease and control patients without steatosis, using histopathologic findings as the reference standard for both groups. For S0 versus S1 or higher, the readers achieved fair interreader agreement (κ = 0.34) and areas under the receiver operating characteristic curve (AUCs) between 0.49 and 0.84, while the deep learning model achieved an AUC of 0.85, which was better than 11 of 12 readings (P < .001). For S0 or S1 versus S2 or S3, the readers achieved fair interreader agreement (κ = 0.30) and AUCs between 0.57 and 0.76, while the deep learning model achieved an AUC of 0.73, with no statistically significant difference. For S2 or lower versus S3, the readers achieved fair interreader agreement (κ = 0.37) and AUCs between 0.52 and 0.81, while the deep learning model achieved an AUC of 0.67, which was better than one reading (P = .002). For human visual classification, sensitivity was highest when distinguishing patients with steatosis from those without, which could be helpful in the clinical setting for screening patients with any degree of fat.
Automatic deep learning classification of steatosis grades at B-mode US achieved a higher AUC for detection of steatosis than for grading severity of steatosis (ie, S0 or S1 vs S2 or S3 and S2 or lower vs S3). This is consistent with the prior literature reporting high performance for detection of hepatic steatosis (17,18). Using US images obtained with a single system and using biopsy reports as the reference standard, Byra et al (17) reported an AUC of 0.98 in 55 patients with obesity, and using visual assessment as reference, Cao et al (18) reported an AUC of 0.93 in 240 patients. Direct comparison with other studies is limited owing to the differences in the data sets, although the results for steatosis detection by the deep learning model in the two separate test sets in our study (AUC, 0.85 for the reader set and 0.98 for the independent test set) are similar to those reported in the literature.
While there was substantial variation between readers, there was no evidence of a difference in performance across levels of experience. Prior studies reporting κ for readers assessing fatty liver at B-mode US addressed either detection or grading of steatosis. For detection of steatosis, prior studies reported intrareader κ values ranging from 0.54 to 0.89 (12,26–28) and interreader κ from 0.40 to 1.00 (12,27,29–31). For grading of steatosis, prior studies reported intrareader κ from 0.58 to 0.68 (12,26) and interreader κ from 0.49 to 0.93 (12,32–34). However, most of these studies used readers with similar levels of expertise; the single exception (28) was also the only study with more than three readers.
Our study presents several advantages over the current literature, including the use of images from multiple equipment types and biopsy reports as the reference standard. Additionally, model results are compared with various human readers. A recent study used radiofrequency-based techniques for steatosis detection and grading (35), but our proposed method uses existing B-mode images from various manufacturers and produces real-time predictions without the need for dedicated software or calibration phantoms.
Our study had limitations. First, the potential confounding effect of fibrosis, inflammation, and ballooning on the performance of steatosis classification could not be fully explored given the available data. In our results, the relationship between steatosis classification and the remaining features is inconsistent across different grade comparisons. Therefore, there is no evidence to suggest that these effects had a significant influence on the overall results. Second, we did not address the NASH diagnosis, as we were only focusing on steatosis. Third, we assessed the performance of our model on a split test set from a single-center study, which limits generalizability, as the data set is reflective of the patient population at our institution and limited by the eligibility criteria. Finally, readers did not have access to image settings such as transmit frequency, harmonics, compounding, and time gain compensation, which could hide the influence of attenuation.
In conclusion, this cross-sectional, case-control, single-center study demonstrates that deep learning approaches may provide similar classification accuracies compared with human readers for steatosis detection and grading. The performance of our model suggests that deep learning may be used for opportunistic screening of steatosis with use of B-mode US across scanners from different manufacturers or even for epidemiologic studies at a populational level if deployed on large regional imaging repositories. These results support the need to conduct further multicenter studies to validate deep learning models for hepatic steatosis screening with use of B-mode US.
Author contributions: Guarantors of integrity of entire study, P.V., E.A., J.S.B., B.T., A.T.; study concepts/study design or data acquisition or data analysis/interpretation, all authors; manuscript drafting or manuscript revision for important intellectual content, all authors; approval of final version of submitted manuscript, all authors; agrees to ensure any questions related to the work are appropriately resolved, all authors; literature research, P.V., S.I.C., C.L.R., L.P.B., Y.H.L., E.A., J.S.B., G.W., G.C., A.T.; clinical studies, P.V., S.I.C., C.L.R., Y.H.L., E.A., J.M., K.N.V., J.M.L., J.S.B., B.N.N., M.C., A.T.; experimental studies, P.V., L.P.B., E.A., T.M.A., H.L., S.G., B.T.; statistical analysis, P.V., L.P.B., E.A., J.M., J.S.B., B.T., E.B., G.W., M.C., A.T.; and manuscript editing, P.V., S.I.C., P.B., C.L.R., E.A., K.N.V., J.S.B., E.M., S.K., B.N.N., I.R., G.W., M.C., G.C., A.T.
Fonds de Recherche du Québec–Santé (FRQ-S) and Fondation de l’Association des Radiologistes du Québec (FARQ) Clinical Research Scholarship–Senior Salary Award (FRQS-ARQ no. 298509) to A.T. Institute of Data Valorization (IVADO) operating grant awarded to A.T.
- 1. The global epidemiology of nonalcoholic fatty liver disease (NAFLD) and nonalcoholic steatohepatitis (NASH): a systematic review. Hepatology 2023;77(4):1335–1347.
- 2. Mechanisms and disease consequences of nonalcoholic fatty liver disease. Cell 2021;184(10):2537–2564.
- 3. Modeling NAFLD disease burden in China, France, Germany, Italy, Japan, Spain, United Kingdom, and United States for the period 2016-2030. J Hepatol 2018;69(4):896–904.
- 4. MRI steatosis grading: development and initial validation of a color mapping system. AJR Am J Roentgenol 2012;198(3):582–588.
- 5. Diagnostic accuracy of hepatic proton density fat fraction measured by magnetic resonance imaging for the evaluation of liver steatosis with histology as reference standard: a meta-analysis. Eur Radiol 2019;29(10):5180–5189.
- 6. Linearity, bias, and precision of hepatic proton density fat fraction measurements by using MR imaging: a meta-analysis. Radiology 2018;286(2):486–498.
- 7. Accuracy of MR imaging-estimated proton density fat fraction for classification of dichotomized histologic steatosis grades in nonalcoholic fatty liver disease. Radiology 2015;274(2):416–425.
- 8. Cost-utility analysis of nonalcoholic steatohepatitis screening. Eur Radiol 2015;25(11):3282–3294.
- 9. Sonographic diagnosis of diffuse liver disease. Semin Ultrasound CT MR 1995;16(1):8–15.
- 10. Diagnostic accuracy and reliability of ultrasonography for the detection of fatty liver: a meta-analysis. Hepatology 2011;54(3):1082–1090.
- 11. Pulse-echo quantitative US biomarkers for liver steatosis: toward technical standardization. Radiology 2022;305(2):265–276.
- 12. Interobserver and intraobserver variability in the sonographic assessment of fatty liver. AJR Am J Roentgenol 2007;189(6):W320–W323.
- 13. Deep learning: a primer for radiologists. RadioGraphics 2017;37(7):2113–2131.
- 14. Deep learning: an update for radiologists. RadioGraphics 2021;41(5):1427–1445.
- 15. The need for medical artificial intelligence that incorporates prior images. Radiology 2022;304(2):283–288.
- 16. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60–88.
- 17. Transfer learning with deep convolutional neural network for liver steatosis assessment in ultrasound images. Int J CARS 2018;13(12):1895–1903.
- 18. Application of deep learning in quantitative analysis of 2-dimensional ultrasound imaging of nonalcoholic fatty liver disease. J Ultrasound Med 2020;39(1):51–59.
- 19. American College of Radiology Ultrasound Liver Imaging and Reporting Data System (US LI-RADS) Working Group. US LI-RADS: Ultrasound Liver Imaging Reporting and Data System for screening and surveillance of hepatocellular carcinoma. Abdom Radiol (NY) 2018;43(1):41–55.
- 20. Medical liver biopsy: background, indications, procedure and histopathology. Frontline Gastroenterol 2020;11(1):40–47.
- 21. Guidelines on the use of liver biopsy in clinical practice from the British Society of Gastroenterology, the Royal College of Radiologists and the Royal College of Pathology. Gut 2020;69(8):1382–1403.
- 22. Histopathological algorithm and scoring system for evaluation of liver lesions in morbidly obese patients. Hepatology 2012;56(5):1751–1759.
- 23. Design and validation of a histological scoring system for nonalcoholic fatty liver disease. Hepatology 2005;41(6):1313–1321.
- 24. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 1409.1556 [preprint] https://arxiv.org/abs/1409.1556. Posted September 4, 2014. Accessed January 2023.
- 25. Learning Deep Features for Discriminative Localization. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016; 2921–2929.
- 26. Fatty liver is an integral feature of familial combined hyperlipidaemia: relationship with fat distribution and plasma lipids. Clin Sci (Lond) 2007;112(2):123–130.
- 27. Diagnostic value of ultrasonographic examination for nonalcoholic steatohepatitis in morbidly obese patients undergoing laparoscopic bariatric surgery. Obes Surg 2007;17(1):45–56.
- 28. Bedside ultrasound can predict nonalcoholic fatty liver disease in the hands of clinicians using a prototype image. Dig Dis Sci 2006;51(5):982–985.
- 29. Higher concentrations of alanine aminotransferase within the reference interval predict nonalcoholic fatty liver disease. Clin Chem 2007;53(4):686–692.
- 30. Association between low thigh fat and non-alcoholic fatty liver disease. J Gastroenterol Hepatol 2008;23(6):888–893.
- 31. Mesenteric fat thickness as an independent determinant of fatty liver. Int J Obes 2006;30(5):787–793.
- 32. Hepatic steatosis in obese Chinese children. Int J Obes 2004;28(10):1257–1263.
- 33. Hepatic MRI for fat quantitation: its relationship to fat morphology, diagnosis, and ultrasound. J Clin Gastroenterol 2005;39(7):619–625.
- 34. Non-esterified fatty acid concentrations are independently associated with hepatic steatosis in obese subjects. Diabetologia 2006;49(1):141–148.
- 35. Quantitative ultrasound imaging of soft biological tissues: a primer for radiologists and medical physicists. Insights Imaging 2021;12(1):127.
Article History
Received: Mar 15 2023
Revision requested: May 15 2023
Revision received: Aug 13 2023
Accepted: Aug 22 2023
Published online: Oct 03 2023