Understanding Bibliometric Parameters and Analysis
Abstract
Bibliometric parameters have become an important part of modern assessment of academic productivity. These parameters exist for the purpose of evaluating authors (publication count, citation count, h-index, m-quotient, hc-index, e-index, g-index, i-10 [i-n] index) and journals (impact factor, Eigenfactor, article influence score, SCImago journal rank, source-normalized impact per paper). Although in recent years there has been a proliferation of bibliometric parameters, the true meaning and appropriate use of these parameters are generally not well understood. Effective use of existing and emerging bibliometric tools can aid in assessment of academic productivity, including readiness for promotions and other awards. However, if not properly understood, the data can be misinterpreted and may be subject to manipulation. Familiarity with bibliometric parameters will aid in their effective implementation in the review of authors—whether individuals or groups—and journals, as well as their possible use in the promotions review process, maximizing the effectiveness of bibliometric analysis.
©RSNA, 2015
Introduction
Bibliometrics is a field that uses quantitative means to evaluate academic productivity. This quantitative analysis of scientific literature is rapidly changing with the creation of new evaluation tools, parameters, and normative data. The use of bibliometrics in academic medicine is in a relative state of infancy.
The most widely known bibliometrics parameter for evaluation of individual authors is publication count, followed by citation count. Fueled by the imperfections of these data, as well as the recent availability of digital resources for calculating more complex parameters, a new generation of bibliometric tools has arisen (Table 1). Perhaps the h-index (1), which has recently undergone numerous modifications (2–5), can be considered the prototype of advanced bibliometric parameters.
There are several quantitative parameters that can help measure the academic strength of scientific journals (Table 2), the most widely known of which is the 2-year journal impact factor (6). Recently developed tools such as the Eigenfactor provide more complex analysis of journals (7). Each of these parameters has its own strengths and weaknesses, from both theoretic and practical standpoints.
Bibliometric parameters are playing an increasing role in the evaluation of academic productivity, readiness for promotion and/or tenure (8), and scores on grant applications (9); journals with higher impact factors are also more likely to receive submissions of high-quality manuscripts. A variety of databases exist for obtaining the information required to calculate bibliometric parameters (Table 3). These databases have sparked recent interest; however, bibliometrics is not a new concept, and important analyses have been performed in the past in the field of radiology (10).
The potential utility of bibliometrics may be apparent to those in academic medicine, whether (a) individuals seeking promotion or grants or (b) departments or institutions looking to assess the performance of their current and potential faculty. However, bibliometric parameters are also relevant for physicians outside academic medicine, as well as nonphysicians. With the current push for evidence-based medicine, evaluating the medical literature will become increasingly important for physicians in private practice. Bibliometrics can aid in determining which articles, journals, and authors will be the most helpful in assessing the quality of a given product; however, an accurate understanding of these tools is required for their effective use.
In this article, we discuss bibliometric parameters in terms of author evaluation (publication count, citation count, h-index, m-quotient, hc-index, e-index, g-index, and i-10 [i-n] index) and journal evaluation (impact factor, Eigenfactor, AIS, SJR, and SNIP), as well as databases and tools for calculating these parameters.
Evaluation of Authors
Publication Count
Calculating the publication count is the simplest bibliometric parameter. Typically, only peer-reviewed articles in journals indexed in a database such as PubMed or Index Medicus are considered. Book chapters are not often considered. Case reports are included, whereas editorials and opinion pieces are not typically included. This parameter, although easy to calculate, does not take into consideration the position of authorship of an author or the quality of the journal. Articles published in “throwaway” journals are counted equally with those in more academically rigorous journals. With this metric, no distinction is made between original groundbreaking research and less impactful articles such as case reports.
Citation Count
The citation count for an article is a method of giving weight to articles that have influenced subsequent publications. The limitation of this technique is that a widely read educational or informative article may not be cited, despite having a positive impact on the dissemination of information. A case report that provides guidance to others who encounter the same rare entity may be cited much less frequently than other types of articles. In addition, citation count does not differentiate between positive and negative citations. Thus, an article cited in a critical fashion could still influence the citation count positively. Self-citations by an author are also counted. Self-citations may be appropriate when an author is building on the research performed by his or her research group; however, self-citations of uncertain relevance to the newly published work may be considered dishonest.
h-Index
Recognition of limitations in the utility of publication count and citation count led to development of a metric that attempts to quantify both publication count and citation count in a single metric that is less prone to manipulation. The h-index is a dimensionless number that represents an attempt to describe the quantity and quality or impact of a given author’s academic publications and is based on a set of the author’s most frequently cited articles.
The h-index indicates that a given author has had h articles published, each of which has h or more citations (Fig 1a) (1). Additional publications will not increase the h-index until and unless they are cited an appropriate number of times.
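The definition above translates directly into a short computation: sort per-article citation counts in descending order and find the largest rank at which the citation count still meets or exceeds the rank. A minimal Python sketch (the function name is ours):

```python
def h_index(citations):
    """Return the largest h such that h articles have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break  # counts are sorted, so no later rank can qualify
    return h

# An author with articles cited 25, 8, 5, 3, and 3 times has an h-index of 3:
# three articles have at least 3 citations each, but not four with at least 4.
print(h_index([25, 8, 5, 3, 3]))  # 3
```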
Figure 1a (a) Graph shows the citation count for selected publications (in order from highest to lowest number of citations). The curve crosses the identity line at publication number h, so that the gray square has an area of h². The area in the curve above this square (e²) represents excess citations in the top h articles that do not contribute to the h-index. (b) Graph shows that the g-index uses all citations up to publication number g; therefore, there are no excess citations. By definition, g ≥ h. (Fig 1 adapted, with permission, from reference 5.)

Figure 1b
Citations within the h-index do include self-citations; however, owing to the nature of the metric, it is difficult for most authors to bring about a significant change in their h-index simply through self-citation.
In recent years, extensive work has been performed to determine, on a specialty-by-specialty basis, normative data for h-index values at different levels of academic rank.
In multiple fields of medicine, a progressive increase in the h-index is seen for academic physicians as they move from instructor to assistant professor to associate professor to full professor, although the absolute values vary by specialty.
m-Quotient
The m-quotient (or m-index) is a variant of the h-index and is defined as an individual’s h-index divided by the number of years since his or her first publication (3). This value represents the average amount the author’s h-index has increased per year over his or her publishing career and can help differentiate between two authors with similar h-indexes but different career lengths. An h-index of 12 for an individual 10 years into his or her career (m-quotient of 1.2) may be considered as more substantial than an h-index of 12 for an individual 24 years into his or her career (m-quotient of 0.5). However, use of the m-quotient can penalize individuals who demonstrate research productivity early in their career (eg, during undergraduate school) followed by years during which their focus is on education (ie, medical school and residency training).
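The m-quotient is a simple ratio, reproduced here as a sketch (the function name is ours) with the worked example from the text:

```python
def m_quotient(h_index, career_years):
    """m-quotient: h-index divided by years since the author's first publication."""
    return h_index / career_years

# Two authors with the same h-index but different career lengths:
print(m_quotient(12, 10))  # 1.2
print(m_quotient(12, 24))  # 0.5
```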
hc-Index
The contemporary index (hc) is a variant of the h-index that time-weights citations. It is derived by multiplying the citation count for an article by four, then dividing by the number of years since publication (with an article published in the current year counted as 1 year old). Thus, the citation count for an article published in the current year would simply be multiplied by four, whereas that for an article published 4 years ago would be multiplied by one (four divided by four) and that for an article published 6 years ago would be multiplied by four and divided by six. In this way, older published articles are given less weight, and emphasis is placed on articles that are more recent (4).
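Using the weighting just described, the hc-index can be sketched in Python as follows (the function name is ours, and treating a current-year article as 1 year old is our reading of the weighting scheme):

```python
def hc_index(articles):
    """Contemporary h-index: time-weight each article's citations by 4/age,
    then take the h-index of the weighted scores. `articles` is a list of
    (citation_count, years_since_publication) pairs; a current-year article
    is treated as 1 year old."""
    scores = sorted((4 * c / max(age, 1) for c, age in articles), reverse=True)
    hc = 0
    for rank, s in enumerate(scores, start=1):
        if s >= rank:
            hc = rank
        else:
            break
    return hc

# Five articles, each cited 5 times but all 10 years old: the plain h-index
# would be 5, but each weighted score is 4 * 5 / 10 = 2, so hc = 2.
print(hc_index([(5, 10)] * 5))  # 2
```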
e-Index
Given that incremental increases in the h-index become progressively more difficult to achieve, additional citations of articles that already constitute the h-index do not count toward an individual’s h-index. A recently created adjunct parameter called the e-index captures the excess citations of the top h articles that do not count toward the h-index (5). For instance, for an author with an h-index of 10, e² is the total number of citations beyond the first 10 received by each of the 10 most frequently cited articles. In other words, e is the square root of all excess citations for articles that constitute the h-index. Alternatively, if one identifies the set of articles that constitute the h-index and adds up their citations, h² of those citations are used for the h-index, and the remainder are considered excess citations (e²) (Fig 1a). Thus, e = √(total citations of the top h articles − h²). It is probably easier to understand the equation after learning the concept than the other way around. The e-index need not be a whole number and will typically decrease when the h-index increases; however, the e-index will increase during periods when the h-index is stable.
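The relationship e = √(excess citations) can be sketched directly (the function name is ours):

```python
import math

def e_index(citations, h):
    """e = sqrt(total citations of the top h articles minus h^2)."""
    top_h = sorted(citations, reverse=True)[:h]
    return math.sqrt(sum(top_h) - h * h)

# Citation counts 10, 8, 6, 3 give h = 3; the top 3 articles hold 24 citations,
# of which h^2 = 9 count toward the h-index, leaving 15 excess citations.
print(e_index([10, 8, 6, 3], h=3))  # sqrt(15) ≈ 3.873
```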
g-Index
The g-index is defined as the largest number g such that the top g articles together have received g² or more citations (equivalently, the top g articles are cited an average of at least g times each). Whereas the e-index attempts to complement the h-index by addressing excess citations beyond h (which are ignored by the h-index), the g-index includes all citations for the top g articles (2) (Fig 1b).
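A sketch of the g-index (the function name is ours; following one common convention, g is capped here at the number of published articles):

```python
def g_index(citations):
    """Largest g such that the top g articles together have at least g^2
    citations (capped at the number of articles)."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c  # cumulative citations of the top `rank` articles
        if total >= rank * rank:
            g = rank
    return g

# One blockbuster article lifts the g-index well above the h-index:
# counts [30, 4, 3, 2, 1] give h = 3, but 30+4+3+2+1 = 40 >= 25, so g = 5.
print(g_index([30, 4, 3, 2, 1]))  # 5
```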
i-10 (i-n) Index
The i-10 index is the number of publications that have been cited 10 or more times and represents an attempt to sift through unsubstantial work (ie, throwaway articles). An i-n index could accordingly be calculated for any n, such as i-5 (which may be more helpful in evaluating more junior authors) or i-100 (which could be applied to compare entire departments).
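The i-n index is a simple threshold count, sketched here (the function name is ours):

```python
def i_n_index(citations, n=10):
    """Number of publications cited n or more times (Google Scholar's i10 when n=10)."""
    return sum(1 for c in citations if c >= n)

cites = [120, 45, 12, 10, 9, 3, 0]
print(i_n_index(cites))        # i-10 = 4
print(i_n_index(cites, n=5))   # i-5 = 5
```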
Evaluation of Journals
Impact Factor
The impact factor was developed by Eugene Garfield and the Institute for Scientific Information (acquired by Thomson Scientific and Healthcare in 1992).
The impact factor divides the number of indexed citations a journal receives during a given year (the census period) by the number of “citable” articles the journal published during a preceding target window. By convention, the impact factor usually uses the previous 2 years of publication as its target window, although a 5-year impact factor is also used (6). Because different sciences can have vastly different publication and citation rates, it is inappropriate to use the impact factor to compare journals in different fields, and it is even less suited to comparing individual researchers. Instead, the impact factor is best used to compare different journals from the same scientific discipline.
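The arithmetic can be illustrated with a sketch under hypothetical numbers (the function name and all values are ours):

```python
def impact_factor(citations_to_prior_window, citable_items_in_window):
    """2-year impact factor: citations received this year to articles
    published in the previous 2 years, divided by the number of citable
    items published in those 2 years."""
    return citations_to_prior_window / citable_items_in_window

# Hypothetical journal: 600 citations in 2014 to its 2012-2013 articles,
# with 240 citable items published in 2012-2013.
print(impact_factor(600, 240))  # 2.5
```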
The impact factor has been published since 1972 and has been widely used to determine the importance of a journal. This prominence has led to many criticisms of the impact factor, as well as to certain changes in editorial policy. For example, the definition of what constitutes a citable article in a journal can be manipulated to decrease the denominator and thereby increase the impact factor.
One widely cited article can artificially inflate a journal’s impact factor, even if it is a controversial article cited in criticism. Case reports are often cited only infrequently, which has contributed to many journals opting to discontinue publishing this type of article for fear of negatively affecting the impact factor.
Eigenfactor
The Eigenfactor was developed by researchers at the University of Washington and the University of California at Santa Barbara (7,22). The Eigenfactor differs from the impact factor in two important ways. First, citations from more widely read journals, as determined by the citing journal’s Eigenfactor score, are given greater weight, thereby limiting the ability to use articles in low-impact journals as a means of garnering excess citations. Second, although there is a 1-year census period (as with the impact factor), the target window is 5 years. The mathematic algorithm used to calculate the Eigenfactor is much more robust and less subject to rapid fluctuations or manipulation.
The Eigenfactor gives increased weight to citations from more widely read journals by using the concept of eigenvector centrality, which measures the importance of a specific “node” to a network. On the Internet, a more highly trafficked website would receive a higher eigenvector centrality score than a less highly trafficked website. Similarly, the Eigenfactor calculates which journals (nodes) receive more citations (higher eigenvector centrality score). Citations from more active journals are given more weight than citations from less active journals.
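The underlying idea can be illustrated with a toy computation; this is a sketch of eigenvector centrality via power iteration on a column-normalized citation matrix, not the full Eigenfactor algorithm (which, among other refinements, excludes journal self-citations and scales for article counts). All numbers below are hypothetical:

```python
# C[i][j] = citations from journal j to journal i (three hypothetical journals).
C = [
    [0, 5, 1],
    [3, 0, 1],
    [1, 1, 0],
]
n = 3

# Column-normalize so each citing journal distributes one unit of influence.
col_sums = [sum(C[i][j] for i in range(n)) for j in range(n)]
P = [[C[i][j] / col_sums[j] for j in range(n)] for i in range(n)]

# Power iteration: repeatedly apply P until the influence vector stabilizes.
v = [1.0 / n] * n
for _ in range(100):
    v = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]

# Journal 0 receives the most citations from influential sources, so it
# ends up with the highest centrality score.
print([round(x, 3) for x in v])  # [0.423, 0.404, 0.173]
```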
At its core, this is what gives the Eigenfactor a little more credibility than the impact factor. It is more difficult to “game” where citations come from and to do so consistently. It is also very difficult for a journal to artificially increase the number of citations it receives from a more popular and widely read journal. Journals with a higher Eigenfactor than impact factor are those that have garnered the attention of more established journals and more researchers within their community.
Article Influence Score
The AIS is derived from the Eigenfactor score (23). The first step in deriving the AIS is to determine the number of articles published by a journal over a 5-year period and divide by the total number of articles published by all journals during the same period; this gives the journal’s share of the total scientific literature. The Eigenfactor score is then divided by this share, and the result is normalized so that the mean score across all journals is 1. An AIS greater than 1 means that each article in that journal has above-average influence, whereas a score below 1 means that each article has below-average influence. For instance, the 2012 AIS for RadioGraphics was 1.087, suggesting that articles in RadioGraphics have a greater influence than the average article in the scientific literature.
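This can be sketched as follows (the function name and all values are hypothetical; the 0.01 factor is one published normalization, reflecting that Eigenfactor scores across all journals sum to 100, which makes the mean AIS equal to 1):

```python
def article_influence_score(eigenfactor, journal_articles, total_articles):
    """AIS = 0.01 * Eigenfactor / (journal's share of all articles published
    over the 5-year window)."""
    share = journal_articles / total_articles
    return 0.01 * eigenfactor / share

# Hypothetical journal: Eigenfactor 0.02, publishing 500 of 2,000,000 articles
# over the window (share = 0.00025), giving a slightly below-average AIS.
print(article_influence_score(0.02, 500, 2_000_000))  # ≈ 0.8
```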
SCImago Journal Rank
The SJR was developed by the SCImago Research Group at the University of Extremadura in Spain (24). Like the Eigenfactor, the SJR uses a page-rank algorithm to determine which citations are from more widely read journals, with these citations being given more weight. The main difference between the Eigenfactor and the SJR is that the former relies on the Institute for Scientific Information WOS database (25), whereas the SJR relies on the Scopus database (26).
In 2012, an updated version of the SJR known as the SJR2 was introduced (27). The SJR2 differs from the SJR in that it measures the cosine similarity between the citation profiles of the citing and cited journals to determine their thematic relationship. Journals that often cite each other are considered to be thematically close and are given greater weight. In addition, unlike other bibliometric parameters, the SJR2 divides the prestige gained by a journal by the number of citable documents. The more often that related journals cite a specific journal, the more prominence that journal is given in its respective discipline. This computation was included to address a fundamental issue that affects many other bibliometric measures: as more journals and articles are added to research databases, bibliometric parameters are “diluted,” and comparison of numbers over time becomes limited.
Source-normalized Impact per Paper
SNIP was created by Professor Henk Moed at the University of Leiden, the Netherlands (28). Similar to the SJR, SNIP gives greater weight to citations from the same scientific discipline. Citations in fields that have fewer overall citations are given more weight. In essence, SNIP divides a journal’s citation count per paper by the “citation potential” in a given discipline. A major factor is the number of citations included in a given article, a number that varies by discipline. For instance, a citation from an article with 200 references will count for less than a citation from an article with only 20 references (29). Because SNIP takes this citation potential into account, it can be used to compare journals from different disciplines, and even to compare different disciplines with one another. Like the SJR, SNIP makes use of the Scopus database (discussed later).
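In simplified form (ignoring the full details of how citation potential is estimated), the normalization can be sketched as follows (the function name and values are ours):

```python
def snip(citations_per_paper, field_citation_potential):
    """SNIP ~ a journal's raw citations per paper divided by the field's
    citation potential (roughly, how heavily papers in that field cite)."""
    return citations_per_paper / field_citation_potential

# A journal in a low-citation field (2 cites/paper, potential 4) can match
# one in a high-citation field (10 cites/paper, potential 20): both 0.5.
print(snip(2, 4), snip(10, 20))  # 0.5 0.5
```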
Databases
PubMed
Created by the National Library of Medicine and launched in 1997 as a freely available interface to the MEDLINE database, PubMed has become one of the most popular and widely used search engines for use with the medical literature. A unique advantage of PubMed is the incorporation of “published online ahead of print” articles, which have previously not been available on other databases. The major limitations of PubMed are that (a) articles are confined to the biomedical and life sciences journal literature and (b) there is very little citation analysis. MEDLINE is the most common source of material sought by physicians using PubMed, although PubMed does include other databases.
Scopus
Launched in 2004 by Elsevier, Scopus (26) is the largest online bibliometric database and includes journal articles from all major disciplines published from 1966 onward, including articles from the social and physical sciences that are not included in PubMed. Citation analysis is more robust with Scopus than with PubMed and is available for articles published after 1996; however, there are stated plans to extend archiving back to 1970. A unique advantage of the Scopus database is individual author identification: each author has a unique identifier, and articles are grouped by author on the basis of affiliation and coauthors. This allows separation of results for authors with similar names, and authors can report errors or omissions to maintain the accuracy of their listings. In contrast, PubMed, Google Scholar, and WOS all search for specific strings of text to group authors, so that authors with similar names are not separated. More recently, Scopus has added partial indexing of articles published online ahead of print. Unlike PubMed, Scopus is not free to users. Although it is owned by Elsevier, Scopus is run by a separate administrative group to limit any conflict of interest.
Web of Science
Although the Thomson Reuters WOS database contains fewer articles than Scopus, it has articles from 1900 onward (25). Like Scopus, WOS includes journal articles from all major disciplines, although the total number of disciplines covered is slightly less than that of Scopus. WOS does have robust citation analysis, although a recent study found 20% more articles in a citation analysis performed with Scopus compared with WOS (30). WOS is perhaps most robust when evaluating research conducted prior to 1996. Like Scopus, WOS is not free to users.
Google Scholar
Google Scholar is the newest entry into the scientific database community. It is free to users, offers robust search capacity, and may be the best way to access obscure information, such as articles published in journals that have yet to be indexed in other databases. Google Scholar includes citations from books, online sources, and conference proceedings. The major limitations of Google Scholar are that it is updated only monthly (as opposed to weekly for Scopus and WOS and daily for PubMed) and that it provides very little in the way of citation analysis or author identification. Although Google Scholar will create a profile for a given author and provide a list of articles the author may have written, the author must select the articles that should be assigned to his or her profile. For a given author profile, Google Scholar will calculate the h-index and i-10 index for that author’s publishing career, as well as citations earned during the past 5 years.
Tools
Publish or Perish
PoP is a software program that can be used to retrieve and analyze academic citations (31). PoP can calculate an author’s h-index, g-index, and e-index, as well as many other bibliometric parameters. It is used by individual researchers to determine the impact of their research. PoP primarily uses the Google Scholar database and is free of charge for personal nonprofit use. Authors can search for their articles by author name, just as they would with Google Scholar; however, they must manually confirm which articles should be included in their calculation. Selecting articles to be used in a PoP calculation can be time consuming for authors with the same name as another author and/or a large number of articles.
Conclusion
Bibliometric parameters are increasingly being used to summarize the academic output of researchers and journals alike. The limitations of these tools have led to the development of new metrics, which must be understood in terms of their underlying theory and their theoretic and practical strengths and weaknesses before being adopted. Academic productivity, whether of an individual, department, or journal, cannot accurately be reduced to a single number on a linear scale. Any imperfect metric is subject to manipulation, and an understanding of the strengths and weaknesses of different bibliometric parameters is required to detect attempts at manipulation. The possible role of bibliometrics in comparing (a) individuals with different publication histories and (b) individuals with similar publication totals but different bibliometric parameters is illustrated in Tables 4 and 5, respectively.
Currently, normative data exist primarily for the h-index, publication count, and citation count; further work will be required to determine normative values for other parameters. In addition, defining normative values according to existing levels of academic rank assumes that individuals deserve their rank based on publication history but without regard to other measures of academic productivity and service.
It is worth noting that, for many researchers, the citation count and h-index calculated by Scopus will be lower than those calculated by Google Scholar (Fig 2) (32–34), primarily for two reasons. First, for researchers who started their careers before 1996, Scopus will not include early-career articles (those published before 1996) in citation analysis or h-index calculation. Second, Google Scholar draws citations from more sources, including books and some online sources. Even within a single database, results are uneven: Scopus will tend to underestimate true bibliometric values for authors whose publication history started before 1996 but will be fully inclusive for those whose publication history began in 1996 or thereafter. Thus, comparing parameters between individuals, or against an established normative table, requires using the same database and understanding the relative strengths and weaknesses of each parameter. Future changes to these databases, such as the attempt by Scopus to include all articles published since 1970, may mitigate some of these issues, although they may simultaneously alter the accuracy of benchmarks created using prior versions of the database.

Figure 2 Graph shows the citation count (y-axis), number of publications (x-axis), and identity line (x = y; angled black line) as calculated by Scopus (blue) and Google Scholar (red) for a single author. Although the curved lines follow one another closely, Google Scholar is higher at all levels, with a Google Scholar h-index of 17 and a Scopus h-index of 14 (shown by the vertical lines drawn to where the curved lines cross the identity line).
Although the h-index seems to be a rather crude tool for measuring academic performance, it has stood the test of time and has correlated well with more mathematically elaborate techniques. In a study by Silagadze (35), the h-index showed a strong correlation with a much more complex S-index. The explanation for this correlation is that citations follow Zipf’s law (36), meaning that citations within the scientific community follow a logical pattern of mathematic progression regardless of the author or scientific community being evaluated. Of course, exceptions do exist, such as articles with an extremely high citation count. Overall, the h-index correlates strongly with more elaborate measures because of the “zipfian” nature of academic citations.
The same cannot be said of the various bibliometric parameters used for assessing journals.
The journal impact factor has been widely criticized as being imprecise and subject to manipulation, yet it continues to be the most widely used metric.
In recent years, there have been multiple studies investigating the most frequently cited radiology articles, either for a specific journal (37,38) or for the field of radiology (39–41). Similar work has been performed for other medical subspecialties (42–57). The use of different methodologies (eg, different databases, years of inclusion, or exclusion criteria) yields article lists that largely overlap but elucidate different trends. For instance, three articles that evaluated the 100 most frequently cited articles in the field of radiology were published in 2013 and early 2014 (39–41). Each of these articles used different databases and different journal and time selection criteria and extracted different information from each article, resulting in different lists of the “top 100” articles (39–41). Comparison of two of these lists demonstrates an overlap of approximately 70%, indicating that each list identified 30 frequently cited articles unique to the corresponding search criteria (39,41).
Bibliometric analysis adds a quantitative aspect to an otherwise somewhat qualitative process. Moving beyond simple tallies of publication totals and impact factors, modern analytic tools have emerged to improve on prior methods. Although there is no one ideal tool, an accurate understanding of bibliometric parameters can aid in effectively evaluating individual authors, departments, and institutions, as well as individual articles and journals.
Presented as an education exhibit at the 2013 RSNA Annual Meeting.
All authors have disclosed no financial relationships.
References
- 1. . An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A 2005;102(46): 16569–16572. Crossref, Medline, Google Scholar
- 2. . Theory and practice of the g-index. Scientometrics 2006;69(1):131–152. Crossref, Google Scholar
- 3. . Does the h index have predictive power? Proc Natl Acad Sci U S A 2007;104(49):19193–19198. Crossref, Medline, Google Scholar
- 4. . Generalized Hirsch h-index for disclosing latent facts in citation networks. Scientometrics 2007;72(2):253–280. Crossref, Google Scholar
- 5. . The e-index, complementing the h-index for excess citations. PLoS ONE 2009;4(5):e5429. Crossref, Medline, Google Scholar
- 6. . Citation analysis as a tool in journal evaluation. Science 1972;178(4060):471–479. Crossref, Medline, Google Scholar
- 7. . The Eigenfactor metrics. J Neurosci 2008;28(45):11433–11434. Crossref, Medline, Google Scholar
- 8. . The H-index in academic radiology. Acad Radiol 2010;17(7):817–821. Crossref, Medline, Google Scholar
- 9. . Is NIH funding predictive of greater research productivity and impact among academic otolaryngologists? Laryngoscope 2013;123(1):118–122. Crossref, Medline, Google Scholar
- 10. . The scientific literature in diagnostic radiology for American readers: a survey and analysis of journals, papers, and authors. AJR Am J Roentgenol 1986;147(5):1055–1061. Crossref, Medline, Google Scholar
- 11. . H-index is a sensitive indicator of academic activity in highly productive anaesthesiologists: results of a bibliometric analysis. Acta Anaesthesiol Scand 2011;55(9):1085–1089. Crossref, Medline, Google Scholar
- 12. . Ranking hepatologists: which Hirsch’s h-index to prevent the “e-crise de foi-e”? Clin Res Hepatol Gastroenterol 2011;35(5):375–386. Crossref, Medline, Google Scholar
- 13. . The use of the h-index in academic otolaryngology. Laryngoscope 2013;123(1):103–106. Crossref, Medline, Google Scholar
- 14. . Distribution of the h-index in radiation oncology conforms to a variation of power law: implications for assessing academic productivity. J Cancer Educ 2012;27(3):463–466. Crossref, Medline, Google Scholar
- 15. Measuring the surgical academic output of an institution: the “institutional” H-index. J Surg Educ 2012;69(4):499–503.
- 16. Does citation analysis reveal association between h-index and academic rank in urology? Urology 2009;74(1):30–33.
- 17. Use of the h index in neurosurgery. J Neurosurg 2009;111(2):387–392.
- 18. Should the h-index be modified? An analysis of the m-quotient, contemporary h-index, authorship value, and impact factor. World Neurosurg 2013;80(6):766–774.
- 19. The application of the h-index to groups of individuals and departments in academic neurosurgery. World Neurosurg 2013;80(6):759, e3.
- 20. Academic impact and rankings of American and Canadian neurosurgical departments as assessed using the h index. J Neurosurg 2010;113(3):447–457.
- 21. Survey of the h index for all of academic neurosurgery: another power-law phenomenon? J Neurosurg 2010;113(5):929–933.
- 22. Eigenfactor: measuring the value and prestige of scholarly journals. Coll Res Libr News 2007;68(5):314–316. http://crln.acrl.org/content/68/5/314.full.pdf+html.
- 23. The Eigenfactor metrics: a network approach to assessing scholarly journals. Coll Res Libr 2010;71(3):236–244.
- 24. A new approach to the metric of journals’ scientific prestige: the SJR indicator. J Informetrics 2010;4(3):379–391.
- 25. Thomson Reuters Web site. http://www.isiwebofknowledge.com. Published 2013. Accessed March 1, 2013.
- 26. Elsevier Web site. http://www.scopus.com. Accessed November 20, 2013.
- 27. A further step forward in measuring journals’ scientific prestige: the SJR2 indicator. J Informetrics 2012;6(4):674–688.
- 28. The source normalized impact per paper is a valid and sophisticated indicator of journal citation impact. J Am Soc Inf Sci Technol 2011;62(1):211–213.
- 29. Some modifications to the SNIP journal impact indicator. J Informetrics 2013;7(2):272–285.
- 30. A comparison of Scopus and Web of Science for a typical university. Scientometrics 2009;81(2):587–600.
- 31. Publish or Perish. http://www.harzing.com/pop.htm. Published 2007. Accessed June 10, 2014.
- 32. Which h-index? A comparison of WoS, Scopus and Google Scholar. Scientometrics 2008;74(2):257–271.
- 33. h-Index: a review focused in its variants, computation and standardization for different scientific fields. J Informetrics 2009;3(4):273–289.
- 34. Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals. JAMA 2009;302(10):1092–1096.
- 35. Citation entropy and research impact estimation. Acta Phys Polon B 2010;41:2325–2333. http://arxiv.org/abs/0905.1039.
- 36. Citations and the Zipf-Mandelbrot’s law. Complex Syst 2010;11:487–499. http://arxiv.org/abs/physics/9901035.
- 37. Top 100 cited AJR articles at the AJR’s centennial. AJR Am J Roentgenol 2006;186(1):3–6.
- 38. Whatever happened to the 50 most frequently cited articles published in AJR? AJR Am J Roentgenol 2005;185(3):597–601.
- 39. Citation classics in radiology journals: the 100 top-cited articles, 1945–2012. AJR Am J Roentgenol 2013;201(3):471–481.
- 40. The 100 most-cited articles in the imaging literature. Radiology 2013;269(1):272–276.
- 41. Highly cited works in radiology: the top 100 cited articles in radiologic journals. Acad Radiol 2014;21(8):1056–1066.
- 42. The most influential articles in critical care medicine. J Crit Care 2010;25(1):157–170.
- 43. Citation classics in occupational medicine journals. Scand J Work Environ Health 2007;33(4):245–251.
- 44. The 100 most frequently cited articles in ophthalmology journals. Arch Ophthalmol 2007;125(7):952–960.
- 45. The 101 most frequently cited articles in ophthalmology journals from 1850 to 1949. Arch Ophthalmol 2010;128(12):1610–1617.
- 46. 100 most cited articles in orthopaedic surgery. Clin Orthop Relat Res 2011;469(5):1487–1497.
- 47. Fifty most cited articles in orthopedic shoulder surgery. J Shoulder Elbow Surg 2012;21(12):1796–1802.
- 48. The 50 most cited articles in pediatric orthopedic surgery. J Pediatr Orthop B 2012;21(5):463–468.
- 49. A century of citation classics in otolaryngology–head and neck surgery journals. J Laryngol Otol 2002;116(7):494–498.
- 50. Highly cited works in neurosurgery. I. The 100 top-cited papers in neurosurgical journals. J Neurosurg 2010;112(2):223–232.
- 51. Highly cited works in neurosurgery. II. The citation classics. J Neurosurg 2010;112(2):233–246.
- 52. A bibliometric search of citation classics in anesthesiology. BMC Anesthesiol 2011;11:24.
- 53. Classic citations in main plastic and reconstructive surgery journals. Ann Plast Surg 2013;71(1):103–108.
- 54. Plastic surgery classics: characteristics of 50 top-cited articles in four plastic surgery journals since 1946. Plast Reconstr Surg 2008;121(5):320e–327e.
- 55. The top 100 cited articles in urology: an update. Can Urol Assoc J 2013;7(1-2):E16–E24.
- 56. Classics of urology: a half century history of the most frequently cited articles (1955–2009). Urology 2010;75(6):1261–1268.
- 57. Classic papers in urology. Eur Urol 2003;43(6):591–595.
Article History
Received: Feb 15 2014
Revision requested: May 23 2014
Revision received: June 12 2014
Accepted: July 1 2014
Published online: May 13 2015
Published in print: May 2015