H-index
The H-index is a popular metric that assesses both the number of a researcher’s articles and their citation counts together. It was first proposed in 2005 by Jorge E. Hirsch, an Argentine-American physicist at UC San Diego, and is therefore sometimes called the Hirsch index or Hirsch number. A researcher’s H-index is n if they have n articles that have each been cited at least n times. For example, if a researcher has five articles with 9, 7, 6, 2, and 1 citations respectively, their H-index is 3: three articles have at least three citations, but there are not four articles with four or more. Similarly, for citation counts of 10, 8, 5, 4, and 3 the H-index is 4, since the fourth-ranked article has 4 citations; and for 25, 8, 5, 3, and 3 it is 3, because the fourth-ranked article has only 3 citations. A distinctive strength of the H-index is that it values both the quantity and the impact of research. It has limitations, however: it depends only on citation counts and says nothing about the quality or depth of the citing works, and it favors experienced researchers, who accumulate citations over time, so it may not accurately reflect the contributions of early-career researchers. The H-index is usually calculated using databases such as Web of Science, Scopus, and Google Scholar, and values can differ between them because each database has its own coverage and methods for counting articles and citations.
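The definition above translates directly into a few lines of code. This is a minimal sketch that reproduces the three worked examples from the text:

```python
def h_index(citations):
    """H-index: the largest n such that n papers have at least n citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still supports an H-index of `rank`
        else:
            break
    return h

print(h_index([9, 7, 6, 2, 1]))   # -> 3
print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```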
Eigenfactor Score
Another modern method for assessing the importance of journals is the Eigenfactor Score, developed by Jevin West and Carl Bergstrom of the University of Washington and launched in 2007. Rather than simply counting citations, the score is computed from the network of journal-to-journal citations: citations received from highly ranked journals are given more weight, so if a journal is cited by a high-ranking journal, its Eigenfactor Score increases more than it would from a citation by a low-ranking one. In other words, the score does not depend solely on the number of citations; it also values the source of those citations. Scores can be looked up for free at the eigenfactor.org website. Note, however, that journal size affects the score: a journal that publishes many articles each year tends to accumulate a higher Eigenfactor Score, so larger journals are favored.
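The idea of weighting citations by the rank of the citing journal can be illustrated with a small power-iteration sketch. The journals and citation counts below are entirely made up, and the real Eigenfactor calculation adds several steps this omits (a five-year citation window, exclusion of journal self-citations, and further normalization), but the core principle is the same: a journal’s influence is the citation-weighted sum of the influence of the journals citing it.

```python
import numpy as np

# Toy citation matrix: C[i][j] = citations from journal j to journal i.
C = np.array([
    [0, 4, 1],
    [2, 0, 3],
    [1, 1, 0],
], dtype=float)

# Column-normalize so each journal's outgoing citations sum to 1.
P = C / C.sum(axis=0)

# Power iteration: repeatedly redistribute influence along citations.
v = np.ones(3) / 3
for _ in range(100):
    v = P @ v
v = v / v.sum()

print(v)  # per-journal influence weights; citations from
          # high-weight journals count for more
```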
SNIP
SNIP (Source Normalized Impact per Paper) is a metric that measures a journal’s citation impact. It does not rely solely on the number of citations; it also corrects for each subject field’s average citation rate, which enables comparisons between journals in different subject fields. SNIP is the ratio of the average citations per paper in a journal to the citation potential of that journal’s subject field: SNIP = (citations per paper) / (citation potential within its field). The metric was introduced by Henk F. Moed in 2010, with the goal of reflecting a journal’s impact relative to its field rather than its raw citation count. The calculation was revised in 2012 so that the average SNIP score is set to 1, which means that journals with a SNIP above 1 are performing better than their field average. Citation potential captures how heavily a field cites: for example, if the average citation count in one field is 40 and in another field it is 10, the citation potential of the first field is four times that of the second. Citation counts are usually higher in fields like the life sciences, and lower in areas such as mathematics or the social sciences.
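The formula above is a simple ratio, shown here with made-up numbers. Note that the real SNIP (as computed by CWTS) defines citation potential in a more involved way; this sketch only illustrates how field normalization lets a modestly cited journal in a low-citation field outscore a heavily cited one in a high-citation field.

```python
def snip(citations_per_paper, field_citation_potential):
    """Simplified SNIP: a journal's mean citations per paper divided
    by the citation potential of its field (hypothetical values)."""
    return citations_per_paper / field_citation_potential

# A life-sciences journal: many citations, but a high-citing field.
print(snip(6.0, 4.0))  # -> 1.5, above the field average

# A mathematics journal: fewer raw citations, but a low-citing field.
print(snip(1.2, 1.0))  # -> 1.2, also above its field average
```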
SJR
The SJR (SCImago Journal Rank) indicator was developed by SCImago Lab in 2007 and is used as an alternative to the Impact Factor. SJR does not depend solely on the number of citations; it also weights each citation by the prestige of the journal it comes from, so a citation from a high-impact journal counts for more than one from a low-impact journal. This helps gauge the quality of a journal, since both the number and the source of citations are measured. For example, if a journal’s three articles receive 10, 15, and 25 citations respectively, then when calculating SJR each citation is multiplied by the prestige weight of its citing journal (such as 0.8 for a mid-ranking source) before being counted.
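Prestige-weighted counting can be sketched as a weighted sum. The counts and weights below are hypothetical; the real SJR derives journal prestige iteratively (in a PageRank-like fashion, similar to the Eigenfactor Score) over the Scopus citation network rather than using fixed weights.

```python
# (citations received, prestige weight of the citing journal) - illustrative
citing_sources = [
    (10, 0.8),  # mid-ranking journal
    (15, 0.5),  # low-ranking journal
    (25, 1.2),  # high-ranking journal
]

raw = sum(count for count, _ in citing_sources)
weighted = sum(count * weight for count, weight in citing_sources)

print(raw)       # -> 50 raw citations
print(weighted)  # -> 45.5 prestige-weighted citations
```

The point of the weighting is visible here: the 25 citations from a prestigious source contribute 30 to the weighted total, while the 15 from a low-ranking source contribute only 7.5.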
Altmetric Score
The Altmetric Score evaluates not only the academic impact of research but also its social reach and popularity. After Altmetric LLP was founded in 2010, the score was launched in 2011 under the leadership of Euan Adie. The score is based on the volume of discussion a piece of research generates on social media, blogs, news outlets, and other platforms: if an article is widely discussed online, its Altmetric Score will be higher. Altmetric tracks ‘mentions’ (links or written references) across various online sources and sites, reflecting the broader impact of research. These sources include mainstream media, public policy documents, social and academic networks, post-publication peer-review forums, and, more recently, Wikipedia and the Open Syllabus Project.
i10-index
Among the various indices used to assess researchers’ impact and contribution, the i10-index is notable for its simplicity. Google Scholar introduced it in 2011 as a simple and effective metric, and it is used primarily on the Google Scholar platform: a researcher’s i10-index is the number of their publications with at least 10 citations each. For example, if a researcher has 20 articles, 15 of which have 10 or more citations each while 5 have fewer than 10, their i10-index is 15.
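Since the i10-index is just a threshold count, the computation is a one-liner. The citation list below is hypothetical, matching the 20-article example from the text:

```python
def i10_index(citations):
    """i10-index: number of publications with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

# 20 hypothetical articles, 15 of them with 10 or more citations:
cites = [50, 42, 38, 30, 27, 25, 22, 20, 18, 16, 15, 14, 12, 11, 10,
         9, 7, 4, 2, 1]
print(i10_index(cites))  # -> 15
```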
