Scientific output is usually measured in two ways: by the ranking of the journals that publish the papers, and by the impact of the papers themselves, i.e. the number of citations they receive. The following quantities are used to evaluate journals:

  • Impact factor: The number of citations received in the current year to articles published in the two preceding years, divided by the number of articles published in those two years. Unfortunately, due to differences in publication and citation habits, the impact factor is not independent of the research field, and the global increase in the number of articles inflates the IF from year to year. Accordingly, it is not fair to compare the IFs of different fields, or the IFs of different years within the same field.
  • Normalized impact factor: The NIF is obtained by dividing a journal's IF by the median IF of its research field in the reference year. This metric is better suited to comparing academic performance across fields.
  • Rank: Each year, journals are ranked by some metric (e.g. IF or other citation-based metrics), and the position in the ranking determines the prestige of the journal. Q1 denotes journals in the top 25%; Q2, Q3 and Q4 denote the remaining quartiles. It is also common to label the top 10% of journals (D1) and the top 1% (C1). Unfortunately, two organisations, Scimago and WoS, produce such rankings, and they do not always agree.
  • Q number: This metric was introduced by the Section of Engineering Sciences of the Hungarian Academy of Sciences. It scores articles in journals with an IF by the IF value, articles in non-IF journals by 0.3–0.4 points, conference papers by 0.1–0.2 points, and books/book chapters by length, in all cases dividing the score by the number of authors.
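The journal metrics above are simple arithmetic, and can be sketched in a few lines of Python. All numbers below are made up for illustration, and the rank thresholds are the quartile/decile boundaries described above, not any organisation's official algorithm.

```python
def impact_factor(citations_to_prev_two_years, articles_in_prev_two_years):
    """IF: citations this year to articles published in the two
    preceding years, divided by the number of those articles."""
    return citations_to_prev_two_years / articles_in_prev_two_years

def normalized_impact_factor(if_value, field_median_if):
    """NIF: the journal's IF divided by the median IF of its field."""
    return if_value / field_median_if

def rank_label(position, total):
    """Quartile/decile label from a journal's position (1 = best)."""
    frac = position / total
    if frac <= 0.01:
        return "C1"  # top 1%
    if frac <= 0.10:
        return "D1"  # top 10%
    if frac <= 0.25:
        return "Q1"
    if frac <= 0.50:
        return "Q2"
    if frac <= 0.75:
        return "Q3"
    return "Q4"

# Example: 600 citations to 200 articles gives an IF of 3.0; against a
# field median of 1.5 the NIF is 2.0; rank 20 of 1000 is in the top 10%.
print(impact_factor(600, 200))             # 3.0
print(normalized_impact_factor(3.0, 1.5))  # 2.0
print(rank_label(20, 1000))                # D1
```

Note how the NIF step makes journals from fields with very different citation habits comparable: each IF is measured against its own field's median.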

Quantities used to evaluate citations:

  • Number of independent citations: Citations from publications that share an author with the publication in question are not counted. Almost all evaluation systems consider only independent citations.
  • WoS/Scopus citations: These indexing databases register only publications considered to be of a certain quality, so a citation from a WoS/Scopus-indexed publication already says something positive about our article's citation value.
  • Hirsch index: Also called the H-index, it evaluates not a single publication but an entire career. A researcher with an H-index of N has N articles with at least N citations each. Its flaw is that it does not grow linearly: if one publishes and collects citations at a steady rate, the H-index grows more slowly over time.
  • WoS InCites percentile: From time to time, WoS reports the citation counts corresponding to the top 0.1%, 1%, 5%, etc. of articles published in a given field in a given year.
  • I number: The Hungarian Academy of Sciences' solution for measuring citation impact. It equals the number of independent citations, but counts only citations from scientific publications; in the case of theses, it counts only citations from theses independent of the author.
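The H-index definition above translates directly into code. The following sketch sorts a (made-up) list of per-paper citation counts and finds the largest N such that N papers have at least N citations each; it is an illustration of the definition, not any database's official implementation.

```python
def h_index(citation_counts):
    """H-index: the largest N such that the author has N papers
    with at least N citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i  # the i most-cited papers all have >= i citations
        else:
            break
    return h

# Example: four papers with at least 4 citations each, but not five
# papers with at least 5, so the H-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

The second example illustrates the non-linearity mentioned above: a single very highly cited paper (25 citations) does not raise the H-index beyond what the rest of the record supports.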

Scoring systems

Online tools for scientometrics

In the engineering field, a useful online tool that works with MTMT data to calculate the scores for the “Doctor of the Hungarian Academy of Sciences” applications and the VIK habilitation:

Another useful tool (also used in the evaluation of OTKA research proposals) is to check the performance of researchers on a national scale, also based on MTMT data:

During the PhD degree awarding procedures, the Doctoral Council of the Faculty uses the following tool to calculate the doctoral score of applicants: