The scientist Eugene Garfield created the journal bibliometric indicator known as the Impact Factor in the 1950s. Its original aim was to help select journals for the new Science Citation Index, and it has since been a useful tool for librarians deciding which journals to purchase.
The indicator measures the impact of a scientific journal as the number of citations received by papers published in that journal, divided by the total number of papers the same journal published in the preceding years.
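For reference, a minimal sketch of the conventional two-year formulation of the indicator (the two-year window and the specific years are illustrative assumptions, not stated above):

\[
\mathrm{IF}_{2013} = \frac{\text{citations received in 2013 by items published in 2011 and 2012}}{\text{citable items published in 2011 and 2012}}
\]

So, as a purely hypothetical example, a journal whose 200 citable items from 2011 and 2012 received 500 citations in 2013 would have a 2013 impact factor of 500/200 = 2.5.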
It is quite clear that the Impact Factor is an appropriate measure for scientific journals (very useful for ranking journals within the same subject area) but not for assessing individuals, and it is precisely this incorrect use that agencies, universities and governments have made of it.
In recent years the debate about its use has become more and more heated within the scientific community, and it has finally produced some important results.
USA
In May the San Francisco Declaration on Research Assessment (DORA) was published, a document initiated by the American Society for Cell Biology and put together with a group of editors and publishers. The declaration, which has already been signed by over 75 institutions and 150 senior figures in science and scientific publishing, aims to provide recommendations against the misuse of the impact factor as the primary parameter for comparing the scientific output of individuals and institutions.
UK
For the Research Excellence Framework 2014 (REF), under which British higher education institutions will be evaluated, outputs are to be assessed in terms of "‘originality, significance and rigour’, with reference to international research quality standards."
Here’s what the REF team wrote about journal impact factor in REF 2014 guidelines:
"No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs."
ITALY
In Italy, on the other hand, the just-concluded VQR 2004-2010 did make use of the impact factor to build its journal classes; but this was the first evaluation exercise for our institutions, and we hope that the problems identified in its procedures will lead to substantial improvements, including in the use of the impact factor.
BRAZIL
Similar problems are encountered in Brazil, where the evaluation of graduate programs relies heavily on journal impact factors. As we can read in an article written by Brazilian researchers and published in Frontiers in Statistical Genetics and Methodology:
“The governmental agency CAPES from the Education Ministry
monopolize this evaluation and pressure programs by the distribution of funding
resources and departmental fellowships conditioned to adherence to a journal
classification system called “Qualis” which is a discretization of the
continuous distribution of journals ranking by their impact factors”.
The article adds: “In several institutions the graduate committee authorizes professors to act as thesis advisors only if in a certain period (e.g., 4 years) they publish at least one paper in a journal classified as ‘Qualis A2’.”