doi: 10.56294/dm20229
SHORT COMMUNICATION
Bibliometric indicators and decision making
Indicadores bibliométricos y toma de decisiones
Fernando Ledesma1*, Beltrán Enrique Malave González2
1Unilever Argentina. Buenos Aires, Argentina.
2Universidad Abierta Interamericana. Buenos Aires, Argentina.
Cite as: Ledesma F, Malave González BE. Bibliometric indicators and decision making. Data & Metadata. 2022; 1:9. https://doi.org/10.56294/dm20229
Submitted: 12-10-2022 Revised: 27-10-2022 Accepted: 20-12-2022 Published: 21-12-2022
Editor: Prof. Dr. Javier González Argote
ABSTRACT
The analysis of the publication of scientific articles, a fundamental link within the research process, has become the standard unit that allows for the assessment of the quality of the knowledge generation process and its impact on the scientific environment. This article aims to describe the general aspects of the relationship between bibliometric studies and decision-making. Data on scientific activities are increasingly being used to govern science. Research evaluations that were once individually designed for their specific context and conducted by peers are now routine and based on metrics. The issue is that the evaluation has shifted from being based on expert assessments to relying on these metrics. The opportunity to apply bibliometric techniques proves useful when making decisions that involve a redirection of all research-development plans and the selection of capable leaders to coordinate projects with the purpose of generating technological and financial resources.
Keywords: Bibliometrics; Decision-making; Science; Metrics.
INTRODUCTION
The analysis of the publication of scientific articles, a fundamental link in the research process, has become the standard unit by which the quality of the knowledge-generating process and its impact on the scientific environment are assessed.
Bibliometric indicators, used for decades to assess the quantity and origin of scientific journals, have recently acquired an essential role in evaluating the quality of scientific activity. Both the authors of the texts and the academic groups that support their reports are permanently qualified according to the impact of the journals in which they publish and their effect on the generation of new knowledge.(1)
This article describes the general aspects of the relationship between bibliometric studies and decision-making.
DEVELOPMENT
Traditionally, the most used method to assess the quality of the scientific domain, researchers, institutions, or countries is through scientific publications.(2)
Bibliometrics was born of the increasingly frequent use of mathematical and statistical methods to analyze scientific production and its quality; Garfield defined it as "the quantification of bibliographic information capable of being analyzed". It gave rise to two large branches of development, fundamental and applied bibliometrics, the latter of which includes the inferential and descriptive parts of scientometrics.(3)
Isabel Gómez and María Bordons consider that bibliometric indicators "are statistical data deduced from scientific publications. Their use is supported by the important role publications play in disseminating new knowledge, a role assumed at all levels of the scientific process".(4)
Velasco(2) states that activity indicators reveal the actual state of science and classifies them into six groups:
• Production indicators: count the number of scientific publications by an author, research group, or institution. These indicators only provide information on the quantity of publications but not their quality.
• Circulation indicators: measure the number of publications in libraries and databases.
• Dispersion indicators: they analyze how publications on a subject or area are distributed among the various sources of information, revealing whether the works of a specific area are concentrated in a few journals or spread over many.
• Indicators of use of scientific literature: they measure both the number of publications and the number of references included in them. Each publisher has its own publication standards, and the number of bibliographical references that can be included in an article differs from one journal to another.
• Collaboration indicators: these evaluate the collaboration between authors and institutions. The most widely used indicator for assessing cooperation between authors is the co-authorship index, the average number of authors signing each document, which makes it possible to estimate the size of research groups. Another is the rate of co-authored documents, the proportion of documents signed by more than one author. Regarding collaboration between institutions, it is essential to determine the degree and type of collaboration established, which can be known through national and international collaboration indicators.
• Indicators of obsolescence of scientific literature: they measure the average life of an article through the number of citations received over the years. The average life of an article depends on its subject area. The so-called “hot papers” are those documents that are highly cited in a very short period, and the classics are those that continue to be cited for many years.(2)
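As an illustration (not from the article), two of the collaboration indicators described above can be computed directly from a list of publications. The record layout and the field name `authors` are hypothetical; real bibliometric work would draw these records from an indexed database.

```python
# Illustrative sketch of two collaboration indicators described by Velasco:
# the co-authorship index and the rate of co-authored documents.
# The publication records below are invented for the example.

def coauthorship_index(publications):
    """Average number of authors per document (co-authorship index)."""
    return sum(len(p["authors"]) for p in publications) / len(publications)

def coauthored_rate(publications):
    """Proportion of documents signed by more than one author."""
    multi = sum(1 for p in publications if len(p["authors"]) > 1)
    return multi / len(publications)

pubs = [
    {"authors": ["A", "B", "C"]},
    {"authors": ["A"]},
    {"authors": ["B", "C"]},
    {"authors": ["A", "D"]},
]

print(coauthorship_index(pubs))  # 2.0 (8 author signatures over 4 documents)
print(coauthored_rate(pubs))     # 0.75 (3 of 4 documents are co-authored)
```

The same aggregation pattern extends to the other activity indicators, e.g. grouping the records by institution before counting.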
Beyond the bibliometric indicators used to evaluate scientific publications, it is also necessary to consider the databases where they are indexed.
The Internet became widespread in the mid-1980s, providing quick and efficient access to sources of knowledge storage and management. Within the internet, bibliographic databases constitute an essential component of the current scientific communication model. The more than 1000 bibliographic databases in science register and index the published information and include one of the primary mechanisms to control and promote the publication of scientific results.(5)
Peralta et al.(6), referring to the choice of database for bibliometric studies of scientific production, note that the Web of Science (WoS) should not be used on its own to study the citations of an author or published work, nor as a single resource for analyzing the citations and impact of an author or publication. Scopus and Google Scholar can help identify a considerable number of citations not found in the WoS, since they offer a more complete and comprehensive picture of the international and interdisciplinary nature of scientific communication. Google Scholar, however, raises problems concerning the accuracy and efficiency of its citation-measurement techniques. The choice of resource depends on the scientific domain under study; in the case of Scopus, one noted limitation is that it indexes citations only from 1996 onward.
In summary, the theoretical analysis of bibliometric methods for the assessment of science rests on a set of premises:
1. The results of most of the research carried out by scientists and technicians are transmitted in writing through scientific and technical publications (journal articles, books, conference proceedings, patents, etc., which constitute the primary sources). Therefore, published papers comprise one of the final products of all scientific activity and represent an indicator of the research volume produced.
2. The works published in the primary sources are compiled in an abbreviated form in the databases. Consulting the appropriate databases is the proper method to obtain information on publications in any scientific field.
3. The number of citations a work receives from the rest of the scientific community quantifies the impact achieved by said work.
4. The prestige of the bibliographic sources where the results of the investigations are published represents a measure of the influence the works published in them can exert.
5. The bibliographical references included in the works have often been taken to indicate their scientific value and have sometimes been used as criteria for analyzing information consumption.(1,6,7,8)
Bibliometric studies and decision making
Data about scientific activities are increasingly being used to govern science. Research evaluations that were once individually designed for their context and carried out by peers are now routine and metric-based. The problem is that the assessment went from being based on expert evaluations to depending on these metrics.(9)
The growing need of the different government entities and the industry to select with uniform and objective criteria the sources of information to which to resort when making technical, administrative, and political decisions has generated a greater interest in the use of these indicators, given that they offer a standard method for evaluating the quality and effectiveness of the contribution to the scientific development of the texts produced by the research groups.(1)
Indicators have proliferated: usually well-intentioned, not always well-informed, and often misapplied. When organizations without knowledge of good practices and the appropriate interpretation of indicators carry out evaluations, there is a risk of damaging the scientific system with the very instruments designed to improve it.(9)
Despite their global acceptance, these indicators have over time shown weaknesses and shortcomings when applied as a tool for evaluating scientific production. This has fueled a current of critics and contradictors who promote new methods and alternatives for assessing these parameters, starting from the observation that bibliometric indicators were designed to evaluate the impact of journals, not the quality of scientific processes.(1)
Pickton(10) analyzes several manifestos and initiatives that aim to raise awareness of the problems of quantitative research evaluation. The author argues that quantitative approaches, when collapsed into a single indicator, are reductionist and fundamentally flawed.
On the other hand, Wilsdon et al.(11) state that journal indicators (such as the impact factor) as proxies for research quality are insufficient. Consequently, research evaluation must be carried out responsibly and adopt concepts such as robustness, humility, transparency, diversity, and reflexivity during bibliometric exercises.
The most popular example of recommendations in this regard is the Leiden Manifesto, published by renowned bibliometric researchers. It is named after the place where it was first developed, the STI Conference 2014 in Leiden, the Netherlands. The Manifesto recommends ten principles derived from the best practices of (quantitative) bibliometric exercises.(12)
In 2016, the European Association for the Study of Science and Technology awarded the Leiden Manifesto the Ziman Prize for its collaborative promotion of public engagement with science and technology, designed to influence assessment practice instead of merely criticizing it. This is an impressive effort to bring specialized scientometric knowledge to the broader science-policy arena. The Leiden recommendations have since found their way into practice as part of strategies for building excellence.(13)
The Leiden Manifesto emphasizes that quantitative assessment has to support qualitative assessment by experts. It is important to take into account that the indicators cannot replace informed reasoning. Therefore, the decision-makers have complete responsibility for the evaluations, and in the choice and use of indicators, socioeconomic and cultural contexts must be considered.(12)
Referring to science and technology indicators, Kenji(14) explains that they monitor the performance of science and technology systems, evaluate the system and modify the distribution of resources to improve its efficiency, and provide inputs for establishing policies.
Zacca(15) states that any indicator is a construct that can be used or abused in management and decision-making processes. Transparency and traceability should be the first objective when scientometric indicators are used in the public domain. For bibliometric indicators to be meaningful, they must be compared and interpreted on the basis of trends and against other reference domains, such as similar groups or higher aggregates like the country, the region, or the world.
Indicators have advantages and disadvantages that must be considered in the interpretation. They are partial and convergent because they describe a specific aspect of the study being carried out. To be effective, the interpretation of scientific activity must be based on several indicators that contextualize the information resulting from its analysis. As they show a partial vision of scientific activity, it is necessary to have specialists from the disciplines complete, correct, and interpret the results.(16)
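A minimal sketch of the kind of contextualized reading described above: a group's raw citation average only gains meaning when set against a reference domain, such as the world average for the same field. All figures here are invented for illustration.

```python
# Hedged sketch: normalizing a group's mean citations per paper against a
# reference-domain mean (e.g. the world average for the same field), so the
# indicator is read relative to context rather than as an absolute "truth".
# The citation counts and the world mean below are invented.

def normalized_citation_impact(group_citations, reference_mean):
    """Mean citations per paper of a group divided by the reference mean;
    a value of 1.0 reads as 'at the level of the reference domain'."""
    group_mean = sum(group_citations) / len(group_citations)
    return group_mean / reference_mean

cites = [4, 0, 7, 9]   # citations received by each of the group's papers
world_mean = 5.0       # hypothetical world mean for the same field

print(normalized_citation_impact(cites, world_mean))  # 1.0
```

As the text stresses, such a figure describes only one aspect of activity and still requires interpretation by specialists alongside other indicators.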
Some users of science and technology indicators often see these numbers as representatives of "truth" about the state of science and technology and not as possible approximations of reality.(14)
However, bibliometric indicators help evaluate research performance if they are accurate, sophisticated, up-to-date, combined with expert knowledge, and interpreted and used with caution.(17)
Likewise, the data collection and analysis processes must be open, transparent, simple, and open to verification by those evaluated.(12)
Science, as a system for producing publications (understood as any information recorded in permanent formats and available for common use), stands out as a social system whose first function is to disseminate knowledge, whose second is to guarantee the preservation of certain patterns, and whose third is to attribute merit and recognition to those whose work has contributed to developing ideas in different fields.(15)
Therefore, an important measure of knowledge production, transfer, and use can be derived from the publications in which knowledge is expressed, an element that is not the only one in scientific communication but is the most critical.(15,18)
Science and technology are relevant elements in the development and evaluation of a country. Moravcsik(19) conceptualizes evaluation as measuring the extent to which a segment of science and technology works favorably in the context of a set of determined goals and objectives. The assessment results should form a complex, composite statement and reflect information from all relevant aspects of the situation.
The evaluation of the national research system is essential in decision-making. Its implementation is justified by the need to optimize the distribution of resources, reduce information asymmetry between knowledge producers and users, and demonstrate that the investment is effective and provides public benefits.(15,20)
Scientific and technological research activities need to be evaluated to judge how well the objectives were met and to measure the effectiveness of the research in meeting social and economic goals.(21)
In her doctoral thesis, Zacca(15) states that tasks such as detecting the use of resources; identifying which groups or disciplines generate cutting-edge knowledge and constitute a country's strengths (from the point of view of its scientific production); establishing the impact of research in terms of knowledge use and transfer; and identifying the groups, institutions, or countries with which strategic alliances are established are vital, since they provide helpful information for decision-makers to define, give feedback on, and reorient scientific policies around their priority axes.
The author adds that, in recent years, there has been a growing interest in scientometric indicators, both by information specialists and by authorities in charge of decision-making in science and technology. The purpose has been to understand the dynamics of science, identify communication patterns, and support the evaluation and planning of scientific policies.(15)
Research indicators can provide crucial information that would be difficult to aggregate or understand from individual experiences.(12)
Vallejera et al.(23) state that when companies develop their research and development (R&D) plans, they feel the need to know their own level of development and use of resources, as well as the outlook of their competitors or potential business partners. Accordingly, to the extent that scientific performance is known nationally or internationally, evidence-based decisions can be made.
FINAL CONSIDERATIONS
New knowledge produced by scientific research acquires value when it is published and later applied in its specific field, contributing to the development of society. The application of bibliometric techniques yields a global overview of the performance and impact of scientific activity in a given region. These objective data serve as a point of comparison for measuring the differences between the productivity of the different scientific specialties and their contribution to development, which in turn eases decision-making when establishing policies, allocating resources to lines of research, guiding researchers' choice of high-impact journals for their studies, or supporting low-productivity specialties.(22)
The opportunity to apply bibliometric techniques is valuable when making decisions that imply redirecting all research-development plans and selecting leaders capable of coordinating projects to generate technological and financial resources.(23)
From this same perspective, Rodríguez et al.(24) affirm that integrating critical success factors with competitive-intelligence activity within a structural and operational model helps a company's strategic plan focus on the competition, and that a bibliometric analysis is appropriate for specifying these factors, since its statistical base allows a multidimensional analysis of the scientific literature.
The theoretical foundations analyzed provide conclusive support for the importance of bibliometric and scientometric techniques, delimiting a set of applications such as: identifying trends and knowledge growth in different disciplines; estimating the coverage of secondary journals; identifying users from other fields; identifying authors and trends in various disciplines; measuring the usefulness of selective information-dissemination services; predicting publication trends; identifying the core journals of each discipline; formulating budget-adjusted procurement policies; adapting policies for discarding publications; studying the dispersion and obsolescence of scientific literature; designing norms for standardization; designing processes for indexing, classification, and the preparation of automatic summaries; and predicting the productivity of publishers, individual authors, organizations, countries, etc.(16,25,26)
REFERENCES
1. Gómez CFR-C, Gutíerrez CV-R, Pinzón CER-C. Indicadores bibliométricos: origen, aplicación, contradicción y nuevas propuestas. MedUNAB 2005; 8:29-36.
2. Velasco B, Bouza JME, Pinilla JM, Román JAS. La utilización de los indicadores bibliométricos para evaluar la actividad investigadora. Aula abierta 2012; 40:75-84.
3. Garfield E. How can impact factors be improved? BMJ 1996; 313:411-3. https://doi.org/10.1136/bmj.313.7054.411.
4. Caridad IG, Gangas MB. Limitaciones en el uso de los indicadores bibliométricos para la evaluación científica. Política científica 1996:21-6.
5. Ospina EG, Herault LR, Cardona AF. Uso de bases de datos bibliográficas por investigadores biomédicos latinoamericanos hispanoparlantes: estudio transversal. Rev Panam Salud Publica 2005;17:230-6. https://doi.org/10.1590/S1020-49892005000400003.
6. Peralta González MJ, Frías Guzmán M, Gregorio Chaviano O. Criterios, clasificaciones y tendencias de los indicadores bibliométricos en la evaluación de la ciencia. Revista Cubana de Información en Ciencias de la Salud 2015;26:290-309.
7. Lozano RS. Indicadores bibliométricos utilizados en la evaluación de la ciencia y la tecnología. Revisión bibliográfica. Revista española de documentación científica 1990;13:842-65.
8. Rousseau R. Indicadores bibliométricos y econométricos en la evaluación de instituciones científicas. Acimed 2001;9:50-60.
9. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. El Manifiesto de Leiden sobre indicadores de investigación. Revista Iberoamericana de Ciencia, Tecnología y Sociedad-CTS 2015;10:275-80.
10. Pickton M. The Metric Tide: Are you using bibliometrics responsibly? UON Graduate School Blog 2015. https://researchsupporthub.northampton.ac.uk/2015/12/09/the-metric-tide-are-you-using-bibliometrics-responsibly/
11. Wilsdon J, Allen L, Belfiore E, Campbell P, Curry S, Hill S, et al. The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. 2015. https://doi.org/10.13140/RG.2.1.4929.1363.
12. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: The Leiden Manifesto for research metrics. Nature 2015;520:429-31. https://doi.org/10.1038/520429a.
13. Coombs SK, Peters I. The Leiden Manifesto under review: what libraries can learn from it. Digital Library Perspectives 2017;33:324-38. https://doi.org/10.1108/DLP-01-2017-0004.
14. Kenji Kondo E. Desarrollo de indicadores estratégicos en ciencia y tecnología: principales problemas. ACIMED 2001;9:29-34.
15. Zacca González G. Producción científica latinoamericana en salud pública. Cuba en el contexto regional. Scopus 2003-2011. Tesis Doctoral. Universidad de Granada, 2015.
16. Spinak E. Indicadores cienciométricos. ACIMED 2001;9:16-8.
17. Moed HF. New developments in the use of citation analysis in research evaluation. Arch Immunol Ther Exp 2009;57:13. https://doi.org/10.1007/s00005-009-0001-5.
18. Tijssen RJW, van Leeuwen TN. Extended Technical Annex to Chapter 5 of the «Third European Report on S&T Indicators»: Bibliometric Analyses of World Science. Leiden: Leiden University; 2003.
19. Moravcsik M. Applied scientometrics: an assessment methodology for developing countries. Scientometrics 1985;7:165-76.
20. Abramo G, Cicero T, D’Angelo CA. Revisiting the scaling of citations for research assessment. Journal of Informetrics 2012;6:470-9. https://doi.org/10.1016/j.joi.2012.03.005.
21. Rodríguez ZC, Anegón F de M. La investigación científica española (1995-2002): una aproximación métrica. Granada: Universidad de Granada; 2007.
22. Dávila Rodríguez M, Guzmán Sáenz R, Macareno Arroyo H, Piñeres Herera D, de la Rosa Barranco D, Caballero-Uribe CV. Bibliometría: conceptos y utilidades para el estudio médico y la formación profesional. Revista Salud Uninorte 2009;25:319-30.
23. Vallejera DH, Díaz IAL, Sánchez YR. Análisis bibliométrico en una universidad cubana como herramienta para la inteligencia empresarial. Perspectivas em Gestão & Conhecimento 2016;6:217-29.
24. Rodríguez Aldana ML, Fong Reynoso C. Análisis bibliométrico de los factores críticos de éxito para la gestión estratégica de las PyMES. Nova scientia 2020;12:0-0. https://doi.org/10.21640/ns.v12i24.2267.
25. Palmer J. Scientists and information: II. Personal factors in information behaviour. Journal of Documentation 1991;47:254-75. https://doi.org/10.1108/eb026880.
26. Palmer J. Scientists and information: I. Using cluster analysis to identify information style. Journal of Documentation 1991;47:105-29. https://doi.org/10.1108/eb026873.
FINANCING
No external financing.
DECLARATION OF CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
AUTHORSHIP CONTRIBUTION
Conceptualization: Fernando Ledesma and Beltrán Enrique Malave González.
Research: Fernando Ledesma and Beltrán Enrique Malave González.
Methodology: Fernando Ledesma and Beltrán Enrique Malave González.
Writing - original draft: Fernando Ledesma and Beltrán Enrique Malave González.
Writing - revision and editing: Fernando Ledesma and Beltrán Enrique Malave González.