edited by Birger Hjørland and Claudio Gnoli


Research classification system

Preliminary editorial placeholder article; to be replaced when an author is found for an improved article

Table of contents:
1. Introduction
2. Examples of research classifications
3. Conclusion

Part 1 describes the purpose of this article: to introduce research classification systems, which are systems developed and used for research administrative purposes, including research statistics and research evaluation, often by national authorities. The focus of this article is their similarities with and differences from bibliographic classification systems, such as library classification systems. Part 2 briefly presents some of the existing research classification systems, while Part 3, the conclusion, compares the two kinds of classifications along three dimensions: (1) national versus international standards; (2) the kinds of input used for their creation; (3) issues concerning currency and updating. It is concluded that research classifications and bibliographic classifications may sometimes be identical (some systems are used for both purposes) and that the knowledge organization community, which is already engaged in research classification systems, should engage in developing updated classifications useful for both (as well as other) purposes.


1. Introduction

The term research classification system is used here to denote a kind of classification system that primarily serves statistical and administrative purposes for reporting research activities (rather than, for example, bibliographic classifications, which primarily serve document organization and retrieval). They have also been termed classifications for “current research information systems” (Sivertsen 2019). They aim at registering research output for political and evaluative purposes (Sile et al. 2018), i.e., research output rather than available publications for subject queries. They may be considered a kind of → statistical classifications. Research classifications should therefore not be confused with scientific and scholarly classifications used in research practice [1], such as the Periodic Table, the Linnaean hierarchy, → ontologies, etc. The problem of subject classification for research administrative purposes was addressed by Sivertsen (2019, 674), and different classification systems and their effects on research evaluation were examined by Sile et al. (2021).

An important question is, of course, whether the same kind of classification can (or should) be used for both research administrative and bibliographic purposes, and whether these two kinds of classification systems are based on the same principles and methods of construction. (There are examples of classification systems produced by different kinds of institutions that should probably be considered as belonging to the same kind, such as the → BISAC Subject Headings List (Martínez-Ávila 2016), which is a bibliographic classification system developed by the publishing industry rather than by the library and information sector, and which should be seen as a competing tool for information retrieval rather than another kind of system.)

Although the main focus of the → knowledge organization community has always been bibliographic systems, this community is also involved in the field of research classification, as demonstrated, for example, by a special issue of Knowledge Organization (Scharnhorst and Doorn 2019) and by the open lectures arranged by the ISKO UK section (Biesenbender 2021 and Poelmans 2021).

2. Examples of research classifications

Among the existing research classifications, the following examples can be mentioned:

  • Australian and New Zealand Standard Research Classification, ANZSRC, including the “Fields of Research” (FoR) system covering the areas of research in ANZSRC.
    ANZSRC is an example of a classification system developed for research administrative purposes, but it is also used by the advanced bibliographic database Dimensions (Hancock 2022).
  • The Bibliometric Research Indicator, the BFI-lists.
    This Danish research register is inspired by the corresponding “Norwegian model” (Sivertsen 2018). A part of the research funds allocated to universities is distributed according to the number of publications each university publishes in outlets on the so-called BFI-lists (one for serials, one for publishers), in which serials and publishers are classified into 68 research fields. Each serial or publisher is further classified into levels (one, two, seldom three) determining the weight, and thereby the revenue, of each publication for the university. The system functions as an almost complete index of all research publications published by Danish universities. The prioritized lists of serials and publishers were regularly updated by teams of specialists until December 2021, when their updating was terminated.
  • Canadian Research and Development Classification, CRDC (2020).
    CRDC is a standard classification, inspired by the Frascati Manual 2015 of the Organisation for Economic Co-operation and Development (OECD), used by the federal granting agencies and Statistics Canada to collect and disseminate data related to research and development in Canada. The first official version of the CRDC is the 2020 version 1.0, and it is composed of three main components: the type of activity or TOA (with 3 categories), the field of research or FOR (with 1,663 fields at the lowest level), and the socioeconomic objective or SEO (with 85 main groups at the lowest level) (Legendre 2019).
  • European Research Council (ERC).
    ERC Evaluation Panels and Keywords (ERC Panel Structure) is a classification system for science developed by the ERC with the aim of characterizing the research projects supported by ERC funding in 2021 and 2022 (European Research Council 2020a; 2020b).
  • Flemish Research Discipline Classification Standard, VODS.
    VODS (for Vlaamse Onderzoeksdiscipline Standaard) is a classification of research disciplines, developed by the Expert Centre for Research & Development Monitoring (ECOOM-Hasselt) in the context of reducing the administrative burden of research reporting (Vancauwenbergh and Poelmans 2019).
  • Frascati Manual
    The Frascati Manual is the most widely used internationally recognized standard for collecting and using research and development (R&D) statistics. It provides definitions for three types of activity: basic research, applied research, and experimental development, and it proposes the use of a classification of fields of research and development by knowledge domains (OECD 2015).
  • Kerndatensatz Forschung, KDSF.
    This system describes what information universities, universities of applied sciences, non-university research institutions, and other research institutions should provide about their research activities (Kerndatensatz Forschung 2022; Wikipedia 2022).
  • National Academic Research and Collaborations Information System, Netherlands, NARCIS.
    This system serves both research administrative and information retrieval purposes (Smiraglia 2019).
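The level-based weighting described above for the BFI-lists (and the corresponding Norwegian model) can be sketched in a few lines of code. This is only an illustration of the general mechanism of level-weighted publication counting; the level weights, the helper function, and the publication data are hypothetical placeholders, not the official BFI point values.

```python
# Illustrative sketch of level-weighted publication counting, as used in
# performance-based funding models such as the Danish BFI-lists or the
# Norwegian model. The weights below are hypothetical placeholders,
# NOT the official BFI point values.

LEVEL_WEIGHTS = {1: 1.0, 2: 3.0}  # hypothetical: level-2 outlets count more


def publication_points(publications):
    """Sum the weights of a university's publications.

    Each publication is a (title, outlet_level) pair, where outlet_level
    is the level assigned to its serial or publisher in the BFI lists.
    """
    return sum(LEVEL_WEIGHTS[level] for _, level in publications)


pubs = [
    ("Article in a level-1 journal", 1),
    ("Monograph with a level-2 publisher", 2),
    ("Article in a level-1 journal", 1),
]
print(publication_points(pubs))  # 1.0 + 3.0 + 1.0 = 5.0
```

The point of the sketch is simply that each registered publication contributes a weight determined by the level of its outlet, and the weighted sum per university then drives the allocation of funds.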


3. Conclusion

Three issues seem important for comparing research classification systems with bibliographic ones: (1) national versus international focus; (2) the issue of warrant, where bibliographic classifications depend on the principle of → literary warrant (Barité 2018); (3) the issue of currency and keeping systems up to date.

Re 1: The examples shown in Section 2 may give the impression that research classifications are more national than bibliographic classification systems usually are. However, some systems (e.g., ANZSRC and CRDC) take the international Frascati Manual as their point of departure and can therefore be said to represent national adaptations of international standards. Bibliographic classification systems are dominated by international systems such as the Dewey Decimal Classification (DDC) and the Universal Decimal Classification (UDC), but libraries, too, often use either homegrown [2] classifications or adaptations of international systems (see Furner 2021; 2022 concerning → library classifications in Scandinavia, including local versions of the DDC).

Re 2: Bibliographic classifications depend on the principle of literary warrant, although this principle may be understood and practiced in very different ways (Barité 2018). This means that when a system is constructed, it is usually based, at least in part, on the titles in the literature to be classified, directly or indirectly (for example, via other bibliographic classification systems). In addition, subject experts are consulted, but their input is probably also informed by the concepts in the literature. This literature is thus, at least partly, the input for establishing the concepts and classes on which the classification is built. But what is the input used for establishing concepts and classes in research classifications? As in bibliographic databases, it is partly input from subject experts, but it may also involve data more directly related to the tasks of science managers, such as data from higher education R&D surveys, or even R&D funding applications (cf. Hancock 2022). There seem, however, to be no principled reasons against using the same kinds of input for both kinds of systems, and → science mapping techniques (Petrovich 2021) may also be useful for both.

Re 3: It is well known that bibliographic classifications in general, and library classifications in particular, rarely reflect updated knowledge. The main reasons are (a) that each updated version of a library classification requires either reclassification of the entire collection, which in practice is unrealistic, or a split of the catalog into several parts, which is user-unfriendly; and (b) that updating bibliographic classification systems requires expensive teams of subject specialists. It is a sad fact that library classifications such as DDC and UDC reflect obsolete knowledge (see Blake 2011, 469-70 and Hjørland 2007). This situation is certainly not satisfactory, and it is strange that this problem is neglected or even denied [3]. Hedden (2016), for example, claims that “they don't need changing” [4]. By contrast, research classification is about today's research landscape (current rather than retrospective information), which is probably why research authorities have found it necessary to bypass bibliographic classifications and develop their own.

As the case of ANZSRC shows, research classifications may be useful for bibliographic databases. There is every reason for the knowledge organization community to engage in developing updated classifications that are useful for the functions served by both bibliographic and research classifications.



1. With the exception of bibliometric research and research about science policy.

2. Homegrown systems have become rarer in recent decades due to (1) a fading belief in the need for classifications for information retrieval, combined with the high cost of their management, and (2) greatly improved opportunities for harvesting metadata from external sources, primarily the Library of Congress.

3. Ibekwe-SanJuan and Bowker (2017, 190) wrote: "While we agree with the soundness of Hjørland's fundamental criticisms, it is important to underline that the role of universal bibliographic classifications is not only to represent the state of domain knowledge at every given moment in time but also to organize knowledge artefacts in physical spaces like libraries such that their relationship with one another can be perceived. Furthermore, given the dynamic and evolving nature of digital data and the uncertainties underlying the knowledge contained therein (see section 3.0 hereafter for a discussion), universal bibliographic classifications cannot be expected to constantly change their classifications to follow every discovery made at each instant. This will not only prove an impossible task to accomplish in real time for libraries, but it can also be very disruptive for end users. There is, of necessity, a waiting period between a scientific discovery and its inclusion into universal bibliographic classifications that are known for portraying knowledge validated by the scientific community and which have acquired a certain degree of permanence. Also, the practical value of universal bibliographic classifications—that of enabling patrons to collocate material artefacts in a physical space, is not entirely dependent on the theoretical "up-to-dateness" of their class structure. Finally, universal classification schemes like the DCC and UDC which are the focus of Hjørland's criticisms form only a subset of KOSs. The other types — thesauri, ontologies and specialised classification schemes are all domain-dependent knowledge artefacts that make no claim to universalism and should therefore be amenable to more frequent updates". Ibekwe-SanJuan and Bowker do not, however, address the problem of how much classification systems need to be updated (or, in other words, how much obsolete knowledge should be tolerated).
Obviously, this is only one among several criteria for the quality of such systems, but the concern behind Hjørland's criticism is that the knowledge organization community (and the information science community generally) has over time lost much of its connection to subject expertise. The issue of how often classification systems need to be updated (or need to delay updating) has not been addressed in knowledge organization, but it was discussed in biology by Mayr and Bock (1994).

4. Hedden (2016, 9; italics added) wrote: "Furthermore, numeric code-based systems are not flexible and cannot easily be changed. It is not usually practical to insert additional codes into the scheme, unless perhaps the system allows for one additional hierarchical level. Because these systems are relatively unchanging, they don't need to be created or updated, and their applications are somewhat limited. Thus, most of the subject areas that could use classification systems already have them, and they don't need changing. And those subject areas that don't have them are not suitable for them. What this means is that there is not much work for taxonomists in the area of classification systems".

5. YKL stands for Yleisten kirjastojen luokitusjärjestelmä, in English PLC - Finnish Public Libraries Classification System.



Barité, Mario. 2018. “Literary warrant”. Knowledge Organization 45, no. 6: 517-536. Also available in ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli. https://www.isko.org/cyclo/literary_warrant.

Bibliometric Research Indicator, the. Danish Ministry for Higher Education and Science (http://web.archive.org/...; BFI list for serials: http://web.archive.org/...).

Biesenbender, Sophie. 2021. Developing a Classification for Interdisciplinary Research Fields for the German Science System. (Presentation, slides available at https://www.iskouk.org/event-4392012).

Blake, James. 2011. “Some Issues in the Classification of Zoology”. Knowledge Organization 38, no. 6: 463-72.

Canadian Research and Development Classification, CRDC (2020). Version 1.0. Available at: https://www.statcan.gc.ca/en/subjects/standard/crdc/2020v1/index.

European Research Council. 2020a. Revision of ERC [European Research Council] Panel Structure for the 2021/2022 Calls: Rationale and Main Changes. PowerPoint presentation. https://erc.europa.eu/sites/default/files/document/file/Revision_ERC_panel_structure.pdf

European Research Council. 2020b. ERC Evaluation Panels and Keywords. https://erc.europa.eu/sites/default/files/document/file/ERC_Panel_structure_2020.pdf; it can also be browsed at Loterre: https://skosmos.loterre.fr/ERC/en/

Furner, Jonathan. 2021. “From the 'Four Faculties' to YKL [5]: A Brief History of Library Classification in the Nordic Countries, Part 1: Denmark, Norway, and Iceland”. Library & Information History 37, no. 1: 1-34. DOI 10.3366/lih.2021.0044.

Furner, Jonathan. 2022. “From the 'Four Faculties' to YKL: A Brief History of Library Classification in the Nordic Countries, Part 2: Sweden, Finland, and Analysis”. Library & Information History 38, no. 1: 42-68. DOI 10.3366/lih.2022.0098.

Hancock, Andrew. 2022. “Australian and New Zealand Standard Research Classification (ANZSRC)”. ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli, https://www.isko.org/cyclo/anzsrc.

Hedden, Heather. 2016. The Accidental Taxonomist, 2nd ed. Medford, NJ: Information Today.

Heikkilä, Jussi T. S. 2022. “Journal of Economic Literature codes classification system (JEL)”. ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli. https://www.isko.org/cyclo/jel.

Hjørland, Birger. 2007. “Arguments for 'the Bibliographical Paradigm': Some Thoughts Inspired by the new English Edition of the UDC”. Information Research 12, no. 4. Available at http://informationr.net/ir/12-4/colis/colis06.html.

Hjørland, Birger and Claudio Gnoli. 2020. “Statistical Classification.” (editorial placeholder article). In ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli. https://www.isko.org/cyclo/statistical.

Ibekwe-SanJuan, Fidelia and Geoffrey C. Bowker. 2017. “Implications of Big Data for Knowledge Organization”. Knowledge Organization 44, no. 3: 187-198.

Kerndatensatz Forschung (KDSF). https://www.kerndatensatz-forschung.de/.

Legendre, Ariadne. 2019. “The Development of the Canadian Research and Development Classification”. Knowledge Organization 46, no. 5: 371-379. Also available in ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli. https://www.isko.org/cyclo/crdc.

Martínez-Ávila, Daniel. 2016. “BISAC Subject Headings List”. Knowledge Organization 43, no. 8: 655-62. Also available in ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli. https://www.isko.org/cyclo/bisac.

Mayr, Ernst and Walter J. Bock. 1994. “Provisional Classifications v Standard Avian Sequences: Heuristics and Communication in Ornithology”. Ibis 136, no. 1: 12-18.

OECD. 2015. The Measurement of Scientific, Technological and Innovation Activities: Frascati Manual 2015: Guidelines for Collecting and Reporting Data on Research and Experimental Development. Paris: Organization for Economic Cooperation and Development. http://dx.doi.org/10.1787/9789264239012-en.

Petrovich, Eugenio. 2021. “Science Mapping and Science Maps”. Knowledge Organization 48, no. 7-8: 535–562. Also available in ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli, https://www.isko.org/cyclo/science_mapping.

Poelmans, Hanne. 2021. The Flemish Research Discipline Standard: One Step Towards Harmonised Research Information in Flanders (Presentation, slides available at https://www.iskouk.org/event-4392012).

Scharnhorst, Andrea and Peter Doorn (eds.). 2019. “Special Issue: Research Information Systems and Science Classifications; including papers from “Trajectories for Research: Fathoming the Promise of the NARCIS Classification,” 27-28 September 2018, The Hague, The Netherlands” Knowledge Organization 46, no. 5: 337-97.

Sile, Linda, Janne Pölönen, Gunnar Sivertsen, Raf Guns, Tim C. E. Engels, Pavel Arefiev, Marta Dusková, Lotte Faurbæk, András Holl, Emanuel Kulczycki, Bojan Macan, Gustaf Nelhans, Michal Petr, Marjeta Pisk, Sándor Soós, Jadranka Stojanovski, Ari Stone, Jaroslav Susol and Ruth Teitelbaum. 2018. “Comprehensiveness of National Bibliographic Databases for Social Sciences and Humanities: Findings from a European Survey”. Research Evaluation 27, no. 4: 310–22. DOI: 10.1093/reseval/rvy016.

Sile, Linda, Raf Guns, Frédéric Vandermoere, Gunnar Sivertsen and Tim C. E. Engels. 2021. “Tracing the Context in Disciplinary Classifications: A Bibliometric Pairwise Comparison of Five Classifications of Journals in the Social Sciences and Humanities”. Quantitative Science Studies 2, no. 1: 65–88. https://doi.org/10.1162/qss_a_00110.

Sivertsen, Gunnar. 2018. “The Norwegian Model in Norway”. Journal of Data and Information Science 3, no. 4: 2-18. DOI: 10.2478/jdis-2018-0017.

Sivertsen, Gunnar. 2019. “Developing Current Research Information Systems”. In Springer Handbook of Science and Technology Indicators, eds. Wolfgang Glänzel, Henk F. Moed, Ulrich Schmoch and Mike Thelwall. Cham, Switzerland: Springer, 667-83.

Smiraglia, Richard P. 2019. “Trajectories for Research: Fathoming the Promise of the NARCIS Classification”. Knowledge Organization 46 no. 5: 337-344. DOI: 10.5771/0943-7444-2019-5-337.

Trkulja, Violeta, Juliane Stiller, Sophie Biesenbender and Vivien Petras. 2022. “Eine interdisziplinäre Forschungsfeldklassifikation für die Wissenschaft”. Information, Wissenschaft & Praxis 73, nos. 2-3: 75-83. DOI: 10.1515/iwp-2021-2209.

Vancauwenbergh, Sadia and Hanne Poelmans. 2019. “The Flemish Research Discipline Classification Standard: A Practical Approach”. Knowledge Organization 46, no. 5: 354-63. DOI: 10.5771/0943-7444-2019-5-354.

Wikipedia: Die freie Enzyklopädie. “Kerndatensatz Forschung”. Retrieved 2022-06-02 from: https://de.wikipedia.org/wiki/Kerndatensatz_Forschung.

Wikipedia: the free Encyclopedia. “NARCIS (Netherlands)”. https://en.wikipedia.org/wiki/NARCIS_(Netherlands).




Version 1.0 published 2022-06-15, last edited 2022-10-13

Article category: KO in different contexts and applications

This editorial article is not peer-reviewed and is not being published in the journal Knowledge Organization. How to cite it:
Hjørland, Birger and Claudio Gnoli. 2022. “Research Classification System”. Preliminary editorial placeholder article. In ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli, https://www.isko.org/cyclo/research.

©2022 ISKO. All rights reserved.