ISKO

edited by Birger Hjørland and Claudio Gnoli

 

Review article

by

(This article, version 1.0, is a version of an article published in 2023, see colophon.)

Table of contents:
1. Introduction
2. Methodology
3. History of review literature
    3.1 Research synthesis
    3.2 History of ARIST
4. Taxonomy of review literature
5. Uses and users of reviews
6. Process of preparing reviews
    6.1 Roles for information professionals
7. Assessment of review quality and impact
    7.1 Assessment of review quality
    7.2 Assessment of review impact
8. Impact of information technology
    8.1 Information technology support for specific tasks
    8.2 Adoption of technology support
    8.3 Applications of technology
    8.4 Artificial intelligence technologies employed
9. Research opportunities for information science
    9.1 Reviews and reviewing in library and information science
    9.2 Research agendas
10. Conclusion
Acknowledgments
References
Colophon

Abstract:
Reviews have long been recognized as among the most important forms of scientific communication. The rapid growth of the primary literature has further increased the need for reviews to distill and interpret the literature. This article encompasses the evolution of the review literature, taxonomy of review literature, uses and users of reviews, the process of preparing reviews, assessment of review quality and impact, the impact of information technology on the preparation of reviews, and research opportunities for information science related to reviews and reviewing. In addition to providing a synthesis of prior research, this article seeks to identify gaps in the published research and to suggest possible future research directions.


1. Introduction

In a review of volume 33 of the Annual Review of Information Science and Technology (ARIST), Hjørland (2000) observed that ARIST had never included a chapter devoted to research on reviews despite the importance of reviews as resources in addressing information overload. This paper seeks to fill that gap by reviewing key aspects of research on both the process of reviewing the literature and the resulting products in the form of reviews [that is review articles, as distinct from e.g. book reviews]. The topic became particularly salient in the past few years during the COVID-19 pandemic when existing approaches to synthesizing findings from the primary literature could not keep pace with the flood of publications (Khalil et al. 2022b).

Characterization of review articles in The Scientific Literature: A Guided Tour (Harmon and Gross 2007, 190) provides a starting point for this review: “Review articles describe and evaluate the recent literature in a field. Unlike the other types, however, these articles are written solely by means of the study of other texts”. Following a discussion of the methodology used in this review, sections cover history (the evolution of the review literature), taxonomy (classes of review literature), uses and users of reviews, the process of preparing reviews, assessment of review quality and impact, the impact of information technology, and research opportunities for information science.


2. Methodology

The challenge in searching for relevant literature to include in this review article arises from the need to distinguish research about literature reviews and reviewing from the far more numerous studies that are themselves published review articles. A further complication is the development of new terms for processes (e.g., evidence synthesis) and products (e.g., research syntheses) that need to be searched to encompass the research of interest. Multiple strategies were therefore needed to identify publications for study and inclusion in this review. The Handbook of Research Synthesis and Meta-Analysis (Cooper et al. 2019) includes a chapter on “Scientific Communication and Literature Retrieval” that identifies five major modes of searching: footnote chasing, consultation of experts, searches in subject indexes, browsing, and citation searches (White 2019, 59). All but consultation were used for this review.

In most databases, specific subject headings for the topic were lacking. PubMed included the searchable subject heading “Review Literature as Topic”, but in Library & Information Science Source, INSPEC, ERIC, and Web of Science, title keyword searches for “review articles” provided the starting point. To remove dependence on specific vocabulary terms, the bibliographies of relevant articles supported footnote chasing, and the articles themselves (e.g., Virgo 1971) were starting points for citation searching in Web of Science and Google Scholar. Hand searching (browsing tables of contents) of recent issues of key journals, in both the methodological literature (e.g., Journal of Clinical Epidemiology, Research Synthesis Methods, Systematic Reviews) and the library and information science literature (e.g., Evidence Based Library and Information Practice, Health Information and Libraries Journal, Journal of the Medical Library Association), brought the search up to date. The literature reviewed was limited to English-language publications, but no specific time limits were applied.

Referring to the typology of review types based on review objectives recently proposed by Haddaway et al. (2023), this review is a narrative introductory review, a “traditional literature review” following past practice in ARIST. In contrast to reviews seeking to accomplish evidence synthesis, introductory reviews follow no formal guidance on method. Publications were selected for inclusion by the author to provide an overview of key topics within reviews and reviewing.


3. History of review literature

The scientific review journal arose in the 19th century (Gross et al. 2002, 199; Vickery 2000, 123), but a significant increase in interest in the production of review articles followed the explosion in scientific information shortly after World War II. Landmark conferences (National Academy of Sciences 1959; Royal Society 1948) and committees (National Academy of Sciences 1969; President's Science Advisory Committee 1963) that were organized to make recommendations to deal with this crisis quickly identified review articles as part of the solution, but with the caution that it could be difficult to find enough qualified people with the time necessary to produce them. In his thorough overview of review literature, Woodward (1974) identified multiple types of sources of reviews that had emerged including annual review publications (e.g., Annual Review of ..., Advances in ..., Progress in ...), journals devoted to or including some reviews, conference proceedings, and technical reports. Challenges in motivating scientists to prepare review articles remained. Branscomb (1975, 603) lamented that “federal science policy seems to make support for review scholarship the stepchild of research support”. Concerns were raised about the quality of the medical review literature, with Mulrow (1987) demonstrating that such reviews did not routinely use systematic methods to identify, assess, and synthesize information. She later concisely characterized the promise of systematic reviews (Mulrow 1994, 597): “Through critical exploration, evaluation, and synthesis the systematic review separates the insignificant, unsound or redundant deadwood in the medical literature from the salient and critical studies that are worthy of reflection”.


3.1 Research synthesis

Given these concerns with the quality of the review literature, the research synthesis movement emerged as a strong counterforce to the older, “narrative” style of reviewing. Cooper et al. (2019, 535) offer this glossary definition:

Research synthesis. A review of a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant research, and to collect and analyze data from the studies that are included in the synthesis. Statistical methods (meta-analysis) may or may not be used to analyze and summarize the results of the included studies. Systematic review is a synonym of research synthesis.

Hong and Pluye (2018) identify three main periods in the evolution of systematic reviews: foundation period (1970–1989), institutionalization period (1990–2000), and diversification period (2001–present). Characteristics of the foundation period included increasing recognition of the need to apply explicit, transparent, and rigorous methods to enhance the validity of reviews; expansion of randomized controlled trials (RCTs) in medical research; computerization of bibliographic databases; and the development of aggregative synthesis methods to combine results of studies. The institutionalization period was characterized by the growing use of evidence for decision-making by practitioners; establishment of several organizations dedicated to production of systematic reviews (e.g., Cochrane Collaboration for health sciences; Campbell Collaboration for social sciences); development of guidelines for systematic reviews and tools to facilitate their production (e.g., software for managing bibliographic references, coding studies, performing meta-analysis); and expansion from synthesis of quantitative data to include qualitative and mixed data. Finally, some characteristics of the most recent period include diversification of synthesis methods and types of reviews as well as users of those reviews; rapid review to speed production; and applications of automation including natural language processing and machine learning to the selection of relevant studies, appraisal of the quality of studies, and extraction of data.

Chalmers et al. (2002) noted that the expansion of research synthesis in social science and medical research was by then coupled with interest in research evidence among policy makers, practitioners, and the public more generally. By 2018 Clarke and Chalmers could observe that “reviews are now widely accepted as the most reliable source of knowledge from research” (Clarke and Chalmers 2018, 121). As this brief historical overview makes clear, syntheses written since the 1990s have been held to standards far more demanding than those applied to their predecessors. Slebodnik et al. (2022b) found growing numbers of reviews in social science, environmental science, business, computer science, and engineering, and they recommend more research to determine whether reviews outside the health sciences follow accepted standards. Lemire et al. (2023) identify three waves of evidence reviews over the past 50 years (meta-analysis, mixed-methods synthesis, core components) and discuss their benefits and limitations in the context of policy analysis. They cite three trends likely to inform future directions of evidence reviews: using data science tools, embedding an equity focus, and translating research into practice.

Sheble (2017) observes that motivations for research synthesis include the translation of research-based knowledge to inform practice and policy decisions and the integration of relatively large and diverse knowledge bases to yield novel insights. She provides two histories of the diffusion of research synthesis methods: a narrative history based primarily in the health and social sciences; and a bibliometric overview across science broadly. She found that engagement with research synthesis was strongly correlated with evidence-based practice, extending from medicine to other practice disciplines, including nursing, social work, and librarianship. She also found that research synthesis methods have contributed to changes in the practice and use of research in diverse fields across science.


3.2 History of ARIST

The history of ARIST provides an example of the development of an annual review publication. Carlos Cuadra, who became the first editor of ARIST, noted that “in order to make rapid progress toward a true information science, we will need not only better general bibliographies and textbooks but better guideposts to significant issues, findings and unsolved problems” (Cuadra 1964, 295). He advocated for the initiation of an annual review series to provide a critical analysis of developments and progress, with individual chapters authored by distinguished workers in the field. Heilprin (1988) provides the history leading up to the publication of the first volume of ARIST in 1966. Smith (2012) documents the selection of the three editors (Carlos A. Cuadra, Martha E. Williams, Blaise Cronin) and their contributions over the span of 45 years. Her paper considers both the intellectual work of shaping the contents of each volume and the production work of editing, copyediting, proofreading, and indexing. Hjørland (2000, 684) spoke for many information scientists with his assessment that:

ARIST is a most important institution in IS, and we would not be well off without it. It is the single most important source of the state of the art in IS, and the old volumes are extremely helpful for researchers in this field, and the cumulative index in the back of each volume provides a valuable access to earlier volumes.

The decision to cease publication after 45 years reflected declines in sales of print volumes, readers' interest in rapid access, and difficulty in recruiting chapter authors (Cronin 2011). ARIST was relaunched in November 2021 with a call for new submissions of individual review articles in digital form.


4. Taxonomy of review literature

Multiple researchers have sought to make sense of the growing variety of reviews both within the health sciences and in other subject domains. The large number of review types illustrates that the extent to which a review is “systematic” lies on a continuum. In an extensive synthesis drawing on 15 prior typologies, Sutton et al. (2019) couple classification of health-related reviews with recommendations on appropriate methods of information retrieval for gathering the evidence to be reviewed. Forty-eight review types were identified and categorized into seven families; however, many review types lack explicit requirements for the identification of evidence. The seven families are: traditional review, systematic review, review of reviews, rapid review, qualitative review, mixed methods review, and purpose-specific review (e.g., policy review, technology assessment review, scoping review, methodological review). For example, a scoping review seeks “to explore and define conceptual and logistic boundaries around a particular topic with a view to informing a future predetermined systematic review or primary research” (Sutton et al. 2019, 211). Campbell et al. (2023) distinguish among mapping reviews, scoping reviews, and evidence and gap maps, which share the aim of describing a bigger picture rather than addressing a specific question.

Although health-related review typologies are the most fully developed, several of these review types are also found in other fields. For example, Raitskaya and Tikhonova (2019) discuss the extension of scoping reviews from the health sciences to the social sciences and humanities. Paré et al. (2015) first define a typology of review types in information systems and then analyze their prevalence in the published literature. Categories defined include: narrative, descriptive, scoping, meta-analyses, qualitative systematic, umbrella, theoretical, realist, and critical. Schryen and Sperling (2023) propose a taxonomy for reviews in operations research, distinguishing nine types: scoping, selective, tutorial, theoretical, algorithmic, computational, meta-analyses, qualitative systematic, and meta.

Breslin and Gatrell (2023) use a miner-prospector continuum metaphor to describe the various approaches researchers can use to leverage reviews for theory building. These approaches include (from the miner to prospector ends of the continuum) “spotting conceptual gaps, organizing and categorizing literatures, problematizing the literature, identifying and exposing contradictions, transferring theories across domains, developing analogies and metaphors across domains, blending and merging of literatures across domains, and setting out new narratives and conceptualizations” (Breslin and Gatrell 2023, 145). Miners position their contributions within a bounded and established domain, while prospectors step outside disciplinary boundaries and introduce novel perspectives.

The latest development, accelerated as a result of the COVID-19 pandemic, is the proposal for living systematic reviews (LSRs). According to Elliott et al. (2014), such reviews are high quality, up-to-date online summaries of health research, updated as new research becomes available, and enabled by improved production efficiency. This initiative produces ready-to-go evidence summaries that serve the needs of decision makers because they are both rigorous and up to date (Elliott et al. 2021). Research is essential to understand how living evidence can best serve policy and practice across diverse domains. Heron et al. (2023) address practical aspects of developing living systematic reviews, with recommendations at each step drawing from their own experience. Given this new type of review, Khabsa et al. (2023) offer guidance on reporting LSRs. Additional research and evaluation are needed to understand and address challenges faced by authors, editors, publishers, and users of LSRs.

Citing what they characterize as a “Pandora's box of evidence synthesis” with a “plethora of overlapping evidence synthesis approaches”, Munn et al. (2023a, 148) highlight the limitations of existing typologies of evidence synthesis that remain static, not up-to-date, and with minimal links to supporting materials. As an alternative, they advocate development of a continuously updated framework to name and categorize the different approaches to evidence synthesis. This aligns with the position statement from Evidence Synthesis International (Gough et al. 2020) in recognizing the need to develop and share terminology and methodology consistently “while acknowledging the need for ‘fit for purpose’ approaches for diverse evidence requirements” (Munn et al. 2023a, 149). Munn et al. (2023b) have developed a scoping review protocol to identify all the available classification systems, typologies, or taxonomies; how they were developed; their characteristics; and the types of evidence synthesis included within them.


5. Uses and users of reviews

As Woodward (1977) observed, given the apparent usefulness of reviews and their high cost of production due to the labor required, there has been surprisingly little research into the uses made of reviews. He differentiated what he termed “historical functions” (peer evaluation of published papers; collection of information from different sources; compaction of existing knowledge; superseding of primary papers as the written record; identification of emerging specialties; direction of research into new areas) from “contemporary functions” (informed notification of the published literature; current awareness of related fields; back-up to other literature searching; searching for alternative techniques; initial orientation to a new field; as teaching aids). A given user may turn to published reviews because of an interest in one or more of these functions.

The publisher Annual Reviews (n.d.), which produces annual reviews in more than 45 scientific disciplines from analytical chemistry to vision science, characterizes their readers as follows: researchers who want to keep abreast of their field and integrate this information with their own activities; researchers who want an introduction to new fields, with a view to developing an interface between different areas of research; students at all levels who want to gain a thorough understanding of a topic; and business people, journalists, policy makers, practitioners, patients and patient advocates, and others who wish to be informed about developments in research. Early in the history of ARIST, Cuadra (1971) reported findings from a survey that documented the ways in which ARIST had been most often used: keeping up with current work in peripheral areas of interest; keeping up with current work in one's own areas of interest; learning about a new area; checking on particular projects or ideas. Not surprisingly, he noted that ARIST served different uses for people at different professional levels and with different responsibilities.

More recently, the traditional role of reviews in mapping research activity and consolidating existing knowledge has been supplemented by an emphasis on the ways in which reviews can support evidence-based decision making. As such, their potential health-related uses include: for clinicians, to integrate research findings into their daily practice; for patients, to make well-informed choices about their own care; for professional medical societies and other organizations, to develop clinical practice guidelines; and for agencies, to support health technology assessments. Evidence synthesis can also address questions of policy and practice in education, economics, environment, criminal justice, and more. The Royal Society and The Academy of Medical Sciences (2018) have articulated guidelines for evidence synthesis for policy, emphasizing that such efforts should be inclusive, accessible, rigorous, and transparent. Kunisch et al. (2023a) report that stand-alone review articles have become an important form of research in the field of business and management. They note that analyses and syntheses of prior research can develop new knowledge of value to academia, practice, and policy making, and they characterize a range of review purposes: classifying, representing, problematizing, configuring, aggregating, integrating, interpreting, and explaining.

Within library and information science, over the past two decades there has been growing emphasis on evidence-based library and information practice. Eldredge (2000) demonstrated how the core characteristics of evidence-based health care could be adapted to health sciences librarianship. He emphasized the importance of drawing on the best available research evidence to inform practice. Related research is regularly reported in the open access journal Evidence Based Library and Information Practice and periodic Evidence Based Library and Information Practice conferences.


6. Process of preparing reviews

Despite the differences in procedures across various types of literature reviews, Xiao and Watson (2019) identify a consistent set of steps that should be followed: formulating the research problem, developing and validating the review protocol, searching the literature, screening for inclusion, assessing quality, extracting data, analyzing and synthesizing data, and reporting findings. The review protocol should describe all the elements of the review, including the purpose of the study, research questions, inclusion criteria, search strategies, quality assessment criteria and screening procedures, strategies for data extraction, synthesis, and reporting. The step of searching the literature can be further subdivided (Gore and Jones 2015): selecting which databases and gray literature sources will be searched; developing the search strategy for the primary database; translating the search strategy from the primary database search to other databases specified in the protocol; exporting bibliographic records from multiple databases to citation software; removing duplicate records; hand searching journals and the reference lists of articles; conducting citation searching for key articles; and searching the gray literature to include unpublished results.
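
To make the scale of these steps concrete, the short Python sketch below (purely illustrative, with invented counts and labels) tallies how a set of retrieved records might shrink at each stage; in practice such counts are reported in a PRISMA-style flow diagram rather than computed this way.

    # Illustrative only: invented counts tracking records through search and
    # screening stages, in the spirit of a PRISMA-style flow of records.
    flow = [
        ("records retrieved from database searches", 1200),
        ("additional records from gray literature and hand searching", 80),
        ("records remaining after duplicates removed", 950),
        ("records remaining after title/abstract screening", 120),
        ("studies included after full-text screening", 35),
    ]
    for label, count in flow:
        print(f"{count:5d}  {label}")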

There are many guides documenting best practices for preparing reviews, ranging from book-length handbooks (e.g., Booth et al. 2022; Cooper et al. 2019; Higgins et al. 2022) to individual articles for specific disciplinary audiences (e.g., James et al. 2016, in environmental science; Siddaway et al. 2019, in psychology). Because successful completion of the various steps may require different types of expertise, an initial step may include establishing a review team that can bring together multiple skill sets to successfully produce an evidence synthesis. These may include experienced systematic reviewers, information specialists, statisticians, and content experts (Uttley and Montgomery 2017). Nicholson et al. (2017) identify collaboration challenges that information specialists may face as team members in completing these reviews and suggest how to overcome them. Time invested in the systematic review process can be significant. Bullers et al. (2018) document time spent on the full range of tasks while Saleh et al. (2014) focus on gray literature searching.

Because errors made in the step of searching the literature can potentially result in a biased or otherwise incomplete evidence base for the review, much attention has been directed toward systematic searching (Levay and Craven 2019). As Lefebvre (2019) explains, the focus on searching for published studies solely from bibliographic databases has been augmented with a recognition of the importance of also considering unpublished studies and those published in gray literature (conference papers, official publications, reports from non-governmental organizations, dissertations, and theses). In particular, web searching may be used to identify sources of gray literature, with the need to adapt strategies to the limitations of web search engines such as Google (Briscoe et al. 2020; 2023). Future studies should test how different approaches to web searching affect the results retrieved and their overall contribution to the findings and conclusions of systematic reviews. Belter (2016) explores the strengths and limitations of citation analysis as a literature search method for systematic reviews, demonstrating the value of exploiting four types of relationships: cited and citing articles as well as co-citing and co-cited articles. Hirt et al. (2023) completed a scoping review to explore the use of citation tracking for systematic literature searching, finding that citation tracking showed added value in most studies but recommending against using it as the sole mode of searching.
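
The four citation relationships Belter (2016) exploits can be made concrete with a small worked example. The Python sketch below is illustrative only; the article identifiers and the citation edge list are invented, and a real analysis would draw these pairs from a citation database.

    # Illustrative sketch (not Belter's own procedure): expand a set of seed
    # articles using four citation relationships over a hypothetical edge list.
    from collections import Counter

    citations = [("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"),
                 ("E", "A"), ("E", "B"), ("F", "A")]   # (citing_id, cited_id)
    seeds = {"A"}

    cited_by_seeds = {c for s, c in citations if s in seeds}   # backward chasing (cited)
    citing_seeds = {s for s, c in citations if c in seeds}     # forward chasing (citing)

    # Co-cited: articles cited alongside a seed in the same reference list
    co_cited = Counter(c for s, c in citations if s in citing_seeds and c not in seeds)

    # Co-citing: articles whose reference lists overlap with the seeds' references
    refs_of = {}
    for s, c in citations:
        refs_of.setdefault(s, set()).add(c)
    seed_refs = set().union(*(refs_of.get(s, set()) for s in seeds))
    co_citing = {s for s, refs in refs_of.items() if s not in seeds and refs & seed_refs}

    print("cited:", cited_by_seeds, "citing:", citing_seeds)
    print("co-cited:", co_cited.most_common(), "co-citing:", co_citing)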

Searches require careful planning to take into account the characteristics of different databases and interfaces (McGowan and Sampson 2005). Search strategies are often complex, with many combinations of Boolean logic, wildcards, and adjacency/proximity instructions. Bramer et al. (2018) describe a step-by-step process of developing comprehensive search strategies. Because not all database platforms are designed for complex searching, Bethel and Rogers (2014) have developed a checklist of criteria to assess the ability of host platforms to cope with complex searching. Craven et al. (2014) demonstrate the lack of consistency in searching the same database across different interfaces and the need to take these differences into account in formulating search strategies. Gusenbauer and Haddaway (2020) compare the systematic search capabilities of 28 widely used academic search systems, including Google Scholar, PubMed, and Web of Science. They found that 14 of the 28 systems met their performance requirements, while the other 14 had limitations in query formulation, the correct interpretation of queries by the system, data retrieval capabilities, or the reproducibility of searches. Based on interviews with systematic searchers, Hickner (2023) identifies priorities for improving search systems.
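
Because the same conceptual strategy must often be re-expressed for each interface, one way to manage translation is to represent each concept block once and render it per platform. The Python sketch below illustrates the idea; the field tags and rendering functions are simplified approximations invented for this example, not complete or authoritative syntax for Ovid, PubMed, or any other platform.

    # Illustrative only: one structured representation of the concept blocks,
    # rendered in loosely Ovid-like and PubMed-like syntaxes.
    concepts = {
        "population": ["adolescen*", "teenager*", "youth"],
        "review method": ["systematic review", "evidence synthesis"],
    }

    def quote(term):
        return f'"{term}"' if " " in term else term

    def render_ovid_like(terms):
        return "(" + " OR ".join(f"{quote(t)}.ti,ab." for t in terms) + ")"

    def render_pubmed_like(terms):
        return "(" + " OR ".join(f"{quote(t)}[tiab]" for t in terms) + ")"

    for name, terms in concepts.items():
        print(name)
        print("  Ovid-like:   ", render_ovid_like(terms))
        print("  PubMed-like: ", render_pubmed_like(terms))

    # The full strategy combines the concept blocks with AND:
    print(" AND ".join(render_pubmed_like(t) for t in concepts.values()))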

Although data extraction is a prerequisite for analyzing and interpreting evidence in systematic reviews, guidance is limited and little is known about current approaches. Büchter et al. (2023) surveyed systematic reviewers on their approaches to data extraction, opinions on methods, and research needs. Based on the results of their survey, they conclude that there is a need for developing and improving support tools for data extraction and evaluating them, including (semi-) automation tools.


6.1 Roles for information professionals

There is a growing body of literature examining information professional involvement in the systematic review process. Wilson and Farid (1979, 144) anticipated the potential for information professionals to go beyond enhancing bibliographical and physical access to the research literature by working in teams with other experts for the preparation of a work of synthesis: “The goal would be to increase access to the usable content of the literature by provision of more effective substitutes for the use of the literature itself”.

Through a scoping review encompassing 310 different articles, book chapters, and presented papers and posters, Spencer and Eldredge (2018) identified 18 core roles for librarians in systematic reviews. They summarize each role and provide an accompanying bibliography of references. Roles identified include: citation management, collaboration, de-duplication of search results, evaluation of search strategies, formalized systematic review services, impact and outcomes, indexing of database terms, peer review of search strategies, planning, question formulation, reporting and documentation, research agenda, search filters and hedges, searching (databases and other resources, general, gray literature, protocol development, search strategies, subject- or topic-specific searches, other), source selection, systematic reviews in librarianship, teaching, and technological and analytical tools.

Given the diversity in roles that can be filled, some libraries have established the new position of systematic review librarian (Cooper and Crum 2013). The Medical Library Association (2023) has been particularly active in providing training through development of a Systematic Review Services Specialization continuing education program. Launched in May 2022, the specialization continues to add courses. The initiative was inspired by the competency framework for librarians involved in systematic reviews (Townsend et al. 2017), with the stated competency:

Health and biomedical information professionals with competency in Systematic Reviews use a range of information retrieval and management knowledge and skills to support users and researchers. They promote the use of established guidelines, best practices, ethical synthesis practices, and transparent reporting in the service of high quality, reproducible scientific and biomedical research (Medical Library Association 2023).

The Medical Library Association Systematic Reviews Caucus (2023) supports and educates librarians involved in systematic reviews and meta-analyses and maintains a list of resources useful for all librarians supporting or conducting systematic reviews. A book-length work sponsored by the Medical Library Association (Foster and Jewell 2022) covers the librarian role in the phases of a review project as well as guidance for developing and running a systematic review service.

Information professionals continue to respond to new opportunities and challenges. Morris et al. (2016) sought to determine the role of the librarian in supporting scoping (in contrast to systematic) reviews. They suggest that librarians have much to contribute at two particularly important steps: the initial formulation of the research question and the related need to balance breadth and depth, systematicity, and comprehensiveness in developing the database search strategy. Charbonneau and Vela (2023) completed an environmental scan to document librarian contributions to evidence synthesis programs addressing the COVID-19 “infodemic”. They identified a range of library evidence services including preparing evidence summaries, curating research on COVID-19, providing expert searching services to monitor the evolving research evidence, and helping individuals to build critical appraisal skills to assess the increasing amounts of COVID-19 information. Premji et al. (2021) undertook a scoping review to identify how knowledge synthesis methods are being taught and to identify particularly challenging concepts or aspects. In addition to direct instruction, librarians can develop web-based LibGuides (e.g., Cornell University 2023). Based on a content analysis of systematic review online library guides, Lee et al. (2021) identified a significant opportunity for librarians to turn their guides on systematic reviews into practical learning tools through development and assessment of online instructional tools to support student and researcher learning.

Although systematic review services originated in health sciences libraries, a growing number of academic libraries beyond the health sciences have established services for systematic reviews and other types of evidence synthesis (Laynor and Roth 2022; Slebodnik 2022a). Lê et al. (in press) have completed a survey of librarians in Association of Research Libraries and Canadian Association of Research Libraries institutions benchmarking librarian support of systematic reviews in the sciences, humanities, and social sciences. They found increased demand for this support but also noted the challenges of developing needed expertise and ensuring scalability of a labor-intensive service. Ghezzi-Kopel et al. (2022) describe the role of librarians as methodology consultants and co-authors on evidence synthesis projects in agriculture. Kocher and Riegelman (2018) have identified resources useful to librarians assisting with systematic reviews in a broad range of disciplines outside the biomedical sciences. Their compilation includes research network organizations; reporting guidelines; registering protocols; tools; transparency and reproducible search methodology; disciplinary resources; search tools; and sources for gray literature.

With information professionals often serving as review team members, there is concern that their contributions are not sufficiently credited. Ross-White (2021) uses the concept of “invisible labor” to characterize this lack of acknowledgement. Brunskill and Hanneke (2022) analyzed the documented role of a librarian in published systematic reviews and meta-analyses with registered protocols mentioning librarian involvement. They found that librarians' contributions were often described with minimal, or even no, language in the final published review, with infrequent designation as co-authors. Research has demonstrated distinct benefits to close involvement of information professionals as team members. For example, Aamodt et al. (2019) found that librarian co-authored systematic reviews were associated with a lower risk of bias while Rethlefsen et al. (2015) found that librarian co-authors correlated with higher quality reported search strategies. Waffenschmidt and Bender (2023) argue that the inclusion of information specialists to select information sources, develop search strategies, conduct the searches, and report the results increases the trustworthiness of systematic reviews.


7. Assessment of review quality and impact

The increasing emphasis on the role of reviews in evidence-based decision-making calls for research on factors that may affect review quality. In addition, given the expense, time, and effort involved in producing reviews, it is important to better understand their impact.


7.1 Assessment of review quality

Ioannidis (2016) expressed concern that the production of systematic reviews and meta-analyses has reached epidemic proportions, but in his judgment many are unnecessary, misleading, and/or conflicted. Uttley et al. (2023) report findings from an assessment of literature identifying problems with systematic reviews, finding flaws in conduct, methods, and reporting. To address such concerns the Institute of Medicine (2011) recommends 21 standards for developing high-quality systematic reviews. These standards encompass initiating a systematic review; finding and assessing individual studies; synthesizing the body of evidence; and reporting systematic reviews. Puljak and Lund (2023) identify a number of approaches to reduce the proliferation of redundant systematic reviews. Kolaski et al. (2023) have developed a concise guide to best practices for evidence synthesis to improve the reliability of systematic reviews.

Beyond medicine, Templier and Paré (2015) propose a framework for guiding and evaluating literature reviews in information systems, covering the process of conducting reviews and the output of a review paper. Guidelines cover formulating the problem, searching the literature, screening for inclusion, assessing quality, extracting data, and analyzing and synthesizing data. They subsequently completed a study analyzing 142 information systems review articles, finding inadequate reporting of the methods, procedures, and techniques used in the majority of reviews (Templier and Paré 2018). Brocke et al. (2015) also provide recommendations for literature reviews in information systems research. In the domain of business research, Snyder (2019) outlines questions to consider in conducting a literature review as well as in assessing the quality of the resulting article. Guidelines span design, conduct, data abstraction and analysis, and structuring and writing the review.

A systematic review can start with a protocol, a plan that encompasses every stage of the review process. PROSPERO (Schiavo 2019) is an international database where authors can post their protocol and officially register it so that other authors know that they have embarked on a systematic review project and how they intend to accomplish their goal. Registration is voluntary but encouraged by many reporting guidelines as it promotes transparency in the conduct and reporting of the review. Elements of a protocol could include review title; anticipated start and completion date; research team members and roles; review question; search strategy; inclusion and exclusion criteria for study screening; data extraction strategy; plan for risk of bias assessment and critical appraisal; and dissemination plans.

To provide guidance to authors of systematic reviews, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting standard has been developed. The most recent update, PRISMA 2020, includes a checklist with 27 items to enhance transparency in reporting why the review was done, what the authors did, and what they found (Page et al. 2021). The difference between the PRISMA Statement, which guides reporting, and guidelines detailing methodological conduct can be illustrated with the following example: the PRISMA Statement recommends that authors report their complete search strategies, but it does not include recommendations for designing and conducting literature searches. The need for guidance in reporting extends beyond the medical domain. Brocke et al. (2009) found a lack of thorough documentation of the literature search in reviews in the information systems field. In the environmental sciences, Oberg and Leopold (2019) express concern that the majority of journals do not provide any specific requirements or quality criteria for review papers.

An approach to ensuring the quality of search strategies is peer review by another person such as an information specialist librarian with search expertise. McGowan et al. (2016) describe the Peer Review of Electronic Search Strategies (PRESS) guidelines statement to facilitate peer review to identify any search errors as well as to suggest possible enhancements leading to retrieval of additional studies. Peer review encompasses assessment of how the research question has been translated into searchable concepts and components, effective use of Boolean and proximity operators, appropriate use of subject headings and free text searching, whether there are any mistakes in spelling, syntax or line numbers, and appropriate use of limits and filters. Neilson (2021) sought to determine the level of adoption of PRESS in published studies over the period 2009 to 2018 but found reported use of PRESS was low. Leaders of international health library associations (Iverson et al. 2021) urged the International Committee of Medical Journal Editors to seek information specialists as peer reviewers for knowledge synthesis publications with the goal of more thoroughly evaluating the reported search strategies.

Evaluation of search strategy effectiveness can contribute to assessment of review quality. Salvador-Oliván et al. (2019) analyzed errors in search strategies found in 137 published systematic reviews. There was a high frequency of errors with many affecting recall due to missing terms in both natural language and controlled vocabulary. They conclude that to improve the quality of searches and avoid errors, it is essential to plan the search strategy carefully, choosing all appropriate terms and making use of truncation to retrieve term variants.

Review quality is dependent on characteristics of the evidence cited. Brown (2004, 35) noted the importance of the Annual Reviews series “as a portal to stable and reliable information” and expressed concern that citations to digital publications lacked the same permanence as printed publications. Although not specific to review articles, both Ott (2022) and Howell and Burtis (2022) documented the problem of link rot in published journal articles, expressing concern that the inability to retrieve a reference makes statements unsupported and unverified. Craigle et al. (2022) advocated for inclusion of DOIs as part of entries in reference lists in order to counter link rot.

Other potential threats to evidence quality include articles appearing in predatory journals (Elmore and Weston 2020) and retracted articles. Predatory journals claim to be legitimate scholarly journals but misrepresent their publishing practices. Hayden (2020) cautions that inclusion of articles in systematic reviews from journals lacking rigorous peer review could distort the evidence base. Barker et al. (2023) carried out a survey to explore attitudes and experiences of experts in evidence synthesis regarding predatory journals. Survey results demonstrated a lack of consensus regarding the appropriate methods to follow when considering including studies from a predatory journal as part of an evidence synthesis project. Similarly, Faggion Jr. (2019) argues for more detailed guidance on inclusion or exclusion of retracted articles in systematic reviews. Brown et al. (2022) examined the citation of retracted publications in systematic reviews in the field of pharmacy. Reasons for retraction of cited publications included data falsification or manipulation, ethical misconduct including plagiarism, and concerns about errors in data or methods. They concluded that further analysis of systematic reviews citing retracted publications is needed to determine the impact of flawed data. Schneider et al. (2022) report recommendations from an initiative to reduce inadvertent citation of retracted publications, including developing an approach to ensure the public availability of consistent, standardized, interoperable, and timely information about retractions. Another threat to systematic review quality is publication bias, when a study does not result in a publication that can be included as part of the review. This non-publication impacts the ability to accurately synthesize the evidence in a given area (DeVito and Goldacre 2019).


7.2 Assessment of review impact

Cuadra et al. (1968) undertook an early effort to assess the impact of reviews by gathering responses from a sample of users and non-users of Volume 1 of ARIST. Even at that early stage, they found that the impact was already evident with users reporting re-examination of cited literature, seeking new cited literature previously unknown to them, and seeking contact with authors of cited literature. Although they advocated continuing empirical study of the effectiveness and impact of review publications, this was a one-time study.

More common efforts to gauge the impact of review articles depend on bibliometric methods. In a review of factors associated with or predicting more highly cited or higher quality journal articles, Kousha and Thelwall (2023) note that review articles tend to be more cited than other research articles, although there are some disciplinary differences. McMahan and McFarland (2021) sought to analyze the impact of review articles on the publications they cite, focusing on citation and co-citation as indicators of scholarly attention. Analyzing publications from the Annual Reviews series, they found that many papers cited by review articles experienced a loss in future citations as the review article was subsequently cited instead of the specific articles mentioned in the review. In this way, reviews serve to curate, synthesize, and simplify the literature concerning a research topic, bringing increased attention to a research area as a coherent whole. In a study in the field of biomedical research, Lachance et al. (2014) did not find that citation in a review article led to a decline in a paper's later citation count, so the effect of citation in a review article on subsequent citation may be domain dependent.

Although review articles tend to be more highly cited than original reports of research, this is not always the case. Wagner et al. (2021) sought to determine what factors distinguish those reviews that are highly cited from those that are less impactful in information systems. They found that the degree of methodological transparency and the development of a detailed research agenda distinguish high impact reviews. In the domain of medicine, ongoing citation of systematic reviews is not necessarily warranted. Hoffmeyer et al. (2023) analyzed citations to Cochrane reviews, noting that reviews that have not been updated for more than 5.5 years are considered inactive. They found that many reviews continued to be cited even though they were not being updated, which could be problematic if they do not represent the most up-to-date evidence.

The high frequency of citation to review articles raises questions about how they should be handled in bibliometric analyses. Miranda and Garcia-Carpintero (2018) undertook an analysis of the overcitation and overrepresentation of review papers among the most cited papers of the 35 largest subject categories in Science Citation Index-Expanded. They found that the average number of citations received by reviews varies by subject area, with reviews overall receiving 2.95 times the citations of original research articles. This is also associated with an overrepresentation of reviews among the most highly cited papers. The authors intended their analysis to serve as a basis for deciding whether review articles should be counted together with original research articles in citation analysis. When considering the impact of an individual researcher's work, they caution that the share of reviews could affect citation counts and therefore measures such as the h-index. Lei and Sun (2020) demonstrate the effect of including review articles in journal impact factor calculations: impact factors would decrease if review articles were removed from consideration.
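
For readers unfamiliar with the calculation, the standard two-year journal impact factor can be written (in LaTeX notation; this is the generic definition, not a formula specific to Lei and Sun 2020) as:

    \mathrm{JIF}_{y} \;=\;
    \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}
         {\text{citable items (articles and reviews) published in years } y-1 \text{ and } y-2}

Because reviews attract disproportionately many citations per item, excluding them removes more from the numerator than from the denominator, so the ratio falls.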

Given the expectation that evidence synthesis could impact policy and practice, inclusion of reviews as a source of evidence in clinical practice guidelines that support clinical decision-making is an indicator of impact. Korfitsen et al. (2022) analyzed the usefulness of Cochrane reviews as a source of evidence in guidelines published by the Danish Health Authority and found that almost one-third of the evidence-based recommendations used Cochrane reviews to inform clinical recommendations.


8. Impact of information technology

A useful starting point for examining the impact of information technology on the production of systematic reviews is the set of reviews that investigate this topic. In their review of the application of automation to the production of systematic literature reviews, Dinter et al. (2021) analyze 41 publications to identify the objectives of automation studies, the application domains, the steps that have been automated, and the automation techniques employed. Time reduction was the primary goal, and the most common focus was automation of the selection of primary studies, with software engineering and medicine being the two major application domains. They identified specific natural language processing and machine learning techniques used in the studies examined, as well as a gap in the use of deep learning techniques. In their scoping review, Khalil et al. (2022a) sought to identify the reliability and validity of available tools to support the automation of systematic reviews, their limitations, and any recommendations to further improve the use of these tools. Through an analysis of 47 publications, they identified a number of tools of potential use as well as algorithms not yet developed into user-friendly tools. They emphasize the importance of moving from research prototypes to professionally maintained platforms in order to achieve wider impact of automation on the systematic review process.


8.1 Information technology support for specific tasks

Many tools have been developed to support completion of specific tasks in the review process. Amog et al. (2022) describe Right Review, a web-based decision-support tool to guide selection from among 41 knowledge synthesis methods. Five questions are asked to select from among 26 methods for quantitative reviews, and 10 questions to select from among 15 methods for qualitative evidence synthesis. They report that the tool was used by approximately 4600 users worldwide in the first year following its release. Methods suggested by the tool are accompanied by conduct and reporting guidance and open access examples.

Tools can aid different approaches to searching for potentially relevant studies to include in a review. Portenoy (2021) has implemented Autoreview, a framework for building and evaluating systems to automatically select relevant publications for literature reviews, starting from small sets of seed papers. Haddaway et al. (2022) describe Citationchaser, a free and open source tool that allows for rapid forward and backward citation chasing with the goal of improving search comprehensiveness. Pallath and Zhang (2023) describe Paperfetcher, a tool to automate hand searching for systematic reviews. Traditional hand searching requires reviewers to manually browse through a curated list of field-specific journals and conference proceedings to find articles relevant to the review topic. Paperfetcher automates retrieval of article metadata for hand searching by querying one or more databases of bibliographic content. Guimarães et al. (2022) evaluated the accuracy of multiple tools for deduplicating records retrieved from systematic review searches. Three of these tools (Rayyan, Mendeley, and the Systematic Review Accelerator) were sufficiently accurate to be integrated into the deduplication step of systematic reviews.
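
To illustrate why deduplication lends itself to tool support, the following Python sketch shows a minimal approach (invented record structure and values; this is not the algorithm used by Rayyan, Mendeley, or the Systematic Review Accelerator): records are collapsed on a normalized DOI when one is present, and otherwise on a normalized title plus publication year.

    # Minimal deduplication sketch, illustrative only.
    import re

    records = [
        {"title": "Citation Analysis for Systematic Reviews.", "year": 2016, "doi": "10.1000/xyz1"},
        {"title": "Citation analysis for systematic reviews", "year": 2016, "doi": "10.1000/XYZ1"},
        {"title": "Paperfetcher: a tool for hand searching", "year": 2022, "doi": None},
    ]

    def dedup_key(rec):
        if rec.get("doi"):
            return ("doi", rec["doi"].strip().lower())
        norm_title = re.sub(r"[^a-z0-9]+", " ", rec["title"].lower()).strip()
        return ("title", norm_title, rec.get("year"))

    unique = {}
    for rec in records:
        unique.setdefault(dedup_key(rec), rec)   # keep the first occurrence of each key

    print(len(records), "records in,", len(unique), "after deduplication")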

In medicine, systematic reviews often focus on analyzing reports of randomized controlled trials (RCTs) as evidence. Thomas et al. (2021) describe the development and testing of the Cochrane RCT Classifier, which is designed to achieve very high recall. In this case, manual workload can be reduced by using a machine learning classifier with an acceptably low risk of missing eligible studies. The possibility of using crowdsourcing to identify studies reporting RCTs has also been explored (Noel-Storr et al. 2021). Cochrane Crowd, Cochrane's citizen science platform, offers a range of tasks aimed at identifying studies related to health care. An evaluation found that crowdsourcing produced accurate results in identifying reports of randomized trials, offering a way for a large number of people to play a voluntary role in health evidence production. Hartling and Gates (2022) discuss a trial of RobotReviewer, a tool that can perform risk of bias assessments using machine learning and natural language processing. A strength of the trial was its use of a semiautomated approach to assist rather than replace a human. Such semiautomated processes can create efficiencies while preserving the insight of humans that end users expect and trust.
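
The general logic behind screening classifiers tuned for recall can be sketched briefly. The Python example below is illustrative only: it uses scikit-learn with invented training data and is not the Cochrane RCT Classifier itself. The key idea is a deliberately low decision threshold, so that records are excluded only when the model is confident they are not randomized trials, trading precision for the very high recall that screening requires.

    # Illustrative high-recall screening sketch (invented data).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "A randomized controlled trial of drug X versus placebo",
        "Double-blind randomised trial of intervention Y in adults",
        "A qualitative interview study of patient experiences",
        "Retrospective cohort study of outcomes after surgery",
    ]
    labels = [1, 1, 0, 0]  # 1 = report of a randomized trial

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    threshold = 0.1  # a low threshold favors recall over precision
    new_records = ["Pragmatic randomized trial of exercise therapy",
                   "Case report of a rare adverse event"]
    for record, p in zip(new_records, model.predict_proba(new_records)[:, 1]):
        decision = "keep for human screening" if p >= threshold else "exclude"
        print(f"{p:.2f}  {decision}  {record}")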

Those seeking to identify available tools can be aided by resources such as the Systematic Review Toolbox (Johnson et al. 2022). Initially developed in 2014 to collate tools that can be used to support the systematic review process, the May 2022 update provided information about 235 software tools and 112 guidance documents. Software tools are programs or web-based applications to aid the evidence synthesis process. Guidance documents encompass “paper-based” tools such as quality assessment checklists and reporting standards for systematic and other types of review. The toolbox is searchable by type of review project (systematic, rapid, qualitative, scoping, mapping, mixed methods, review of reviews, or other), cost (free, free version available, free trial, or payment required), and review stage (protocol development, search, screening, data extraction, quality assessment, synthesis, report, reference management, stakeholder engagement). Entries include a summary of the tool and a link to the tool. Some entries include additional links to articles about the tool, which sometimes provide technical explanations of how the tool works and evaluations. A search strategy has been developed to enable identification of new tools, tool updates, and evaluations of tools on a regular basis (Sutton et al. 2023). Additional compilations of review production tools can be found at the websites for the Cochrane Community (n.d.), Campbell Collaboration (n.d.), and JBI (n.d.).

The International Collaboration for the Automation of Systematic Reviews (ICASR) was initiated in 2015 with the goal of putting all the parts of the automation of systematic review production together (Beller et al. 2018). At the initial ICASR meeting attended by information specialists, librarians, software engineers, statisticians, a linguist, artificial intelligence experts, and other researchers, a set of principles was articulated with the goal of allowing the integration of work by separate teams and building on their experience, code, and evaluations. Evaluations are needed to ensure that automation tools intended to expedite the systematic review process do not compromise the review quality.


8.2 Adoption of technology support

In principle review tasks amenable to automation include searching, de-duplicating citations, screening of titles and abstracts, retrieving full texts of included studies, data extraction, and even collation of meta-analysis results. Tools remain underused because of a lack of acceptance, lack of knowledge about their existence, and steep learning curves. Another consideration is whether the goal is full automation or a computer-assisted mode of operation. Most tools can only assist a user in completing a task and must be incorporated into the workflow of producing a systematic review.

Scott et al. (2021) investigated systematic review automation tool use by systematic reviewers, health technology assessors, and clinical guideline developers. Data were gathered through an online survey to determine what tools were used and abandoned, how often and when tools were used, the perceived time savings and accuracy, and desired new tools. Respondents reported that tools saved time and increased accuracy. Respondents most often taught themselves how to use the tools and lack of knowledge of tools was the most frequent barrier to tool adoption. Further work is required in training and dissemination of automation tools and ensuring they meet the needs of those conducting systematic reviews.

Laynor (2022) proposed that the librarian role may shift from expert searcher to systematic review automation expert as librarians develop and contribute expertise on automation tools to review teams. In the context of developing living systematic reviews, Grbin et al. (2022) concur that including librarians in project teams can enhance outcomes and lead to continued consultation and collaboration. Dell et al. (2021) argue for the applicability of sociotechnical systems theory in understanding technology adoption. A sociotechnical perspective emphasizes the need for users of evidence synthesis tools to be involved in their design. O'Connor et al. (2019) discuss current barriers to adoption of automated tools, including lack of trust and set-up challenges. More evaluations are needed to build up an evidence base of tool effectiveness.

[top of entry]

8.3 Applications of technology

Applications of technology are enabling the development of new types of reviews and knowledge syntheses. Bendersky et al. (2022) provide a summary of methods reported for developing and reporting living evidence synthesis, and Sarri et al. (2023) suggest how technology could be applied to accomplish living health technology assessments. Living evidence models treat knowledge synthesis as an ongoing endeavor.
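
As a simple illustration of what "ongoing" means in practice, the following Python sketch shows the skeleton of a living-review update cycle: re-running a saved search, comparing the results against records already screened, and queuing only the new ones for screening. The function names and record identifiers are assumptions made for the example and do not correspond to any specific living-review platform.

    from datetime import date

    def run_saved_search(strategy: str) -> set[str]:
        """Placeholder: re-execute the review's search strategy and return record IDs.
        In a real workflow this would query the bibliographic databases."""
        return {"rec-101", "rec-102", "rec-240"}

    def update_living_review(strategy: str, already_screened: set[str]) -> set[str]:
        """Return only the records not yet seen, to be queued for screening."""
        current_hits = run_saved_search(strategy)
        return current_hits - already_screened

    if __name__ == "__main__":
        screened = {"rec-101", "rec-102"}
        new_records = update_living_review("heart failure AND telemonitoring", screened)
        print(f"{date.today()}: {len(new_records)} new record(s) to screen")  # 1 in this toy run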

Antons et al. (2023) introduce computational literature reviews as a new review method that seeks to combine human researchers' insight and judgment with computers' speed and efficiency when synthesizing large bodies of text. Such reviews leverage computational algorithms for text mining and machine learning to support analysis of the content of the text corpus to be reviewed. The goal is to automate tasks that are either inefficient or infeasible for human researchers to perform at scale, such as text preprocessing, and to augment tasks that rely on human understanding and creativity by providing machine-generated stimuli such as data visualizations. Oelen et al. (2021) identify limitations of existing reviews: they provide only a snapshot of the current literature and they lack interpretability by machines. They propose a SmartReview approach to address these weaknesses by working toward semantic, community-maintained review articles. Knowledge graphs are employed to make articles more machine-actionable and maintainable. A prototype of this approach has been implemented in the Open Research Knowledge Graph (ORKG) (orkg.org).
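
As an illustration of the kind of text-mining step a computational literature review might automate, the following Python sketch clusters a toy set of abstracts with TF-IDF and k-means and reports the top terms per cluster as machine-generated stimuli for the human reviewer to interpret. It is a sketch under assumed toy data, not the pipeline of Antons et al. (2023) or of any specific tool.

    # Cluster abstracts and surface candidate themes for human interpretation.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    abstracts = [
        "Machine learning methods for screening titles and abstracts in reviews.",
        "Active learning reduces screening workload in systematic reviews.",
        "Qualitative meta-synthesis of information behaviour studies.",
        "Synthesizing qualitative findings on information literacy instruction.",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(abstracts)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    terms = vectorizer.get_feature_names_out()

    # Print the highest-weighted terms for each cluster centroid.
    for cluster_id, centroid in enumerate(km.cluster_centers_):
        top = centroid.argsort()[::-1][:4]
        print(f"Cluster {cluster_id}:", ", ".join(terms[i] for i in top))

The reviewer, not the algorithm, decides what the clusters mean; the computation only scales up a grouping task that would be infeasible to perform manually across thousands of documents.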

One significant goal for the application of technology is the reduction of time to completion of a systematic review. Nussbaumer-Streit et al. (2021) carried out a scoping review to determine why steps in the systematic review production process were resource intensive and where automation might be most effectively applied to improve efficiency. Their findings indicate that methods and tools to support project management and administration throughout a project, as well as methods and tools to speed up study selection, data extraction, and critical appraisal, could help save resources. In an effort to demonstrate the time savings possible with available tools, Clark et al. (2020) documented completion of a full systematic review in 2 weeks. The team included experienced systematic reviewers with complementary skills (two researcher clinicians, an information specialist, and an epidemiologist), used systematic review automation tools, and blocked off time for the duration of the project. Tools used included Systematic Review Accelerator (for searching, deduplication, and writing), EndNote (for deduplication and finding full texts), RevMan (for data extraction and synthesis), SARA (for finding full texts), and RobotReviewer (for assessing risk of bias). Scott et al. (2023) successfully extended the 2-week systematic review (2weekSR) methodology to larger, more complex systematic reviews. They anticipate that, with the use of automation tools, the time to completion of a systematic review will continue to decrease, giving clinical and policy decision-makers access to more timely systematic review evidence.

Schmidt et al. (2023) explore the potential of software tools for automating living systematic reviews. They identified 11 tools with relevant functionalities and discuss the features of these tools with respect to different steps of the living review workflow, including protocol formulation and scoping, reference retrieval, screening, data extraction and quality/bias assessments, synthesis, and dissemination and visualization of data. They also highlight research gaps and challenges associated with automating each step.

[top of entry]

8.4 Artificial intelligence technologies employed

Much of the recent literature on the impact of information technology on the production of reviews focuses on artificial intelligence (AI) technologies. Blaizot et al. (2022) carried out a systematic review of the use of AI methods for systematic reviews in the health sciences. Their analysis encompassed 12 reviews that used 9 different tools to implement 15 different AI methods, including machine learning, deep learning, neural networks, and other applications used to enable the full or semi-autonomous performance of one or more stages of evidence synthesis development. Most focused on the screening stages of the review, with others addressing data extraction and risk of bias assessment. Although these applications show some promise, further validation is needed to determine whether they enhance the efficiency and quality of evidence synthesis.

Based on a scoping review of 273 publications on the use of AI for automating or semi-automating biomedical literature analyses, Santos et al. (2023) found applications in the assembly of scientific evidence, the mining of the biomedical literature, and quality analysis. Most studies addressed the preparation of systematic reviews rather than the development of guidelines or evidence syntheses. They conclude that further research is needed to fill knowledge gaps on applications of machine learning, deep learning, and natural language processing and to enable use of automation by end users, including both biomedical researchers and healthcare professionals.

Müller et al. (2022) investigated the applicability of AI to different types of literature reviews, considering those that involve qualitative data analysis in addition to the quantitative focus of systematic reviews. They compared AI tools applicable to the design, conduct, analysis, and writing phases of systematic reviews with the potential AI tools that could be used in those same phases in reviews involving more qualitative analysis.

De la Torre-López et al. (2023) prepared a survey of AI techniques proposed in the past 15 years to help researchers conduct systematic analyses of the scientific literature. Through an analysis of 34 studies, they described the tasks currently supported, the types of algorithms applied, the available tools, and the extent of human involvement. Discussion of techniques is organized by phase of the systematic review process: planning, conducting, and reporting. The conducting phase has received the most attention, and applications of machine learning are more numerous than those of natural language processing.

Jonnalagadda et al. (2015) provided a systematic review of studies discussing automation of data extraction. Based on a study of 26 published reports, they found that the focus has been on extracting a limited number of data elements and they concluded that biomedical natural language processing techniques have not been fully utilized to automate the data extraction step of systematic reviews. O'Mara-Eves et al. (2015) carried out a systematic review of text mining for study identification in systematic reviews. Based on analysis of 44 studies, they made a number of recommendations for further research and for reviewing practice.

Marshall and Wallace (2019) provide an overview, for the non-computer scientist, of current machine learning methods that have been proposed to expedite evidence synthesis. They offer guidance on which methods are ready for use, their strengths and weaknesses, and how a systematic review team might go about using them in practice. Tools include those for search, screening, and data extraction. The majority of tools are designed as “human-in-the-loop systems”: their user interfaces allow human reviewers to have the final say. Moving from the research prototype stage to professionally maintained platforms remains an important challenge.
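
To make the human-in-the-loop pattern concrete, the following Python sketch ranks an unscreened pool of titles by predicted relevance using a simple classifier trained on a few already-screened records. The toy data, TF-IDF features, and logistic regression ranker are assumptions made for illustration; production screening tools use more sophisticated models and active learning loops, but the division of labor is the same: the model reorders the queue, and the human reviewer makes every inclusion decision.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Records the reviewer has already screened (1 = include, 0 = exclude).
    labelled = [
        ("Randomised trial of drug A for hypertension", 1),
        ("Cohort study of drug A and blood pressure outcomes", 1),
        ("Survey of library instruction preferences", 0),
        ("Editorial on open access publishing", 0),
    ]
    unscreened = [
        "Placebo-controlled trial of drug A in adults with hypertension",
        "Opinion piece on conference attendance",
    ]

    texts = [title for title, _ in labelled]
    labels = [label for _, label in labelled]

    vectorizer = TfidfVectorizer(stop_words="english")
    X_train = vectorizer.fit_transform(texts)
    X_pool = vectorizer.transform(unscreened)

    ranker = LogisticRegression().fit(X_train, labels)
    scores = ranker.predict_proba(X_pool)[:, 1]  # predicted probability of "include"

    # Present the pool to the reviewer in descending order of predicted relevance.
    for score, title in sorted(zip(scores, unscreened), reverse=True):
        print(f"{score:.2f}  {title}")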

Generative AI and large language models (LLMs), in particular ChatGPT, have captured the attention of researchers interested in their applicability to the automation of various stages of the review process. In a table, Peng et al. (2023) provide a concise summary of the advantages of and concerns about LLMs in evidence-based medicine. For evidence retrieval, advantages could include efficiently processing large volumes of text, while concerns include lack of understanding and ethical issues regarding privacy, accountability, and bias. For evidence synthesis, LLMs could be used for text synthesis and summarization, but concerns include a lack of continuous learning capability and of temporal reasoning. For evidence dissemination, the text generated may be coherent and easy to understand but lack factual consistency and comprehensiveness. Qureshi et al. (2023), drawing on their experience exploring ChatGPT's responses, suggest that although ChatGPT and LLMs show some promise for aiding systematic review-related tasks, the technology is in its infancy and needs much development for such applications. Non-content experts should use these tools with great caution, because much of the output appears superficially valid while much is erroneous and requires active vetting. A particularly strong limitation is the failure to reference appropriate and verifiable sources when asked for factual information. In a study prompting ChatGPT to provide answers to questions along with supporting evidence in the form of references to external sources, Zuccon et al. (2023) found that the majority of references do not actually exist even though they appear legitimate.

Wang et al. (2023) investigated the effectiveness of ChatGPT in generating Boolean queries for systematic review literature searches. Through a series of experiments, they found that ChatGPT compares favorably with current automated Boolean query generation methods in terms of precision, but at the expense of recall. The type of prompt used affects the effectiveness of the queries produced. Cautions include the generation of terms that are not actually in the controlled vocabulary (Medical Subject Headings) and the fact that ChatGPT generates different queries even when given the same prompt. Haman and Školník (2023) reported the results of a brief test prompting ChatGPT to list 10 seminal academic articles in the field of medicine and provide their DOIs. They concluded that ChatGPT cannot be recommended for a literature review because it yields too many fake papers.
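
For readers unfamiliar with how such experiments are set up, the following Python sketch shows the general shape of a prompt asking an LLM to draft a Boolean search strategy. It is modeled loosely on the idea studied by Wang et al. (2023), not on their actual prompts, and the instruction wording is an assumption; as the cautions above indicate, any draft query returned would still need its vocabulary checked (e.g., against MeSH) and its performance validated by an information specialist before use.

    def boolean_query_prompt(topic: str, database: str = "PubMed") -> str:
        """Assemble a prompt asking an LLM to draft a Boolean search strategy."""
        return (
            f"You are an information specialist constructing a {database} search.\n"
            f"Review topic: {topic}\n"
            "Draft a Boolean search strategy that:\n"
            "1. groups synonyms for each concept with OR,\n"
            "2. combines the concept groups with AND,\n"
            "3. uses only genuine MeSH terms and flags any term you are unsure about."
        )

    if __name__ == "__main__":
        print(boolean_query_prompt(
            "remote patient monitoring for adults with heart failure"))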

[top of entry]

9. Research opportunities for information science

The many research studies discussed in this review typically conclude with recommendations for further research, one source of ideas for advancing research on reviews and reviewing. This section first discusses studies that have investigated reviews and reviewing in library and information science and then identifies some research agendas that deserve attention.

[top of entry]

9.1 Reviews and reviewing in library and information science

A number of studies have sought to investigate the applicability of various review methods to library and information science (LIS) and the extent of their use. Saxton (2006) presents an explanation of meta-analysis as a method, a literature review describing the application of meta-analysis in LIS, and guidelines for reporting quantitative research to enable meta-analysis. He found that the method was rarely applied in LIS. Urquhart (2010) observes that a challenge in transferring the medical systematic review model to LIS is the different knowledge base. In contrast to randomized controlled trials, data in much LIS research is derived from surveys, interviews, observations, and focus groups. Thus the data may be a mix of quantitative and qualitative, or entirely qualitative. In contrast to quantitative meta-analysis, approaches to meta-synthesis are needed to integrate the findings from qualitative and quantitative studies or fully qualitative research. Urquhart examines some meta-synthesis methodologies of potential relevance to researchers in information behavior and practitioners in information literacy.

Phelps and Campbell (2012) provide examples of the use of systematic reviews in LIS to illustrate the benefits and challenges of this approach. They note that systematic reviews could be used to analyze research that has been done in librarianship, to identify the gaps, and to aid the profession in planning for new research. Maden and Kotas (2016) sought to evaluate approaches to quality assessment in LIS systematic reviews. Based on an analysis of 40 reviews, they found great variation in the breadth, depth, and transparency of the quality assessment process. They concluded that LIS reviewers need to improve the robustness and transparency with which quality assessment is undertaken and reported in systematic reviews and to be explicit about how the quality of the included studies may affect their review findings. Xu et al. (2015) used both English and Chinese data to investigate the current state of systematic reviews in LIS. They divided 51 studies into two broad topics: (1) feasibility of or introduction to systematic review in LIS; and (2) detailed application and execution. Research quality coding examined statement of the research question, inclusion and exclusion criteria, search strategy, use of critical appraisal, data extraction, and data synthesis, analysis, and presentation. Their analysis led them to conclude that, overall, the quality of reporting in individual studies is poor. To foster improvement, they included as an appendix a reporting formula and a list of checklist items appropriate for LIS.

Ke and Cheng (2015) investigated the applications of meta-analysis to LIS research through a content analysis of 35 articles. Studies primarily focused on five fields: information systems, human computer interaction, library reference services, informetrics, and information resource management. They noted that a lack of sufficient LIS quantitative studies limits the applicability of meta-analysis for many LIS research topics but made recommendations for effective use of the method when it can be applied. Xie et al. (2020) analyzed 44 LIS-related meta-synthesis studies in terms of contribution, meta-synthesis methodology, and research topics. They found that meta-synthesis research has only been conducted on a limited number of topics: health informatics, information behavior, library user services, information literacy, e-government, human–computer interaction, information systems, and information management. Recommendations are offered to encourage use of this method and to guide its rigorous application.

Some studies seek to identify sources of evidence that can be used in completing reviews in LIS. Sampson et al. (2008) analyzed sources of evidence used for three systematic reviews in librarianship. They found that the evidence base for information science can be multidisciplinary, with the reviews in this case drawing on the literature in health care, published literature in information science, and unpublished literature (conference abstracts, technical reports, electronic citations, and dissertations). Stapleton et al. (2020) used a case study approach to document and analyze search methods for a scoping review in LIS. Based on their experience, for LIS topics they found LISTA and Scopus to be the most productive for peer reviewed journal articles, supplemented with alternate search techniques such as web searching to identify non-journal literature.

A recent review of reviews in human–computer interaction (HCI) (Stefanidi et al. 2023) offers a model for undertaking such a review of reviews in other areas related to information science. The study analyzed 189 literature reviews published at all SIGCHI conferences and in ACM Transactions on Computer–Human Interaction through August 2022. The authors sought to explore the topics that literature reviews in HCI address, the contribution types that they offer, and how literature reviews in HCI are conducted. The categorization of review contributions included empirical, artifact, methodological, theoretical, and opinion. Topics encompassed User Experience & Design, HCI Research, Interaction Design and Children, AI & ML, Games & Play, Work & Creativity, Accessibility, Well-being & Health, Human–Robot Interaction, AutoUI, Specific Application Area, and Specific Modality. Based on their analysis, the authors developed a literature review design document with guidance on how to conduct, analyze, or use an HCI literature review (included as supplementary material to their paper).

[top of entry]

9.2 Research agendas

A few authors have sought to outline research agendas relevant to information science. Blümel and Schniedermann (2020) observe that despite the comparative lack of attention to review articles as objects of research, they offer interesting opportunities to study phenomena of scientific communication and knowledge production. With an emphasis on bibliometrics, they outline a potential research agenda and characterize research opportunities in each of six areas:

  • Methodological caveats resulting from usage of scholarly databases to identify review articles for study.
  • Field-specific patterns of conception, reception, and usage of review articles.
  • Argumentative and textual structures of review articles.
  • Organizations and infrastructures for review articles.
  • Epistemic roles of review articles in author and citation networks.
  • Authorship patterns in review articles.

Wagner et al. (2022) outline how AI can expedite individual steps of the literature review process: problem formulation, literature search, screening for inclusion, quality assessment, data extraction, and data analysis and interpretation. They then outline a research agenda for information systems researchers to advance what they term “AI-based literature reviews” (AILR), literature reviews undertaken with the aid of AI-based tools for one or multiple steps of the review process. Their proposed research agenda encompasses three levels:

  • Level I: Supporting infrastructure (how research is stored and made accessible: quality assurance, smart search technologies, enhanced databases).
  • Level II: Methods and tools for supporting the process of conducting AILRs (development of methods and tools, evaluation and validation, improvement of usability and effectiveness).
  • Level III: Research practice (how information systems research could facilitate the conduct of AILRs: potential and boundaries of research standardization, community norms on research sharing).

They recognize that AI and AILRs raise interesting questions about how research is conducted and synthesized, with the potential to reduce the effort prospective authors spend on time-consuming and repetitive tasks and to let them dedicate more time to creative tasks that require human interpretation and expertise. Tyler et al. (2023) outline research needed to enable large language models and other AI systems to synthesize scientific evidence for policymakers.

Sheble (2016) observes that the interests of researchers who engage with research synthesis methods intersect with library and information science research and practice. She notes that LIS, as a meta-science (Bates 1999), “has the potential to make substantive contributions to a broader variety of fields in the context of topics related to research synthesis methods” (Sheble 2016, 1990). LIS skills and methods essential to research synthesis include literature search and retrieval, and knowledge of information resources and scholarly communication systems. With the exception of LIS communities associated with health and medical information and the evidence-based librarianship movement, research synthesis has received relatively little attention in the LIS literature. Nevertheless, researchers and practitioners are engaged in activities that affect, and have the potential to inform, evidence gathering for research synthesis studies. Her conclusion suggests a research agenda: “Although topics such as information access, research evaluation, documentary forms associated with knowledge production, and information behaviors and interactions are central to the field, these topics have not been sufficiently problematized in the context of [research synthesis methods], given the prevalence of the methods” (Sheble 2016, 2004). As one example of a possible study to investigate the information behavior of researchers in systematic review processes, Jäger-Dengler-Harles et al. (2020) propose investigation of relevance assessment processes when search results are screened for inclusion in a review.

[top of entry]

10. Conclusion

When ARIST was launched in 1966, the process of reviewing the literature was very labor-intensive. Swanson (1976) sought to record and report experiential data on the work activities, performance times, and costs involved in the preparation of an analytic review, her chapter on “Design and Evaluation of Information Systems” for volume 10 of ARIST. Time had to be spent on preparing correspondence to obtain materials, searching the journal literature, photocopying materials, and typing drafts. Although much of the time was spent reading the literature and writing the review, other tasks were time-consuming and labor-intensive in the period before electronic mail, online databases, full-text digital documents, and word processing programs. As Paisley and Foster (2018, 506) reflect in their discussion of innovations in information retrieval methods for evidence synthesis studies, “developments in technology transformed beyond recognition the information space within which we realized the methods of evidence-based information retrieval”, with reviewers no longer dependent on print indexes, batch process information retrieval, or dial-up pay-as-you-go online services. The literature reviewed in this paper demonstrates how reviews and reviewing have evolved since the early days of ARIST. When the decision to cease publication of ARIST after 2011 was announced, Bawden (2010, 625) observed: “It seems sad though, that a discipline which generally argues for critical analysis of information, and for a reflective use of the research knowledge base, should be unable to sustain its own main tool for doing so”. With the recent relaunch of ARIST, there is once again a tool to embody evidence synthesis methods that can transform primary research into a resource that informs research, policy, and practice.

The discussion of uses and users of reviews identified a range of possible uses and users and showed how these have changed over time as audiences and methods for research synthesis have evolved. Editors of recent compilations of review articles have reflected on the potential value of such efforts. For example, in a recent compilation of systematic reviews of research on online learning, Martin et al. (2023, 1) note that, given the increased interest in online learning due to the pandemic, a compilation of systematic reviews provides an opportunity to consider: “What do we know? What do we not know? Where and how might we find answers?”. A recent special issue of the International Journal of Management Reviews is even more ambitious, seeking to demonstrate “how to use review articles to address societal grand challenges” (Kunisch et al. 2023b, 240).

In 1987 Garfield voiced concern that “the notion persists […] that a review article — even one that is highly cited — is a lesser achievement than a piece of original research” (Garfield 1987, 113). Although work remains to better understand the impact of review articles (and other research syntheses in their various forms), reviews are now recognized as distinctive research contributions. For example, an editorial in Nature Human Behaviour (2021, 539) explicitly states that “we believe strongly in the value of systematic evidence synthesis: high-quality evidence syntheses are on par with original research and can constitute contributions of outstanding scientific and applied significance”.

This review on reviews and reviewing has highlighted the evolution in forms of research synthesis, the diversification of users from peers to practitioners and policymakers, and the impacts of information technology. The future will likely be influenced by the increasing emphasis on open science, “transparent and accessible knowledge that is shared and developed through collaborative networks” (Vicente-Saez and Martinez-Fuentes 2018, 428). According to the OECD (n.d.), broadening access to scientific publications and data is key to open science. Dworkin (2023) argues that “it is crucial to recognize that truly open science requires that scientists, stakeholders, and the public are not only able to access the products of research, but the knowledge and insights embedded within those products. Given the ever-increasing quantity and complexity of scientific output, this calls for a new focus on synthesis and communication”. How best to accomplish this will depend on more research as originally called for in the conclusion to Virgo's article (Virgo 1971, 290): “Further examination of some critical questions relating to the production and organization of reviews will have to be undertaken before the review, as a means of bringing the most significant information to its users, will be accepted as a reliable adjunct to the original publications”.

[top of entry]

Acknowledgments

Open access publishing facilitated by University of Illinois Urbana-Champaign, as part of the BTAA (Big Ten Academic Alliance)-Wiley Open Access Agreement.

References

Aamodt, M., Huurdeman, H. and Strømme, H. 2019. “Librarian co-authored systematic reviews are associated with lower risk of bias compared to systematic reviews with acknowledgement of librarians or no participation by librarians”. Evidence Based Library and Information Practice, 14(4), 103–127. https://doi.org/10.18438/eblip29601.

Amog, K., Pham, B., Courvoisier, M., Mak, M., Booth, A., Godfrey, C., Hwee, J., Straus, S. E. and Tricco, A. C. 2022. “The web-based “Right Review” tool asks reviewers simple questions to suggest methods from 41 knowledge synthesis methods”. Journal of Clinical Epidemiology, 147, 42–51. https://doi.org/10.1016/j.jclinepi.2022.03.004.

Annual Reviews. n.d. “What we do”. Retrieved from https://www.annualreviews.org/about/what-we-do.

Antons, D., Breidbach, C. F., Joshi, A. M. and Salge, T. O. 2023. “Computational literature reviews: Method, algorithms and roadmap”. Organizational Research Methods, 26(1), 107–138. https://doi.org/10.1177/1094428121991230.

Barker, T. H., Pollock, D., Stone, J. C., Klugar, M., Scott, A. M., Stern, C., Wiechula, R., Shamseer, L., Aromataris, E., Ross-White, A. and Munn, Z. 2023. “How should we handle predatory journals in evidence synthesis? A descriptive survey-based cross-sectional study of evidence synthesis experts”. Research Synthesis Methods, 14(3), 370–381. https://doi.org/10.1002/jrsm.1613.

Bates, M. J. 1999. “The invisible substrate of information science”. Journal of the American Society for Information Science, 50(12), 1043–1050. https://doi.org/10.1002/(SICI)1097-4571(1999)50:12<1043::AID-ASI1>3.0.CO;2-X.

Bawden, D. 2010. “Alas poor ARIST: Reviewing the information sciences”. Journal of Documentation, 66(5), 625–626. https://doi.org/10.1108/jd.2010.27866eaa.001.

Beller, E., Clark, J., Tsafnat, G., Adams, C., Diehl, H., Lund, H., Ouzzani, M., Thayer, K., Thomas, J., Turner, T., Xia, J., Robinson, K. and Glasziou, P. 2018. “Making progress with the automation of systematic reviews: Principles of the International Collaboration for the Automation of Systematic Reviews (ICASR)”. Systematic Reviews, 7, 77. https://doi.org/10.1186/s13643-018-0740-7.

Belter, C. W. 2016. “Citation analysis as a literature search method for systematic reviews”. Journal of the Association for Information Science and Technology, 67(11), 2766–2777. https://doi.org/10.1002/asi.23605.

Bendersky, J., Auladell-Rispau, A., Urrútia, G. and Rojas-Reyes, M. X. 2022. “Methods for developing and reporting living evidence synthesis”. Journal of Clinical Epidemiology, 152, 89–100. https://doi.org/10.1016/j.jclinepi.2022.09.020.

Bethel, A. and Rogers, M. 2014. “A checklist to assess database-hosting platforms for designing and running searches for systematic reviews”. Health Information and Libraries Journal, 31(1), 43–53. https://doi.org/10.1111/hir.12054.

Blaizot, A., Veettil, S. K., Saidoung, P., Moreno-Garcia, C. F., Wiratunga, N., Aceves-Martins, M., Lai, N. M. and Chaiyakunapruk, N. 2022. “Using artificial intelligence methods for systematic review in health sciences: A systematic review”. Research Synthesis Methods, 13(3), 353–362. https://doi.org/10.1002/jrsm.1553.

Blümel, C. and Schniedermann, A. 2020. “Studying review articles in scientometrics and beyond: A research agenda”. Scientometrics, 124, 721–728. https://doi.org/10.1007/s11192-020-03431-7.

Booth, A., Sutton, A., Clowes, M. and Martyn-St James, M. 2022. Systematic approaches to a successful literature review, 3rd ed. Sage.

Bramer, W. M., de Jonge, G. B., Rethlefsen, M. L., Mast, F. and Kleijnen, J. 2018. “A systematic approach to searching: An efficient and complete method to develop literature searches”. Journal of the Medical Library Association, 106(4), 531–541. https://doi.org/10.5195/jmla.2018.283.

Branscomb, L. M. 1975. “Support for reviews and data evaluations”. Science, 187(4177), 603. https://doi.org/10.1126/science.187.4177.603.

Breslin, D. and Gatrell, C. 2023. “Theorizing through literature reviews: The miner-prospector continuum”. Organizational Research Methods, 26(1), 139–167. https://doi.org/10.1177/1094428120943288.

Briscoe, S., Abbott, R., Lawal, H., Shaw, L. and Thompson Coon, J. 2023. “Feasibility and desirability of screening search results from Google Search exhaustively for systematic reviews: A cross-case analysis”. Research Synthesis Methods, 14(3), 427–437. https://doi.org/10.1002/jrsm.1622.

Briscoe, S., Nunns, M. and Shaw, L. 2020. “How do Cochrane authors conduct web searching to identify studies? Findings from a cross-sectional sample of Cochrane Reviews”. Health Information and Libraries Journal, 37(4), 293–318. https://doi.org/10.1111/hir.12313.

Brocke, J. vom, Simons, A., Niehaves, B., Riemer, K., Plattfaut, R. and Cleven, A. 2009. “Reconstructing the giant: On the importance of rigour in documenting the literature search process”. In European conference on information systems proceedings, Vol. 161. AIS eLibrary. Retrieved from https://aisel.aisnet.org/ecis2009/161.

Brocke, J. vom, Simons, A., Riemer, K., Niehaves, B., Plattfaut, R. and Cleven, A. 2015. “Standing on the shoulders of giants: Challenges and recommendations of literature search in information systems research”. Communications of the Association for Information Systems, 37, 205–224. https://doi.org/10.17705/1CAIS.03709.

Brown, C. 2004. “The Matthew effect of the Annual Reviews series and the flow of scientific communication through the World Wide Web”. Scientometrics, 60(1), 25–36. https://doi.org/10.1023/B:SCIE.0000027304.80068.0c.

Brown, S. J., Bakker, C. J. and Theis-Mahon, N. R. 2022. “Retracted publications in pharmacy systematic reviews”. Journal of the Medical Library Association, 110(1), 47–55. https://doi.org/10.5195/jmla.2022.1280.

Brunskill, A. and Hanneke, R. 2022. “The case of the disappearing librarians: Analyzing documentation of librarians' contributions to systematic reviews”. Journal of the Medical Library Association, 110(4), 409–418. https://doi.org/10.5195/jmla.2022.1505.

Büchter, R. B., Rombey, T., Mathes, T., Khalil, H., Lunny, C., Pollock, D., Puljak, L., Tricco, A. C. and Pieper, D. 2023. “Systematic reviewers used various approaches to data extraction and expressed several research needs: A survey”. Journal of Clinical Epidemiology, 159, 214–224. https://doi.org/10.1016/j.jclinepi.2023.05.027.

Bullers, K., Howard, A. M., Hanson, A., Kearns, W. D., Orriola, J. J., Polo, R. L. and Sakmar, K. A. 2018. “It takes longer than you think: Librarian time spent on systematic review tasks”. Journal of the Medical Library Association, 106(2), 198–207. https://doi.org/10.5195/jmla.2018.323.

Campbell Collaboration. n.d. Evidence synthesis tools for Campbell authors. Retrieved from https://www.campbellcollaboration.org/research-resources/resources.html.

Campbell, F., Tricco, A. C., Munn, Z., Pollock, D., Saran, A., Sutton, A., White, H. and Khalil, H. 2023. “Mapping reviews, scoping reviews and evidence and gap maps (EGMs): The same but different—the “Big Picture” review family”. Systematic Reviews, 12, 45. https://doi.org/10.1186/s13643-023-02178-5.

Chalmers, I., Hedges, L. V. and Cooper, H. 2002. “A brief history of research synthesis”. Evaluation and the Health Professions, 25(1), 12–37. https://doi.org/10.1177/0163278702025001003.

Charbonneau, D. H. and Vela, K. 2023. “Librarian contributions to evidence synthesis programs: Addressing the COVID-19 infodemic”. Library Quarterly, 93(1), 72–85. https://doi.org/10.1086/722548.

Clark, J., Glasziou, P., Del Mar, C., Bannach-Brown, A., Stehlik, P. and Scott, A. M. 2020. “A full systematic review was completed in 2 weeks using automation tools: A case study”. Journal of Clinical Epidemiology, 121, 81–90. https://doi.org/10.1016/j.jclinepi.2020.01.008.

Clarke, M. and Chalmers, I. 2018. “Reflections on the history of systematic reviews”. BMJ Evidence-Based Medicine, 23(4), 121–122. https://doi.org/10.1136/bmjebm-2018-110968.

Cochrane Community. n.d. “Tools and software”. Retrieved from https://community.cochrane.org/help/tools-and-software.

Cooper, H., L. V. Hedges and J. C. Valentine (Eds.). 2019. The handbook of research synthesis and meta-analysis, 3rd ed. Russell Sage Foundation.

Cooper, I. D. and Crum, J. A. 2013. “New activities and changing roles of health sciences librarians: A systematic review, 1990–2012”. Journal of the Medical Library Association, 101(4), 268–277. https://doi.org/10.3163/1536-5050.101.4.008.

Cornell University. 2023. A guide to evidence synthesis: Cornell University Library evidence synthesis service. Retrieved from https://guides.library.cornell.edu/evidence-synthesis.

Craigle, V., Retteen, A. and Keele, B. J. 2022. “Ending law review link rot: A plea for adopting DOI”. Legal Reference Services Quarterly, 41(2), 93–97. https://doi.org/10.1080/0270319X.2022.2089810.

Craven, J., Jefferies, J., Kendrick, J., Nicholls, D., Boynton, J. and Frankish, R. 2014. “A comparison of searching the Cochrane library databases via CRD, Ovid and Wiley: Implications for systematic searching and information services”. Health Information and Libraries Journal, 31(1), 54–63. https://doi.org/10.1111/hir.12046.

Cronin, B. 2011. “Introduction”. Annual Review of Information Science and Technology, 45, 7–9. https://doi.org/10.1002/aris.2011.1440450101.

Cuadra, C. A. 1964. “Identifying key contributions to information science”. American Documentation, 15(4), 289–295. https://doi.org/10.1002/asi.5090150407.

Cuadra, C. A. 1971. “The Annual Review of Information Science And Technology: Its aims and impact” (ED056723). ERIC. Retrieved from https://eric.ed.gov/?id=ED056723.

Cuadra, C. A., Harris, L. and Katter, R. V. 1968. “Impact study of the Annual Review of Information Science and Technology final report” (ED025287). ERIC. Retrieved from https://eric.ed.gov/?id=ED025287.

De la Torre-López, J., Ramírez, A. and Romero, J. R. 2023. “Artificial intelligence to automate the systematic review of scientific literature”. Computing, 105, 2171–2194. https://doi.org/10.1007/s00607-023-01181-x.

Dell, N. A., Maynard, B. R., Murphy, A. M. and Stewart, M. 2021. “Technology for research synthesis: An application of sociotechnical systems theory”. Journal of the Society for Social Work and Research, 12(1), 201–222. https://doi.org/10.1086/713525.

DeVito, N. J. and Goldacre, B. 2019. “Catalogue of bias: Publication bias”. BMJ Evidence-Based Medicine, 24(2), 53–54. https://doi.org/10.1136/bmjebm-2018-111107.

Dinter, R. van, Tekinerdogan, B. and Catal, C. 2021. “Automation of systematic literature reviews: A systematic literature review”. Information and Software Technology, 136, 106589. https://doi.org/10.1016/j.infsof.2021.106589.

Dworkin, J. 2023. “Truly open science needs knowledge synthesis”. Federation of American Scientists. Retrieved from https://fas.org/publication/truly-open-science-needs-knowledge-synthesis/.

Eldredge, J. D. 2000. “Evidence-based librarianship: An overview”. Bulletin of the Medical Library Association, 88(4), 289–302. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC35250/.

Elliott, J., Lawrence, R., Minx, J. C., Oladapo, O. T., Ravaud, P., Tendal Jeppesen, B., Thomas, J., Turner, T., Vandvik, P. O. and Grimshaw, J. M. 2021. “Decision makers need constantly updated evidence synthesis”. Nature, 600(7889), 383–385. https://doi.org/10.1038/d41586-021-03690-1.

Elliott, J. H., Turner, T., Clavisi, O., Thomas, J., Higgins, J. P. T., Mavergames, C. and Gruen, R. L. 2014. “Living systematic reviews: An emerging opportunity to narrow the evidence-practice gap”. PLoS Medicine, 11(2), e1001603. https://doi.org/10.1371/journal.pmed.1001603.

Elmore, S. A. and Weston, E. H. 2020. “Predatory journals: What they are and how to avoid them”. Toxicologic Pathology, 48(4), 607–610. https://doi.org/10.1177/0192623320920209.

Faggion, C. M., Jr. 2019. “More detailed guidance on the inclusion/exclusion of retracted articles in systematic reviews is needed”. Journal of Clinical Epidemiology, 116, 133–134. https://doi.org/10.1016/j.jclinepi.2019.07.006.

Foster, M. J. and S. T. Jewell (Eds.). 2022. Piecing together systematic reviews and other evidence syntheses. Rowman and Littlefield.

Garfield, E. 1987. “Reviewing review literature. Part 1: Definitions and uses of reviews”. Essays of an Information Scientist, 10, 113–116. Retrieved from http://www.garfield.library.upenn.edu/essays/v10p113y1987.pdf.

Ghezzi-Kopel, K., Ault, J., Chimwaza, G., Diekmann, F., Eldermire, E., Gathoni, N., Kelly, J., Kinengyere, A. A., Kocher, M., Lwoga, E. T., Page, J., Young, S. and Porciello, J. 2022. “Making the case for librarian expertise to support evidence synthesis for the sustainable development goals”. Research Synthesis Methods, 13(1), 77–87. https://doi.org/10.1002/jrsm.1528.

Gore, G. C. and Jones, J. 2015. “Systematic reviews and librarians: A primer for managers”. Partnership: The Canadian Journal of Library and Information Practice and Research, 10, 1. https://doi.org/10.21083/partnership.v10i1.3343.

Gough, D., Davies, P., Jamtvedt, G., Langlois, E., Littell, J., Lotfi, T., Masset, E., Merlin, T., Pullin, A. S., Ritskes-Hoitinga, M., Røttingen, J.-A., Sena, E., Stewart, R., Tovey, D., White, H., Yost, J., Lund, H. and Grimshaw, J. 2020. “Evidence Synthesis International (ESI): Position statement”. Systematic Reviews, 9, 155. https://doi.org/10.1186/s13643-020-01415-5.

Grbin, L., Nichols, P., Russell, F., Fuller-Tyszkiewicz, M. and Olsson, C. A. 2022. “The development of a living knowledge system and implications for future systematic searching”. Journal of the Australian Library and Information Association, 71(3), 275–292. https://doi.org/10.1080/24750158.2022.2087954.

Gross, A. G., Harmon, J. E. and Reidy, M. 2002. Communicating science: The scientific article from the 17th century to the present. Oxford University Press.

Guimarães, N. S., Ferreira, A. J. F., Ribeiro Silva, R. C., Paula, A. A., Lisboa, C. S., Magno, L., Ichiara, M. Y. and Barreto, M. L. 2022. “Deduplicating records in systematic reviews: There are free, accurate automated ways to do so”. Journal of Clinical Epidemiology, 152, 110–115. https://doi.org/10.1016/j.jclinepi.2022.10.009.

Gusenbauer, M. and Haddaway, N. R. 2020. “Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed and 26 other resources”. Research Synthesis Methods, 11(2), 181–217. https://doi.org/10.1002/jrsm.1378.

Haddaway, N. R., Grainger, M. J. and Gray, C. T. 2022. “Citationchaser: A tool for transparent and efficient forward and backward citation chasing in systematic searching”. Research Synthesis Methods, 13(4), 533–545. https://doi.org/10.1002/jrsm.1563.

Haddaway, N. R., Lotfi, T. and Mbuagbaw, L. 2023. “Systematic reviews: A glossary for public health”. Scandinavian Journal of Public Health, 51(1), 1–10. https://doi.org/10.1177/14034948221074998.

Haman, M. and Školník, M. 2023. “Using ChatGPT to conduct a literature review”. Accountability in Research, 2023, 1–3. https://doi.org/10.1080/08989621.2023.2185514.

Harmon, J. E. and A. G. Gross (Eds.). 2007. The scientific literature: A guided tour. University of Chicago Press.

Hartling, L. and Gates, A. 2022. “Friend or foe? The role of robots in systematic reviews”. Annals of Internal Medicine, 175(7), 1045–1046. https://doi.org/10.7326/M22-1439.

Hayden, J. A. 2020. “Predatory publishing dilutes and distorts evidence in systematic reviews”. Journal of Clinical Epidemiology, 121, 117–119. https://doi.org/10.1016/j.jclinepi.2020.01.013.

Heilprin, L. B. 1988. “Historical note: Annual Review of Information Science and Technology: Early historical perspective”. Journal of the American Society for Information Science, 39(4), 273–280. https://doi.org/10.1002/(SICI)1097-4571(198807)39:4<273::AID-ASI7>3.0.CO;2-7.

Heron, L., Buitrago-Garcia, D., Ipekci, A. M., Baumann, R., Imeri, H., Salanti, G., Counotte, M. J. and Low, N. 2023. “How to update a living systematic review and keep it alive during a pandemic: A practical guide”. Systematic Reviews, 12, 156. https://doi.org/10.1186/s13643-023-02325-y.

Hickner, A. 2023. “How do search systems impact systematic searching? A qualitative study”. Journal of the Medical Library Association, 111(4), 774–782. https://doi.org/10.5195/jmla.2023.1647.

Higgins, J., J. Thomas, J. Chandler, M. Cumpston, T. Li, M. Page and V. Welch (Eds.). 2022. Cochrane handbook for systematic reviews of interventions, version 6.4. Cochrane. Retrieved from https://training.cochrane.org/handbook/current.

Hirt, J., Nordhausen, T., Appenzeller-Herzog, C. and Ewald, H. 2023. “Citation tracking for systematic literature searching: A scoping review”. Research Synthesis Methods, 14(3), 563–579. https://doi.org/10.1002/jrsm.1635.

Hjørland, B. 2000. “Book review: Annual Review of Information Science and Technology, Vol. 33, 1998”. Journal of the American Society for Information Science, 51(7), 683–685. https://doi.org/10.1002/(SICI)1097-4571(2000)51:7<683::AID-ASI9>3.0.CO;2-9.

Hoffmeyer, B., Fonnes, S., Andresen, K. and Rosenberg, J. 2023. “Use of inactive Cochrane reviews in academia: A citation analysis”. Scientometrics, 128, 2923–2934. https://doi.org/10.1007/s11192-023-04691-9.

Hong, Q. N. and Pluye, P. 2018. “Systematic reviews: A brief historical review”. Education for Information, 34(4), 261–276. https://doi.org/10.3233/EFI-180219.

Howell, S. and Burtis, A. 2022. “The continued problem of URL decay: An updated analysis of health care management journal citations”. Journal of the Medical Library Association, 110(4), 463–470. https://doi.org/10.5195/jmla.2022.1456.

Institute of Medicine. 2011. Finding what works in health care: Standards for systematic reviews. National Academies Press. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK209518/.

Ioannidis, J. P. A. 2016. “The mass production of redundant, misleading and conflicted systematic reviews and meta-analyses”. Milbank Quarterly, 94(3), 485–514. https://doi.org/10.1111/1468-0009.12210.

Iverson, S., Della Seta, M., Lefebvre, C., Ritchie, A., Traditi, L. and Baliozian, K. 2021. “International health library associations urge the ICMJE to seek information specialists as peer reviewers for knowledge synthesis publications”. Journal of the Medical Library Association, 109(3), 503–504. https://doi.org/10.5195/jmla.2021.1301.

Jäger-Dengler-Harles, I., Heck, T. and Rittberger, M. 2020. “Systematic reviews as object to study relevance assessment processes”. Information Research, 25, 4. https://doi.org/10.47989/irisic2024.

James, K. L., Randall, N. P. and Haddaway, N. R. 2016. “A methodology for systematic mapping in environmental sciences”. Environmental Evidence, 5(1), 7. https://doi.org/10.1186/s13750-016-0059-6.

JBI. n.d. Products and services/JBI software suite. Retrieved from https://jbi.global/products#tools.

Johnson, E. E., O'Keefe, H., Sutton, A. and Marshall, C. 2022. “The Systematic Review Toolbox: Keeping up to date with tools to support evidence synthesis”. Systematic Reviews, 11, 258. https://doi.org/10.1186/s13643-022-02122-z.

Jonnalagadda, S. R., Goyal, P. and Huffman, M. D. 2015. “Automating data extraction in systematic reviews: A systematic review”. Systematic Reviews, 4, 78. https://doi.org/10.1186/s13643-015-0066-7.

Ke, Q. and Cheng, Y. 2015. “Applications of meta-analysis to library and information science research: Content analysis”. Library and Information Science Research, 37(4), 370–382. https://doi.org/10.1016/j.lisr.2015.05.004.

Khabsa, J., Chang, S., McKenzie, J. E., Barker, J. M., Boutron, I., Kahale, L. A., Page, M. J., Skoetz, N. and Akl, E. A. 2023. “Conceptualizing the reporting of living systematic reviews”. Journal of Clinical Epidemiology, 156, 113–118. https://doi.org/10.1016/j.jclinepi.2023.01.008.

Khalil, H., Ameen, D. and Zarnegar, A. 2022a. “Tools to support the automation of systematic reviews: A scoping review”. Journal of Clinical Epidemiology, 144, 22–42. https://doi.org/10.1016/j.jclinepi.2021.12.005.

Khalil, H., Lotfi, T., Rada, G. and Akl, E. A. 2022b. “Challenges of evidence synthesis during the 2020 COVID pandemic: A scoping review”. Journal of Clinical Epidemiology, 142, 10–18. https://doi.org/10.1016/j.jclinepi.2021.10.017.

Kocher, M. and Riegelman, A. 2018. “Systematic reviews and evidence synthesis: Resources beyond the health sciences”. College and Research Libraries News, 79(5), 248–252. https://doi.org/10.5860/crln.79.5.248.

Kolaski, K., Logan, L. R. and Ioannidis, J. P. A. 2023. “Guidance to best tools and practices for systematic reviews”. Systematic Reviews, 12, 96. https://doi.org/10.1186/s13643-023-02255-9.

Korfitsen, C. B., Mikkelsen, M.-L. K., Ussing, A., Walker, K. C., Rohde, J. F., Andersen, H. K., Tarp, S. and Händel, M. N. 2022. “Usefulness of Cochrane reviews in clinical guideline development—A survey of 585 recommendations”. International Journal of Environmental Research and Public Health, 19(2), 685. https://doi.org/10.3390/ijerph19020685.

Kousha, K. and Thelwall, M. 2023. “Factors associating with or predicting more cited or higher quality journal articles: An Annual Review of Information Science and Technology (ARIST) paper”. Journal of the Association for Information Science and Technology, 2023, 1–30. https://doi.org/10.1002/asi.24810.

Kunisch, S., Denyer, D., Bartunek, J. M., Menz, M. and Cardinal, L. B. 2023a. “Review research as scientific inquiry”. Organizational Research Methods, 26(1), 3–45. https://doi.org/10.1177/10944281221127292.

Kunisch, S., Knyphausen-Aufsess, D., Bapuji, H., Aguinis, H., Bansal, T., Tsui, A. S. and Pinto, J. 2023b. “Using review articles to address societal grand challenges”. International Journal of Management Reviews, 25(2), 240–250. https://doi.org/10.1111/ijmr.12335.

Lachance, C., Poirier, S. and Larivière, V. 2014. “The kiss of death? The effect of being cited in a review on subsequent citations”. Journal of the Association for Information Science and Technology, 65(7), 1501–1505. https://doi.org/10.1002/asi.23166.

Laynor, G. 2022. “Can systematic reviews be automated?” Journal of Electronic Resources in Medical Libraries, 19(3), 101–106. https://doi.org/10.1080/15424065.2022.2113350.

Laynor, G. and Roth, S. 2022. “Librarians as research partners for developing evidence synthesis protocols”. In C. Forbes (Ed.), Academic libraries and collaborative research services (pp. 81–104). Rowman and Littlefield.

Lê, M.-L., Neilson, C. J. and Winkler, J. in press. “Benchmarking librarian support of systematic reviews in the sciences, humanities and social sciences”. College and Research Libraries. https://doi.org/10.31219/osf.io/v7m9y.

Lee, J., Hayden, K. A., Ganshorn, H. and Pethrick, H. 2021. “A content analysis of systematic review online library guides”. Evidence Based Library and Information Practice, 16(1), 60–77. https://doi.org/10.18438/eblip29819.

Lefebvre, C. 2019. “Foreword”. In P. Levay and J. Craven (Eds.), Systematic searching: Practical ideas for improving results (pp. 27–29). Facet Publishing.

Lei, L. and Sun, Y. 2020. “Should highly cited items be excluded in impact factor calculation? The effect of review articles on journal impact factor”. Scientometrics, 122, 1697–1706. https://doi.org/10.1007/s11192-019-03338-y.

Lemire, S., Peck, L. R. and Porowski, A. 2023. “The evolution of systematic evidence reviews: Past and future developments and their implications for policy analysis”. Politics and Policy, 51(3), 373–396. https://doi.org/10.1111/polp.12532.

Levay, P. and J. Craven (Eds.). 2019. Systematic searching: Practical ideas for improving results. Facet Publishing.

Maden, M. and Kotas, E. 2016. “Evaluating approaches to quality assessment in library and information science LIS systematic reviews: A methodology review”. Evidence Based Library and Information Practice, 11(2), 149–176. https://doi.org/10.18438/B8F630.

Marshall, I. J. and Wallace, B. C. 2019. “Toward systematic review automation: A practical guide to using machine learning tools in research synthesis”. Systematic Reviews, 8, 163. https://doi.org/10.1186/s13643-019-1074-9.

Martin, F., Dennen, V. P. and Bonk, C. J. 2023. “Systematic reviews of research on online learning: An introductory look and review”. Online Learning Journal, 27(1), 1–14. Retrieved from https://olj.onlinelearningconsortium.org/index.php/olj/article/view/3827/1259.

McGowan, J. and Sampson, M. 2005. “Systematic reviews need systematic searchers”. Journal of the Medical Library Association, 93(1), 74–80. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC545125/.

McGowan, J., Sampson, M., Salzwedel, D. M., Cogo, E., Foerster, V. and Lefebvre, C. 2016. “PRESS peer review of electronic search strategies: 2015 guideline statement”. Journal of Clinical Epidemiology, 75, 40–46. https://doi.org/10.1016/j.jclinepi.2016.01.021.

McMahan, P. and McFarland, D. A. 2021. “Creative destruction: The structural consequences of scientific curation”. American Sociological Review, 86(2), 341–376. https://doi.org/10.1177/0003122421996323.

Medical Library Association. 2023. Professional development specializations: Systematic review services specialization. Retrieved from https://www.mlanet.org/p/cm/ld/fid=1893.

Medical Library Association Systematic Reviews Caucus. 2023. Systematic reviews caucus. Retrieved from https://www.mlanet.org/page/caucus-systematic.

Miranda, R. and Garcia-Carpintero, E. 2018. “Overcitation and overrepresentation of review papers in the most cited papers”. Journal of Informetrics, 12(4), 1015–1030. https://doi.org/10.1016/j.joi.2018.08.006.

Morris, M., Boruff, J. T. and Gore, G. C. 2016. “Scoping reviews: Establishing the role of the librarian”. Journal of the Medical Library Association, 104(4), 346–353. https://doi.org/10.3163/1536-5050.104.4.020.

Müller, H., Pachnanda, S., Pahl, F. and Rosenqvist, C. 2022. “The application of artificial intelligence on different types of literature reviews: A comparative study”. In 2022 International Conference on Applied Artificial Intelligence (ICAPAI). IEEE. https://doi.org/10.1109/ICAPAI55158.2022.9801564.

Mulrow, C. D. 1987. “The medical review article: State of the science”. Annals of Internal Medicine, 106(3), 485–488. https://doi.org/10.7326/0003-4819-106-3-485.

Mulrow, C. D. 1994. “Systematic reviews: Rationale for systematic reviews”. BMJ, 309, 597–599. https://doi.org/10.1136/bmj.309.6954.597.

Munn, Z., Pollock, D., Barker, T. H., Stone, J., Stern, C., Aromataris, E., Schünemann, H. J., Clyne, B., Khalil, H., Mustafa, R. A., Godfrey, C., Booth, A., Tricco, A. C. and Pearson, A. 2023a. “The Pandora's box of evidence synthesis and the case for a living evidence synthesis taxonomy”. BMJ Evidence-Based Medicine, 28(3), 148–150. https://doi.org/10.1136/bmjebm-2022-112065.

Munn, Z., Pollock, D., Price, C., Aromataris, E., Stern, C., Stone, J. C., Barker, T. H., Godfrey, C. M., Clyne, B., Booth, A., Tricco, A. C. and Jordan, Z. 2023b. “Investigating different typologies for the synthesis of evidence: A scoping review protocol”. JBI Evidence Synthesis, 21(3), 592–600. https://doi.org/10.11124/JBIES-22-00122.

National Academy of Sciences. 1959. Proceedings of the International Conference on Scientific Information. National Academy of Sciences-National Research Council. Retrieved from https://nap.nationalacademies.org/catalog/10866/...

National Academy of Sciences. 1969. Scientific and technical communication: A pressing national problem and recommendations for its solution: A report by the Committee on Scientific and Technical Communication (ED044266). ERIC. Retrieved from https://files.eric.ed.gov/fulltext/ED044266.pdf.

Nature Human Behaviour. 2021. “Editorial: The value of evidence synthesis”. Nature Human Behaviour, 5, 539. https://doi.org/10.1038/s41562-021-01131-7.

Neilson, C. J. 2021. “Adoption of peer review of literature search strategies in knowledge synthesis from 2009 to 2018: An overview”. Health Information and Libraries Journal, 38(3), 160–171. https://doi.org/10.1111/hir.12367.

Nicholson, J., McCrillis, A. and Williams, J. D. 2017. “Collaboration challenges in systematic reviews: A survey of health sciences librarians”. Journal of the Medical Library Association, 105(4), 385–393. https://doi.org/10.5195/jmla.2017.176.

Noel-Storr, A., Dooley, G., Elliott, J., Steele, E., Shemilt, I., Mavergames, C., Wisniewski, S., McDonald, S., Murano, M., Glanville, J., Foxlee, R., Beecher, D., Ware, J. and Thomas, J. 2021. “An evaluation of Cochrane Crowd found that crowdsourcing produced accurate results in identifying randomized trials”. Journal of Clinical Epidemiology, 133, 130–139. https://doi.org/10.1016/j.jclinepi.2021.01.006.

Nussbaumer-Streit, B., Ellen, M., Klerings, I., Sfetcu, R., Riva, N., Mahmic-Kaknjo, M., Poulentzas, G., Martinez, P., Baladia, E., Ziganshina, L. E., Marqués, M. E., Aguilar, L., Kassianos, A. P., Frampton, G., Silva, A. G., Affengruber, L., Spjker, R., Thomas, J., Berg, R. C., … Gartlehner, G. 2021. “Resource use during systematic review production varies widely: A scoping review”. Journal of Clinical Epidemiology, 139, 287–296. https://doi.org/10.1016/j.jclinepi.2021.05.019.

Oberg, G. and Leopold, A. 2019. “On the role of review papers in the face of escalating publication rates: A case study of research on contaminants of emerging concern (CECs)”. Environment International, 131, 104960. https://doi.org/10.1016/j.envint.2019.104960.

O'Connor, A. M., Tsafnat, G., Thomas, J., Glasziou, P., Gilbert, S. B. and Hutton, B. 2019. “A question of trust: Can we build an evidence base to gain trust in systematic review automation technologies?” Systematic Reviews, 8, 143. https://doi.org/10.1186/s13643-019-1062-0.

OECD. n.d. Open science. Retrieved from https://web-archive.oecd.org/2022-08-04/325150-open-science.htm.

Oelen, A., Stocker, M. and Auer, S. 2021. “SmartReviews: Towards human- and machine-actionable reviews”. In G. Berget, M. M. Hall, D. Brenn and S. Kumpulainen (Eds.), Linking theory and practice of digital libraries: TPDL 2021. Lecture notes in computer science (Vol. 12866, pp. 181–186). Springer. https://doi.org/10.1007/978-3-030-86324-1_22.

O'Mara-Eves, A., Thomas, J., McNaught, J., Miwa, M. and Ananiadou, S. 2015. “Using text mining for study identification in systematic reviews: A systematic review of current approaches”. Systematic Reviews, 4, 5. https://doi.org/10.1186/2046-4053-4-5.

Ott, D. E. 2022. “Reference hygiene and death on the Internet: Decay, rot, half-life, deterioration and corruption”. Journal of the Society of Laparoscopic and Robotic Surgeons, 26(1), e2021.00082. https://doi.org/10.4293/JSLS.2021.00082.

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. 2021. “The PRISMA 2020 statement: An updated guideline for reporting systematic reviews”. International Journal of Surgery, 88, 105906. https://doi.org/10.1016/j.ijsu.2021.105906.

Paisley, S. and Foster, M. J. 2018. “Innovation in information retrieval methods for evidence synthesis studies”. Research Synthesis Methods, 9(4), 506–509. https://doi.org/10.1002/jrsm.1322.

Pallath, A. and Zhang, Q. 2023. “Paperfetcher: A tool to automate handsearching and citation searching for systematic reviews”. Research Synthesis Methods, 14(2), 323–335. https://doi.org/10.1002/jrsm.1604.

Paré, G., Trudel, M.-C., Jaana, M. and Kitsiou, S. 2015. “Synthesizing information systems knowledge: A typology of literature reviews”. Information and Management, 52(2), 183–199. https://doi.org/10.1016/j.im.2014.08.008.

Peng, Y., Rousseau, J. F., Shortliffe, E. H. and Weng, C. 2023. “AI-generated text may have a role in evidence-based medicine”. Nature Medicine, 29, 1593–1594. https://doi.org/10.1038/s41591-023-02366-9.

Phelps, S. F. and Campbell, N. 2012. “Systematic reviews in theory and practice for library and information studies”. Library and Information Research, 36(112), 6–15. https://doi.org/10.29173/lirg498.

Portenoy, J. 2021. Harnessing scholarly literature as data to curate, explore and evaluate scientific research (doctoral dissertation). University of Washington. Retrieved from https://digital.lib.washington.edu/researchworks/handle/1773/47601.

Premji, Z., Hayden, K. A. and Rutherford, S. 2021. “Teaching knowledge synthesis methodologies in a higher education setting: A scoping review of face-to-face instructional programs”. Evidence Based Library and Information Practice, 16(2), 111–144. https://doi.org/10.18438/eblip29895.

President's Science Advisory Committee. 1963. “Science, government and information: The responsibilities of the technical community and the government in the transfer of information” (ED048894). ERIC. Retrieved from https://files.eric.ed.gov/fulltext/ED048894.pdf.

Puljak, L. and Lund, H. 2023. “Definition, harms and prevention of redundant systematic reviews”. Systematic Reviews, 12, 63. https://doi.org/10.1186/s13643-023-02191-8.

Qureshi, R., Shaughnessy, D., Gill, K. A. R., Robinson, K. A., Li, T. and Agai, E. 2023. “Are ChatGPT and large language models ‘the answer’ to bringing us closer to systematic review automation?” Systematic Reviews, 12, 72. https://doi.org/10.1186/s13643-023-02243-z.

Raitskaya, L. and Tikhonova, E. 2019. “Scoping reviews: What is in a name?” Journal of Language and Education, 5(2), 4–9. https://doi.org/10.17323/jle.2019.9689.

Rethlefsen, M. L., Farrell, A. M., Osterhaus Trzasko, L. C. and Brigham, T. J. 2015. “Librarian co-authors correlated with higher quality reported search strategies in general internal medicine systematic reviews”. Journal of Clinical Epidemiology, 68(6), 617–626. https://doi.org/10.1016/j.jclinepi.2014.11.025.

Ross-White, A. 2021. “Search is a verb: Systematic review searching as invisible labor”. Journal of the Medical Library Association, 109(3), 505–509. https://doi.org/10.5195/jmla.2021.1226.

Royal Society. 1948. The Royal Society scientific information conference: Report and papers submitted.

Royal Society and The Academy of Medical Sciences. 2018. “Evidence synthesis for policy: A statement of principles”. Retrieved from https://royalsociety.org/-/media/policy/projects/evidence-synthesis/evidence-synthesis-statement-principles.pdf.

Saleh, A. A., Ratajeski, M. A. and Bertolet, M. 2014. “Grey literature searching for health sciences systematic reviews: A prospective study of time spent and resources utilized”. Evidence Based Library and Information Practice, 9(3), 28–50. https://doi.org/10.18438/B8DW3K.

Salvador-Oliván, J. A., Marco-Cuenca, G. and Arquero-Avilés, R. 2019. “Errors in search strategies used in systematic reviews and their effects on information retrieval”. Journal of the Medical Library Association, 107(2), 210–221. https://doi.org/10.5195/jmla.2019.567.

Sampson, M., Daniel, R., Cogo, E. and Dingwall, O. 2008. “Sources of evidence to support systematic reviews in librarianship”. Journal of the Medical Library Association, 96(1), 66–69. https://doi.org/10.3163/1536-5050.96.1.66.

Santos, Á. O., Silva, E. S., Couto, L. M., Reis, G. V. L. and Belo, V. S. 2023. “The use of artificial intelligence for automating or semi-automating biomedical literature analyses: A scoping review”. Journal of Biomedical Informatics, 142, 104389. https://doi.org/10.1016/j.jbi.2023.104389.

Sarri, G., Forsythe, A., Elvidge, J. and Dawoud, D. 2023. “Living health technology assessments: How close to living reality?” BMJ Evidence-Based Medicine, 1–3. https://doi.org/10.1136/bmjebm-2022-112152.

Saxton, M. L. 2006. “Meta-analysis in library and information science: Method, history and recommendations for reporting research”. Library Trends, 55(1), 158–170. https://doi.org/10.1353/lib.2006.0052.

Schiavo, J. H. 2019. “PROSPERO: An international register of systematic review protocols”. Medical Reference Services Quarterly, 38(2), 171–180. https://doi.org/10.1080/02763869.2019.1588072.

Schmidt, L., Sinyor, M., Webb, R. T., Marshall, C., Knipe, D., Eyles, E. C., John, A., Gunnell, D. and Higgins, J. P. T. 2023. “A narrative review of recent tools and innovations toward automating living systematic reviews and evidence syntheses”. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen, 181, 65–75. https://doi.org/10.1016/j.zefq.2023.06.007.

Schneider, J., Woods, N. D., Proescholdt, R. and The RISRS Team. 2022. “Reducing the inadvertent spread of retracted science: Recommendations from the RISRS report”. Research Integrity and Peer Review, 7, 6. https://doi.org/10.1186/s41073-022-00125-x.

Schryen, G. and Sperling, M. 2023. “Literature reviews in operations research: A new taxonomy and a meta review”. Computers and Operations Research, 157, 106269. https://doi.org/10.1016/j.cor.2023.106269.

Scott, A. M., Forbes, C., Clark, J., Carter, M., Glasziou, P. and Munn, Z. 2021. “Systematic review automation tools improve efficiency but lack of knowledge impedes their adoption: A survey”. Journal of Clinical Epidemiology, 138, 80–94. https://doi.org/10.1016/j.jclinepi.2021.06.030.

Scott, A. M., Glasziou, P. and Clark, J. 2023. “We extended the 2-week systematic review (2weekSR) methodology to larger, more complex systematic reviews: A case series”. Journal of Clinical Epidemiology, 157, 112–119. https://doi.org/10.1016/j.jclinepi.2023.03.007.

Sheble, L. 2016. “Research synthesis methods and library and information science: Shared problems, limited diffusion”. Journal of the Association for Information Science and Technology, 67(8), 1990–2008. https://doi.org/10.1002/asi.23499.

Sheble, L. 2017. “Macro-level diffusion of a methodological knowledge innovation: Research synthesis methods, 1972–2011”. Journal of the Association for Information Science and Technology, 68(12), 2693–2708. https://doi.org/10.1002/asi.23864.

Siddaway, A. P., Wood, A. M. and Hedges, L. V. 2019. “How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses and meta-syntheses”. Annual Review of Psychology, 70, 747–770. https://doi.org/10.1146/annurev-psych-010418-102803.

Slebodnik, M., Cahoy, E. S. and Jacobsen, A. L. 2022a. “Evidence synthesis: Coming soon to a library near you?” Portal: Libraries and the Academy, 22(2), 273–280. https://doi.org/10.1353/pla.2022.0016.

Slebodnik, M., Pardon, K. and Hermer, J. 2022b. “Who's publishing systematic reviews? An examination beyond the health sciences”. Issues in Science and Technology Librarianship, 101, 1–22. https://doi.org/10.29173/istl2671.

Smith, L. C. 2012. “'Speaking volumes': Cuadra, Williams, Cronin and the evolution of the Annual Review of Information Science and Technology”. In T. Carbo and T. B. Hahn (Eds.), International perspectives on the history of information science and technology: Proceedings of the ASIS&T 2012 pre-conference on the history of ASIS&T and information science and technology (pp. 18–29). Information Today.

Snyder, H. 2019. “Literature review as a research methodology: An overview and guidelines”. Journal of Business Research, 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039.

Spencer, A. J. and Eldredge, J. D. 2018. “Roles for librarians in systematic reviews: A scoping review”. Journal of the Medical Library Association, 106(1), 46–56. https://doi.org/10.5195/jmla.2018.82.

Stapleton, J., Carter, C. and Bredahl, L. 2020. “Developing systematic search methods for the library literature: Methods and analysis”. Journal of Academic Librarianship, 46(5), 102190. https://doi.org/10.1016/j.acalib.2020.102190.

Stefanidi, E., Bentvelzen, M., Wozniak, P. W., Kosch, T., Wozniak, M. P., Mildner, T., Schneegass, S., Müller, H. and Niess, J. 2023. “Literature reviews in HCI: A review of reviews”. In CHI '23: Proceedings of the 2023 CHI conference on human factors in computing systems (509, pp. 1–24). ACM. https://doi.org/10.1145/3544548.3581332.

Sutton, A., Clowes, M., Preston, L. and Booth, A. 2019. “Meeting the review family: Exploring review types and associated information retrieval requirements”. Health Information and Libraries Journal, 36(3), 202–222. https://doi.org/10.1111/hir.12276.

Sutton, A., O'Keefe, H., Johnson, E. E. and Marshall, C. 2023. “A mapping exercise using automated techniques to develop a search strategy to identify systematic review tools”. Research Synthesis Methods, 1–8. https://doi.org/10.1002/jrsm.1665.

Swanson, R. W. 1976. “A work study of the review production process”. Journal of the American Society for Information Science, 27(1), 70–72. https://doi.org/10.1002/asi.4630270109.

Templier, M. and Paré, G. 2015. “A framework for guiding and evaluating literature reviews”. Communications of the Association for Information Systems, 37, 112–137. https://doi.org/10.17705/1CAIS.03706.

Templier, M. and Paré, G. 2018. “Transparency in literature reviews: An assessment of reporting practices across review types and genres in top IS journals”. European Journal of Information Systems, 27(5), 503–550. https://doi.org/10.1080/0960085X.2017.1398880.

Thomas, J., McDonald, S., Noel-Storr, A., Shemilt, I., Elliott, J., Mavergames, C. and Marshall, I. J. 2021. “Machine learning reduced workload with minimal risk of missing studies: Development and evaluation of a randomized controlled trial classifier for Cochrane Reviews”. Journal of Clinical Epidemiology, 133, 140–151. https://doi.org/10.1016/j.jclinepi.2020.11.003.

Townsend, W. A., Anderson, P. F., Ginier, E. C., MacEachern, M. P., Saylor, K. M., Shipman, B. L. and Smith, J. E. 2017. “A competency framework for librarians involved in systematic reviews”. Journal of the Medical Library Association, 105(3), 268–275. https://doi.org/10.5195/jmla.2017.189.

Tyler, C., Akerlof, K. L., Allegra, A., Arnold, Z., Canino, H., Doornenbal, M. A., Goldstein, J. A., Pederson, D. B. and Sutherland, W. J. 2023. “AI tools as science policy advisers? The potentials and the pitfalls”. Nature, 622, 27–30. https://doi.org/10.1038/d41586-023-02999-3.

Urquhart, C. 2010. “Systematic reviewing, meta-analysis and meta-synthesis for evidence-based library and information science”. Information Research, 15(3). Retrieved from https://informationr.net/ir/15-3/colis7/colis708.html.

Uttley, L. and Montgomery, P. 2017. “The influence of the team in conducting a systematic review”. Systematic Reviews, 6, 149. https://doi.org/10.1186/s13643-017-0548-x.

Uttley, L., Quintana, D. S., Montgomery, P., Carroll, C., Page, M. J., Falzon, L., Sutton, A. and Moher, D. 2023. “The problems with systematic reviews: A living systematic review”. Journal of Clinical Epidemiology, 156, 30–41. https://doi.org/10.1016/j.jclinepi.2023.01.011.

Vicente-Saez, R. and Martinez-Fuentes, C. 2018. “Open science now: A systematic literature review for an integrated definition”. Journal of Business Research, 88, 428–436. https://doi.org/10.1016/j.jbusres.2017.12.043.

Vickery, B. C. 2000. Scientific communication in history. Scarecrow Press.

Virgo, J. A. 1971. “The review article: Its characteristics and problems”. Library Quarterly, 41(4), 275–291. https://doi.org/10.1086/619975.

Waffenschmidt, S. and Bender, R. 2023. “Involvement of information specialists and statisticians in systematic reviews”. International Journal of Technology Assessment in Health Care, 39(1), e22. https://doi.org/10.1017/S026646232300020X.

Wagner, G., Lukyanenko, R. and Paré, G. 2022. “Artificial intelligence and the conduct of literature reviews”. Journal of Information Technology, 37(2), 209–226. https://doi.org/10.1177/02683962211048201.

Wagner, G., Prester, J., Roche, M. P., Schryen, G., Benlian, A., Paré, G. and Templier, M. 2021. “Which factors affect the scientific impact of review papers in IS research? A scientometric study”. Information and Management, 58(3), 103427. https://doi.org/10.1016/j.im.2021.103427.

Wang, S., Scells, H., Koopman, B. and Zuccon, G. 2023. “Can ChatGPT write a good Boolean query for systematic review literature search?” In SIGIR '23: Proceedings of the 46th international ACM SIGIR conference on research and development in information retrieval (pp. 1426–1436). ACM. https://doi.org/10.1145/3539618.3591703.

White, H. D. 2019. “Scientific communication and literature retrieval”. In H. Cooper, L. V. Hedges and J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis, 3rd ed. (pp. 51–72). Russell Sage Foundation.

Wilson, P. and Farid, M. 1979. “On the use of the records of research”. Library Quarterly, 49(2), 127–145. https://doi.org/10.1086/630130.

Woodward, A. M. 1974. “Review literature: Characteristics, sources and output in 1972”. Aslib Proceedings, 26(9), 367–376. https://doi.org/10.1108/eb050471.

Woodward, A. M. 1977. “The roles of reviews in information transfer”. Journal of the American Society for Information Science, 28(3), 175–180. https://doi.org/10.1002/asi.4630280306.

Xiao, Y. and Watson, M. 2019. “Guidance on conducting a systematic literature review”. Journal of Planning Education and Research, 39(1), 93–112. https://doi.org/10.1177/0739456X17723971.

Xie, J., Ke, Q., Cheng, Y. and Everhart, N. 2020. “Meta-synthesis in library and information science research”. Journal of Academic Librarianship, 46(5), 102217. https://doi.org/10.1016/j.acalib.2020.102217.

Xu, J., Kang, Q. and Song, Z. 2015. “The current state of systematic reviews in library and information studies”. Library and Information Science Research, 37(4), 296–310. https://doi.org/10.1016/j.lisr.2015.11.003.

Zuccon, G., Koopman, B. and Shaik, R. 2023. “ChatGPT hallucinates when attributing answers”. arXiv. https://arxiv.org/abs/2309.09401.

[top of entry]

 



Version 1.0 published 2023-12-14, last edited 2023-12-18

Article category: Document types, genres and media

This IEKO article, version 1.0, is a reprint of an open access article published at https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.24851.

How to cite it:
Smith, Linda C. In press. “Reviews and Reviewing: Approaches to Research Synthesis: An Annual Review of Information Science and Technology (ARIST) paper”. Journal of the Association for Information Science and Technology. https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.24851. Also available in ISKO Encyclopedia of Knowledge Organization, eds. Birger Hjørland and Claudio Gnoli,

CC BY-NC-ND