
LITERATURE REVIEW SOFTWARE FOR BETTER RESEARCH


“Litmaps is a game changer for finding novel literature... it has been invaluable for my productivity.... I also got my PhD student to use it and they also found it invaluable, finding several gaps they missed”

Varun Venkatesh

Austin Health, Australia


As a full-time researcher, Litmaps has become an indispensable tool in my arsenal. The Seed Maps and Discover features of Litmaps have transformed my literature review process, streamlining the identification of key citations while revealing previously overlooked relevant literature, ensuring no crucial connection goes unnoticed. A true game-changer indeed!

Ritwik Pandey

Doctoral Research Scholar – Sri Sathya Sai Institute of Higher Learning


Using Litmaps for my research papers has significantly improved my workflow. Typically, I start with a single paper related to my topic. Whenever I find an interesting work, I add it to my search. From there, I can quickly cover my entire Related Work section.

David Fischer

Research Associate – University of Applied Sciences Kempten

“It's nice to get a quick overview of related literature. Really easy to use, and it helps getting on top of the often complicated structures of referencing”

Christoph Ludwig

Technische Universität Dresden, Germany

“This has helped me so much in researching the literature. Currently, I am beginning to investigate new fields and this has helped me hugely”

Aran Warren

Canterbury University, NZ

“I can’t live without you anymore! I also recommend you to my students.”

Professor at The Chinese University of Hong Kong

“Seeing my literature list as a network enhances my thinking process!”

Katholieke Universiteit Leuven, Belgium

“Incredibly useful tool to get to know more literature, and to gain insight in existing research”

KU Leuven, Belgium

“As a student just venturing into the world of lit reviews, this is a tool that is outstanding and helping me find deeper results for my work.”

Franklin Jeffers

South Oregon University, USA

“Any researcher could use it! The paper recommendations are great for anyone and everyone”

Swansea University, Wales

“This tool really helped me to create good bibtex references for my research papers”

Ali Mohammed-Djafari

Director of Research at LSS-CNRS, France

“Litmaps is extremely helpful with my research. It helps me organize each one of my projects and see how they relate to each other, as well as to keep up to date on publications done in my field”

Daniel Fuller

Clarkson University, USA

As a person who is an early researcher and identifies as dyslexic, I can say that having research articles laid out in the date vs cite graph format is much more approachable than looking at a standard database interface. I feel that the maps Litmaps offers lower the barrier of entry for researchers by giving them the connections between articles spaced out visually. This helps me orientate where a paper is in the history of a field. Thus, new researchers can look at one of Litmaps' "seed maps" and have the same information as hours of digging through a database.

Baylor Fain

Postdoctoral Associate – University of Florida


SCI Journal

10 Best Literature Review Tools for Researchers


Boost your research game with these Best Literature Review Tools for Researchers! Uncover hidden gems, organize your findings, and ace your next research paper!

Conducting literature reviews poses challenges for researchers due to the overwhelming volume of information available and the lack of efficient methods to manage and analyze it.

Researchers struggle to identify key sources, extract relevant information, and maintain accuracy while manually conducting literature reviews. This leads to inefficiency, errors, and difficulty in identifying gaps or trends in existing literature.

Advancements in technology have resulted in a variety of literature review tools. These tools streamline the process, offering features like automated searching, filtering, citation management, and research data extraction. They save time, improve accuracy, and provide valuable insights for researchers. 

In this article, we present a curated list of the 10 best literature review tools, empowering researchers to make informed choices and revolutionize their systematic literature review process.


Top 10 Literature Review Tools for Researchers: In A Nutshell (2023)

#1. Semantic Scholar – A free, AI-powered research tool for scientific literature


Semantic Scholar is a cutting-edge literature review tool that researchers rely on for its comprehensive access to academic publications. With its advanced AI algorithms and extensive database, it simplifies the discovery of relevant research papers. 

By employing semantic analysis, users can explore scholarly articles based on context and meaning, making it a go-to resource for scholars across disciplines. 

Additionally, Semantic Scholar offers personalized recommendations and alerts, ensuring researchers stay updated with the latest developments. However, users should be cautious of potential limitations: not all scholarly content may be indexed, and occasional false positives or inaccurate associations can occur. Furthermore, the tool primarily focuses on computer science and related fields, potentially limiting coverage in other disciplines.

Researchers should be mindful of these considerations and supplement Semantic Scholar with other reputable resources for a comprehensive literature review. Despite these caveats, Semantic Scholar remains a valuable tool for streamlining research and staying informed.

#2. Elicit – Research assistant using language models like GPT-3


Elicit is a game-changing literature review tool that has gained popularity among researchers worldwide. With its user-friendly interface and extensive database of scholarly articles, it streamlines the research process, saving time and effort. 

The tool employs advanced algorithms to provide personalized recommendations, ensuring researchers discover the most relevant studies for their field. Elicit also promotes collaboration by enabling users to create shared folders and annotate articles.

However, users should be cautious when using Elicit. It is important to verify the credibility and accuracy of the sources found through the tool, as the database encompasses a wide range of publications. 

Additionally, occasional glitches in the search function have been reported, leading to incomplete or inaccurate results. While Elicit offers tremendous benefits, researchers should remain vigilant and cross-reference information to ensure a comprehensive literature review.

#3. Scite.Ai – Your personal research assistant


Scite.Ai is a popular literature review tool that revolutionizes the research process for scholars. With its innovative citation analysis feature, researchers can evaluate the credibility and impact of scientific articles, making informed decisions about their inclusion in their own work. 

By assessing the context in which citations are used, Scite.Ai ensures that the sources selected are reliable and of high quality, enabling researchers to establish a strong foundation for their research.

However, while Scite.Ai offers numerous advantages, there are a few aspects to be cautious about. As with any data-driven tool, occasional errors or inaccuracies may arise, necessitating researchers to cross-reference and verify results with other reputable sources. 

Moreover, Scite.Ai’s coverage may be limited in certain subject areas and languages, with a possibility of missing relevant studies, especially in niche fields or non-English publications. 

Therefore, researchers should supplement the use of Scite.Ai with additional resources to ensure comprehensive literature coverage and avoid any potential gaps in their research.

Scite.Ai offers the following paid plans:

  • Monthly Plan: $20
  • Yearly Plan: $12


#4. DistillerSR – Literature Review Software


DistillerSR is a powerful literature review tool trusted by researchers for its user-friendly interface and robust features. With its advanced search capabilities, researchers can quickly find relevant studies from multiple databases, saving time and effort. 

The tool offers comprehensive screening and data extraction functionalities, streamlining the review process and improving the reliability of findings. Real-time collaboration features also facilitate seamless teamwork among researchers.

While DistillerSR offers numerous advantages, there are a few considerations. Users should invest time in understanding the tool’s features and functionalities to maximize its potential. Additionally, the pricing structure may be a factor for individual researchers or small teams with limited budgets.

Despite occasional technical glitches reported by some users, the developers actively address these issues through updates and improvements, ensuring a better user experience. 

Overall, DistillerSR empowers researchers to navigate the vast sea of information, enhancing the quality and efficiency of literature reviews while fostering collaboration among research teams.

#5. Rayyan – AI Powered Tool for Systematic Literature Reviews


Rayyan is a powerful literature review tool that simplifies the research process for scholars and academics. With its user-friendly interface and efficient management features, Rayyan is highly regarded by researchers worldwide. 

It allows users to import and organize large volumes of scholarly articles, making it easier to identify relevant studies for their research projects. The tool also facilitates seamless collaboration among team members, enhancing productivity and streamlining the research workflow. 

However, it’s important to be aware of a few aspects. The free version of Rayyan has limitations, and upgrading to a premium subscription may be necessary for additional functionalities. 

Users should also be mindful of occasional technical glitches and compatibility issues, promptly reporting any problems. Despite these considerations, Rayyan remains a valuable asset for researchers, providing an effective solution for literature review tasks.

Rayyan offers both free and paid plans:

  • Professional: $8.25/month
  • Student: $4/month
  • Pro Team: $8.25/month
  • Team+: $24.99/month


#6. Consensus – Use AI to find answers in scientific research


Consensus is a cutting-edge literature review tool that has become a go-to choice for researchers worldwide. Its intuitive interface and powerful capabilities make it a preferred tool for navigating and analyzing scholarly articles. 

With Consensus, researchers can save significant time by efficiently organizing and accessing relevant research material. People consider Consensus for several reasons.

Its advanced search algorithms and filters help researchers sift through vast amounts of information, ensuring they focus on the most relevant articles. By streamlining the literature review process, Consensus allows researchers to extract valuable insights and accelerate their research progress.

However, there are a few factors to watch out for when using Consensus. As with any automated tool, researchers should exercise caution and independently verify the accuracy and relevance of the generated results. Complex or niche topics may present challenges, resulting in limited search results. Researchers should also supplement Consensus with manual searches to ensure comprehensive coverage of the literature.

Overall, Consensus is a valuable resource for researchers seeking to optimize their literature review process. By leveraging its features alongside critical thinking and manual searches, researchers can enhance the efficiency and effectiveness of their work, advancing their research endeavors to new heights.

Consensus offers both free and paid plans:

  • Premium: $9.99/month
  • Enterprise: Custom


#7. RAx – AI-powered reading assistant


RAx is a revolutionary literature review tool that has transformed the research process for scholars worldwide. With its user-friendly interface and advanced features, it offers a vast database of academic publications across various disciplines, providing access to relevant and up-to-date literature.

Using advanced algorithms and machine learning, RAx delivers personalized recommendations, saving researchers time and effort in their literature search.

However, researchers should be cautious of potential biases in the recommendation system and supplement their search with manual verification to ensure a comprehensive review.

Additionally, occasional inaccuracies in metadata have been reported, making it essential for users to cross-reference information with reliable sources. Despite these considerations, RAx remains an invaluable tool for enhancing the efficiency and quality of literature reviews.

RAx offers both free and paid plans, with a 50% discount running as of July 2023:

  • Premium: $6/month, discounted to $3/month
  • Premium with Copilot: $8/month, discounted to $4/month


#8. Lateral – Advance your research with AI


“Lateral” is a revolutionary literature review tool trusted by researchers worldwide. With its user-friendly interface and powerful search capabilities, it simplifies the process of gathering and analyzing scholarly articles. 

By leveraging advanced algorithms and machine learning, Lateral saves researchers precious time by retrieving relevant articles and uncovering new connections between them, fostering interdisciplinary exploration.

While Lateral provides numerous benefits, users should exercise caution. It is advisable to cross-reference its findings with other sources to ensure a comprehensive review. 

Additionally, researchers must be mindful of potential biases introduced by the tool’s algorithms and should critically evaluate and interpret the results. 

Despite these considerations, Lateral remains an indispensable resource, empowering researchers to delve deeper into their fields of study and make valuable contributions to the academic community.

Lateral offers both free and paid plans:

  • Premium: $10.98
  • Pro: $27.46


#9. Iris AI – Introducing the researcher workspace


Iris AI is an innovative literature review tool that has transformed the research process for academics and scholars. With its advanced artificial intelligence capabilities, Iris AI offers a seamless and efficient way to navigate through a vast array of academic papers and publications. 

Researchers are drawn to this tool because it saves valuable time by automating the tedious task of literature review and provides comprehensive coverage across multiple disciplines. 

Its intelligent recommendation system suggests related articles, enabling researchers to discover hidden connections and broaden their knowledge base. However, caution should be exercised while using Iris AI. 

While the tool excels at surfacing relevant papers, researchers should independently evaluate the quality and validity of the sources to ensure the reliability of their work. 

It’s important to note that Iris AI may occasionally miss niche or lesser-known publications, necessitating a supplementary search using traditional methods. 

Additionally, being an algorithm-based tool, there is a possibility of false positives or missed relevant articles due to the inherent limitations of automated text analysis. Nevertheless, Iris AI remains an invaluable asset for researchers, enhancing the quality and efficiency of their research endeavors.

Iris AI offers different pricing plans to cater to various user needs:

  • Basic: Free
  • Premium: Monthly ($82.41), Quarterly ($222.49), and Annual ($791.07)


#10. Scholarcy – Summarize your literature through AI


Scholarcy is a powerful literature review tool that helps researchers streamline their work. By employing advanced algorithms and natural language processing, it efficiently analyzes and summarizes academic papers, saving researchers valuable time. 

Scholarcy’s ability to extract key information and generate concise summaries makes it an attractive option for scholars looking to quickly grasp the main concepts and findings of multiple papers.

However, it is important to exercise caution when relying solely on Scholarcy. While it provides a useful starting point, engaging with the original research papers is crucial to ensure a comprehensive understanding. 

Scholarcy’s automated summarization may not capture the nuanced interpretations or contextual information presented in the full text. 

Researchers should also be aware that certain types of documents, particularly those with heavy mathematical or technical content, may pose challenges for the tool. 

Despite these considerations, Scholarcy remains a valuable resource for researchers seeking to enhance their literature review process and improve overall efficiency.

Scholarcy offers the following pricing plans:

  • Browser Extension and Flashcards: Free 
  • Personal Library: $9.99
  • Academic Institution License: $8K+


Final Thoughts

In conclusion, conducting a comprehensive literature review is a crucial aspect of any research project, and the availability of reliable and efficient tools can greatly facilitate this process for researchers. This article has explored the top 10 literature review tools that have gained popularity among researchers.

Moreover, the rise of AI-powered tools like Iris.ai and Scite.ai promises to revolutionize the literature review process by automating various tasks and enhancing research efficiency.

Ultimately, the choice of literature review tool depends on individual preferences and research needs, but the tools presented in this article serve as valuable resources to enhance the quality and productivity of research endeavors. 

Researchers are encouraged to explore and utilize these tools to stay at the forefront of knowledge in their respective fields and contribute to the advancement of science and academia.

Q1. What are literature review tools for researchers?

Literature review tools for researchers are software or online platforms designed to assist researchers in efficiently conducting literature reviews. These tools help researchers find, organize, analyze, and synthesize relevant academic papers and other sources of information.

Q2. What criteria should researchers consider when choosing literature review tools?

When choosing literature review tools, researchers should consider factors such as the tool’s search capabilities, database coverage, user interface, collaboration features, citation management, annotation and highlighting options, integration with reference management software, and data extraction capabilities. 

It’s also essential to consider the tool’s accessibility, cost, and technical support.

Q3. Are there any literature review tools specifically designed for systematic reviews or meta-analyses?

Yes, there are literature review tools that cater specifically to systematic reviews and meta-analyses, which involve a rigorous and structured approach to reviewing existing literature. These tools often provide features tailored to the specific needs of these methodologies, such as:

Screening and eligibility assessment: Systematic review tools typically offer functionalities for screening and assessing the eligibility of studies based on predefined inclusion and exclusion criteria. This streamlines the process of selecting relevant studies for analysis.

Data extraction and quality assessment: These tools often include templates and forms to facilitate data extraction from selected studies. Additionally, they may provide features for assessing the quality and risk of bias in individual studies.

Meta-analysis support: Some literature review tools include statistical analysis features that assist in conducting meta-analyses. These features can help calculate effect sizes, perform statistical tests, and generate forest plots or other visual representations of the meta-analytic results (a minimal sketch of this arithmetic follows this list).

Reporting assistance: Many tools provide templates or frameworks for generating systematic review reports, ensuring compliance with established guidelines such as PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).
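
To make the effect-size arithmetic concrete, here is a minimal fixed-effect (inverse-variance) meta-analysis sketch in Python. It illustrates the calculation such tools automate, not the method of any particular product, and the effect sizes and variances are invented:

```python
# Fixed-effect (inverse-variance) meta-analysis: a generic sketch of the
# arithmetic systematic review tools automate. All numbers are invented.
import math

effects = [0.30, 0.45, 0.10]    # per-study effect sizes (e.g., Cohen's d)
variances = [0.02, 0.05, 0.01]  # per-study sampling variances

weights = [1.0 / v for v in variances]  # w_i = 1 / v_i
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))      # standard error of the pooled effect

print(f"pooled effect = {pooled:.3f}, "
      f"95% CI = [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```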

Q4. Can literature review tools help with organizing and annotating collected references?

Yes, literature review tools often come equipped with features to help researchers organize and annotate collected references. Some common functionalities include:

Reference management: These tools enable researchers to import references from various sources, such as databases or PDF files, and store them in a central library. They typically allow you to create folders or tags to organize references based on themes or categories.

Annotation capabilities: Many tools provide options for adding annotations, comments, or tags to individual references or specific sections of research articles. This helps researchers keep track of important information, highlight key findings, or note potential connections between different sources.

Full-text search: Literature review tools often offer full-text search functionality, allowing you to search within the content of imported articles or documents. This can be particularly useful when you need to locate specific information or keywords across multiple references.

Integration with citation managers: Some literature review tools integrate with popular citation managers like Zotero, Mendeley, or EndNote, allowing seamless transfer of references and annotations between platforms.

By leveraging these features, researchers can streamline the organization and annotation of their collected references, making it easier to retrieve relevant information during the literature review process.



Accelerate your research with the best systematic literature review tools

The ideal literature review tool helps you make sense of the most important insights in your research field. ATLAS.ti empowers researchers to perform powerful and collaborative analysis using the leading software for literature review.


Finalize your literature review faster and with less effort

ATLAS.ti makes it easy to manage, organize, and analyze articles, PDFs, excerpts, and more for your projects. Conduct a deep systematic literature review and get the insights you need with a comprehensive toolset built specifically for your research projects.


Figure out the "why" behind your participants' motivations

Understand the behaviors and emotions that are driving your focus group participants. With ATLAS.ti, you can transform your raw data and turn it into qualitative insights you can learn from. Easily determine user intent in the same spot you're deciphering your overall focus group data.


Visualize your research findings like never before

We make it simple to present your analysis results with meaningful charts, networks, and diagrams. Instead of figuring out how to communicate the insights you just unlocked, we enable you to leverage easy-to-use visualizations that support your goals.


Everything you need to elevate your literature review

Import and organize literature data

Import and analyze any type of text content – ATLAS.ti supports all standard text and transcription files such as Word and PDF.

Analyze with ease and speed

Utilize easy-to-learn workflows that save valuable time, such as auto coding, sentiment analysis, team collaboration, and more.

Leverage AI-driven tools

Make efficiency a priority and let ATLAS.ti do your work with AI-powered research tools and features for faster results.

Visualize and present findings

With just a few clicks, you can create meaningful visualizations like charts, word clouds, tables, networks, among others for your literature data.

The faster way to make sense of your literature review. Try it for free, today.

A literature review analyzes the most current research within a research area. A literature review consists of published studies from many sources:

  • Peer-reviewed academic publications
  • Full-length books
  • University bulletins
  • Conference proceedings
  • Dissertations and theses

Literature reviews allow researchers to:

  • Summarize the state of the research
  • Identify unexplored research inquiries
  • Recommend practical applications
  • Critique currently published research

Literature reviews are either standalone publications or part of a paper as background for an original research project. A literature review, as a section of a more extensive research article, summarizes the current state of the research to justify the primary research described in the paper.

For example, a researcher may have reviewed the literature on a new supplement's health benefits and concluded that more research needs to be conducted on those with a particular condition. This research gap warrants a study examining how this understudied population reacted to the supplement. Researchers need to establish this research gap through a literature review to persuade journal editors and reviewers of the value of their research.

Consider a literature review as a typical research publication presenting a study, its results, and the salient points scholars can infer from the study. The only significant difference is that a literature review treats existing literature as the research data to collect and analyze. From that analysis, a literature review can suggest new inquiries to pursue.

Identify a focus

Similar to a typical study, a literature review should have a research question or questions that analysis can answer. This sort of inquiry typically targets a particular phenomenon, population, or even research method to examine how different studies have looked at the same thing differently. A literature review, then, should center the literature collection around that focus.

Collect and analyze the literature

With a focus in mind, a researcher can collect studies that provide relevant information for that focus. They can then analyze the collected studies by finding and identifying patterns or themes that occur frequently. This analysis allows the researcher to point out what the field has frequently explored or, on the other hand, overlooked.

Suggest implications

The literature review allows the researcher to argue a particular point through the evidence provided by the analysis. For example, suppose the analysis makes it apparent that the published research on people's sleep patterns has not adequately explored the connection between sleep and a particular factor (e.g., television-watching habits, indoor air quality). In that case, the researcher can argue that further study can address this research gap.

External requirements aside (e.g., many academic journals have a word limit of 6,000-8,000 words), a literature review as a standalone publication is as long as necessary to allow readers to understand the current state of the field. Even if it is just a section in a larger paper, a literature review is long enough to allow the researcher to justify the study that is the paper's focus.

Note that a literature review needs only to incorporate a representative number of studies relevant to the research inquiry. For term papers in university courses, 10 to 20 references might be appropriate for demonstrating analytical skills. Published literature reviews in peer-reviewed journals might have 40 to 50 references. One of the essential goals of a literature review is to persuade readers that you have analyzed a representative segment of the research you are reviewing.

Researchers can find published research from various online sources:

  • Journal websites
  • Research databases
  • Search engines (Google Scholar, Semantic Scholar)
  • Research repositories
  • Social networking sites (Academia, ResearchGate)

Many journals make articles freely available under the term "open access," meaning that there are no restrictions to viewing and downloading such articles. Otherwise, collecting research articles from restricted journals usually requires access from an institution such as a university or a library.

Evidence of a rigorous literature review is more important than the word count or the number of articles that undergo data analysis. Especially when writing for a peer-reviewed journal, it is essential to consider how to demonstrate research rigor in your literature review to persuade reviewers of its scholarly value.

Select field-specific journals

The most significant research relevant to your field focuses on a narrow set of journals similar in aims and scope. Consider who the most prominent scholars in your field are and determine which journals publish their research or have them as editors or reviewers. Journals tend to look favorably on systematic reviews that include articles they have published.

Incorporate recent research

Recently published studies have greater value in determining the gaps in the current state of research. Older research is likely to have encountered challenges and critiques that may render their findings outdated or refuted. What counts as recent differs by field; start by looking for research published within the last three years and gradually expand to older research when you need to collect more articles for your review.

Consider the quality of the research

Literature reviews are only as strong as the quality of the studies that the researcher collects. You can judge any particular study by many factors, including:

  • the quality of the article's journal
  • the article's research rigor
  • the timeliness of the research

The critical point here is that you should consider more than just a study's findings or research outputs when including research in your literature review.

Narrow your research focus

Ideally, the articles you collect for your literature review have something in common, such as a research method or research context. For example, if you are conducting a literature review about teaching practices in high school contexts, it is best to narrow your literature search to studies focusing on high school. You should consider expanding your search to junior high school and university contexts only when there are not enough studies that match your focus.

You can create a project in ATLAS.ti for keeping track of your collected literature. ATLAS.ti allows you to view and analyze full text articles and PDF files in a single project. Within projects, you can use document groups to separate studies into different categories for easier and faster analysis.

For example, a researcher with a literature review that examines studies across different countries can create document groups labeled "United Kingdom," "Germany," and "United States," among others. A researcher can also use ATLAS.ti's global filters to narrow analysis to a particular set of studies and gain insights about a smaller set of literature.

ATLAS.ti allows you to search, code, and analyze text documents and PDF files. You can treat a set of research articles like other forms of qualitative data. The codes you apply to your literature collection allow for analysis through many powerful tools in ATLAS.ti:

  • Code Co-Occurrence Explorer
  • Code Co-Occurrence Table
  • Code-Document Table

Other tools in ATLAS.ti employ machine learning to facilitate parts of the coding process for you. Some of our software tools that are effective for analyzing literature include:

  • Named Entity Recognition
  • Opinion Mining
  • Sentiment Analysis

As long as your documents are text documents or text-enabled PDF files, ATLAS.ti's automated tools can provide essential assistance in the data analysis process.
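
To make one of these concrete: named entity recognition scans text for mentions of people, organizations, places, and dates. Below is a minimal, generic sketch using the open-source spaCy library; it illustrates the technique only, not ATLAS.ti's implementation, and the sample sentence is a placeholder:

```python
# Generic named-entity recognition sketch using spaCy; illustrates the
# technique, not ATLAS.ti's implementation.
# Assumed setup: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The trial was run at Harvard University in Boston and reported in 2019.")

# Print each detected entity with its label (e.g., ORG, GPE, DATE).
for ent in doc.ents:
    print(ent.text, ent.label_)
```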

7 open source tools to make literature reviews easy


Opensource.com

A good literature review is critical for academic research in any field, whether it is for a research article, a critical review for coursework, or a dissertation. In a recent article, I presented detailed steps for doing a literature review using open source software.

The following is a brief summary of seven free and open source software tools described in that article that will make your next literature review much easier.

1. GNU Linux

Most literature reviews are accomplished by graduate students working in research labs in universities. For absurd reasons, graduate students often have the worst computers on campus. They are often old, slow, and clunky Windows machines that have been discarded and recycled from the undergraduate computer labs. Installing a flavor of GNU Linux will breathe new life into these outdated PCs. There are more than 100 distributions, all of which can be downloaded and installed for free. Most popular Linux distributions come with a "try-before-you-buy" feature. For example, with Ubuntu you can make a bootable USB stick that allows you to test-run the Ubuntu desktop experience without interfering in any way with your PC configuration. If you like the experience, you can use the stick to install Ubuntu on your machine permanently.

2. Firefox

Linux distributions generally come with a free web browser, and the most popular is Firefox. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why.

3. Unpaywall

Often one of the hardest parts of a literature review is gaining access to the papers you want to read for your review. The unintended consequence of copyright restrictions and paywalls is that they have narrowed access to the peer-reviewed literature to the point that even Harvard University is challenged to pay for it. Fortunately, there are a lot of open access articles—about a third of the literature is free (and the percentage is growing). Unpaywall is a Firefox plugin that enables researchers to click a green tab on the side of the browser and skip the paywall on millions of peer-reviewed journal articles. This makes finding accessible copies of articles much faster than searching each database individually. Unpaywall is fast, free, and legal, as it accesses many of the open access sites that I covered in my paper on using open source in lit reviews.
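
Unpaywall also exposes a free REST API, which is handy when you want to check many papers at once instead of one browser tab at a time. Here is a minimal sketch in Python; the DOI and email address are placeholders (Unpaywall asks callers to include a real contact email):

```python
# Look up an open-access copy of a paper via the Unpaywall REST API.
# Assumed setup: pip install requests. The DOI and email are placeholders.
import requests

DOI = "10.1038/nphys1170"   # placeholder DOI
EMAIL = "you@example.edu"   # Unpaywall requests a real contact email

resp = requests.get(f"https://api.unpaywall.org/v2/{DOI}",
                    params={"email": EMAIL}, timeout=30)
resp.raise_for_status()
record = resp.json()

best = record.get("best_oa_location")  # null if no open-access copy is known
print(best["url"] if best else "No open-access copy found.")
```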

4. Zotero

Formatting references is the most tedious of academic tasks. Zotero can save you from ever doing it again. It operates as an Android app, desktop program, and a Firefox plugin (which I recommend). It is a free, easy-to-use tool to help you collect, organize, cite, and share research. It replaces the functionality of proprietary packages such as RefWorks, Endnote, and Papers for zero cost. Zotero can auto-add bibliographic information directly from websites. In addition, it can scrape bibliographic data from PDF files. Notes can be easily added on each reference. Finally, and most importantly, it can import and export the bibliography databases in all publishers' various formats. With this feature, you can export bibliographic information to paste into a document editor for a paper or thesis—or even to a wiki for dynamic collaborative literature reviews (see tool #7 for more on the value of wikis in lit reviews).

5. LibreOffice

Your thesis or academic article can be written conventionally with the free office suite LibreOffice, which operates similarly to Microsoft's Office products but respects your freedom. Zotero has a word processor plugin to integrate directly with LibreOffice. LibreOffice is more than adequate for the vast majority of academic paper writing.

6. LaTeX

If LibreOffice is not enough for your layout needs, you can take your paper writing one step further with LaTeX, a high-quality typesetting system specifically designed for producing technical and scientific documentation. LaTeX is particularly useful if your writing has a lot of equations in it. Also, Zotero libraries can be directly exported to BibTeX files for use with LaTeX.
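
To see how the Zotero-to-LaTeX hand-off works in practice, here is a minimal sketch; the file name refs.bib and the citation key smith2020 are placeholders for whatever your Zotero export contains:

```latex
% paper.tex -- minimal sketch of citing a Zotero-exported BibTeX library.
% refs.bib and the key smith2020 are placeholders.
%
% A refs.bib entry exported from Zotero might look like:
%   @article{smith2020,
%     author  = {Smith, Jane},
%     title   = {A Placeholder Title},
%     journal = {Journal of Examples},
%     year    = {2020}
%   }
\documentclass{article}
\begin{document}

As prior work has shown~\cite{smith2020}, free tools can cover the entire
literature-review toolchain.

\bibliographystyle{plain}
\bibliography{refs} % reads refs.bib
\end{document}
```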

7. MediaWiki

If you want to leverage the open source way to get help with your literature review, you can facilitate a dynamic collaborative literature review. A wiki is a website that allows anyone to add, delete, or revise content directly using a web browser. MediaWiki is free software that enables you to set up your own wikis.

Researchers can (in decreasing order of complexity): 1) set up their own research group wiki with MediaWiki, 2) utilize wikis already established at their universities (e.g., Aalto University), or 3) use wikis dedicated to areas that they research. For example, several university research groups that focus on sustainability (including mine) use Appropedia, which is set up for collaborative solutions on sustainability, appropriate technology, poverty reduction, and permaculture.

Using a wiki makes it easy for anyone in the group to keep track of the status of and update literature reviews (both current and older or from other researchers). It also enables multiple members of the group to easily collaborate on a literature review asynchronously. Most importantly, it enables people outside the research group to help make a literature review more complete, accurate, and up-to-date.

Wrapping up

Free and open source software can cover the entire lit review toolchain, meaning there's no need for anyone to use proprietary solutions. Do you use other libre tools for making literature reviews or other academic work easier? Please let us know your favorites in the comments.

Joshua Pearce


Literature Review Tips & Tools


Organizational Tools

Tools for Systematic Reviews

  • Bubbl.us Free online brainstorming/mindmapping tool that also has a free iPad app.
  • Coggle Another free online mindmapping tool.
  • Organization & Structure tips from Purdue University Online Writing Lab
  • Literature Reviews from The Writing Center at University of North Carolina at Chapel Hill Gives several suggestions and descriptions of ways to organize your lit review.
  • Cochrane Handbook for Systematic Reviews of Interventions "The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions. "
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) website "PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions."
  • PRISMA Flow Diagram Generator Free tool that will generate a PRISMA flow diagram from a CSV file (sample CSV template provided). Please cite as: Haddaway, N. R., Page, M. J., Pritchard, C. C., & McGuinness, L. A. (2022). PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Systematic Reviews, 18, e1230. https://doi.org/10.1002/cl2.1230
  • Rayyan "Rayyan is a 100% FREE web application to help systematic review authors perform their job in a quick, easy and enjoyable fashion. Authors create systematic reviews, collaborate on them, maintain them over time and get suggestions for article inclusion."
  • Covidence Covidence is a tool to help manage systematic reviews (and create PRISMA flow diagrams). **UMass Amherst doesn't subscribe, but Covidence offers a free trial for 1 review of no more than 500 records. It is also set up for researchers to pay for each review.
  • PROSPERO - Systematic Review Protocol Registry "PROSPERO accepts registrations for systematic reviews, rapid reviews and umbrella reviews. PROSPERO does not accept scoping reviews or literature scans. Sibling PROSPERO sites registers systematic reviews of human studies and systematic reviews of animal studies."
  • Critical Appraisal Tools from JBI Joanna Briggs Institute at the University of Adelaide provides these checklists to help evaluate different types of publications that could be included in a review.
  • Systematic Review Toolbox "The Systematic Review Toolbox is a community-driven, searchable, web-based catalogue of tools that support the systematic review process across multiple domains. The resource aims to help reviewers find appropriate tools based on how they provide support for the systematic review process. Users can perform a simple keyword search (i.e. Quick Search) to locate tools, a more detailed search (i.e. Advanced Search) allowing users to select various criteria to find specific types of tools and submit new tools to the database. Although the focus of the Toolbox is on identifying software tools to support systematic reviews, other tools or support mechanisms (such as checklists, guidelines and reporting standards) can also be found."
  • Abstrackr Free, open-source tool that "helps you upload and organize the results of a literature search for a systematic review. It also makes it possible for your team to screen, organize, and manipulate all of your abstracts in one place." -From Center for Evidence Synthesis in Health
  • SRDR Plus (Systematic Review Data Repository: Plus) An open-source tool for extracting, managing, and archiving data, developed by the Center for Evidence Synthesis in Health at Brown University
  • RoB 2 Tool (Risk of Bias for Randomized Trials) A revised Cochrane risk of bias tool for randomized trials

All-in-one Literature Review Software

Start your free trial.

Free MAXQDA trial for Windows and Mac

Your trial will end automatically after 14 days and will not renew. There is no need for cancellation.

MAXQDA The All-in-one Literature Review Software

MAXQDA is the best choice for a comprehensive literature review. It works with a wide range of data types and offers powerful tools for literature review, such as reference management, qualitative text analysis, vocabulary analysis, and more.


As your all-in-one literature review software, MAXQDA can be used to manage your entire research project. Easily import data from texts, interviews, focus groups, PDFs, web pages, spreadsheets, articles, e-books, and even social media data. Connect the reference management system of your choice with MAXQDA to easily import bibliographic data. Organize your data in groups, link relevant quotes to each other, keep track of your literature summaries, and share and compare work with your team members. Your project file stays flexible and you can expand and refine your category system as you go to suit your research.

Developed by and for researchers – since 1989


Having used several qualitative data analysis software programs, there is no doubt in my mind that MAXQDA has advantages over all the others. In addition to its remarkable analytical features for harnessing data, MAXQDA’s stellar customer service, online tutorials, and global learning community make it a user friendly and top-notch product.

Sally S. Cohen – NYU Rory Meyers College of Nursing

Literature Review is Faster and Smarter with MAXQDA


Easily import your literature review data

With literature review software like MAXQDA, you can easily import bibliographic data from reference management programs for your literature review. MAXQDA works with all reference management programs that can export their databases in RIS format, which is a standard format for bibliographic information. Like MAXQDA, these reference managers use project files, containing all collected bibliographic information, such as author, title, links to websites, keywords, abstracts, and other information. In addition, you can easily import the corresponding full texts. Upon import, all documents will be automatically pre-coded to facilitate your literature review at a later stage.
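
For reference, RIS is a simple tagged plain-text format, so the hand-off between tools is easy to inspect. A minimal record might look like the following; every field value here is a placeholder:

```
TY  - JOUR
AU  - Smith, Jane
TI  - A Placeholder Article Title
JO  - Journal of Examples
PY  - 2020
KW  - literature review
AB  - Placeholder abstract text.
UR  - https://example.org/article
ER  -
```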

Capture your ideas while analyzing your literature

Great ideas will often occur to you while you’re doing your literature review. Using MAXQDA as your literature review software, you can create memos to store your ideas, such as research questions and objectives, or you can use memos for paraphrasing passages into your own words. By attaching memos like post-it notes to text passages, texts, document groups, images, audio/video clips, and of course codes, you can easily retrieve them at a later stage. Particularly useful for literature reviews are free memos written during the course of work from which passages can be copied and inserted into the final text.


Find concepts important to your literature review

When generating a literature review you might need to analyze a large amount of text. Luckily, MAXQDA offers Text Search tools that allow you to explore your documents without reading or coding them first. Automatically search for keywords (or dictionaries of keywords), such as important concepts for your literature review, and automatically code them with just a few clicks; a sketch of the underlying idea follows below. Document variables that were automatically created during the import of your bibliographic information can be used for searching and retrieving certain text segments. MAXQDA's powerful Coding Query allows you to analyze the combination of activated codes in different ways.
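
As a generic illustration of what dictionary-based keyword coding means, here is a minimal Python sketch. It shows the concept only, not MAXQDA's implementation; the keyword dictionary and documents are invented:

```python
# Dictionary-based keyword coding: a generic sketch of the concept,
# not MAXQDA's implementation. Keywords and documents are invented.
KEYWORD_DICT = {
    "methods": ["survey", "interview", "regression"],
    "theory": ["framework", "model", "hypothesis"],
}

documents = {
    "smith2020": "We ran a survey and fit a regression model.",
    "lee2021": "The framework suggests a new hypothesis.",
}

# Assign each code to every document whose text mentions one of its keywords.
coded = {
    doc_id: [code for code, words in KEYWORD_DICT.items()
             if any(word in text.lower() for word in words)]
    for doc_id, text in documents.items()
}
print(coded)  # {'smith2020': ['methods', 'theory'], 'lee2021': ['theory']}
```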

Aggregate your literature review

When conducting a literature review you can easily get lost. But with MAXQDA as your literature review software, you will never lose track of the bigger picture. Among other tools, MAXQDA’s overview and summary tables are especially useful for aggregating your literature review results. MAXQDA offers overview tables for almost everything, codes, memos, coded segments, links, and so on. With MAXQDA literature review tools you can create compressed summaries of sources that can be effectively compared and represented, and with just one click you can easily export your overview and summary tables and integrate them into your literature review report.


Powerful and easy-to-use literature review tools

Quantitative aspects can also be relevant when conducting a literature review analysis. Using MAXQDA as your literature review software enables you to employ a vast range of procedures for the quantitative evaluation of your material. You can sort sources according to document variables, compare amounts with frequency tables and charts, and much more. Make sure you don’t miss the word frequency tools of MAXQDA’s add-on module for quantitative content analysis. Included are tools for visual text exploration, content analysis, vocabulary analysis, dictionary-based analysis, and more that facilitate the quantitative analysis of terms and their semantic contexts.

Visualize your literature review

As an all-in-one literature review software, MAXQDA offers a variety of visual tools that are tailor-made for qualitative research and literature reviews. Create stunning visualizations to analyze your material. Of course, you can export your visualizations in various formats to enrich your literature review analysis report. Work with word clouds to explore the central themes of a text and key terms that are used, create charts to easily compare the occurrences of concepts and important keywords, or make use of the graphical representation possibilities of MAXMaps, which in particular permit the creation of concept maps. Thanks to the interactive connection between your visualizations with your MAXQDA data, you’ll never lose sight of the big picture.


AI Assist: literature review software meets AI

AI Assist – your virtual research assistant – supports your literature review with various tools. AI Assist simplifies your work by automatically analyzing and summarizing elements of your research project and by generating suggestions for subcodes. No matter which AI tool you use – you can customize your results to suit your needs.

Free tutorials and guides on literature review

MAXQDA offers a variety of free learning resources for literature review, making it easy for both beginners and advanced users to learn how to use the software. From free video tutorials and webinars to step-by-step guides and sample projects, these resources provide a wealth of information to help you understand the features and functionality of MAXQDA for literature review. For beginners, the software’s user-friendly interface and comprehensive help center make it easy to get started with your data analysis, while advanced users will appreciate the detailed guides and tutorials that cover more complex features and techniques. Whether you’re just starting out or are an experienced researcher, MAXQDA’s free learning resources will help you get the most out of your literature review.


FAQ: Literature Review Software

Literature review software is a tool designed to help researchers efficiently manage and analyze the existing body of literature relevant to their research topic. MAXQDA, a versatile qualitative data analysis tool, can be instrumental in this process.

Literature review software, like MAXQDA, typically includes features such as data import and organization, coding and categorization, advanced search capabilities, data visualization tools, and collaboration features. These features facilitate the systematic review and analysis of relevant literature.

Literature review software, including MAXQDA, can assist in qualitative data interpretation by enabling researchers to organize, code, and categorize relevant literature. This organized data can then be analyzed to identify trends, patterns, and themes, helping researchers draw meaningful insights from the literature they’ve reviewed.

Yes, literature review software like MAXQDA is suitable for researchers of all levels of experience. It offers user-friendly interfaces and extensive support resources, making it accessible to beginners while providing advanced features that cater to the needs of experienced researchers.

Getting started with literature review software, such as MAXQDA, typically involves downloading and installing the software, importing your relevant literature, and exploring the available features. Many software providers offer tutorials and documentation to help users get started quickly.

For students, MAXQDA can be an excellent literature review software choice. Its user-friendly interface, comprehensive feature set, and educational discounts make it a valuable tool for students conducting literature reviews as part of their academic research.

MAXQDA is available for both Windows and Mac users, making it a suitable choice for Mac users looking for literature review software. It offers a consistent and feature-rich experience on Mac operating systems.

When it comes to literature review software, MAXQDA is widely regarded as one of the best choices. Its robust feature set, user-friendly interface, and versatility make it a top pick for researchers conducting literature reviews.

Yes, literature reviews can be conducted without software. However, using literature review software like MAXQDA can significantly streamline and enhance the process by providing tools for efficient data management, analysis, and visualization.

CAREER FEATURE • 04 December 2020 • Correction 09 December 2020

How to write a superb literature review

Andy Tay is a freelance writer based in Singapore.


Literature reviews are important resources for scientists. They provide historical context for a field while offering opinions on its future trajectory. Creating them can provide inspiration for one’s own research, as well as some practice in writing. But few scientists are trained in how to write a review — or in what constitutes an excellent one. Even picking the appropriate software to use can be an involved decision (see ‘Tools and techniques’). So Nature asked editors and working scientists with well-cited reviews for their tips.


doi: https://doi.org/10.1038/d41586-020-03422-x

Interviews have been edited for length and clarity.

Updates & Corrections

Correction 09 December 2020: An earlier version of the tables in this article included some incorrect details about the programs Zotero, Endnote and Manubot. These have now been corrected.



RAxter is now Enago Read! Enjoy the same licensing and pricing with enhanced capabilities. No action required for existing customers.

Your all-in-one AI-powered Reading Assistant

A Reading Space to Ideate, Create Knowledge, and Collaborate on Your Research

  • Smartly organize your research
  • Receive recommendations that cannot be ignored
  • Collaborate with your team to read, discuss, and share knowledge


From Surface-Level Exploration to Critical Reading - All in One Place!

Fine-tune your literature search.

Our AI-powered reading assistant saves time spent on the exploration of relevant resources and allows you to focus more on reading.

Select phrases or specific sections and explore more research papers related to the core aspects of your selections. Pin the useful ones for future references.

Our platform brings you the latest research related to your research and project work.

Speed up your literature review

Quickly generate a summary of key sections of any paper with our summarizer.

Make informed decisions about which papers are relevant, and where to invest your time in further reading.

Get key insights from the paper, quickly comprehend the paper’s unique approach, and recall the key points.

Bring order to your research projects

Organize your reading lists into different projects and maintain the context of your research.

Quickly sort items into collections and tag or filter them according to keywords and color codes.

Experience the power of sharing by finding all the shared literature at one place.

Decode papers effortlessly for faster comprehension

Highlight what is important so that you can retrieve it faster next time.

Select any text in the paper and ask Copilot to explain it to help you get a deeper understanding.

Ask questions and follow-ups from AI-powered Copilot.

Collaborate to read with your team, professors, or students

Share and discuss literature and drafts with your study group, colleagues, experts, and advisors. Recommend valuable resources and help each other for better understanding.

Work in shared projects efficiently and improve visibility within your study group or lab members.

Keep track of your team's progress by being constantly connected and engaging in active knowledge transfer by requesting full access to relevant papers and drafts.

Find papers from across the world's largest repositories


Privacy and security of your research data are integral to our mission.


Everything you add or create on Enago Read is private by default. It is visible if and when you share it with other users.

Copyright

You can put a Creative Commons license on original drafts to protect your IP. For shared files, Enago Read always maintains a copy in case of deletion by collaborators or revoked access.

Security

We use standard security protocols and algorithms, including MD5 hashing, SSL, and HTTPS, to secure your data.

A free, AI-powered research tool for scientific literature


Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI.
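Semantic Scholar also exposes a public Graph API for programmatic literature search. The snippet below is a minimal sketch of querying it from Python with the requests library; the `/graph/v1/paper/search` endpoint and field names follow the publicly documented API, but rate limits, authentication options, and available fields change over time, so treat this as illustrative rather than definitive.

```python
# Minimal sketch: searching Semantic Scholar's Graph API for papers.
# Endpoint and fields per the public documentation; verify current
# rate limits and API-key requirements before relying on this.
import requests

def search_papers(query: str, limit: int = 5) -> list[dict]:
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit,
                "fields": "title,year,citationCount"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

if __name__ == "__main__":
    for paper in search_papers("systematic review methodology"):
        print(paper.get("year"), "-", paper.get("title"))
```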

Purdue Online Writing Lab (Purdue OWL®), College of Liberal Arts

Writing a Literature Review


A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research (scholarship) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis are
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)

Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from disciplines or fields that use different research methods, you can group them by the approach taken, for example: qualitative versus quantitative research; empirical versus theoretical scholarship; or research divided by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources.

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can be used also in a literature review, so you’ll be not only partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often). A rough version of this check is sketched in the code after this list.
  • Read more about synthesis here.
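As a concrete (and deliberately rough) illustration of the one-source-per-paragraph check above, the sketch below flags draft paragraphs that cite at most one distinct source. This is not an OWL tool; the regex assumes simple parenthetical author-year citations such as "(Smith, 2020)" and would need adapting to your citation style.

```python
# Rough heuristic: flag paragraphs citing <= 1 distinct source,
# which often signals summary rather than synthesis.
# Assumes parenthetical author-year citations, e.g. (Smith, 2020).
import re

CITATION = re.compile(r"\(([A-Z][A-Za-z'-]+(?: et al\.)?),?\s*(\d{4})\)")

def flag_summary_paragraphs(text: str) -> list[tuple[int, int]]:
    flagged = []
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    for i, para in enumerate(paragraphs):
        n_sources = len(set(CITATION.findall(para)))
        if n_sources <= 1:
            flagged.append((i, n_sources))
    return flagged

draft = """Smith argues X (Smith, 2020).

Several studies converge on Y (Lee, 2019), though Z remains contested (Khan, 2021)."""
print(flag_summary_paragraphs(draft))  # [(0, 1)] -- first paragraph cites one source
```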

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.


AI Literature Review Generator

Generate high-quality literature reviews fast with AI.

  • Academic Research: Create a literature review for your thesis, dissertation, or research paper.
  • Professional Research: Conduct a literature review for a project, report, or proposal at work.
  • Content Creation: Write a literature review for a blog post, article, or book.
  • Personal Research: Conduct a literature review to deepen your understanding of a topic of interest.


The Sheridan Libraries

Write a Literature Review


Introduction

Literature reviews take time. Here is some general information to know before you start.

  • VIDEO -- This video from North Carolina State University Libraries (2020) is a great overview of the entire process. The transcript is included. It is for everyone, so ignore the mention of "graduate students". It runs 9.5 minutes, and every second is important.
  • OVERVIEW -- Read this page from Purdue's OWL. It's not long, and it gives some tips to fill in what you just learned from the video.
  • NOT A RESEARCH ARTICLE -- A literature review follows a different style, format, and structure from a research article.

Steps to Completing a Literature Review



Guidance to best tools and practices for systematic reviews

Kat Kolaski

1 Departments of Orthopaedic Surgery, Pediatrics, and Neurology, Wake Forest School of Medicine, Winston-Salem, NC USA

Lynne Romeiser Logan

2 Department of Physical Medicine and Rehabilitation, SUNY Upstate Medical University, Syracuse, NY USA

John P. A. Ioannidis

3 Departments of Medicine, of Epidemiology and Population Health, of Biomedical Data Science, and of Statistics, and Meta-Research Innovation Center at Stanford (METRICS), Stanford University School of Medicine, Stanford, CA USA

Abstract

Data continue to accumulate indicating that many systematic reviews are methodologically flawed, biased, redundant, or uninformative. Some improvements have occurred in recent years based on empirical methods research and standardization of appraisal tools; however, many authors do not routinely or consistently apply these updated methods. In addition, guideline developers, peer reviewers, and journal editors often disregard current methodological standards. Although extensively acknowledged and explored in the methodological literature, most clinicians seem unaware of these issues and may automatically accept evidence syntheses (and clinical practice guidelines based on their conclusions) as trustworthy.

A plethora of methods and tools are recommended for the development and evaluation of evidence syntheses. It is important to understand what these are intended to do (and cannot do) and how they can be utilized. Our objective is to distill this sprawling information into a format that is understandable and readily accessible to authors, peer reviewers, and editors. In doing so, we aim to promote appreciation and understanding of the demanding science of evidence synthesis among stakeholders. We focus on well-documented deficiencies in key components of evidence syntheses to elucidate the rationale for current standards. The constructs underlying the tools developed to assess reporting, risk of bias, and methodological quality of evidence syntheses are distinguished from those involved in determining overall certainty of a body of evidence. Another important distinction is made between those tools used by authors to develop their syntheses as opposed to those used to ultimately judge their work.

Exemplar methods and research practices are described, complemented by novel pragmatic strategies to improve evidence syntheses. The latter include preferred terminology and a scheme to characterize types of research evidence. We organize best practice resources in a Concise Guide that can be widely adopted and adapted for routine implementation by authors and journals. Appropriate, informed use of these is encouraged, but we caution against their superficial application and emphasize their endorsement does not substitute for in-depth methodological training. By highlighting best practices with their rationale, we hope this guidance will inspire further evolution of methods and tools that can advance the field.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13643-023-02255-9.

Part 1. The state of evidence synthesis

Evidence syntheses are commonly regarded as the foundation of evidence-based medicine (EBM). They are widely accredited for providing reliable evidence and, as such, they have significantly influenced medical research and clinical practice. Despite their uptake throughout health care and ubiquity in contemporary medical literature, some important aspects of evidence syntheses are generally overlooked or not well recognized. Evidence syntheses are mostly retrospective exercises, they often depend on weak or irreparably flawed data, and they may use tools that have acknowledged or yet unrecognized limitations. They are complicated and time-consuming undertakings prone to bias and errors. Production of a good evidence synthesis requires careful preparation and high levels of organization in order to limit potential pitfalls [ 1 ]. Many authors do not recognize the complexity of such an endeavor and the many methodological challenges they may encounter. Failure to do so is likely to result in research and resource waste.

Given their potential impact on people’s lives, it is crucial for evidence syntheses to correctly report on the current knowledge base. In order to be perceived as trustworthy, reliable demonstration of the accuracy of evidence syntheses is equally imperative [ 2 ]. Concerns about the trustworthiness of evidence syntheses are not recent developments. From the early years when EBM first began to gain traction until recent times when thousands of systematic reviews are published monthly [ 3 ], the rigor of evidence syntheses has always varied. Many systematic reviews and meta-analyses had obvious deficiencies because original methods and processes had gaps, lacked precision, and/or were not widely known. The situation has improved with empirical research concerning which methods to use and standardization of appraisal tools. However, given the geometric increase in the number of evidence syntheses being published, a relatively larger pool of unreliable evidence syntheses is being published today.

Publication of methodological studies that critically appraise the methods used in evidence syntheses is increasing at a fast pace. This reflects the availability of tools specifically developed for this purpose [ 4 – 6 ]. Yet many clinical specialties report that alarming numbers of evidence syntheses fail on these assessments. The syntheses identified report on a broad range of common conditions including, but not limited to, cancer, [ 7 ] chronic obstructive pulmonary disease, [ 8 ] osteoporosis, [ 9 ] stroke, [ 10 ] cerebral palsy, [ 11 ] chronic low back pain, [ 12 ] refractive error, [ 13 ] major depression, [ 14 ] pain, [ 15 ] and obesity [ 16 , 17 ]. The situation is even more concerning with regard to evidence syntheses included in clinical practice guidelines (CPGs) [ 18 – 20 ]. Astonishingly, in a sample of CPGs published in 2017–18, more than half did not apply even basic systematic methods in the evidence syntheses used to inform their recommendations [ 21 ].

These reports, while not widely acknowledged, suggest there are pervasive problems not limited to evidence syntheses that evaluate specific kinds of interventions or include primary research of a particular study design (eg, randomized versus non-randomized) [ 22 ]. Similar concerns about the reliability of evidence syntheses have been expressed by proponents of EBM in highly circulated medical journals [ 23 – 26 ]. These publications have also raised awareness about redundancy, inadequate input of statistical expertise, and deficient reporting. These issues plague primary research as well; however, there is heightened concern for the impact of these deficiencies given the critical role of evidence syntheses in policy and clinical decision-making.

Methods and guidance to produce a reliable evidence synthesis

Several international consortia of EBM experts and national health care organizations currently provide detailed guidance (Table 1). They draw criteria from the reporting and methodological standards of currently recommended appraisal tools, and regularly review and update their methods to reflect new information and changing needs. In addition, they endorse the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the overall quality of a body of evidence [ 27 ]. These groups typically certify or commission systematic reviews that are published in exclusive databases (eg, Cochrane, JBI) or are used to develop government or agency sponsored guidelines or health technology assessments (eg, National Institute for Health and Care Excellence [NICE], Scottish Intercollegiate Guidelines Network [SIGN], Agency for Healthcare Research and Quality [AHRQ]). They offer developers of evidence syntheses various levels of methodological advice, technical and administrative support, and editorial assistance. Use of specific protocols and checklists is required for development teams within these groups, but their online methodological resources are accessible to any potential author.

Guidance for development of evidence syntheses

Notably, Cochrane is the largest single producer of evidence syntheses in biomedical research; however, these only account for 15% of the total [ 28 ]. The World Health Organization requires Cochrane standards be used to develop evidence syntheses that inform their CPGs [ 29 ]. Authors investigating questions of intervention effectiveness in syntheses developed for Cochrane follow the Methodological Expectations of Cochrane Intervention Reviews [ 30 ] and undergo multi-tiered peer review [ 31 , 32 ]. Several empirical evaluations have shown that Cochrane systematic reviews are of higher methodological quality compared with non-Cochrane reviews [ 4 , 7 , 9 , 11 , 14 , 32 – 35 ]. However, some of these assessments have biases: they may be conducted by Cochrane-affiliated authors, and they sometimes use scales and tools developed and used in the Cochrane environment and by its partners. In addition, evidence syntheses published in the Cochrane database are not subject to space or word restrictions, while non-Cochrane syntheses are often limited. As a result, information that may be relevant to the critical appraisal of non-Cochrane reviews is often removed or is relegated to online-only supplements that may not be readily or fully accessible [ 28 ].

Influences on the state of evidence synthesis

Many authors are familiar with the evidence syntheses produced by the leading EBM organizations but can be intimidated by the time and effort necessary to apply their standards. Instead of following their guidance, authors may employ methods that are discouraged or outdated [ 28 ]. Suboptimal methods described in the literature may then be taken up by others. For example, the Newcastle–Ottawa Scale (NOS) is a commonly used tool for appraising non-randomized studies [ 36 ]. Many authors justify their selection of this tool with reference to a publication that describes the unreliability of the NOS and recommends against its use [ 37 ]. Obviously, the authors who cite this report for that purpose have not read it. Authors and peer reviewers have a responsibility to use reliable and accurate methods and not copycat previous citations or substandard work [ 38 , 39 ]. Similar cautions may potentially extend to automation tools. These have concentrated on evidence searching [ 40 ] and selection given how demanding it is for humans to maintain truly up-to-date evidence [ 2 , 41 ]. Cochrane has deployed machine learning to identify randomized controlled trials (RCTs) and studies related to COVID-19, [ 2 , 42 ] but such tools are not yet commonly used [ 43 ]. The routine integration of automation tools in the development of future evidence syntheses should not displace the interpretive part of the process.

Editorials about unreliable or misleading systematic reviews highlight several of the intertwining factors that may contribute to continued publication of unreliable evidence syntheses: shortcomings and inconsistencies of the peer review process, lack of endorsement of current standards on the part of journal editors, the incentive structure of academia, industry influences, publication bias, and the lure of “predatory” journals [ 44 – 48 ]. At this juncture, clarification of the extent to which each of these factors contribute remains speculative, but their impact is likely to be synergistic.

Over time, the generalized acceptance of the conclusions of systematic reviews as incontrovertible has affected trends in the dissemination and uptake of evidence. Reporting of the results of evidence syntheses and recommendations of CPGs has shifted beyond medical journals to press releases and news headlines and, more recently, to the realm of social media and influencers. The lay public and policy makers may depend on these outlets for interpreting evidence syntheses and CPGs. Unfortunately, communication to the general public often reflects intentional or non-intentional misrepresentation or “spin” of the research findings [ 49 – 52 ]. News and social media outlets also tend to reduce conclusions on a body of evidence and recommendations for treatment to binary choices (eg, “do it” versus “don’t do it”) that may be assigned an actionable symbol (eg, red/green traffic lights, smiley/frowning face emoji).

Strategies for improvement

Many authors and peer reviewers are volunteer health care professionals or trainees who lack formal training in evidence synthesis [ 46 , 53 ]. Informing them about research methodology could increase the likelihood they will apply rigorous methods [ 25 , 33 , 45 ]. We tackle this challenge, from both a theoretical and a practical perspective, by offering guidance applicable to any specialty. It is based on recent methodological research that is extensively referenced to promote self-study. However, the information presented is not intended to be a substitute for committed training in evidence synthesis methodology; instead, we hope to inspire our target audience to seek such training. We also hope to inform a broader audience of clinicians and guideline developers influenced by evidence syntheses. Notably, these communities often include the same members who serve in different capacities.

In the following sections, we highlight methodological concepts and practices that may be unfamiliar, problematic, confusing, or controversial. In Part 2, we consider various types of evidence syntheses and the types of research evidence summarized by them. In Part 3, we examine some widely used (and misused) tools for the critical appraisal of systematic reviews and reporting guidelines for evidence syntheses. In Part 4, we discuss how to meet methodological conduct standards applicable to key components of systematic reviews. In Part 5, we describe the merits and caveats of rating the overall certainty of a body of evidence. Finally, in Part 6, we summarize suggested terminology, methods, and tools for development and evaluation of evidence syntheses that reflect current best practices.

Part 2. Types of syntheses and research evidence

A good foundation for the development of evidence syntheses requires an appreciation of their various methodologies and the ability to correctly identify the types of research potentially available for inclusion in the synthesis.

Types of evidence syntheses

Systematic reviews have historically focused on the benefits and harms of interventions; over time, various types of systematic reviews have emerged to address the diverse information needs of clinicians, patients, and policy makers [ 54 ]. Systematic reviews with traditional components have become defined by the different topics they assess (Table 2.1 ). In addition, other distinctive types of evidence syntheses have evolved, including overviews or umbrella reviews, scoping reviews, rapid reviews, and living reviews. The popularity of these has been increasing in recent years [ 55 – 58 ]. A summary of the development, methods, available guidance, and indications for these unique types of evidence syntheses is available in Additional File 2 A.

Types of traditional systematic reviews

Both Cochrane [ 30 , 59 ] and JBI [ 60 ] provide methodologies for many types of evidence syntheses; they describe these with different terminology, but there is obvious overlap (Table 2.2 ). The majority of evidence syntheses published by Cochrane (96%) and JBI (62%) are categorized as intervention reviews. This reflects the earlier development and dissemination of their intervention review methodologies; these remain well-established [ 30 , 59 , 61 ] as both organizations continue to focus on topics related to treatment efficacy and harms. In contrast, intervention reviews represent only about half of the total published in the general medical literature, and several non-intervention review types contribute to a significant proportion of the other half.

Evidence syntheses published by Cochrane and JBI

a Data from https://www.cochranelibrary.com/cdsr/reviews . Accessed 17 Sep 2022

b Data obtained via personal email communication on 18 Sep 2022 with Emilie Francis, editorial assistant, JBI Evidence Synthesis

c Includes the following categories: prevalence, scoping, mixed methods, and realist reviews

d This methodology is not supported in the current version of the JBI Manual for Evidence Synthesis

Types of research evidence

There is consensus on the importance of using multiple study designs in evidence syntheses; at the same time, there is a lack of agreement on methods to identify included study designs. Authors of evidence syntheses may use various taxonomies and associated algorithms to guide selection and/or classification of study designs. These tools differentiate categories of research and apply labels to individual study designs (eg, RCT, cross-sectional). A familiar example is the Design Tree endorsed by the Centre for Evidence-Based Medicine [ 70 ]. Such tools may not be helpful to authors of evidence syntheses for multiple reasons.

Suboptimal levels of agreement and accuracy even among trained methodologists reflect challenges with the application of such tools [ 71 , 72 ]. Problematic distinctions or decision points (eg, experimental or observational, controlled or uncontrolled, prospective or retrospective) and design labels (eg, cohort, case control, uncontrolled trial) have been reported [ 71 ]. The variable application of ambiguous study design labels to non-randomized studies is common, making them especially prone to misclassification [ 73 ]. In addition, study labels do not denote the unique design features that make different types of non-randomized studies susceptible to different biases, including those related to how the data are obtained (eg, clinical trials, disease registries, wearable devices). Given this limitation, it is important to be aware that design labels preclude the accurate assignment of non-randomized studies to a “level of evidence” in traditional hierarchies [ 74 ].

These concerns suggest that available tools and nomenclature used to distinguish types of research evidence may not uniformly apply to biomedical research and non-health fields that utilize evidence syntheses (eg, education, economics) [ 75 , 76 ]. Moreover, primary research reports often do not describe study design or do so incompletely or inaccurately; thus, indexing in PubMed and other databases does not address the potential for misclassification [ 77 ]. Yet proper identification of research evidence has implications for several key components of evidence syntheses. For example, search strategies limited by index terms using design labels or study selection based on labels applied by the authors of primary studies may cause inconsistent or unjustified study inclusions and/or exclusions [ 77 ]. In addition, because risk of bias (RoB) tools consider attributes specific to certain types of studies and study design features, results of these assessments may be invalidated if an inappropriate tool is used. Appropriate classification of studies is also relevant for the selection of a suitable method of synthesis and interpretation of those results.

An alternative to these tools and nomenclature involves application of a few fundamental distinctions that encompass a wide range of research designs and contexts. While these distinctions are not novel, we integrate them into a practical scheme (see Fig. 1) designed to guide authors of evidence syntheses in the basic identification of research evidence. The initial distinction is between primary and secondary studies. Primary studies are then further distinguished by: 1) the type of data reported (qualitative or quantitative); and 2) two defining design features (group or single-case and randomized or non-randomized). The different types of studies and study designs represented in the scheme are described in detail in Additional File 2 B. It is important to conceptualize their methods as complementary as opposed to contrasting or hierarchical [ 78 ]; each offers advantages and disadvantages that determine their appropriateness for answering different kinds of research questions in an evidence synthesis.


Fig. 1. Distinguishing types of research evidence
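To show how compact these distinctions are, the sketch below encodes the scheme as a small data structure. The class and field names are invented here for illustration; the paper defines the distinctions in Fig. 1, not any particular implementation.

```python
# Toy encoding of the study-classification scheme (primary studies only;
# secondary studies sit on the other branch of the initial distinction).
from dataclasses import dataclass
from enum import Enum

class DataType(Enum):
    QUALITATIVE = "qualitative"
    QUANTITATIVE = "quantitative"

@dataclass(frozen=True)
class PrimaryStudy:
    data_type: DataType
    group_design: bool   # True = group design, False = single-case
    randomized: bool     # True = randomized, False = non-randomized

    def describe(self) -> str:
        return ", ".join([
            self.data_type.value,
            "group" if self.group_design else "single-case",
            "randomized" if self.randomized else "non-randomized",
        ])

rct = PrimaryStudy(DataType.QUANTITATIVE, group_design=True, randomized=True)
print(rct.describe())  # quantitative, group, randomized -- e.g., an RCT
```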

Application of these basic distinctions may avoid some of the potential difficulties associated with study design labels and taxonomies. Nevertheless, debatable methodological issues are raised when certain types of research identified in this scheme are included in an evidence synthesis. We briefly highlight those associated with inclusion of non-randomized studies, case reports and series, and a combination of primary and secondary studies.

Non-randomized studies

When investigating an intervention’s effectiveness, it is important for authors to recognize the uncertainty of observed effects reported by studies with high RoB. Results of statistical analyses that include such studies need to be interpreted with caution in order to avoid misleading conclusions [ 74 ]. Review authors may consider excluding randomized studies with high RoB from meta-analyses. Non-randomized studies of intervention (NRSI) are affected by a greater potential range of biases and thus vary more than RCTs in their ability to estimate a causal effect [ 79 ]. If data from NRSI are synthesized in meta-analyses, it is helpful to separately report their summary estimates [ 6 , 74 ].

Nonetheless, certain design features of NRSI (eg, which parts of the study were prospectively designed) may help to distinguish stronger from weaker ones. Cochrane recommends that authors of a review including NRSI focus on relevant study design features when determining eligibility criteria instead of relying on non-informative study design labels [ 79 , 80 ]. This process is facilitated by a study design feature checklist; guidance on using the checklist is included with developers’ description of the tool [ 73 , 74 ]. Authors collect information about these design features during data extraction and then consider it when making final study selection decisions and when performing RoB assessments of the included NRSI.

Case reports and case series

Correctly identified case reports and case series can contribute evidence not well captured by other designs [ 81 ]; in addition, some topics may be limited to a body of evidence that consists primarily of uncontrolled clinical observations. Murad and colleagues offer a framework for how to include case reports and series in an evidence synthesis [ 82 ]. Distinguishing between cohort studies and case series in these syntheses is important, especially for those that rely on evidence from NRSI. Additional data obtained from studies misclassified as case series can potentially increase the confidence in effect estimates. Mathes and Pieper provide authors of evidence syntheses with specific guidance on distinguishing between cohort studies and case series, but emphasize the increased workload involved [ 77 ].

Primary and secondary studies

Synthesis of combined evidence from primary and secondary studies may provide a broad perspective on the entirety of available literature on a topic. This is, in fact, the recommended strategy for scoping reviews that may include a variety of sources of evidence (eg, CPGs, popular media). However, except for scoping reviews, the synthesis of data from primary and secondary studies is discouraged unless there are strong reasons to justify doing so.

Combining primary and secondary sources of evidence is challenging for authors of other types of evidence syntheses for several reasons [ 83 ]. Assessments of RoB for primary and secondary studies are derived from conceptually different tools, thus obfuscating the ability to make an overall RoB assessment of a combination of these study types. In addition, authors who include primary and secondary studies must devise non-standardized methods for synthesis. Note this contrasts with well-established methods available for updating existing evidence syntheses with additional data from new primary studies [ 84 – 86 ]. However, a new review that synthesizes data from primary and secondary studies raises questions of validity and may unintentionally support a biased conclusion because no existing methodological guidance is currently available [ 87 ].

Recommendations

We suggest that journal editors require authors to identify which type of evidence synthesis they are submitting and reference the specific methodology used for its development. This will clarify the research question and methods for peer reviewers and potentially simplify the editorial process. Editors should announce this practice and include it in the instructions to authors. To decrease bias and apply correct methods, authors must also accurately identify the types of research evidence included in their syntheses.

Part 3. Conduct and reporting

The need to develop criteria to assess the rigor of systematic reviews was recognized soon after the EBM movement began to gain international traction [ 88 , 89 ]. Systematic reviews rapidly became popular, but many were very poorly conceived, conducted, and reported. These problems remain highly prevalent [ 23 ] despite development of guidelines and tools to standardize and improve the performance and reporting of evidence syntheses [ 22 , 28 ]. Table 3.1  provides some historical perspective on the evolution of tools developed specifically for the evaluation of systematic reviews, with or without meta-analysis.

Tools specifying standards for systematic reviews with and without meta-analysis

a Currently recommended

b Validated tool for systematic reviews of interventions developed for use by authors of overviews or umbrella reviews

These tools are often interchangeably invoked when referring to the “quality” of an evidence synthesis. However, quality is a vague term that is frequently misused and misunderstood; more precisely, these tools specify different standards for evidence syntheses. Methodological standards address how well a systematic review was designed and performed [ 5 ]. RoB assessments refer to systematic flaws or limitations in the design, conduct, or analysis of research that distort the findings of the review [ 4 ]. Reporting standards help systematic review authors describe the methodology they used and the results of their synthesis in sufficient detail [ 92 ]. It is essential to distinguish between these evaluations: a systematic review may be biased, it may fail to report sufficient information on essential features, or it may exhibit both problems; a thoroughly reported evidence synthesis may still be biased and flawed, while an otherwise unbiased one may suffer from deficient documentation.

We direct attention to the currently recommended tools listed in Table 3.1  but concentrate on AMSTAR-2 (update of AMSTAR [A Measurement Tool to Assess Systematic Reviews]) and ROBIS (Risk of Bias in Systematic Reviews), which evaluate methodological quality and RoB, respectively. For comparison and completeness, we include PRISMA 2020 (update of the 2009 Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement), which offers guidance on reporting standards. The exclusive focus on these three tools is by design; it addresses concerns related to the considerable variability in tools used for the evaluation of systematic reviews [ 28 , 88 , 96 , 97 ]. We highlight the underlying constructs these tools were designed to assess, then describe their components and applications. Their known (or potential) uptake and impact and limitations are also discussed.

Evaluation of conduct

Development

AMSTAR [ 5 ] was in use for a decade prior to the 2017 publication of AMSTAR-2; both provide a broad evaluation of methodological quality of intervention systematic reviews, including flaws arising through poor conduct of the review [ 6 ]. ROBIS, published in 2016, was developed to specifically assess RoB introduced by the conduct of the review; it is applicable to systematic reviews of interventions and several other types of reviews [ 4 ]. Both tools reflect a shift to a domain-based approach as opposed to generic quality checklists. There are a few items unique to each tool; however, similarities between items have been demonstrated [ 98 , 99 ]. AMSTAR-2 and ROBIS are recommended for use by: 1) authors of overviews or umbrella reviews and CPGs to evaluate systematic reviews considered as evidence; 2) authors of methodological research studies to appraise included systematic reviews; and 3) peer reviewers for appraisal of submitted systematic review manuscripts. For authors, these tools may function as teaching aids and inform conduct of their review during its development.

Description

Systematic reviews that include randomized and/or non-randomized studies as evidence can be appraised with AMSTAR-2 and ROBIS. Other characteristics of AMSTAR-2 and ROBIS are summarized in Table 3.2 . Both tools define categories for an overall rating; however, neither tool is intended to generate a total score by simply calculating the number of responses satisfying criteria for individual items [ 4 , 6 ]. AMSTAR-2 focuses on the rigor of a review’s methods irrespective of the specific subject matter. ROBIS places emphasis on a review’s results section; this suggests it may be optimally applied by appraisers with some knowledge of the review’s topic as they may be better equipped to determine if certain procedures (or lack thereof) would impact the validity of a review’s findings [ 98 , 100 ]. Reliability studies show AMSTAR-2 overall confidence ratings strongly correlate with the overall RoB ratings in ROBIS [ 100 , 101 ].

Comparison of AMSTAR-2 and ROBIS

a ROBIS includes an optional first phase to assess the applicability of the review to the research question of interest. The tool may be applicable to other review types in addition to the four specified, although modification of this initial phase will be needed (Personal Communication via email, Penny Whiting, 28 Jan 2022)

b AMSTAR-2 item #9 and #11 require separate responses for RCTs and NRSI

Interrater reliability has been shown to be acceptable for AMSTAR-2 [ 6 , 11 , 102 ] and ROBIS [ 4 , 98 , 103 ] but neither tool has been shown to be superior in this regard [ 100 , 101 , 104 , 105 ]. Overall, variability in reliability for both tools has been reported across items, between pairs of raters, and between centers [ 6 , 100 , 101 , 104 ]. The effects of appraiser experience on the results of AMSTAR-2 and ROBIS require further evaluation [ 101 , 105 ]. Updates to both tools should address items shown to be prone to individual appraisers’ subjective biases and opinions [ 11 , 100 ]; this may involve modifications of the current domains and signaling questions as well as incorporation of methods to make an appraiser’s judgments more explicit. Future revisions of these tools may also consider the addition of standards for aspects of systematic review development currently lacking (eg, rating overall certainty of evidence, [ 99 ] methods for synthesis without meta-analysis [ 105 ]) and removal of items that assess aspects of reporting that are thoroughly evaluated by PRISMA 2020.

Application

A good understanding of what is required to satisfy the standards of AMSTAR-2 and ROBIS involves study of the accompanying guidance documents written by the tools’ developers; these contain detailed descriptions of each item’s standards. In addition, accurate appraisal of a systematic review with either tool requires training. Most experts recommend independent assessment by at least two appraisers with a process for resolving discrepancies as well as procedures to establish interrater reliability, such as pilot testing, a calibration phase or exercise, and development of predefined decision rules [ 35 , 99 – 101 , 103 , 104 , 106 ]. These methods may, to some extent, address the challenges associated with the diversity in methodological training, subject matter expertise, and experience using the tools that are likely to exist among appraisers.
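As a worked example of one way to quantify interrater agreement between two appraisers, the sketch below computes Cohen's kappa over item-level judgments. Neither AMSTAR-2 nor ROBIS mandates a particular statistic, and the ratings here are invented; kappa is simply a common chance-corrected choice.

```python
# Cohen's kappa for two appraisers' item-level ratings:
# kappa = (p_observed - p_expected) / (1 - p_expected)
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Invented ratings of ten items on a "yes" / "partial yes" / "no" scale:
a = ["yes", "yes", "no", "partial yes", "yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "partial yes", "yes", "no", "yes", "partial yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # 0.68 -- substantial but imperfect agreement
```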

The standards of AMSTAR, AMSTAR-2, and ROBIS have been used in many methodological studies and epidemiological investigations. However, the increased publication of overviews or umbrella reviews and CPGs has likely been a greater influence on the widening acceptance of these tools. Critical appraisal of the secondary studies considered evidence is essential to the trustworthiness of both the recommendations of CPGs and the conclusions of overviews. Currently both Cochrane [ 55 ] and JBI [ 107 ] recommend AMSTAR-2 and ROBIS in their guidance for authors of overviews or umbrella reviews. However, ROBIS and AMSTAR-2 were released in 2016 and 2017, respectively; thus, to date, limited data have been reported about the uptake of these tools or which of the two may be preferred [ 21 , 106 ]. Currently, in relation to CPGs, AMSTAR-2 appears to be overwhelmingly popular compared to ROBIS. A Google Scholar search of this topic (search terms “AMSTAR 2 AND clinical practice guidelines” and “ROBIS AND clinical practice guidelines”; 13 May 2022) found 12,700 hits for AMSTAR-2 and 1,280 for ROBIS. The apparent greater appeal of AMSTAR-2 may relate to its longer track record given the original version of the tool was in use for 10 years prior to its update in 2017.

Barriers to the uptake of AMSTAR-2 and ROBIS include the real or perceived time and resources necessary to complete the items they include and appraisers’ confidence in their own ratings [ 104 ]. Reports from comparative studies available to date indicate that appraisers find AMSTAR-2 questions, responses, and guidance to be clearer and simpler compared with ROBIS [ 11 , 101 , 104 , 105 ]. This suggests that for appraisal of intervention systematic reviews, AMSTAR-2 may be a more practical tool than ROBIS, especially for novice appraisers [ 101 , 103 – 105 ]. The unique characteristics of each tool, as well as their potential advantages and disadvantages, should be taken into consideration when deciding which tool should be used for an appraisal of a systematic review. In addition, the choice of one or the other may depend on how the results of an appraisal will be used; for example, a peer reviewer’s appraisal of a single manuscript versus an appraisal of multiple systematic reviews in an overview or umbrella review, CPG, or systematic methodological study.

Authors of overviews and CPGs report results of AMSTAR-2 and ROBIS appraisals for each of the systematic reviews they include as evidence. Ideally, an independent judgment of their appraisals can be made by the end users of overviews and CPGs; however, most stakeholders, including clinicians, are unlikely to have a sophisticated understanding of these tools. Nevertheless, they should at least be aware that AMSTAR-2 and ROBIS ratings reported in overviews and CPGs may be inaccurate because the tools are not applied as intended by their developers. This can result from inadequate training of the overview or CPG authors who perform the appraisals, or from modifications of the appraisal tools imposed by them. The potential variability in overall confidence and RoB ratings highlights why appraisers applying these tools need to support their judgments with explicit documentation; this allows readers to judge for themselves whether they agree with the criteria used by appraisers [ 4 , 108 ]. When these judgments are explicit, the underlying rationale used when applying these tools can be assessed [ 109 ].

Theoretically, we would expect an association of AMSTAR-2 with improved methodological rigor and an association of ROBIS with lower RoB in recent systematic reviews compared to those published before 2017. To our knowledge, this has not yet been demonstrated; however, like reports about the actual uptake of these tools, time will tell. Additional data on user experience is also needed to further elucidate the practical challenges and methodological nuances encountered with the application of these tools. This information could potentially inform the creation of unifying criteria to guide and standardize the appraisal of evidence syntheses [ 109 ].

Evaluation of reporting

Complete reporting is essential for users to establish the trustworthiness and applicability of a systematic review’s findings. Efforts to standardize and improve the reporting of systematic reviews resulted in the 2009 publication of the PRISMA statement [ 92 ] with its accompanying explanation and elaboration document [ 110 ]. This guideline was designed to help authors prepare a complete and transparent report of their systematic review. In addition, adherence to PRISMA is often used to evaluate the thoroughness of reporting of published systematic reviews [ 111 ]. The updated version, PRISMA 2020 [ 93 ], and its guidance document [ 112 ] were published in 2021. Items on the original and updated versions of PRISMA are organized by the six basic review components they address (title, abstract, introduction, methods, results, discussion). The PRISMA 2020 update is a considerably expanded version of the original; it includes standards and examples for the 27 original and 13 additional reporting items that capture methodological advances and may enhance the replicability of reviews [ 113 ].

The original PRISMA statement fostered the development of various PRISMA extensions (Table 3.3 ). These include reporting guidance for scoping reviews and reviews of diagnostic test accuracy and for intervention reviews that report on the following: harms outcomes, equity issues, the effects of acupuncture, the results of network meta-analyses and analyses of individual participant data. Detailed reporting guidance for specific systematic review components (abstracts, protocols, literature searches) is also available.

PRISMA extensions

PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses

a Note the abstract reporting checklist is now incorporated into PRISMA 2020 [ 93 ]

Uptake and impact

The 2009 PRISMA standards [ 92 ] for reporting have been widely endorsed by authors, journals, and EBM-related organizations. We anticipate the same for PRISMA 2020 [ 93 ] given its co-publication in multiple high-impact journals. However, to date, there is a lack of strong evidence for an association between improved systematic review reporting and endorsement of PRISMA 2009 standards [ 43 , 111 ]. Most journals require that a PRISMA checklist accompany submissions of systematic review manuscripts. However, the accuracy of information presented on these self-reported checklists is not necessarily verified. It remains unclear which strategies (eg, authors’ self-report of checklists, peer reviewer checks) might improve adherence to the PRISMA reporting standards; in addition, the feasibility of any potentially effective strategies must be taken into consideration given the structure and limitations of current research and publication practices [ 124 ].

Pitfalls and limitations of PRISMA, AMSTAR-2, and ROBIS

Misunderstanding of the roles of these tools and their misapplication may be widespread problems. PRISMA 2020 is a reporting guideline that is most beneficial if consulted when developing a review as opposed to merely completing a checklist when submitting to a journal; at that point, the review is finished, with good or bad methodological choices. PRISMA checklists evaluate how completely an element of review conduct was reported; they do not evaluate the caliber of conduct or performance of a review. Thus, review authors and readers should not think that a rigorous systematic review can be produced by simply following the PRISMA 2020 guidelines. Similarly, it is important to recognize that AMSTAR-2 and ROBIS are tools to evaluate the conduct of a review but do not substitute for conceptual methodological guidance. In addition, they are not intended to be simple checklists. In fact, they have the potential for misuse or abuse if applied as such; for example, by calculating a total score to make a judgment about a review’s overall confidence or RoB. Proper selection of a response for the individual items on AMSTAR-2 and ROBIS requires training or at least reference to their accompanying guidance documents.

Not surprisingly, it has been shown that compliance with the PRISMA checklist is not necessarily associated with satisfying the standards of ROBIS [ 125 ]. AMSTAR-2 and ROBIS were not available when PRISMA 2009 was developed; however, they were considered in the development of PRISMA 2020 [ 113 ]. Therefore, future studies may show a positive relationship between fulfillment of PRISMA 2020 standards for reporting and meeting the standards of tools evaluating methodological quality and RoB.

Choice of an appropriate tool for the evaluation of a systematic review first involves identification of the underlying construct to be assessed. For systematic reviews of interventions, recommended tools include AMSTAR-2 and ROBIS for appraisal of conduct and PRISMA 2020 for completeness of reporting. All three tools were developed rigorously and provide easily accessible and detailed user guidance, which is necessary for their proper application and interpretation. When considering a manuscript for publication, training in these tools can sensitize peer reviewers and editors to major issues that may affect the review’s trustworthiness and completeness of reporting. Judgment of the overall certainty of a body of evidence and formulation of recommendations rely, in part, on AMSTAR-2 or ROBIS appraisals of systematic reviews. Therefore, training on the application of these tools is essential for authors of overviews and developers of CPGs. Peer reviewers and editors considering an overview or CPG for publication must hold their authors to a high standard of transparency regarding both the conduct and reporting of these appraisals.

Part 4. Meeting conduct standards

Many authors, peer reviewers, and editors erroneously equate fulfillment of the items on the PRISMA checklist with superior methodological rigor. For direction on methodology, we refer them to available resources that provide comprehensive conceptual guidance [ 59 , 60 ] as well as primers with basic step-by-step instructions [ 1 , 126 , 127 ]. This section is intended to complement study of such resources by facilitating use of AMSTAR-2 and ROBIS, tools specifically developed to evaluate methodological rigor of systematic reviews. These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [ 88 , 96 ].

To enable their uptake, Table 4.1  links review components to the corresponding appraisal tool items. Expectations of AMSTAR-2 and ROBIS are concisely stated, and reasoning provided.

Systematic review components linked to appraisal with AMSTAR-2 and ROBIS a

CoI conflict of interest, MA meta-analysis, NA not addressed, PICO participant, intervention, comparison, outcome, PRISMA-P Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols, RoB risk of bias

a Components shown in bold are chosen for elaboration in Part 4 for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors; and/or 2) the component is evaluated by standards of an AMSTAR-2 “critical” domain

b Critical domains of AMSTAR-2 are indicated by *

Issues involved in meeting the standards for seven review components (identified in bold in Table 4.1 ) are addressed in detail. These were chosen for elaboration for one (or both) of two reasons: 1) the component has been identified as potentially problematic for systematic review authors based on consistent reports of their frequent AMSTAR-2 or ROBIS deficiencies [ 9 , 11 , 15 , 88 , 128 , 129 ]; and/or 2) the review component is judged by standards of an AMSTAR-2 “critical” domain. These have the greatest implications for how a systematic review will be appraised: if standards for any one of these critical domains are not met, the review is rated as having “critically low confidence.”

Research question

Specific and unambiguous research questions may have more value for reviews that deal with hypothesis testing. Mnemonics for the various elements of research questions are suggested by JBI and Cochrane (Table 2.1 ). These prompt authors to consider the specialized methods involved for developing different types of systematic reviews; however, while inclusion of the suggested elements may make a research question compatible with a particular review type’s methods, it does not necessarily make the question appropriate. Table 4.2  lists acronyms that may aid in developing the research question. They include overlapping concepts of importance in this time of proliferating reviews of uncertain value [ 130 ]. If these issues are not prospectively contemplated, systematic review authors may establish an overly broad scope, or develop a runaway scope that allows them to stray from predefined choices relating to key comparisons and outcomes.

Research question development

a Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Hulley SB, Cummings SR, Browner WS, editors. Designing clinical research: an epidemiological approach; 4th edn. Lippincott Williams & Wilkins; 2007. p. 14–22

b Doran, GT. There’s a S.M.A.R.T. way to write management’s goals and objectives. Manage Rev. 1981;70:35-6.

c Johnson BT, Hennessy EA. Systematic reviews and meta-analyses in the health sciences: best practice methods for research syntheses. Soc Sci Med. 2019;233:237–51

Once a research question is established, searching on registry sites and databases for existing systematic reviews addressing the same or a similar topic is necessary in order to avoid contributing to research waste [ 131 ]. Repeating an existing systematic review must be justified, for example, if previous reviews are out of date or methodologically flawed. A full discussion on replication of intervention systematic reviews, including a consensus checklist, can be found in the work of Tugwell and colleagues [ 84 ].

Protocol development is considered a core component of systematic reviews [ 125 , 126 , 132 ]. Review protocols may allow researchers to plan and anticipate potential issues, assess validity of methods, prevent arbitrary decision-making, and minimize bias that can be introduced by the conduct of the review. Registration of a protocol that allows public access promotes transparency of the systematic review’s methods and processes and reduces the potential for duplication [ 132 ]. Thinking early and carefully about all the steps of a systematic review is pragmatic and logical and may mitigate the influence of the authors’ prior knowledge of the evidence [ 133 ]. In addition, the protocol stage is when the scope of the review can be carefully considered by authors, reviewers, and editors; this may help to avoid production of overly ambitious reviews that include excessive numbers of comparisons and outcomes or are undisciplined in their study selection.

An association between published prospective protocols and attainment of AMSTAR standards in systematic reviews has been reported [ 134 ]. However, completeness of reporting does not seem to differ between reviews with a protocol and those without one [ 135 ]. PRISMA-P [ 116 ] and its accompanying elaboration and explanation document [ 136 ] can be used to guide and assess the reporting of protocols. A final version of the review should fully describe any protocol deviations. Peer reviewers may compare the submitted manuscript with any available pre-registered protocol; this is required if AMSTAR-2 or ROBIS are used for critical appraisal.

There are multiple options for the recording of protocols (Table 4.3 ). Some journals will peer review and publish protocols. In addition, many online sites offer date-stamped and publicly accessible protocol registration. Some of these are exclusively for protocols of evidence syntheses; others are less restrictive and offer researchers the capacity for data storage, sharing, and other workflow features. These sites document protocol details to varying extents and have different requirements [ 137 ]. The most popular site for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), for example, only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews. Of note, PROSPERO requires that authors register their review protocols prior to any data extraction [ 133 , 138 ]. The electronic records of most of these registry sites allow authors to update their protocols and facilitate transparent tracking of protocol changes, which are not unexpected during the progress of the review [ 139 ].

Options for protocol registration of evidence syntheses

a Authors are advised to contact their target journal regarding submission of systematic review protocols

b Registration is restricted to approved review projects

c The JBI registry lists review projects currently underway by JBI-affiliated entities. These records include a review’s title, primary author, research question, and PICO elements. JBI recommends that authors register eligible protocols with PROSPERO

d See Pieper and Rombey [ 137 ] for detailed characteristics of these five registries

e See Pieper and Rombey [ 137 ] for other systematic review data repository options

Study design inclusion

For most systematic reviews, broad inclusion of study designs is recommended [ 126 ]. This may allow comparison of results between contrasting study design types [ 126 ]. Certain study designs may be considered preferable depending on the type of review and nature of the research question. However, prevailing stereotypes about what each study design does best may not be accurate. For example, in systematic reviews of interventions, randomized designs are typically thought to answer highly specific questions while non-randomized designs often are expected to reveal greater information about harms or real-world evidence [ 126 , 140 , 141 ]. This may be a false distinction; randomized trials may be pragmatic [ 142 ], they may offer important (and more unbiased) information on harms [ 143 ], and data from non-randomized trials may not necessarily be more real-world-oriented [ 144 ].

Moreover, there may not be any available evidence reported by RCTs for certain research questions; in some cases, there may not be any RCTs or NRSI. When the available evidence is limited to case reports and case series, it is not possible to test hypotheses nor provide descriptive estimates or associations; however, a systematic review of these studies can still offer important insights [ 81 , 145 ]. When authors anticipate that limited evidence of any kind may be available to inform their research questions, a scoping review can be considered. Alternatively, decisions regarding inclusion of indirect as opposed to direct evidence can be addressed during protocol development [ 146 ]. Including indirect evidence at an early stage of intervention systematic review development allows authors to decide if such studies offer any additional and/or different understanding of treatment effects for their population or comparison of interest. Issues of indirectness of included studies are accounted for later in the process, during determination of the overall certainty of evidence (see Part 5 for details).

Evidence search

Both AMSTAR-2 and ROBIS require systematic and comprehensive searches for evidence. This is essential for any systematic review. Both tools discourage search restrictions based on language and publication source. Given increasing globalism in health care, the practice of including English-only literature should be avoided [ 126 ]. There are many examples in which language bias (different results in studies published in different languages) has been documented [ 147 , 148 ]. This does not mean that all literature, in all languages, is equally trustworthy [ 148 ]; however, the only way to formally probe for the potential of such biases is to consider all languages in the initial search. The gray literature and a search of trials registers may also reveal important details about topics that would otherwise be missed [ 149 – 151 ]. Again, inclusiveness will allow review authors to investigate whether results differ in gray literature and trials registers [ 41 , 151 – 153 ].

Authors should make every attempt to complete their review within one year, as that is the likely viable life of a search. If that is not possible, the search should be updated close to the time of completion [ 154 ]. Some topics may warrant even less of a delay; in rapidly changing fields (as in the case of the COVID-19 pandemic), even one month may radically change the available evidence.

Excluded studies

AMSTAR-2 requires authors to provide references for any studies excluded at the full text phase of study selection along with reasons for exclusion; this allows readers to feel confident that all relevant literature has been considered for inclusion and that exclusions are defensible.

Risk of bias assessment of included studies

The design of the studies included in a systematic review (eg, RCT, cohort, case series) should not be equated with appraisal of their RoB. To meet AMSTAR-2 and ROBIS standards, systematic review authors must examine RoB issues specific to the design of each primary study they include as evidence. It is unlikely that a single RoB appraisal tool will be suitable for all research designs. In addition to tools for randomized and non-randomized studies, specific tools are available for evaluation of RoB in case reports and case series [ 82 ] and single-case experimental designs [ 155 , 156 ]. Note the RoB tools selected must meet the standards of the appraisal tool used to judge the conduct of the review. For example, AMSTAR-2 identifies four sources of bias specific to RCTs and NRSI that must be addressed by the RoB tool(s) chosen by the review authors. The Cochrane RoB 2 tool [ 157 ] for RCTs and ROBINS-I [ 158 ] for NRSI meet the AMSTAR-2 standards for RoB assessment. Appraisers on the review team should not modify any RoB tool without complete transparency and acknowledgment that they have invalidated the interpretation of the tool as intended by its developers [ 159 ]. Conduct of RoB assessments is not addressed by AMSTAR-2; to meet ROBIS standards, two independent reviewers should complete RoB assessments of included primary studies.

Implications of the RoB assessments must be explicitly discussed and considered in the conclusions of the review. Discussion of the overall RoB of included studies may consider the weight of the studies at high RoB, the importance of the sources of bias in the studies being summarized, and if their importance differs in relationship to the outcomes reported. If a meta-analysis is performed, serious concerns for RoB of individual studies should be accounted for in these results as well. If the results of the meta-analysis for a specific outcome change when studies at high RoB are excluded, readers will have a more accurate understanding of this body of evidence. However, while investigating the potential impact of specific biases is a useful exercise, it is important to avoid over-interpretation, especially when there are sparse data.

Synthesis methods for quantitative data

Syntheses of quantitative data reported by primary studies are broadly categorized as one of two types: meta-analysis, and synthesis without meta-analysis (Table 4.4 ). Before deciding on one of these methods, authors should seek methodological advice about whether reported data can be transformed or used in other ways to provide a consistent effect measure across studies [ 160 , 161 ].

Common methods for quantitative synthesis

CI confidence interval (or credible interval, if analysis is done in Bayesian framework)

a See text for descriptions of the types of data combined in each of these approaches

b See Additional File 4  for guidance on the structure and presentation of forest plots

c General approach is similar to aggregate data meta-analysis but there are substantial differences relating to data collection and checking and analysis [ 162 ]. This approach to syntheses is applicable to intervention, diagnostic, and prognostic systematic reviews [ 163 ]

d Examples include meta-regression, hierarchical and multivariate approaches [ 164 ]

e In-depth guidance and illustrations of these methods are provided in Chapter 12 of the Cochrane Handbook [ 160 ]

Meta-analysis

Systematic reviews that employ meta-analysis should not be referred to simply as “meta-analyses.” The term meta-analysis strictly refers to a specific statistical technique used when study effect estimates and their variances are available, yielding a quantitative summary of results. In general, methods for meta-analysis involve use of a weighted average of effect estimates from two or more studies. If considered carefully, meta-analysis increases the precision of the estimated magnitude of effect and can offer useful insights about heterogeneity and estimates of effects. We refer to standard references for a thorough introduction and formal training [ 165 – 167 ].
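
To make the weighted-average idea concrete, here is a minimal Python sketch of inverse-variance pooling, with both a fixed-effect estimate and a DerSimonian–Laird random-effects estimate. The data are hypothetical and the code is illustrative only; it is no substitute for the formal training referenced above.

```python
# Minimal sketch of inverse-variance meta-analysis (illustrative only).
# Effects are assumed to be on a scale where pooling is valid
# (eg, log odds ratios); all numbers below are hypothetical.
import numpy as np

def inverse_variance_meta(effects, ses):
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)   # weighted average
    # DerSimonian-Laird estimate of between-study variance (tau^2)
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_re = 1.0 / (ses**2 + tau2)              # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return fixed, pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical studies reporting log risk ratios and standard errors
print(inverse_variance_meta([-0.22, -0.10, -0.35], [0.10, 0.15, 0.20]))
```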

There are three common approaches to meta-analysis in current health care–related systematic reviews (Table 4.4 ). Aggregate data meta-analysis is the most familiar to authors of evidence syntheses and their end users. This standard meta-analysis combines data on effect estimates reported by studies that investigate similar research questions involving direct comparisons of an intervention and comparator. Results of these analyses provide a single summary intervention effect estimate. If the included studies in a systematic review measure an outcome differently, their reported results may be transformed to make them comparable [ 161 ]. Forest plots visually present essential information about the individual studies and the overall pooled analysis (see Additional File 4  for details).
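
As an example of the kind of transformation mentioned above, a study reporting an odds ratio with a 95% confidence interval can be converted to a log odds ratio and standard error suitable for pooling. The sketch below uses hypothetical numbers and assumes the interval was computed symmetrically on the log scale, as is conventional.

```python
# Recover a log odds ratio and its standard error from a reported
# OR with a 95% CI (hypothetical numbers; assumes a symmetric CI
# on the log scale).
import math

def log_or_and_se(or_value, ci_low, ci_high):
    log_or = math.log(or_value)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)  # CI width / 2z
    return log_or, se

print(log_or_and_se(0.80, 0.65, 0.98))
```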

Less familiar and more challenging meta-analytical approaches used in secondary research include individual participant data (IPD) and network meta-analyses (NMA); PRISMA extensions provide reporting guidelines for both [ 117 , 118 ]. In IPD, the raw data on each participant from each eligible study are re-analyzed as opposed to the study-level data analyzed in aggregate data meta-analyses [ 168 ]. This may offer advantages, including the potential for limiting concerns about bias and allowing more robust analyses [ 163 ]. As suggested by the description in Table 4.4 , NMA is a complex statistical approach. It combines aggregate data [ 169 ] or IPD [ 170 ] for effect estimates from direct and indirect comparisons reported in two or more studies of three or more interventions. This makes it a potentially powerful statistical tool; while multiple interventions are typically available to treat a condition, few have been evaluated in head-to-head trials [ 171 ]. Both IPD and NMA facilitate a broader scope, and potentially provide more reliable and/or detailed results; however, compared with standard aggregate data meta-analyses, their methods are more complicated, time-consuming, and resource-intensive, and they have their own biases, so one needs sufficient funding, technical expertise, and preparation to employ them successfully [ 41 , 172 , 173 ].

Several items in AMSTAR-2 and ROBIS address meta-analysis; thus, understanding the strengths, weaknesses, assumptions, and limitations of methods for meta-analyses is important. According to the standards of both tools, plans for a meta-analysis must be addressed in the review protocol, including reasoning, description of the type of quantitative data to be synthesized, and the methods planned for combining the data. This should not consist of stock statements describing conventional meta-analysis techniques; rather, authors are expected to anticipate issues specific to their research questions. Concern for the lack of training in meta-analysis methods among systematic review authors cannot be overstated. For those with training, the use of popular software (eg, RevMan [ 174 ], MetaXL [ 175 ], JBI SUMARI [ 176 ]) may facilitate exploration of these methods; however, such programs cannot substitute for the accurate interpretation of the results of meta-analyses, especially for more complex meta-analytical approaches.

Synthesis without meta-analysis

There are varied reasons a meta-analysis may not be appropriate or desirable [ 160 , 161 ]. Syntheses that informally use statistical methods other than meta-analysis are variably referred to as descriptive, narrative, or qualitative syntheses or summaries; these terms are also applied to syntheses that make no attempt to statistically combine data from individual studies. However, use of such imprecise terminology is discouraged; in order to fully explore the results of any type of synthesis, some narration or description is needed to supplement the data visually presented in tabular or graphic forms [ 63 , 177 ]. In addition, the term “qualitative synthesis” is easily confused with a synthesis of qualitative data in a qualitative or mixed methods review. “Synthesis without meta-analysis” is currently the preferred description of other ways to combine quantitative data from two or more studies. Use of this specific terminology when referring to these types of syntheses also implies the application of formal methods (Table 4.4 ).

Methods for syntheses without meta-analysis involve structured presentation of the data in tables and plots. In comparison to narrative descriptions of each study, these are designed to more effectively and transparently show patterns and convey detailed information about the data; they also allow informal exploration of heterogeneity [ 178 ]. In addition, acceptable quantitative statistical methods (Table 4.4 ) are formally applied; however, it is important to recognize these methods have significant limitations for the interpretation of the effectiveness of an intervention [ 160 ]. Nevertheless, when meta-analysis is not possible, the application of these methods is less prone to bias compared with an unstructured narrative description of included studies [ 178 , 179 ].

Vote counting is commonly used in systematic reviews and involves a tally of studies reporting results that meet some threshold of importance applied by review authors. Until recently, it has not typically been identified as a method for synthesis without meta-analysis. Guidance on an acceptable vote counting method based on direction of effect is currently available [ 160 ] and should be used instead of narrative descriptions of such results (eg, “more than half the studies showed improvement”; “only a few studies reported adverse effects”; “7 out of 10 studies favored the intervention”). Unacceptable methods include vote counting by statistical significance or magnitude of effect or some subjective rule applied by the authors.
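
As a hedged illustration of the acceptable approach, the sketch below tallies hypothetical studies by direction of effect and applies an exact two-sided sign (binomial) test. It is a toy example, not a prescribed implementation of the cited guidance.

```python
# Toy vote count by direction of effect with an exact two-sided sign test.
# +1 = study effect favors the intervention, -1 = favors the comparator.
# The directions below are hypothetical.
from math import comb

directions = [+1, +1, -1, +1, +1, +1, -1, +1]
n = len(directions)
k = sum(d > 0 for d in directions)
m = max(k, n - k)  # exploits the symmetry of Binomial(n, 0.5)
p_two_sided = min(1.0, 2 * sum(comb(n, i) for i in range(m, n + 1)) * 0.5**n)
print(f"{k}/{n} studies favor the intervention; sign-test p = {p_two_sided:.3f}")
```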

AMSTAR-2 and ROBIS standards do not explicitly address conduct of syntheses without meta-analysis, although AMSTAR-2 items 13 and 14 might be considered relevant. Guidance for the complete reporting of syntheses without meta-analysis for systematic reviews of interventions is available in the Synthesis without Meta-analysis (SWiM) guideline [ 180 ] and methodological guidance is available in the Cochrane Handbook [ 160 , 181 ].

Familiarity with AMSTAR-2 and ROBIS makes sense for authors of systematic reviews as these appraisal tools will be used to judge their work; however, training is necessary for authors to truly appreciate and apply methodological rigor. Moreover, judgment of the potential contribution of a systematic review to the current knowledge base goes beyond meeting the standards of AMSTAR-2 and ROBIS. These tools do not explicitly address some crucial concepts involved in the development of a systematic review; this further emphasizes the need for author training.

We recommend that systematic review authors incorporate specific practices or exercises when formulating a research question at the protocol stage. These should be designed to raise the review team’s awareness of how to prevent research and resource waste [ 84 , 130 ] and to stimulate careful contemplation of the scope of the review [ 30 ]. Authors’ training should also focus on justifiably choosing a formal method for the synthesis of quantitative and/or qualitative data from primary research; both types of data require specific expertise. For typical reviews that involve syntheses of quantitative data, statistical expertise is necessary, initially for decisions about appropriate methods [ 160 , 161 ] and then to inform any meta-analyses [ 167 ] or other statistical methods applied [ 160 ].

Part 5. Rating overall certainty of evidence

Report of an overall certainty of evidence assessment in a systematic review is an important new reporting standard of the updated PRISMA 2020 guidelines [ 93 ]. Systematic review authors are well acquainted with assessing RoB in individual primary studies, but much less familiar with assessment of overall certainty across an entire body of evidence. Yet a reliable way to evaluate this broader concept is now recognized as a vital part of interpreting the evidence.

Historical systems for rating evidence are based on study design and usually involve hierarchical levels or classes of evidence that use numbers and/or letters to designate the level/class. These systems were endorsed by various EBM-related organizations. Professional societies and regulatory groups then widely adopted them, often with modifications for application to the available primary research base in specific clinical areas. In 2002, a report issued by the AHRQ identified 40 systems to rate quality of a body of evidence [ 182 ]. A critical appraisal of systems used by prominent health care organizations published in 2004 revealed limitations in sensibility, reproducibility, applicability to different questions, and usability to different end users [ 183 ]. Persistent use of hierarchical rating schemes to describe overall quality continues to complicate the interpretation of evidence. This is indicated by recent reports of poor interpretability of systematic review results by readers [ 184 – 186 ] and misleading interpretations of the evidence related to the “spin” systematic review authors may put on their conclusions [ 50 , 187 ].

Recognition of the shortcomings of hierarchical rating systems raised concerns that misleading clinical recommendations could result even if based on a rigorous systematic review. In addition, the number and variability of these systems were considered obstacles to quick and accurate interpretations of the evidence by clinicians, patients, and policymakers [ 183 ]. These issues contributed to the development of the GRADE approach. An international working group, which continues to actively evaluate and refine it, first introduced GRADE in 2004 [ 188 ]. Currently, more than 110 organizations from 19 countries around the world have endorsed or are using GRADE [ 189 ].

GRADE approach to rating overall certainty

GRADE offers a consistent and sensible approach for two separate processes: rating the overall certainty of a body of evidence and the strength of recommendations. The former is the expected conclusion of a systematic review, while the latter is pertinent to the development of CPGs. As such, GRADE provides a mechanism to bridge the gap from evidence synthesis to application of the evidence for informed clinical decision-making [ 27 , 190 ]. We briefly examine the GRADE approach but only as it applies to rating overall certainty of evidence in systematic reviews.

In GRADE, use of “certainty” of a body of evidence is preferred over the term “quality.” [ 191 ] Certainty refers to the level of confidence systematic review authors have that, for each outcome, an effect estimate represents the true effect. The GRADE approach to rating confidence in estimates begins with identifying the study type (RCT or NRSI) and then systematically considers criteria to rate the certainty of evidence up or down (Table 5.1 ).

GRADE criteria for rating certainty of evidence

a Applies to randomized studies

b Applies to non-randomized studies

This process results in assignment of one of the four GRADE certainty ratings to each outcome; these are clearly conveyed with the use of basic interpretation symbols (Table 5.2 ) [ 192 ]. Notably, when multiple outcomes are reported in a systematic review, each outcome is assigned a unique certainty rating; thus different levels of certainty may exist in the body of evidence being examined.
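
The toy Python sketch below only illustrates the bookkeeping implied by this process, namely the starting level by study type and movement along the four-level scale. Actual GRADE ratings are structured qualitative judgments, not arithmetic, and every name in the sketch is illustrative.

```python
# Toy illustration of GRADE bookkeeping: RCT evidence starts at "high",
# NRSI at "low"; criteria can rate certainty down or up (Table 5.1).
# Real GRADE judgments are qualitative; this is not a GRADE calculator.
RATINGS = ["very low", "low", "moderate", "high"]

def toy_certainty(randomized: bool, downgrades: int, upgrades: int) -> str:
    start = 3 if randomized else 1
    level = max(0, min(3, start - downgrades + upgrades))
    return RATINGS[level]

# Hypothetical outcome: RCT evidence downgraded for RoB and imprecision
print(toy_certainty(randomized=True, downgrades=2, upgrades=0))  # -> "low"
```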

GRADE certainty ratings and their interpretation symbols a

a From the GRADE Handbook [ 192 ]

GRADE’s developers acknowledge some subjectivity is involved in this process [ 193 ]. In addition, they emphasize that both the criteria for rating evidence up and down (Table 5.1 ) as well as the four overall certainty ratings (Table 5.2 ) reflect a continuum as opposed to discrete categories [ 194 ]. Consequently, deciding whether a study falls above or below the threshold for rating up or down may not be straightforward, and preliminary overall certainty ratings may be intermediate (eg, between low and moderate). Thus, the proper application of GRADE requires systematic review authors to take an overall view of the body of evidence and explicitly describe the rationale for their final ratings.

Advantages of GRADE

Outcomes important to the individuals who experience the problem of interest maintain a prominent role throughout the GRADE process [ 191 ]. These outcomes must inform the research questions (eg, PICO [population, intervention, comparator, outcome]) that are specified a priori in a systematic review protocol. Evidence for these outcomes is then investigated and each critical or important outcome is ultimately assigned a certainty of evidence as the end point of the review. Notably, limitations of the included studies have an impact at the outcome level. Ultimately, the certainty ratings for each outcome reported in a systematic review are considered by guideline panels. They use a different process to formulate recommendations that involves assessment of the evidence across outcomes [ 201 ]. It is beyond our scope to describe the GRADE process for formulating recommendations; however, it is critical to understand how these two outcome-centric concepts of certainty of evidence in the GRADE framework are related and distinguished. An in-depth illustration using examples from recently published evidence syntheses and CPGs is provided in Additional File 5 A (Table AF5A-1).

The GRADE approach is applicable irrespective of whether the certainty of the primary research evidence is high or very low; in some circumstances, indirect evidence of higher certainty may be considered if direct evidence is unavailable or of low certainty [ 27 ]. In fact, most interventions and outcomes in medicine have low or very low certainty of evidence based on GRADE and there seems to be no major improvement over time [ 202 , 203 ]. This is still a very important (even if sobering) realization for calibrating our understanding of medical evidence. A major appeal of the GRADE approach is that it offers a common framework that enables authors of evidence syntheses to make complex judgments about evidence certainty and to convey these with unambiguous terminology. This prevents some common mistakes made by review authors, including overstating results (or under-reporting harms) [ 187 ] and making recommendations for treatment. This is illustrated in Table AF5A-2 (Additional File 5 A), which compares the concluding statements made about overall certainty in a systematic review with and without application of the GRADE approach.

Theoretically, application of GRADE should improve consistency of judgments about certainty of evidence, both between authors and across systematic reviews. In one empirical evaluation conducted by the GRADE Working Group, interrater reliability of two individual raters assessing certainty of the evidence for a specific outcome increased from ~ 0.3 without using GRADE to ~ 0.7 by using GRADE [ 204 ]. However, others report variable agreement among those experienced in GRADE assessments of evidence certainty [ 190 ]. Like any other tool, GRADE requires training in order to be properly applied. The intricacies of the GRADE approach and the necessary subjectivity involved suggest that improving agreement may require strict rules for its application; alternatively, use of general guidance and consensus among review authors may result in less consistency but provide important information for the end user [ 190 ].
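
For readers unfamiliar with the statistic behind such interrater figures, the sketch below computes Cohen's kappa for two hypothetical raters' certainty ratings across six outcomes; the data are invented for illustration.

```python
# Cohen's kappa for two raters' GRADE certainty ratings (hypothetical data).
from sklearn.metrics import cohen_kappa_score

rater1 = ["high", "moderate", "low", "low", "moderate", "very low"]
rater2 = ["high", "moderate", "moderate", "low", "moderate", "low"]
print(cohen_kappa_score(rater1, rater2))  # 1.0 would indicate perfect agreement
```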

GRADE caveats

Simply invoking “the GRADE approach” does not automatically ensure GRADE methods were employed by authors of a systematic review (or developers of a CPG). Table 5.3 lists the criteria the GRADE working group has established for this purpose. These criteria highlight the specific terminology and methods that apply to rating the certainty of evidence for outcomes reported in a systematic review [ 191 ], which is different from rating overall certainty across outcomes considered in the formulation of recommendations [ 205 ]. Modifications of standard GRADE methods and terminology are discouraged as these may detract from GRADE’s objectives to minimize conceptual confusion and maximize clear communication [ 206 ].

Criteria for using GRADE in a systematic review a

a Adapted from the GRADE working group [ 206 ]; this list does not contain the additional criteria that apply to the development of a clinical practice guideline

Nevertheless, GRADE is prone to misapplications [ 207 , 208 ], which can distort a systematic review’s conclusions about the certainty of evidence. Systematic review authors without proper GRADE training are likely to misinterpret the terms “quality” and “grade” and to misunderstand the constructs assessed by GRADE versus other appraisal tools. For example, review authors may reference the standard GRADE certainty ratings (Table 5.2 ) to describe evidence for their outcome(s) of interest. However, these ratings are invalidated if authors omit or inadequately perform RoB evaluations of each included primary study. Such deficiencies in RoB assessments are unacceptable but not uncommon, as reported in methodological studies of systematic reviews and overviews [ 104 , 186 , 209 , 210 ]. GRADE ratings are also invalidated if review authors do not formally address and report on the other criteria (Table 5.1 ) necessary for a GRADE certainty rating.

Other caveats pertain to application of a GRADE certainty of evidence rating in various types of evidence syntheses. Current adaptations of GRADE are described in Additional File 5 B and included on Table 6.3 , which is introduced in the next section.

Concise Guide to best practices for evidence syntheses, version 1.0 a

AMSTAR A MeaSurement Tool to Assess Systematic Reviews, CASP Critical Appraisal Skills Programme, CERQual Confidence in the Evidence from Reviews of Qualitative research, ConQual Establishing Confidence in the output of Qualitative research synthesis, COSMIN COnsensus-based Standards for the selection of health Measurement Instruments, DTA diagnostic test accuracy, eMERGe meta-ethnography reporting guidance, ENTREQ enhancing transparency in reporting the synthesis of qualitative research, GRADE Grading of Recommendations Assessment, Development and Evaluation, MA meta-analysis, NRSI non-randomized studies of interventions, P protocol, PRIOR Preferred Reporting Items for Overviews of Reviews, PRISMA Preferred Reporting Items for Systematic Reviews and Meta-Analyses, PROBAST Prediction model Risk Of Bias ASsessment Tool, QUADAS quality assessment of studies of diagnostic accuracy included in systematic reviews, QUIPS Quality In Prognosis Studies, RCT randomized controlled trial, RoB risk of bias, ROBINS-I Risk Of Bias In Non-randomised Studies of Interventions, ROBIS Risk of Bias in Systematic Reviews, ScR scoping review, SWiM systematic review without meta-analysis

a Superscript numbers represent citations provided in the main reference list. Additional File 6 lists links to available online resources for the methods and tools included in the Concise Guide

b The MECIR manual [ 30 ] provides Cochrane’s specific standards for both reporting and conduct of intervention systematic reviews and protocols

c Editorial and peer reviewers can evaluate completeness of reporting in submitted manuscripts using these tools. Authors may be required to submit a self-reported checklist for the applicable tools

d The decision flowchart described by Flemming and colleagues [ 223 ] is recommended for guidance on how to choose the best approach to reporting for qualitative reviews

e SWiM was developed for intervention studies reporting quantitative data. However, if there is not a more directly relevant reporting guideline, SWiM may prompt reviewers to consider the important details to report. (Personal Communication via email, Mhairi Campbell, 14 Dec 2022)

f JBI recommends their own tools for the critical appraisal of various quantitative primary study designs included in systematic reviews of intervention effectiveness, prevalence and incidence, and etiology and risk as well as for the critical appraisal of systematic reviews included in umbrella reviews. However, except for the JBI Checklists for studies reporting prevalence data and qualitative research, the development, validity, and reliability of these tools are not well documented

g Studies that are not RCTs or NRSI require tools developed specifically to evaluate their design features. Examples include single case experimental design [ 155 , 156 ] and case reports and series [ 82 ]

h The evaluation of methodological quality of studies included in a synthesis of qualitative research is debatable [ 224 ]. Authors may select a tool appropriate for the type of qualitative synthesis methodology employed. The CASP Qualitative Checklist [ 218 ] is an example of a published, commonly used tool that focuses on assessment of the methodological strengths and limitations of qualitative studies. The JBI Critical Appraisal Checklist for Qualitative Research [ 219 ] is recommended for reviews using a meta-aggregative approach

i Consider including risk of bias assessment of included studies if this information is relevant to the research question; however, scoping reviews do not include an assessment of the overall certainty of a body of evidence

j Guidance available from the GRADE working group [ 225 , 226 ]; also recommend consultation with the Cochrane diagnostic methods group

k Guidance available from the GRADE working group [ 227 ]; also recommend consultation with Cochrane prognostic methods group

l Used for syntheses in reviews with a meta-aggregative approach [ 224 ]

m Chapter 5 in the JBI Manual offers guidance on how to adapt GRADE to prevalence and incidence reviews [ 69 ]

n Janiaud and colleagues suggest criteria for evaluating evidence certainty for meta-analyses of non-randomized studies evaluating risk factors [ 228 ]

o The COSMIN user manual provides details on how to apply GRADE in systematic reviews of measurement properties [ 229 ]

The expected culmination of a systematic review should be a rating of overall certainty of a body of evidence for each outcome reported. The GRADE approach is recommended for making these judgments for outcomes reported in systematic reviews of interventions and can be adapted for other types of reviews. This represents the initial step in the process of making recommendations based on evidence syntheses. Peer reviewers should ensure authors meet the minimal criteria for supporting the GRADE approach when reviewing any evidence synthesis that reports certainty ratings derived using GRADE. Authors and peer reviewers of evidence syntheses unfamiliar with GRADE are encouraged to seek formal training and take advantage of the resources available on the GRADE website [ 211 , 212 ].

Part 6. Concise Guide to best practices

Accumulating data in recent years suggest that many evidence syntheses (with or without meta-analysis) are not reliable. This relates in part to the fact that their authors, who are often clinicians, can be overwhelmed by the plethora of ways to evaluate evidence. They tend to resort to familiar but often inadequate, inappropriate, or obsolete methods and tools and, as a result, produce unreliable reviews. These manuscripts may not be recognized as such by peer reviewers and journal editors who may disregard current standards. When such a systematic review is published or included in a CPG, clinicians and stakeholders tend to believe that it is trustworthy. A vicious cycle in which inadequate methodology is rewarded and potentially misleading conclusions are accepted is thus supported. There is no quick or easy way to break this cycle; however, increasing awareness of best practices among all these stakeholder groups, who often have minimal (if any) training in methodology, may begin to mitigate it. This is the rationale for inclusion of Parts 2 through 5 in this guidance document. These sections present core concepts and important methodological developments that inform current standards and recommendations. We conclude by taking a direct and practical approach.

Inconsistent and imprecise terminology used in the context of development and evaluation of evidence syntheses is problematic for authors, peer reviewers and editors, and may lead to the application of inappropriate methods and tools. In response, we endorse use of the basic terms (Table 6.1 ) defined in the PRISMA 2020 statement [ 93 ]. In addition, we have identified several problematic expressions and nomenclature. In Table 6.2 , we compile suggestions for preferred terms less likely to be misinterpreted.

Terms relevant to the reporting of health care–related evidence syntheses a

a Reproduced from Page and colleagues [ 93 ]

Terminology suggestions for health care–related evidence syntheses

a For example, meta-aggregation, meta-ethnography, critical interpretative synthesis, realist synthesis

b This term may best apply to the synthesis in a mixed methods systematic review in which data from different types of evidence (eg, qualitative, quantitative, economic) are summarized [ 64 ]

We also propose a Concise Guide (Table 6.3 ) that summarizes the methods and tools recommended for the development and evaluation of nine types of evidence syntheses. Suggestions for specific tools are based on the rigor of their development as well as the availability of detailed guidance from their developers to ensure their proper application. The formatting of the Concise Guide addresses a well-known source of confusion by clearly distinguishing the underlying methodological constructs that these tools were designed to assess. Important clarifications and explanations follow in the guide’s footnotes; associated websites, if available, are listed in Additional File 6 .

To encourage uptake of best practices, journal editors may consider adopting or adapting the Concise Guide in their instructions to authors and peer reviewers of evidence syntheses. Given the evolving nature of evidence synthesis methodology, the suggested methods and tools are likely to require regular updates. Authors of evidence syntheses should monitor the literature to ensure they are employing current methods and tools. Some types of evidence syntheses (eg, rapid, economic, methodological) are not included in the Concise Guide; for these, authors are advised to obtain recommendations for acceptable methods by consulting with their target journal.

We encourage the appropriate and informed use of the methods and tools discussed throughout this commentary and summarized in the Concise Guide (Table 6.3 ). However, we caution against their application in a perfunctory or superficial fashion. This is a common pitfall among authors of evidence syntheses, especially as the standards of such tools become associated with acceptance of a manuscript by a journal. Consequently, published evidence syntheses may show improved adherence to the requirements of these tools without necessarily making genuine improvements in their performance.

In line with our main objective, the suggested tools in the Concise Guide address the reliability of evidence syntheses; however, we recognize that the utility of systematic reviews is an equally important concern. An unbiased and thoroughly reported evidence synthesis may still not be highly informative if the summarized evidence itself is sparse, weak, and/or biased [ 24 ]. Many intervention systematic reviews, including those developed by Cochrane [ 203 ] and those applying GRADE [ 202 ], ultimately find no evidence, or find the evidence to be inconclusive (eg, “weak,” “mixed,” or of “low certainty”). This often reflects the primary research base; however, it is important to know what is known (or not known) about a topic when considering an intervention for patients and discussing treatment options with them.

Alternatively, the frequency of “empty” and inconclusive reviews published in the medical literature may relate to limitations of conventional methods that focus on hypothesis testing; these have emphasized the importance of statistical significance in primary research and effect sizes from aggregate meta-analyses [ 183 ]. It is becoming increasingly apparent that this approach may not be appropriate for all topics [ 130 ]. Development of the GRADE approach has facilitated a better understanding of significant factors (beyond effect size) that contribute to the overall certainty of evidence. Other notable responses include the development of integrative synthesis methods for the evaluation of complex interventions [ 230 , 231 ], the incorporation of crowdsourcing and machine learning into systematic review workflows (eg, the Cochrane Evidence Pipeline) [ 2 ], the paradigm shift to living systematic review and NMA platforms [ 232 , 233 ], and the proposal of a new evidence ecosystem that fosters bidirectional collaborations and interactions among a global network of evidence synthesis stakeholders [ 234 ]. These evolutions in data sources and methods may ultimately make evidence syntheses more streamlined and less duplicative and, more importantly, more useful for timely policy and clinical decision-making; however, that will only be the case if they are rigorously conducted and reported.

We look forward to others’ ideas and proposals for the advancement of methods for evidence syntheses. For now, we encourage dissemination and uptake of the currently accepted best tools and practices for their development and evaluation; at the same time, we stress that uptake of appraisal tools, checklists, and software programs cannot substitute for proper education in the methodology of evidence syntheses and meta-analysis. Authors, peer reviewers, and editors must strive to make accurate and reliable contributions to the present evidence knowledge base; online alerts, upcoming technology, and accessible education may make this more feasible than ever before. Our intention is to improve the trustworthiness of evidence syntheses across disciplines, topics, and types of evidence syntheses. All of us must continue to study, teach, and act cooperatively for that to happen.

Acknowledgements

Michelle Oakman Hayes for her assistance with the graphics, Mike Clarke for his willingness to answer our seemingly arbitrary questions, and Bernard Dan for his encouragement of this project.

Authors’ contributions

All authors participated in the development of the ideas, writing, and review of this manuscript. The author(s) read and approved the final manuscript.

The work of John Ioannidis has been supported by an unrestricted gift from Sue and Bob O’Donnell to Stanford University.

Literature Review Generator by AcademicHelp

Sybil Low

Features of Our Literature Review Generator

Advanced power of AI

Simplified information gathering

Enhanced quality

RRL Generator – Your Friend in Academic Writing

Literature reviews can be tricky. They require your full attention and dedication, leaving no room for distractions. And with so many assignments on your hands, it can be very hard to concentrate on just this one thing.

No need to worry though. With our RRL AI Generator, creating any type of paper that requires a scrupulous literature review will be as easy as it gets.

How to Work With Literature Review Generator

We designed our platform so that you won’t need to spend much time figuring out how to work with it. Just specify your topic, the subject of your literature review, and any further instructions on style, structure, and formatting. Then enter the number of pages you need and, if required, the formatting style. Wait around 2 minutes and that’s all – our AI will give you a paper crafted according to your specifications.

What Makes AI Literature Review Generator Special

You are probably wondering how our AI bot is better than basically any other AI-powered solution you can find online. Well, we won’t say that our tool is a magical service that can do everything better. To be fair, like any AI, it is not yet ideal. Still, our platform is more tailored to academic writing than most other bots. With its help, you can not only produce text but also receive a paper with sources and properly organized formatting. This makes it a perfect match for those who specifically need help with tough papers, such as literature reviews, research abstracts, and analysis essays.

Why Use the Free Online Literature Review Generator 

With our Free Online Literature Review Generator you will be able to finish your literature review assignments in just a few minutes. This will allow you to dedicate your free time to a) proofreading, and b) finishing or starting more important tasks and projects. This tool can also help you understand the direction of your work, its structure, and possible sources you can use. In general, it is a more efficient way of doing your homework and organizing the writing process that can help you get better grades and improve your writing skills.

Free Literature Review Generator


Is there a free AI tool for literature review?

Yes, of course, some tools will help you with your literature review. One great solution is the AcademicHelp Literature Review Generator. It offers a quick and simple work process, where you can specify all the requirements for your paper and then receive a fully completed task in just 2 minutes. It is an especially fitting service for those looking for a budget-friendly tool.

How to create a literature review?

Crafting a literature review calls for a systematic approach to examining existing scholarly work on a specific topic. Thus, start by defining a clear research question or thesis statement to guide your focus. Conduct a thorough search of relevant databases and academic journals to gather sources that address your topic. Read and analyze these sources, noting key themes, methodologies, and conclusions. Organize the literature by themes or methods, and synthesize the findings to provide a critical overview of the existing research. Your review should give context to the research within the field, noting areas of consensus, debate, and gaps in knowledge. Finally, write your literature review, integrating your analysis with your thesis statement, providing a clear and structured narrative that offers insights into the research topic.

Can I write a literature review in 5 days?

It is possible to write a literature review in 5 days, but you will need careful planning and dedication. Start by quickly defining your topic and research question. Dedicate a day to intensive research, finding and selecting relevant sources. Spend the next two days reading and summarizing these sources. On the fourth day, organize your notes and outline the review, focusing on arranging the main findings around key themes. Use the final day to write and revise your literature review, so that it is logically structured.

What are the 5 rules for writing a literature review?

When writing a literature review, you need to follow these essential rules: First, maintain a clear focus and structure. Your review should be organized around your thesis statement or key question, with each section logically leading to the next. Second, be critical and analytical rather than merely descriptive. Discuss the strengths and weaknesses of the research, the methodologies used, and the conclusions drawn. Third, include credible and versatile sources to represent a balanced view of the topic. Fourth, synthesize the information from your sources to create a narrative that adds value to your field of study. Finally, your writing should be clear, concise, and plagiarism-free, with all the sources appropriately cited.


Conducting a Literature Review

  • Literature Review
  • Developing a Topic
  • Planning Your Literature Review
  • Developing a Search Strategy
  • Managing Citations
  • Critical Appraisal Tools
  • Writing a Literature Review

Appraise Your Research Articles

The structure of a literature review should include the following:

  • An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review,
  • Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches entirely],
  • An explanation of how each work is similar to and how it varies from the others,
  • Conclusions as to which pieces are best considered in their argument, are most convincing of their opinions, and make the greatest contribution to the understanding and development of their area of research.

The critical evaluation of each work should consider:

  • Provenance  -- what are the author's credentials? Are the author's arguments supported by evidence [e.g. primary historical material, case studies, narratives, statistics, recent scientific findings]?
  • Methodology  -- were the techniques used to identify, gather, and analyze the data appropriate to addressing the research problem? Was the sample size appropriate? Were the results effectively interpreted and reported?
  • Objectivity  -- is the author's perspective even-handed or prejudicial? Is contrary data considered or is certain pertinent information ignored to prove the author's point?
  • Persuasiveness  -- which of the author's theses are most convincing or least convincing?
  • Value  -- are the author's arguments and conclusions convincing? Does the work ultimately contribute in any significant way to an understanding of the subject?

Reviewing the Literature

While conducting a review of the literature, maximize the time you devote to writing this part of your paper by thinking broadly about what you should be looking for and evaluating. Review not just what the articles are saying, but how they are saying it.

Some questions to ask:

  • How are they organizing their ideas?
  • What methods have they used to study the problem?
  • What theories have been used to explain, predict, or understand their research problem?
  • What sources have they cited to support their conclusions?
  • How have they used non-textual elements [e.g., charts, graphs, figures, etc.] to illustrate key points?

When you begin to write your literature review section, you'll be glad you dug deeper into how the research was designed and constructed because it establishes a means for developing more substantial analysis and interpretation of the research problem.

Tools for Critical Appraisal

Now that you have found articles based on your research question, you can appraise the quality of those articles. These are resources you can use to appraise different study designs.

  • Centre for Evidence Based Medicine (Oxford)
  • University of Glasgow

"AFP uses the Strength-of-Recommendation Taxonomy (SORT), to label key recommendations in clinical review articles."

  • SORT: Rating the Strength of Evidence    American Family Physician and other family medicine journals use the Strength of Recommendation Taxonomy (SORT) system for rating bodies of evidence for key clinical recommendations.

Title: Large Language Models for Cyber Security: A Systematic Literature Review

Abstract: The rapid advancement of Large Language Models (LLMs) has opened up new opportunities for leveraging artificial intelligence in various domains, including cybersecurity. As the volume and sophistication of cyber threats continue to grow, there is an increasing need for intelligent systems that can automatically detect vulnerabilities, analyze malware, and respond to attacks. In this survey, we conduct a comprehensive review of the literature on the application of LLMs in cybersecurity (LLM4Security). By comprehensively collecting over 30K relevant papers and systematically analyzing 127 papers from top security and software engineering venues, we aim to provide a holistic view of how LLMs are being used to solve diverse problems across the cybersecurity domain. Through our analysis, we identify several key findings. First, we observe that LLMs are being applied to a wide range of cybersecurity tasks, including vulnerability detection, malware analysis, network intrusion detection, and phishing detection. Second, we find that the datasets used for training and evaluating LLMs in these tasks are often limited in size and diversity, highlighting the need for more comprehensive and representative datasets. Third, we identify several promising techniques for adapting LLMs to specific cybersecurity domains, such as fine-tuning, transfer learning, and domain-specific pre-training. Finally, we discuss the main challenges and opportunities for future research in LLM4Security, including the need for more interpretable and explainable models, the importance of addressing data privacy and security concerns, and the potential for leveraging LLMs for proactive defense and threat hunting. Overall, our survey provides a comprehensive overview of the current state-of-the-art in LLM4Security and identifies several promising directions for future research.

  • Open access
  • Published: 09 May 2024

Designing an evaluation tool for evaluating training programs of medical students in clinical skill training center from consumers’ perspective

  • Rezvan Azad 1,
  • Mahsa Shakour 2 &
  • Narjes Moharami 2

BMC Medical Education volume 24, Article number: 502 (2024)

Introduction

The Clinical Skill Training Center (CSTC) is the first environment where third-year medical students learn clinical skills after passing the basic sciences. Consumer-based evaluation is one way to improve this center together with its consumers. This study was conducted with the aim of preparing a consumer-oriented evaluation tool for the CSTC among medical students.

The study used a mixed-methods design. The first phase was qualitative and aimed at developing the evaluation tool; the second phase evaluated it. In the first phase, after a literature review in the divergent step, a complete list of problems in the field of CSTCs in medical schools was prepared. In the convergent step, this list was compared with the standards of clinical education and Scriven's values. In the second phase, the tool was evaluated by a scientific and authority committee. Validity was measured by determining the Content Validity Ratio (CVR) and Content Validity Index (CVI), and the face and content validity of the tool were established through the approval of a group of specialists.

The findings took the form of four questionnaires: for clinical instructors, pre-clinical medical students, externship students, and interns. All items were designed on a 5-point Likert scale. The main areas of evaluation included the objectives and content of training courses, implementation of operations, facilities and equipment, and the environment and indoor space. In order to examine the long-term effects, a special evaluation form was designed for interns.

The consumer evaluation tool was designed with good reliability and trustworthiness and is suitable for use in the CSTC; its use can improve the effectiveness of clinical education activities.

Mastering clinical skills is one of the essential requirements for becoming a physician, and pre-clinical courses play an important role in forming these skills in medical students. The importance of these courses is such that the Clinical Skill Training Center (CSTC) has been formed especially for this purpose and is nowadays used for training pre-clinical skills and some more advanced procedures such as operating room simulation [ 1 ]. The CSTC is an educational environment where students can use the available resources, under the supervision of experienced faculty members, to be introduced to clinical skills, to train and gain experience in these skills, and to receive immediate feedback to resolve their mistakes and shortcomings [ 2 ]. The center serves students who have sufficient theoretical knowledge but lack the skills necessary for working in the clinical setting. Therefore, it supports students in the acquisition, maintenance and improvement of their clinical medical skills [ 3 ]. In this center, students can learn and repeat treatment procedures in a safe environment without severe consequences, which reduces their stress and allows them to train and learn [ 4 ]. In this study, medical students attend this center for the first time after the end of the theoretical course and before entering the hospital, and they first learn practical medical skills such as performing a variety of examinations and history taking. Then, in externship and internship, they practice more advanced skills such as cardiopulmonary resuscitation, dressing and stitching in small groups.

The importance of centers like the CSTC lies in the fact that learning a large number of practical and communication skills linked to theoretical knowledge is one of the essential characteristics of medical education and can play an important role in the future careers of students and in training specialized human resources in medicine and healthcare [ 4 ]. However, one of the important matters in clinical training is the quality of education, which can directly affect the quality of healthcare services provided to society. The quality of education is, in turn, affected by the details of the educational programs. Therefore, the evaluation of educational programs can play an important role in providing quality education. In other words, using suitable evaluation mechanisms creates the requirements for performance transparency and accountability in the clinical education system of medical education [ 5 ]. Observing the principles of evaluation can also help determine the shortcomings and problems in educational programs [ 2 ]. However, the evaluation of educational programs is often faced with difficulties. Evaluations conducted to ensure a suitable quality of education for medical students must determine whether the students have achieved acceptable clinical standards, which is only possible through careful evaluation of their training programs [ 2 ].

There are various problems concerning evaluation tools. Faculty members in medicine still face challenges in improving evaluation tools and in creating tools for evaluating factors that are hard to quantify or qualify, such as professionalism, teamwork and expertise [ 6 ].

Despite various theories regarding evaluation, the lack of credible and valid evaluation tools for educational programs is still felt [ 7 ]. Using suitable evaluation tools can create an overview of the current situation of training programs based on the quality factors of the curriculum and can serve as a guideline for decision-making, planning, faculty development and improving the quality of education [ 8 ]. Perhaps the most important value of a suitable evaluation tool for training programs is that it provides a clear picture and operational, measurable indicators regarding the implementation of educational programs. Furthermore, once complete, such a tool can be used as a standing screening instrument by academic groups, faculty members and authorities in practical training programs.

The consumer-oriented model was advocated by the evaluation expert and philosopher Michael Scriven. Like other models, its purpose is to make a value judgment about the quality of a program, product, or policy in order to determine its value, merit, or importance. In this model, however, the judgment is based on the satisfaction and usefulness of the curriculum for the program's consumers, and the evaluator considers himself responsive to their needs and demands. The models included in this approach pay particular attention to their responsibility towards the consumers of curricula and educational programs, rather than treating evaluation as a value-free measurement of whether program goals were achieved [ 9 , 10 ].

The current study aims to design an evaluation tool for training programs in the CSTC based on consumers' perspectives and to assess its validity and reliability, in order to facilitate the evaluation of educational programs and help improve the practical skills of medical students. The prepared evaluation tool can therefore be used not only for continuous improvement of educational quality but also for the validation of educational programs.

Subjects and methods

The study was a mixed-methods study with a triangulation approach: a developmental study for building an evaluation tool for the educational programs of the CSTC in medical schools from the consumers' perspective, using data gathered through a qualitative study, a descriptive survey study, and many other resources. The study was conducted from 2020 to 2022 at Arak University of Medical Sciences. The samples were students at different levels and clinical teachers, who are the consumers and main stakeholders. The study included two main phases.

The first phase was qualitative. The samples were the literature and 10 experts, and sampling was purposeful. This phase served to decide on the factors used for evaluating the educational programs of the CSTC. To create a deep understanding of the topic, the literature related to the subject was reviewed, focusing on evaluation from the consumers' perspective and on questionnaire preparation methods. Then, using the Scriven consumer opinion questionnaire, the standards for CSTCs, and the available literature, interviews were conducted with experts and stakeholders in the CSTC. These interviews aimed to prepare a comprehensive list of problems and concerns related to the educational programs at the clinical skill training center that the evaluation tool should address. This stage was known as the divergent stage; the topics discussed in the interviews included educational goals, content, equipment, educational processes, and the environment and physical location. Some of the questions asked in this stage included "What is the level of achieving educational goals among students in the current program?", "How effective is the practical program of the center in improving the clinical skills of the students?", "Does the center have access to sufficient tools and equipment for completing its educational program?" and "What are the long-term effects of the CSTC's educational program?"

In the next step, known as the convergent step, the list prepared in the previous stage was combined with the educational standards for CSTCs provided by the deputy of education of the Ministry of Health, as well as with the Scriven criteria. The results were then carefully assessed by a scientific and authority committee consisting of the Educational Deputy of Clinical Education of the Faculty of Medicine, the Director of Educational Affairs of the Faculty of Medicine, the Director of the Clinical Skills Training Center and Curriculum, the Expert of the Clinical Skills Center and the Bachelor of Technical Affairs of the Clinical Skills Training Center in the Faculty of Medicine of Arak University of Medical Sciences. The questionnaire items were selected based on their importance and the evaluation criteria. The data gathering tool was prepared after determining the evaluation questions, the data gathering sources and the evaluation method. The consumers in this study were clinical training faculty members and medical students (externship, pre-clinical and internship students). We therefore designed four questionnaires with dedicated questions. Each questionnaire covers four domains (learning objectives and course content; equipment and tools; educational processes; environment and physical location).

The second phase was quantitative: a survey. The samples were professors who were experts in the subject and medical students (externship, pre-clinical and internship students). Convenience and purposive sampling were used; 10 faculty members and 71 students were selected. This phase measured the questionnaire's face and content validity. The validity was measured using the Content Validity Ratio (CVR) and Content Validity Index (CVI) following Lawshe's method, in which the opinion of experts in the field concerning the questionnaire content is used to calculate these factors [ 11 ]. A total of 10 faculty members participated in the validity survey, including faculty members from the specialty fields of medical education, gynecology, infectious diseases, emergency medicine, pediatric medicine, nursing and midwifery. After explaining the research goals to the participants and providing them with the operational definitions related to the contents of the items, they were asked to mark each item in a table on a three-part Likert scale with "essential", "useful but not essential" and "not essential" scores. The Content Validity Ratio was then calculated as \(\mathrm{CVR} = \frac{n_e - n/2}{n/2}\), where \(n\) is the total number of experts and \(n_e\) is the number of experts who selected the "essential" score. Using the CVR table, the minimum CVR value for accepting an item based on the participants' opinions was set at 0.62.
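
For instance, with the panel of 10 experts used here, an item marked "essential" by nine experts yields \(\mathrm{CVR} = (9 - 5)/5 = 0.8\) and is retained, whereas an item marked essential by only eight experts yields \((8 - 5)/5 = 0.6\), which falls below the 0.62 threshold and would be eliminated.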

After calculating the CVR, the method proposed by Waltz & Bausell was used to determine the CVI. To this end, a CVI evaluation table was prepared for the items using a four-part scale with "unrelated", "requiring major revision", "requiring minor revision" and "relevant" scores, and delivered to the 10 participating experts, who were asked to provide their opinions on each item. The CVI value was then calculated for each item by dividing the total number of "requiring minor revision" and "relevant" answers by the total number of experts. Items with CVI values higher than 0.79 were accepted [ 11 , 12 ]. The reliability of the questionnaire was determined with emphasis on internal consistency with the help of SPSS software and was higher than 0.8, which confirmed the suitable reliability of the questionnaire. A panel of experts then conducted a qualitative review of the items, edited their grammar, and modified unclear statements based on the research goals. In general, each phrase had to be accepted by the majority of the panel based on simplicity, clarity and lack of ambiguity. Face validity was also assessed by scoring the impact of each item on the questionnaire; phrases with scores lower than 1.5 were eliminated. After evaluating face validity, the Content Validity Ratio was calculated by the experts, and items with CVR values below the threshold were eliminated. Finally, the tool was administered to 71 students and 11 teachers to assess reliability according to Cronbach's alpha.
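
For illustration only (a sketch of the formulas described above, not the authors' code), both indices reduce to simple ratios over the panel counts:

    def content_validity_ratio(n_essential: int, n_experts: int) -> float:
        # Lawshe's CVR = (n_e - n/2) / (n/2)
        half = n_experts / 2
        return (n_essential - half) / half

    def content_validity_index(n_relevant: int, n_experts: int) -> float:
        # CVI: share of experts rating an item "relevant" or "requiring minor revision"
        return n_relevant / n_experts

    # With the study's 10-expert panel, items are kept if CVR >= 0.62 and CVI > 0.79:
    print(content_validity_ratio(9, 10))   # 0.8 -> above 0.62, kept
    print(content_validity_index(8, 10))   # 0.8 -> above 0.79, kept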

The results of the current study indicate that, according to the participating faculty members and experts, the evaluation of educational programs of clinical skill training centers covers goals and content, educational processes, equipment and tools, and the environment and physical location. After interviews with clinical training experts and a review of the relevant literature, four separate questionnaires were developed for clinical training faculty members, pre-clinical students, internship students, and externship students. All of the experts sampled answered all of the validity questions, and 71 of the 90 students completely answered the questionnaires.

The questionnaire for faculty members included 35 items (Table 1), the one for interns included 6 items (Table 2), the externship students' questionnaire included 29 items (Table 3), and the questionnaire for pre-clinical students included 41 items (Table 4). All items were designed for scoring on a 5-point Likert scale (very low, low, average, high, very high).

The face validity of the questionnaires was evaluated using qualitative and quantitative approaches. Among the 117 items in the 4 questionnaires, 6 items did not have suitable content validity (CVR < 0.62) and were eliminated (Table 5). The remaining 111 items had CVR ≥ 0.62, and the results of the CVI assessment indicated that all of them were acceptable.

The reliability of the questionnaires was investigated using Cronbach's alpha with emphasis on internal consistency with the help of SPSS software, as presented in Table 6, which confirms the reliability of the questionnaires. The reliability of every questionnaire was above 0.83; therefore, all items received acceptable reliability and validity scores.

In the current study, a comprehensive researcher-made questionnaire was prepared based on the opinions of experts and curriculum designers while considering all relevant resources and literature; in terms of the expansiveness of its scope, it is a unique tool in Iran. The prepared tool was then used to evaluate the activities of the clinical skills training center in 5 domains: (1) program goals and content, (2) tools and equipment, (3) educational processes, (4) environment and physical location and (5) long-term effects of the curriculum.

The first part of the evaluation tool prepared in the current study assesses the program's objectives from the consumers' point of view. The CSTC is suitable for training basic and practical skills, which are often neglected due to time constraints during the students' presence in clinical environments [ 6 ]. The factors investigated in this area included basic skills such as patient interviews, basic resuscitation, clinical examination, practical clinical activities, interpretation of essential clinical findings, prescription skills and patient management. Other studies have investigated similar factors. For example, Imran et al. (2018) evaluated the attitude of students towards this center and stated that participation in skill lab sessions in the pre-clinical years assists students in their clinical years in achieving better overall performance, as well as better communication skills and self-esteem [ 1 ]. According to previous studies, the majority of students preferred participation in pre-clinical training in these centers due to the advantages of skill labs for learning clinical skills [ 3 ]. Another study showed that the majority of students prefer participation in a skill lab for learning essential clinical skills such as venous blood sampling, catheterization, endotracheal intubation, listening to respiratory sounds, genital examination, etc., rather than directly performing these procedures on patients [ 2 ]. The tools designed in the current study evaluate some of these learning objectives; however, because five domains were evaluated with many questions in each, the items were summarized to keep the tool user friendly. Every questionnaire had some questions on objectives that its respondents, as consumers (faculty members and medical students), could answer.

The second part of this evaluation tool assesses educational tools such as educational mannequins and models, medical examination devices (stethoscope, sphygmomanometer, otoscope and ophthalmoscope), medical consumables, audio-visual equipment and information technology facilities. According to the studies, a common factor in CSTCs is access to a wide range of tools in each university, as well as the use of up-to-date educational technologies. These innovations have even resulted in improved academic rankings for some colleges and medical universities around the world [ 12 ]. The quality of these educational tools is another important item in many studies [ 13 ]. The quality of a mannequin depends on its fidelity. Brydges et al. showed in their study that higher fidelity produces more learning in less time, and they suggested that clinical curricula incorporate exposure to multiple simulations to maximize educational benefit [ 14 ].

The third part of this tool covers educational processes and consists of evaluating factors such as the length and number of workshops, the effect of the CSTC on teaching in a clinical environment, the effect of the center on increasing motivation and interest in clinical topics, the use of volunteer patients and actors, and the use of modern teaching and assessment methods. This area evaluates the educational process as an important part of clinical training, and its importance is confirmed in other studies. The CSTC enables students, including interns and new students, to practice procedures without fearing the consequences. Furthermore, there are no time or ethical constraints in these practices, enabling students to be trained in treatment procedures and physical examinations that can be dangerous or painful for the patient [ 2 ]. In this regard, the standardized patient is one of the popular methods used in universities around the world. For example, the University of Massachusetts has been using standardized patients as an education and assessment tool, and even as clinical trainers, for more than 20 years [ 8 ]. Another example is the simulation center of Grand Valley State University, which provides significant tools for the management of standardized patients, including their registration and deployment as needed. This center has designed a website for the registration of standardized patients, which allows individuals to register based on certain criteria before being trained and deployed according to the protocols [ 8 ].

The effect of clinical skill training centers on motivation was presented in a study by Hashim et al. (2016) on the effects of such centers on medical education. According to the results of this study, 84 to 89 per cent of students believed that these centers increase motivation for medical education as well as interest in learning clinical skills [ 3 ]. Regarding the use of modern methods, one of the most recent examples is the use of clinical simulations with multimedia tools and software, which can be used to improve psychological and psychomotor skills. Studies have shown that these centers also lead to improved motivation and independent learning tendencies among students [ 13 ].

The fourth part relates to the evaluation of the environment and physical location: accessibility, flexibility of application, similarity to a real environment, specialized training spaces, receiving feedback and the use of multimedia technologies. These factors were extracted from the opinions of experts and stakeholders and have been used in similar studies. According to the standard for clinical skill training centers presented by the Ministry of Health, Treatment and Medical Education, the preferred physical location for a clinical skill training center includes a large area with flexible application as well as a wardroom, nursing station, ICU or smaller rooms with specialized applications such as an operating room and resuscitation room. Furthermore, a clinical skill training center must have access to a suitable location for providing students with multimedia education [ 8 ].

James et al., in their study, showed the effectiveness of an experimental pharmacology skill lab in facilitating the training of specific modules for developing core competencies in parenteral drug administration and intravenous drip setting, using mannequins to develop injection skills in undergraduate medical students [ 15 ]. These factors were included in the evaluation questionnaire prepared in the current study. In the study by Hashim et al. (2016), 62 participants believed that the time constraints and pressure of the clinical environment were absent in the CSTC while learning clinical skills. Therefore, these centers can help students improve their skills by making them feel secure and by resolving their concerns about the consequences of their actions. According to the students participating in that study, approximately 70 to 75 per cent of students felt more secure about mistakes and less worried about harming patients during clinical procedures after training clinical skills on the mannequins available at clinical skill training centers [ 3 ].

The fifth part evaluates the long-term effects of the education: the conformity between the center's curriculum and educational needs, the effect of the center on improving essential skills, and the effect of the curriculum on interest, stress and the facilitation of clinical procedures. Yu et al. observed that after training in a clinical skill training center and simulations, students show a significantly lower level of anxiety and a significantly higher level of self-esteem compared to before the training. Furthermore, after experiencing the simulation, students without previous simulation experience showed lower anxiety and higher self-esteem [ 16 ]. In a systematic review by Alanazi et al., evidence showed that participation in a CSTC and the use of simulation can significantly improve the knowledge, skill and self-esteem of medical students [ 17 ]. Furthermore, a study by Younes et al. showed that adding a simulation program to a normal psychiatry curriculum improves the quality of education and the self-esteem of medical students [ 18 ]. In another study, Hashim et al. (2016) showed a positive attitude among students regarding the effectiveness of clinical skill training centers for improving skills and self-esteem, as well as for learning new clinical skills [ 3 ]. Therefore, based on the role of clinical skill training centers in improving the motivation and self-esteem of students shown in previous studies, these factors can be important in the evaluation of such centers and were therefore included in the evaluation questionnaire.

Limitations

Our study had some limitations. First, there was no existing tool for evaluating training programs of medical students in a Clinical Skill Training Center from the consumers' perspective. Comparison was therefore difficult, and we compared each domain with the results of other studies; the study used triangulation and drew on many resources to design this tool, which reduced bias. Second, although we extracted many items in the convergent step, we could not use all of them because respondents might not answer an overly long instrument, so the questionnaires were summarized. To ensure that no important item was neglected, experts in medical education checked the items.

There are many possible items in a tool for evaluating the Clinical Skill Training Center from the consumers' perspective, and some of them can only be answered by certain consumers. The tool is therefore defined as four questionnaires for four types of consumers. In each questionnaire, respondents answer questions across four domains (learning objectives and course content; equipment and tools; educational processes; environment and physical location). The evaluation tool designed in the current study offers suitable reliability and validity and can be used for evaluating the CSTC from the consumers' perspective. Its application can help improve the effectiveness of educational activities and the curriculum in clinical skill training centers.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Abbreviations

CSTC: Clinical Skill Training Center
CVR: Content Validity Ratio
CVI: Content Validity Index

References

1. Imran M, Khan SA, Aftab T. Effect of preclinical skill lab training on clinical skills of students during clinical years. Pak J Physiol. 2018;12(3):30–2. https://pjp.pps.org.pk/index.php/PJP/article/view/580
2. Upadhayay N. Clinical training in medical students during preclinical years in the skill lab. Adv Med Educ Pract. 2017;8:189–94.
3. Hashim R, Qamar K, Khan MA, Rehman S. Role of skill laboratory training in medical education: students' perspective. J Coll Physicians Surg Pak. 2016;26(3):195–8.
4. Singh H, Kalani M, Acosta-Torres S, El Ahmadieh TY, Loya J, Ganju A. History of simulation in medicine: from Resusci Annie to the Ann Myers Medical Center. Neurosurgery. 2013;73(Suppl 1):9–14.
5. Bazargan A. Educational evaluation. Tehran: Samt; 2020.
6. Morgan J, Green V, Blair J. Using simulation to prepare for clinical practice. Clin Teach. 2018;15(1):57–61.
7. Pazargadi M, Ashktorab T, Alavimajd H, Khosravi S. Developing an assessment tool for nursing students' general clinical performance. Iran J Med Educ. 2013;12(11):877–87.
8. Denizon Arranz S, Blanco Canseco JM, Pouplana Malagarriga MM, Holgado Catalán MS, Gámez Cabero MI, Ruiz Sánchez A, et al. Multi-source evaluation of an educational program aimed at medical students for interviewing/taking the clinical history using standardized patients. GMS J Med Educ. 2021;38(2):Doc40.
9. Lam CY. Consumer-oriented evaluation approach. In: The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation. Thousand Oaks: SAGE; 2018. pp. 390–2.
10. Fitzpatrick J, Sanders J, Worthen B. Program evaluation: alternative approaches and practical guidelines. 4th ed. Boston: Allyn and Bacon; 2004.
11. Waltz CF, Bausell RB. Nursing research: design, statistics, and computer analysis. Philadelphia: F. A. Davis; 1981.
12. Zamanzadeh V, Rassouli M, Abbaszadeh A, Majd HA, Nikanfar A, Ghahramanian A. Details of content validity and objectifying it in instrument development. 2014.
13. O'Connor M, Rainford L. The impact of 3D virtual reality radiography practice on student performance in clinical practice. Radiography. 2023;29(1):159–64.
14. Brydges R, Carnahan H, Rose D, Rose L, Dubrowski A. Coordinating progressive levels of simulation fidelity to maximize educational benefit. Acad Med. 2010;85(5):806–12.
15. James J, Rani RJ. Novel strategy of skill lab training for parenteral injection techniques: a promising opportunity for medical students. Int J Basic Clin Pharmacol. 2022;11(4):315.
16. Yu JH, Chang HJ, Kim SS, Park JE, Chung WY, Lee SK, et al. Effects of high-fidelity simulation education on medical students' anxiety and confidence. PLoS ONE. 2021;16(5):e0251078.
17. Alanazi A, Nicholson N, Thomas S. Use of simulation training to improve knowledge, skills, and confidence among healthcare students: a systematic review. Internet J Allied Health Sci Pract. 2017.
18. Younes N, Delaunay A, Roger M, et al. Evaluating the effectiveness of a single-day simulation-based program in psychiatry for medical students: a controlled study. BMC Med Educ. 2021;21(1):348.

Acknowledgements

Sincere thanks to the practice tutors who undertook these clinical assessments; we are also very thankful to the professors of Arak University of Medical Sciences for helping us successfully design the questionnaire.

Funding

Not applicable.

Author information

Authors and Affiliations

Medical Education development center, Arak University of Medical Sciences, Arak, Iran

Rezvan Azad

Medicine School, Arak University of Medical Sciences, Arak, Iran

Mahsa Shakour & Narjes Moharami

Contributions

The concept and framework were designed by MSH and RA. The questionnaires and data were collected by RA. The data were analyzed by MSH and RA. The manuscript was prepared by NM and edited by MSH and NM. The technical editing was done by MSH.

Corresponding author

Correspondence to Mahsa Shakour .

Ethics declarations

Ethics approval and consent to participate

This study received ethical approval from the Institutional Review Board (IRB) of Arak University of Medical Sciences, Iran, to which the researchers are affiliated. All study protocols were performed in accordance with the Declaration of Helsinki. The study observed ethical considerations such as the confidentiality of the participants' names and the written consent of participants. The survey was conducted in 2021. Informed consent was obtained from each participant after clearly explaining the objectives and the significance of the study. We advised the study participants of their right to participate, to refuse, or to discontinue participation at any time, and gave them the chance to ask anything about the study. The participants were also advised that all data collected would remain confidential.

Consent for publication

Not Applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Azad, R., Shakour, M. & Moharami, N. Designing an evaluation tool for evaluating training programs of medical students in clinical skill training center from consumers' perspective. BMC Med Educ 24, 502 (2024). https://doi.org/10.1186/s12909-024-05454-7

Received: 22 November 2023

Accepted: 22 April 2024

Published: 09 May 2024

DOI: https://doi.org/10.1186/s12909-024-05454-7

Keywords

  • Program evaluation
  • Consumer-oriented
  • Clinical skills lab


  • Open access
  • Published: 14 May 2024

PrimerEvalPy: a tool for in-silico evaluation of primers for targeting the microbiome

  • Lara Vázquez-González 1, 4,
  • Alba Regueira-Iglesias 3, 4,
  • Carlos Balsa-Castro 1, 3, 4,
  • Nicolás Vila-Blanco 1, 2, 4,
  • Inmaculada Tomás 1, 3, 4 &
  • María J. Carreira 1, 2, 4

BMC Bioinformatics volume 25, Article number: 189 (2024)

The selection of primer pairs in sequencing-based research can greatly influence the results, highlighting the need for a tool capable of analysing their performance in-silico prior to the sequencing process. We therefore propose PrimerEvalPy, a Python-based package designed to test the performance of any primer or primer pair against any sequencing database. The package calculates a coverage metric and returns the amplicon sequences found, along with information such as their average start and end positions. It also allows the analysis of coverage for different taxonomic levels.

As a case study, PrimerEvalPy was used to test the most commonly used primers in the literature against two oral 16S rRNA gene databases containing bacteria and archaea. The results showed that the most commonly used primer pairs in the oral cavity did not match those with the highest coverage. The best performing primer pairs were found for the detection of oral bacteria and archaea.

Conclusions

This demonstrates the importance of a coverage analysis tool such as PrimerEvalPy to find the best primer pairs for specific niches. The software is available under the MIT licence at https://gitlab.citius.usc.es/lara.vazquez/PrimerEvalPy .

Introduction

High-throughput amplicon sequencing has become a fundamental tool in modern microbiome analysis. Although the 16S rRNA gene remains the best known and most studied gene, mainly for the study of bacteria and archaea [ 1 ], other genes, such as 18S rRNA [ 2 ], provide valuable insights into microbial eukaryotes, including protozoa and fungi. In addition, the Internal Transcribed Spacer (ITS) gene [ 3 ] and the 23S rRNA gene [ 4 ], although less widely used than 16S rRNA, have proved useful in exploring the diversity of microbial communities, particularly in identifying specific archaea and bacteria.

These genes often have several conserved regions. In some cases, such as the 16S rRNA, there can be up to nine regions that serve as target sites for primer-based amplicon amplification. Primers can be designed to amplify adjacent or distant regions, or even both ends of the gene. The latter, as seen in the 16S rRNA gene, is particularly important when using new massive high-throughput sequencing platforms, such as PacBio [ 5 ].

There is also a wide range of sample types suitable for analysis, ranging from oceanic [ 6 ] and environmental [ 7 ] to human, animal and food [ 8 ]. Within each of these categories, different niches often have very different sequence compositions, requiring specialised analytical approaches.

In all of the above scenarios, there may be dozens or even hundreds of primers available. Some of these are called universal primers and allow the simultaneous study of several taxonomic groups. For example, in the case of the 16S rRNA gene, these universal primers allow the study of both bacteria and archaea. Alternatively, specific primers are designed to target particular taxonomic groups, focusing exclusively on either archaea or bacteria. Researchers can also go even deeper and use primers to study smaller taxonomic subsets, such as specific genera or phyla within different samples.

Similarly, the 18S rRNA gene offers universal primers [ 9 ], but also primers tailored for exclusive use in the study of fungi, protozoa, or algae [ 10 ]. For the study of fungal diversity using the ITS gene, several primer pairs are proposed for different regions (e.g., ITS1F/ITS2 and ITS3/ITS4) [ 11 ]. These primers, some universal and some specific, target different taxonomic groups, including ascomycetes, basidiomycetes, ectomycorrhizal, arbuscular mycorrhizal fungi, and others. In addition, specialised primers are available to target fungal pathogens, whether in environmental or clinical samples [ 12 , 13 ].

In the scientific literature, primer pairs have been proposed for specific niches, such as the oral cavity [ 14 ], or for specific taxonomic groups in samples from oceanic environments, soil, and other sources [ 15 ]. However, it is noteworthy that primers originally designed for environmental samples have been used in very different contexts [ 16 ].

In conclusion, with the constant emergence of new primer proposals and the already large number of potential primer candidates, there is an urgent need for a versatile tool to test the performance of these primer pairs against specific sequence databases. This tool should allow researchers to assess their performance before embarking on wet lab experiments. In order to accommodate the wide range of sample types mentioned above, this tool must include the following features:

  • Evaluation of multiple candidate primers, either individually or in pairs.
  • Analysis on any sequence database.
  • Optional inclusion of taxonomic information to assess coverage across different taxonomic levels.
  • Analysis of all clades.
  • Output of primer start and end positions within the sequence.
  • Support for whole genome analysis.

When evaluating primer pairs, the tool should also allow users to set minimum and maximum amplicon length values before starting coverage analysis. This last feature will make it easier for users to select the most appropriate sequencing platform for their research needs. With this comprehensive set of features, researchers will have access to richer and more relevant information for selecting optimal primers or primer pairs tailored to their specific research objectives.

There are several works in the literature that analyse primers to assess their quality, such as EMBOSS [ 17 ], Metacoder [ 18 ], TestPrime [ 19 ] and PrimerTree [ 20 ]. However, none of them fulfil all of the above criteria.

Therefore, in this work, we present PrimerEvalPy - a versatile tool designed for the in-silico evaluation of primers or primer pairs against specific sequence databases provided by the user. The above features have been incorporated into PrimerEvalPy. In addition, users can seamlessly access genomes using our tool to retrieve them from the National Center for Biotechnology Information (NCBI) databases by specifying the appropriate identifiers. Alternatively, PrimerEvalPy allows for the direct analysis of sequences without the need for prior downloads from the NCBI.

To assess the capabilities of this in-silico tool, we performed tests using the most commonly used primer pairs for the 16S rRNA gene in oral cavity research [ 14 ]. We tested primers targeting bacteria, archaea, and both (universal primers). These tests were carried out by analysing an oral bacterial sequence database proposed by Escapa et al. [ 21 ] and improved by our research group, which also developed an oral archaeal database [ 14 ].

While we focus our attention on evaluating PrimerEvalPy on the oral microbiome, which has a limited diversity [ 22 ], it is important to highlight that this tool has the capacity to work with multiple and diverse niches.

Implementation

PrimerEvalPy has been developed in Python 3.9, using Biopython [ 23 ], a well-known bioinformatics package, to support the handling of sequencing data. Our tool can be used both from the command line and as a library integrated into other Python projects. It is also compatible with Windows and Linux.

The package accepts two primary inputs: primer sequences and the gene or genome sequences against which the primers are to be evaluated.

PrimerEvalPy has two modules that provide the main functionality of the package. The first is the analyze_ip module, designed for the analysis of single primer sequences, while the second is the analyze_pp module, tailored for the analysis of primer pairs.

The primer sequences can be evaluated on DNA sequences of different origins, provided they are presented in a FASTA file format. By default, the package returns coverage calculations for all sequences within the provided file. In cases where an additional file containing the taxonomy of all sequences is provided, PrimerEvalPy extends its capabilities to compute coverage at different taxonomic levels and even for all possible clades.

The package also includes the download module, which retrieves DNA sequences, either genes or genomes, from the NCBI nucleotide database. If desired, this module can also be used to retrieve and save the taxonomy.

All in all, PrimerEvalPy returns the results of the coverage analysis in several files. For each primer analysed, a table is generated, containing mainly the coverage and the average start and end positions of the primer in the sequences. FASTA files containing the sequences found by the primer are also generated.

Input file for target primers

The list of primers to be evaluated should be in the oligo file format used by Mothur [ 24 ]. This file format indicates whether a primer is a single primer (denoted by ‘forward’ or ‘reverse’) or a primer pair (denoted by ‘primer’). It includes their sequence(s) and optionally a name for identification.

It is important to note that PrimerEvalPy supports primers with degenerate bases as defined by the International Union of Pure and Applied Chemistry (IUPAC), which are treated accordingly during the analysis. However, no other transformation is applied to these sequences, so they must be presented in the correct direction for amplification.
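
For example, a minimal oligo file following the format described above might look like the sketch below. The sequences shown are the widely used 341F/805R 16S rRNA primers, and the V3V4 name is chosen purely for illustration:

    forward CCTACGGGNGGCWGCAG
    reverse GACTACHVGGGTATCTAATCC
    primer CCTACGGGNGGCWGCAG GACTACHVGGGTATCTAATCC V3V4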

Input file for gene or genome sequences, and taxonomy

The genes and genomes against which the candidate primers are to be evaluated must be provided in FASTA formatted files. It is also possible to download them directly from the NCBI database using the PrimerEvalPy download module.

The taxonomy for each sequence can also be provided. This should be in a separate taxonomy file with the same name as the corresponding FASTA file. This contains one line per sequence, including its identifier (matching the one in the FASTA file), and the taxonomy itself, with each taxonomic level separated by semicolons. The user must specify the name for each taxonomic level to be read from the files, and all files must contain the same number of taxonomic levels.
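
As an illustration, an entry for a sequence whose FASTA identifier is ASV_0001 (an invented identifier) could look as follows, assuming seven semicolon-separated levels from domain to species and whitespace between the identifier and the lineage:

    ASV_0001    Bacteria;Firmicutes;Bacilli;Lactobacillales;Streptococcaceae;Streptococcus;Streptococcus mitis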

Primer coverage analysis procedure

To calculate the primer coverage measurements, as well as other functionality, we follow a series of steps shown in Fig.  1 , which will be explained in the following subsections.

Figure 1. Block diagram of the analysis process for testing a primer or primer pair against a specific database. It consists of a sequence quality control step, followed by an optional sequence grouping step at the taxonomic level, then a primer search step within the sequences, and finally a coverage metrics calculation step.

Step 1: Sequence quality control

The first step in both the analyze_ip and analyze_pp modules is a quality check of the sequences provided. This quality check involves the identification of any degenerate nucleotides that could potentially affect the subsequent analysis.

During this process, the modules actively search for nucleotides beyond the four basic bases (A, C, G, and T). If a non-standard nucleotide is detected, such as U (Uracil) found in RNA, it is clearly marked. While these unwanted nucleotides are flagged for user awareness, it is up to the user to decide what to do with them.

This quality control procedure ensures that the input data meets the required quality standards before the analyses are performed. It allows users to make informed decisions about the inclusion or exclusion of sequences based on their quality.

Step 2: Sequence grouping by taxonomic level

By default, PrimerEvalPy does not specify a taxonomy level for grouping sequences. Each sequence is therefore analysed individually and forms its own group, and the analysis determines whether each individual sequence is covered by the primer being evaluated.

However, a key feature of PrimerEvalPy is that it supports coverage analysis at different taxonomic levels. It also allows grouping by all possible clades, i.e., groups formed by a common ancestor and all its descendants. This concept is illustrated in the phylogenetic tree in Fig.  2 .

Figure 2. Example of a phylogenetic tree highlighting the clades, where node 5 represents the common ancestor of nodes 3 and 4, forming a clade that includes all three. Image by J.R. Hendricks [ 25 ], licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

To evaluate sequences at different taxonomic levels, it is essential to have the appropriate taxonomy file and to specify the names of the taxonomic levels included. This allows the package to group the sequences at the taxonomic level desired by the user. When a taxonomic level is specified, PrimerEvalPy will search for all taxa within it, i.e., all groups of sequences that share the same taxonomic classification up to that level. The sequences from each taxon form an analysis group.

Steps 3 and 4: Primer search in sequences and assessment of coverage metrics

When expressed as a percentage, coverage represents the proportion of target sequences in a given dataset that can be effectively amplified by a specific primer or primer pair. It quantifies the primer’s efficiency in capturing and amplifying the genetic material of interest within the sample.

The primer sequences provided in the oligo file contain the four nucleobases A, C, G and T, but may also contain degenerate bases (IUPAC codes). We have therefore used regular expressions (regex) to search for the primers, either individually or in pairs, within the gene or genome sequences. These expressions replace the degenerate bases with their possible corresponding nucleobases to ensure accurate matches within the sequences.

Furthermore, a maximum number of “mismatches” was allowed when searching for the primer within the sequence. To facilitate this, regex with fuzzy matching is used, meaning that some nucleotides in the sequence may not exactly match the corresponding nucleotides in the primer sequence. By default, no mismatches are allowed.
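
The following Python sketch (ours, not PrimerEvalPy's internal code) shows the general idea: each IUPAC degenerate base is expanded into a character class, and the third-party regex package's fuzzy-matching syntax caps the number of substitutions:

    import regex  # third-party package; unlike the standard re, it supports fuzzy matching

    # IUPAC degenerate bases mapped to the nucleobases they stand for
    IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
             "R": "[AG]", "Y": "[CT]", "S": "[CG]", "W": "[AT]",
             "K": "[GT]", "M": "[AC]", "B": "[CGT]", "D": "[AGT]",
             "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

    def primer_to_pattern(primer, mismatches=0):
        body = "".join(IUPAC[base] for base in primer.upper())
        # {s<=n} lets up to n positions differ (substitutions only)
        return regex.compile(f"(?:{body}){{s<={mismatches}}}")

    pattern = primer_to_pattern("CCTACGGGNGGCWGCAG", mismatches=1)
    match = pattern.search("ACCTACGGGAGGCAGCAGT")
    print(match.span() if match else "no hit")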

In addition, for primer pairs, the user can specify a minimum and maximum length of the amplified fragment between the forward and reverse primers.

Once the sequences amplified by the primer have been found and stored, coverage metrics are calculated. Primarily, the percentage of groups covered by the primer out of the total number of groups is calculated to determine the coverage of the primer. A group is considered to be covered if any of its sequences are found by the primer. If no taxonomic level was specified, which is the default approach, each sequence constitutes a group, so the coverage is the percentage of sequences covered by the primer. If a taxonomic level was specified, each group corresponds to a taxon. The most common is species level coverage, which is the percentage of species covered, that is, what percentage of species have at least one of their sequences amplified by the primer. There is also an option to obtain group coverage, which is the percentage of sequences within each group that are covered.
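
The coverage metric itself can be pictured in a few lines of Python; this is a simplified sketch of the definition above, assuming groups maps each group (a taxon, or a single sequence identifier in the default case) to its sequence identifiers and amplified holds the identifiers hit by the primer:

    def coverage(groups: dict[str, set[str]], amplified: set[str]) -> float:
        """Percentage of groups with at least one amplified sequence."""
        covered = sum(1 for seqs in groups.values() if seqs & amplified)
        return 100.0 * covered / len(groups)

    # Two of the three species below have at least one amplified sequence: 66.7%
    groups = {"S. mitis": {"seq1", "seq2"}, "S. oralis": {"seq3"}, "T. forsythia": {"seq4"}}
    print(round(coverage(groups, {"seq1", "seq3"}), 1))  # 66.7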

Download complete genomes from NCBI

PrimerEvalPy includes a complementary module that allows users to download complete genomes or genes from the NCBI databases, as shown in Fig.  3 . Although not a core feature, this option significantly enhances the capabilities of the tool and facilitates the analysis process.

Figure 3. Block diagram of the steps of the NCBI download module. The genome identifiers are divided into batches to download the genome sequences from NCBI step by step. Optionally, the taxonomy can also be retrieved and linked to the corresponding genomes.

Sequences are downloaded from the NCBI nucleotide database using the Entrez module of the Biopython package, which is a wrapper for the online search system of the same name provided by NCBI. To use this module effectively, users must use the accession identifiers used by NCBI. It also offers the option of downloading the relevant taxonomies, which enrich the dataset with essential contextual information.
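
The underlying Biopython call that such a download wraps looks roughly like the sketch below; the accession NC_000913.3 (the E. coli K-12 genome) and the email address are illustrative placeholders, and NCBI requires a real contact address:

    from Bio import Entrez, SeqIO

    Entrez.email = "you@example.org"  # NCBI requires a contact email for Entrez queries

    # Fetch one FASTA record from the NCBI nucleotide database
    with Entrez.efetch(db="nucleotide", id="NC_000913.3",
                       rettype="fasta", retmode="text") as handle:
        record = SeqIO.read(handle, "fasta")

    print(record.id, len(record.seq))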

As a practical case, PrimerEvalPy was used to test the most commonly used primers in the literature against two 16S rRNA gene oral databases containing bacteria and archaea. The article by Regueira et al. [ 14 ] provides a detailed analysis.

The bacterial dataset improved by our research group was the Escapa et al. [ 21 ] dataset, which contains a total of 223,143 amplicon sequence variants (ASVs) of FASTA-formatted 16S rRNA gene sequences, and a total of 769 oral bacterial species. In particular, sequences from the same hierarchy were simultaneously aligned using Clustal Omega against a set of Escherichia coli 16S rRNA gene sequences. This dataset is provided in the Supplementary information [see Additional file 1]. The archaeal dataset was generated by our research group from complete genomes of the human oral archaeal species from the NCBI nucleotide database. This included 2842 16S rRNA gene sequences and 196 archaeal species, and is provided in the Supplementary information [see Additional file 2].

A total of 456 individual primers were analysed with PrimerEvalPy at the variant and species level, including forward, reverse, and unknown primers. These are provided in the Supplementary information [see Additional file 3]. Of these, 356 targeted bacteria, 79 archaea, and 21 both (universal) according to the literature. However, we found that some primers at the species level covered a different domain than expected, as shown in Table 1 . Many primers that were thought to cover only bacteria turned out to cover both bacteria and archaea. In addition, 26 were found to have no coverage at all in the oral cavity. We also observed that the primers with the best coverage identified in the study were not among those commonly described in the oral microbiome literature.

Next, the primers with coverage at the species level \(\ge 75\%\) (148 bacterial and 65 archaeal primers) were selected to form valid primer pairs. All possible combinations of the forward and reverse primers were identified, resulting in a total of 4,638 primer pairs. These were again evaluated to find the best ones for the detection of oral bacteria and archaea.

It was discovered that the primer pairs with the highest coverage, as proposed in the literature, did not cover many oral species that were covered by other primer pairs constructed and evaluated in this study. Additionally, the primer pairs identified as the best by PrimerEvalPy did not align with those found to be the best in the literature.

PrimerEvalPy allows for the evaluation of primers and primer pairs using their coverage as a measure of their quality. Although there are several works in the literature that analyse primers in a similar way, they have disadvantages ranging from availability in Python to limitations in the analysis itself. Only PrimerEvalPy includes analysis of individual primers, analysis of primer pairs and analysis for different taxonomic ranks, i.e., taxonomic levels, on any database. Table 2 shows a comparison of the functionalities of PrimerEvalPy with other packages.

One such tool is the European Molecular Biology Open Software Suite [ 17 ], known as EMBOSS. This is only available for UNIX systems via the command line. It allows you to analyse a pair of primers on one or more sequences, taking into account mismatches. There are many tools that use EMBOSS, such as the Emboss module in Biopython [ 23 ]. This is a wrapper for the EMBOSS toolkit and does not add any functionality. Like EMBOSS, it does not support individual primer analysis, nor does it provide coverage information that needs to be calculated. It also does not include the analysis for different taxonomic levels.

Another tool that uses EMBOSS is the R package Metacoder [ 18 ]. It allows primer pair analysis using EMBOSS, extended with additional functionality: Metacoder adds analysis at different taxonomic levels and provides coverage measurements. However, it is only available for R, not for Python, and like EMBOSS it does not support individual primer analysis. It provides the start and end positions of each amplicon in the sequences, as well as their lengths, but not their averages. Also, as it is based on EMBOSS, it is not available for Windows.

Apart from the tools based on the EMBOSS suite, there is a web tool called TestPrime [ 19 ], which allows the analysis of one primer pair at a time, only on the provided SILVA databases (in silico PCR). Like the others, it allows primer pairs to be analysed with mismatches and gives coverage information. However, it is only available as a web tool, not for Python or R, and does not allow individual primer analysis. It provides the amplicon length, but not its average or the start and end positions. Also, primers cannot be analysed against arbitrary databases; there are only two to choose from.

Finally, the last tool analysed was PrimerTree [ 20 ], an R package that allows the analysis of a primer pair against a specific NCBI database using Clustal Omega. This tool analyses one primer pair at a time, allowing mismatches on the primers, and returns the number of alignments performed between the primer pair and the sequences. However, it can only be applied to the specified NCBI dataset and cannot be used to analyse other datasets. It provides the start and end positions of each amplicon in the sequences, as well as their lengths, but not the averages of these values. In addition, it does not provide coverage measurements and supports neither individual primer analysis nor analysis at different taxonomic levels.

PrimerEvalPy is the only tool that has all the desired features, as shown in Table 2 . Unlike all the other tools, it is the only one that allows the analysis of individual primers and calculates the average start and end positions of the primer in the sequences.

As validation, PrimerEvalPy was compared to Metacoder, the tool with the most functionality among those available in the literature. Since Metacoder does not include individual primer analysis, only three of the best primer pairs targeting bacteria and three of the best targeting archaea (according to PrimerEvalPy) were evaluated against the bacterial and archaeal databases, respectively. Both tools produced the same species-level coverage for each primer pair.

The PrimerEvalPy package allows the analysis of individual primers or primer pairs. Several measures are returned to help make an informed decision, and there are several options to fine-tune the analysis. Analysis is also available at different taxonomic levels, allowing researchers to explore the suitability of primers for specific ranks in the niche.

We believe that this tool can be of great value to researchers wishing to study niche diversity using high-throughput amplicon sequencing techniques. Users can compare large numbers of primers economically and rapidly, thereby reducing the number of primers that need to be evaluated in the laboratory. It also facilitates the seamless modification of primers derived from the existing literature, allowing subsequent evaluation for potential improvements.

The results obtained in the case study demonstrated the need for such a tool. They showed that some of the primer pairs with the highest coverage suggested by the literature did not match the best found with PrimerEvalPy. Furthermore, some of the primers studied did not have coverage in the oral cavity, highlighting the importance of a prior study focusing on the target niche.

Although there are many tools that address this problem of primer coverage analysis, many of them have several of the limitations mentioned above. With PrimerEvalPy, we aim to overcome these limitations and provide a useful and practical tool.

In conclusion, PrimerEvalPy is a fundamental tool for in-silico primer analysis prior to any sequencing process, thus helping to improve the quality and reliability of microbial diversity results in any ecosystem.

Availability and requirements

Project name: PrimerEvalPy

Project home page: https://gitlab.citius.usc.es/lara.vazquez/PrimerEvalPy

Operating system(s): Platform independent

Programming language: Python

Other requirements: Python 3.9 or higher

License: MIT License

Any restrictions to use by non-academics: None

Availability of data and materials

The datasets used or analysed in this study were obtained from the Regueira et al. [ 14 ] article and are available in this manuscript as Supplementary information.

Rajendhran J, Gunasekaran P. Microbial phylogeny and diversity: small subunit ribosomal RNA sequence analysis and beyond. Microbiol Res. 2011;166(2):99–110. https://doi.org/10.1016/j.micres.2010.02.003 .

Panzer K, Yilmaz P, Weiß M, Reich L, Richter M, Wiese J, et al. Identification of habitat-specific biomes of aquatic fungal communities using a comprehensive nearly full-length 18S rRNA dataset enriched with contextual data. PLoS ONE. 2015;10(7):e0134377. https://doi.org/10.1371/journal.pone.0134377 .

Ruegger PM, Clark RT, Weger JR, Braun J, Borneman J. Improved resolution of bacteria by high throughput sequence analysis of the rRNA internal transcribed spacer. J Microbiol Methods. 2014;105:82–7. https://doi.org/10.1016/j.mimet.2014.07.001 .

Hunt DE, Klepac-Ceraj V, Acinas SG, Gautier C, Bertilsson S, Polz MF. Evaluation of 23S rRNA PCR primers for use in phylogenetic studies of bacterial diversity. Appl Environ Microbiol. 2006;72(3):2221–5. https://doi.org/10.1128/aem.72.3.2221-2225.2006 .

Rhoads A, Au KF. PacBio sequencing and its applications. Genom Proteom Bioinform. 2015;13(5):278–89. https://doi.org/10.1016/j.gpb.2015.08.002 .

Yang N, Tian C, Lv Y, Hou J, Yang Z, Xiao X, et al. Novel primers for 16S rRNA gene-based archaeal and bacterial community analysis in oceanic trench sediments. Appl Microbiol Biotechnol. 2022;106(7):2795–809. https://doi.org/10.1007/s00253-022-11893-3 .

Gonzalez E, Pitre FE, Brereton NJB. ANCHOR: a 16S rRNA gene amplicon pipeline for microbial analysis of multiple environmental samples. Environ Microbiol. 2019;21(7):2440–68. https://doi.org/10.1111/1462-2920.14632 .

Miralles MM, Maestre-Carballa L, Lluesma-Gomez M, Martinez-Garcia M. High-throughput 16S rRNA sequencing to assess potentially active bacteria and foodborne pathogens: a case example in ready-to-eat food. Foods. 2019;8(10):480. https://doi.org/10.3390/foods8100480 .

Wang Y, Tian RM, Gao ZM, Bougouffa S, Qian PY. Optimal eukaryotic 18s and universal 16S/18S ribosomal RNA primers and their application in a study of symbiosis. PLoS ONE. 2014;9(3):e90053. https://doi.org/10.1371/journal.pone.0090053 .

Banos S, Lentendu G, Kopf A, Wubet T, Glöckner FO, Reich M. A comprehensive fungi-specific 18S rRNA gene sequence primer toolkit suited for diverse research issues and sequencing platforms. BMC Microbiol. 2018. https://doi.org/10.1186/s12866-018-1331-4 .

Beeck MOD, Lievens B, Busschaert P, Declerck S, Vangronsveld J, Colpaert JV. Comparison and validation of some ITS primer pairs useful for fungal metabarcoding studies. PLoS ONE. 2014;9(6):e97629. https://doi.org/10.1371/journal.pone.0097629 .

Toju H, Tanabe AS, Yamamoto S, Sato H. High-coverage ITS primers for the DNA-based identification of ascomycetes and basidiomycetes in environmental samples. PLoS ONE. 2012;7(7):e40863. https://doi.org/10.1371/journal.pone.0040863 .

Ferrer C, Colom F, Frasés S, Mulet E, Abad JL, Alió JL. Detection and Identification of Fungal Pathogens by PCR and by ITS2 and 5.8S Ribosomal DNA Typing in Ocular Infections. J Clin Microbiol. 2001;39(8):2873–9. https://doi.org/10.1128/jcm.39.8.2873-2879.2001 .

Regueira-Iglesias A, Vázquez-González L, Balsa-Castro C, Vila-Blanco N, Blanco-Pintos T, Tamames J, et al. In silico evaluation and selection of the best 16S rRNA gene primers for use in next-generation sequencing to detect oral bacteria and archaea. Microbiome. 2023;11(1):58. https://doi.org/10.1186/s40168-023-01481-6 .

Thijs S, Beeck MOD, Beckers B, Truyens S, Stevens V, Hamme JDV, et al. Comparative evaluation of four bacteria-specific primer pairs for 16S rRNA gene surveys. Front Microbiol. 2017. https://doi.org/10.3389/fmicb.2017.00494 .

Roggiani S, Zama D, D’Amico F, Rocca A, Fabbrini M, Totaro C, et al. Gut, oral, and nasopharyngeal microbiota dynamics in the clinical course of hospitalized infants with respiratory syncytial virus bronchiolitis. Front Cell Infect Microbiol. 2023. https://doi.org/10.3389/fcimb.2023.1193113 .

Rice P, Longden I, Bleasby A. EMBOSS: the European molecular biology open software suite. Trends Genet. 2000;16(6):276–7. https://doi.org/10.1016/s0168-9525(00)02024-2 .

Foster ZSL, Sharpton TJ, Grünwald NJ. Metacoder: an R package for visualization and manipulation of community taxonomic diversity data. PLoS Comput Biol. 2017;13(2):e1005404. https://doi.org/10.1371/journal.pcbi.1005404 .

Klindworth A, Pruesse E, Schweer T, Peplies J, Quast C, Horn M, et al. Evaluation of general 16S ribosomal RNA gene PCR primers for classical and next-generation sequencing-based diversity studies. Nucleic Acids Res. 2012;41(1): e1. https://doi.org/10.1093/nar/gks808 .

Cannon MV, Hester J, Shalkhauser A, Chan ER, Logue K, Small ST, et al. In silico assessment of primers for eDNA studies using PrimerTree and application to characterize the biodiversity surrounding the Cuyahoga River. Sci Rep. 2016. https://doi.org/10.1038/srep22908 .

Escapa IF, Huang Y, Chen T, Lin M, Kokaras A, Dewhirst FE, et al. Construction of habitat-specific training sets to achieve species-level assignment in 16S rRNA gene datasets. Microbiome. 2020. https://doi.org/10.1186/s40168-020-00841-w .

Dewhirst FE, Chen T, Izard J, Paster BJ, Tanner ACR, Yu WH, et al. The Human Oral Microbiome. J Bacteriol. 2010;192(19):5002–17. https://doi.org/10.1128/jb.00542-10 .

Cock PJA, Antao T, Chang JT, Chapman BA, Cox CJ, Dalke A, et al. Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics. 2009;25(11):1422–3. https://doi.org/10.1093/bioinformatics/btp163 .

Schloss PD, Westcott SL, Ryabin T, Hall JR, Hartmann M, Hollister EB, et al. Introducing mothur: open-source, platform-independent, community-supported software for describing and comparing microbial communities. Appl Environ Microbiol. 2009;75(23):7537–41. https://doi.org/10.1128/AEM.01541-09 .

Paleontological Research Institution. The Digital Atlas of Ancient Life. Available from: https://www.digitalatlasofancientlife.org/. Accessed 28 Nov 2023.

Acknowledgements

Not applicable.

Funding

This work was supported by the Instituto de Salud Carlos III (Spain) [PI21/00588]; the Xunta de Galicia - Consellería de Cultura, Educación e Universidade [ED431G-2019/04, GRC2021/48, GPC2020/27, ED481A-2021 to L.V.-G., IN606B-2023/005 to A.R.-I.]; and the European Union (European Regional Development Fund-ERDF).

Author information

Authors and Affiliations

Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS), Universidade de Santiago de Compostela, Rúa de Jenaro de la Fuente Domínguez, E15782, Santiago de Compostela, Spain

Lara Vázquez-González, Carlos Balsa-Castro, Nicolás Vila-Blanco, Inmaculada Tomás & María J. Carreira

Departamento de Electrónica e Computación, Escola Técnica Superior de Enxeñaría, Universidade de Santiago de Compostela, E15782, Santiago de Compostela, Spain

Nicolás Vila-Blanco & María J. Carreira

Oral Sciences Research Group, Special Needs Unit, Department of Surgery and Medical Surgical Specialities, School of Medicine and Dentistry, Universidade de Santiago de Compostela, E15782, Santiago de Compostela, Spain

Alba Regueira-Iglesias, Carlos Balsa-Castro & Inmaculada Tomás

Instituto de Investigación Sanitaria de Santiago de Compostela (IDIS), E15706, Santiago de Compostela, Spain

Lara Vázquez-González, Alba Regueira-Iglesias, Carlos Balsa-Castro, Nicolás Vila-Blanco, Inmaculada Tomás & María J. Carreira

Contributions

L.V.-G. and C.B.-C. conceived the experiments, L.V.-G. and N.V.-B. conducted the experiments, C.B.-C., A.R.-I., N.V.-B., I.T. and M.J.C. analysed the results. L.V.-G. and M.J.C. wrote and reviewed the first version of the manuscript, C.B.-C., N.V.-B., M.J.C. and I.T. critically reviewed the manuscript.

Corresponding authors

Correspondence to Lara Vázquez-González, Inmaculada Tomás or María J. Carreira.

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1: Oral-bacteria database of the 16S rRNA gene sequences used for the coverage analysis.

Additional file 2: Oral-archaea database of the 16S rRNA gene sequences used for the coverage analysis.

Additional file 3: Forward and reverse 16S rRNA gene primers evaluated in the study.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Vázquez-González, L., Regueira-Iglesias, A., Balsa-Castro, C. et al. PrimerEvalPy: a tool for in-silico evaluation of primers for targeting the microbiome. BMC Bioinformatics 25, 189 (2024). https://doi.org/10.1186/s12859-024-05805-7

Received : 29 November 2023

Accepted : 08 May 2024

Published : 14 May 2024

DOI : https://doi.org/10.1186/s12859-024-05805-7

  • Bioinformatics
  • 16S rRNA gene
  • Sequence analysis

ORIGINAL RESEARCH article

Evaluation of the Urban Living Lab in HEIs towards Education for Sustainable Development (e-ULL-HEIs) (provisionally accepted)

  • 1 University of Guayaquil, Ecuador
  • 2 Universitat Politecnica de Catalunya, Spain

The final, formatted version of the article will be published soon.

This study explores the implementation of Urban Living Labs (ULLs) in Higher Education Institutions (HEIs) to promote Education for Sustainable Development (ESD). It adopts a mixed-methods approach, combining a literature review, validation with experts in the field, and analysis of case studies. A structured evaluation tool is proposed based on three constructs, Synergy, Strategy and Pedagogy, which cover the essential characteristics of the three thematic axes (ULLs, ESD and HEIs) through seven indicators. This tool is applied to examine the effectiveness of ULLs in promoting sustainable practices within the university context. The results, validated through experts, exploratory factor analysis and Cronbach's alpha coefficient, demonstrate the reliability and consistency of the evaluative indicators, highlighting the crucial role of ULLs in the integration of sustainability into the curriculum, experiential learning, and social and community impact. This approach allowed the identification of successful practices and common challenges in the implementation of ULLs, as well as the development of a framework of indicators adapted to the specific needs of HEIs. The study concludes by emphasizing the transformative potential of ULLs in HEIs for advancing sustainable urban transitions, underscoring the need for robust evaluative tools to optimize the contribution of higher education to global sustainable development.

Keywords: Education for Sustainable Development (ESD), Higher Education Institutions (HEIs), Urban Living Labs (ULLs), assessment tool, urban innovation, interdisciplinarity, curriculum integration, experiential learning

Received: 04 Apr 2024; Accepted: 14 May 2024.

Copyright: © 2024 Morales, Segalás and Masseck. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Prof. Ivetheyamel Morales, University of Guayaquil, Guayaquil, Ecuador

  • Systematic Review
  • Open access
  • Published: 12 May 2024

Association between problematic social networking use and anxiety symptoms: a systematic review and meta-analysis

  • Mingxuan Du 1 ,
  • Chengjia Zhao 2 ,
  • Haiyan Hu 1 ,
  • Ningning Ding 1 ,
  • Jiankang He 1 ,
  • Wenwen Tian 1 ,
  • Wenqian Zhao 1 ,
  • Xiujian Lin 1 ,
  • Gaoyang Liu 1 ,
  • Wendan Chen 1 ,
  • ShuangLiu Wang 1 ,
  • Pengcheng Wang 3 ,
  • Dongwu Xu 1 ,
  • Xinhua Shen 4 &
  • Guohua Zhang 1  

BMC Psychology volume 12, Article number: 263 (2024)

A growing number of studies have reported that problematic social networking use (PSNU) is strongly associated with anxiety symptoms. However, because there are multiple anxiety subtypes, existing findings on the strength of this association vary widely, and no consensus has been reached. The current meta-analysis summarizes studies exploring the relationship between PSNU levels and anxiety symptoms, including generalized anxiety, social anxiety, attachment anxiety, and fear of missing out. A total of 172 articles comprising 209 independent studies were included in the meta-analysis, involving 252,337 participants from 28 countries. The results showed a moderately positive association between PSNU and generalized anxiety (GA), social anxiety (SA), attachment anxiety (AA), and fear of missing out (FoMO), respectively (GA: r = 0.388, 95% CI [0.362, 0.413]; SA: r = 0.437, 95% CI [0.395, 0.478]; AA: r = 0.345, 95% CI [0.286, 0.402]; FoMO: r = 0.496, 95% CI [0.461, 0.529]), and the moderating factors differed across anxiety subtypes. This study provides the first comprehensive estimate of the association of PSNU with multiple anxiety subtypes, which varies by time of measurement, region, gender, and measurement tool.

Introduction

Social networks are online platforms that allow users to create, share, and exchange information, encompassing text, images, audio, and video [ 1 ]. The use of social networks, a term covering various activities on these platforms, has been measured from angles such as frequency, duration, intensity, and addictive behavior, all indicative of the extent of social networking usage [ 2 ]. As of April 2023, there were 4.8 billion social network users globally, representing 59.9% of the world's population [ 3 ]. The usage of social networks is considered a normal behavior and a part of everyday life [ 4 , 5 ]. Although social networks offer convenience in daily life, excessive use can lead to PSNU [ 6 , 7 ], posing potential threats to mental health, particularly anxiety symptoms (Rasmussen et al., 2020). Empirical research has shown that anxiety symptoms, including generalized anxiety (GA), social anxiety (SA), attachment anxiety (AA), and fear of missing out (FoMO), are closely related to PSNU [ 8 , 9 , 10 , 11 , 12 ]. While some empirical studies have explored the relationship between PSNU and anxiety symptoms, their conclusions are not consistent. Some studies have found a significant positive correlation [ 13 , 14 , 15 ], while others have found no significant correlation [ 16 , 17 , 18 , 19 ]. Furthermore, the degree of correlation varies widely in existing research, with reported r-values ranging from 0.12 to 0.80 [ 20 , 21 ]. Therefore, a systematic meta-analysis is necessary to clarify the impact of PSNU on individual anxiety symptoms.

Previous research lacks a unified concept of PSNU, primarily due to differing theoretical interpretations by various authors, and the use of varied standards and diagnostic tools. Currently, this phenomenon is referred to by several terms, including compulsive social networking use, problematic social networking use, excessive social networking use, social networking dependency, and social networking addiction [ 22 , 23 , 24 , 25 , 26 ]. These conceptual differences hinder the development of a cohesive and systematic research framework, as it remains unclear whether these definitions and tools capture the same underlying construct [ 27 ]. To address this lack of uniformity, this paper will use the term “problematic use” to encompass all the aforementioned nomenclatures (i.e., compulsive, excessive, dependent, and addictive use).

Regarding the relationship between PSNU and anxiety symptoms, two main perspectives exist: the first suggests a positive correlation, while the second proposes a U-shaped relationship. The former perspective, advocating a positive correlation, aligns with the social cognitive theory of mass communication. It posits that PSNU can reinforce certain cognitions, emotions, attitudes, and behaviors [ 28 , 29 ], potentially elevating individuals’ anxiety levels [ 30 ]. Additionally, the cognitive-behavioral model of pathological use, a primary framework for explaining factors related to internet-based addictions, indicates that psychiatric symptoms like depression or anxiety may precede internet addiction, implying that individuals experiencing anxiety may turn to social networking platforms as a coping mechanism [ 31 ]. Empirical research also suggests that highly anxious individuals prefer computer-mediated communication due to the control and social liberation it offers and are more likely to have maladaptive emotional regulation, potentially leading to problematic social network service use [ 32 ]. Turning to the alternate perspective, it proposes a U-shaped relationship as per the digital Goldilocks hypothesis. In this view, moderate social networking usage is considered beneficial for psychosocial adaptation, providing individuals with opportunities for social connection and support. Conversely, both excessive use and abstinence can negatively impact psychosocial adaptation [ 33 ]. In summary, both perspectives offer plausible explanations.

Incorporating findings from previous meta-analyses, we identified seven systematic reviews and two meta-analyses that investigated the association between PSNU and anxiety. The results of these meta-analyses indicated a significant positive correlation between PSNU and anxiety (ranging from 0.33 to 0.38). However, it is evident that these previous meta-analyses had certain limitations. Firstly, they focused only on specific subtypes of anxiety; secondly, they were limited to adolescents and emerging adults in terms of age. In summary, this systematic review aims to ascertain which theoretical perspective more effectively explains the relationship between PSNU and anxiety, addressing the gaps in previous meta-analyses. Additionally, the association between PSNU and anxiety could be moderated by various factors. Drawing from a broad research perspective, any individual study is influenced by researcher-specific designs and associated sample estimates. These may lead to bias compared to the broader population. Considering the selection criteria for moderating variables in empirical studies and meta-analyses [ 34 , 35 ], the heterogeneity of findings on problematic social network usage and anxiety symptoms could be driven by divergence in sample characteristics (e.g., gender, age, region) and research characteristics (measurement instrument of study variables). Since the 2019 coronavirus pandemic, heightened public anxiety may be attributed to the fear of the virus or heightened real life stress. The increased use of electronic devices, particularly smartphones during the pandemic, also instigates the prevalence of problematic social networking. Thus, our analysis focuses on three moderators: sample characteristics (participants’ gender, age, region), measurement tools (for PSNU and anxiety symptoms) and the time of measurement (before COVID-19 vs. during COVID-19).

The present study was conducted in accordance with the 2020 statement on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) [ 36 ]. To facilitate transparency and to avoid unnecessary duplication of research, this study was registered on PROSPERO, and the number is CRD42022350902.

Literature search

Studies on the relationship between PSNU and anxiety symptoms published from 2000 to 2023 were retrieved from seven databases: China National Knowledge Infrastructure (CNKI), Wanfang Data, Chongqing VIP Information Co. Ltd. (VIP), Web of Science, ScienceDirect, PubMed, and PsycARTICLES. The search strings consisted of (a) anxiety symptoms, (b) social network, and (c) problematic use. As shown in Table 1, the keywords for anxiety were: anxiety, generalized anxiety, social anxiety, attachment anxiety, fear of missing out, and FoMO. The keywords for social network were: social network, social media, social networking site, Instagram, and Facebook. The keywords for addiction were: addiction, dependence, problem/problematic use, and excessive use. The search deadline was March 19, 2023. A total of 2078 records were initially retrieved.

Inclusion and exclusion criteria

Retrieved studies were eligible for the present meta-analysis if they met the following inclusion criteria: (a) the study provided Pearson correlation coefficients measuring the relationship between PSNU and anxiety symptoms; (b) the study reported the sample size and the measurement instruments for the variables; (c) the study was written in English or Chinese; (d) the study provided sufficient statistics to calculate the effect sizes; (e) effect sizes were extracted from independent samples; if multiple independent samples were investigated in the same study, they were coded separately, and if the study was longitudinal, it was coded by the first measurement. In addition, studies were excluded if they: (a) examined non-problematic social network use; (b) had an abnormal sample population; (c) reported results from a sample already included in another study; or (d) were case reports or review articles. Two evaluators with master's degrees independently assessed the eligibility of the articles. A third evaluator with a PhD examined the results and resolved dissenting views.

Data extraction and quality assessment

Two evaluators independently coded the selected articles according to the following characteristics: literature information, time of measurement (before COVID-19 vs. during COVID-19), sample source (developed country vs. developing country), sample size, proportion of males, mean age, type of anxiety, and measurement instruments for PSNU and anxiety symptoms. The following principles were adhered to in the coding process: (a) effect sizes were extracted from independent samples; if multiple independent samples were investigated in the same study, they were coded separately, and if the study was longitudinal, it was coded by the first measurement; (b) if multiple studies used the same data, the one with the most complete information was selected; (c) if studies reported t or F values rather than r, the formulas \( r=\sqrt{\frac{t^{2}}{t^{2}+df}}\) and \( r=\sqrt{\frac{F}{F+df_{e}}}\) were used to convert them into r values [37, 38]. Additionally, if a study only reported the correlation matrix between the dimensions of PSNU and anxiety symptoms, the formula \( r_{xy}=\frac{\sum r_{x_{i}y_{j}}}{\sqrt{n+n(n-1)\bar{r}_{x_{i}x_{j}}}\sqrt{m+m(m-1)\bar{r}_{y_{i}y_{j}}}}\) was used to synthesize the r values [39], where n and m are the numbers of dimensions of variables x and y, respectively, and \( \bar{r}_{x_{i}x_{j}}\) and \( \bar{r}_{y_{i}y_{j}}\) are the means of the correlation coefficients among the dimensions of variable x and of variable y, respectively.
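
The conversion formulas above translate directly into code. The Python sketch below mirrors them one-to-one; the function names are ours, and the composite-r helper assumes the summed cross-correlations and the mean within-construct correlations have already been extracted from the reported matrix.

```python
import math

def r_from_t(t: float, df: int) -> float:
    """r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def r_from_f(f: float, df_error: int) -> float:
    """r = sqrt(F / (F + df_e)), for single-df numerator effects."""
    return math.sqrt(f / (f + df_error))

def composite_r(sum_cross_r: float, n: int, m: int,
                mean_r_xx: float, mean_r_yy: float) -> float:
    """Construct-level r from an n x m dimension-level correlation
    matrix: sum_cross_r is the sum of all cross-correlations, and
    mean_r_xx / mean_r_yy are the mean within-construct correlations."""
    return sum_cross_r / (math.sqrt(n + n * (n - 1) * mean_r_xx)
                          * math.sqrt(m + m * (m - 1) * mean_r_yy))

print(round(r_from_t(3.2, 98), 3))  # -> 0.308
```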

Literature quality was assessed using a meta-analysis quality evaluation scale [ 40 ]. The quality of the post-screening studies was scored on five dimensions: sampling method, efficiency of sample collection, level of publication, and the reliability of the PSNU and anxiety symptom measurement instruments (one dimension each). The total score ranged from 0 to 10; higher scores indicated better literature quality.

Data analysis

All analyses were performed using Comprehensive Meta-Analysis 3.3 (CMA 3.3). Pearson's product-moment coefficient r was selected as the effect size index. First, each correlation coefficient was converted with Fisher's transformation, \( Z=\frac{1}{2}\ln\left(\frac{1+r}{1-r}\right)\). The standard error was then calculated as \( SE=\sqrt{\frac{1}{n-3}}\). Finally, the summary r was obtained by back-transforming the pooled value with \( r=\frac{e^{2Z}-1}{e^{2Z}+1}\), giving a comprehensive measure of the relationship between PSNU and anxiety symptoms [37, 41].
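
For concreteness, the sketch below walks through the same transform, pool, and back-transform sequence. Fixed-effect inverse-variance weighting (w = n - 3) is used to keep the example short; the study itself used a random-effects model, which additionally estimates a between-study variance component.

```python
import math

def fisher_z(r: float) -> float:
    """Fisher's Z = 0.5 * ln((1 + r) / (1 - r))."""
    return 0.5 * math.log((1 + r) / (1 - r))

def pool_fixed(rs: list[float], ns: list[int]) -> float:
    """Inverse-variance pooled r on the Fisher-Z scale (fixed effect)."""
    zs = [fisher_z(r) for r in rs]
    ws = [n - 3 for n in ns]  # weight = 1 / SE^2 = n - 3
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    # back-transform: r = (e^{2Z} - 1) / (e^{2Z} + 1)
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)

print(round(pool_fixed([0.30, 0.45, 0.38], [120, 300, 210]), 3))  # ~0.40
```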

Although the effect sizes estimated by the included studies may be similar, given the real differences between studies (e.g., region and gender), a random effects model was the better choice for the current meta-analysis. The heterogeneity of the included effect sizes was tested for significance with Cochran's Q test and quantified with the I² statistic [ 42 ]. A significant result (Q test p-value < 0.05, I² > 75%) indicates that the results of individual studies differ substantially from the overall effect size; otherwise, there are no meaningful differences between the studies and the overall effect size. Significant heterogeneity also tends to indicate the presence of potential moderating variables. Subgroup analysis and meta-regression analysis were used to examine the moderating effects of categorical and continuous variables, respectively.
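
The Q and I² computations can be sketched as follows, again on Fisher-Z effects with inverse-variance weights; the input values are illustrative.

```python
def heterogeneity(zs: list[float], ns: list[int]) -> tuple[float, float]:
    """Cochran's Q and I^2 (%) for Fisher-Z effects."""
    ws = [n - 3 for n in ns]  # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_bar) ** 2 for w, z in zip(ws, zs))
    df = len(zs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([0.31, 0.48, 0.40, 0.22], [120, 300, 210, 150])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")  # high I^2 -> random effects
```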

Funnel plots, the fail-safe number (Nfs) and Egger's linear regression were used to evaluate publication bias [ 43 , 44 , 45 ]. The likelihood of publication bias was considered low if the intercept obtained from Egger's regression was not significant. A larger Nfs indicates a lower risk of publication bias; if Nfs < 5k + 10 (k being the number of original studies), publication bias is a concern [ 46 ]. When Egger's regression was significant, Duval and Tweedie's trim-and-fill procedure was performed to correct the effect size. If the corrected effect size did not change significantly, it was assumed that there was no serious publication bias [ 47 ].
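
Both checks are straightforward to compute, as the sketch below shows: Egger's test is an ordinary least-squares regression of the standardised effect on precision, and Rosenthal's fail-safe number follows from the combined z-scores. The input values are illustrative.

```python
def egger_intercept(effects: list[float], ses: list[float]) -> float:
    """OLS intercept from regressing effect/SE on 1/SE (Egger's test).
    An intercept far from zero suggests funnel-plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1 / s for s in ses]
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
             / sum((xi - x_bar) ** 2 for xi in x))
    return y_bar - slope * x_bar

def fail_safe_n(z_scores: list[float]) -> float:
    """Rosenthal's Nfs: null studies needed to push p above .05
    (2.706 = 1.645^2, the one-tailed 5% criterion)."""
    return sum(z_scores) ** 2 / 2.706 - len(z_scores)

print(fail_safe_n([2.1, 3.4, 2.8, 1.9]))  # compare against 5k + 10
```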

A significance level of P  < 0.05 was deemed applicable in this study.

Sample characteristics

The PRISMA search process is depicted in Fig. 1. The database search yielded 2078 records. After removing duplicate records and screening titles and abstracts, the full texts were further evaluated. Ultimately, 172 records fit the inclusion criteria, yielding 209 independent effect sizes. The present meta-analysis included 68 studies on generalized anxiety, 44 on social anxiety, 22 on attachment anxiety, and 75 on fear of missing out. The characteristics of the selected studies are summarized in Table 2. The majority of the sample groups were adults. Quality scores for the selected studies ranged from 0 to 10, with only 34 effect sizes below the theoretical mean, indicating high quality for the included studies. The included literature most often used the BSMAS to measure PSNU, the DASS-21-A to measure GA, the IAS to measure SA, the ECR to measure AA, and the FoMOS to measure FoMO.

Fig. 1 Flow chart of the search and selection strategy

Overall analysis, homogeneity tests and publication bias

As shown in Table 3, there was significant heterogeneity between PSNU and all four anxiety symptoms (GA: Q = 1623.090, I² = 95.872%; SA: Q = 1396.828, I² = 96.922%; AA: Q = 264.899, I² = 92.072%; FoMO: Q = 1847.110, I² = 95.994%), so a random effects model was chosen. The results of the random effects model indicate a moderate positive correlation between PSNU and anxiety symptoms (GA: r = 0.350, 95% CI [0.323, 0.378]; SA: r = 0.390, 95% CI [0.347, 0.431]; AA: r = 0.345, 95% CI [0.286, 0.402]; FoMO: r = 0.496, 95% CI [0.461, 0.529]).

Figure 2 shows the funnel plots of the relationship between PSNU and anxiety symptoms. The funnel plots for the relationships between PSNU and GA and between PSNU and SA did not appear symmetrical, and Egger's regression results likewise indicated possible publication bias (t = 3.775, p < 0.001; t = 2.309, p < 0.05). Therefore, the fail-safe number (Nfs) and the trim-and-fill method were used for further examination and correction. The Nfs values for PSNU and GA and for PSNU and SA were 4591 and 7568, respectively, both much larger than the threshold of 5k + 10. After the trim-and-fill method, 14 effect sizes were added to the right side of the funnel plot (Fig. 2a), changing the correlation coefficient between PSNU and GA to r = 0.388 (95% CI [0.362, 0.413]); 10 effect sizes were added to the right side of the funnel plot (Fig. 2b), changing the correlation coefficient between PSNU and SA to r = 0.437 (95% CI [0.395, 0.478]). The correlation coefficients did not change substantially, indicating no serious publication bias in the relationship between PSNU and these two anxiety symptoms (GA and SA).

Fig. 2 Funnel plots of the relationship between PSNU and anxiety symptoms. Note: black dots indicate studies added by the trim-and-fill method; (a) PSNU and GA; (b) PSNU and SA; (c) PSNU and AA; (d) PSNU and FoMO

Sensitivity analyses

Initially, the findings obtained through the one-study-removed approach indicated that the heterogeneity in the relationships between PSNU and anxiety symptoms was not attributable to any individual study. Nevertheless, sensitivity analysis should also be performed based on literature quality [ 223 ], since low-quality literature could affect the stability of the results. For the relationship between PSNU and GA, the 10 effect sizes below the theoretical mean quality score were excluded and the estimate recalculated (r = 0.402, 95% CI [0.375, 0.428]); for PSNU and SA, excluding 8 such effect sizes gave r = 0.431 (95% CI [0.387, 0.472]); for PSNU and AA, excluding 5 gave r = 0.367 (95% CI [0.298, 0.433]); and for PSNU and FoMO, excluding 11 gave r = 0.508 (95% CI [0.470, 0.544]). The revised estimates indicate that the meta-analysis results were stable.
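
A minimal version of the one-study-removed check might look like the following: re-pool the effect once per omitted study and inspect the range of the k re-estimates, a narrow range indicating that no single study drives the result. Fixed-effect pooling is used for brevity; values are illustrative.

```python
import math

def pool(rs: list[float], ns: list[int]) -> float:
    """Fixed-effect pooled r via the Fisher-Z transform."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    ws = [n - 3 for n in ns]
    z = sum(w * zi for w, zi in zip(ws, zs)) / sum(ws)
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def leave_one_out(rs: list[float], ns: list[int]) -> list[float]:
    """Pooled estimates with each study removed in turn."""
    return [pool(rs[:i] + rs[i + 1:], ns[:i] + ns[i + 1:])
            for i in range(len(rs))]

loo = leave_one_out([0.30, 0.45, 0.38, 0.52], [120, 300, 210, 90])
print(f"{min(loo):.3f} to {max(loo):.3f}")  # narrow range -> stable
```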

Moderator analysis

The impact of moderator variables on the relation between PSNU and GA

The results of the subgroup analyses and meta-regressions are shown in Table 4. The time of measurement significantly moderated the correlation between PSNU and GA (Q between = 19.268, df = 2, p < 0.001). The relation between the two variables was significantly higher during COVID-19 (r = 0.392, 95% CI [0.357, 0.425]) than before COVID-19 (r = 0.270, 95% CI [0.227, 0.313]) or when the measurement time was uncertain (r = 0.352, 95% CI [0.285, 0.415]).
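
As an aside on mechanics, a fixed-effect version of such a subgroup test can be sketched as follows: pool each subgroup on the Fisher-Z scale, then measure how far the subgroup estimates deviate from the overall estimate (Q between, chi-square distributed with one fewer degree of freedom than the number of groups). The grouping and values below are illustrative, and CMA's random-effects implementation weights differently.

```python
def pooled_z(zs: list[float], ws: list[float]) -> float:
    return sum(w * z for w, z in zip(ws, zs)) / sum(ws)

def q_between(groups: dict) -> float:
    """groups maps a label to (Fisher-Z effects, sample sizes).
    Returns fixed-effect Q between (df = number of groups - 1)."""
    all_z, all_w, stats = [], [], {}
    for label, (zs, ns) in groups.items():
        ws = [n - 3 for n in ns]
        stats[label] = (pooled_z(zs, ws), sum(ws))
        all_z += zs
        all_w += ws
    z_overall = pooled_z(all_z, all_w)
    return sum(w_g * (z_g - z_overall) ** 2
               for z_g, w_g in stats.values())

groups = {"during": ([0.42, 0.39, 0.45], [200, 150, 180]),
          "before": ([0.27, 0.30], [220, 160])}
print(round(q_between(groups), 2))  # compare to chi-square, df = 1
```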

The moderating effect of the PSNU measurement was significant ( Q between = 6.852, df  = 1, p  = 0.009). The relation was significantly higher when PSNU was measured with the BSMAS ( r  = 0.373, 95% CI [0.341, 0.404]) compared to others ( r  = 0.301, 95% CI [0.256, 0.344]).

The moderating effect of the GA measurement was significant (Q between = 60.061, df = 5, p < 0.001). Specifically, when GA was measured by the GAD (r = 0.398, 95% CI [0.356, 0.438]) or the DASS-21-A (r = 0.433, 95% CI [0.389, 0.475]), a moderate positive correlation was observed. However, the correlation was weaker when GA was measured using the STAI (r = 0.232, 95% CI [0.187, 0.276]).

For the relation between PSNU and GA, the moderating effects of region, gender and age were not significant.

The impact of moderator variables on the relation between PSNU and SA

The effects of the moderating variables on the relation between PSNU and SA are shown in Table 5. The results revealed that gender moderated the association between the two variables (b = 0.601, 95% CI [0.041, 1.161], Q model (1, k = 41) = 4.705, p = 0.036).

For the relation between PSNU and SA, the moderating effects of time of measurement, region, measurement of PSNU and SA, and age were not significant.

The impact of moderator variables on the relation between PSNU and AA

The effects of the moderating variables on the relation between PSNU and AA are shown in Table 6. Region significantly moderated the correlation between PSNU and AA (Q between = 6.410, df = 2, p = 0.041): the correlation was significantly higher in developing countries (r = 0.378, 95% CI [0.304, 0.448]) than in developed countries (r = 0.242, 95% CI [0.162, 0.319]).

The moderating effect of the PSNU measurement was significant (Q between = 6.852, df = 1, p = 0.009). Specifically, when PSNU was measured by the GPIUS-2 (r = 0.484, 95% CI [0.200, 0.692]) or the PMSMUAQ (r = 0.443, 95% CI [0.381, 0.501]), a moderate positive correlation was observed. However, the correlation was weaker when PSNU was measured using the BSMAS (r = 0.248, 95% CI [0.161, 0.331]) or other instruments (r = 0.313, 95% CI [0.250, 0.372]).

The moderating effect of the AA measurement was significant ( Q between = 17.283, df  = 2, p  < 0.001). The correlation was significantly higher when measured using the ECR ( r  = 0.386, 95% CI [0.338, 0.432]) compared to the RQ ( r  = 0.200, 95% CI [0.123, 0.275]).

For the relation between PSNU and AA, the moderating effects of time of measurement, gender, and age were not significant.

The impact of moderator variables on the relation between PSNU and FoMO

The effects of the moderating variables on the relation between PSNU and FoMO are shown in Table 7. The moderating effect of the FoMO measurement was significant (Q between = 8.170, df = 2, p = 0.017). The 'others' category was excluded because it contained only one sample. Specifically, when FoMO was measured using the FoMOS-MSME (r = 0.630, 95% CI [0.513, 0.725]), the strongest positive correlation was observed; the correlation was weaker when FoMO was measured using the FoMOS (r = 0.472, 95% CI [0.432, 0.509]) or the T-S FoMOS (r = 0.557, 95% CI [0.463, 0.639]).

For the relationship between PSNU and FoMO, the moderating effects of time of measurement, region, measurement of PSNU, gender and age were not significant.

Through systematic review and meta-analysis, this study established a positive correlation between PSNU and anxiety symptoms (i.e., generalized anxiety, social anxiety, attachment anxiety, and fear of missing out), confirming a linear relationship and partially supporting the Social Cognitive Theory of Mass Communication [ 28 ] and the Cognitive Behavioral Model of Pathological Use [ 31 ]. Specifically, a significant positive correlation between PSNU and GA was observed, implying that GA sufferers might resort to social networks for validation or as an escape from reality in an attempt to alleviate their anxiety. Similarly, the meta-analysis demonstrated a strong positive correlation between PSNU and SA, suggesting a preference for computer-mediated communication among those with high social anxiety due to the sense of control and liberation offered by social networks. This preference is often accompanied by maladaptive emotional regulation, predisposing them to problematic use. For AA, a robust positive correlation with PSNU was found, indicating a higher propensity for such use among individuals with attachment anxiety. Notably, the study identified the strongest correlation in the context of FoMO. FoMO's significant association with PSNU is multifaceted, stemming from the real-time nature of social networks, which engenders a continuous concern about missing crucial updates or events. This drives frequent engagement with social networks, thereby establishing a direct link to problematic usage patterns. Additionally, the feedback loops of social networks amplify this effect, intensifying FoMO. The culture of social comparison on these platforms further exacerbates FoMO, as users frequently compare their lives with others' selectively curated portrayals, increasing both their social networking usage frequency and their pursuit of social validation. Furthermore, the integral role of social networks in modern life broadens FoMO's scope, encompassing anxieties about staying informed and connected.

The notable correlation between FoMO and PSNU can be comprehensively understood through various perspectives. FoMO is inherently linked to the real-time nature of social networks, which cultivates an ongoing concern about missing significant updates or events in one’s social circle [ 221 ]. This anxiety prompts frequent engagement with social network, leading to patterns of problematic use. Moreover, the feedback loops in social network algorithms, designed to enhance user engagement, further intensify this fear [ 224 ]. Additionally, social comparison, a common phenomenon on these platforms, exacerbates FoMO as users continuously compare their lives with the idealized representations of others, amplifying feelings of missing out on key social experiences [ 225 ]. This behavior not only increases social networking usage but also is closely linked to the quest for social validation and identity construction on these platforms. The extensive role of social network in modern life further amplifies FoMO, as these platforms are crucial for information exchange and maintaining social ties. FoMO thus encompasses more than social concerns, extending to anxieties about staying informed with trends and dynamics within social networks [ 226 ]. The multifaceted nature of FoMO in relation to social network underscores its pronounced correlation with problematic social networking usage. In essence, the combination of social network’s intrinsic characteristics, psychological drivers of user behavior, the culture of social comparison, and the pervasiveness of social network in everyday life collectively make FoMO the most pronouncedly correlated anxiety type with PSNU.

Additionally, we conducted subgroup analyses on the timing of measurement (before COVID-19 vs. during COVID-19), measurement tools (for PSNU and anxiety symptoms), sample characteristics (participants’ region), and performed a meta-regression analysis on gender and age in the context of PSNU and anxiety symptoms. It was found that the timing of measurement, tools used for assessing PSNU and anxiety, region, and gender had a moderating effect, whereas age did not show a significant moderating impact.

Firstly, the relationship between PSNU and anxiety symptoms was significantly higher during the COVID-19 period than before, especially between PSNU and GA. However, the moderating effect of measurement timing was not significant in the relationship between PSNU and other types of anxiety. This could be attributed to the increased uncertainty and stress during the pandemic, leading to heightened levels of general anxiety [ 227 ]. The overuse of social network for information seeking and anxiety alleviation might have paradoxically exacerbated anxiety symptoms, particularly among individuals with broad future-related worries [ 228 ]. While the COVID-19 pandemic altered the relationship between PSNU and GA, its impact on other types of anxiety (such as SA and AA) may not have been significant, likely due to these anxiety types being more influenced by other factors like social skills and attachment styles, which were minimally impacted by the epidemic.

Secondly, the observed variance in the relationship between PSNU and AA across different economic contexts, notably between developing and developed countries, underscores the multifaceted influence of socio-economic, cultural, and technological factors on this dynamic. The amplified connection in developing countries may be attributed to greater socio-economic challenges, distinct cultural norms regarding social support and interaction, rising social network penetration, especially among younger demographics, and technological disparities influencing accessibility and user experience [ 229 , 230 ]. Moreover, the role of social network as a coping mechanism for emotional distress, potentially fostering insecure attachment patterns, is more pronounced in these settings [ 231 ]. These findings highlight the necessity of considering contextual variations in assessing the psychological impacts of social network, advocating for a nuanced understanding of how socio-economic and cultural backgrounds mediate the relationship between PSNU and mental health outcomes [ 232 ]. Additionally, the relationship between PSNU and other types of anxiety (such as GA and SA) presents uniform characteristics across different economic contexts.

Thirdly, the significant moderating effects of measurement tools on the correlation between PSNU and various forms of anxiety, including GA and AA, are crucial for interpreting the findings. Specifically, the study reveals that the Bergen Social Media Addiction Scale (BSMAS) yields a stronger correlation between PSNU and GA than other tools. Similarly, for AA, the Generalized Problematic Internet Use Scale 2 (GPIUS-2) and the Problematic Mobile Social Media Usage Assessment Questionnaire (PMSMUAQ) show a more pronounced correlation than the BSMAS or other instruments; for SA and FoMO, however, the PSNU instrument does not significantly moderate the correlation. PSNU measurement tools typically contain an emotional-change dimension. SA and FoMO, due to their specific conditional-stimulus triggers and their correlation with social networks [ 233 , 234 ], are likely to yield more consistent scores on this dimension, while GA and AA may be less reliable given their lower sensitivity to specific conditional stimuli. Consequently, the moderating effects of PSNU measurements vary across anxiety symptoms. Regarding the measurement tools for anxiety, different scales exhibit varying degrees of sensitivity in detecting the relationship with PSNU. The Generalized Anxiety Disorder scale (GAD) and the anxiety subscale of the Depression Anxiety Stress Scales (DASS-21-A) are more effective in illustrating a strong relationship between GA and PSNU than the State-Trait Anxiety Inventory (STAI). In the case of AA, the Experiences in Close Relationship Scale (ECR) provides a more substantial correlation than the Relationship Questionnaire (RQ). Furthermore, for FoMO, the FoMO Measurement Scale in the Mobile Social Media Environment (FoMOS-MSME) is more indicative of a strong relationship with PSNU than the standard FoMOS or the T-S FoMOS. These findings underscore the importance of selecting appropriate measurement tools in research. Different tools, owing to their unique design, focus, and sensitivity, can reveal varying degrees of correlation between PSNU and anxiety symptoms. This highlights the need for careful consideration of tool characteristics and their potential impact on research outcomes. It also cautions against drawing direct comparisons between studies without acknowledging the variance introduced by different measurement instruments.

Fourthly, gender played a significant moderating role in the relationship between PSNU and SA, with the association particularly pronounced in samples with a higher proportion of females. Women tend to engage more actively and emotionally with social networks, potentially leading to an increased dependency on these platforms when confronting social anxiety [ 235 ]. This intensified use might amplify the association between PSNU and SA. Societal and cultural pressures, especially those related to appearance and social status, are known to disproportionately affect women, possibly exacerbating their experience of social anxiety and prompting a greater reliance on social networks for validation and support [ 236 ]. Furthermore, women's propensity to seek emotional support and express themselves on social networking platforms [ 237 ] could strengthen this link, particularly in the context of managing social anxiety. Consequently, the observed gender differences in the relationship between PSNU and SA underscore the importance of considering gender-specific dynamics and cultural influences in psychological research on social network use. In addition, gender consistency was observed in the associations between PSNU and the other types of anxiety, indicating no significant gender disparities there.

Fifthly, the absence of a significant moderating effect of age on the relationship between PSNU and various forms of anxiety suggests a pervasive influence of social network across different age groups. This finding indicates that the impact of PSNU on anxiety is relatively consistent, irrespective of age, highlighting the universal nature of social network’s psychological implications [ 238 ]. Furthermore, this uniformity suggests that other factors, such as individual psychological traits or socio-cultural influences, might play a more crucial role in the development of anxiety related to social networking usage than age [ 239 ]. The non-significant role of age also points towards a potential generational overlap in social networking usage patterns and their psychological effects, challenging the notion that younger individuals are uniquely susceptible to the adverse effects of social network on mental health [ 240 ]. Therefore, this insight necessitates a broader perspective in understanding the dynamics of social network and mental health, one that transcends age-based assumptions.

Limitations

There are some limitations to this research. First, most of the included studies were cross-sectional surveys, making it difficult to infer causality; longitudinal data will be needed to evaluate causal interactions in the future. Second, considerable heterogeneity was found in the estimated results; although it can be partially explained by differences in study design (e.g., time of measurement, region, gender, and measurement tools), it introduces some uncertainty into the aggregation and generalization of the estimates. Third, most studies were based on Asian samples, which limits the generality of the results. Fourth, to minimize potential sources of heterogeneity, some less frequently used measurement tools were not included in the classification of measurement tools, which may have some impact on the heterogeneity interpretation. Finally, since most of the included studies used self-report scales, the results may deviate from the actual situation to some extent.

This meta-analysis quantified the correlations between PSNU and four specific types of anxiety symptoms (i.e., generalized anxiety, social anxiety, attachment anxiety, and fear of missing out). The results revealed a significant moderate positive association between PSNU and each of these anxiety symptoms. Furthermore, subgroup analyses and meta-regression analyses indicated that gender, region, time of measurement, and measurement instrument significantly influenced the relationship between PSNU and specific anxiety symptoms. Specifically, the time of measurement and the GA measurement tool significantly influenced the relationship between PSNU and GA; gender significantly influenced the relationship between PSNU and SA; region, the PSNU measurement tool, and the AA measurement tool all significantly influenced the relationship between PSNU and AA; and the FoMO measurement tool significantly influenced the relationship between PSNU and FoMO. In light of these findings, prevention of and intervention in PSNU and anxiety symptoms are important.

Data availability

The datasets are available from the corresponding author on reasonable request.

Abbreviations

  • PSNU: Problematic social networking use
  • GA: Generalized anxiety
  • SA: Social anxiety
  • AA: Attachment anxiety
  • FoMO: Fear of missing out
  • BSMAS: Bergen Social Media Addiction Scale
  • Facebook Addiction Scale
  • Facebook Intrusion Questionnaire
  • GPIUS-2: Generalized Problematic Internet Use Scale 2
  • PMSMUAQ: Problematic Mobile Social Media Usage Assessment Questionnaire
  • Social Network Addiction Tendency Scale
  • BSI: Brief Symptom Inventory
  • DASS-21-A: The anxiety subscale of the Depression Anxiety Stress Scales
  • GAD: Generalized Anxiety Disorder scale
  • HADS-A: The anxiety subscale of the Hospital Anxiety and Depression Scale
  • STAI: State-Trait Anxiety Inventory
  • IAS: Interaction Anxiousness Scale
  • LSAS: Liebowitz Social Anxiety Scale
  • SAS-SMU: Social Anxiety Scale for Social Media Users
  • Social Anxiety Scale for Adolescents
  • Social Anxiety Subscale of the Self-Consciousness Scale
  • SIAS: Social Interaction Anxiety Scale
  • ECR: Experiences in Close Relationship Scale
  • RQ: Relationship Questionnaire
  • FoMOS: Fear of Missing Out Scale
  • FoMOS-MSME: FoMO Measurement Scale in the Mobile Social Media Environment
  • T-S FoMOS: Trait-State Fear of Missing Out Scale

Rozgonjuk D, Sindermann C, Elhai JD, Montag C. Fear of missing out (FoMO) and social media’s impact on daily-life and productivity at work: do WhatsApp, Facebook, Instagram, and Snapchat Use disorders mediate that association? Addict Behav. 2020;110:106487.

Mieczkowski H, Lee AY, Hancock JT. Priming effects of social media use scales on well-being outcomes: the influence of intensity and addiction scales on self-reported depression. Social Media + Soc. 2020;6(4):2056305120961784.

Global digital population as of April 2023. Available from: https://www.statista.com/statistics/617136/digital-population-worldwide/.

Marengo D, Settanni M, Fabris MA, Longobardi C. Alone, together: fear of missing out mediates the link between peer exclusion in WhatsApp classmate groups and psychological adjustment in early-adolescent teens. J Social Personal Relationships. 2021;38(4):1371–9.

Marengo D, Fabris MA, Longobardi C, Settanni M. Smartphone and social media use contributed to individual tendencies towards social media addiction in Italian adolescents during the COVID-19 pandemic. Addict Behav. 2022;126:107204.

Müller SM, Wegmann E, Stolze D, Brand M. Maximizing social outcomes? Social zapping and fear of missing out mediate the effects of maximization and procrastination on problematic social networks use. Comput Hum Behav. 2020;107:106296.

Sun Y, Zhang Y. A review of theories and models applied in studies of social media addiction and implications for future research. Addict Behav. 2021;114:106699.

Boustead R, Flack M. Moderated-mediation analysis of problematic social networking use: the role of anxious attachment orientation, fear of missing out and satisfaction with life. Addict Behav 2021, 119.

Hussain Z, Griffiths MD. The associations between problematic social networking Site Use and Sleep Quality, attention-deficit hyperactivity disorder, Depression, anxiety and stress. Int J Mental Health Addict. 2021;19(3):686–700.

Gori A, Topino E, Griffiths MD. The associations between attachment, self-esteem, fear of missing out, daily time expenditure, and problematic social media use: a path analysis model. Addict Behav. 2023;141:107633.

Marino C, Manari T, Vieno A, Imperato C, Spada MM, Franceschini C, Musetti A. Problematic social networking sites use and online social anxiety: the role of attachment, emotion dysregulation and motives. Addict Behav. 2023;138:107572.

Tobin SJ, Graham S. Feedback sensitivity as a mediator of the relationship between attachment anxiety and problematic Facebook Use. Cyberpsychology Behav Social Netw. 2020;23(8):562–6.

Brailovskaia J, Rohmann E, Bierhoff H-W, Margraf J. The anxious addictive narcissist: the relationship between grandiose and vulnerable narcissism, anxiety symptoms and Facebook Addiction. PLoS ONE 2020, 15(11).

Kim S-S, Bae S-M. Social Anxiety and Social Networking Service Addiction Proneness in University students: the Mediating effects of Experiential Avoidance and interpersonal problems. Psychiatry Invest. 2022;19(8):702–702.

Zhao J, Ye B, Yu L, Xia F. Effects of Stressors of COVID-19 on Chinese College Students’ Problematic Social Media Use: A Mediated Moderation Model. Front Psychiatry 2022, 13.

Astolfi Cury GS, Takannune DM, Prates Herrerias GS, Rivera-Sequeiros A, de Barros JR, Baima JP, Saad-Hossne R, Sassaki LY. Clinical and Psychological Factors Associated with Addiction and Compensatory Use of Facebook among patients with inflammatory bowel disease: a cross-sectional study. Int J Gen Med. 2022;15:1447–57.

Balta S, Emirtekin E, Kircaburun K, Griffiths MD. Neuroticism, trait fear of missing out, and Phubbing: the mediating role of state fear of missing out and problematic Instagram Use. Int J Mental Health Addict. 2020;18(3):628–39.

Boursier V, Gioia F, Griffiths MD. Do selfie-expectancies and social appearance anxiety predict adolescents’ problematic social media use? Comput Hum Behav. 2020;110:106395.

Worsley JD, McIntyre JC, Bentall RP, Corcoran R. Childhood maltreatment and problematic social media use: the role of attachment and depression. Psychiatry Res. 2018;267:88–93.

de Bérail P, Guillon M, Bungener C. The relations between YouTube addiction, social anxiety and parasocial relationships with YouTubers: a moderated-mediation model based on a cognitive-behavioral framework. Comput Hum Behav. 2019;99:190–204.

Naidu S, Chand A, Pandaram A, Patel A. Problematic internet and social network site use in young adults: the role of emotional intelligence and fear of negative evaluation. Pers Indiv Differ. 2023;200:111915.

Apaolaza V, Hartmann P, D’Souza C, Gilsanz A. Mindfulness, compulsive Mobile Social Media Use, and derived stress: the mediating roles of self-esteem and social anxiety. Cyberpsychology Behav Social Netw. 2019;22(6):388–96.

Demircioglu ZI, Goncu-Kose A. Antecedents of problematic social media use and cyberbullying among adolescents: attachment, the dark triad and rejection sensitivity. Curr Psychol (New Brunsw NJ) 2022:1–19.

Gao Q, Li Y, Zhu Z, Fu E, Bu X, Peng S, Xiang Y. What links to psychological needs satisfaction and excessive WeChat use? The mediating role of anxiety, depression and WeChat use intensity. BMC Psychol. 2021;9(1):105–105.


Malak MZ, Shuhaiber AH, Al-amer RM, Abuadas MH, Aburoomi RJ. Correlation between psychological factors, academic performance and social media addiction: model-based testing. Behav Inform Technol. 2022;41(8):1583–95.

Song C. The effect of the need to belong on mobile phone social media dependence of middle school students: Chain mediating roles of fear of missing out and maladaptive cognition. Sichuan Normal University; 2022.

Tokunaga RS, Rains SA. A review and meta-analysis examining conceptual and operational definitions of problematic internet use. Hum Commun Res. 2016;42(2):165–99.

Bandura A. Social cognitive theory of mass communication. In: Media effects. Routledge; 2009. pp. 110–40.

Valkenburg PM, Peter J, Walther JB. Media effects: theory and research. Ann Rev Psychol. 2016;67:315–38.

Slater MD. Reinforcing spirals: the mutual influence of media selectivity and media effects and their impact on individual behavior and social identity. Communication Theory. 2007;17(3):281–303.

Ahmed E, Vaghefi I. Social media addiction: A systematic review through cognitive-behavior model of pathological use. 2021.

She R, Mo PKH, Li J, Liu X, Jiang H, Chen Y, Ma L, Lau JTF. The double-edged sword effect of social networking use intensity on problematic social networking use among college students: the role of social skills and social anxiety. Comput Hum Behav. 2023;140:107555.

Przybylski AK, Weinstein N. A large-scale test of the goldilocks hypothesis: quantifying the relations between digital-screen use and the mental well-being of adolescents. Psychol Sci. 2017;28(2):204–15.

Ran G, Li J, Zhang Q, Niu X. The association between social anxiety and mobile phone addiction: a three-level meta-analysis. Comput Hum Behav. 2022;130:107198.

Fioravanti G, Casale S, Benucci SB, Prostamo A, Falone A, Ricca V, Rotella F. Fear of missing out and social networking sites use and abuse: a meta-analysis. Comput Hum Behav. 2021;122:106839.

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med. 2009;151(4):264–9.

Card NA. Applied meta-analysis for social science research. Guilford; 2015.

Peterson RA, Brown SP. On the use of beta coefficients in meta-analysis. J Appl Psychol. 2005;90(1):175.

Hunter JE, Schmidt FL. Methods of meta-analysis: correcting error and bias in research findings. Sage; 2004.

Zhang Y, Li S, Yu G. The relationship between self-esteem and social anxiety: a meta-analysis with Chinese students. Adv Psychol Sci. 2019;27(6):1005–18.

Borenstein M, Hedges LV, Higgins JP, Rothstein HR. Introduction to meta-analysis. Wiley; 2021.

Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002;21(11):1539–58.

Egger M, Smith GD, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315(7109):629–34.

Light RJ, Pillemer DB. Summing up: the science of reviewing research. Harvard University Press; 1984.

Rosenthal R. Meta-analytic procedures for social science research. Beverly Hills: Sage Publications; 1984.

Rothstein HR, Sutton AJ, Borenstein M. Publication bias in meta-analysis: prevention, assessment and adjustments. Wiley; 2005.

Duval S, Tweedie R. Trim and fill: a simple funnel-plot–based method of testing and adjusting for publication bias in meta‐analysis. Biometrics. 2000;56(2):455–63.

Al-Mamun F, Hosen I, Griffiths MD, Mamun MA. Facebook use and its predictive factors among students: evidence from a lower- and middle-income country, Bangladesh. Front Psychiatry 2022, 13.

Schou Andreassen C, Billieux J, Griffiths MD, Kuss DJ, Demetrovics Z, Mazzoni E, Pallesen S. The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: a large-scale cross-sectional study. Psychol Addict Behaviors: J Soc Psychologists Addict Behav. 2016;30(2):252–62.

Arikan G, Acar IH, Ustundag-Budak AM. A two-generation study: The transmission of attachment and young adults’ depression, anxiety, and social media addiction. Addict Behav 2022, 124.

Arpaci I, Karatas K, Kiran F, Kusci I, Topcu A. Mediating role of positivity in the relationship between state anxiety and problematic social media use during the COVID-19 pandemic. Death Stud. 2022;46(10):2287–97.

Brailovskaia J, Margraf J. Facebook Addiction Disorder (FAD) among German students-A longitudinal approach. PLoS ONE 2017, 12(12).

Brailovskaia J, Margraf J. The relationship between burden caused by coronavirus (Covid-19), addictive social media use, sense of control and anxiety. Comput Hum Behav. 2021;119:106720–106720.

Brailovskaia J, Margraf J. Addictive social media use during Covid-19 outbreak: validation of the Bergen Social Media Addiction Scale (BSMAS) and investigation of protective factors in nine countries. Curr Psychol (New Brunsw NJ) 2022:1–19.

Brailovskaia J, Krasavtseva Y, Kochetkov Y, Tour P, Margraf J. Social media use, mental health, and suicide-related outcomes in Russian women: a cross-sectional comparison between two age groups. Women’s Health (London England). 2022;18:17455057221141292–17455057221141292.


Chang C-W, Huang R-Y, Strong C, Lin Y-C, Tsai M-C, Chen IH, Lin C-Y, Pakpour AH, Griffiths MD. Reciprocal relationships between problematic social media use, problematic gaming, and psychological distress among university students: a 9-month longitudinal study. Front Public Health 2022, 10.

Charzynska E, Sussman S, Atroszko PA. Profiles of potential behavioral addictions’ severity and their associations with gender, personality, and well-being: A person-centered approach. Addict Behav 2021, 119.

Chen C-Y, Chen IH, Pakpour AH, Lin C-Y, Griffiths MD. Internet-related behaviors and psychological distress among Schoolchildren during the COVID-19 School Hiatus. Cyberpsychology Behav Social Netw. 2021;24(10):654–63.

Da Veiga GF, Sotero L, Pontes HM, Cunha D, Portugal A, Relvas AP. Emerging adults and Facebook Use: the validation of the Bergen Facebook Addiction Scale (BFAS). Int J Mental Health Addict. 2019;17(2):279–94.

Dadiotis A, Bacopoulou F, Kokka I, Vlachakis D, Chrousos GP, Darviri C, Roussos P. Validation of the Greek version of the Bergen Social Media Addiction Scale in undergraduate students. EMBnet.journal 2021, 26.

Fekih-Romdhane F, Jahrami H, Away R, Trabelsi K, Pandi-Perumal SR, Seeman MV, Hallit S, Cheour M. The relationship between technology addictions and schizotypal traits: mediating roles of depression, anxiety, and stress. BMC Psychiatry 2023, 23(1).

Flynn S, Noone C, Sarma KM. An exploration of the link between adult attachment and problematic Facebook use. BMC Psychol. 2018;6(1):34–34.

Fung XCC, Siu AMH, Potenza MN, O’Brien KS, Latner JD, Chen C-Y, Chen IH, Lin C-Y. Problematic use of internet-related activities and Perceived Weight Stigma in Schoolchildren: a longitudinal study across different epidemic periods of COVID-19 in China. Front Psychiatry 2021, 12.

Gonzalez-Nuevo C, Cuesta M, Muniz J, Postigo A, Menendez-Aller A, Kuss DJ. Problematic Use of Social Networks during the First Lockdown: User Profiles and the Protective Effect of Resilience and Optimism. Journal of Clinical Medicine 2022, 11(24).

Hou X-L, Wang H-Z, Hu T-Q, Gentile DA, Gaskin J, Wang J-L. The relationship between perceived stress and problematic social networking site use among Chinese college students. J Behav Addictions. 2019;8(2):306–17.

Hussain Z, Wegmann E. Problematic social networking site use and associations with anxiety, attention deficit hyperactivity disorder, and resilience. Computers Hum Behav Rep. 2021;4:100125.

Imani V, Ahorsu DK, Taghizadeh N, Parsapour Z, Nejati B, Chen H-P, Pakpour AH. The mediating roles of anxiety, Depression, Sleepiness, Insomnia, and Sleep Quality in the Association between Problematic Social Media Use and Quality of Life among patients with Cancer. Healthcare 2022, 10(9).

Islam MS, Sujan MSH, Tasnim R, Mohona RA, Ferdous MZ, Kamruzzaman S, Toma TY, Sakib MN, Pinky KN, Islam MR et al. Problematic smartphone and Social Media Use among Bangladeshi College and University students amid COVID-19: the role of Psychological Well-Being and Pandemic related factors. Front Psychiatry 2021, 12.

Islam MS, Jahan I, Dewan MAA, Pontes HM, Koly KN, Sikder MT, Rahman M. Psychometric properties of three online-related addictive behavior instruments among Bangladeshi school-going adolescents. PLoS ONE 2022, 17(12).

Jahan I, Hosen I, Al Mamun F, Kaggwa MM, Griffiths MD, Mamun MA. How has the COVID-19 pandemic impacted Internet Use behaviors and facilitated problematic internet use? A Bangladeshi study. Psychol Res Behav Manage. 2021;14:1127–38.

Jiang Y. Problematic social media usage and anxiety among University Students during the COVID-19 pandemic: the mediating role of Psychological Capital and the moderating role of academic burnout. Front Psychol. 2021;12:612007–612007.

Kim M-R, Oh J-W, Huh B-Y. Analysis of factors related to Social Network Service Addiction among Korean High School Students. J Addictions Nurs. 2020;31(3):203–12.

Koc M, Gulyagci S. Facebook addiction among Turkish college students: the role of psychological health, demographic, and usage characteristics. Cyberpsychology Behav Social Netw. 2013;16(4):279–84.

Lin C-Y, Namdar P, Griffiths MD, Pakpour AH. Mediated roles of generalized trust and perceived social support in the effects of problematic social media use on mental health: a cross-sectional study. Health Expect. 2021;24(1):165–73.

Lin C-Y, Imani V, Griffiths MD, Brostrom A, Nygardh A, Demetrovics Z, Pakpour AH. Temporal associations between morningness/eveningness, problematic social media use, psychological distress and daytime sleepiness: mediated roles of sleep quality and insomnia among young adults. J Sleep Res 2021, 30(1).

Lozano Blasco R, Latorre Cosculluela C, Quilez Robres A. Social Network Addiction and its impact on anxiety level among University students. Sustainability 2020, 12(13).

Marino C, Musetti A, Vieno A, Manari T, Franceschini C. Is psychological distress the key factor in the association between problematic social networking sites and poor sleep quality? Addict Behav 2022, 133.

Meshi D, Ellithorpe ME. Problematic social media use and social support received in real-life versus on social media: associations with depression, anxiety and social isolation. Addict Behav 2021, 119.

Mitropoulou EM, Karagianni M, Thomadakis C. Social Media Addiction, Self-Compassion, and Psychological Well-Being: a structural equation Model. Alpha Psychiatry. 2022;23(6):298–304.

Ozimek P, Brailovskaia J, Bierhoff H-W. Active and passive behavior in social media: validating the Social Media Activity Questionnaire (SMAQ). Telematics Inf Rep. 2023;10:100048.

Phillips WJ, Wisniewski AT. Self-compassion moderates the predictive effects of social media use profiles on depression and anxiety. Computers Hum Behav Rep. 2021;4:100128.

Reer F, Festl R, Quandt T. Investigating problematic social media and game use in a nationally representative sample of adolescents and younger adults. Behav Inform Technol. 2021;40(8):776–89.

Satici B, Kayis AR, Griffiths MD. Exploring the Association between Social Media Addiction and relationship satisfaction: psychological distress as a Mediator. Int J Mental Health Addict 2021.

Sediri S, Zgueb Y, Ouanes S, Ouali U, Bourgou S, Jomli R, Nacef F. Women’s mental health: acute impact of COVID-19 pandemic on domestic violence. Archives Womens Mental Health. 2020;23(6):749–56.

Shabahang R, Shim H, Aruguete MS, Zsila A. Oversharing on Social Media: anxiety, Attention-Seeking, and Social Media Addiction Predict the breadth and depth of sharing. Psychol Rep 2022:332941221122861–332941221122861.

Sotero L, Ferreira Da Veiga G, Carreira D, Portugal A, Relvas AP. Facebook Addiction and emerging adults: the influence of sociodemographic variables, family communication, and differentiation of self. Escritos De Psicología - Psychol Writings. 2019;12(2):81–92.

Stockdale LA, Coyne SM. Bored and online: reasons for using social media, problematic social networking site use, and behavioral outcomes across the transition from adolescence to emerging adulthood. J Adolesc. 2020;79:173–83.

Wang Z, Yang H, Elhai JD. Are there gender differences in comorbidity symptoms networks of problematic social media use, anxiety and depression symptoms? Evidence from network analysis. Pers Indiv Differ. 2022;195:111705.

White-Gosselin C-E, Poulin F. Associations Between Young Adults’ Social Media Addiction, Relationship Quality With Parents, and Internalizing Problems: A Path Analysis Model. 2022.

Wong HY, Mo HY, Potenza MN, Chan MNM, Lau WM, Chui TK, Pakpour AH, Lin C-Y. Relationships between Severity of Internet Gaming Disorder, Severity of Problematic Social Media Use, Sleep Quality and Psychological Distress. Int J Environ Res Public Health 2020, 17(6).

Yam C-W, Pakpour AH, Griffiths MD, Yau W-Y, Lo C-LM, Ng JMT, Lin C-Y, Leung H. Psychometric testing of three Chinese online-related addictive Behavior instruments among Hong Kong University students. Psychiatr Q. 2019;90(1):117–28.

Yuan Y, Zhong Y. A survey on the use of social networks and mental health of college students during the COVID-19 pandemic. J Campus Life Mental Health. 2021;19(3):209–12.


Yurdagul C, Kircaburun K, Emirtekin E, Wang P, Griffiths MD. Psychopathological consequences related to problematic Instagram Use among adolescents: the mediating role of body image dissatisfaction and moderating role of gender. Int J Mental Health Addict. 2021;19(5):1385–97.

Zhang W, Pu J, He R, Yu M, Xu L, He X, Chen Z, Gan Z, Liu K, Tan Y, et al. Demographic characteristics, family environment and psychosocial factors affecting internet addiction in Chinese adolescents. J Affect Disord. 2022;315:130–8.

Zhang L, Wu Y, Jin T, Jia Y. Revision and validation of the Chinese short version of social media disorder. Mod Prev Med. 2021;48(8):1350–3.

Zhang X, Fan L. The influence of anxiety on colleges’ life satisfaction. Chin J Health Educ. 2021;37(5):469–72.

Zhao M, Wang H, Dong Y, Niu Y, Fang Y. The relationship between self-esteem and wechat addiction among undergraduate students: the multiple mediating roles of state anxiety and online interpersonal trust. J Psychol Sci. 2021;44(1):104–10.

Zhao J, Zhou Z, Sun B, Zhang X, Zhang L, Fu S. Attentional Bias is Associated with negative emotions in problematic users of Social Media as measured by a dot-probe Task. Int J Environ Res Public Health 2022, 19(24).

Atroszko PA, Balcerowska JM, Bereznowski P, Biernatowska A, Pallesen S, Schou Andreassen C. Facebook addiction among Polish undergraduate students: validity of measurement and relationship with personality and well-being. Comput Hum Behav. 2018;85:329–38.

Chen Y, Li R, Zhang P, Liu X. The moderating role of state attachment anxiety and avoidance between social anxiety and social networking sites Addiction. Psychol Rep. 2020;123(3):633–47.

Chen B, Zheng X, Sun X. The relationship between problematic social media use and online social anxiety: the roles of social media cognitive overload and dispositional mindfulness. Psychol Dev Educ. 2023;39(5):743–51.

Chentsova VO, Bravo AJ, Mezquita L, Pilatti A, Hogarth L, Cross-Cultural Addictions Study Team. Internalizing symptoms, rumination, and problematic social networking site use: a cross-national examination among young adults in seven countries. Addict Behav 2023, 136.

Chu X, Ji S, Wang X, Yu J, Chen Y, Lei L. Peer phubbing and social networking site addiction: the mediating role of social anxiety and the moderating role of Family Financial Difficulty. Front Psychol. 2021;12:670065–670065.

Dempsey AE, O’Brien KD, Tiamiyu MF, Elhai JD. Fear of missing out (FoMO) and rumination mediate relations between social anxiety and problematic Facebook use. Addict Behav Rep. 2019;9:100150–100150.


Yildiz Durak H, Seferoglu SS. Modeling of variables related to problematic social media usage: Social desirability tendency example. Scand J Psychol. 2019;60(3):277–88.

Ekinci N, Akat M. The relationship between anxious-ambivalent attachment and social appearance anxiety in adolescents: the serial mediation of positive Youth Development and Instagram Addiction. Psychol Rep 2023:332941231159600–332941231159600.

Foroughi B, Griffiths MD, Iranmanesh M, Salamzadeh Y. Associations between Instagram Addiction, academic performance, social anxiety, Depression, and life satisfaction among University students. Int J Mental Health Addict. 2022;20(4):2221–42.

He L. Influence mechanism and intervention suggestions on addiction of social network addiction. Gannan Normal University; 2021.

Hu Y. The influencing mechanism of type D personality on problematic social networking sites use among adolescents and intervention research. Central China Normal University; 2020.

Jia L. A study of the relationship between neuroticism, perceived social support, social anxiety and problematic social network use in high school students. Harbin Normal University; 2022.

Lee-Won RJ, Herzog L, Park SG. Hooked on Facebook: the role of social anxiety and need for Social Assurance in Problematic Use of Facebook. Cyberpsychology Behav Social Netw. 2015;18(10):567–74.

Li H. Social anxiety and internet interpersonal addiction in adolescents and countermeasures. Central China Normal University; 2022.

Lin W-S, Chen H-R, Lee TS-H, Feng JY. Role of social anxiety on high engagement and addictive behavior in the context of social networking sites. Data Technol Appl. 2019;53(2):156–70.

Liu Y. The influence of family function on social media addiction in adolescents: the chain mediation effect of social anxiety and resilience. Hunan Normal University; 2021.

Lyvers M, Salviani A, Costan S, Thorberg FA. Alexithymia, narcissism and social anxiety in relation to social media and internet addiction symptoms. Int J Psychology: J Int De Psychologie. 2022;57(5):606–12.

Majid A, Yasir M, Javed A, Ali P. From envy to social anxiety and rumination: how social media site addiction triggers task distraction amongst nurses. J Nurs Adm Manag. 2020;28(3):504–13.

Mou Q, Zhuang J, Gao Y, Zhong Y, Lu Q, Gao F, Zhao M. The relationship between social anxiety and academic engagement among Chinese college students: a serial mediation model. J Affect Disord. 2022;311:247–53.

Ruggieri S, Santoro G, Pace U, Passanisi A, Schimmenti A. Problematic Facebook use and anxiety concerning use of social media in mothers and their offspring: an actor-partner interdependence model. Addict Behav Rep. 2020;11:100256–100256.

Ruiz MJ, Saez G, Villanueva-Moya L, Exposito F. Adolescent sexting: the role of body shame, Social Physique anxiety, and social networking site addiction. Cyberpsychology Behav Social Netw. 2021;24(12):799–805.

She R, Mo PKH, Li J, Liu X, Jiang H, Chen Y, Ma L, Lau JTF. The double-edged sword effect of social networking use intensity on problematic social networking use among college students: the role of social skills and social anxiety. Comput Hum Behav. 2023;140:107555.

Stănculescu E. The Bergen Social Media Addiction Scale Validity in a Romanian sample using item response theory and network analysis. Int J Mental Health Addict 2022.

Teng X, Lei H, Li J, Wen Z. The influence of social anxiety on social network site addiction of college students: the moderator of intentional self-regulation. Chin J Clin Psychol. 2021;29(3):514–7.

Tong W. Influence of boredom on the problematic mobile social networks usage in adolescents: multiple chain mediator. Chin J Clin Psychol. 2019;27(5):932–6.

Tu W, Jiang H, Liu Q. Peer victimization and adolescent Mobile Social Addiction: mediation of social anxiety and gender differences. Int J Environ Res Public Health 2022, 19(17).

Wang S. The influence of college students self-esteem, social anxiety and fear of missing out on the problematic mobile social networks usage. Huaibei Normal University; 2021.

Wang X. The impact of peer relationship and social anxiety on secondary vocational school students’ problematic social network use and intervention study. Huaibei Normal University; 2022.

Wegmann E, Stodt B, Brand M. Addictive use of social networking sites can be explained by the interaction of internet use expectancies, internet literacy, and psychopathological symptoms. J Behav Addictions. 2015;4(3):155–62.

Yang W. The relationship between the type of internet addiction and the personality traits in college students. Huazhong University of Science and Technology; 2004.

Yang Z. The relationship between social variables and social networking usage among the Shanghai working population. East China Normal University; 2013.

Zhang C. The relationship between perceived social support and problematic social network use among junior high school students: a chain mediation model and an intervention study. Hebei University; 2022.

Zhang J, Chang F, Huang D, Wen X. The relationship between neuroticism and the problematic mobile social networks use in adolescents: the mediating role of anxiety and positive self-presentation. Chin J Clin Psychol. 2021;29(3):598–602.

Zhang Z. College students’ loneliness and problematic social networking use: Chain mediation of social self-efficacy and social anxiety. Shanghai Normal University; 2019.

Zhu B. Discussion on mechanism of social networking addiction——Social anxiety, craving and excitability. Liaoning Normal University; 2017.

Blackwell D, Leaman C, Tramposch R, Osborne C, Liss M. Extraversion, neuroticism, attachment style and fear of missing out as predictors of social media use and addiction. Pers Indiv Differ. 2017;116:69–72.

Chen A. From attachment to addiction: the mediating role of need satisfaction on social networking sites. Comput Hum Behav. 2019;98:80–92.

Chen Y, Zhong S, Dai L, Deng Y, Liu X. Attachment anxiety and social networking sites addiction in college students: a moderated mediating model. Chin J Clin Psychol. 2019;27(3):497–500.

Li J. The relations among problematic social networks usage behavior, Childhood Trauma and adult attachment in University students. Hunan Agricultural University; 2020.

Liu C, Ma J-L. Adult attachment orientations and social networking site addiction: the Mediating effects of Online Social Support and the fear of missing out. Front Psychol. 2019;10:2629–2629.

Mo S, Huang W, Xu Y, Tang Z, Nie G. The impact of medical students’ attachment anxiety on the use of problematic social networking sites during the epidemic. Psychol Monthly. 2022;17(9):1–4.

Teng X. The effect of attachment anxiety on problematic mobile social network use: the role of loneliness and self-control. Harbin Normal University; 2021.

Worsley JD, Mansfield R, Corcoran R. Attachment anxiety and problematic social media use: the Mediating Role of Well-Being. Cyberpsychology Behav Social Netw. 2018;21(9):563–8.

Wu Z. The effect of adult attachment on problematic social network use: the chain mediating effect of loneliness and fear of missing out. Jilin University; 2022.

Xia N. The impact of attachment anxiety on adolescent problem social networking site use: a moderated mediation model. Shihezi University; 2022.

Young L, Kolubinski DC, Frings D. Attachment style moderates the relationship between social media use and user mental health and wellbeing. Heliyon 2020, 6(6).

Bakioglu F, Deniz M, Griffiths MD, Pakpour AH. Adaptation and validation of the online-fear of missing out inventory into Turkish and the association with social media addiction, smartphone addiction, and life satisfaction. BMC Psychol. 2022;10(1):154–154.

Bendayan R, Blanca MJ. Spanish version of the Facebook Intrusion Questionnaire (FIQ-S). Psicothema. 2019;31(2):204–9.

Blachnio A, Przepiorka A. Facebook intrusion, fear of missing out, narcissism, and life satisfaction: a cross-sectional study. Psychiatry Res. 2018;259:514–9.

Casale S, Rugai L, Fioravanti G. Exploring the role of positive metacognitions in explaining the association between the fear of missing out and social media addiction. Addict Behav. 2018;85:83–7.

Chen Y, Zhang Y, Zhang S, Wang K. Effect of fear of missing out on college students' negative social adaptation: chain-mediating effect of rumination and problematic social media use. China J Health Psychol. 2022;30(4):581–6.

Cheng S, Zhang X, Han Y. Relationship between fear of missing out and phubbing on college students: the chain intermediary effect of intolerance of uncertainty and problematic social media use. China J Health Psychol. 2022;30(9):1296–300.

Cui Q, Wang J, Zhang J, Li W, Li Q. The relationship between loneliness and negative emotion in college students: the chain-mediating role of fear of missing out and social network sites addiction. J Jining Med Univ. 2022;45(4):248–51.

Ding Q, Wang Z, Zhang Y, Zhou Z. The more gossip, the more addicted: the relationship between interpersonal curiosity and social networking sites addiction tendencies in college students. Psychol Dev Educ. 2022;38(1):118–25.

Fabris MA, Marengo D, Longobardi C, Settanni M. Investigating the links between fear of missing out, social media addiction, and emotional symptoms in adolescence: the role of stress associated with neglect and negative reactions on social media. Addict Behav. 2020;106:106364.

Fang J, Wang X, Wen Z, Zhou J. Fear of missing out and problematic social media use as mediators between emotional support from social media and phubbing behavior. Addict Behav. 2020;107:106430.

Gao Z. The study on the relationship and intervention among fear of missing out self-differentiation and problematic social media use of college students. Yunnan Normal University; 2021.

Gioia F, Fioravanti G, Casale S, Boursier V. The Effects of the Fear of Missing Out on People’s Social Networking Sites Use During the COVID-19 Pandemic: The Mediating Role of Online Relational Closeness and Individuals’ Online Communication Attitude. Front Psychiatry 2021, 12.

Gu X. Study on the Inhibitory Effect of Mindfulness Training on Social Media Addiction of College Students. Wuhan University; 2020.

Gugushvili N, Taht K, Schruff-Lim EM, Ruiter RA, Verduyn P. The Association between Neuroticism and problematic social networking sites Use: the role of fear of missing out and Self-Control. Psychol Rep 2022:332941221142003–332941221142003.

Hou J. The study on FoMO and content social media addiction among young people. Huazhong University of Science and Technology; 2021.

Hu R, Zhang B, Yang Y, Mao H, Peng Y, Xiong S. Relationship between college students’ fear of missing and wechat addiction: a cross-lagged analysis. J Bio-education. 2022;10(5):369–73.

Hu G. The relationship between basic psychological needs satisfaction and the use of problematic social networks by college students: a moderated mediation model and online intervention studies. Jiangxi Normal University; 2020.

Jiang Y, Jin T. The relationship between adolescents’ narcissistic personality and their use of problematic mobile social networks: the effects of fear of missing out and positive self-presentation. Chin J Special Educ 2018(11):64–70.

Li J. The effect of positive self-presentation on social networking sites on problematic use of social networking sites: a moderated mediation model. Henan University; 2020.

Li J, Zhang Y, Zhang X. The impact of Freshmen Social Exclusion on problematic Social Network Use: a Moderated Mediation Model. J Heilongjiang Vocat Inst Ecol Eng. 2023;36(1):118–22.

Li M. The relationship between fear of missing out and social media addiction among middle school students——The moderating role of self-control. Kashi University; 2022.

Li R, Dong X, Wang M, Wang R. A study on the relationship between fear of missing out and social network addiction. New Educ Era 2021(52):122–3.

Li Y. Fear of missing out or social avoidance? The influence of peer exclusion on problematic social media use among adolescents in Guangdong Province and Macao. Guangzhou University; 2020.

Ma J, Liu C. The effect of fear of missing out on social networking sites addiction among college students: the mediating roles of social networking site integration use and social support. Psychol Dev Educ. 2019;35(5):605–14.

Mao H. A follow-up study on the mechanism of the influence of university students’ Qi deficiency quality on WeChat addiction. Hunan University of Chinese Medicine; 2021.

Mao Y. The effect of dual filial piety on college students' internet social dependence: the mediation of maladaptive cognition and fear of missing out. Huazhong University of Science and Technology; 2021.

Moore K, Craciun G. Fear of missing out and personality as predictors of Social networking sites usage: the Instagram Case. Psychol Rep. 2021;124(4):1761–87.

Niu J. The relationship of college students’ basic psychological needs and social media dependence: the mediating role of fear of missing out. Huazhong University of Science and Technology; 2021.

Pi L, Li X. Research on the relationship between loneliness and problematic mobile social media usage: evidence from variable-oriented and person-oriented analyses. China J Health Psychol. 2023;31(6):936–42.

Pontes HM, Taylor M, Stavropoulos V. Beyond Facebook Addiction: the role of cognitive-related factors and Psychiatric Distress in Social networking site addiction. Cyberpsychol Behav Soc Netw. 2018;21(4):240–7.

Quaglieri A, Biondi S, Roma P, Varchetta M, Fraschetti A, Burrai J, Lausi G, Marti-Vilar M, Gonzalez-Sala F, Di Domenico A et al. From Emotional (Dys) Regulation to Internet Addiction: A Mediation Model of Problematic Social Media Use among Italian Young Adults. Journal of Clinical Medicine 2022, 11(1).

Servidio R, Koronczai B, Griffiths MD, Demetrovics Z. Problematic smartphone Use and Problematic Social Media Use: the predictive role of Self-Construal and the Mediating Effect of Fear Missing Out. Front Public Health 2022, 10.

Sheldon P, Antony MG, Sykes B. Predictors of problematic social media use: personality and life-position indicators. Psychol Rep. 2021;124(3):1110–33.

Sun C, Li Y, Kwok SYCL, Mu W. The relationship between intolerance of uncertainty and problematic Social Media Use during the COVID-19 pandemic: a serial mediation model. Int J Environ Res Public Health 2022, 19(22).

Tang Z. The relationship between loneliness and problematic social networks use among college students: the mediation of fear of missing out and the moderation of social support. Jilin University; 2022.

Tomczyk Ł, Selmanagic-Lizde E. Fear of missing out (FOMO) among youth in Bosnia and Herzegovina — Scale and selected mechanisms. Child Youth Serv Rev. 2018;88:541–9.

Unal-Aydin P, Ozkan Y, Ozturk M, Aydin O, Spada MM. The role of metacognitions in cyberbullying and cybervictimization among adolescents diagnosed with major depressive disorder and anxiety disorders: a case-control study. Clinical Psychology & Psychotherapy; 2023.

Uram P, Skalski S. Still logged in? The Link between Facebook Addiction, FoMO, Self-Esteem, Life satisfaction and loneliness in social media users. Psychol Rep. 2022;125(1):218–31.

Varchetta M, Fraschetti A, Mari E, Giannini AM. Social Media Addiction, fear of missing out (FoMO) and online vulnerability in university students. Revista Digit De Investigación en Docencia Universitaria. 2020;14(1):e1187.

Wang H. Study on the relationship and intervention between fear of missing and social network addiction in college students. Yunnan Normal University; 2021.

Wang M, Yin Z, Xu Q, Niu G. The relationship between shyness and adolescents’ social network sites addiction: Moderated mediation model. Chin J Clin Psychol. 2020;28(5):906–9.

Wegmann E, Oberst U, Stodt B, Brand M. Online-specific fear of missing out and internet-use expectancies contribute to symptoms of internet-communication disorder. Addict Behav Rep. 2017;5:33–42.

Wegmann E, Brandtner A, Brand M. Perceived strain due to COVID-19-Related restrictions mediates the Effect of Social needs and fear of missing out on the risk of a problematic use of Social Networks. Front Psychiatry 2021, 12.

Wei Q. Negative emotions and problematic social network sites usage: the mediating role of fear of missing out and the moderating role of gender. Central China Normal University; 2018.

Xiong L. Effect of social network site use on college students’ social network site addiction: A moderated mediation model and attention bias training intervention study. Jiangxi Normal University; 2022.

Yan H. The influence of college students’ basic psychological needs on social network addiction: The intermediary role of fear of missing out. Wuhan University; 2020.

Yan H. The status and factors associated with social media addiction among young people——Evidence from WeChat. Chongqing University; 2021.

Yang L. Research on the relationship of fear of missing out, excessive use of Wechat and life satisfaction. Beijing Forestry University; 2020.

Yin Y, Cai X, Ouyang M, Li S, Li X, Wang P. FoMO and the brain: loneliness and problematic social networking site use mediate the association between the topology of the resting-state EEG brain network and fear of missing out. Comput Hum Behav. 2023;141:107624.

Zhang C. The parental rejection and problematic social network sites with adolescents: the chain mediating effect of basic psychological needs and fear of missing out. Central China Normal University; 2022.

Zhang J. The influence of basic psychological needs on problematic mobile social networks usage of adolescent: a moderated mediation model. Liaocheng University; 2020.

Zhang Y, Chen Y, Jin J, Yu G. The relationship between fear of missing out and social media addiction: a cross-lagged analysis. Chin J Clin Psychol. 2021;29(5):1082–5.

Zhang Y, Jiang W, Ding Q, Hong M. Social comparison orientation and social network sites addiction in college students: the mediating role of fear of missing out. Chin J Clin Psychol. 2019;27(5):928–31.

Zhou J, Fang J. Social network sites support and addiction among college students: a moderated mediation model. Psychology: Techniques Appl. 2021;9(5):293–9.

Andreassen CS, Torsheim T, Brunborg GS, Pallesen S. Development of a Facebook addiction scale. Psychol Rep. 2012;110(2):501–17.

Andreassen CS, Billieux J, Griffiths MD, Kuss DJ, Demetrovics Z, Mazzoni E, Pallesen S. The relationship between addictive use of social media and video games and symptoms of psychiatric disorders: a large-scale cross-sectional study. Psychol Addict Behav. 2016;30(2):252.

Elphinston RA, Noller P. Time to face it! Facebook intrusion and the implications for romantic jealousy and relationship satisfaction. Cyberpsychology Behav Social Netw. 2011;14(11):631–5.

Caplan SE. Theory and measurement of generalized problematic internet use: a two-step approach. Comput Hum Behav. 2010;26(5):1089–97.

Jiang Y. Development of problematic mobile social media usage assessment questionnaire for adolescents. Psychology: Techniques Appl. 2018;6(10):613–21.

Wang X. College students' social network addiction tendency: questionnaire construction and correlation research. Master's thesis. Southwest University; 2016.

Derogatis LR. Brief symptom inventory 18. Johns Hopkins University Baltimore; 2001.

Lovibond PF, Lovibond SH. The structure of negative emotional states: comparison of the Depression anxiety stress scales (DASS) with the Beck Depression and anxiety inventories. Behav Res Ther. 1995;33(3):335–43.

Spitzer RL, Kroenke K, Williams JB, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092–7.

Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatrica Scandinavica. 1983;67(6):361–70.

Spielberger CD, Gonzalez-Reigosa F, Martinez-Urrutia A, Natalicio LF, Natalicio DS. The state-trait anxiety inventory. Revista Interamericana de Psicologia/Interamerican Journal of Psychology 1971, 5(3&4).

Marteau TM, Bekker H. The development of a six-item short‐form of the state scale of the Spielberger State—trait anxiety inventory (STAI). Br J Clin Psychol. 1992;31(3):301–6.

Leary MR. Social anxiousness: the construct and its measurement. J Pers Assess. 1983;47(1):66–75.

Liebowitz MR. Social phobia. Modern problems of pharmacopsychiatry 1987.

Alkis Y, Kadirhan Z, Sat M. Development and validation of social anxiety scale for social media users. Comput Hum Behav. 2017;72:296–303.

La Greca AM, Stone WL. Social anxiety scale for children-revised: factor structure and concurrent validity. J Clin Child Psychol. 1993;22(1):17–27.

Fenigstein A, Scheier MF, Buss AH. Public and private self-consciousness: Assessment and theory. J Consult Clin Psychol. 1975;43(4):522.

Mattick RP, Clarke JC. Development and validation of measures of social phobia scrutiny fear and social interaction anxiety. Behav Res Ther. 1998;36(4):455–70.

Peters L, Sunderland M, Andrews G, Rapee RM, Mattick RP. Development of a short form Social Interaction anxiety (SIAS) and Social Phobia Scale (SPS) using nonparametric item response theory: the SIAS-6 and the SPS-6. Psychol Assess. 2012;24(1):66.

Brennan KA, Clark CL, Shaver PR. Self-report measurement of adult attachment: an integrative overview. In: Attachment theory and close relationships. New York: Guilford Press; 1998. pp. 46–76.

Wei M, Russell DW, Mallinckrodt B, Vogel DL. The experiences in Close Relationship Scale (ECR)-short form: reliability, validity, and factor structure. J Pers Assess. 2007;88(2):187–204.

Bartholomew K, Horowitz LM. Attachment styles among young adults: a test of a four-category model. J Personal Soc Psychol. 1991;61(2):226.

Przybylski AK, Murayama K, DeHaan CR, Gladwell V. Motivational, emotional, and behavioral correlates of fear of missing out. Comput Hum Behav. 2013;29(4):1841–8.

Xiaokang S, Yuxiang Z, Xuanhui Z. Developing a fear of missing out (FoMO) measurement scale in the mobile social media environment. Libr Inform Service. 2017;61(11):96.

Bown M, Sutton A. Quality control in systematic reviews and meta-analyses. Eur J Vasc Endovasc Surg. 2010;40(5):669–77.

Turel O, Qahri-Saremi H. Problematic use of social networking sites: antecedents and consequence from a dual-system theory perspective. J Manage Inform Syst. 2016;33(4):1087–116.

Chou H-TG, Edge N. They are happier and having better lives than I am: the impact of using Facebook on perceptions of others’ lives. Cyberpsychology Behav Social Netw. 2012;15(2):117–21.

Beyens I, Frison E, Eggermont S. I don’t want to miss a thing: adolescents’ fear of missing out and its relationship to adolescents’ social needs, Facebook use, and Facebook related stress. Comput Hum Behav. 2016;64:1–8.

Di Blasi M, Gullo S, Mancinelli E, Freda MF, Esposito G, Gelo OCG, Lagetto G, Giordano C, Mazzeschi C, Pazzagli C. Psychological distress associated with the COVID-19 lockdown: a two-wave network analysis. J Affect Disord. 2021;284:18–26.

Yang X, Hu H, Zhao C, Xu H, Tu X, Zhang G. A longitudinal study of changes in smart phone addiction and depressive symptoms and potential risk factors among Chinese college students. BMC Psychiatry. 2021;21(1):252.

Kuss DJ, Griffiths MD. Social networking sites and addiction: ten lessons learned. Int J Environ Res Public Health. 2017;14(3):311.

Ryan T, Chester A, Reece J, Xenos S. The uses and abuses of Facebook: a review of Facebook addiction. J Behav Addictions. 2014;3(3):133–48.

Elhai JD, Levine JC, Dvorak RD, Hall BJ. Non-social features of smartphone use are most related to depression, anxiety and problematic smartphone use. Comput Hum Behav. 2017;69:75–82.

Jackson LA, Wang J-L. Cultural differences in social networking site use: a comparative study of China and the United States. Comput Hum Behav. 2013;29(3):910–21.

Ahrens LM, Mühlberger A, Pauli P, Wieser MJ. Impaired visuocortical discrimination learning of socially conditioned stimuli in social anxiety. Soc Cognit Affect Neurosci. 2014;10(7):929–37.

Elhai JD, Yang H, Montag C. Fear of missing out (FOMO): overview, theoretical underpinnings, and literature review on relations with severity of negative affectivity and problematic technology use. Brazilian J Psychiatry. 2020;43:203–9.

Barker V. Older adolescents’ motivations for social network site use: the influence of gender, group identity, and collective self-esteem. Cyberpsychology Behav. 2009;12(2):209–13.

Krasnova H, Veltri NF, Eling N, Buxmann P. Why men and women continue to use social networking sites: the role of gender differences. J Strateg Inf Syst. 2017;26(4):261–84.

Palmer J. The role of gender on social network websites. Stylus Knights Write Showc 2012:35–46.

Vannucci A, Flannery KM, Ohannessian CM. Social media use and anxiety in emerging adults. J Affect Disord. 2017;207:163–6.

Primack BA, Shensa A, Sidani JE, Whaite EO, yi Lin L, Rosen D, Colditz JB, Radovic A, Miller E. Social media use and perceived social isolation among young adults in the US. Am J Prev Med. 2017;53(1):1–8.

Twenge JM, Campbell WK. Associations between screen time and lower psychological well-being among children and adolescents: evidence from a population-based study. Prev Med Rep. 2018;12:271–83.


Funding

This research was supported by the Social Science Foundation of China (Grant Number: 23BSH135).

Author information

Authors and Affiliations

School of Mental Health, Wenzhou Medical University, 325035, Wenzhou, China

Mingxuan Du, Haiyan Hu, Ningning Ding, Jiankang He, Wenwen Tian, Wenqian Zhao, Xiujian Lin, Gaoyang Liu, Wendan Chen, ShuangLiu Wang, Dongwu Xu & Guohua Zhang

School of Education, Renmin University of China, 100872, Beijing, China

Chengjia Zhao

School of Media and Communication, Shanghai Jiao Tong University, Dongchuan Road 800, 200240, Shanghai, China

Pengcheng Wang

Department of Neurosis and Psychosomatic Diseases, Huzhou Third Municipal Hospital, 313002, Huzhou, China

Xinhua Shen


Contributions

GZ, XS, XL, and MD designed the study and contributed to the writing, review, and editing. MD and CZ wrote the main manuscript text. MD and HH analyzed the data and edited the draft. ND, JH, WT, WZ, GL, WC, SW, PW, and DX were responsible for resources and data curation. All authors approved the final version of the manuscript.

Corresponding authors

Correspondence to Xinhua Shen or Guohua Zhang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Du, M., Zhao, C., Hu, H. et al. Association between problematic social networking use and anxiety symptoms: a systematic review and meta-analysis. BMC Psychol 12, 263 (2024). https://doi.org/10.1186/s40359-024-01705-w


Received: 25 January 2024

Accepted: 03 April 2024

Published: 12 May 2024

DOI: https://doi.org/10.1186/s40359-024-01705-w


Keywords

  • Fear of missing out
  • Meta-analysis

BMC Psychology

ISSN: 2050-7283

tool literature review

IMAGES

  1. Ace your research with these 5 literature review tools

    tool literature review

  2. 39 Best Literature Review Examples (Guide & Samples)

    tool literature review

  3. 39 Best Literature Review Examples (Guide & Samples)

    tool literature review

  4. 50 Smart Literature Review Templates (APA) ᐅ TemplateLab

    tool literature review

  5. 50 Smart Literature Review Templates (APA) ᐅ TemplateLab

    tool literature review

  6. 50 Smart Literature Review Templates (APA) ᐅ TemplateLab

    tool literature review

VIDEO

  1. How to Use Reverse Outlining for Literature Reviews: An AI-Based Tool

  2. scispace copilot #literature review made easy #chatgpt #ai

  3. Tutorial Pemanfaatan Aplikasi AI

  4. Best Literature Review AI Tool

  5. How to Perform Literature Review Using AI Tool?

  6. How to Do a Good Literature Review for Research Paper and Thesis

COMMENTS

  1. Literature Review Generator

    Our Literature Review Generator is an AI-powered tool that streamlines and simplifies the creation of literature reviews by automatically collecting, analyzing, summarizing, and synthesizing all the relevant academic sources on a specific topic within the parameters you define. It saves you additional time by highlighting themes, trends, and ...

  2. Litmaps

    As a full-time researcher, Litmaps has become an indispensable tool in my arsenal. The Seed Maps and Discover features of Litmaps have transformed my literature review process, streamlining the identification of key citations while revealing previously overlooked relevant literature, ensuring no crucial connection goes unnoticed.

  3. 10 Best Literature Review Tools for Researchers

    6. Consensus. Researchers to work together, annotate, and discuss research papers in real-time, fostering team collaboration and knowledge sharing. 7. RAx. Researchers to perform efficient literature search and analysis, aiding in identifying relevant articles, saving time, and improving the quality of research. 8.

  4. Ace your research with these 5 literature review tools

    3. Zotero. A big part of many literature review workflows, Zotero is a free, open-source tool for managing citations that works as a plug-in on your browser. It helps you gather the information you need, cite your sources, lets you attach PDFs, notes, and images to your citations, and create bibliographies.

  5. How to Write a Literature Review

    Example literature review #4: "Learners' Listening Comprehension Difficulties in English Language Learning: A Literature Review ... Tip AI tools like ChatGPT can be effectively used to brainstorm ideas and create an outline for your literature review. However, trying to pass off AI-generated text as your own work is a serious offense. ...

  6. ATLAS.ti

    Finalize your literature review faster with comfort. ATLAS.ti makes it easy to manage, organize, and analyze articles, PDFs, excerpts, and more for your projects. Conduct a deep systematic literature review and get the insights you need with a comprehensive toolset built specifically for your research projects.

  7. 7 open source tools to make literature reviews easy

    2. Firefox. Linux distributions generally come with a free web browser, and the most popular is Firefox. Two Firefox plugins that are particularly useful for literature reviews are Unpaywall and Zotero. Keep reading to learn why. 3.

  8. Tools

    Free, open-source tool that "helps you upload and organize the results of a literature search for a systematic review. It also makes it possible for your team to screen, organize, and manipulate all of your abstracts in one place." -From Center for Evidence Synthesis in Health. SRDR Plus (Systematic Review Data Repository: Plus) An open-source ...

  9. Literature Review Software MAXQDA

    MAXQDA The All-in-one Literature Review Software. MAXQDA is the best choice for a comprehensive literature review. It works with a wide range of data types and offers powerful tools for literature review, such as reference management, qualitative, vocabulary, text analysis tools, and more.

  10. How to write a superb literature review

    The best proposals are timely and clearly explain why readers should pay attention to the proposed topic. It is not enough for a review to be a summary of the latest growth in the literature: the ...

  11. AI-Powered Research and Literature Review Tool

    Enago Read is an AI assistant that speeds up the literature review process, offering summaries and key insights to save researchers reading time. It boosts productivity with advanced AI technology and the Copilot feature, enabling real-time questions for deeper comprehension of extensive literature.

  12. PDF Conducting a Literature Review

    What is a Literature Review 2. Tools to help with the various stages of your review -Searching -Evaluating -Analysing and Interpreting -Writing -Publishing 3. Additional Resources ... Literature Review A literature review is a survey of scholarly sources that provides an overview of a particular topic. Literature reviews are a ...

  13. Semantic Scholar

    Semantic Scholar is a free, AI-powered research tool for scientific literature, based at the Allen Institute for AI. Learn More. About About Us Meet the Team Publishers Blog (opens in a new tab) AI2 Careers (opens in a new tab) Product Product Overview Semantic Reader Scholar's Hub Beta Program Release Notes. API

  14. Writing a Literature Review

    A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays).

  15. Best Literature Review Tool

    Our Excel export feature generates a literature synthesis matrix for you, so you can. Compare papers side by side for their study sizes, key contributions, limitations, and more. Export literature-review ready data in Excel, Word, RIS or Markdown format. Integrates with your reference manager and 'second brain' tools such as Roam, Notion ...

  16. AI Literature Review Generator

    Creates a comprehensive academic literature review with scholarly resources based on a specific research topic. HyperWrite's AI Literature Review Generator is a revolutionary tool that automates the process of creating a comprehensive literature review. Powered by the most advanced AI models, this tool can search and analyze scholarly articles, books, and other resources to identify key themes ...

  17. START HERE

    Steps to Completing a Literature Review. Find. Conduct searches for relevant information. Evaluate. Critically review your sources. Summarize. Determine the most important and relevant information from each source, theories, findings, etc. Synthesize. Create a synthesis matrix to find connections between resources, and ensure your sources ...

  18. Guidance to best tools and practices for systematic reviews

    These tools are widely accepted by methodologists; however, in the general medical literature, they are not uniformly selected for the critical appraisal of systematic reviews [88, 96]. To enable their uptake, Table 4.1 links review components to the corresponding appraisal tool items.

  19. Literature Review Generator by AcademicHelp

    With our Free Online Literature Review you will be able to finish your literature review assignments in just a few minutes. This will allow you to dedicate your free time to a) proofreading, and b) finishing or starting on more important tasks and projects. This tool can also help you understand the direction of your work, its structure, and ...

  20. Critical Appraisal Tools

    The structure of a literature review should include the following: An overview of the subject, issue, or theory under consideration, along with the objectives of the literature review, Division of works under review into themes or categories [e.g. works that support a particular position, those against, and those offering alternative approaches ...

  21. Critical Appraisal Tools and Reporting Guidelines

    Critical appraisal tools and reporting guidelines are the two most important instruments available to researchers and practitioners involved in research, evidence-based practice, and policymaking. Each has unique characteristics, and both play an essential role in evidence-based practice and decision-making.

  22. Literature Review

    Review the most influential work around any topic by area, genre, and time. Searches can span papers, patents, grants, and clinical trials, and can be limited to the past year, the past five years, or all time.

  23. AI Literature Review Generator

    These tools present well-structured, well-organized reviews that can guide you in writing your own literature review. A literature review generator is designed to find the literature most relevant to your research topic.

  24. Collaborative Skills Training Using Digital Tools: A Systematic

    The present systematic literature review highlighted a diversity of findings about the effects of digital tools on the development of collaborative skills. This diversity can be attributed not only to the variety of digital tools used by researchers, but also to the variety of measures used to assess collaborative skills.

  25. Identification of Problem-Solving Techniques in Computational Thinking

    The literature review and results-discussion sections show the highest number of mentions, which indicates that the 37 selected articles are related in terms of theoretical basis, study results, and discussion (thus theory reflects reality). One cited study (2017), for example, exercises abstraction through a concept-mapping tool in science to model the water cycle.

  26. Large Language Models for Cyber Security: A Systematic Literature Review

    In this survey, we conduct a comprehensive review of the literature on the application of LLMs in cybersecurity (LLM4Security). By collecting over 30K relevant papers and systematically analyzing 127 papers from top security and software engineering venues, we aim to provide a holistic view of how LLMs are being used to solve security tasks.

  27. Designing an evaluation tool for evaluating training programs of

    The study used a mixed-methods design. The first, qualitative phase produced an evaluation tool; the second phase evaluated it. In the first (divergent) phase, a literature review was followed by the preparation of a complete list of problems in the field of CSTC in medical schools.

  28. PrimerEvalPy: a tool for in-silico evaluation of primers for targeting

    Background: The selection of primer pairs in sequencing-based research can greatly influence the results, highlighting the need for a tool capable of analysing their performance in silico prior to sequencing. We therefore propose PrimerEvalPy, a Python-based package designed to test the performance of any primer or primer pair against any sequencing database.
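
    As a toy illustration of what in-silico primer evaluation involves (this is not PrimerEvalPy's actual API; the function and sequences below are hypothetical), a primer can be scored against a database by the fraction of sequences containing an exact binding site:

      # Toy sketch: coverage = fraction of target sequences that contain
      # the primer as an exact substring. Real tools also handle mismatches,
      # degenerate bases, and reverse complements.
      def primer_coverage(primer: str, sequences: list[str]) -> float:
          hits = sum(primer.upper() in seq.upper() for seq in sequences)
          return hits / len(sequences) if sequences else 0.0

      database = ["ACGTACGTTTGACGT", "TTGACGTACGT", "CCCCCCCC"]  # made-up
      print(primer_coverage("TTGACGT", database))  # -> 0.666...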

  29. Frontiers

    This study explores the implementation of Urban Living Labs (ULLs) in Higher Education Institutions (HEIs) to promote Education for Sustainable Development (ESD). It adopts a mixed-methods approach, combining a literature review, validation with experts in the field, and analysis of case studies. A structured evaluation tool is proposed based on three constructs, including Synergy.

  30. Association between problematic social networking use and anxiety

    A growing number of studies have reported that problematic social networking use (PSNU) is strongly associated with anxiety symptoms. However, because there are multiple anxiety subtypes, existing findings on the strength of this association vary widely, and no consensus has emerged. The current meta-analysis aimed to summarize studies exploring the relationship between PSNU and anxiety symptoms.
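
    For readers unfamiliar with the method: correlation meta-analyses typically convert each study's r to Fisher's z, pool the z values with inverse-variance weights, and convert back. A minimal fixed-effect sketch with made-up correlations and sample sizes (not the paper's data):

      import math

      # Pool correlations via Fisher's z; (r, n) pairs are invented.
      studies = [(0.35, 200), (0.20, 150), (0.42, 300)]

      num = den = 0.0
      for r, n in studies:
          z = math.atanh(r)  # Fisher's z = 0.5 * ln((1 + r) / (1 - r))
          w = n - 3          # weight = 1 / var(z), where var(z) = 1/(n - 3)
          num += w * z
          den += w
      pooled_r = math.tanh(num / den)
      print(f"pooled r = {pooled_r:.3f}")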