
How to Write a Literature Review (University of Oregon Libraries)

6. Synthesize

Synthesis Visualization

Synthesis matrix example.


  • Synthesis Worksheet

About Synthesis

What is synthesis? What synthesis is NOT.

Approaches to Synthesis

You can sort the literature in various ways: for example, by theme, chronologically, by methodology, or by theoretical approach.


How to Begin?

  • Read your sources carefully and find the main idea(s) of each source.
  • Look for similarities in your sources: which sources are talking about the same main ideas? (For example, sources that discuss the historical background of your topic.)
  • Use the Synthesis Worksheet or a synthesis matrix to get organized.

This work can be messy. Don't worry if you have to go through a few iterations of the worksheet or matrix as you work on your lit review!

Four Examples of Student Writing

In the four examples below, only ONE shows a good example of synthesis: the fourth column, Student D. For a web-accessible version, use the long-description link below.

Long description of "Four Examples of Student Writing" for web accessibility

  • Download a copy of the "Four Examples of Student Writing" chart

Click on the example to view the PDF: Personal Learning Environment chart (from Jennifer Lim).

  • URL: https://researchguides.uoregon.edu/litreview


Literature Synthesis 101

How To Synthesise The Existing Research (With Examples)

By: Derek Jansen (MBA) | Expert Reviewer: Eunice Rautenbach (DTech) | August 2023

One of the most common mistakes that students make when writing a literature review is that they err on the side of describing the existing literature rather than providing a critical synthesis of it. In this post, we’ll unpack what exactly synthesis means and show you how to craft a strong literature synthesis using practical examples.

This post is based on our popular online course, Literature Review Bootcamp . In the course, we walk you through the full process of developing a literature review, step by step. If it’s your first time writing a literature review, you definitely want to use this link to get 50% off the course (limited-time offer).

Overview: Literature Synthesis

  • What exactly does “synthesis” mean?
  • Aspect 1: Agreement
  • Aspect 2: Disagreement
  • Aspect 3: Key theories
  • Aspect 4: Contexts
  • Aspect 5: Methodologies
  • Bringing it all together

What does “synthesis” actually mean?

As a starting point, let’s quickly define what exactly we mean when we use the term “synthesis” within the context of a literature review.

Simply put, literature synthesis means going beyond just describing what everyone has said and found. Instead, synthesis is about bringing together all the information from various sources to present a cohesive assessment of the current state of knowledge in relation to your study’s research aims and questions .

Put another way, a good synthesis tells the reader exactly where the current research is “at” in terms of the topic you’re interested in – specifically, what’s known , what’s not , and where there’s a need for more research .

So, how do you go about doing this?

Well, there’s no “one right way” when it comes to literature synthesis, but we’ve found that it’s particularly useful to ask yourself five key questions when you’re working on your literature review. Having done so, you can then address them more articulately within your actual write-up. So, let’s take a look at each of these questions.


1. Points Of Agreement

The first question that you need to ask yourself is: “Overall, what things seem to be agreed upon by the vast majority of the literature?”

For example, if your research aim is to identify which factors contribute toward job satisfaction, you’ll need to identify which factors are broadly agreed upon and “settled” within the literature. Naturally, there may at times be some lone contrarian who holds a radical viewpoint, but, provided that the vast majority of researchers are in agreement, you can put these outliers to the side. That is, of course, unless your research aims to explore a contrarian viewpoint and there’s a clear justification for doing so.

Identifying what’s broadly agreed upon is an essential starting point for synthesising the literature, because you generally don’t want (or need) to reinvent the wheel or run down a road investigating something that is already well established . So, addressing this question first lays a foundation of “settled” knowledge.


2. Points Of Disagreement

Related to the previous point, but on the other end of the spectrum, is the equally important question: “Where do the disagreements lie?”

In other words, which things are not well agreed upon by current researchers? It’s important to clarify here that by disagreement, we don’t mean that researchers are (necessarily) fighting over it – just that there are relatively mixed findings within the empirical research , with no firm consensus amongst researchers.

This is a really important question to address as these “disagreements” will often set the stage for the research gap(s). In other words, they provide clues regarding potential opportunities for further research, which your study can then (hopefully) contribute toward filling. If you’re not familiar with the concept of a research gap, be sure to check out our explainer video covering exactly that .


3. Key Theories

The next question you need to ask yourself is: “Which key theories seem to be coming up repeatedly?”

Within most research spaces, you’ll find that you keep running into a handful of key theories that are referred to over and over again. Apart from identifying these theories, you’ll also need to think about how they’re connected to each other. Specifically, you need to ask yourself:

  • Are they all covering the same ground or do they have different focal points  or underlying assumptions ?
  • Do some of them feed into each other and if so, is there an opportunity to integrate them into a more cohesive theory?
  • Do some of them pull in different directions ? If so, why might this be?
  • Do all of the theories define the key concepts and variables in the same way, or is there some disconnect? If so, what’s the impact of this ?

Simply put, you’ll need to pay careful attention to the key theories in your research area, as they will need to feature within your theoretical framework , which will form a critical component within your final literature review. This will set the foundation for your entire study, so it’s essential that you be critical in this area of your literature synthesis.

If this sounds a bit fluffy, don’t worry. We deep dive into the theoretical framework (as well as the conceptual framework) and look at practical examples in Literature Review Bootcamp . If you’d like to learn more, take advantage of our limited-time offer to get 60% off the standard price.


4. Contexts

The next question that you need to address in your literature synthesis is an important one: “Which contexts have (and have not) been covered by the existing research?”

For example, sticking with our earlier hypothetical topic (factors that impact job satisfaction), you may find that most of the research has focused on white-collar , management-level staff within a primarily Western context, but little has been done on blue-collar workers in an Eastern context. Given the significant socio-cultural differences between these two groups, this is an important observation, as it could present a contextual research gap .

In practical terms, this means that you’ll need to carefully assess the context of each piece of literature that you’re engaging with, especially the empirical research (i.e., studies that have collected and analysed real-world data). Ideally, you should keep notes regarding the context of each study in some sort of catalogue or sheet, so that you can easily make sense of this before you start the writing phase. If you’d like, our free literature catalogue worksheet is a great tool for this task.
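If you like to keep this catalogue as a simple spreadsheet, a few lines of code can generate one for you. The sketch below is purely illustrative and not part of the original post: the column names and the two example entries are hypothetical placeholders. It writes a CSV file you can open in Excel or Google Sheets and then sort or filter by the context and method columns to spot contextual and methodological gaps.

    import csv

    # Hypothetical catalogue entries; replace these with your own studies.
    studies = [
        {"citation": "Author A (2019)", "context": "White-collar managers, Western setting",
         "population": "n = 120", "method": "Quantitative survey",
         "key_findings": "Autonomy linked to job satisfaction"},
        {"citation": "Author B (2021)", "context": "Blue-collar workers, Eastern setting",
         "population": "n = 35", "method": "Qualitative interviews",
         "key_findings": "Pay security reported as the dominant factor"},
    ]

    # One row per study, one column per attribute you want to track.
    with open("literature_catalogue.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(studies[0].keys()))
        writer.writeheader()
        writer.writerows(studies)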

5. Methodological Approaches

Last but certainly not least, you need to ask yourself the question: “What types of research methodologies have (and haven’t) been used?”

For example, you might find that most studies have approached the topic using qualitative methods such as interviews and thematic analysis. Alternatively, you might find that most studies have used quantitative methods such as online surveys and statistical analysis.

But why does this matter?

Well, it can run in one of two potential directions . If you find that the vast majority of studies use a specific methodological approach, this could provide you with a firm foundation on which to base your own study’s methodology . In other words, you can use the methodologies of similar studies to inform (and justify) your own study’s research design .

On the other hand, you might argue that the lack of diverse methodological approaches presents a research gap , and therefore your study could contribute toward filling that gap by taking a different approach. For example, taking a qualitative approach to a research area that is typically approached quantitatively. Of course, if you’re going to go against the methodological grain, you’ll need to provide a strong justification for why your proposed approach makes sense. Nevertheless, it is something worth at least considering.

Regardless of which route you opt for, you need to pay careful attention to the methodologies used in the relevant studies and provide at least some discussion about this in your write-up. Again, it’s useful to keep track of this on some sort of spreadsheet or catalogue as you digest each article, so consider grabbing a copy of our free literature catalogue if you don’t have anything in place.

Looking at the methodologies of existing, similar studies will help you develop a strong research methodology for your own study.

Bringing It All Together

Alright, so we’ve looked at five important questions that you need to ask (and answer) to help you develop a strong synthesis within your literature review.  To recap, these are:

  • Which things are broadly agreed upon within the current research?
  • Which things are the subject of disagreement (or at least, present mixed findings)?
  • Which theories seem to be central to your research topic and how do they relate or compare to each other?
  • Which contexts have (and haven’t) been covered?
  • Which methodological approaches are most common?

Importantly, you’re not just asking yourself these questions for the sake of asking them – they’re not just a reflection exercise. You need to weave your answers to them into your actual literature review when you write it up. How exactly you do this will vary from project to project depending on the structure you opt for, but you’ll still need to address them within your literature review, whichever route you go.

The best approach is to spend some time actually writing out your answers to these questions, as opposed to just thinking about them in your head. Putting your thoughts onto paper really helps you flesh out your thinking . As you do this, don’t just write down the answers – instead, think about what they mean in terms of the research gap you’ll present , as well as the methodological approach you’ll take . Your literature synthesis needs to lay the groundwork for these two things, so it’s essential that you link all of it together in your mind, and of course, on paper.



The Sheridan Libraries

Write a Literature Review

Get Organized

  • Lit Review Prep: Use this template to help you evaluate your sources, create article summaries for an annotated bibliography, and build a synthesis matrix for your lit review outline.

Synthesize your Information

Synthesize: combine separate elements to form a whole.

Synthesis Matrix

A synthesis matrix helps you record the main points of each source and document how sources relate to each other.

After summarizing and evaluating your sources, arrange them in a matrix or use a citation manager to help you see how they relate to each other and apply to each of your themes or variables.  

By arranging your sources by theme or variable, you can see how your sources relate to each other, and can start thinking about how you weave them together to create a narrative.
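For instance (a rough sketch that is not part of the Sheridan guide, with hypothetical themes, sources, and notes), the same theme-by-source arrangement can be kept in a plain Python dictionary while you are still collecting material:

    # Each theme maps to the sources that address it and a one-line note per source.
    matrix = {
        "Historical background": {
            "Source A (2015)": "Traces policy changes since 2000",
            "Source B (2018)": "Focuses on earlier reforms",
        },
        "Measurement approaches": {
            "Source B (2018)": "Validated survey instrument",
            "Source C (2020)": "Semi-structured interviews",
        },
    }

    # Print the matrix theme by theme to see which sources cluster together.
    for theme, sources in matrix.items():
        print(theme)
        for citation, note in sources.items():
            print(f"  {citation}: {note}")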

  • Step-by-Step Approach
  • Example Matrix from NSCU
  • Matrix Template
  • URL: https://guides.library.jhu.edu/lit-review


Literature Review Basics


Synthesis: What is it?

First, let's be perfectly clear about what synthesizing your research isn't:

  • It isn't just summarizing the material you read
  • It isn't generating a collection of annotations or comments (like an annotated bibliography)
  • It isn't compiling a report on every single thing ever written in relation to your topic

When you  synthesize  your research, your job is to help your reader understand the current state of the conversation on your topic, relative to your research question.  That may include doing the following:

  • Selecting and using representative work on the topic
  • Identifying and discussing trends in published data or results
  • Identifying and explaining the impact of common features (study populations, interventions, etc.) that appear frequently in the literature
  • Explaining controversies, disputes, or central issues in the literature that are relevant to your research question
  • Identifying gaps in the literature, where more research is needed
  • Establishing the discussion to which your own research contributes and demonstrating the value of your contribution

Essentially, you're telling your reader where they are (and where you are) in the scholarly conversation about your project.

Synthesis: How do I do it?

Synthesis, step by step.

This is what you need to do  before  you write your review.

  1. Identify and clearly describe your research question (you may find the Formulating PICOT Questions table at the Additional Resources tab helpful).
  2. Collect sources relevant to your research question.
  3. Organize and describe the sources you've found: your job is to identify what types of sources you've collected (reviews, clinical trials, etc.), identify their purpose (what are they measuring, testing, or trying to discover?), determine the level of evidence they represent (see the Levels of Evidence table at the Additional Resources tab), and briefly explain their major findings. Use a Research Table to document this step.
  4. Study the information you've put in your Research Table and examine your collected sources, looking for similarities and differences. Pay particular attention to populations, methods (especially relative to levels of evidence), and findings.
  5. Analyze what you learn in step 4 using a tool like a Synthesis Table. Your goal is to identify relevant themes, trends, gaps, and issues in the research. Your literature review will collect the results of this analysis and explain them in relation to your research question.

Analysis tips

  • Sometimes, what you don't find in the literature is as important as what you do find; look for questions that the existing research hasn't answered yet.
  • If any of the sources you've collected refer to or respond to each other, keep an eye on how they're related; it may provide a clue as to whether or not study results have been successfully replicated.
  • Sorting your collected sources by level of evidence can provide valuable insight into how a particular topic has been covered, and it may help you to identify gaps worth addressing in your own work.
  • URL: https://usi.libguides.com/literature-review-basics

How to Synthesize Written Information from Multiple Sources

By Shona McCombes, Content Manager (Scribbr) | Saul McLeod, PhD, Editor-in-Chief (Simply Psychology)

When you write a literature review or essay, you have to go beyond just summarizing the articles you’ve read – you need to synthesize the literature to show how it all fits together (and how your own research fits in).

Synthesizing simply means combining. Instead of summarizing the main points of each source in turn, you put together the ideas and findings of multiple sources in order to make an overall point.

At the most basic level, this involves looking for similarities and differences between your sources. Your synthesis should show the reader where the sources overlap and where they diverge.

Unsynthesized Example

Franz (2008) studied undergraduate online students. He looked at 17 females and 18 males and found that none of them liked APA. According to Franz, the evidence suggested that all students are reluctant to learn citation style. Perez (2010) also studied undergraduate students. She looked at 42 females and 50 males and found that males were significantly more inclined to use citation software (p < .05). Findings suggest that females might graduate sooner. Goldstein (2012) looked at British undergraduates. Among a sample of 50, all female, participants were confident in their abilities to cite and were eager to write their dissertations.

Synthesized Example

Studies of undergraduate students reveal conflicting conclusions regarding relationships between advanced scholarly study and citation efficacy. Although Franz (2008) found that no participants enjoyed learning citation style, Goldstein (2012) determined in a larger study that all participants felt comfortable citing sources, suggesting that variables among participant and control group populations must be examined more closely. Although Perez (2010) expanded on Franz’s original study with a larger, more diverse sample…

Step 1: Organize your sources

After collecting the relevant literature, you’ve got a lot of information to work through, and no clear idea of how it all fits together.

Before you can start writing, you need to organize your notes in a way that allows you to see the relationships between sources.

One way to begin synthesizing the literature is to put your notes into a table. Depending on your topic and the type of literature you’re dealing with, there are a couple of different ways you can organize this.

Summary table

A summary table collates the key points of each source under consistent headings. This is a good approach if your sources tend to have a similar structure – for instance, if they’re all empirical papers.

Each row in the table lists one source, and each column identifies a specific part of the source. You can decide which headings to include based on what’s most relevant to the literature you’re dealing with.

For example, you might include columns for things like aims, methods, variables, population, sample size, and conclusion.

For each study, you briefly summarize each of these aspects. You can also include columns for your own evaluation and analysis.


The summary table gives you a quick overview of the key points of each source. This allows you to group sources by relevant similarities, as well as noticing important differences or contradictions in their findings.
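If you would rather build the summary table programmatically, here is a minimal sketch using pandas and the Franz, Perez, and Goldstein studies from the examples earlier on this page; the "aims" and "methods" entries are assumptions added purely for illustration.

    import pandas as pd

    # One row per source, one column per aspect (aims, methods, sample size, conclusion).
    summary = pd.DataFrame([
        {"source": "Franz (2008)", "aims": "Attitudes toward APA style",
         "methods": "Survey (assumed)", "sample_size": 35,
         "conclusion": "No participants liked APA"},
        {"source": "Perez (2010)", "aims": "Citation software use",
         "methods": "Survey (assumed)", "sample_size": 92,
         "conclusion": "Males more inclined to use citation software"},
        {"source": "Goldstein (2012)", "aims": "Citation confidence",
         "methods": "Survey (assumed)", "sample_size": 50,
         "conclusion": "All participants confident citing sources"},
    ]).set_index("source")

    print(summary)                                # quick overview of every source
    print(summary[summary["sample_size"] >= 50])  # filter sources by a shared feature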

Synthesis matrix

A synthesis matrix is useful when your sources are more varied in their purpose and structure – for example, when you’re dealing with books and essays making various different arguments about a topic.

Each column in the table lists one source. Each row is labeled with a specific concept, topic or theme that recurs across all or most of the sources.

Then, for each source, you summarize the main points or arguments related to the theme.


The purpose of the table is to identify the common points that connect the sources, as well as the points where they diverge or disagree.
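As a companion sketch (again illustrative only, reusing the hypothetical Franz, Perez, and Goldstein entries), the synthesis matrix is essentially the summary table transposed and regrouped: each column is a source, each row is a recurring theme, and each cell holds that source's main point on the theme.

    import pandas as pd

    # Columns are sources, rows are themes; blank cells mean the source
    # does not address that theme.
    matrix = pd.DataFrame({
        "Franz (2008)": {
            "Attitudes to citation style": "None of 35 students liked APA",
            "Gender differences": "",
        },
        "Perez (2010)": {
            "Attitudes to citation style": "",
            "Gender differences": "Males more likely to use citation software",
        },
        "Goldstein (2012)": {
            "Attitudes to citation style": "All 50 participants confident citing",
            "Gender differences": "Sample was all female",
        },
    })

    print(matrix)  # reading across a row shows where sources overlap or diverge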

Step 2: Outline your structure

Now you should have a clear overview of the main connections and differences between the sources you’ve read. Next, you need to decide how you’ll group them together and the order in which you’ll discuss them.

For shorter papers, your outline can just identify the focus of each paragraph; for longer papers, you might want to divide it into sections with headings.

There are a few different approaches you can take to help you structure your synthesis.

If your sources cover a broad time period, and you found patterns in how researchers approached the topic over time, you can organize your discussion chronologically .

That doesn’t mean you just summarize each paper in chronological order; instead, you should group articles into time periods and identify what they have in common, as well as signalling important turning points or developments in the literature.

If the literature covers various different topics, you can organize it thematically .

That means that each paragraph or section focuses on a specific theme and explains how that theme is approached in the literature.


If you’re drawing on literature from various different fields or they use a wide variety of research methods, you can organize your sources methodologically .

That means grouping together studies based on the type of research they did and discussing the findings that emerged from each method.

If your topic involves a debate between different schools of thought, you can organize it theoretically .

That means comparing the different theories that have been developed and grouping together papers based on the position or perspective they take on the topic, as well as evaluating which arguments are most convincing.

Step 3: Write paragraphs with topic sentences

What sets a synthesis apart from a summary is that it combines various sources. The easiest way to think about this is that each paragraph should discuss a few different sources, and you should be able to condense the overall point of the paragraph into one sentence.

This is called a topic sentence , and it usually appears at the start of the paragraph. The topic sentence signals what the whole paragraph is about; every sentence in the paragraph should be clearly related to it.

A topic sentence can be a simple summary of the paragraph’s content:

“Early research on [x] focused heavily on [y].”

For an effective synthesis, you can use topic sentences to link back to the previous paragraph, highlighting a point of debate or critique:

“Several scholars have pointed out the flaws in this approach.” “While recent research has attempted to address the problem, many of these studies have methodological flaws that limit their validity.”

By using topic sentences, you can ensure that your paragraphs are coherent and clearly show the connections between the articles you are discussing.

As you write your paragraphs, avoid quoting directly from sources: use your own words to explain the commonalities and differences that you found in the literature.

Don’t try to cover every single point from every single source – the key to synthesizing is to extract the most important and relevant information and combine it to give your reader an overall picture of the state of knowledge on your topic.

Step 4: Revise, edit and proofread

Like any other piece of academic writing, synthesizing literature doesn’t happen all in one go – it involves redrafting, revising, editing and proofreading your work.

Checklist for Synthesis

  •   Do I introduce the paragraph with a clear, focused topic sentence?
  •   Do I discuss more than one source in the paragraph?
  •   Do I mention only the most relevant findings, rather than describing every part of the studies?
  •   Do I discuss the similarities or differences between the sources, rather than summarizing each source in turn?
  •   Do I put the findings or arguments of the sources in my own words?
  •   Is the paragraph organized around a single idea?
  •   Is the paragraph directly relevant to my research question or topic?
  •   Is there a logical transition from this paragraph to the next one?

Further Information

How to Synthesise: a Step-by-Step Approach

Help…I've Been Asked to Synthesize!

Learn how to Synthesise (combine information from sources)

How to write a Psychology Essay


Duke University Libraries

Literature Reviews

5. Synthesize your findings

In the synthesis step of a literature review, researchers analyze and integrate information from selected sources to identify patterns and themes. This involves critically evaluating findings, recognizing commonalities, and constructing a cohesive narrative that contributes to the understanding of the research topic.

Synthesis | Not synthesis
✔️ Analyzing and integrating information | ❌ Simply summarizing individual studies or articles
✔️ Identifying patterns and themes | ❌ Listing facts without interpretation
✔️ Critically evaluating findings | ❌ Copy-pasting content from sources
✔️ Constructing a cohesive narrative | ❌ Providing personal opinions
✔️ Recognizing commonalities | ❌ Focusing only on isolated details
✔️ Generating new perspectives | ❌ Repeating information verbatim

Here are some examples of how to approach synthesizing the literature:

💡 By themes or concepts

🕘 Historically or chronologically

📊 By methodology

These organizational approaches can also be used when writing your review. It can be beneficial to begin organizing your references by these approaches in your citation manager by using folders, groups, or collections.

Create a synthesis matrix

A synthesis matrix allows you to visually organize your literature. Each column lists one source, and each row gathers what your sources say about a shared finding or theme. The blank template pairs a topic line with a column per source (Source #2, Source #3, Source #4, and so on) and empty rows to fill in as you read.

Filled example:

Topic: Chemical exposure to workers in nail salons

Sources (columns): Gutierrez et al. 2015 | Hansen 2018 | Lee et al. 2014

Example row entries:

  • "Participants reported multiple episodes of asthma over one year" (p. 58)
  • "Nail salon workers who did not wear gloves routinely reported increased episodes of contact dermatitis" (p. 115)
  • URL: https://guides.library.duke.edu/litreviews


Literature Reviews (Georgia Tech Library)


If you need any assistance, please contact the library staff at the Georgia Tech Library Help website . 

Analysis, synthesis, critique

Literature reviews build a story. You are telling the story about what you are researching. Therefore, a literature review is a handy way to show that you know what you are talking about. To do this, here are a few important skills you will need.

Skill #1: Analysis

Analysis means that you have carefully read a wide range of the literature on your topic, understood the main themes, and identified how the literature relates to your own topic. Carefully read and analyze the articles you find in your search, and take notes. Notice the main point of the article, the methodologies used, what conclusions are reached, and what the main themes are. Most bibliographic management tools have the capability to keep notes on each article you find, tag articles with keywords, and organize them into groups.

Skill #2: Synthesis

After you’ve read the literature, you will start to see themes and categories emerge, notice research trends, see where scholars agree or disagree, and understand how works in your chosen field or discipline are related. One way to keep track of this is by using a Synthesis Matrix.

Skill #3: Critique

As you are writing your literature review, you will want to apply a critical eye to the literature you have evaluated and synthesized. Consider the strong arguments you will make, contrasted with the potential gaps in previous research. The words you choose to report your critiques of the literature are not neutral. For instance, using a word like “attempted” suggests that a researcher tried something but was not successful. For example:

There were some attempts by Smith (2012) and Jones (2013) to integrate a new methodology in this process.

On the other hand, using a word like “proved” or a phrase like “produced results” evokes a more positive argument. For example:

The new methodologies employed by Blake (2014) produced results that provided further evidence of X.

In your critique, you can point out where you believe there is room for more coverage of a topic, or further exploration of a sub-topic.

Need more help?

If you are looking for more detailed guidance about writing your dissertation, please contact the folks in the Georgia Tech Communication Center .

  • URL: https://libguides.library.gatech.edu/litreview


Writing the Literature Review (Middle Georgia State University Library)


Synthesizing

What is "Synthesis"?

Synthesis refers to combining separate elements to create a whole. When reading through your sources (peer-reviewed journal articles, books, research studies, white papers, etc.), pay attention to relationships between the studies and between groups within the studies, and look for any patterns, similarities, or differences. Pay attention to methodologies, unexplored themes, and anything that may represent a "gap" in the literature. These "gaps" are things you will want to identify in your literature review.

  • Using a Synthesis Matrix to Plan a Literature Review Introduction to synthesis matrices, and explanation of the difference between synthesis and analysis. (Geared towards Health Science/ Nursing but applicable for other literature reviews) ***Includes a synthesis matrix example***
  • Using a Spider Diagram Organize your thoughts with a spider diagram

Ready, Set...Synthesize

  • Create an outline that puts your topics (and subtopics) into a logical order
  • Look at each subtopic that you have identified and determine what the articles in that group have in common with each other
  • Look at the articles in those subtopics that you have identified and look for areas where they differ.
  • If you spot findings that are contradictory, what differences do you think could account for those contradictions?  
  • Determine what general conclusions can be reported about that subtopic, and how it relates to the group of studies that you are discussing
  • As you write, remember to follow your outline, and use transitions as you move between topics 

Galvan, J. L. (2006). Writing literature reviews (3rd ed.). Glendale, CA: Pyrczak Publishing.

  • URL: https://guides.mga.edu/TheLiteratureReviewANDYou


Harvey Cushing/John Hay Whitney Medical Library


YSN Doctoral Programs: Steps in Conducting a Literature Review


What is a literature review?

A literature review is an integrated analysis, not just a summary, of scholarly writings and other relevant evidence related directly to your research question. That is, it represents a synthesis of the evidence that provides background information on your topic and shows an association between the evidence and your research question.

A literature review may be a stand-alone work or the introduction to a larger research paper, depending on the assignment. Rely heavily on the guidelines your instructor has given you.

Why is it important?

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Discovers relationships between research studies/ideas.
  • Identifies major themes, concepts, and researchers on a topic.
  • Identifies critical gaps and points of disagreement.
  • Discusses further research questions that logically come out of the previous studies.


1. Choose a topic. Define your research question.

Your literature review should be guided by your central research question.  The literature represents background and research developments related to a specific research question, interpreted and analyzed by you in a synthesized way.

  • Make sure your research question is not too broad or too narrow.  Is it manageable?
  • Begin writing down terms that are related to your question. These will be useful for searches later.
  • If you have the opportunity, discuss your topic with your professor and your classmates.

2. Decide on the scope of your review

How many studies do you need to look at? How comprehensive should it be? How many years should it cover? 

  • This may depend on your assignment.  How many sources does the assignment require?

3. Select the databases you will use to conduct your searches.

Make a list of the databases you will search. 

Where to find databases:

  • use the tabs on this guide
  • Find other databases in the Nursing Information Resources web page
  • More on the Medical Library web page
  • ... and more on the Yale University Library web page

4. Conduct your searches to find the evidence. Keep track of your searches.

  • Use the key words in your question, as well as synonyms for those words, as terms in your search. Use the database tutorials for help.
  • Save the searches in the databases. This saves time when you want to redo, or modify, the searches. It is also helpful to use them as a guide if the searches are not finding any useful results.
  • Review the abstracts of research studies carefully. This will save you time.
  • Use the bibliographies and references of research studies you find to locate others.
  • Check with your professor, or a subject expert in the field, if you are missing any key works in the field.
  • Ask your librarian for help at any time.
  • Use a citation manager, such as EndNote as the repository for your citations. See the EndNote tutorials for help.

Review the literature

Some questions to help you analyze the research:

  • What was the research question of the study you are reviewing? What were the authors trying to discover?
  • Was the research funded by a source that could influence the findings?
  • What were the research methodologies? Analyze its literature review, the samples and variables used, the results, and the conclusions.
  • Does the research seem to be complete? Could it have been conducted more soundly? What further questions does it raise?
  • If there are conflicting studies, why do you think that is?
  • How are the authors viewed in the field? Has this study been cited? If so, how has it been analyzed?

Tips: 

  • Review the abstracts carefully.  
  • Keep careful notes so that you may track your thought processes during the research process.
  • Create a matrix of the studies for easy analysis, and synthesis, across all of the studies.
  • URL: https://guides.library.yale.edu/YSNDoctoral

AIMS Public Health, 3(1), 2016

What Synthesis Methodology Should I Use? A Review and Analysis of Approaches to Research Synthesis

Kara Schick-Makaroff

1 Faculty of Nursing, University of Alberta, Edmonton, AB, Canada

Marjorie MacDonald

2 School of Nursing, University of Victoria, Victoria, BC, Canada

Marilyn Plummer

3 College of Nursing, Camosun College, Victoria, BC, Canada

Judy Burgess

4 Student Services, University Health Services, Victoria, BC, Canada

Wendy Neander

Associated Data: Additional File 1

Types of Research Synthesis | Key Characteristics | Purpose | Methods | Product
CONVENTIONAL: Integrative review

“The integrative literature review is a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated” [ , p.356].

Integrative literature reviews include studies using diverse methodologies (i.e., experimental and non-experimental research, as well as qualitative research) in order to more fully understand a phenomenon of interest. It may also include theoretical and empirical literature.

Start by clearly identifying the problem that the review is addressing and the purpose of the review. There usually is not a specific research question, but rather a research purpose.

The quality of primary sources may be appraised using broad criteria. How quality is evaluated will depend upon the sampling frame .
Integrative reviews are used to address mature topics in order to re-conceptualize the expanding and diverse literature on the topic. They are also used to comprehensively review new topics in need of preliminary conceptualization .

Integrative reviews should ultimately present the “state of the art” of knowledge, depict the breadth and depth of the topic, and contribute to greater understanding of the phenomenon .
Integrative reviews generally contain similar steps , , which include the following: , is one overarching approach commonly used. Conclusions are often presented in a table/diagram. Explicit details from primary sources to support conclusions must be provided to demonstrate a logical chain of evidence.

Torraco suggests they can be represented in four forms:
Results should emphasize implications for policy/practice .
QUANTITATIVE: Systematic review (SR)

A SR is a review of literature that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyze data from the studies. Conducting a SR is analogous to conducting a primary study in that there are steps and protocols. It may or may not be done in conjunction with a meta-analysis.

In Cochrane , a SR is identified as the highest form of evidence in support of interventions. By contrast, the Joanna Briggs Institute does not define a SR as necessarily the highest form of evidence.

As noted below, a meta-analysis is always a SR, but a SR is not always a meta-analysis.

There is nothing that specifies data have to be quantitative, and the definition can apply to qualitative findings. Generally, however, the term has been used most frequently to apply to reviews of quantitative studies – traditional RCTs and experimental or quasi-experimental designs. More recently, both the Campbell and the Cochrane collaborations have been grappling with the need to, and the process of, integrating qualitative research into a SR. A number of studies have been published that do this , , , – .

A well-defined research question is required.

The Quality Appraisal section under MA above also applies to SR. Some researchers are developing standard reliable and valid quality appraisal tools to judge the quality of primary studies but there remains no consensus on which tools should be used. The Joanna Briggs Institute has developed their own criteria to ensure that only the highest quality studies are included in SRs for nursing, but they hold that studies from any methodological position are relevant.
The purpose of a SR is to integrate empirical research for the purpose of generalizing from a group of studies. The reviewer is also seeking to discover the limits of generalization .

Often, the review focuses on questions of intervention effectiveness. Thus, the intent is to summarize across studies to obtain a summative judgment about the effectiveness of interventions. However, the Joanna Briggs Institute suggests that for nursing, there is a concern not just with effectiveness but also with questions of appropriateness, meaningfulness and feasibility of health practices and delivery methods. Thus, SR's may have purposes other than to assess the effectiveness of interventions.
A number of authors have provided guidelines for conducting a SR but they generally contain similar steps: The products of a SR may include:
QUANTITATIVE: Meta-analysis (M-A)

M-A is the statistical analysis of a large collection of results from individual studies (usually interventions) for the purposes of integrating the findings, based on conversion to a common metric (effect size) to determine the overall effect and its magnitude. The term was coined by Gene Glass – but dates back to 1904 . A M-A is always a SR (see above).

Data are from quantitative research studies and findings, primarily randomized control trials. Increasingly there is use of experimental, quasi-experimental and some types of observational studies. Each primary study is abstracted and coded into a database.

A clear, well-defined research question or hypothesis is required.

Articles are usually appraised according to a set of pre-defined criteria but these criteria vary considerably and there are many methodological limitations . Lower quality studies are not necessarily excluded and there is some debate about whether these should be included , . When lower quality studies are included, the validity of the findings is often discussed in relation to the study quality.
Analytic M-As are conducted for the purpose of summarizing and integrating the results of individual primary studies to increase the power for detecting intervention effects, which may be small and insignificant in the individual studies – .

Exploratory M-As are conducted to resolve controversy in a field or to pose and answer new questions. The main concern is to explain the variation in effect sizes.
Specific steps include : The product for M-A includes a narrative summary of the findings with a conclusion about the effectiveness of interventions.
QUALITATIVE: Meta-study

“Meta-study is a research approach involving analysis of the theory, methods, and findings of qualitative research and the synthesis of these insights into new ways of thinking about phenomenon” [ , p.1].

Three analytic components are undertaken prior to synthesis. Data includes qualitative findings (meta-data), research methods (meta-method), and/or philosophical/theoretical perspectives (meta-theory).

A relevant, well-defined research question is used.

According to Paterson et al. , primary articles are appraised according to specific criteria; however the specific appraisal will depend on the requirements of the meta-study. Studies of poor quality will be excluded. Data from included studies may also be excluded if reported themes are not supported by the presented data.
Analysis of research findings, methods, and theory across qualitative studies are compared and contrasted to create a new interpretation .Paterson et al. propose a clear set of techniques: Through the three meta-study processes, researchers create a “meta-synthesis” which brings together ideas to develop a mid-range theory as the product.
QUALITATIVE: Meta-ethnography

Meta-ethnography entails choosing relevant empirical studies to synthesize through repetitive reading while noting metaphors – . Noblit and Hare explain that “metaphors” refer to “themes, perspectives, organizers, and/or concepts revealed by qualitative studies” [ , p.15]. These metaphors are then used as data for the synthesis through (at least) one of three strategies including reciprocal translation, refutational synthesis, and/or line of argument syntheses. A meta-ethnographic synthesis is the creation of interpretive (abstract) explanations that are essentially metaphoric. The goal is to create, in a reduced form, a representation of the abstraction through metaphor, all the while preserving the relationships between concepts .

Qualitative research studies and findings on a specific topic.

An “intellectual interest” [ , p.26] begins the process. Then, a relevant research question, aim, or purpose is developed.

Researchers are divided on the merits of critical appraisal and whether or not it should be a standard element in meta-ethnography . Some researchers choose to follow pre-determined criteria based on critical appraisal [e.g., ], whereas others do not critically appraise.
To synthesize qualitative studies through a building of “comparative understanding” [ , p.22] so that the result is greater than the sum of the parts.

Noblit and Hare summarize that meta-ethnography is “a form of synthesis for ethnographic or other interpretive studies. It enables us to talk to each other about our studies; to communicate to policy makers, concerned citizens, and scholars what interpretive research reveals; and to reflect on our collective craft and the place of our own studies within it” [ , p.14].
Methods used in meta-ethnography generally following the following: .

Noblit and Hare identified three possible analysis strategies (all do not have to be completed):
The product of a meta-ethnography is a mid-range theory that has greater explanatory power than could be otherwise achieved in a conventional literature review.
QUALITATIVE: Grounded formal theory (GFT)

A grounded formal theory (GFT) is a synthesis of substantive grounded theories (GTs) to produce a higher order, more abstract theory that goes beyond the specifics of the original theories. GFT takes into account the conditions under which the primary study data were collected and analyzed to develop a more generalized and abstract model .

Substantive GTs were originally constructed using the methodology developed by Glaser & Strauss . While some synthesis approaches emphasize including all possible primary GT studies, the concept of saturation in GFT (see Methods column) allows limiting the number of reviewed papers to emphasize robustness rather than completeness .

GFT begins with a phenomenon of focus . Analytic questions and the overall research question emerge throughout the process.

There is no discussion in the GFT literature about critically appraising the studies to be included. However, the nature of the analytic process suggests that critical appraisal may not be relevant. The authenticity and accuracy of data in a GFT are not an issue because, for the purposes of generating theory, what is important is the conceptual category and not the accuracy of the evidence. The constant comparative method of GFT will correct for such inaccuracies because each concept must “earn” its way into the theory by repeatedly showing up – .
The intent of GFT is to expand the applicability of individual GTs by synthesizing the findings to provide a broad meaning that is based in data and is applicable to people who experience a common phenomenon across populations and context .

The focus is on the conditions under which theoretical generalizations apply. GFT aims “to bring cultural and individual differences into dialogue with each other by seeking a metaphor through which those differences can be understood by others” [ , p.1354].
GFT uses the same methods that were used to create the original GTs in the synthesis , . Specific elements of the analytic process include: , . A GFT is a mid-range GT that has “fit, work and grab”: that is, it fits the data (concepts and categories from primary studies), works to explain the phenomenon under review, and resonates with the readers' experiences and understandings.

Thorne et al. suggest that a GFT is “an artistic explanation that works for now, a model created on the basis of limited materials and a specific, situated perspective within known and unconscious limits of representation” [ , p.1354].
QUALITATIVE: Concept analysis

Concept analysis is a systematic procedure to extract attributes of a concept from literature, definitions and case examples to delineate the meaning of that concept with respect to a certain domain or context.

Most writings on concept analysis do not specify the data type. However, our scan of the methodological and empirical literature on concept analysis suggests that although the analytic approach in concept analysis is qualitative, quantitative study designs and data can be used to address the questions related to defining the meaning of a concept [e.g. , – ].

Requires the researcher to isolate or identify a conceptual question or concept of interest.

Quality appraisal is not typically attended to in concept analyses. Rather, researchers are interested in all instances of actual use of a concept (or surrogate terms) .
Concept analysis is used to extend the theoretical meaning of a concept or to understand a conceptual practice problem – . In this case, concepts are cognitive descriptive meanings utilized for theoretical or practical purposes.

Concept analysis is used to identify, clarify, and refine or define the meaning of a concept and can be used as a first step in theory development , .
There are varied procedural techniques attributed to various authors such as Wilson , Walker & Avant , Chinn & (Jacobs) Kramer – , Rodgers & Knafl, , Rodgers , Schwartz-Barcott & Kim , and Morse .

Despite varied techniques, steps generally include: , , .
Concept analysis generates a definition of a concept that may be used to operationalize phenomena for further research study or theory development .
EMERGING: Scoping review

Although no universal definition exists, there are some common elements of scoping reviews , . They are exploratory projects that systematically map the literature on a topic, identifying the key concepts, theories, sources of evidence, and gaps in the research. It involves systematically selecting, collecting and summarizing knowledge in a broad area .

A scoping review is used to address broad topics where many different study designs and methods might be applicable. It may be conducted as part of an ongoing review, or as a stand-alone summary of research. Whereas a systematic review assesses a narrow range of quality-assessed studies to synthesize or aggregate findings, a scoping review assesses a much broader range of literature with a wide focus and does not synthesize or aggregate the findings .

Includes studies using any data type or method. May include empirical, theoretical or conceptual papers. Exclusion and inclusion criteria are inductively derived and based on relevance rather than on the quality of the primary studies or articles .

The question is stated broadly and often becomes refined as the study progresses. One or more general questions may guide the review.

The scoping review does not provide an appraisal of the quality of the evidence. It presents the existing literature without weighting the evidence in relation to specific interventions.
The purpose of a scoping review is to examine the extent, range and nature of research activity in an area. It is done to identify where there is sufficient evidence to conduct a full synthesis or to determine that insufficient evidence exists and additional primary research is needed , . It may be done for the purpose of disseminating research findings or to clarify working definitions and the conceptual boundaries of a topic area .Arksey and O'Malley recommend a 5 step process for conducting a scoping review:
More recently, Levac et al. have proposed recommendations to clarify and enhance each stage of the framework described above.
The product of a scoping review will depend on the purpose for which it is conducted. In general, however, the narrative report provides an overview of all reviewed material.

The product generally includes:
EMERGING: Rapid review

Rapid review of the literature provides a quick, rather than comprehensive, overview of the literature on a narrowly defined issue. Rapid review evolved out of a need to inform policy makers about issues and interventions in a timely manner. It is often proposed as an intermediary step to be followed by a more comprehensive review.

The literature is often narrowly defined, focusing on a specific issue or a specific local, regional, or federal context . It can include diverse study designs, methods, and data types as well as peer reviewed and gray literature.

Rapid reviews require a thorough understanding of the intended audience and a specific, focused research question.

Rapid reviews typically do not include an assessment of the quality of the literature, nor do they always include the views of experts and/or reviews by peers.
The purpose is to produce a fast review of the literature, within a defined and usually limited time frame, on a question of immediate importance to a stakeholder group. There is no standardized methodology as yet, but the depth and breadth of the review depends upon the specific purpose and the allotted time frame. Rapid reviews typically take one to nine months.

It is important that those conducting a rapid review describe the methodology in detail to promote transparency, support transferability, and avoid misrepresenting the veracity of the findings.
Typically, a concise report that answers the specific review question is written for macro-level decision-makers.

Meta-narrative synthesis (MNS) is a new form of systematic review that addresses the issues of synthesizing a large and complex body of data from diverse and heterogeneous sources. At the same time, it is systematic in that it is conducted “according to an explicit, rigorous and transparent method” [p. 418].

The approach moves from logico-scientific reasoning (which underlies many approaches to synthesis) to narrative-interpretive reasoning. The unit of analysis for the synthesis is the unfolding “storyline” of a research tradition over time. Five key principles underlie the methodology: pragmatism, pluralism, historicity, contestation, and peer review.

This methodology involves the judicious combination of qualitative and quantitative evidence, and the theoretical and empirical literature.

The original research question is outlined in a broad, open-ended format, and may shift and change through the process.

MNS uses the criteria of the research tradition of the primary study to judge the quality of the research, generally as set out in key sources within that tradition.
The purpose is to summarize, synthesize, and interpret a diverse body of literature from multiple traditions that use different methods, theoretical perspectives, and data types. The steps for conducting an MNS, and the products it yields, are set out in detail by its developers.

A realist synthesis is a review of complex social interventions and programs that seeks to unpack the mechanisms by which complex programs produce outcomes, and the context in which the relationship occurs. This is in contrast to systematic reviews, which aim to synthesize studies on whether interventions are effective. Realist synthesis seeks to answer the question: What works for whom, in what ways and under what circumstances?

This form of synthesis represents a review logic, not a review technique. Instead of a replicable method that follows rigid rules, the logic of realist review is based on principles. It reflects a shift away from an ontology of empirical realism to one of critical realism.

There is no specific data preference; the review will include quantitative, qualitative, and grey literature. Because the focus is on the mechanisms of action and their context, seemingly disparate bodies of literature and diverse methodologies are included. The focus is upon literature that emphasizes process, with detailed descriptions of the interventions and context.

The review question is carefully articulated, prioritizing different aspects of an intervention. It can be a broad question.

Realist review supports the principle that high quality evidence should be used but takes a different position than in systematic reviews on how the evidence is to be judged. It rejects a hierarchical approach to quality because multiple methods are needed to identify all aspects of the context, mechanisms and outcomes. Appraisal checklists are viewed skeptically because they cannot be applied evenly across the diverse study types and methods being reviewed. Thus, quality appraisal is seen as occurring in stages with a focus on the relevance of the study or article to the theory under consideration, and the extent to which an inference drawn has sufficient weight to make a credible contribution to the test of a particular intervention theory.
The purpose of a realist synthesis is to guide program and policy development by providing decision makers with a set of program theories that identify potential policy levers for change. Within its explanatory intent, there are four general purposes. Pawson et al. identify five steps for conducting the review. Pawson explains that realist synthesis ends up with useful, middle-range theory. However, the product of a realist review combines theoretical understanding with empirical evidence. It focuses on explaining the relationships among the context in which an intervention takes place, the mechanisms by which it works, and the outcomes produced.
Recommendations for dissemination and implementation are explicitly articulated. The result is a series of contextualized decision points that describe the contingencies of effectiveness. That is, a realist review provides an explanatory analysis that answers the original question of “what works for whom, in what circumstances, in what respects, and how” [p. 21].

Critical interpretive synthesis (CIS) is a methodology with an explicit orientation to theory generation, developed to respond to the need identified in the literature for rigorous methods to synthesize diverse types of research evidence generated by diverse methodologies, particularly when the body of evidence is very complex. Thus, it was developed to address the limitations of conventional systematic review techniques. It involves an iterative process and recognizes the need for flexibility and reflexivity. It addresses the criticism that many approaches to syntheses are insufficiently critical and do not question the epistemological and normative assumptions reflected in the literature. CIS is “sensitized to the kinds of processes involved in a conventional systematic review while drawing on a distinctively qualitative tradition of inquiry” [p. 35].

CIS utilizes data from quantitative and qualitative empirical studies, conceptual and theoretical papers, reviews and commentaries.

It is neither possible nor desirable to specify a precise review question in advance. Rather the process is highly iterative and may not be finalized until the end of the review. There is no hierarchy of designs for determining the quality of qualitative studies and, furthermore, no consensus exists on whether qualitative studies should even be assessed for quality. Studies for inclusion are not selected on the basis of study design or methodological quality. Rather, papers that are relevant are prioritized. However, papers that are determined to be fatally flawed are excluded on the basis of a set of questions for determining quality. Often, however, judgments about quality are deferred until the synthesis phase because even methodologically weak papers can provide important theoretical or conceptual insights.
The purpose of CIS is to develop an in-depth understanding of an issue/research question “by drawing on broadly relevant literature to develop concepts and theories that integrate those concepts” [p. 71]. The overarching aim is to generate theory. The developers of CIS explicitly reject a staged approach to the review. Rather, the processes are iterative, interactive, dynamic and recursive. The product is a “synthesizing argument” that “links existing constructions from the findings to ‘synthetic constructs’ (new constructs generated through synthesis)” [p. 71]. The synthesizing argument integrates evidence from across the studies in the review into a coherent theoretical framework. This may be represented as a “conceptual map” that identifies the main synthetic constructs and illustrates the relationships among them.

Background

When we began this process, we were doctoral students and a faculty member in a research methods course. As students, we were facing a review of the literature for our dissertations. We encountered several different ways of conducting a review but were unable to locate any resources that synthesized all of the various synthesis methodologies. Our purpose is to present a comprehensive overview and assessment of the main approaches to research synthesis. We use ‘research synthesis’ as a broad overarching term to describe various approaches to combining, integrating, and synthesizing research findings.

Methods

We conducted an integrative review of the literature to explore the historical, contextual, and evolving nature of research synthesis. We searched five databases, reviewed websites of key organizations, hand-searched several journals, and examined relevant texts from the reference lists of the documents we had already obtained.

Results

We identified four broad categories of research synthesis methodology including conventional, quantitative, qualitative, and emerging syntheses. Each of the broad categories was compared to the others on the following: key characteristics, purpose, method, product, context, underlying assumptions, unit of analysis, strengths and limitations, and when to use each approach.

Conclusions

The current state of research synthesis reflects significant advancements in emerging synthesis studies that integrate diverse data types and sources. New approaches to research synthesis provide a much broader range of review alternatives available to health and social science students and researchers.

1. Introduction

Since the turn of the century, public health emergencies have been identified worldwide, particularly related to infectious diseases. For example, the Severe Acute Respiratory Syndrome (SARS) epidemic in Canada in 2002-2003, the recent Ebola epidemic in Africa, and the ongoing HIV/AIDS pandemic are global health concerns. There have also been dramatic increases in the prevalence of chronic diseases around the world [1] – [3]. These epidemiological challenges have raised concerns about the ability of health systems worldwide to address these crises. As a result, public health systems reform has been initiated in a number of countries. In Canada, as in other countries, the role of evidence to support public health reform and improve population health has been given high priority. Yet, there continues to be a significant gap between the production of evidence through research and its application in practice [4] – [5]. One strategy to address this gap has been the development of new research synthesis methodologies to deal with the time-sensitive and wide-ranging evidence needs of policy makers and practitioners in all areas of health care, including public health.

As doctoral nursing students facing a review of the literature for our dissertations, and as a faculty member teaching a research methods course, we encountered several ways of conducting a research synthesis but found no comprehensive resources that discussed, compared, and contrasted various synthesis methodologies on their purposes, processes, strengths and limitations. To complicate matters, writers use terms interchangeably or use different terms to mean the same thing, and the literature is often contradictory about various approaches. Some texts [6] , [7] – [9] did provide a preliminary understanding about how research synthesis had been taken up in nursing, but these did not meet our requirements. Thus, in this article we address the need for a comprehensive overview of research synthesis methodologies to guide public health, health care, and social science researchers and practitioners.

Research synthesis is relatively new in public health but has a long history in other fields dating back to the late 1800s. Research synthesis, a research process in its own right [10] , has become more prominent in the wake of the evidence-based movement of the 1990s. Research syntheses have found their advocates and detractors in all disciplines, with challenges to the processes of systematic review and meta-analysis, in particular, being raised by critics of evidence-based healthcare [11] – [13] .

Our purpose was to conduct an integrative review of the literature to explore the historical, contextual, and evolving nature of research synthesis [14] – [15] . We synthesize and critique the main approaches to research synthesis that are relevant for public health, health care, and social scientists. Research synthesis is the overarching term we use to describe approaches to combining, aggregating, integrating, and synthesizing primary research findings. Each synthesis methodology draws on different types of findings depending on the purpose and product of the chosen synthesis (see Additional File 1 ).

3. Method of Review

Based on our current knowledge of the literature, we identified these approaches to include in our review: systematic review, meta-analysis, qualitative meta-synthesis, meta-narrative synthesis, scoping review, rapid review, realist synthesis, concept analysis, literature review, and integrative review. Our first step was to divide the synthesis types among the research team. Each member did a preliminary search to identify key texts. The team then met to develop search terms and a framework to guide the review.

Over the period of 2008 to 2012 we extensively searched the literature, updating our search at several time points, not restricting our search by date. The dates of texts reviewed range from 1967 to 2015. We used the terms above combined with the term “method*” (e.g., “realist synthesis” and “method*”) in the database Health Source: Academic Edition (includes Medline and CINAHL). This search yielded very few texts on some methodologies and many on others. We realized that many documents on research synthesis had not been picked up in the search. Therefore, we also searched Google Scholar, PubMed, ERIC, and Social Science Index, as well as the websites of key organizations such as the Joanna Briggs Institute, the University of York Centre for Evidence-Based Nursing, and the Cochrane Collaboration database. We hand searched several nursing, social science, public health and health policy journals. Finally, we traced relevant documents from the references in obtained texts.

We included works that met the following inclusion criteria: (1) published in English; (2) discussed the history of research synthesis; (3) explicitly described the approach and specific methods; or (4) identified issues, challenges, strengths and limitations of the particular methodology. We excluded research reports that resulted from the use of particular synthesis methodologies unless they also included criteria 2, 3, or 4 above.

Based on our search, we identified additional types of research synthesis (e.g., meta-interpretation, best evidence synthesis, critical interpretive synthesis, meta-summary, grounded formal theory). Even so, we missed some important developments in meta-analysis, identified by the journal's reviewers, which are now discussed briefly in the paper. The final set of 197 texts included in our review comprised theoretical, empirical, and conceptual papers, books, editorials and commentaries, and policy documents.

In our preliminary review of key texts, the team inductively developed a framework of the important elements of each method for comparison. In the next phase, each text was read carefully, and data for these elements were extracted into a table for comparison on the points of key characteristics, purpose, methods, and product (see Additional File 1). Once the data were grouped and extracted, we synthesized across categories based on the following additional points of comparison: complexity of the process, degree of systematization, consideration of context, underlying assumptions, unit of analysis, and when to use each approach. In our results, we discuss our comparison of the various synthesis approaches on the elements above. Because the review drew only on published documents, ethics approval was not required.

4. Results

We identified four broad categories of research synthesis methodology: conventional, quantitative, qualitative, and emerging syntheses. From our dataset of 197 texts, we had 14 texts on conventional synthesis, 64 on quantitative synthesis, 78 on qualitative synthesis, and 41 on emerging syntheses. Table 1 provides an overview of the four types of research synthesis, definitions, types of data used, products, and examples of the methodology.

Table 1. Overview of the four types of research synthesis.
1. Conventional synthesis: Older forms of review with less-systematic examination, critique, and synthesis of the literature on a mature topic for re-conceptualization or on a new topic for preliminary conceptualization.
2. Quantitative synthesis: Combining, aggregating, or integrating quantitative empirical research with data expressed in numeric form.
3. Qualitative synthesis: Combining, aggregating, or integrating qualitative empirical research and/or theoretical work expressed in narrative form.
4. Emerging synthesis: Newer syntheses that provide a systematic approach to synthesizing varied literature in a topic area that includes diverse data types.

Although we group these types of synthesis into four broad categories on the basis of similarities, each type within a category has unique characteristics, which may differ from the overall group similarities. Each could be explored in greater depth to tease out their unique characteristics, but detailed comparison is beyond the scope of this article.

Additional File 1 presents one or more selected types of synthesis that represent the broad category but is not an exhaustive presentation of all types within each category. It provides more depth for specific examples from each category of synthesis on the characteristics, purpose, methods, and products than is found in Table 1 .

4.1. Key Characteristics

4.1.1. What is it?

Here we draw on two types of categorization. First, we utilize Dixon-Woods et al.'s [49] classification of research syntheses as being either integrative or interpretive. (Please note that integrative syntheses are not the same as an integrative review as defined in Additional File 1.) Second, we use Popay's [80] enhancement and epistemological models.

The defining characteristic of integrative syntheses is that they summarize data, typically by pooling the data from the included studies [49]. Integrative syntheses include systematic reviews, meta-analyses, as well as scoping and rapid reviews because each of these focuses on summarizing data. They also define concepts from the outset (although this may not always be true in scoping or rapid reviews) and deal with a well-specified phenomenon of interest.

Interpretive syntheses are primarily concerned with the development of concepts and theories that integrate concepts [49]. The analysis in interpretive synthesis is conceptual both in process and outcome, and “the product is not aggregations of data, but theory” [49, p. 12]. Interpretive syntheses involve induction and interpretation. Examples include integrative reviews, some systematic reviews, all of the qualitative syntheses, meta-narrative, realist and critical interpretive syntheses. Of note, both quantitative and qualitative studies can be either integrative or interpretive.

The second categorization, enhancement versus epistemological , applies to those approaches that use multiple data types and sources [80] . Popay's [80] classification reflects the ways that qualitative data are valued in relation to quantitative data.

In the enhancement model , qualitative data adds something to quantitative analysis. The enhancement model is reflected in systematic reviews and meta-analyses that use some qualitative data to enhance interpretation and explanation. It may also be reflected in some rapid reviews that draw on quantitative data but use some qualitative data.

The epistemological model assumes that quantitative and qualitative data are equal and each has something unique to contribute. All of the other review approaches, except pure quantitative or qualitative syntheses, reflect the epistemological model because they value all data types equally but see them as contributing different understandings.

4.1.2. Data type

By and large, the quantitative approaches (quantitative systematic review and meta-analysis) have typically used purely quantitative data (i.e., expressed in numeric form). More recently, both Cochrane [81] and Campbell [82] collaborations are grappling with the need to, and the process of, integrating qualitative research into a systematic review. The qualitative approaches use qualitative data (i.e., expressed in words). All of the emerging synthesis types, as well as the conventional integrative review, incorporate qualitative and quantitative study designs and data.

4.1.3. Research question

Four types of research questions direct inquiry across the different types of syntheses. The first is a well-developed research question that gives direction to the synthesis (e.g., meta-analysis, systematic review, meta-study, concept analysis, rapid review, realist synthesis). The second begins as a broad general question that evolves and becomes more refined over the course of the synthesis (e.g., meta-ethnography, scoping review, meta-narrative, critical interpretive synthesis). In the third type, the synthesis begins with a phenomenon of interest and the question emerges in the analytic process (e.g., grounded formal theory). Lastly, there is no clear question, but rather a general review purpose (e.g., integrative review). Thus, the requirement for a well-defined question cuts across at least three of the synthesis types (e.g., quantitative, qualitative, and emerging).

4.1.4. Quality appraisal

This is a contested issue within and between the four synthesis categories. There are strong proponents of quality appraisal in the quantitative traditions of systematic review and meta-analysis based on the need for strong studies that will not jeopardize validity of the overall findings. Nonetheless, there is no consensus on pre-defined criteria; many scales exist that vary dramatically in composition. This has methodological implications for the credibility of findings [83] .

Specific methodologies from the conventional, qualitative, and emerging categories support quality appraisal but do so with caveats. In conventional integrative reviews appraisal is recommended, but depends on the sampling frame used in the study [18] . In meta-study, appraisal criteria are explicit but quality criteria are used in different ways depending on the specific requirements of the inquiry [54] . Among the emerging syntheses, meta-narrative review developers support appraisal of a study based on criteria from the research tradition of the primary study [67] , [84] – [85] . Realist synthesis similarly supports the use of high quality evidence, but appraisal checklists are viewed with scepticism and evidence is judged based on relevance to the research question and whether a credible inference may be drawn [69] . Like realist, critical interpretive syntheses do not judge quality using standardized appraisal instruments. They will exclude fatally flawed studies, but there is no consensus on what ‘fatally flawed’ means [49] , [71] . Appraisal is based on relevance to the inquiry, not rigor of the study.

There is no agreement on quality appraisal among qualitative meta-ethnographers, with some supporting and others refuting the need for appraisal [60], [62]. Opponents of quality appraisal are found among authors of qualitative (grounded formal theory and concept analysis) and emerging syntheses (scoping and rapid reviews) because quality is not deemed relevant to the intention of the synthesis; the studies being reviewed are not effectiveness studies where quality is extremely important. These qualitative syntheses are often reviews of theoretical developments where the concept itself is what is important, or reviews that provide quotations from the raw data so readers can make their own judgements about the relevance and utility of the data. For example, in formal grounded theory, where the purpose is theory generation, the authenticity of the data used to generate the theory is not as important as the conceptual category. Inaccuracies may be corrected in other ways, such as using the constant comparative method, which facilitates development of theoretical concepts that are repeatedly found in the data [86] – [87]. For pragmatic reasons, evidence is not assessed in rapid and scoping reviews, in part to produce a timely product. The issue of quality appraisal is unresolved across the terrain of research synthesis and we consider this further in our discussion.

4.2. Purpose

All research syntheses share a common purpose: to summarize, synthesize, or integrate research findings from diverse studies. This helps readers stay abreast of the burgeoning literature in a field. Our discussion here is at the level of the four categories of synthesis. Beginning with conventional literature syntheses, the overall purpose is to attend to mature topics for the purpose of re-conceptualization or to new topics requiring preliminary conceptualization [14]. Such syntheses may be helpful to consider contradictory evidence, map shifting trends in the study of a phenomenon, and describe the emergence of research in diverse fields [14]. The purpose here is to set the stage for a study by identifying what has been done, gaps in the literature, important research questions, or to develop a conceptual framework to guide data collection and analysis.

The purpose of quantitative systematic reviews is to combine, aggregate, or integrate empirical research to be able to generalize from a group of studies and determine the limits of generalization [27] . The focus of quantitative systematic reviews has been primarily on aggregating the results of studies evaluating the effectiveness of interventions using experimental, quasi-experimental, and more recently, observational designs. Systematic reviews can be done with or without quantitative meta-analysis but a meta-analysis always takes place within the context of a systematic review. Researchers must consider the review's purpose and the nature of their data in undertaking a quantitative synthesis; this will assist in determining the approach.

The purpose of qualitative syntheses is broadly to synthesize complex health experiences, practices, or concepts arising in healthcare environments. There may be various purposes depending on the qualitative methodology. For example, in hermeneutic studies the aim may be holistic explanation or understanding of a phenomenon [42] , which is deepened by integrating the findings from multiple studies. In grounded formal theory, the aim is to produce a conceptual framework or theory expected to be applicable beyond the original study. Although not able to generalize from qualitative research in the statistical sense [88] , qualitative researchers usually do want to say something about the applicability of their synthesis to other settings or phenomena. This notion of ‘theoretical generalization’ has been referred to as ‘transferability’ [89] – [90] and is an important criterion of rigour in qualitative research. It applies equally to the products of a qualitative synthesis in which the synthesis of multiple studies on the same phenomenon strengthens the ability to draw transferable conclusions.

The overarching purpose of emerging syntheses is challenging the more traditional types of syntheses, in part by using data from both quantitative and qualitative studies with diverse designs for analysis. Beyond this, however, each emerging synthesis methodology has a unique purpose. In meta-narrative review, the purpose is to identify different research traditions in the area and to synthesize a complex and diverse body of research. Critical interpretive synthesis shares this characteristic. Although a distinctive approach, critical interpretive synthesis utilizes a modification of the analytic strategies of meta-ethnography [61] (e.g., reciprocal translational analysis, refutational synthesis, and lines of argument synthesis) but goes beyond the use of these to bring a critical perspective to bear in challenging the normative or epistemological assumptions in the primary literature [72] – [73]. The unique purpose of a realist synthesis is to amalgamate complex empirical evidence and theoretical understandings within a diverse body of literature to uncover the operative mechanisms and contexts that affect the outcomes of social interventions. In a scoping review, the intention is to find key concepts, examine the range of research in an area, and identify gaps in the literature. The purpose of a rapid review is comparable to that of a scoping review, but done quickly to meet the time-sensitive information needs of policy makers.

4.3. Method

4.3.1. Degree of systematization

There are varying degrees of systematization across the categories of research synthesis. The most systematized are quantitative systematic reviews and meta-analyses. There are clear processes in each with judgments to be made at each step, although there are no agreed upon guidelines for this. The process is inherently subjective despite attempts to develop objective and systematic processes [91] – [92] . Mullen and Ramirez [27] suggest that there is often a false sense of rigour implied by the terms ‘systematic review’ and ‘meta-analysis’ because of their clearly defined procedures.

In comparison with some types of qualitative synthesis, concept analysis is quite procedural. Qualitative meta-synthesis also has defined procedures and is systematic, yet perhaps less so than concept analysis. Qualitative meta-synthesis starts in an unsystematic way but becomes more systematic as it unfolds. Procedures and frameworks exist for some of the emerging types of synthesis [e.g., [50] , [63] , [71] , [93] ] but are not linear, have considerable flexibility, and are often messy with emergent processes [85] . Conventional literature reviews tend not to be as systematic as the other three types. In fact, the lack of systematization in conventional literature synthesis was the reason for the development of more systematic quantitative [17] , [20] and qualitative [45] – [46] , [61] approaches. Some authors in the field [18] have clarified processes for integrative reviews making them more systematic and rigorous, but most conventional syntheses remain relatively unsystematic in comparison with other types.

4.3.2. Complexity of the process

Some synthesis processes are considerably more complex than others. Methodologies with clearly defined steps are arguably less complex than the more flexible and emergent ones. We know that any study encounters challenges and it is rare that a pre-determined research protocol can be followed exactly as intended. Not even the rigorous methods associated with Cochrane [81] systematic reviews and meta-analyses are always implemented exactly as intended. Even when dealing with numbers rather than words, interpretation is always part of the process. Our collective experience suggests that new methodologies (e.g., meta-narrative synthesis and realist synthesis) that integrate different data types and methods are more complex than conventional reviews or the rapid and scoping reviews.

4.4. Product

The products of research syntheses usually take three distinct formats (see Table 1 and Additional File 1 for further details). The first representation is in tables, charts, graphical displays, diagrams and maps as seen in integrative, scoping and rapid reviews, meta-analyses, and critical interpretive syntheses. The second type of synthesis product is the use of mathematical scores. Summary statements of effectiveness are mathematically displayed in meta-analyses (as an effect size), systematic reviews, and rapid reviews (statistical significance).
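To make the second product type concrete, here is a minimal sketch, using entirely hypothetical study estimates, of how a pooled effect size and its confidence interval might be computed with fixed-effect, inverse-variance weighting; real meta-analyses rely on dedicated software and typically also model between-study heterogeneity.

```python
import math

# Hypothetical per-study effect estimates (e.g., standardized mean differences)
# and their standard errors; a real meta-analysis extracts these from primary studies.
studies = [
    ("Study A", 0.30, 0.12),
    ("Study B", 0.45, 0.20),
    ("Study C", 0.10, 0.15),
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2.
weights = [1 / se ** 2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se

print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```

In this simplified illustration, a summary statement of effectiveness would rest on whether the pooled confidence interval excludes zero, which is the kind of mathematical score referred to above.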

The third synthesis product may be a theory or theoretical framework. A mid-range theory can be produced from formal grounded theory, meta-study, meta-ethnography, and realist synthesis. Theoretical/conceptual frameworks or conceptual maps may be created in meta-narrative and critical interpretive syntheses, and integrative reviews. Concepts for use within theories are produced in concept analysis. While these three product types span the categories of research synthesis, narrative description and summary is used to present the products resulting from all methodologies.

4.5. Consideration of context

There are diverse ways that context is considered in the four broad categories of synthesis. Context may be considered to the extent that it features within primary studies for the purpose of the review. Context may also be understood as an integral aspect of both the phenomenon under study and the synthesis methodology (e.g., realist synthesis). Quantitative systematic reviews and meta-analyses have typically been conducted on studies using experimental and quasi-experimental designs and more recently observational studies, which control for contextual features to allow for understanding of the ‘true’ effect of the intervention [94] .

More recently, systematic reviews have included covariates or mediating variables (i.e., contextual factors) to help explain variability in the results across studies [27] . Context, however, is usually handled in the narrative discussion of findings rather than in the synthesis itself. This lack of attention to context has been one criticism leveled against systematic reviews and meta-analyses, which restrict the types of research designs that are considered [e.g., [95] ].

When conventional literature reviews incorporate studies that deal with context, there is a place for considering contextual influences on the intervention or phenomenon. Reviews of quantitative experimental studies tend to be devoid of contextual considerations since the original studies are similarly devoid, but context might figure prominently in a literature review that incorporates both quantitative and qualitative studies.

Qualitative syntheses have been conducted on the contextual features of a particular phenomenon [33] . Paterson et al. [54] advise researchers to attend to how context may have influenced the findings of particular primary studies. In qualitative analysis, contextual features may form categories by which the data can be compared and contrasted to facilitate interpretation. Because qualitative research is often conducted to understand a phenomenon as a whole, context may be a focus, although this varies with the qualitative methodology. At the same time, the findings in a qualitative synthesis are abstracted from the original reports and taken to a higher level of conceptualization, thus removing them from the original context.

Meta-narrative synthesis [67] , [84] , because it draws on diverse research traditions and methodologies, may incorporate context into the analysis and findings. There is not, however, an explicit step in the process that directs the analyst to consider context. Generally, the research question guiding the synthesis is an important factor in whether context will be a focus.

More recent iterations of concept analysis [47] , [96] – [97] explicitly consider context reflecting the assumption that a concept's meaning is determined by its context. Morse [47] points out, however, that Wilson's [98] approach to concept analysis, and those based on Wilson [e.g., [45] ], identify attributes that are devoid of context, while Rodgers' [96] , [99] evolutionary method considers context (e.g., antecedents, consequences, and relationships to other concepts) in concept development.

Realist synthesis [69] considers context as integral to the study. It draws on a critical realist logic of inquiry grounded in the work of Bhaskar [100] , who argues that empirical co-occurrence of events is insufficient for inferring causation. One must identify generative mechanisms whose properties are causal and, depending on the situation, may or may not be activated [94] . Context interacts with program/intervention elements and thus cannot be differentiated from the phenomenon [69] . This approach synthesizes evidence on generative mechanisms and analyzes contextual features that activate them; the result feeds back into the context. The focus is on what works, for whom, under what conditions, why and how [68] .

4.6. Underlying Philosophical and Theoretical Assumptions

When we began our review, we ‘assumed’ that the assumptions underlying synthesis methodologies would be a distinguishing characteristic of synthesis types, and that we could compare the various types on their assumptions, explicit or implicit. We found, however, that many authors did not explicate the underlying assumptions of their methodologies, and it was difficult to infer them. Kirkevold [101] has argued that integrative reviews need to be carried out from an explicit philosophical or theoretical perspective. We argue this should be true for all types of synthesis.

Authors of some emerging synthesis approaches have been very explicit about their assumptions and philosophical underpinnings. An implicit assumption of most emerging synthesis methodologies is that quantitative systematic reviews and meta-analyses have limited utility in some fields [e.g., in public health – [13] , [102] ] and for some kinds of review questions like those about feasibility and appropriateness versus effectiveness [103] – [104] . They also assume that ontologically and epistemologically, both kinds of data can be combined. This is a significant debate in the literature because it is about the commensurability of overarching paradigms [105] but this is beyond the scope of this review.

Realist synthesis is philosophically grounded in critical realism or, as noted above, a realist logic of inquiry [93] , [99] , [106] – [107] . Key assumptions regarding the nature of interventions that inform critical realism have been described above in the section on context. See Pawson et al. [106] for more information on critical realism, the philosophical basis of realist synthesis.

Meta-narrative synthesis is explicitly rooted in a constructivist philosophy of science [108] in which knowledge is socially constructed rather than discovered, and what we take to be ‘truth’ is a matter of perspective. Reality has a pluralistic and plastic character, and there is no pre-existing ‘real world’ independent of human construction and language [109] . See Greenhalgh et al. [67] , [85] and Greenhalgh & Wong [97] for more discussion of the constructivist basis of meta-narrative synthesis.

In the case of purely quantitative or qualitative syntheses, it may be an easier matter to uncover unstated assumptions because they are likely to be shared with those of the primary studies in the genre. For example, grounded formal theory shares the philosophical and theoretical underpinnings of grounded theory, rooted in the theoretical perspective of symbolic interactionism [110] – [111] and the philosophy of pragmatism [87] , [112] – [114] .

As with meta-narrative synthesis, meta-study developers identify constructivism as their interpretive philosophical foundation [54] , [88] . Epistemologically, constructivism focuses on how people construct and re-construct knowledge about a specific phenomenon, and has three main assumptions: (1) reality is seen as multiple, at times even incompatible with the phenomenon under consideration; (2) just as primary researchers construct interpretations from participants' data, meta-study researchers also construct understandings about the primary researchers' original findings. Thus, meta-synthesis is a construction of a construction, or a meta-construction; and (3) all constructions are shaped by the historical, social and ideological context in which they originated [54] . The key message here is that reports of any synthesis would benefit from an explicit identification of the underlying philosophical perspectives to facilitate a better understanding of the results, how they were derived, and how they are being interpreted.

4.7. Unit of Analysis

The unit of analysis for each category of review is generally distinct. For the emerging synthesis approaches, the unit of analysis is specific to the intention. In meta-narrative synthesis it is the storyline in diverse research traditions; in rapid review or scoping review, it depends on the focus but could be a concept; and in realist synthesis, it is the theories rather than programs that are the units of analysis. The elements of theory that are important in the analysis are mechanisms of action, the context, and the outcome [107] .

For qualitative synthesis, the units of analysis are generally themes, concepts or theories, although in meta-study, the units of analysis can be research findings (“meta-data-analysis”), research methods (“meta-method”) or philosophical/theoretical perspectives (“meta-theory”) [54] . In quantitative synthesis, the units of analysis range from specific statistics for systematic reviews to effect size of the intervention for meta-analysis. More recently, some systematic reviews focus on theories [115] – [116] , therefore it depends on the research question. Similarly, within conventional literature synthesis the units of analysis also depend on the research purpose, focus and question as well as on the type of research methods incorporated into the review. What is important in all research syntheses, however, is that the unit of analysis needs to be made explicit. Unfortunately, this is not always the case.

4.8. Strengths and Limitations

In this section, we discuss the overarching strengths and limitations of synthesis methodologies as a whole and then highlight strengths and weaknesses across each of our four categories of synthesis.

4.8.1. Strengths of Research Syntheses in General

With the vast proliferation of research reports and the increased ease of retrieval, research synthesis has become more accessible providing a way of looking broadly at the current state of research. The availability of syntheses helps researchers, practitioners, and policy makers keep up with the burgeoning literature in their fields without which evidence-informed policy or practice would be difficult. Syntheses explain variation and difference in the data helping us identify the relevance for our own situations; they identify gaps in the literature leading to new research questions and study designs. They help us to know when to replicate a study and when to avoid excessively duplicating research. Syntheses can inform policy and practice in a way that well-designed single studies cannot; they provide building blocks for theory that helps us to understand and explain our phenomena of interest.

4.8.2. Limitations of Research Syntheses in General

The process of selecting, combining, integrating, and synthesizing across diverse study designs and data types can be complex and potentially rife with bias, even with those methodologies that have clearly defined steps. Just because a rigorous and standardized approach has been used does not mean that implicit judgements will not influence the interpretations and choices made at different stages.

In all types of synthesis, the quantity of data can be considerable, requiring difficult decisions about scope, which may affect relevance. The quantity of available data also has implications for the size of the research team. Few reviews these days can be done independently, in particular because decisions about inclusion and exclusion may require the involvement of more than one person to ensure reliability.

For all types of synthesis, it is likely that in areas with large, amorphous, and diverse bodies of literature, even the most sophisticated search strategies will not turn up all the relevant and important texts. This may be more important in some synthesis methodologies than in others, but the omission of key documents can influence the results of all syntheses. This issue can be addressed, at least in part, by including a library scientist on the research team as required by some funding agencies. Even then, it is possible to miss key texts. In this review, for example, because none of us are trained in or conduct meta-analyses, we were not even aware that we had missed some new developments in this field such as meta-regression [117] – [118] , network meta-analysis [119] – [121] , and the use of individual patient data in meta-analyses [122] – [123] .

One limitation of systematic reviews and meta-analyses is that they rapidly go out of date. We thought this might be true for all types of synthesis, although we wondered if those that produce theory might not be somewhat more enduring. We have not answered this question but it is open for debate. For all types of synthesis, the analytic skills and the time required are considerable so it is clear that training is important before embarking on a review, and some types of review may not be appropriate for students or busy practitioners.

Finally, the quality of reporting in primary studies of all genres is variable so it is sometimes difficult to identify aspects of the study essential for the synthesis, or to determine whether the study meets quality criteria. There may be flaws in the original study, or journal page limitations may necessitate omitting important details. Reporting standards have been developed for some types of reviews (e.g., systematic review, meta-analysis, meta-narrative synthesis, realist synthesis); but there are no agreed upon standards for qualitative reviews. This is an important area for development in advancing the science of research synthesis.

4.8.3. Strengths and Limitations of the Four Synthesis Types

The conventional literature review and now the increasingly common integrative review remain important and accessible approaches for students, practitioners, and experienced researchers who want to summarize literature in an area but do not have the expertise to use one of the more complex methodologies. Carefully executed, such reviews are very useful for synthesizing literature in preparation for research grants and practice projects. They can determine the state of knowledge in an area and identify important gaps in the literature to provide a clear rationale or theoretical framework for a study [14] , [18] . There is a demand, however, for more rigour, with more attention to developing comprehensive search strategies and more systematic approaches to combining, integrating, and synthesizing the findings.

Generally, conventional reviews include diverse study designs and data types that facilitate comprehensiveness, which may be a strength on the one hand, but can also present challenges on the other. The complexity inherent in combining results from studies with diverse methodologies can result in bias and inaccuracies. The absence of clear guidelines about how to synthesize across diverse study types and data [18] has been a challenge for novice reviewers.

Quantitative systematic reviews and meta-analyses have been important in launching the field of evidence-based healthcare. They provide a systematic, orderly and auditable process for conducting a review and drawing conclusions [25] . They are arguably the most powerful approaches to understanding the effectiveness of healthcare interventions, especially when intervention studies on the same topic show very different results. When areas of research are dogged by controversy [25] or when study results go against strongly held beliefs, such approaches can reduce the uncertainty and bring strong evidence to bear on the controversy.

Despite their strengths, they also have limitations. Systematic reviews and meta-analyses do not provide a way of including complex literature comprising various types of evidence including qualitative studies, theoretical work, and epidemiological studies. Only certain types of design are considered and qualitative data are used in a limited way. This exclusion limits what can be learned in a topic area.

Meta-analyses are often not possible because of wide variability in study design, population, and interventions, so they may have a narrow range of utility. New developments in meta-analysis, however, can be used to address some of these limitations. Network meta-analysis is used to explore the relative efficacy of multiple interventions, even those that have never been compared in more conventional pairwise meta-analyses [121] , allowing for improved clinical decision making [120] . The limitation is that network meta-analysis has only been used in medical/clinical applications [119] and not in public health. It has not yet been widely accepted and many methodological challenges remain [120] – [121] . Meta-regression is another development that combines meta-analytic and linear regression principles to address the fact that heterogeneity of results may compromise a meta-analysis [117] – [118] . The disadvantage is that many clinicians are unfamiliar with it and may incorrectly interpret results [117] .
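As a rough, hypothetical illustration of the meta-regression idea described above, the sketch below regresses study effect sizes on a single study-level covariate using inverse-variance weighted least squares. It is a simplified fixed-effect version that ignores the between-study variance term a full random-effects meta-regression would estimate, and the data are invented for illustration only.

```python
import numpy as np

# Hypothetical data: each study's effect size, its standard error,
# and a study-level covariate that might explain heterogeneity (e.g., mean age).
effect = np.array([0.20, 0.35, 0.50, 0.15, 0.40])
se = np.array([0.10, 0.12, 0.15, 0.09, 0.20])
covariate = np.array([30.0, 45.0, 60.0, 25.0, 55.0])

w = 1 / se ** 2                                            # inverse-variance weights
X = np.column_stack([np.ones_like(covariate), covariate])  # intercept + covariate

# Weighted least squares: solve (X' W X) beta = X' W y for the regression coefficients.
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect)

print(f"Intercept: {beta[0]:.3f}")
print(f"Change in effect size per unit of covariate: {beta[1]:.4f}")
```

The slope estimate is what a meta-regression reports: how the effect size shifts with the covariate across studies, one way of probing the heterogeneity noted above.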

Some have accused meta-analysis of combining apples and oranges [124] raising questions in the field about their meaningfulness [25] , [28] . More recently, the use of individual rather than aggregate data has been useful in facilitating greater comparability among studies [122] . In fact, Tomas et al. [123] argue that meta-analysis using individual data is now the gold standard although access to the raw data from other studies may be a challenge to obtain.

The usefulness of systematic reviews in synthesizing complex health and social interventions has also been challenged [102] . It is often difficult to synthesize their findings because such studies are “epistemologically diverse and methodologically complex” [69, p. 21]. Rigid inclusion/exclusion criteria may allow only experimental or quasi-experimental designs into consideration resulting in lost information that may well be useful to policy makers for tailoring an intervention to the context or understanding its acceptance by recipients.

Qualitative syntheses may be the type of review most fraught with controversy and challenge, while also bringing distinct strengths to the enterprise. Although these methodologies provide a comprehensive and systematic review approach, they do not generally provide definitive statements about intervention effectiveness. They do, however, address important questions about the development of theoretical concepts, patient experiences, acceptability of interventions, and an understanding about why interventions might work.

Most qualitative syntheses aim to produce a theoretically generalizable mid-range theory that explains variation across studies. This makes them more useful than single primary studies, which may not be applicable beyond the immediate setting or population. All provide a contextual richness that enhances relevance and understanding. Another benefit of some types of qualitative synthesis (e.g., grounded formal theory) is that the concept of saturation provides a sound rationale for limiting the number of texts to be included thus making reviews potentially more manageable. This contrasts with the requirements of systematic reviews and meta-analyses that require an exhaustive search.

Qualitative researchers debate about whether the findings of ontologically and epistemologically diverse qualitative studies can actually be combined or synthesized [125] because methodological diversity raises many challenges for synthesizing findings. The products of different types of qualitative syntheses range from theory and conceptual frameworks, to themes and rich descriptive narratives. Can one combine the findings from a phenomenological study with the theory produced in a grounded theory study? Many argue yes, but many also argue no.

Emerging synthesis methodologies were developed to address some limitations inherent in other types of synthesis but also have their own issues. Because each type is so unique, it is difficult to identify overarching strengths of the entire category. An important strength, however, is that these newer forms of synthesis provide a systematic and rigorous approach to synthesizing a diverse literature base in a topic area that includes a range of data types such as: both quantitative and qualitative studies, theoretical work, case studies, evaluations, epidemiological studies, trials, and policy documents. More than conventional literature reviews and systematic reviews, these approaches provide explicit guidance on analytic methods for integrating different types of data. The assumption is that all forms of data have something to contribute to knowledge and theory in a topic area. All have a defined but flexible process in recognition that the methods may need to shift as knowledge develops through the process.

Many emerging synthesis types are helpful to policy makers and practitioners because they are usually involved as team members in the process to define the research questions, and interpret and disseminate the findings. In fact, engagement of stakeholders is built into the procedures of the methods. This is true for rapid reviews, meta-narrative syntheses, and realist syntheses. It is less likely to be the case for critical interpretive syntheses.

Another strength of some approaches (realist and meta-narrative syntheses) is that quality and publication standards have been developed to guide researchers, reviewers, and funders in judging the quality of the products [108] , [126] – [127] . Training materials and online communities of practice have also been developed to guide users of realist and meta-narrative review methods [107] , [128] . A unique strength of critical interpretive synthesis is that it takes a critical perspective on the process that may help reconceptualize the data in a way not considered by the primary researchers [72] .

There are also challenges of these new approaches. The methods are new and there may be few published applications by researchers other than the developers of the methods, so new users often struggle with the application. The newness of the approaches means that there may not be mentors available to guide those unfamiliar with the methods. This is changing, however, and the number of applications in the literature is growing with publications by new users helping to develop the science of synthesis [e.g., [129] ]. However, the evolving nature of the approaches and their developmental stage present challenges for novice researchers.

4.9. When to Use Each Approach

Choosing an appropriate approach to synthesis will depend on the question you are asking, the purpose of the review, and the outcome or product you want to achieve. In Additional File 1 , we discuss each of these to provide guidance to readers on making a choice about review type. If researchers want to know whether a particular type of intervention is effective in achieving its intended outcomes, then they might choose a quantitative systematic review with or without meta-analysis, possibly buttressed with qualitative studies to provide depth and explanation of the results. Alternatively, if the concern is about whether an intervention is effective with different populations under diverse conditions in varying contexts, then a realist synthesis might be the most appropriate.

If researchers' concern is to develop theory, they might consider qualitative syntheses or some of the emerging syntheses that produce theory (e.g., critical interpretive synthesis, realist review, grounded formal theory, qualitative meta-synthesis). If the aim is to track the development and evolution of concepts, theories or ideas, or to determine how an issue or question is addressed across diverse research traditions, then meta-narrative synthesis would be most appropriate.

When the purpose is to review the literature in advance of undertaking a new project, particularly by graduate students, then perhaps an integrative review would be appropriate. Such efforts contribute towards the expansion of theory, identify gaps in the research, establish the rationale for studying particular phenomena, and provide a framework for interpreting results in ways that might be useful for influencing policy and practice.

For researchers keen to bring new insights, interpretations, and critical re-conceptualizations to a body of research, qualitative or critical interpretive syntheses will provide an inductive product that may offer new understandings or challenges to the status quo. These can inform future theory development, or provide guidance for policy and practice.

5. Discussion

What is the current state of science regarding research synthesis? Public health, health care, and social science researchers or clinicians have previously used all four categories of research synthesis, and all offer a suitable array of approaches for inquiries. New developments in systematic reviews and meta-analysis are providing ways of addressing methodological challenges [117] – [123]. There has also been significant advancement in emerging synthesis methodologies, and they are quickly gaining popularity. Qualitative meta-synthesis is still evolving, particularly given how new it is within the terrain of research synthesis. In the midst of this evolution, outstanding issues persist, such as grappling with the quantity of data, quality appraisal, and integration with knowledge translation. These topics have not been thoroughly addressed and need further debate.

5.1. Quantity of Data

We raise the question of whether it is possible or desirable to find all available studies for a synthesis that has this requirement (e.g., meta-analysis, systematic review, scoping, meta-narrative synthesis [25] , [27] , [63] , [67] , [84] – [85] ). Is the synthesis of all available studies a realistic goal in light of the burgeoning literature? And how can this be sustained in the future, particularly as the emerging methodologies continue to develop and as the internet facilitates endless access? There has been surprisingly little discussion on this topic and the answers will have far-reaching implications for searching, sampling, and team formation.

Researchers and graduate students can no longer rely on their own independent literature search. They will likely need to ask librarians for assistance as they navigate multiple sources of literature and learn new search strategies. Although teams now collaborate with library scientists, syntheses are limited in that researchers must make decisions on the boundaries of the review, in turn influencing the study's significance. The size of a team may also be pragmatically determined to manage the search, extraction, and synthesis of the burgeoning data. There is no single answer to our question about the possibility or necessity of finding all available articles for a review. Multiple strategies that are situation specific are likely to be needed.

5.2. Quality Appraisal

While the issue of quality appraisal has received much attention in the synthesis literature, scholars are far from resolution. There may be no agreement about appraisal criteria in a given tradition. For example, the debate rages over the appropriateness of quality appraisal in qualitative synthesis where there are over 100 different sets of criteria and many do not overlap [49] . These differences may reflect disciplinary and methodological orientations, but diverse quality appraisal criteria may privilege particular types of research [49] . The decision to appraise is often grounded in ontological and epistemological assumptions. Nonetheless, diversity within and between categories of synthesis is likely to continue unless debate on the topic of quality appraisal continues and evolves toward consensus.

5.3. Integration with Knowledge Translation

If research syntheses are to make a difference to practice and ultimately to improve health outcomes, then we need to do a better job of knowledge translation. In the Canadian Institutes of Health Research (CIHR) definition of knowledge translation (KT), research or knowledge synthesis is an integral component [130] . Yet, with few exceptions [131] – [132] , very little of the research synthesis literature even mentions the relationship of synthesis to KT nor does it discuss strategies to facilitate the integration of synthesis findings into policy and practice. The exception is in the emerging synthesis methodologies, some of which (e.g., realist and meta-narrative syntheses, scoping reviews) explicitly involve stakeholders or knowledge users. The argument is that engaging them in this way increases the likelihood that the knowledge generated will be translated into policy and practice. We suggest that a more explicit engagement with knowledge users in all types of synthesis would benefit the uptake of the research findings.

Research synthesis neither makes research more applicable to practice nor ensures implementation. Focus must now turn seriously towards translation of synthesis findings into knowledge products that are useful for health care practitioners in multiple areas of practice and develop appropriate strategies to facilitate their use. The burgeoning field of knowledge translation has, to some extent, taken up this challenge; however, the research-practice gap continues to plague us [133] – [134] . It is a particular problem for qualitative syntheses [131] . Although such syntheses have an important place in evidence-informed practice, little effort has gone into the challenge of translating the findings into useful products to guide practice [131] .

5.4. Limitations

Our study took longer than would normally be expected for an integrative review. Each of us was primarily involved in our own dissertations or teaching/research positions, and so this study was conducted ‘off the sides of our desks.’ A limitation was that we searched the literature over the course of 4 years (from 2008–2012), necessitating multiple search updates. Further, we did not do a comprehensive search of the literature after 2012, thus the more recent synthesis literature was not systematically explored. We did, however, perform limited database searches from 2012–2015 to keep abreast of the latest methodological developments. Although we missed some new approaches to meta-analysis in our search, we did not find any new features of the synthesis methodologies covered in our review that would change the analysis or findings of this article. Lastly, we struggled with the labels used for the broad categories of research synthesis methodology because of our hesitancy to reinforce the divide between quantitative and qualitative approaches. However, it was very difficult to find alternative language that represented the types of data used in these methodologies. Despite our hesitancy in creating such an obvious divide, we were left with the challenge of trying to find a way of characterizing these broad types of syntheses.

6. Conclusion

Our findings offer methodological clarity for those wishing to learn about the broad terrain of research synthesis. We believe that our review makes transparent the issues and considerations in choosing from among the four broad categories of research synthesis. In summary, research synthesis has taken its place as a form of research in its own right. The methodological terrain has deep historical roots reaching back over the past 200 years, yet research synthesis remains relatively new to public health, health care, and social sciences in general. This is rapidly changing. New developments in systematic reviews and meta-analysis, and the emergence of new synthesis methodologies provide a vast array of options to review the literature for diverse purposes. New approaches to research synthesis and new analytic methods within existing approaches provide a much broader range of review alternatives for public health, health care, and social science students and researchers.

Acknowledgments

KSM is an assistant professor in the Faculty of Nursing at the University of Alberta. Her work on this article was largely conducted as a Postdoctoral Fellow, funded by KRESCENT (Kidney Research Scientist Core Education and National Training Program, reference #KRES110011R1) and the Faculty of Nursing at the University of Alberta.

MM's work on this study over the period of 2008-2014 was supported by a Canadian Institutes of Health Research Applied Public Health Research Chair Award (grant #92365).

We thank Rachel Spanier who provided support with reference formatting.

List of Abbreviations (in Additional File 1 )

Conflict of interest: The authors declare that they have no conflicts of interest in this article.

Authors' contributions: KSM co-designed the study, collected data, analyzed the data, drafted/revised the manuscript, and managed the project.

MP contributed to searching the literature, developing the analytic framework, and extracting data for the Additional File.

JB contributed to searching the literature, developing the analytic framework, and extracting data for the Additional File.

WN contributed to searching the literature, developing the analytic framework, and extracting data for the Additional File.

All authors read and approved the final manuscript.

Additional Files: Additional File 1 – Selected Types of Research Synthesis

This Additional File is our dataset created to organize, analyze and critique the literature that we synthesized in our integrative review. Our results were created based on analysis of this Additional File.

A Guide to Evidence Synthesis: What is Evidence Synthesis?


What are Evidence Syntheses?

According to the Royal Society, 'evidence synthesis' refers to the process of bringing together information from a range of sources and disciplines to inform debates and decisions on specific issues. They generally include a methodical and comprehensive literature synthesis focused on a well-formulated research question.  Their aim is to identify and synthesize all  of the scholarly research on a particular topic, including both published and unpublished studies. Evidence syntheses are conducted in an unbiased, reproducible way to provide evidence for practice and policy-making, as well as to identify gaps in the research. Evidence syntheses may also include a meta-analysis, a more quantitative process of synthesizing and visualizing data retrieved from various studies. 

Evidence syntheses are much more time-intensive than traditional literature reviews and require a multi-person research team. See this PredicTER tool to get a sense of a systematic review timeline (one type of evidence synthesis). Before embarking on an evidence synthesis, it's important to clearly identify your reasons for conducting one. For a list of types of evidence synthesis projects, see the next tab.

How Does a Traditional Literature Review Differ From an Evidence Synthesis?

One commonly used form of evidence synthesis is a systematic review.  This table compares a traditional literature review with a systematic review.

 

Review Question/Topic
  • Traditional literature review: Topics may be broad in scope; the goal of the review may be to place one's own research within the existing body of knowledge, or to gather information that supports a particular viewpoint.
  • Systematic review: Starts with a well-defined research question to be answered by the review. Reviews are conducted with the aim of finding all existing evidence in an unbiased, transparent, and reproducible way.

Searching for Studies
  • Traditional literature review: Searches may be ad hoc and based on what the author is already familiar with. Searches are not exhaustive or fully comprehensive.
  • Systematic review: Attempts are made to find all existing published and unpublished literature on the research question. The process is well-documented and reported.

Study Selection
  • Traditional literature review: Often lacks clear reasons for why studies were included or excluded from the review.
  • Systematic review: Reasons for including or excluding studies are explicit and informed by the research question.

Assessing the Quality of Included Studies
  • Traditional literature review: Often does not consider study quality or potential biases in study design.
  • Systematic review: Systematically assesses risk of bias of individual studies and overall quality of the evidence, including sources of heterogeneity between study results.

Synthesis of Existing Research
  • Traditional literature review: Conclusions are more qualitative and may not be based on study quality.
  • Systematic review: Bases conclusions on the quality of the studies and provides recommendations for practice or to address knowledge gaps.


Reporting Standards

There are some reporting standards for evidence syntheses. These can serve as guidelines for protocol and manuscript preparation and journals may require that these standards are followed for the review type that is being employed (e.g. systematic review, scoping review, etc). ​

  • PRISMA checklist Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses.
  • PRISMA-P Standards An updated version of the original PRISMA standards for protocol development.
  • PRISMA - ScR Reporting guidelines for scoping reviews and evidence maps
  • PRISMA-IPD Standards Extension of the original PRISMA standards for systematic reviews and meta-analyses of individual participant data.
  • EQUATOR Network The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network is an international initiative that seeks to improve the reliability and value of published health research literature by promoting transparent and accurate reporting and wider use of robust reporting guidelines. They provide a list of various standards for reporting in systematic reviews.


PRISMA Flow Diagram

The  PRISMA  flow diagram depicts the flow of information through the different phases of an evidence synthesis. It maps the search (number of records identified), screening (number of records included and excluded), and selection (reasons for exclusion).  Many evidence syntheses include a PRISMA flow diagram in the published manuscript.

See below for resources to help you generate your own PRISMA flow diagram.

  • PRISMA Flow Diagram Tool
  • PRISMA Flow Diagram Word Template

Literature Reviews: Synthesis

Synthesise Information

So, how can you create paragraphs within your literature review that demonstrate your knowledge of the scholarship that has been done in your field of study?

You will need to present a synthesis of the texts you read.  

Doug Specht, Senior Lecturer at the Westminster School of Media and Communication, explains synthesis for us in the following video:  

Synthesising Texts  

What is synthesis? 

Synthesis is an important element of academic writing, demonstrating comprehension, analysis, evaluation and original creation.  

With synthesis you extract content from different sources to create an original text. While paraphrase and summary maintain the structure of the given source(s), with synthesis you create a new structure.  

The sources will provide different perspectives and evidence on a topic. They will be put together when agreeing, contrasted when disagreeing. The sources must be referenced.  

Perfect your synthesis by showing the flow of your reasoning, expressing critical evaluation of the sources and drawing conclusions.  

When you synthesise think of "using strategic thinking to resolve a problem requiring the integration of diverse pieces of information around a structuring theme" (Mateos and Sole 2009, p448). 

Synthesis is a complex activity that requires a high degree of comprehension and active engagement with the subject. As you progress in higher education, the expectations on your ability to synthesise increase.

How to synthesise in a literature review: 

Identify themes/issues you'd like to discuss in the literature review. Think of an outline.  

Read the literature and identify these themes/issues.  

Critically analyse the texts asking: how does the text I'm reading relate to the other texts I've read on the same topic? Is it in agreement? Does it differ in its perspective? Is it stronger or weaker? In what ways does it differ (scope, methods, year of publication, etc.)? Draw your conclusions on the state of the literature on the topic.

Start writing your literature review, structuring it according to the outline you planned.  

Put together sources stating the same point; contrast sources presenting counter-arguments or different points.  

Present your critical analysis.  

Always provide the references. 

The best synthesis requires a "recursive process" whereby you read the source texts, identify relevant parts, take notes, produce drafts, re-read the source texts, revise your text, re-write... (Mateos and Sole, 2009). 

What is good synthesis?  

The quality of your synthesis can be assessed considering the following (Mateos and Sole, 2009, p439):  

Integration and connection of the information from the source texts around a structuring theme. 

Selection of ideas necessary for producing the synthesis. 

Appropriateness of the interpretation.  

Elaboration of the content.  

Example of Synthesis

Original texts (fictitious): 

Animal testing is necessary to save human lives. Incidents have happened where humans have died or have been seriously harmed for using drugs that had not been tested on animals (Smith 2008).   

Animals feel pain in a way that is physiologically and neuroanatomically similar to humans (Chowdhury 2012).   

Animal testing is not always used to assess the toxicology of a drug; sometimes painful experiments are undertaken to improve the effectiveness of cosmetics (Turner 2015) 

Animals in distress can suffer psychologically, showing symptoms of depression and anxiety (Panatta and Hudson 2016). 

  

Synthesis: 

Animal experimentation is a subject of heated debate. Some argue that painful experiments should be banned. Indeed it has been demonstrated that such experiments make animals suffer physically and psychologically (Chowdhury 2012; Panatta and Hudson 2016). On the other hand, it has been argued that animal experimentation can save human lives and reduce harm on humans (Smith 2008). This argument is only valid for toxicological testing, not for tests that, for example, merely improve the efficacy of a cosmetic (Turner 2015). It can be suggested that animal experimentation should be regulated to only allow toxicological risk assessment, and the suffering to the animals should be minimised.   

Bibliography

Mateos, M. and Sole, I. (2009). Synthesising Information from various texts: A Study of Procedures and Products at Different Educational Levels. European Journal of Psychology of Education,  24 (4), 435-451. Available from https://doi.org/10.1007/BF03178760 [Accessed 29 June 2021].




Synthesizing Sources | Examples & Synthesis Matrix

Published on July 4, 2022 by Eoghan Ryan. Revised on May 31, 2023.

Synthesizing sources involves combining the work of other scholars to provide new insights. It’s a way of integrating sources that helps situate your work in relation to existing research.

Synthesizing sources involves more than just summarizing . You must emphasize how each source contributes to current debates, highlighting points of (dis)agreement and putting the sources in conversation with each other.

You might synthesize sources in your literature review to give an overview of the field or throughout your research paper when you want to position your work in relation to existing research.


Let’s take a look at an example. A paragraph that fails to synthesize its sources simply describes one study after another: it provides no context for the information, does not explain the relationships between the sources, and neither analyzes them nor considers gaps in existing research. The paragraph below, by contrast, synthesizes its sources by putting them in conversation with each other:

Research on the barriers to second language acquisition has primarily focused on age-related difficulties. Building on Lenneberg’s (1967) theory of a critical period of language acquisition, Johnson and Newport (1988) tested Lenneberg’s idea in the context of second language acquisition. Their research seemed to confirm that young learners acquire a second language more easily than older learners. Recent research has considered other potential barriers to language acquisition. Schepens, van Hout, and van der Slik (2022) have revealed that the difficulties of learning a second language at an older age are compounded by dissimilarity between a learner’s first language and the language they aim to acquire. Further research needs to be carried out to determine whether the difficulty faced by adult monoglot speakers is also faced by adults who acquired a second language during the “critical period.”


To synthesize sources, group them around a specific theme or point of contention.

As you read sources, ask:

  • What questions or ideas recur? Do the sources focus on the same points, or do they look at the issue from different angles?
  • How does each source relate to others? Does it confirm or challenge the findings of past research?
  • Where do the sources agree or disagree?

Once you have a clear idea of how each source positions itself, put them in conversation with each other. Analyze and interpret their points of agreement and disagreement. This displays the relationships among sources and creates a sense of coherence.

Consider both implicit and explicit (dis)agreements. Whether one source specifically refutes another or just happens to come to different conclusions without specifically engaging with it, you can mention it in your synthesis either way.

Synthesize your sources using:

  • Topic sentences to introduce the relationship between the sources
  • Signal phrases to attribute ideas to their authors
  • Transition words and phrases to link together different ideas

To more easily determine the similarities and dissimilarities among your sources, you can create a visual representation of their main ideas with a synthesis matrix . This is a tool that you can use when researching and writing your paper, not a part of the final text.

In a synthesis matrix, each column represents one source, and each row represents a common theme or idea among the sources. In the relevant rows, fill in a short summary of how the source treats each theme or topic.

This helps you to clearly see the commonalities or points of divergence among your sources. You can then synthesize these sources in your work by explaining their relationship.

Example: Synthesis matrix

Approach
  • Lenneberg (1967): Primarily theoretical, due to the ethical implications of delaying the age at which humans are exposed to language
  • Johnson and Newport (1988): Testing the English grammar proficiency of 46 native Korean or Chinese speakers who moved to the US between the ages of 3 and 39 (all participants had lived in the US for at least 3 years at the time of testing)
  • Schepens, van Hout, and van der Slik (2022): Analyzing the results of 56,024 adult immigrants to the Netherlands from 50 different language backgrounds

Enabling factors in language acquisition
  • Lenneberg (1967): A critical period between early infancy and puberty after which language acquisition capabilities decline
  • Johnson and Newport (1988): A critical period (following Lenneberg)
  • Schepens, van Hout, and van der Slik (2022): General age effects (outside of a contested critical period), as well as the similarity between a learner’s first language and target language

Barriers to language acquisition
  • Lenneberg (1967): Aging
  • Johnson and Newport (1988): Aging (following Lenneberg)
  • Schepens, van Hout, and van der Slik (2022): Aging as well as the dissimilarity between a learner’s first language and target language
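If you prefer to keep the matrix in code rather than a spreadsheet, the sketch below shows one way to hold it as a small table structure. The use of pandas is our own choice for illustration, not something this guide prescribes, and the cell text is abbreviated from the example above.

```python
# A minimal sketch of the synthesis matrix above as a small pandas DataFrame.
# Cell text is abbreviated; rows are themes and columns are sources.
import pandas as pd

sources = [
    "Lenneberg (1967)",
    "Johnson & Newport (1988)",
    "Schepens et al. (2022)",
]

themes = {
    "Approach": [
        "Primarily theoretical",
        "Grammar test of 46 Korean/Chinese immigrants to the US",
        "Analysis of 56,024 adult immigrants to the Netherlands",
    ],
    "Enabling factors": [
        "Critical period before puberty",
        "Critical period (following Lenneberg)",
        "General age effects plus L1-L2 similarity",
    ],
    "Barriers": [
        "Aging",
        "Aging (following Lenneberg)",
        "Aging plus L1-L2 dissimilarity",
    ],
}

# Build with sources as the index, then transpose so each column is one source.
matrix = pd.DataFrame(themes, index=sources).T
print(matrix.to_string())
```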


Synthesizing sources means comparing and contrasting the work of other scholars to provide new insights.

It involves analyzing and interpreting the points of agreement and disagreement among sources.

You might synthesize sources in your literature review to give an overview of the field of research or throughout your paper when you want to contribute something new to existing research.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

Topic sentences help keep your writing focused and guide the reader through your argument.

In an essay or paper , each paragraph should focus on a single idea. By stating the main idea in the topic sentence, you clarify what the paragraph is about for both yourself and your reader.

At college level, you must properly cite your sources in all essays , research papers , and other academic texts (except exams and in-class exercises).

Add a citation whenever you quote , paraphrase , or summarize information or ideas from a source. You should also give full source details in a bibliography or reference list at the end of your text.

The exact format of your citations depends on which citation style you are instructed to use. The most common styles are APA , MLA , and Chicago .



Chapter 7: Synthesizing Sources

Learning Objectives

At the conclusion of this chapter, you will be able to:

  • synthesize key sources connecting them with the research question and topic area.

7.1 Overview of synthesizing

7.1.1 Putting the Pieces Together

Combining separate elements into a whole is the dictionary definition of synthesis.  It is a way to make connections among and between numerous and varied source materials.  A literature review is not an annotated bibliography, organized by title, author, or date of publication.  Rather, it is grouped by topic to create a whole view of the literature relevant to your research question.


Your synthesis must demonstrate a critical analysis of the papers you collected as well as your ability to integrate the results of your analysis into your own literature review. Each paper collected should be critically evaluated and weighed for “adequacy, appropriateness, and thoroughness” ( Garrard, 2017 ) before inclusion in your own review. Papers that do not meet these criteria likely should not be included in your literature review.

Begin the synthesis process by creating a grid, table, or an outline where you will summarize, using common themes you have identified and the sources you have found. The summary grid or outline will help you compare and contrast the themes so you can see the relationships among them as well as areas where you may need to do more searching. Whichever method you choose, this type of organization will help you to both understand the information you find and structure the writing of your review.  Remember, although “the means of summarizing can vary, the key at this point is to make sure you understand what you’ve found and how it relates to your topic and research question” ( Bennard et al., 2014 ).

Figure 7.2 shows an example of a simplified literature summary table. In this example, individual journal citations are listed in rows. Table column headings read: purpose, methods, and results.

As you read through the material you gather, look for common themes as they may provide the structure for your literature review.  And, remember, research is an iterative process: it is not unusual to go back and search information sources for more material.

At one extreme, if you are claiming, ‘There are no prior publications on this topic,’ it is more likely that you have not found them yet and may need to broaden your search.  At another extreme, writing a complete literature review can be difficult with a well-trod topic.  Do not cite it all; instead cite what is most relevant.  If that still leaves too much to include, be sure to reference influential sources…as well as high-quality work that clearly connects to the points you make. ( Klingner, Scanlon, & Pressley, 2005 ).

7.2 Creating a summary table

Literature reviews can be organized sequentially or by topic, theme, method, results, theory, or argument. It’s important to develop categories that are meaningful and relevant to your research question. Take detailed notes on each article and use a consistent format for capturing all the information each article provides. These notes and the summary table can be done manually, using note cards. However, given the amount of information you will be recording, an electronic file created in a word processing or spreadsheet program is more manageable. Examples of fields you may want to capture in your notes include:

  • Authors’ names
  • Article title
  • Publication year
  • Main purpose of the article
  • Methodology or research design
  • Participants
  • Measurement
  • Conclusions

  Other fields that will be useful when you begin to synthesize the sum total of your research:

  • Specific details of the article or research that are especially relevant to your study
  • Key terms and definitions
  • Strengths or weaknesses in research design
  • Relationships to other studies
  • Possible gaps in the research or literature (for example, many research articles conclude with the statement “more research is needed in this area”)
  • Finally, note how closely each article relates to your topic.  You may want to rank these as high, medium, or low relevance.  For papers that you decide not to include, you may want to note your reasoning for exclusion, such as ‘small sample size’, ‘local case study,’ or ‘lacks evidence to support assertion.’

This short video demonstrates how a nursing researcher might create a summary table.

7.2.1 Creating a Summary Table


  Summary tables can be organized by author or by theme, for example:

Author/Year | Research Design | Participants or Population Studied | Comparison | Outcome
Smith/2010 | Mixed methods | Undergraduates | Graduates | Improved access
King/2016 | Survey | Females | Males | Increased representation
Miller/2011 | Content analysis | Nurses | Doctors | New procedure

For a summary table template, see http://blogs.monm.edu/writingatmc/files/2013/04/Synthesis-Matrix-Template.pdf
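For those who prefer to start the summary table in code rather than a spreadsheet, here is a minimal sketch that writes a set of fields to a CSV file using only Python's standard library. The field names are one possible selection from the fields suggested in this chapter, and the example rows loosely echo the table above; both are purely illustrative.

```python
# A minimal sketch of an electronic summary table written to CSV with the
# standard library. Field names are one possible selection from the fields
# suggested in this chapter; the rows are hypothetical examples.
import csv

fields = [
    "authors", "year", "title", "purpose", "design",
    "participants", "measurement", "conclusions", "relevance",
]

records = [
    {"authors": "Smith", "year": 2010, "title": "Access to learning",
     "purpose": "Compare access", "design": "Mixed methods",
     "participants": "Undergraduates", "measurement": "Survey and interviews",
     "conclusions": "Improved access", "relevance": "high"},
    {"authors": "King", "year": 2016, "title": "Representation in courses",
     "purpose": "Measure representation", "design": "Survey",
     "participants": "Females", "measurement": "Questionnaire",
     "conclusions": "Increased representation", "relevance": "medium"},
]

with open("summary_table.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    writer.writerows(records)
```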

7.3 Creating a summary outline

An alternate way to organize your articles for synthesis is to create an outline. After you have collected the articles you intend to use (and have put aside the ones you won’t be using), it’s time to identify the conclusions that can be drawn from the articles as a group.

  Based on your review of the collected articles, group them by categories.  You may wish to further organize them by topic and then chronologically or alphabetically by author.  For each topic or subtopic you identified during your critical analysis of the paper, determine what those papers have in common.  Likewise, determine which ones in the group differ.  If there are contradictory findings, you may be able to identify methodological or theoretical differences that could account for the contradiction (for example, differences in population demographics).  Determine what general conclusions you can report about the topic or subtopic as the entire group of studies relate to it.  For example, you may have several studies that agree on outcome, such as ‘hands on learning is best for science in elementary school’ or that ‘continuing education is the best method for updating nursing certification.’ In that case, you may want to organize by methodology used in the studies rather than by outcome.

Organize your outline in a logical order and prepare to write the first draft of your literature review.  That order might be from broad to more specific, or it may be sequential or chronological, going from foundational literature to more current.  Remember, “an effective literature review need not denote the entire historical record, but rather establish the raison d’etre for the current study and in doing so cite that literature distinctly pertinent for theoretical, methodological, or empirical reasons.” ( Milardo, 2015, p. 22 ).

As you organize the summarized documents into a logical structure, you are also appraising and synthesizing complex information from multiple sources.  Your literature review is the result of your research that synthesizes new and old information and creates new knowledge.

7.4 Additional resources:

Literature Reviews: Using a Matrix to Organize Research / Saint Mary’s University of Minnesota

Literature Review: Synthesizing Multiple Sources / Indiana University

Writing a Literature Review and Using a Synthesis Matrix / Florida International University

Sample Literature Reviews Grid / Compiled by Lindsay Roberts

Select three or four articles on a single topic of interest to you. Then enter them into an outline or table in the categories you feel are important to a research question. Try both the grid and the outline if you can to see which suits you better. The attached grid contains the fields suggested in the video .

Literature Review Table

Author | Date | Topic/Focus | Purpose | Conceptual/Theoretical Framework | Paradigm | Methods | Context/Setting | Sample | Findings | Gaps

Test Yourself

  • Select two articles from your own summary table or outline and write a paragraph explaining how and why the sources relate to each other and your review of the literature.
  • In your literature review, under what topic or subtopic will you place the paragraph you just wrote?

Literature Reviews for Education and Nursing Graduate Students Copyright © by Linda Frederiksen is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.

Research to Action

Synthetic literature reviews: An introduction

By Steve Wallis and Bernadette Wright 26/05/2020

Whether you are writing a funding proposal or an academic paper, you will most likely be required to start with a literature review of some kind. Despite (or because of) the work involved, a literature review is a great opportunity to showcase your knowledge on a topic. In this post, we’re going to take it one step further. We’re going to walk you through a very practical approach to conducting literature reviews that allows you to show that you are advancing scientific knowledge before your project even begins. Also – and this is no small bonus – this approach lets you show how your literature review will lead to a more successful project.

Literature review – start with the basics

A literature review helps you shape effective solutions to the problems you (and your organisation) are facing. A literature review also helps you demonstrate the value of your activities. You can show how much you add to the process before you spend any money collecting new data. Finally, your literature review helps you avoid reinventing the wheel by showing you what relevant research already exists, so that you can target your new research more efficiently and more effectively.

We all want to conduct good research and have a meaningful impact on people’s lives. To do this, a literature review is a critical step. For funders, a literature review is especially important because it shows how much useful knowledge the writer already has.

Past approaches to literature reviews tend to focus on ‘muscle power’, that is, spending more time and more effort to review more papers and adhering more closely to accepted standards. Examples of standards for conducting literature reviews include the PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions and the guidelines for assessing the quality and applicability of systematic reviews developed by the Task Force on Systematic Review and Guidelines. Given the untold millions of papers in many disciplines, even a large literature review that adheres to the best guidelines does little to move us toward integrated knowledge in and across disciplines.

In short, we need to work smarter, not harder!

Synthetic literature reviews

One approach that can provide more benefit is the synthetic literature review. ‘Synthetic’ here means synthesised or integrated, not artificial. Rather than explaining and reflecting on the results of previous studies (as is typically done in literature reviews), a synthetic literature review strives to create a new and more useful theoretical perspective by rigorously integrating the results of previous studies.

Many people find the process of synthesis difficult, elusive, or mysterious. When presenting their views and making recommendations for research, they tend to fall back on intuition (which is neither harder nor smarter).

After defining your research topic (‘poverty’ for example), the next step is to search the literature for existing theories or models of poverty that have been developed from research. You can use Google Scholar or your institutional database, or the assistance of a research librarian. A broad topic such as ‘poverty’, however, will lead you to millions of articles. You’ll narrow that field by focusing more closely on your topic and adding search terms. For example, you might be more interested in poverty among Latino communities in central California. You might also focus your search according to the date of the study (often, but not always, more recent results are preferred), or by geographic location. Continue refining and focusing your search until you have a workable number of papers (depending on your available time and resources). You might also take this time to throw out the papers that seem to be less relevant.

Skim those papers to be sure that they are really relevant to your topic. Once you have chosen a workable number of relevant papers, it is time to start integrating them.

Next, sort them according to the quality of their data.

Then, read the theory presented in each paper and create a diagram of it. The theory may be found in a section called ‘theory’ or sometimes in the ‘introduction’. For research papers, the presented theory may have changed during the research process, so you should also look for the theory in the ‘findings’, ‘results’, or ‘discussion’ sections.

The diagram should include all relevant concepts from the theory and show the causal connections between the concepts that have been supported by research. (Some papers will present two theories, one before and one after the research; use the second one, and include only the hypotheses that have been supported by the research.)

For a couple of brief, partial examples from a recent interdisciplinary research paper, one theory of poverty might say ‘Having more education will help people to stay out of poverty’, while another might say ‘The more that the economy develops, the less poverty there will be’.

We then use those statements to create a diagram as we have in Figure 1.


Figure 1. Two (simple, partial) theories of poverty. (We like to use dashed lines to indicate ’causes less’, and solid lines to indicate ’causes more’)

When you have completed a diagram for each theory, the next step is to synthesise (integrate) them where the concepts are the same (or substantively similar) between two or more theories. With causal diagrams such as these, the process of synthesis becomes pretty direct. We simply combine the two (or more) theories to create a synthesised theory, such as in Figure 2.


Figure 2. Two theories synthesised where they overlap (in this case theories of poverty)

Much like a road map, a causal diagram of a theory with more concepts and more connecting arrows is more useful for navigation. You can show that your literature review is better than previous reviews by showing that you have taken a number of fragmented theories (as in Figure 1) and synthesised them to create a more coherent theory (as in Figure 2).
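If you find it helpful to keep these causal diagrams in machine-readable form, the sketch below shows how the two partial theories of poverty in Figure 1 could be stored as directed graphs and combined where their concepts overlap, mirroring the synthesis in Figure 2. The use of networkx is our own assumption for illustration; any graph library, or pencil and paper, works just as well.

```python
# A minimal sketch of synthesising two causal diagrams where a concept overlaps,
# using networkx directed graphs. The edge attribute "effect" records whether a
# concept causes more or less of another (solid vs. dashed lines in Figure 1).
import networkx as nx

theory_a = nx.DiGraph()
theory_a.add_edge("Education", "Poverty", effect="causes less")

theory_b = nx.DiGraph()
theory_b.add_edge("Economic development", "Poverty", effect="causes less")

# Both theories share the concept "Poverty", so composing the graphs
# yields the synthesised theory shown in Figure 2.
synthesised = nx.compose(theory_a, theory_b)

for cause, outcome, data in synthesised.edges(data=True):
    print(f"{cause} -> {outcome} ({data['effect']})")
```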

To go a step further, you may use Integrative Propositional Analysis (IPA) to quantify the extent to which your research has improved the structure and potential usefulness of your knowledge through the synthesis. Another source is our new book, Practical Mapping for Applied Research and Program Evaluation (see especially Chapter 5). For the basics, you can look at Chapter One for free on the publisher’s site by clicking on the ‘Preview’ tab here.

Once you become comfortable with the process, you will certainly be working ‘smarter’ and showcasing your knowledge to funders!


Evidence Synthesis Service


Guides for Conducting a Systematic Review

  • 3IE Impact International Initiative for Impact Evaluation offers a database of systematic reviews on impact evaluations and has methods information for conducting your own evaluation.
  • Campbell systematic reviews: Policies and guidelines Guidelines for producing a Campbell Systematic Review. The Campbell Collaboration is an international research network that produced systematic reviews of the effects of social interventions.
  • Cochrane Handbook for Systematic Reviews of Interventions The Cochrane Handbook for Systematic Reviews of Interventions is the official document that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.
  • EQUATOR Network The EQUATOR Network is an international initiative that seeks to improve the reliability and value of published health research literature by promoting transparent and accurate reporting. They provide a list of various standards for reporting in systematic reviews.
  • Finding What Works in Health Care: Standards for Systematic Reviews (IOM) IOM's (2011) standards address the entire systematic review process, from locating, screening, and selecting studies for the review, to synthesizing the findings and assessing the overall quality of the body of evidence, to producing the final report. Includes a link to the IOM Standards for Systematic Reviews.
  • Methods Guide for Effectiveness and Comparative Effectiveness Reviews (AHRQ) This guide was developed to improve transparency, consistency, and scientific rigor of those working on Comparative Effectiveness Reviews.
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA Statement) The aim of the PRISMA Statement is to help authors improve the reporting of systematic reviews and meta-analyses. The focus of PRISMA is randomized controlled trials, but it can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions.
  • Systematic Reviews: CRD's Guidance for Undertaking Reviews in Health Care Provides practical guidance for undertaking evidence synthesis based on a thorough understanding of systematic review methodology. Presents core principles of systematic reviews and highlights issues that are specific to reviews of clinical tests, public health interventions, adverse effects, and economic evaluations. The final chapter discusses incorporation of qualitative research in or alongside effectiveness reviews.

Tools for Assessing the Quality of Studies

  • Assessing the quality of reports of randomized clinical trials: is blinding necessary? Jadad, A. R., Moore, R. A., Carroll, D., Jenkinson, C., Reynolds, D. J., Gavaghan, D. J., & McQuay, H. J. (1996). Assessing the quality of reports of randomized clinical trials: is blinding necessary?. Controlled clinical trials, 17(1), 1–12. https://doi.org/10.1016/0197-2456(95)00134-4
  • Assessing the Risk of Bias of Individual Studies in Systematic Reviews of Health Care Interventions | AHRQ
  • Avoiding Bias in Selecting Studies | AHRQ
  • Centre for Evidence-Based Medicine (CEBM) Critical Appraisal


  • Duke Quality Assessment and Risk of Bias Tool Repository Spreadsheet of validated ROB Tools


  • GRADE Working Group The Grading of Recommendations Assessment, Development and Evaluation (short GRADE) working group began in the year 2000 as an informal collaboration of people with an interest in addressing the shortcomings of grading systems in health care. The working group has developed a common, sensible and transparent approach to grading quality (or certainty) of evidence and strength of recommendations. Many international organizations have provided input into the development of the GRADE approach which is now considered the standard in guideline development.
  • LATITUDES Network Validated risk of bias tools for evidence synthesis, designed to parallel the EQUATOR Network; launched at the Cochrane Colloquium in 2023.
  • The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses Nonrandomized studies, including case-control and cohort studies, can be challenging to implement and conduct. Assessment of the quality of such studies is essential for a proper understanding of nonrandomized studies. The Newcastle-Ottawa Scale (NOS) is an ongoing collaboration between the Universities of Newcastle, Australia and Ottawa, Canada. It was developed to assess the quality of nonrandomized studies with its design, content and ease of use directed to the task of incorporating the quality assessments in the interpretation of meta-analytic results.
  • Strength of recommendation taxonomy (SORT): a patient-centered approach to grading evidence in the medical literature Ebell, M. H., Siwek, J., Weiss, B. D., Woolf, S. H., Susman, J., Ewigman, B., & Bowman, M. (2004). Strength of recommendation taxonomy (SORT): a patient-centered approach to grading evidence in the medical literature. American family physician, 69(3), 548–556.
  • Open access
  • Published: 14 August 2024

Qualitative studies involving users of clinical neurotechnology: a scoping review

  • Georg Starke 1 , 2 ,
  • Tugba Basaran Akmazoglu 3 ,
  • Annalisa Colucci 4 ,
  • Mareike Vermehren 4 ,
  • Amanda van Beinum 5 ,
  • Maria Buthut 4 ,
  • Surjo R. Soekadar 4 ,
  • Christoph Bublitz 7 ,
  • Jennifer A. Chandler 6 &
  • Marcello Ienca 1 , 2  

BMC Medical Ethics volume  25 , Article number:  89 ( 2024 ) Cite this article


Background

The rise of a new generation of intelligent neuroprostheses, brain-computer interfaces (BCI) and adaptive closed-loop brain stimulation devices hastens the clinical deployment of neurotechnologies to treat neurological and neuropsychiatric disorders. However, it remains unclear how these nascent technologies may impact the subjective experience of their users. To inform this debate, it is crucial to have a solid understanding of how more established current technologies already affect their users. In recent years, researchers have used qualitative research methods to explore the subjective experience of individuals who become users of clinical neurotechnology. Yet, a synthesis of these more recent findings focusing on qualitative methods is still lacking.

Methods

To address this gap in the literature, we systematically searched five databases for original research articles that investigated subjective experiences of persons using or receiving neuroprosthetics, BCIs or neuromodulation with qualitative interviews and raised normative questions.

Results

36 research articles were included and analysed using qualitative content analysis. Our findings synthesise the current scientific literature and reveal a pronounced focus on usability and other technical aspects of user experience. In parallel, they highlight a relative neglect of considerations regarding agency, self-perception, personal identity and subjective experience.

Conclusions

Our synthesis of the existing qualitative literature on clinical neurotechnology highlights the need to expand the current methodological focus so as to also investigate non-technical aspects of user experience. Given the critical role that considerations of agency, self-perception and personal identity play in assessing the ethical and legal significance of these technologies, our findings reveal a critical gap in the existing literature. This review provides a comprehensive synthesis of the current qualitative research landscape on neurotechnology and the limitations thereof. These findings can inform researchers on how to study the subjective experience of neurotechnology users more holistically and build patient-centred neurotechnology.


Introduction

Due to a rapid expansion in public-private investment, market size and availability of Artificial Intelligence (AI) tools for functional optimization, the clinical advancement of novel neurotechnologies is accelerating its pace [ 1 ]. Bidirectional intelligent Brain-Computer interfaces (BCI) that aim at merging both read-out and write-in devices are in active development and are expanding in functional capabilities and commercial availability [ 2 , 3 ]. Such BCIs, which can decode and modulate neural activity through direct stimulation of brain tissue, promise additional avenues in the treatment of neurological diseases by adapting to the particularities of individual users’ brains. Potential applications are Parkinson’s disease [ 4 ] or epilepsy [ 5 ] as well as psychiatric disorders, such as major depressive disorder [ 6 ] or obsessive compulsive disorder [ 7 ]. Driven by these advances and in conjunction with progress in deep learning and generative AI software as well as higher-bandwidth hardware, clinical neurotechnology is likely to take an increasingly central role in the prevention, diagnosis and treatment of neuropsychiatric disorders.

In line with these scientific trends, the last decade has consequently seen a fast rise in the ethical attention devoted to neurotechnological systems that establish a direct connection with the human central nervous system [ 8 ], including neurostimulation devices. Yet, at times, neuroethical concerns may have outpaced real-life possibilities, particularly with regard to the impact of neurotechnology on personality, identity, autonomy, authenticity, agency or self (PIAAAS) [ 9 ]. This points to the need for basing ethical assessments and personal decisions about deploying devices on solid empirical grounds. In particular, it is crucial to gain a comprehensive understanding of the lived experience of using neurotechnologies from the epistemically privileged first-person perspective of users – “what it is like” to use neurotechnologies. Its examination in empirical studies has added a vital contribution to the literature [ 10 ].

Yet, few reviews have attempted to synthesize the growing body of empirical studies on user experience with clinical neurotechnology. Burwell et al. [ 11 ] reviewed literature from biomedical ethics on BCIs up to 2016, identifying key ethical, legal and societal challenges, yet noting a lack of concrete ethical recommendations for implementation. Worries about a lack of attention to ethics in BCI studies have been further corroborated by two reviews by Specker Sullivan and Illes, reviewing BCI research published up until 2015. They critically assessed the rationales of BCI research studies [ 12 ] and found a remarkable absence of ethical language in published BCI research [ 13 ]. Taking a different focus, Kögel et al. [ 14 ] have provided a scoping review summarizing empirical studies investigating ethics of BCIs until 2017, with a strong focus on quantitative methods in the reviewed papers. Most recently, this list of reviews has been complemented by van Velthoven et al. [ 15 ], who review empirical and conceptual ethical literature on the use of visual neuroprostheses.

To the best of our knowledge, a specific review of qualitative research on the ethics of emerging neurotechnologies such as neuroprosthetics, BCIs and neuromodulation systems is still lacking. We believe that qualitative research involving actual or prospective neurotechnology users is particularly significant, as it allows researchers to tap into the richness of first-person experiences in a way that standardized questionnaires without the option of free report cannot. In the following, we synthesize published research on the subjective experience of using clinical neurotechnologies to enrich the ethical debate and provide guidance to developers and regulators.

Methods

On January 13, 2022, we conducted a search of relevant scientific literature across 5 databases, namely PubMed (89 results), Scopus (178 results), Web of Science (79 results), PsycInfo (134 results) and IEEE Xplore (4 results). The search was performed on title, abstract and keywords, using a search string designed to identify articles that employed qualitative methods, engaged with users of neurotechnology, and covered normative issues: [“qualitative” OR “interview” OR “focus group” OR “ethnography” OR “grounded theory” OR “discourse analysis” OR “interpretative phenomenological analysis” OR “thematic analysis”] AND [“user” OR “patient” OR “people” OR “person” OR “participant” OR “subject”] AND [“Brain-Computer” OR “BCI” OR “Brain-Machine” OR “neurostimulation” OR “neuromodulation” OR “TMS” OR “transcranial” OR “neuroprosthetic*” OR “neuroprosthesis” OR “DBS”] AND [“ethic*” OR “bioethic*” OR “normative” OR “value” OR “evaluation”].
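To illustrate how such a query can be kept consistent across databases, the sketch below (Python, not the authors' actual code) assembles the boolean string programmatically from the four term groups quoted above, so that only database-specific syntax needs adapting by hand.

```python
# Minimal sketch: build the boolean search string from its four term groups.
# Term lists are copied from the search string reported in the text above.
method_terms = ["qualitative", "interview", "focus group", "ethnography",
                "grounded theory", "discourse analysis",
                "interpretative phenomenological analysis", "thematic analysis"]
user_terms = ["user", "patient", "people", "person", "participant", "subject"]
technology_terms = ["Brain-Computer", "BCI", "Brain-Machine", "neurostimulation",
                    "neuromodulation", "TMS", "transcranial", "neuroprosthetic*",
                    "neuroprosthesis", "DBS"]
normative_terms = ["ethic*", "bioethic*", "normative", "value", "evaluation"]

def or_block(terms):
    """Join a list of terms into a quoted OR block: ["a" OR "b" ...]."""
    return "[" + " OR ".join(f'"{t}"' for t in terms) + "]"

search_string = " AND ".join(
    or_block(group)
    for group in (method_terms, user_terms, technology_terms, normative_terms)
)
print(search_string)
```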

Across databases, the search syntax was adapted to reflect the respective logic of each library. Our search yielded a total of 484 articles. Of these, 133 duplicates were removed, and 52 further results were marked as ineligible by automation tools, due to either not being written in English or not representing original research in a peer-reviewed journal. The remaining 299 were screened manually, with screening tasks shared equally among the authors GS, TBA, AC, MV, CB, JC, and MI. Articles were included if they were written in English, published in a peer-reviewed journal, and reported original empirical qualitative research among human users of a neurotechnological system that establishes a direct connection with the human central nervous system (including neurostimulation devices). Other types of articles such as perspectives, letters to the editor, or review articles were not included. Eligible methods included individual interviews, focus groups and stakeholder consultations; studies that did not use any direct verbal input from users were excluded. Each abstract was screened independently by two reviewers, and unclear cases were resolved by discussion among reviewers. This process resulted in the exclusion of 247 articles, leaving 52 publications for full-text assessment.

Full texts of these 52 articles were retrieved and assessed for eligibility. Again, this task was shared equally across the 7 authors, who made independent recommendations on whether an article should be included for further analysis; disagreement was resolved by discussion. 20 articles were excluded at this stage for not meeting the inclusion criteria. This resulted in a final body of 36 papers: 32 articles from the database search plus 4 additional papers identified through citation chaining, as is customary in scoping reviews.
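As a quick consistency check, the screening numbers reported above can be re-traced step by step. The sketch below simply reproduces the PRISMA flow arithmetic; all values are taken from the two preceding paragraphs.

```python
# Minimal sketch re-tracing the screening flow reported in the text.
records_identified = 484
duplicates_removed = 133
ineligible_by_automation = 52
screened = records_identified - duplicates_removed - ineligible_by_automation
assert screened == 299          # abstracts screened manually

excluded_at_screening = 247
full_texts_assessed = screened - excluded_at_screening
assert full_texts_assessed == 52  # publications retrieved for full-text review

excluded_at_full_text = 20
included_from_search = full_texts_assessed - excluded_at_full_text
added_by_citation_chaining = 4
total_included = included_from_search + added_by_citation_chaining
print(total_included)            # 36, matching the papers analysed below
```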

In the data analysis phase, we compiled a descriptive summary of the findings and conducted a thematic analysis. When compiling the descriptive summary, we followed the recommendations by Arksey and O’Malley [ 16 ] and included comprehensive information beyond the authors, year, and title of each study, extracting also study location, methodology, study population, type of neurotechnology, and more. For the thematic analysis, the full texts were read and coded by the authors through annotations in PDF files, with papers evenly distributed among the group. Coding was based on a previously agreed coding structure of four thematic families, covering (1) subjective experience with BCIs, (2) aspects concerning usability and technology, (3) ethical questions, and (4) impact on social relations, plus a fifth miscellaneous category for future resolution. In accordance with the suggestions by Braun and Clarke [ 17 ], codes that were not clearly covered by the coding tree were grouped into the category “miscellaneous” and, after discussion, used to develop new themes or subsumed under the existing thematic families. The results were compiled and unified by the first author and imported into the Atlas.ti software (version 22.2), with adaptations to the coding tree discussed between the first and last authors.
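For readers less familiar with deductive coding, the following hypothetical sketch (not the authors' actual Atlas.ti project) shows one way the agreed coding structure could be represented, with a fallback to the "miscellaneous" category for codes that do not fit the four thematic families.

```python
# Hypothetical representation of the agreed coding tree described above.
coding_tree = {
    "subjective_experience": [],     # (1) subjective experience with BCIs
    "usability_and_technology": [],  # (2) aspects concerning usability and technology
    "ethical_questions": [],         # (3) ethical questions
    "social_relations": [],          # (4) impact on social relations
    "miscellaneous": [],             # codes outside the four families, later
                                     # developed into new themes or subsumed
}

def assign_code(tree, family, code):
    """Attach a code to a thematic family, falling back to 'miscellaneous'."""
    key = family if family in tree else "miscellaneous"
    tree[key].append(code)

# Example usage with an illustrative (invented) code:
assign_code(coding_tree, "ethical_questions", "informed consent before implantation")
```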

In line with the framework suggested by Pham et al. [ 18 ], we adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) in conducting and presenting our results [ 19 ]. A flow diagram representing the entire process is depicted in Fig.  1 .

Figure 1. PRISMA flow diagram: search and screening strategy. Based on Page et al. [ 19 ]

Results

Descriptive findings

Our study included 36 papers reporting original qualitative research among users of BCIs, neuroprosthetics and neuromodulation. We found a pronounced increase over time in the number of publications employing qualitative methods to investigate such neurotechnology users, with the earliest study dating back to 2012. However, contrary to what one might expect as a reflection of the growing number of neurotechnology users, we found neither an increase in the average sample size of participants enrolled in qualitative studies nor a correlation between year of publication and number of participants (see Fig.  2 ).

Figure 2. Average number of participants and number of publications over time

The included studies were exclusively conducted in Western countries, with 11 studies from the US, 9 from Australia and the remaining 16 distributed across Europe (UK: 6, Germany: 4, Sweden, Netherlands and Switzerland 2 each). The majority of studies investigated the effects of invasive neurotechnology in the form of Deep Brain Stimulators (DBS) (26/36), especially in patients with Parkinson’s Disease (PD) (19/36). Many papers also investigated users’ experiences with non-invasive EEG-based BCIs (7/36), whereas all other technologies, such as TMS, ECT, FES, intracortical microelectrode arrays, or spinal cord stimulation, were covered by only one or two papers each (see Footnote 1). Due to the large focus on PD patients, other potential fields for clinical neurotechnological applications were much less present in the analysed research, with only 4 papers each investigating the effects of DBS on patients with major depressive disorder (4/36) or obsessive-compulsive disorder (OCD) (4/36). Across all technologies and patient groups, studies most frequently relied on semi-structured interviews with individual participants (28/36), with far fewer studies using focus groups (3/36) or other qualitative methods.

We found that a large number of papers (14/36) incorporated longitudinal aspects in their study design. For non-invasive BCIs, this comprised involving users in the development and testing of BCIs for acquired brain injury [ 20 , 21 ], assessing subjective reports across sessions of experimental BCI training [ 22 ], or conducting a 2-month follow-up interview with users of a BCI for pain management after spinal cord injury [ 23 ]. Studies of invasive devices often included interviews pre- and post-implantation, with a potential third follow-up. In studies with two interviews, the post-implantation interview took place a few weeks after surgery [ 24 , 25 ], after 3 months [ 26 ], after 9 months [ 27 , 28 ] or after a year [ 29 ]. In studies with 3 interviews, post-implantation interviews were conducted either after surgery and again after 3 months in a study on spinal cord stimulation [ 30 ] or, in the case of DBS for PD, after 3 and 6 months [ 31 , 32 ] or after 3–6 and 9–12 months respectively [ 33 ]. Table  1 provides a full overview of the included studies.

Thematic findings

Our findings from the thematic analysis can be grouped into four overlapping thematic families, namely (1) ethical challenges of neurotechnology use, (2) subjective experience with clinical neurotechnologies, (3) impact on social relations, and (4) usability and technological aspects. The raw data of our findings are accessible in the supplementary file.

Ethical concerns

With respect to users’ experiences of neurotechnology that touch on classical ethical topics, we found that autonomy played a central role in slightly more than half of all papers (20/36), yet in four different ways. Many papers noted the positive impact neurotechnology has on users’ autonomy. Users often perceive the technology as enabler of greater control over their own life, allowing them “to become who they wanted to be” [ 2 ], providing them with agency and greater independence, restoring their ability to help others, or allowing them to be more spontaneous in their everyday life [ 2 , 10 , 28 , 31 , 32 , 34 , 35 , 36 , 37 ]. Some studies reported how neurotechnology may impact users’ autonomy negatively, especially by making them more dependent on technological and medical support [ 25 , 28 , 35 , 38 , 39 ]. When balancing these positive and negative impacts, some users seem to prefer such dependency and to leave control over the devices to healthcare professionals, to ensure its safe and appropriate working [ 2 , 32 , 39 , 40 ]. Also related to autonomy were concerns about consent, especially with a view to the level of information patients received before the implantation of an invasive device, which was deemed inadequate by some patients [ 2 , 24 , 31 , 34 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 ]. Several papers called to include patients during the technology design process [ 2 , 31 , 39 ]. In addition, questions of responsibility and accountability in case of malfunctioning were repeatedly named as key concern [ 10 , 25 , 37 , 38 , 45 , 47 ].

Concerns about beneficence and about harming patients also featured prominently in most of the analysed papers (24/36), yet with substantive differences at a more granular level. While symptom improvement and restorative changes were widely reported [ 2 , 10 , 23 , 26 , 29 , 31 , 33 , 34 , 35 , 38 , 39 , 40 , 43 , 44 , 46 ], some users reported experiencing physical or psychological side effects, such as postoperative complications, new worries (for instance about magnetic fields or about changing batteries), stigma, or becoming more aware of their past suffering [ 23 , 25 , 26 , 28 , 34 , 35 , 36 , 37 , 38 , 39 , 40 , 42 , 46 , 48 , 49 ]. Less frequently, we found concerns about patient-doctor relationships [ 2 , 24 , 32 , 40 , 42 , 43 ], which seem to mediate the acceptance of clinical neurotechnologies but are also themselves impacted by technology use. For instance, while some research points to the importance of patients’ trust in healthcare professionals for the acceptance of neurotechnology [ 24 ], a personal narrative described a breakdown of the patient-physician relationship following a distressful DBS implantation for treating PD [ 42 ].

Impact on subjective experiences

Since the subjective lived experiences of neurotechnology users commonly constituted the central element of the reviewed qualitative papers, we found a rich field of reports in the vast majority of papers (31/36), describing experiences that were perceived as positive, negative or neutral. Neurotechnology-induced behavioural changes [ 28 , 36 , 37 , 40 , 42 , 46 , 47 , 49 ], as well as changes in feelings [ 27 , 41 , 42 ], (self-)perception [ 10 , 23 , 34 , 36 , 40 , 41 , 42 , 44 , 48 , 50 ], personality [ 27 , 29 , 34 , 35 , 36 , 37 , 42 , 43 , 44 , 47 , 49 ], preferences [ 49 , 50 ] or thinking [ 10 , 41 ], were also reported, particularly in users receiving continuous, non-adaptive deep brain stimulation (DBS).

Behavioural changes often concerned desired outcomes such as fewer obsessive thoughts and compulsive behaviours after successful OCD treatment [ 49 ], acting with less impediment due to seizure predictions [ 36 ], or acting more boldly with more energy and increased confidence due to symptom improvement in PD [ 37 , 47 ]. Nevertheless, it was necessary for patients and those around them to adapt and get used to new patterns of behaviour. Some patients also reported undesirable behavioural changes after subthalamic DBS implantation, “bordering on mania” [ 42 ], such as being excessively talkative [ 46 ] or developing shopping compulsions that were later described by the patient as “ridiculous” [ 28 ].

These outwardly observable changes were often related to psychological changes that users reported. Some DBS users experienced mood changes, ranging from elevated to depressed [ 27 , 41 , 42 , 44 ], while others reported changed preferences. Sometimes this affected what users valued as important in life [ 50 ], sometimes it related to very particular preferences, such as taste in music, with one patient attributing a transition from The Rolling Stones and The Beatles to Johnny Cash to their DBS implantation [ 49 ]. In patients treated for OCD or motor disorders, two studies also found positive impact on users’ thinking, whether by freeing them from obsessive thoughts [ 41 ] or improving their concentration skill [ 10 ]. In line with the large neuroethical debate on the subject, changes at times amounted to what neurotechnology users described as personality changes. Such changes included negative impacts such as being more irritable, anxious or less patient [ 34 , 35 ] or overly increased libido [ 49 ], neutral changes, such as (re-)taking an interest in politics or movies [ 49 ], and positive changes linked to improvement of psychiatric symptoms, such as being more easy-going and daring, being more expressive and assertive, or simply being more confident [ 35 , 49 ].

In line with the diversity of these changes, patients reported a vast spectrum of different attitudes towards, and relations with, the neurotechnology. Some users embraced the BCI explicitly as part of themselves [ 14 , 37 , 39 , 49 ] and described how “DBS becomes a part of who you are rather than changing you” [ 37 ]. Others felt estranged using the BCI [ 28 , 36 , 37 , 42 , 49 ] and even expressed desires to remove the alien device in forceful terms: “I hate it! I wish I could pull it out!” [ 37 ]. Aside from changes brought about by the device, the patients’ state before using neurotechnology, and especially their relation to their illness, seemed to play a crucial role [ 28 , 51 ]. An overview of the different thematic findings is provided in Fig.  3 .

Figure 3. Impact of clinical neurotechnology on subjective experience. The colours represent the valence of the impact, with orange dots representing negative, green dots representing positive, and blue dots representing ambivalent changes

A majority of studies (23/36) reported improvements in the treated symptoms [ 2 , 26 , 28 , 31 , 33 , 34 , 35 , 37 , 40 , 41 , 42 , 43 , 46 , 47 , 48 , 49 , 50 , 52 ], making patients’ lives easier [ 48 , 49 ] or – as some put it – even saving their lives [ 34 , 45 , 48 ]. Patients felt that the neurotechnology allowed them an increase in activity [ 33 , 34 , 40 ] and a return to previous forms of behaviour [ 33 , 40 , 48 , 49 ], strengthening their sense of freedom and independence [ 2 , 10 , 22 , 33 , 34 , 35 , 36 , 40 , 43 , 49 , 50 , 53 ]. Emotionally, users reported feeling more daring [ 29 , 35 ], more self-confident [ 28 , 35 , 36 , 37 , 44 ] or more stable [ 34 , 50 ], as well as feelings of hope or joy [ 10 , 22 , 35 , 50 ]. For better or worse, such changes were sometimes perceived as providing a “new start” [ 34 , 48 ] or even a “new identity” [ 34 , 41 , 42 , 49 ], while others perceived their changes as a reversion to their “former” [ 28 , 29 , 47 , 49 , 50 ] or their “real” self [ 36 , 42 , 49 ].

Among the negative subjective impacts of clinical neurotechnology mentioned in the literature (16/36), users commonly reported issues of estrangement, caused by self-perceived changes to behaviour, feelings, personality traits, or patients’ relation to their disease or disorder [ 28 , 36 , 37 , 42 , 49 ]. The negative impact differed largely depending on the type of neurotechnology used as well as on the disorders and symptoms treated with the technology. While ALS patients using non-invasive BCIs for spelling interfaces reported increased anxiety in interaction with the devices [ 53 ], PD patients with invasive DBS reported presurgical fears of pain and of the invasive procedure, as well as fear of external manipulation of their brain through the DBS implant [ 40 , 43 , 54 ]. Frequently, it was not entirely clear whether adverse developments such as further cognitive decline were attributable to the implanted device or to the persisting disease and its natural trajectory [ 31 , 33 , 34 , 40 , 43 , 48 , 50 ]. However, occasionally very severe psychiatric consequences of treatment were reported, notably by one PD patient who experienced mania and depressive symptoms through DBS treatment, resulting in a suicide attempt [ 42 ]. For DBS patients with OCD, negative impacts seem more related to difficulties of adapting to the new situation [ 35 , 49 ], for instance to their suddenly increased libido as a side effect of DBS use that may be perceived as “too much” [ 49 ], or to a perceived lack of preparation for their new (OCD-free) identity [ 41 ]. In two studies on patients with OCD, the sudden improvement of symptoms also led to moments of existential crisis, given that the symptoms had shaped a great part of their previous daily activities [ 41 , 49 ].

Impact on social relations

Using a neurotechnology not only impacts users but can also affect social relations with others (23/36), particularly primary caregivers. While some neurotechnologies, such as non-invasive BCIs for communication, may create additional workload for caregivers if the BCI needs to be set up, neurotechnologies can also reduce their burden by rendering patients more independent [ 10 , 34 , 40 , 53 ]. Beyond workload, neurotechnologies were also reported to enrich social relations by facilitating communication [ 10 , 34 , 53 ], though in some cases they led to tension between informal caregivers and patients, e.g. due to personality changes [ 28 , 35 , 37 , 40 , 42 , 47 , 49 , 55 ] or if the device was blamed for a patient’s behaviour or suggested as a solution to interpersonal problems [ 2 ]. Whether positive or negative, family and social support reportedly played a vital role in the treatment [ 2 , 28 , 40 , 50 ].

Similarly important was support by clinicians [ 39 , 40 ] and the wish for support groups with fellow neurotechnology users [ 27 , 30 , 40 , 41 ]. Inclusion in research activities was also reported as a positive effect of (experimental) BCIs [ 10 , 38 ]. More importantly though, in a large number of studies, neurotechnology users reported positive effects on their social relations [ 2 , 29 , 35 , 43 , 46 , 48 , 50 ], with some users reporting an increased wish to help others [ 35 , 50 ]. A negative social consequence in public was perceived stigma [ 25 , 35 , 48 ], even though some patients chose to actively show their device in public, “to spread information and knowledge about this treatment” [ 39 ].

Usability concerns

Concerns with technical questions and usability issues, comprising efficiency, effectiveness and satisfaction [ 52 ], were also raised by almost half of the research papers (17/36), yet differed greatly between neurotechnologies, owing to large differences in hardware (e.g., between EEG caps and implanted electrodes) and handling (e.g., between passive neurostimulation and training-intensive active BCIs). Across all applications, invasive as much as non-invasive, the most frequent concerns (8/36 each) related to hardware issues [ 2 , 22 , 23 , 38 , 39 , 46 , 52 , 53 ] and to the fine-tuning of devices required to find optimal settings, which was associated with a time burden for users [ 20 , 23 , 27 , 32 , 39 , 46 , 50 , 56 ]. Similarly, the training of patients required for the successful use of non-invasive, active BCIs was perceived as cumbersome or complicated, providing a potential obstacle to their implementation in everyday contexts [ 38 , 52 ]. Several studies reported that the use of such active BCIs required considerable concentration, leading to fatigue after prolonged use [ 10 , 38 , 53 ]. Mediating factors to address such obstacles were the availability of technical support [ 33 , 53 ], general attitudes towards technology [ 53 ], ease of integrating the technology into everyday life [ 10 , 38 , 53 ] and realistic expectations regarding the neurotechnology’s effects [ 30 , 38 , 40 , 46 ].

Discussion

The identified publications highlight that qualitative research through interviews and focus groups offers a useful way to gain access to the subjective experience of users of a diverse range of neurotechnologies. Such investigation of users’ privileged knowledge about novel devices is, in turn, crucial to improving future neurotechnological developments and aligning them with ethical considerations at an early stage [ 57 ]. Here, we discuss our findings by comparing different clinical neurotechnologies, identify gaps in the literature and point to the limitations of our scoping review.

One finding of our scoping review is that qualitative research on neurotechnologies has so far primarily focused on users of DBS treated for PD. In part, this may reflect that DBS is an established, effective treatment for controlling motor symptoms in PD and improving patients’ quality of life, resulting in its widespread adoption in many different healthcare systems worldwide [ 58 , 59 , 60 , 61 ]. Still, it would be highly beneficial to extend qualitative research to different patient groups and to other clinical neurotechnologies that directly target mental states or processes, where more pronounced effects on subjective experience may be expected.

A potential obstacle to involving more neurotechnology users beyond PD patients treated with DBS is that, for many other technologies, users are still likely to receive their treatment as part of an experimental trial. Qualitative research with such patients may face the additional practical barrier of convincing the researchers running those trials to facilitate access to their patients. Better communication across disciplines and research fields may facilitate such access, providing much-needed insights into user experiences of experimental neurotechnologies.

Some of the articles reviewed here already offer such perspectives, e.g. those investigating DBS used for major depressive disorder or OCD. Such research may also help to further clarify which differences in subjective outcome are attributable to the technology and which to differences in the treated disorders. As different patient groups are likely to have different needs and views, further research is needed to explore these and to develop implementation strategies that address them in a patient-tailored manner. Furthermore, different neurotechnologies (and applications thereof) are likely to impact the minds of their users in different ways. Therefore, future research should investigate whether the type and modality of stimulation exert differential impacts on the subjective experience of end users.

Our findings reveal differential effects between patients using DBS for the treatment of PD and patients using DBS for the treatment of OCD. For example, some reported effects of invasive neurotechnology, such as the induction of more assertive behaviour, may be a reason for concern in PD [ 28 ], while being considered a successful treatment outcome in OCD [ 35 , 49 ]. More comparative research among DBS users treated for OCD or other neuropsychiatric disorders, such as depression, is needed [ 62 ] and may help to better understand which experiences are directly attributable to the stimulation of specific brain areas, such as the subthalamic nucleus for PD and the nucleus accumbens for OCD, and which result from other factors, e.g., related to undergoing surgery or to different treatment settings in neurological and psychiatric care [ 63 , 64 ].

Research on such differences may also have practical consequences. For instance, one may wonder whether different preparation stages, and possibly different degrees of information for obtaining consent, may be called for between invasive clinical neurotechnologies used in psychiatry and those used in neurology, or whether, on the contrary, similarities in the use of neurotechnologies ultimately point towards ending the distinction between mental and neurological illnesses [ 63 ]. In either case, our findings highlight that the psychological impacts of clinical neurotechnologies are complex, multi-faceted phenomena mediated by many factors, calling for more qualitative research to better grasp the lived experiences of those using novel neurotechnologies.

Our scoping review identified several gaps in the literature related to research methodology, investigated topics and investigated neurotechnologies. First, while a large number of studies embraced a longitudinal approach to investigating users’ experiences, none of the included studies looked at impacts beyond a timeframe of one year. However, as is known from DBS studies in major depressive disorder, it is important to investigate and evaluate the long-term effects of neurotechnologies such as DBS [ 6 ]. Future qualitative research should therefore address this gap. Second, and connected to this, some research questions have not yet been investigated in full, such as the long-term impacts of clinical neurotechnologies on memory or belief continuity. Third, empirical findings on closed-loop neurotechnologies that integrate artificial intelligence are so far nascent [ 2 , 36 ]. As important conceptual and ethical questions arise specifically from the integration of human and artificial intelligence, e.g. questions of control and responsibility, further qualitative research should be conducted with users of such devices.

Finally, our findings reveal a complex and multifaceted landscape of ethical considerations. While considerations regarding personal autonomy appear highly prevalent among users, the perceived or expected impacts of neurotechnology use on personal autonomy differ significantly. Some studies suggest that neurotechnology use may enhance personal autonomy by allowing users to be more autonomous and independent in their daily lives and even to restore part of the autonomous control that was disrupted by their disorders. Other studies suggest that some neurotechnologies, especially neural implants relying on autonomous components, may diminish autonomy as they may override some users’ intentions. Sometimes this ambivalent effect is observed within the same study. This is consistent with previous theoretical reflections on this topic [ 65 ] and urges scientists to develop fine-grained and patient-centred models for assessing the impact of neurotechnology on personal autonomy. These models should distinguish on-target from off-target effects and elucidate which subcomponents of personal autonomy (e.g., volition, behavioural control, authenticity) are impacted by the use of neurotechnology.

Our scoping review has several limitations. Owing to the nature of a scoping review and to our inclusion criteria, there may be relevant literature that we failed to identify and analyse. For instance, since we only included English publications, we may have missed relevant research published in other languages, which may explain why we only found qualitative studies conducted in Western countries. Furthermore, our narrow search strategy excluded other relevant research, for instance qualitative studies conducted with potential users of clinical neurotechnology or with caregivers. Yet, a scoping review can provide a useful tool to map existing literature [ 16 , 18 ], and, given recent advances in technology and accompanying qualitative research, an update of earlier reviews such as the one by Kögel et al. [ 14 ] provides an important addition to the existing literature. By looking at qualitative studies only, we also import the general limitations of qualitative research, such as limited generalizability and dependence on the skills and experience of the researchers involved. More standardized instruments to complement the investigation of the subjective experiences of neurotechnology users therefore seem highly desirable. Recent quantitative approaches, such as online surveys assessing the subjective preferences of DBS users concerning the timing of implantation [ 66 ] or studies combining qualitative data with quantitative assessments [ 67 ], point in this direction. Additionally, experimental approaches to the monitoring and evaluation of the effects of neurotechnology on the user’s experience are currently absent. Therefore, future research should complement qualitative and quantitative user evaluations based on social science methods (e.g., interviews, focus groups and questionnaires) with experimental models.

Conclusions

The findings of our review emphasize the diversity of individual experiences with neurotechnology across individuals and technologies. They underscore the need to conduct qualitative research among diverse groups at different time points to better assess the impact of such technologies on their users; such assessments are important to inform efficacy and safety requirements for clinical neurotechnologies. In addition, qualitative research offers one way to implement user-centred ethical considerations in product development through user-centred design, and to accompany novel neurotechnologies with ethical reflection as they mature and become clinical standards.

Data availability

The availability of the full data supporting the findings of this study is subject to restrictions due to the copyright of the included papers. The quotes analysed during this study are included in this published article and its supplementary information files. Further data are available from the authors upon request.

Footnote 1: As many publications included patients with different diagnoses or investigated the effects of different neurotechnologies, the numbers indicated here do not add up to the total number of included studies.

UNESCO. Unveiling the neurotechnology landscape: scientific advancements innovations and major trends. 2023.

Klein E, et al. Brain-computer interface-based control of closed-loop brain stimulation: attitudes and ethical considerations. Brain-Computer Interfaces. 2016;3(3):140–8.


Kellmeyer P, et al. The effects of closed-loop medical devices on the autonomy and accountability of persons and systems. Camb Q Healthc Ethics. 2016;25(4):623–33.

Limousin P, Foltynie T. Long-term outcomes of deep brain stimulation in Parkinson disease. Nat Reviews Neurol. 2019;15(4):234–42.

Alkawadri R. Brain–computer interface (BCI) applications in mapping of epileptic brain networks based on intracranial-EEG: an update. Front NeuroSci. 2019;13:191.

Crowell AL, et al. Long-term outcomes of subcallosal cingulate deep brain stimulation for treatment-resistant depression. Am J Psychiatry. 2019;176(11):949–56.

Mar-Barrutia L, et al. Deep brain stimulation for obsessive-compulsive disorder: a systematic review of worldwide experience after 20 years. World J Psychiatry. 2021;11(9):659.

Clausen J, et al. Help, hope, and hype: ethical dimensions of neuroprosthetics. Science. 2017;356(6345):1338–9.

Gilbert F, Viaña JNM, Ineichen C. Deflating the DBS causes personality changes bubble. Neuroethics. 2021;14(1):1–17.

Kögel J, Jox RJ, Friedrich O. What is it like to use a BCI? - insights from an interview study with brain-computer interface users. BMC Med Ethics. 2020;21(1):2.

Burwell S, Sample M, Racine E. Ethical aspects of brain computer interfaces: a scoping review. BMC Med Ethics. 2017;18(1):1–11.

Sullivan LS, Illes J. Beyond ‘communication and control’: towards ethically complete rationales for brain-computer interface research. Brain-Computer Interfaces. 2016;3(3):156–63.

Specker Sullivan L, Illes J. Ethics in published brain–computer interface research. J Neural Eng. 2018;15(1):013001.

Kögel J, et al. Using brain-computer interfaces: a scoping review of studies employing social research methods. BMC Med Ethics. 2019;20(1):18.

van Velthoven E, et al. Ethical implications of visual neuroprostheses—a systematic review. J Neural Eng. 2022;19(2):026055.

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Res Psychol. 2006;3(2):77–101.

Pham MT, et al. A scoping review of scoping reviews: advancing the approach and enhancing the consistency. Res Synthesis Methods. 2014;5(4):371–85.

Page MJ, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Reviews. 2021;10(1):1–11.

Mulvenna M, et al. Realistic expectations with brain computer interfaces. J Assist Technol. 2012;6(4):233–44.

Martin S, et al. A qualitative study adopting a user-centered approach to design and validate a brain computer interface for cognitive rehabilitation for people with brain injury. Assist Technol. 2018;30(5):233–41.

Kryger M, et al. Flight simulation using a brain-computer interface: a pilot, pilot study. Exp Neurol. 2017;287:473–8.

Al-Taleb M, et al. Home used, patient self-managed, brain-computer interface for the management of central neuropathic pain post spinal cord injury: usability study. J Neuroeng Rehabil. 2019;16(1):1–24.

Wexler A, et al. Ethical issues in intraoperative neuroscience research: assessing subjects’ recall of informed consent and motivations for participation. AJOB Empir Bioeth. 2022;13(1):57–66.

Goering S, Wexler A, Klein E. Trading vulnerabilities: living with Parkinson’s Disease before and after deep brain stimulation. Camb Q Healthc Ethics. 2021;30(4):623–30.

Maier F, et al. Patients’ expectations of deep brain stimulation, and subjective perceived outcome related to clinical measures in Parkinson’s disease: a mixed-method approach. J Neurol Neurosurg Psychiatry. 2013;84(11):1273–81.

Thomson CJ, Segrave RA, Carter A. Changes in Personality Associated with Deep Brain Stimulation: a qualitative evaluation of clinician perspectives. Neuroethics. 2021;14:109–24.

Thomson CJ, et al. He’s back so I’m not alone: the impact of deep brain stimulation on personality, self, and relationships in Parkinson’s disease. Qual Health Res. 2020;30(14):2217–33.

Lewis CJ, et al. Subjectively perceived personality and mood changes associated with subthalamic stimulation in patients with Parkinson’s disease. Psychol Med. 2015;45(1):73–85.

Ryan CG, et al. An exploration of the experiences and Educational needs of patients with failed back surgery syndrome receiving spinal cord stimulation. Neuromodulation. 2019;22(3):295–301.

Kubu CS, et al. Patients’ shifting goals for deep brain stimulation and informed consent. Neurology. 2018;91(5):e472–8.

Merner AR, et al. Changes in patients’ desired control of their deep brain stimulation and subjective Global Control over the Course of Deep Brain Stimulation. Front Hum Neurosci. 2021;15:642195.

Liddle J, et al. Impact of deep brain stimulation on people with Parkinson’s disease: a mixed methods feasibility study exploring lifespace and community outcomes. Hong Kong J Occup Ther. 2019;32(2):97–107.

Chacón Gámez YM, Brugger F, Biller-Andorno N. Parkinson’s Disease and Deep Brain Stimulation Have an Impact on My Life: A Multimodal Study on the Experiences of Patients and Family Caregivers. Int J Environ Res Public Health. 2021;18(18):9516.

de Haan S et al. Effects of deep brain stimulation on the lived experience of obsessive-compulsive disorder patients: in-depth interviews with 18 patients. PLoS One. 2015;10(8):e0135524.

Gilbert F, et al. Embodiment and estrangement: results from a first-in-Human Intelligent BCI Trial. Sci Eng Ethics. 2019;25(1):83–96.

Gilbert F, et al. I miss being me: phenomenological effects of deep brain stimulation. AJOB Neurosci. 2017;8(2):96–109.

Grübler G, et al. Psychosocial and ethical aspects in non-invasive EEG-based BCI research - A survey among BCI users and BCI professionals. Neuroethics. 2014;7(1):29–41.

Hariz G-M, Hamberg K. Perceptions of living with a device-based treatment: an account of patients treated with deep brain stimulation for Parkinson’s disease. Neuromodulation: Technol Neural Interface. 2014;17(3):272–8.

Liddle J, et al. Mapping the experiences and needs of deep brain stimulation for people with Parkinson’s disease and their family members. Brain Impairment. 2019;20(3):211–25.

Bosanac P, et al. Identity challenges and ‘burden of normality’ after DBS for severe OCD: a narrative case study. BMC Psychiatry. 2018;18(1):186.

Gilbert F, Viaña JN. A personal narrative on living and dealing with Psychiatric symptoms after DBS surgery. Narrat Inq Bioeth. 2018;8(1):67–77.

Cabrera LY, Kelly-Blake K, Sidiropoulos C. Perspectives on deep brain stimulation and its earlier use for parkinson’s disease: a qualitative study of US patients. Brain Sci. 2020;10(1).

Bluhm R, et al. They affect the person, but for Better or worse? Perceptions of Electroceutical interventions for Depression among psychiatrists, patients, and the Public. Qual Health Res. 2021;31(13):2542–53.

Sankary LR et al. Exit from Brain Device Research: A Modified Grounded Theory Study of Researcher Obligations and Participant Experiences. AJOB Neurosci. 2021;1–12.

Thomson CJ, et al. Nothing to lose, absolutely everything to Gain: patient and caregiver expectations and subjective outcomes of deep brain stimulation for treatment-resistant depression. Front Hum Neurosci. 2021;15:755276.

Mosley PE, et al. Woe betides anybody who tries to turn me down.’ A qualitative analysis of neuropsychiatric symptoms following subthalamic deep brain stimulation for Parkinson’s Disease. Neuroethics. 2021;14:47–63.

Hariz G-M, Limousin P, Hamberg K. DBS means everything-for some time. Patients’ perspectives on daily life with deep brain stimulation for Parkinson’s disease. J Parkinson’s Disease. 2016;6(2):335–47.

de Haan S, et al. Becoming more oneself? Changes in personality following DBS treatment for psychiatric disorders: experiences of OCD patients and general considerations. PLoS ONE. 2017;12(4):e0175748.

Shahmoon S, Smith JA, Jahanshahi M. The lived experiences of deep brain stimulation in parkinson’s disease: an interpretative phenomenological analysis. Parkinson’s Disease. 2019;2019(1):1937235.

Adamson AS, Welch HG. Machine learning and the Cancer-diagnosis problem - no gold Standard. N Engl J Med. 2019;381(24):2285–7.

Zulauf-Czaja A, et al. On the way home: a BCI-FES hand therapy self-managed by sub-acute SCI participants and their caregivers: a usability study. J Neuroeng Rehabil. 2021;18(1):1–18.

Blain-Moraes S, et al. Barriers to and mediators of brain-computer interface user acceptance: Focus group findings. Ergonomics. 2012;55(5):516–25.

LaHue SC, et al. Parkinson’s disease patient preference and experience with various methods of DBS lead placement. Parkinsonism Relat Disord. 2017;41:25–30.

Lewis CJ, et al. The impact of subthalamic deep brain stimulation on caregivers of Parkinson’s disease patients: an exploratory study. J Neurol. 2015;262(2):337–45.

Cabrera LY, et al. Beyond the cuckoo’s nest: patient and public attitudes about Psychiatric Electroceutical interventions. Psychiatr Q. 2021;92(4):1425–38.

Jongsma KR, Bredenoord AL. Ethics parallel research: an approach for (early) ethical guidance of biomedical innovation. BMC Med Ethics. 2020;21(1):1–9.

Lozano AM, et al. Deep brain stimulation: current challenges and future directions. Nat Reviews Neurol. 2019;15(3):148–60.

Schuepbach W, et al. Neurostimulation for Parkinson’s disease with early motor complications. N Engl J Med. 2013;368(7):610–22.

Follett KA, et al. Pallidal versus subthalamic deep-brain stimulation for Parkinson’s disease. N Engl J Med. 2010;362(22):2077–91.

Mahajan A, et al. Global variability in Deep Brain Stimulation practices for Parkinson’s Disease. Front Hum Neurosci. 2021;15:667035.

Bublitz C, Gilbert F, Soekadar SR. Concerns with the promotion of deep brain stimulation for obsessive-compulsive disorder. Nat Med. 2023.

White P, Rickards H, Zeman A. Time to end the distinction between mental and neurological illnesses. BMJ. 2012;344.

Martin JB. The integration of neurology, psychiatry, and neuroscience in the 21st century. Am J Psychiatry. 2002;159(5):695–704.

Ferretti A, Ienca M. Enhanced cognition, enhanced self? On neuroenhancement and subjectivity. J Cogn Enhancement. 2018;2(4):348–55.

Montemayor J, et al. Deep brain stimulation for Parkinson’s Disease: why earlier use makes Shared decision making important. Neuroethics. 2022;15(2):1–11.

Maier F, et al. Subjective perceived outcome of subthalamic deep brain stimulation in Parkinson’s disease one year after surgery. Parkinsonism Relat Disord. 2016;24:41–7.


Acknowledgements

GS would like to thank the attendees of the ERA-NET NEURON mid-term seminar (Madrid, January 2023) for kind and constructive feedback on an earlier draft.

Funding

This work was supported by the ERA-NET NEURON project HYBRIDMIND (SNSF 32NE30_199436; BMBF, 01GP2121A and -B), and in part by the European Research Council (ERC) under the project NGBMI (759370), the Federal Ministry of Research and Education (BMBF) under the projects SSMART (01DR21025A), NEO (13GW0483C), QHMI (03ZU1110DD), QSHIFT (01UX2211) and NeuroQ (13N16486), as well as the Einstein Foundation Berlin (A-2019-558).

Author information

Authors and affiliations

Faculty of Medicine, Institute for History and Ethics of Medicine, Technical University of Munich, Munich, Germany

Georg Starke & Marcello Ienca

College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Faculty of Law, University of Ottawa, Ottawa, ON, Canada

Tugba Basaran Akmazoglu

Clinical Neurotechnology Laboratory, Department of Psychiatry and Neurosciences at the Charité Campus Mitte, Charité – Universitätsmedizin Berlin, Berlin, Germany

Annalisa Colucci, Mareike Vermehren, Maria Buthut & Surjo R. Soekadar

Centre for Health Law Policy and Ethics, University of Ottawa, Ottawa, ON, Canada

Amanda van Beinum

Bertram Loeb Research Chair, Faculty of Law, University of Ottawa, Ottawa, ON, Canada

Jennifer A. Chandler

Faculty of Law, Universität Hamburg, Hamburg, Germany

Christoph Bublitz


Contributions

GS, TBA, AC, MV, SS, CB, JC and MI contributed to the design and planning of the review, conducted the literature searches and organized and analyzed collected references. GS and MI wrote different sections of the article. All authors provided review of analysis results and suggested revisions for the write-up. All authors reviewed and approved the manuscript before submission.

Corresponding author

Correspondence to Georg Starke.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Starke, G., Akmazoglu, T.B., Colucci, A. et al. Qualitative studies involving users of clinical neurotechnology: a scoping review. BMC Med Ethics 25 , 89 (2024). https://doi.org/10.1186/s12910-024-01087-z


Received : 23 January 2023

Accepted : 02 August 2024

Published : 14 August 2024

DOI : https://doi.org/10.1186/s12910-024-01087-z


Keywords

  • Neurotechnology
  • Qualitative research
  • Subjective experience
  • Self-perception
  • Patient-centred technology



  • Research article
  • Open access
  • Published: 15 August 2024

The impact of adverse childhood experiences on multimorbidity: a systematic review and meta-analysis

  • Dhaneesha N. S. Senaratne 1 ,
  • Bhushan Thakkar 1 ,
  • Blair H. Smith 1 ,
  • Tim G. Hales 2 ,
  • Louise Marryat 3 &
  • Lesley A. Colvin 1  

BMC Medicine volume 22, Article number: 315 (2024)


Background

Adverse childhood experiences (ACEs) have been implicated in the aetiology of a range of health outcomes, including multimorbidity. In this systematic review and meta-analysis, we aimed to identify, synthesise, and quantify the current evidence linking ACEs and multimorbidity.

Methods

We searched seven databases from inception to 20 July 2023: APA PsycNET, CINAHL Plus, Cochrane CENTRAL, Embase, MEDLINE, Scopus, and Web of Science. We selected studies investigating adverse events occurring during childhood (< 18 years) and an assessment of multimorbidity in adulthood (≥ 18 years). Studies that only assessed adverse events in adulthood or health outcomes in children were excluded. Risk of bias was assessed using the ROBINS-E tool. Meta-analysis of prevalence and dose–response meta-analysis methods were used for quantitative data synthesis. This review was pre-registered with PROSPERO (CRD42023389528).

Results

From 15,586 records, 25 studies were eligible for inclusion (total participants = 372,162). The prevalence of exposure to ≥ 1 ACEs was 48.1% (95% CI 33.4 to 63.1%). The prevalence of multimorbidity was 34.5% (95% CI 23.4 to 47.5%). Eight studies provided sufficient data for dose–response meta-analysis (total participants = 197,981). There was a significant dose-dependent relationship between ACE exposure and multimorbidity (p < 0.001), with every additional ACE exposure contributing to a 12.9% (95% CI 7.9 to 17.9%) increase in the odds for multimorbidity. However, there was heterogeneity among the included studies (I² = 76.9%, Cochran Q = 102, p < 0.001).

Conclusions

This is the first systematic review and meta-analysis to synthesise the literature on ACEs and multimorbidity, showing a dose-dependent relationship across a large number of participants. It consolidates and enhances an extensive body of literature that shows an association between ACEs and individual long-term health conditions, risky health behaviours, and other poor health outcomes.


Background

In recent years, adverse childhood experiences (ACEs) have been identified as factors of interest in the aetiology of many conditions [ 1 ]. ACEs are potentially stressful events or environments that occur before the age of 18. They have typically been considered in terms of abuse (e.g. physical, emotional, sexual), neglect (e.g. physical, emotional), and household dysfunction (e.g. parental separation, household member incarceration, household member mental illness) but could also include other forms of stress, such as bullying, famine, and war. ACEs are common: estimates suggest that 47% of the UK population have experienced at least one form, with 12% experiencing four or more [ 2 ]. ACEs are associated with poor outcomes in a range of physical health, mental health, and social parameters in adulthood, with greater ACE burden being associated with worse outcomes [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 ].

Over a similar timescale, multimorbidity has emerged as a significant health challenge. It is commonly defined as the co-occurrence of two or more long-term conditions (LTCs), with a long-term condition defined as any physical or mental health condition lasting, or expected to last, longer than 1 year [ 9 ]. Multimorbidity is both common and age-dependent, with a global adult prevalence of 37% that rises to 51% in adults over 60 [ 10 , 11 ]. Individuals living with multimorbidity face additional challenges in managing their health, such as multiple appointments, polypharmacy, and the lack of continuity of care [ 12 , 13 , 14 ]. Meanwhile, many healthcare systems struggle to manage the additional cost and complexity of people with multimorbidity, as these systems have often evolved to address a single-disease model [ 15 , 16 ]. As global populations continue to age, with an estimated 2.1 billion adults over 60 by 2050, the pressures facing already strained healthcare systems will continue to grow [ 17 ]. Identifying factors early in the aetiology of multimorbidity may help to mitigate the consequences of this developing healthcare crisis.

Many mechanisms have been suggested for how ACEs might influence later life health outcomes, including the risk of developing individual LTCs. Collectively, they contribute to the idea of ‘toxic stress’; cumulative stress during key developmental phases may affect development [ 18 ]. ACEs are associated with measures of accelerated cellular ageing, including changes in DNA methylation and telomere length [ 19 , 20 ]. ACEs may lead to alterations in stress-signalling pathways, including changes to the immune, endocrine, and cardiovascular systems [ 21 , 22 , 23 ]. ACEs are also associated with both structural and functional differences in the brain [ 24 , 25 , 26 , 27 ]. These diverse biological changes underpin psychological and behavioural changes, predisposing individuals to poorer self-esteem and risky health behaviours, which may in turn lead to increased risk of developing individual LTCs [ 1 , 2 , 28 , 29 , 30 , 31 , 32 ]. A growing body of evidence has therefore led to an increased focus on developing trauma-informed models of healthcare, in which the impact of negative life experiences is incorporated into the assessment and management of LTCs [ 33 ].

Given the contributory role of ACEs in the aetiology of individual LTCs, it is reasonable to suspect that ACEs may also be an important factor in the development of multimorbidity. Several studies have implicated ACEs in the aetiology of multimorbidity, across different cohorts and populations, but to date no meta-analyses have been performed to aggregate this evidence. In this review, we aim to summarise the state of the evidence linking ACEs and multimorbidity, to quantify the strength of any associations through meta-analysis, and to highlight the challenges of research in this area.

Methods

Search strategy and selection criteria

We conducted a systematic review and meta-analysis that was prospectively registered in the International Prospective Register of Systematic Reviews (PROSPERO) on 25 January 2023 (ID: CRD42023389528) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.

We developed a search strategy based on previously published literature reviews and refined it following input from subject experts, an academic librarian, and patient and public partners (Additional File 1: Table S1). We searched the following seven databases from inception to 20 July 2023: APA PsycNET, CINAHL Plus, Cochrane CENTRAL, Embase, MEDLINE, Scopus, and Web of Science. The search results were imported into Covidence (Veritas Health Innovation, Melbourne, Australia), which automatically identified and removed duplicate entries. Two reviewers (DS and BT) independently performed title and abstract screening and full text review. Discrepancies were resolved by a third reviewer (LC).

Reports were eligible for review if they included adults (≥ 18 years), adverse events occurring during childhood (< 18 years), and an assessment of multimorbidity or health status based on LTCs. Reports that only assessed adverse events in adulthood or health outcomes in children were excluded.

The following study designs were eligible for review: randomised controlled trials, cohort studies, case–control studies, cross-sectional studies, and review articles with meta-analysis. Editorials, case reports, and conference abstracts were excluded. Systematic reviews without a meta-analysis and narrative synthesis review articles were also excluded; however, their reference lists were screened for relevant citations.

Data analysis

Two reviewers (DS and BT) independently performed data extraction into Microsoft Excel (Microsoft Corporation, Redmond, USA) using a pre-agreed template. Discrepancies were resolved by consensus discussion with a third reviewer (LC). Data extracted from each report included study details (author, year, study design, sample cohort, sample size, sample country of origin), patient characteristics (age, sex), ACE information (definition, childhood cut-off age, ACE assessment tool, number of ACEs, list of ACEs, prevalence), multimorbidity information (definition, multimorbidity assessment tool, number of LTCs, list of LTCs, prevalence), and analysis parameters (effect size, model adjustments). For meta-analysis, we extracted ACE groups, number of ACE cases, number of multimorbidity cases, number of participants, odds ratios or regression beta coefficients, and 95% confidence intervals (95% CI). Where data were partially reported or missing, we contacted the study authors directly for further information.

Two reviewers (DS and BT) independently performed risk of bias assessments of each included study using the Risk Of Bias In Non-randomized Studies of Exposures (ROBINS-E) tool [ 34 ]. The ROBINS-E tool assesses the risk of bias for the study outcome relevant to the systematic review question, which may not be the primary study outcome. It assesses risk of bias across seven domains: confounding, measurement of the exposure, participant selection, post-exposure interventions, missing data, measurement of the outcome, and selection of the reported result. The overall risk of bias for each study was determined using the ROBINS-E algorithm. Discrepancies were resolved by consensus discussion.

All statistical analyses were performed in R version 4.2.2 using the RStudio integrated development environment (RStudio Team, Boston, USA). To avoid repetition of participant data, where multiple studies analysed the same patient cohort, we selected the study with the best reporting of raw data for meta-analysis and the largest sample size. Meta-analysis of prevalence was performed with the meta package [ 35 ], using logit transformations within a generalised linear mixed model, and reporting the random-effects model [ 36 ]. Inter-study heterogeneity was assessed and reported using the I 2 statistic, Cochran Q statistic, and Cochran Q p -value. Dose–response meta-analysis was performed using the dosresmeta package [ 37 ] following the method outlined by Greenland and Longnecker (1992) [ 38 , 39 ]. Log-linear and non-linear (restricted cubic spline, with knots at 5%, 35%, 65%, and 95%) random effects models were generated, and goodness of fit was evaluated using a Wald-type test (denoted by X 2 ) and the Akaike information criterion (AIC) [ 39 ].
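As an illustration of how a fitted log-linear dose-response model is read, the sketch below (Python rather than the authors' R code) treats the pooled estimate as a change in log-odds per additional ACE and compounds it over several exposure levels. The 12.9% per-ACE increase used here is the pooled result reported in this review's abstract; the code is not the authors' analysis, only a worked interpretation of it.

```python
import math

# Assumed pooled estimate from the abstract: 12.9% increase in the odds of
# multimorbidity per additional ACE under a log-linear dose-response model.
or_per_ace = 1.129
beta = math.log(or_per_ace)  # change in log-odds per additional ACE

def odds_ratio(n_aces: int) -> float:
    """Odds ratio for multimorbidity relative to zero ACEs, log-linear model."""
    return math.exp(n_aces * beta)

for k in (1, 2, 4):
    print(k, round(odds_ratio(k), 2))  # e.g. 4 ACEs -> roughly 1.62
```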

Patient and public involvement

The Consortium Against Pain Inequality (CAPE) Chronic Pain Advisory Group (CPAG) consists of individuals with lived experiences of ACEs, chronic pain, and multimorbidity. CPAG was involved in developing the research question. The group has experience in systematic review co-production (in progress).

Results

The search identified 15,586 records, of which 25 met inclusion criteria for the systematic review (Fig. 1) [ 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 ]. The summary characteristics can be found in Additional File 1: Table S2. Most studies examined European ( n = 11) or North American ( n = 9) populations, with a few looking at Asian ( n = 3) or South American ( n = 1) populations and one study examining a mixed cohort (European and North American populations). The total participant count (excluding studies performed on the same cohort) was 372,162. Most studies had a female predominance (median 53.8%, interquartile range (IQR) 50.9 to 57.4%).

Fig. 1 Flow chart of selection of studies into the systematic review and meta-analysis. ACE, adverse childhood experience; MM, multimorbidity; DRMA, dose–response meta-analysis

All studies were observational in design, and so risk of bias assessments were performed using the ROBINS-E tool (Additional File 1: Table S3) [ 34 ]. There were some consistent risks observed across the studies, especially in domain 1 (risk of bias due to confounding) and domain 3 (risk of bias due to participant selection). In domain 1, most studies were ‘high risk’ ( n  = 24) as they controlled for variables that could have been affected by ACE exposure (e.g. smoking status) [ 40 , 41 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 ]. In domain 3, some studies were ‘high risk’ ( n  = 7) as participant selection was based on participant characteristics that could have been influenced by ACE exposure (e.g. through recruitment at an outpatient clinic) [ 45 , 48 , 49 , 51 , 53 , 54 , 58 ]. The remaining studies were deemed as having ‘some concerns’ ( n  = 18) as participant selection occurred at a time after ACE exposure, introducing a risk of survivorship bias [ 40 , 41 , 42 , 43 , 44 , 46 , 47 , 50 , 52 , 55 , 56 , 57 , 59 , 60 , 61 , 62 , 63 , 64 ].

Key differences in risk of bias were seen in domain 2 (risk of bias due to exposure measurement) and domain 5 (risk of bias due to missing data). In domain 2, some studies were ‘high risk’ as they used a narrow or atypical measure of ACEs ( n  = 8) [ 40 , 42 , 44 , 46 , 55 , 56 , 60 , 64 ]; others were graded as having ‘some concerns’ as they used a broader but still incomplete measure of ACEs ( n  = 8) [ 43 , 45 , 48 , 49 , 50 , 52 , 54 , 62 ]; the remainder were ‘low risk’ as they used an established or comprehensive list of ACE questions [ 41 , 47 , 51 , 53 , 57 , 58 , 59 , 61 , 63 ]. In domain 5, some studies were ‘high risk’ as they failed to acknowledge or appropriately address missing data ( n  = 7) [ 40 , 42 , 43 , 45 , 51 , 53 , 60 ]; others were graded as having ‘some concerns’ as they had a significant amount of missing data (> 10% for exposure, outcome, or confounders) but mitigated for this with appropriate strategies ( n  = 6) [ 41 , 50 , 56 , 57 , 62 , 64 ]; the remainder were ‘low risk’ as they reported low levels of missing data ( n  = 12) [ 44 , 46 , 47 , 48 , 49 , 52 , 54 , 55 , 58 , 59 , 61 , 63 ].

Most studies assessed an exposure that was ‘adverse childhood experiences’ ( n  = 10) [ 41 , 42 , 50 , 51 , 53 , 57 , 58 , 61 , 63 , 64 ], ‘childhood maltreatment’ ( n  = 6) [ 44 , 45 , 46 , 48 , 49 , 59 ], or ‘childhood adversity’ ( n  = 3) [ 47 , 54 , 62 ]. The other exposures studied were ‘birth phase relative to World War Two’ [ 40 ], ‘childhood abuse’ [ 43 ], ‘childhood disadvantage’ [ 56 ], ‘childhood racial discrimination’ [ 55 ], ‘childhood trauma’ [ 52 ], and ‘quality of childhood’ (all n  = 1) [ 60 ]. More than half of studies ( n  = 13) did not provide a formal definition of their exposure of choice [ 42 , 43 , 44 , 45 , 49 , 52 , 53 , 54 , 57 , 58 , 60 , 61 , 64 ]. The upper age limit for childhood ranged from < 15 to < 18 years with the most common cut-off being < 18 years ( n  = 9). The median number of ACEs measured in each study was 7 (IQR 4–10). In total, 58 different ACEs were reported; 17 ACEs were reported by at least three studies, whilst 33 ACEs were reported by only one study. The most frequently reported ACEs were physical abuse ( n  = 19) and sexual abuse ( n  = 16) (Table  1 ). The exposure details for each study can be found in Additional File 1: Table S4.

Thirteen studies provided sufficient data to allow for a meta-analysis of the prevalence of exposure to ≥ 1 ACE; the pooled prevalence was 48.1% (95% CI 33.4 to 63.1%, I² = 99.9%, Cochran Q = 18,092, p < 0.001) (Fig. 2) [ 41 , 43 , 44 , 46 , 47 , 49 , 50 , 52 , 53 , 57 , 59 , 61 , 63 ]. Six studies provided sufficient data to allow for a meta-analysis of the prevalence of exposure to ≥ 4 ACEs; the pooled prevalence was 12.3% (95% CI 3.5 to 35.4%, I² = 99.9%, Cochran Q = 9071, p < 0.001) (Additional File 1: Fig. S1) [ 46 , 50 , 51 , 53 , 59 , 63 ].

Fig. 2 Meta-analysis of prevalence of exposure to ≥ 1 adverse childhood experience. ACE, adverse childhood experience; CI, confidence interval

Thirteen studies explicitly assessed multimorbidity as an outcome, and all of these defined the threshold for multimorbidity as the presence of two or more LTCs [ 40 , 41 , 42 , 44 , 46 , 47 , 50 , 55 , 57 , 60 , 61 , 62 , 64 ]. The remaining studies assessed comorbidities, morbidity, or disease counts [ 43 , 45 , 48 , 49 , 51 , 52 , 53 , 54 , 56 , 58 , 59 , 63 ]. The median number of LTCs measured in each study was 14 (IQR 12–21). In total, 115 different LTCs were reported; 36 LTCs were reported by at least three studies, whilst 63 LTCs were reported by only one study. Two studies did not report the specific LTCs that they measured [ 51 , 53 ]. The most frequently reported LTCs were hypertension ( n  = 22) and diabetes ( n  = 19) (Table  2 ). Fourteen studies included at least one mental health LTC. The outcome details for each study can be found in Additional File 1: Table S5.

Fifteen studies provided sufficient data to allow for a meta-analysis of the prevalence of multimorbidity; the pooled prevalence was 34.5% (95% CI 23.4 to 47.5%, I² = 99.9%, Cochran Q = 24,072, p < 0.001) (Fig. 3) [ 40 , 41 , 44 , 46 , 47 , 49 , 50 , 51 , 52 , 55 , 57 , 58 , 59 , 60 , 63 ].

Fig. 3 Meta-analysis of prevalence of multimorbidity. CI, confidence interval; LTC, long-term condition; MM, multimorbidity

All studies reported significant positive associations between measures of ACE and multimorbidity, though they varied in their means of analysis and reporting of the relationship. Nine studies reported an association between the number of ACEs (variably considered as a continuous or categorical parameter) and multimorbidity [ 41 , 43 , 46 , 47 , 50 , 56 , 57 , 61 , 64 ]. Eight studies reported an association between the number of ACEs and comorbidity counts in specific patient populations [ 45 , 48 , 49 , 51 , 53 , 58 , 59 , 63 ]. Six studies reported an association between individual ACEs or ACE subgroups and multimorbidity [ 42 , 43 , 44 , 47 , 55 , 62 ]. Two studies incorporated a measure of frequency within their ACE measurement tool and reported an association between this ACE score and multimorbidity [ 52 , 54 ]. Two studies reported an association between proxy measures for ACEs and multimorbidity; one reported ‘birth phase relative to World War Two’, and the other reported a self-report on the overall quality of childhood [ 40 , 60 ].

Eight studies, involving a total of 197,981 participants, provided sufficient data (either in the primary text, or following author correspondence) for quantitative synthesis [ 41 , 46 , 47 , 49 , 50 , 51 , 57 , 58 ]. Log-linear (Fig. 4) and non-linear (Additional File 1: Fig. S2) random effects models were compared for goodness of fit: the Wald-type test for linearity was non-significant (χ² = 3.7, p = 0.16) and the AIC was lower for the linear model (− 7.82 vs 15.86), indicating that the log-linear assumption was valid. There was a significant dose-dependent relationship between ACE exposure and multimorbidity (p < 0.001), with every additional ACE exposure contributing to a 12.9% (95% CI 7.9 to 17.9%) increase in the odds for multimorbidity (I² = 76.9%, Cochran Q = 102, p < 0.001).
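For readers unfamiliar with this approach, the sketch below outlines how log-linear and restricted cubic spline dose–response models can be fitted with the dosresmeta package [ 37 ] following the Greenland–Longnecker method. It is a hedged illustration under stated assumptions, not the review's analysis code: the data frame, its column names (study_id, ace_dose, log_or, se_log_or, cases, n), and all values are invented, and each study contributes category-level log odds ratios relative to its own unexposed (0 ACEs) reference category.

```r
# Minimal sketch: dose-response meta-analysis (Greenland & Longnecker method)
# with the dosresmeta package. All studies, doses, and estimates are invented.
library(dosresmeta)
library(rms)  # rcs() provides the restricted cubic spline basis

# One row per ACE-count category per study; referent rows (0 ACEs) carry a
# log odds ratio of 0 and a standard error of 0 by definition.
dat <- data.frame(
  study_id  = rep(1:2, each = 5),
  type      = "cc",                                         # odds-ratio style input
  ace_dose  = c(0, 1, 2, 4, 6,              0, 1, 3, 5, 7),
  log_or    = c(0, 0.10, 0.24, 0.50, 0.76,  0, 0.14, 0.38, 0.60, 0.85),
  se_log_or = c(0, 0.05, 0.06, 0.08, 0.11,  0, 0.05, 0.07, 0.09, 0.12),
  cases     = c(200, 180, 150, 90, 40,      400, 350, 250, 150, 60),
  n         = c(1000, 800, 600, 300, 120,   2000, 1500, 1000, 500, 180)
)

# Log-linear random-effects model: a single trend coefficient per additional ACE.
# ML estimation is used so that AIC values are comparable between models.
lin <- dosresmeta(formula = log_or ~ ace_dose, id = study_id, type = type,
                  se = se_log_or, cases = cases, n = n,
                  data = dat, method = "ml")

# Non-linear alternative: restricted cubic spline with knots at the 5th, 35th,
# 65th, and 95th percentiles of the dose distribution, as in the methods above.
knots <- quantile(dat$ace_dose, c(0.05, 0.35, 0.65, 0.95))
spl <- dosresmeta(formula = log_or ~ rcs(ace_dose, knots), id = study_id,
                  type = type, se = se_log_or, cases = cases, n = n,
                  data = dat, method = "ml")

summary(lin); summary(spl)  # coefficients, Wald-type tests, heterogeneity
AIC(lin); AIC(spl)          # lower AIC suggests the better-fitting model
exp(coef(lin))              # odds ratio per one additional ACE
```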

Fig. 4 Dose–response meta-analysis of the relationship between adverse childhood experiences and multimorbidity. Solid black line represents the estimated relationship; dotted black lines represent the 95% confidence intervals for this estimate. ACE, adverse childhood experience

Discussion

This systematic review and meta-analysis synthesised the literature on ACEs and multimorbidity and showed a dose-dependent relationship across a large number of participants. Each additional ACE exposure contributed to a 12.9% (95% CI 7.9 to 17.9%) increase in the odds for multimorbidity. This adds to previous meta-analyses that have shown an association between ACEs and individual LTCs, health behaviours, and other health outcomes [ 1 , 28 , 31 , 65 , 66 ]. However, we also identified substantial inter-study heterogeneity that is likely to have arisen due to variation in the definitions, methodology, and analysis of the included studies, and so our results should be interpreted with these limitations in mind.
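As a worked illustration of what the per-ACE estimate implies (derived only from the pooled figure above, not a separately reported result), the log-linear model multiplies the odds by the same factor for each additional ACE:

```latex
% Compounding the reported per-ACE increase under the log-linear model.
% exp(beta) = 1.129 restates the 12.9% increase in odds per additional ACE.
\[
  \mathrm{OR}(k \text{ ACEs vs } 0) \;=\; e^{k\beta} \;=\; 1.129^{\,k},
  \qquad \beta = \ln(1.129) \approx 0.121 .
\]
\[
  \text{For example, } \mathrm{OR}(4) \;\approx\; 1.129^{4} \;\approx\; 1.62,
  \text{ i.e. roughly a 62\% increase in the odds of multimorbidity.}
\]
```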

Although 25 years have passed since the landmark Adverse Childhood Experiences Study by Felitti et al. [ 3 ], there is still no consistent approach to determining what constitutes an ACE. This is reflected in this review, where fewer than half of the 58 different ACEs ( n  = 25, 43.1%) were reported by more than one study and no study reported more than 15 ACEs. Even ACE types that are commonly included are not always assessed in the same way [ 67 ], and furthermore, the same question can be interpreted differently in different contexts (e.g. physical punishment for bad behaviour was socially acceptable 50 years ago but is now considered physical abuse in the UK). Although a few validated questionnaires exist, they often focus on a narrow range of ACEs; for example, the childhood trauma questionnaire demonstrates good reliability and validity but focuses on interpersonal ACEs, missing out on household factors (e.g. parental separation), and community factors (e.g. bullying) [ 68 ]. Many studies were performed on pre-existing research cohorts or historic healthcare data, where the study authors had limited or no influence on the data collected. As a result, very few individual studies reported on the full breadth of potential ACEs.

ACE research is often based on ACE counts, where the types of ACEs experienced are summed into a single score that is taken as a proxy measure of the burden of childhood stress. The original Adverse Childhood Experiences Study by Felitti et al. took this approach [ 3 ], as did 17 of the studies included in this review and our own quantitative synthesis. At the population level, there are benefits to this: ACE counts provide quantifiable and comparable metrics, they are easy to collect and analyse, and in many datasets, they are the only means by which an assessment of childhood stress can be derived. However, there are clear limitations to this method when considering experiences at the individual level, not least the inherent assumptions that different ACEs in the same person are of equal weight or that the same ACE in different people carries the same burden of childhood stress. This limitation was strongly reinforced by our patient and public involvement group (CPAG). Two studies in this review incorporated frequency within their ACE scoring system [ 52 , 54 ], which adds another dimension to the assessment, but this is insufficient to understand and quantify the ‘impact’ of an ACE within an epidemiological framework.
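To make this limitation concrete, the toy sketch below (with invented item names and responses, not data from any included study) shows how a simple ACE count is typically derived and why it treats very different exposure profiles as equivalent:

```r
# Toy illustration of an ACE count: binary item responses are summed into one
# score, implicitly giving every ACE type equal weight. All data are invented.
ace_items <- data.frame(
  physical_abuse      = c(1, 0, 1),
  sexual_abuse        = c(0, 0, 1),
  parental_separation = c(1, 1, 0),
  household_substance = c(0, 1, 0)
)

ace_count <- rowSums(ace_items)
ace_count  # 2 2 2: three very different exposure profiles receive the same score

# The count carries no information about the type, frequency, severity, or
# timing of the experiences, which is the limitation discussed above.
```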

The definitions of multimorbidity were consistent across the relevant studies but the contributory long-term conditions varied. Fewer than half of the 115 different LTCs ( n  = 52, 45.2%) were reported by more than one study. Part of the challenge is the classification of healthcare conditions. For example, myocardial infarction is commonly caused by coronary heart disease, and both are a form of heart disease. All three were reported as LTCs in the included studies, but which level of pathology should be reported? Mental health LTCs were under-represented within the condition list, with just over half of the included studies assessing at least one ( n  = 14, 56.0%). Given the strong links between ACEs and mental health, and the impact of mental health on quality of life, this is an area for improvement in future research [ 31 , 32 ]. A recent Delphi consensus study by Ho et al. may help to address these issues: following input from professionals and members of the public they identified 24 LTCs to ‘always include’ and 35 LTCs to ‘usually include’ in multimorbidity research, including nine mental health conditions [ 9 ].

As outlined in the introduction, there is a strong evidence base supporting the link between ACEs and long-term health outcomes, including specific LTCs. It is not unreasonable to extrapolate this association to ACEs and multimorbidity, though to our knowledge, the pathophysiological processes that link the two have not been precisely identified. However, similar lines of research are being independently followed in both fields and these areas of overlap may suggest possible mechanisms for a relationship. For example, both ACEs and multimorbidity have been associated with markers of accelerated epigenetic ageing [ 69 , 70 ], mitochondrial dysfunction [ 71 , 72 ], and inflammation [ 22 , 73 ]. More work is required to better understand how these concepts might be linked.

This review used data from a large participant base, with information from 372,162 people contributing to the systematic review and information from 197,981 people contributing to the dose–response meta-analysis. Data from the included studies originated from a range of sources, including healthcare settings and dedicated research cohorts. We believe this is of a sufficient scale and variety to demonstrate the nature and magnitude of the association between ACEs and multimorbidity in these populations.

However, there are some limitations. Firstly, although data came from 11 different countries, only two of those were from outside Europe and North America, and all were from either high- or middle-income countries. Data on ACEs from low-income countries have indicated a higher prevalence of any ACE exposure (consistently > 70%) [ 74 , 75 ], though how well this predicts health outcomes in these populations is unknown.

Secondly, studies in this review utilised retrospective participant-reported ACE data and so are at risk of recall and reporting bias. Studies utilising prospective assessments are rare and much of the wider ACE literature is open to a similar risk of bias. To date, two studies have compared prospective and retrospective ACE measurements, demonstrating inconsistent results [ 76 , 77 ]. However, these studies were performed in New Zealand and South Africa, two countries not represented by studies in our review, and had relatively small sample sizes (1037 and 1595 respectively). It is unclear whether these are generalisable to other population groups.

Thirdly, previous research has indicated a close relationship between ACEs and childhood socio-economic status (SES) [ 78 ] and between SES and multimorbidity [ 10 , 79 ]. However, the limitations of the included studies meant we were unable to separate the effect of ACEs from the effect of childhood SES on multimorbidity in this review. Whilst two studies included childhood SES as covariates in their models, others used measures from adulthood (such as adulthood SES, income level, and education level) that are potentially influenced by ACEs and therefore increase the risk of bias due to confounding (Additional File 1: Table S3). Furthermore, as for ACEs and multimorbidity, there is no consistently applied definition of SES and different measures of SES may produce different apparent effects [ 80 ]. The complex relationships between ACEs, childhood SES, and multimorbidity remain a challenge for research in this field.

Fourthly, there was a high degree of heterogeneity within included studies, especially relating to the definition and measurement of ACEs and multimorbidity. Whilst this suggests that our results should be interpreted with caution, it is reassuring to see that our meta-analysis of prevalence estimates for exposure to any ACE (48.1%) and multimorbidity (34.5%) are in line with previous estimates in similar populations [ 2 , 11 ]. Furthermore, we believe that the quantitative synthesis of these relatively heterogenous studies provides important benefit by demonstrating a strong dose–response relationship across a range of contexts.

Our results strengthen the evidence supporting the lasting influence of childhood conditions on adult health and wellbeing. How this understanding is best incorporated into routine practice is still not clear. Currently, the lack of consistency in assessing ACEs limits our ability to understand their impact at both the individual and population level and poses challenges for those looking to incorporate a formalised assessment. Whilst most risk factors for disease (e.g. blood pressure) are usually only relevant within healthcare settings, ACEs are relevant to many other sectors (e.g. social care, education, policing) [ 81 , 82 , 83 , 84 ], and so consistency of assessment across society is both more important and more challenging to achieve.

Some have suggested that the evidence for the impact of ACEs is strong enough to warrant screening, which would allow early identification of potential harms to children and interventions to prevent them. This approach has been implemented in California, USA [ 85 , 86 , 87 ]. However, this is controversial, and others argue that screening is premature with the current evidence base [ 88 , 89 , 90 ]. Firstly, not everyone who is exposed to ACEs develops poor health outcomes, and it is not clear how to identify those who are at highest risk. Many people appear to be vulnerable, experiencing more adverse health outcomes following ACE exposure than those who are not exposed, whilst others appear to be more resilient, with good health in later life despite multiple ACE exposures [ 91 ]. It may be that supportive environments can mitigate the long-term effects of ACE exposure and promote resilience [ 92 , 93 ]. Secondly, there are no accepted interventions for managing the impact of an identified ACE. As identified above, different ACEs may require input from different sectors (e.g. healthcare, social care, education, police), and so collating this evidence may be challenging. At present, ACEs screening does not meet the Wilson-Jungner criteria for a screening programme [ 94 ].

Existing healthcare systems are poorly designed to deal with the complexities of addressing ACEs and multimorbidity. Possible ways to improve this might include allocating more time per patient, prioritising continuity of care to foster long-term relationships, and greater integration between different healthcare providers (most notably primary vs secondary care teams, or physical vs mental health teams). However, such changes often demand additional resources (e.g. staff, infrastructure, processes), which are challenging to source when existing healthcare systems are already stretched [ 95 , 96 ]. Nevertheless, increasing the spotlight on ACEs and multimorbidity may help to focus attention and ultimately bring improvements to patient care and experience.

Conclusions

ACEs are associated with a range of poor long-term health outcomes, including harmful health behaviours and individual long-term conditions. Multimorbidity is becoming more common as global populations age, and it increases the complexity and cost of healthcare provision. This is the first systematic review and meta-analysis to synthesise the literature on ACEs and multimorbidity, showing a statistically significant dose-dependent relationship across a large number of participants, albeit with a high degree of inter-study heterogeneity. This consolidates and enhances an increasing body of data supporting the role of ACEs in determining long-term health outcomes. Whilst these observational studies do not confirm causality, the weight and consistency of evidence is such that we can be confident in the link. The challenge for healthcare practitioners, managers, policymakers, and governments is incorporating this body of evidence into routine practice to improve the health and wellbeing of our societies.

Availability of data and materials

No additional data was generated for this review. The data used were found in the referenced papers or provided through correspondence with the study authors.

Abbreviations

ACE: Adverse childhood experience
AIC: Akaike information criterion
CAPE: Consortium Against Pain Inequality
CI: Confidence interval
CPAG: Chronic Pain Advisory Group
IQR: Interquartile range
LTC: Long-term condition
PROSPERO: International Prospective Register of Systematic Reviews
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
ROBINS-E: Risk Of Bias In Non-randomized Studies of Exposures
SES: Socio-economic status

References

Hughes K, Bellis MA, Hardcastle KA, Sethi D, Butchart A, Mikton C, et al. The effect of multiple adverse childhood experiences on health: a systematic review and meta-analysis. Lancet Public Health. 2017;2:e356–66.

Bellis MA, Lowey H, Leckenby N, Hughes K, Harrison D. Adverse childhood experiences: retrospective study to determine their impact on adult health behaviours and health outcomes in a UK population. J Public Health Oxf Engl. 2014;36:81–91.

Felitti VJ, Anda RF, Nordenberg D, Williamson DF, Spitz AM, Edwards V, et al. Relationship of childhood abuse and household dysfunction to many of the leading causes of death in adults. The Adverse Childhood Experiences (ACE) Study. Am J Prev Med. 1998;14:245–58.

Maniglio R. The impact of child sexual abuse on health: a systematic review of reviews. Clin Psychol Rev. 2009;29:647–57.

Yu J, Patel RA, Haynie DL, Vidal-Ribas P, Govender T, Sundaram R, et al. Adverse childhood experiences and premature mortality through mid-adulthood: a five-decade prospective study. Lancet Reg Health - Am. 2022;15:100349.

Wang Y-X, Sun Y, Missmer SA, Rexrode KM, Roberts AL, Chavarro JE, et al. Association of early life physical and sexual abuse with premature mortality among female nurses: prospective cohort study. BMJ. 2023;381: e073613.

Rogers NT, Power C, Pereira SMP. Child maltreatment, early life socioeconomic disadvantage and all-cause mortality in mid-adulthood: findings from a prospective British birth cohort. BMJ Open. 2021;11: e050914.

Hardcastle K, Bellis MA, Sharp CA, Hughes K. Exploring the health and service utilisation of general practice patients with a history of adverse childhood experiences (ACEs): an observational study using electronic health records. BMJ Open. 2020;10: e036239.

Ho ISS, Azcoaga-Lorenzo A, Akbari A, Davies J, Khunti K, Kadam UT, et al. Measuring multimorbidity in research: Delphi consensus study. BMJ Med. 2022;1:e000247.

Barnett K, Mercer SW, Norbury M, Watt G, Wyke S, Guthrie B. Epidemiology of multimorbidity and implications for health care, research, and medical education: a cross-sectional study. Lancet Lond Engl. 2012;380:37–43.

Chowdhury SR, Das DC, Sunna TC, Beyene J, Hossain A. Global and regional prevalence of multimorbidity in the adult population in community settings: a systematic review and meta-analysis. eClinicalMedicine. 2023;57:101860.

Noël PH, Chris Frueh B, Larme AC, Pugh JA. Collaborative care needs and preferences of primary care patients with multimorbidity. Health Expect. 2005;8:54–63.

Chau E, Rosella LC, Mondor L, Wodchis WP. Association between continuity of care and subsequent diagnosis of multimorbidity in Ontario, Canada from 2001–2015: a retrospective cohort study. PLoS ONE. 2021;16: e0245193.

Nicholson K, Liu W, Fitzpatrick D, Hardacre KA, Roberts S, Salerno J, et al. Prevalence of multimorbidity and polypharmacy among adults and older adults: a systematic review. Lancet Healthy Longev. 2024;5:e287–96.

Albreht T, Dyakova M, Schellevis FG, Van den Broucke S. Many diseases, one model of care? J Comorbidity. 2016;6:12–20.

Soley-Bori M, Ashworth M, Bisquera A, Dodhia H, Lynch R, Wang Y, et al. Impact of multimorbidity on healthcare costs and utilisation: a systematic review of the UK literature. Br J Gen Pract. 2020;71:e39-46.

World Health Organization (WHO). Ageing and health. 2022. https://www.who.int/news-room/fact-sheets/detail/ageing-and-health . Accessed 23 Apr 2024.

Franke HA. Toxic stress: effects, prevention and treatment. Children. 2014;1:390–402.

Parade SH, Huffhines L, Daniels TE, Stroud LR, Nugent NR, Tyrka AR. A systematic review of childhood maltreatment and DNA methylation: candidate gene and epigenome-wide approaches. Transl Psychiatry. 2021;11:1–33.

Ridout KK, Levandowski M, Ridout SJ, Gantz L, Goonan K, Palermo D, et al. Early life adversity and telomere length: a meta-analysis. Mol Psychiatry. 2018;23:858–71.

Elwenspoek MMC, Kuehn A, Muller CP, Turner JD. The effects of early life adversity on the immune system. Psychoneuroendocrinology. 2017;82:140–54.

Danese A, Baldwin JR. Hidden wounds? Inflammatory links between childhood trauma and psychopathology. Annu Rev Psychol. 2017;68:517–44.

Brindle RC, Pearson A, Ginty AT. Adverse childhood experiences (ACEs) relate to blunted cardiovascular and cortisol reactivity to acute laboratory stress: a systematic review and meta-analysis. Neurosci Biobehav Rev. 2022;134: 104530.

Teicher MH, Samson JA, Anderson CM, Ohashi K. The effects of childhood maltreatment on brain structure, function and connectivity. Nat Rev Neurosci. 2016;17:652–66.

McLaughlin KA, Weissman D, Bitrán D. Childhood adversity and neural development: a systematic review. Annu Rev Dev Psychol. 2019;1:277–312.

Koyama Y, Fujiwara T, Murayama H, Machida M, Inoue S, Shobugawa Y. Association between adverse childhood experiences and brain volumes among Japanese community-dwelling older people: findings from the NEIGE study. Child Abuse Negl. 2022;124: 105456.

Antoniou G, Lambourg E, Steele JD, Colvin LA. The effect of adverse childhood experiences on chronic pain and major depression in adulthood: a systematic review and meta-analysis. Br J Anaesth. 2023;130:729–46.

Huang H, Yan P, Shan Z, Chen S, Li M, Luo C, et al. Adverse childhood experiences and risk of type 2 diabetes: a systematic review and meta-analysis. Metabolism. 2015;64:1408–18.

Lopes S, Hallak JEC, de Machado Sousa JP, de Osório F L. Adverse childhood experiences and chronic lung diseases in adulthood: a systematic review and meta-analysis. Eur J Psychotraumatology. 2020;11:1720336.

Hu Z, Kaminga AC, Yang J, Liu J, Xu H. Adverse childhood experiences and risk of cancer during adulthood: a systematic review and meta-analysis. Child Abuse Negl. 2021;117: 105088.

Tan M, Mao P. Type and dose-response effect of adverse childhood experiences in predicting depression: a systematic review and meta-analysis. Child Abuse Negl. 2023;139: 106091.

Zhang L, Zhao N, Zhu M, Tang M, Liu W, Hong W. Adverse childhood experiences in patients with schizophrenia: related factors and clinical implications. Front Psychiatry. 2023;14:1247063.

Emsley E, Smith J, Martin D, Lewis NV. Trauma-informed care in the UK: where are we? A qualitative study of health policies and professional perspectives. BMC Health Serv Res. 2022;22:1164.

ROBINS-E Development Group (Higgins J, Morgan R, Rooney A, Taylor K, Thayer K, Silva R, Lemeris C, Akl A, Arroyave W, Bateson T, Berkman N, Demers P, Forastiere F, Glenn B, Hróbjartsson A, Kirrane E, LaKind J, Luben T, Lunn R, McAleenan A, McGuinness L, Meerpohl J, Mehta S, Nachman R, Obbagy J, O’Connor A, Radke E, Savović J, Schubauer-Berigan M, Schwingl P, Schunemann H, Shea B, Steenland K, Stewart T, Straif K, Tilling K, Verbeek V, Vermeulen R, Viswanathan M, Zahm S, Sterne J). Risk Of Bias In Non-randomized Studies - of Exposure (ROBINS-E). Launch version, 20 June 2023. https://www.riskofbias.info/welcome/robins-e-tool . Accessed 20 Jul 2023.

Balduzzi S, Rücker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019;22:153–60.

Schwarzer G, Chemaitelly H, Abu-Raddad LJ, Rücker G. Seriously misleading results using inverse of Freeman-Tukey double arcsine transformation in meta-analysis of single proportions. Res Synth Methods. 2019;10:476–83.

Crippa A, Orsini N. Multivariate dose-response meta-analysis: the dosresmeta R Package. J Stat Softw. 2016;72:1–15.

Greenland S, Longnecker MP. Methods for trend estimation from summarized dose-response data, with applications to meta-analysis. Am J Epidemiol. 1992;135:1301–9.

Shim SR, Lee J. Dose-response meta-analysis: application and practice using the R software. Epidemiol Health. 2019;41: e2019006.

Arshadipour A, Thorand B, Linkohr B, Rospleszcz S, Ladwig K-H, Heier M, et al. Impact of prenatal and childhood adversity effects around World War II on multimorbidity: results from the KORA-Age study. BMC Geriatr. 2022;22:115.

Atkinson L, Joshi D, Raina P, Griffith LE, MacMillan H, Gonzalez A. Social engagement and allostatic load mediate between adverse childhood experiences and multimorbidity in mid to late adulthood: the Canadian Longitudinal Study on Aging. Psychol Med. 2021;53(4):1–11.

Chandrasekar R, Lacey RE, Chaturvedi N, Hughes AD, Patalay P, Khanolkar AR. Adverse childhood experiences and the development of multimorbidity across adulthood—a national 70-year cohort study. Age Ageing. 2023;52:afad062.

Cromer KR, Sachs-Ericsson N. The association between childhood abuse, PTSD, and the occurrence of adult health problems: moderation via current life stress. J Trauma Stress. 2006;19:967–71.

England-Mason G, Casey R, Ferro M, MacMillan HL, Tonmyr L, Gonzalez A. Child maltreatment and adult multimorbidity: results from the Canadian Community Health Survey. Can J Public Health. 2018;109:561–72.

Godin O, Leboyer M, Laroche DG, Aubin V, Belzeaux R, Courtet P, et al. Childhood maltreatment contributes to the medical morbidity of individuals with bipolar disorders. Psychol Med. 2023;53(15):1–9.

Hanlon P, McCallum M, Jani BD, McQueenie R, Lee D, Mair FS. Association between childhood maltreatment and the prevalence and complexity of multimorbidity: a cross-sectional analysis of 157,357 UK Biobank participants. J Comorbidity. 2020;10:2235042X1094434.

Henchoz Y, Seematter-Bagnoud L, Nanchen D, Büla C, von Gunten A, Démonet J-F, et al. Childhood adversity: a gateway to multimorbidity in older age? Arch Gerontol Geriatr. 2019;80:31–7.

Hosang GM, Fisher HL, Uher R, Cohen-Woods S, Maughan B, McGuffin P, et al. Childhood maltreatment and the medical morbidity in bipolar disorder: a case–control study. Int J Bipolar Disord. 2017;5:30.

Hosang GM, Fisher HL, Hodgson K, Maughan B, Farmer AE. Childhood maltreatment and adult medical morbidity in mood disorders: comparison of unipolar depression with bipolar disorder. Br J Psychiatry. 2018;213:645–53.

Lin L, Wang HH, Lu C, Chen W, Guo VY. Adverse childhood experiences and subsequent chronic diseases among middle-aged or older adults in China and associations with demographic and socioeconomic characteristics. JAMA Netw Open. 2021;4: e2130143.

Mendizabal A, Nathan CL, Khankhanian P, Anto M, Clyburn C, Acaba-Berrocal A, et al. Adverse childhood experiences in patients with neurologic disease. Neurol Clin Pract. 2022. https://doi.org/10.1212/CPJ.0000000000001134 .

Noteboom A, Have MT, De Graaf R, Beekman ATF, Penninx BWJH, Lamers F. The long-lasting impact of childhood trauma on adult chronic physical disorders. J Psychiatr Res. 2021;136:87–94.

Patterson ML, Moniruzzaman A, Somers JM. Setting the stage for chronic health problems: cumulative childhood adversity among homeless adults with mental illness in Vancouver. British Columbia BMC Public Health. 2014;14:350.

Post RM, Altshuler LL, Leverich GS, Frye MA, Suppes T, McElroy SL, et al. Role of childhood adversity in the development of medical co-morbidities associated with bipolar disorder. J Affect Disord. 2013;147:288–94.

Reyes-Ortiz CA. Racial discrimination and multimorbidity among older adults in Colombia: a national data analysis. Prev Chronic Dis. 2023;20:220360.

Sheikh MA. Coloring of the past via respondent’s current psychological state, mediation, and the association between childhood disadvantage and morbidity in adulthood. J Psychiatr Res. 2018;103:173–81.

Sinnott C, Mc Hugh S, Fitzgerald AP, Bradley CP, Kearney PM. Psychosocial complexity in multimorbidity: the legacy of adverse childhood experiences. Fam Pract. 2015;32:269–75.

Sosnowski DW, Feder KA, Astemborski J, Genberg BL, Letourneau EJ, Musci RJ, et al. Adverse childhood experiences and comorbidity in a cohort of people who have injected drugs. BMC Public Health. 2022;22:986.

Stapp EK, Williams SC, Kalb LG, Holingue CB, Van Eck K, Ballard ED, et al. Mood disorders, childhood maltreatment, and medical morbidity in US adults: an observational study. J Psychosom Res. 2020;137: 110207.

Tomasdottir MO, Sigurdsson JA, Petursson H, Kirkengen AL, Krokstad S, McEwen B, et al. Self reported childhood difficulties, adult multimorbidity and allostatic load. A cross-sectional analysis of the Norwegian HUNT study. PloS One. 2015;10:e0130591.

Vásquez E, Quiñones A, Ramirez S, Udo T. Association between adverse childhood events and multimorbidity in a racial and ethnic diverse sample of middle-aged and older adults. Innov Aging. 2019;3:igz016.

Yang L, Hu Y, Silventoinen K, Martikainen P. Childhood adversity and trajectories of multimorbidity in mid-late life: China health and longitudinal retirement study. J Epidemiol Community Health. 2021;75:593–600.

Zak-Hunter L, Carr CP, Tate A, Brustad A, Mulhern K, Berge JM. Associations between adverse childhood experiences and stressful life events and health outcomes in pregnant and breastfeeding women from diverse racial and ethnic groups. J Womens Health. 2023;32:702–14.

Zheng X, Cui Y, Xue Y, Shi L, Guo Y, Dong F, et al. Adverse childhood experiences in depression and the mediating role of multimorbidity in mid-late life: A nationwide longitudinal study. J Affect Disord. 2022;301:217–24.

Liu M, Luong L, Lachaud J, Edalati H, Reeves A, Hwang SW. Adverse childhood experiences and related outcomes among adults experiencing homelessness: a systematic review and meta-analysis. Lancet Public Health. 2021;6:e836–47.

Petruccelli K, Davis J, Berman T. Adverse childhood experiences and associated health outcomes: a systematic review and meta-analysis. Child Abuse Negl. 2019;97: 104127.

Bethell CD, Carle A, Hudziak J, Gombojav N, Powers K, Wade R, et al. Methods to assess adverse childhood experiences of children and families: toward approaches to promote child well-being in policy and practice. Acad Pediatr. 2017;17(7 Suppl):S51-69.

Bernstein DP, Stein JA, Newcomb MD, Walker E, Pogge D, Ahluvalia T, et al. Development and validation of a brief screening version of the Childhood Trauma Questionnaire. Child Abuse Negl. 2003;27:169–90.

Kim K, Yaffe K, Rehkopf DH, Zheng Y, Nannini DR, Perak AM, et al. Association of adverse childhood experiences with accelerated epigenetic aging in midlife. JAMA Network Open. 2023;6:e2317987.

Jain P, Binder A, Chen B, Parada H, Gallo LC, Alcaraz J, et al. The association of epigenetic age acceleration and multimorbidity at age 90 in the Women’s Health Initiative. J Gerontol A Biol Sci Med Sci. 2023;78:2274–81.

Zang JCS, May C, Hellwig B, Moser D, Hengstler JG, Cole S, et al. Proteome analysis of monocytes implicates altered mitochondrial biology in adults reporting adverse childhood experiences. Transl Psychiatry. 2023;13:31.

Mau T, Blackwell TL, Cawthon PM, Molina AJA, Coen PM, Distefano G, et al. Muscle mitochondrial bioenergetic capacities are associated with multimorbidity burden in older adults: the Study of Muscle, Mobility and Aging (SOMMA). J Gerontol A Biol Sci Med Sci. 2024;79(7):glae101.

Friedman E, Shorey C. Inflammation in multimorbidity and disability: an integrative review. Health Psychol Off J Div Health Psychol Am Psychol Assoc. 2019;38:791–801.

Satinsky EN, Kakuhikire B, Baguma C, Rasmussen JD, Ashaba S, Cooper-Vince CE, et al. Adverse childhood experiences, adult depression, and suicidal ideation in rural Uganda: a cross-sectional, population-based study. PLoS Med. 2021;18: e1003642.

Amene EW, Annor FB, Gilbert LK, McOwen J, Augusto A, Manuel P, et al. Prevalence of adverse childhood experiences in sub-Saharan Africa: a multicounty analysis of the Violence Against Children and Youth Surveys (VACS). Child Abuse Negl. 2023;150:106353.

Reuben A, Moffitt TE, Caspi A, Belsky DW, Harrington H, Schroeder F, et al. Lest we forget: comparing retrospective and prospective assessments of adverse childhood experiences in the prediction of adult health. J Child Psychol Psychiatry. 2016;57:1103–12.

Naicker SN, Norris SA, Mabaso M, Richter LM. An analysis of retrospective and repeat prospective reports of adverse childhood experiences from the South African Birth to Twenty Plus cohort. PLoS ONE. 2017;12: e0181522.

Walsh D, McCartney G, Smith M, Armour G. Relationship between childhood socioeconomic position and adverse childhood experiences (ACEs): a systematic review. J Epidemiol Community Health. 2019;73:1087–93.

Ingram E, Ledden S, Beardon S, Gomes M, Hogarth S, McDonald H, et al. Household and area-level social determinants of multimorbidity: a systematic review. J Epidemiol Community Health. 2021;75:232–41.

Darin-Mattsson A, Fors S, Kåreholt I. Different indicators of socioeconomic status and their relative importance as determinants of health in old age. Int J Equity Health. 2017;16:173.

Bateson K, McManus M, Johnson G. Understanding the use, and misuse, of Adverse Childhood Experiences (ACEs) in trauma-informed policing. Police J. 2020;93:131–45.

Webb NJ, Miller TL, Stockbridge EL. Potential effects of adverse childhood experiences on school engagement in youth: a dominance analysis. BMC Public Health. 2022;22:2096.

Stewart-Tufescu A, Struck S, Taillieu T, Salmon S, Fortier J, Brownell M, et al. Adverse childhood experiences and education outcomes among adolescents: linking survey and administrative data. Int J Environ Res Public Health. 2022;19:11564.

Frederick J, Spratt T, Devaney J. Adverse childhood experiences and social work: relationship-based practice responses. Br J Soc Work. 2021;51:3018–34.

University of California ACEs Aware Family Resilience Network (UCAAN). acesaware.org. ACEs Aware. https://www.acesaware.org/about/ . Accessed 6 Oct 2023.

Watson CR, Young-Wolff KC, Negriff S, Dumke K, DiGangi M. Implementation and evaluation of adverse childhood experiences screening in pediatrics and obstetrics settings. Perm J. 2024;28:180–7.

Gordon JB, Felitti VJ. The importance of screening for adverse childhood experiences (ACE) in all medical encounters. AJPM Focus. 2023;2: 100131.

Finkelhor D. Screening for adverse childhood experiences (ACEs): Cautions and suggestions. Child Abuse Negl. 2018;85:174–9.

Cibralic S, Alam M, Mendoza Diaz A, Woolfenden S, Katz I, Tzioumi D, et al. Utility of screening for adverse childhood experiences (ACE) in children and young people attending clinical and healthcare settings: a systematic review. BMJ Open. 2022;12: e060395.

Gentry SV, Paterson BA. Does screening or routine enquiry for adverse childhood experiences (ACEs) meet criteria for a screening programme? A rapid evidence summary. J Public Health Oxf Engl. 2022;44:810–22.

Morgan CA, Chang Y-H, Choy O, Tsai M-C, Hsieh S. Adverse childhood experiences are associated with reduced psychological resilience in youth: a systematic review and meta-analysis. Child Basel Switz. 2021;9:27.

Narayan AJ, Lieberman AF, Masten AS. Intergenerational transmission and prevention of adverse childhood experiences (ACEs). Clin Psychol Rev. 2021;85: 101997.

VanBronkhorst SB, Abraham E, Dambreville R, Ramos-Olazagasti MA, Wall M, Saunders DC, et al. Sociocultural risk and resilience in the context of adverse childhood experiences. JAMA Psychiat. 2024;81:406–13.

Wilson JM, Jungner G. Principles and practice of screening for disease. World Health Organisation; 1968.

Huo Y, Couzner L, Windsor T, Laver K, Dissanayaka NN, Cations M. Barriers and enablers for the implementation of trauma-informed care in healthcare settings: a systematic review. Implement Sci Commun. 2023;4:49.

Foo KM, Sundram M, Legido-Quigley H. Facilitators and barriers of managing patients with multiple chronic conditions in the community: a qualitative study. BMC Public Health. 2020;20:273.

Acknowledgements

The authors thank the members of the CAPE CPAG patient and public involvement group for providing insights gained from relevant lived experiences.

Funding

The authors are members of the Advanced Pain Discovery Platform (APDP) supported by UK Research & Innovation (UKRI), Versus Arthritis, and Eli Lilly. DS is a fellow on the Multimorbidity Doctoral Training Programme for Health Professionals, which is supported by the Wellcome Trust [223499/Z/21/Z]. BT, BS, and LC are supported by an APDP grant as part of the Partnership for Assessment and Investigation of Neuropathic Pain: Studies Tracking Outcomes, Risks and Mechanisms (PAINSTORM) consortium [MR/W002388/1]. TH and LC are supported by an APDP grant as part of the Consortium Against Pain Inequality [MR/W002566/1]. The funding bodies had no role in study design, data collection/analysis/interpretation, report writing, or the decision to submit the manuscript for publication.

Author information

Authors and Affiliations

Chronic Pain Research Group, Division of Population Health & Genomics, School of Medicine, University of Dundee, Ninewells Hospital, Dundee, DD1 9SY, UK

Dhaneesha N. S. Senaratne, Bhushan Thakkar, Blair H. Smith & Lesley A. Colvin

Institute of Academic Anaesthesia, Division of Systems Medicine, School of Medicine, University of Dundee, Dundee, UK

Tim G. Hales

School of Health Sciences, University of Dundee, Dundee, UK

Louise Marryat

Contributions

DS and LC contributed to review conception and design. DC, BT, BS, TH, LM, and LC contributed to search strategy design. DS and BT contributed to study selection and data extraction, with input from LC. DS and BT accessed and verified the underlying data. DS conducted the meta-analyses, with input from BT, BS, TH, LM, and LC. DS drafted the manuscript, with input from DC, BT, BS, TH, LM, and LC. DC, BT, BS, TH, LM, and LC read and approved the final manuscript.

Corresponding author

Correspondence to Dhaneesha N. S. Senaratne .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

12916_2024_3505_moesm1_esm.docx.

Additional File 1: Tables S1-S5 and Figures S1-S2. Table S1: Search strategy, Table S2: Characteristics of studies included in the systematic review, Table S3: Risk of bias assessment (ROBINS-E), Table S4: Exposure details (adverse childhood experiences), Table S5: Outcome details (multimorbidity), Figure S1: Meta-analysis of prevalence of exposure to ≥4 adverse childhood experiences, Figure S2: Dose-response meta-analysis of the relationship between adverse childhood experiences and multimorbidity (using a non-linear/restricted cubic spline model).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Senaratne, D.N.S., Thakkar, B., Smith, B.H. et al. The impact of adverse childhood experiences on multimorbidity: a systematic review and meta-analysis. BMC Med 22 , 315 (2024). https://doi.org/10.1186/s12916-024-03505-w

Received: 01 December 2023

Accepted: 14 June 2024

Published: 15 August 2024

DOI: https://doi.org/10.1186/s12916-024-03505-w

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Adverse childhood experiences
  • Childhood adversity
  • Chronic disease
  • Long-term conditions
  • Multimorbidity
