
Causal Comparative Research: Definition, Types & Benefits


Within the field of research, there are multiple methodologies and ways to find answers to your needs. In this article, we will address everything you need to know about Causal Comparative Research, a methodology with many advantages and applications.

What Is Causal Comparative Research?

Causal-comparative research is a methodology used to identify cause-effect relationships between independent and dependent variables.

Researchers can study cause and effect in retrospect. This can help determine the consequences or causes of differences already existing among or between different groups of people.

When you think of Causal Comparative Research, it will almost always involve the following:

  • A method or set of methods to identify cause/effect relationships
  • A set of individuals (or entities) that are NOT selected randomly – they were intended to participate in this specific study
  • Variables are represented in two or more groups (cannot be less than two, otherwise there is no differentiation between them)
  • Non-manipulated independent variables: typically, the relationship is only a suggested one, since we can’t control the independent variable completely
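The bullet points above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical test scores: two pre-existing, non-randomly-assigned groups are compared on an outcome, and the "independent variable" (program attendance) was observed, never manipulated.

```python
from statistics import mean

# Hypothetical data: two pre-existing groups (not randomly assigned).
# The independent variable (attended the program or not) is observed,
# not manipulated by the researcher.
attended = [78, 85, 90, 72, 88]      # test scores of program attendees
not_attended = [70, 75, 68, 80, 66]  # test scores of non-attendees

# The comparison itself: a difference in group averages, which the
# researcher must then back with a persuasive, logical causal argument.
gap = mean(attended) - mean(not_attended)
print(f"Mean score gap between groups: {gap:.1f}")
```

Note that nothing in the code proves causation; the gap only becomes evidence once alternative explanations for the group difference are ruled out.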

Types of Causal Comparative Research

Causal Comparative Research is broken down into two types:

  • Retrospective Comparative Research
  • Prospective Comparative Research

Retrospective Comparative Research: Involves investigating a particular question after the effects have occurred, in an attempt to determine whether a specific variable influences another variable.

Prospective Comparative Research: This type of Causal Comparative Research is initiated by the researcher, starting with the causes and proceeding to analyze the effects of a given condition. This type of investigation is much less common than the retrospective type.

LEARN ABOUT: Quasi-experimental Research

Causal Comparative Research vs Correlation Research

The universal rule of statistics… correlation is NOT causation! 

Causal Comparative Research does not rely on relationships. Instead, it compares two groups to find out whether the independent variable affected the outcome of the dependent variable.

When running Causal Comparative Research, none of the variables can be influenced, and a cause-effect relationship has to be established with a persuasive, logical argument; otherwise, it’s a correlation.

Another significant difference between the two methodologies is how the collected data is analyzed. In Causal Comparative Research, results are usually analyzed using cross-break tables and comparisons of the averages obtained, while correlation research typically uses scatter charts and correlation coefficients.
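The two analysis styles can be sketched side by side. Below is a minimal, hypothetical-data illustration: a cross-break (cross-tab) of group-by-outcome counts for causal-comparative analysis, and a hand-rolled Pearson coefficient for correlation analysis.

```python
from collections import Counter
import math

# --- Causal-comparative style: cross-break table of cell counts ---
# Hypothetical paired observations: group membership and a binary outcome.
groups   = ["A", "A", "A", "B", "B", "B", "A", "B"]
outcomes = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "fail"]

table = Counter(zip(groups, outcomes))  # cell counts, e.g. ('A', 'pass') -> 3
print(table)

# --- Correlation style: a coefficient over two numeric variables ---
def pearson_r(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours  = [1, 2, 3, 4, 5]          # hypothetical study hours
scores = [52, 55, 61, 70, 74]     # hypothetical exam scores
print(f"r = {pearson_r(hours, scores):.2f}")
```

The cross-tab answers "how do pre-existing groups differ?", while the coefficient only quantifies how strongly two variables move together.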

Advantages and Disadvantages of Causal Comparative Research

Like any research methodology, causal comparative research has specific uses and limitations to consider when planning your next project. Below we list some of the main advantages and disadvantages.

Advantages

  • It is efficient: it saves human and economic resources and can be carried out relatively quickly.
  • It identifies causes of certain occurrences (or non-occurrences).
  • It is descriptive rather than experimental, so it can be used where an experiment is not feasible.

Disadvantages

  • You’re not fully able to manipulate or control the independent variable, and there is a lack of randomization.
  • Like other methodologies, it tends to be prone to research bias; the most common type is subject-selection bias, so special care must be taken to avoid it and not compromise the validity of the research.
  • Loss of subjects, location influences, poor attitudes of subjects and testing threats are always a possibility.

Finally, it is important to interpret the results of this type of causal research with caution: a common mistake is to assume that a relationship between two variables guarantees that one variable influences, or is the main factor influencing, the other.

LEARN ABOUT: ANOVA testing

QuestionPro can be your ally in your next Causal Comparative Research

QuestionPro is one of the platforms most used by the world’s leading research agencies, thanks to its diverse functions and versatility when collecting and analyzing data.

With QuestionPro, you will not only be able to collect the data necessary to carry out your causal comparative research; you will also have access to a series of advanced reports and analyses to obtain valuable insights for your research project.

We invite you to learn more about our Research Suite, schedule a free demo of our main features today, and clarify all your doubts about our solutions.


Author: John Oppenhimer




Causal research: definition, examples and how to use it.

16 min read Causal research enables market researchers to predict hypothetical occurrences & outcomes while improving existing strategies. Discover how this research can improve employee retention & increase customer success for your business.

What is causal research?

Causal research, also known as explanatory research or causal-comparative research, identifies the extent and nature of cause-and-effect relationships between two or more variables.

It’s often used by companies to determine the impact of changes in products, features, or service processes on critical company metrics. Some examples:

  • How does rebranding of a product influence intent to purchase?
  • How would expansion to a new market segment affect projected sales?
  • What would be the impact of a price increase or decrease on customer loyalty?

To maintain the accuracy of causal research, ‘confounding variables’ (outside influences that could distort the results) are controlled. This is done either by holding them constant during data collection or by using statistical methods. These variables are identified before the start of the research experiment.
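One common statistical way to control a confounding variable is stratification: comparing exposed and unexposed subjects only within levels of the confounder. The sketch below uses hypothetical records and illustrative variable names (age band as the confounder).

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: an exposure, an outcome score, and a confounder (age band).
records = [
    {"exposed": True,  "age": "young", "score": 80},
    {"exposed": False, "age": "young", "score": 74},
    {"exposed": True,  "age": "old",   "score": 70},
    {"exposed": False, "age": "old",   "score": 66},
    {"exposed": True,  "age": "young", "score": 82},
    {"exposed": False, "age": "old",   "score": 64},
]

# Stratify on the confounder, then compare exposed vs unexposed
# within each stratum, so age cannot masquerade as the cause.
strata = defaultdict(lambda: {True: [], False: []})
for r in records:
    strata[r["age"]][r["exposed"]].append(r["score"])

for age, grp in strata.items():
    diff = mean(grp[True]) - mean(grp[False])
    print(f"{age}: exposed-unexposed gap = {diff:.1f}")
```

If the within-stratum gaps agree in direction, the confounder alone is less likely to explain the observed difference.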

As well as the above, research teams will outline several other variables and principles in causal research:

  • Independent variables

The variables that may cause direct changes in another variable. For example, consider the effect of class attendance on a student’s grade point average: the independent variable is class attendance.

  • Control variables

These are the components that remain unchanged during the experiment so researchers can better understand what conditions create a cause-and-effect relationship.  

  • Causation

This describes the cause-and-effect relationship between variables. When researchers find causation (or the cause), they’ve conducted all the processes necessary to prove that it exists.

  • Correlation

Any relationship between two variables in the experiment. It’s important to note that correlation doesn’t automatically mean causation. Researchers will typically establish correlation before proving cause-and-effect.

  • Experimental design

Researchers use experimental design to define the parameters of the experiment — e.g. categorizing participants into different groups.

  • Dependent variables

These are measurable variables that may change or are influenced by the independent variable. For example, in an experiment about whether or not terrain influences running speed, your dependent variable is the running speed.

Why is causal research useful?

It’s useful because it enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. This allows businesses to create plans that benefit the company. It’s also a great research method because researchers can immediately see how variables affect each other and under what circumstances.

Also, once the first experiment has been completed, researchers can use the learnings from the analysis to repeat the experiment or apply the findings to other scenarios. Because of this, it’s widely used to help understand the impact of changes in internal or commercial strategy to the business bottom line.

Some examples include:

  • Understanding how overall training levels are improved by introducing new courses
  • Examining which variations in wording make potential customers more interested in buying a product
  • Testing a market’s response to a brand-new line of products and/or services

So, how does causal research compare and differ from other research types?

Well, there are a few research types that are used to find answers to some of the examples above:

1. Exploratory research

As its name suggests, exploratory research involves assessing a situation (or situations) where the problem isn’t clear. Through this approach, researchers can test different avenues and ideas to establish facts and gain a better understanding.

Researchers can also use it to first navigate a topic and identify which variables are important. Because no area is off-limits, the research is flexible and adapts as the investigation progresses.

Finally, this approach is unstructured and often involves gathering qualitative data, giving the researcher freedom to progress the research according to their thoughts and assessment. However, this may make results susceptible to researcher bias and may limit the extent to which a topic is explored.

2. Descriptive research

Descriptive research is all about describing the characteristics of the population, phenomenon or scenario studied. It focuses more on the “what” of the research subject than the “why”.

For example, a clothing brand wants to understand the fashion purchasing trends amongst buyers in California — so they conduct a demographic survey of the region, gather population data and then run descriptive research. The study will help them to uncover purchasing patterns amongst fashion buyers in California, but not necessarily why those patterns exist.

As the research happens in a natural setting, variables can cross-contaminate other variables, making it harder to isolate cause and effect relationships. Therefore, further research will be required if more causal information is needed.


How is causal research different from the other two methods above?

Well, causal research looks at what variables are involved in a problem and ‘why’ they act a certain way. As the experiment takes place in a controlled setting (thanks to controlled variables) it’s easier to identify cause-and-effect amongst variables.

Furthermore, researchers can carry out causal research at any stage in the process, though it’s usually carried out in the later stages once more is known about a particular topic or situation.

Finally, compared to the other two methods, causal research is more structured, and researchers can combine it with exploratory and descriptive research to assist with research goals.

Summary of three research types

  • Exploratory research: unstructured and flexible; used when the problem is unclear; mostly qualitative data
  • Descriptive research: describes the “what” of a population or phenomenon in a natural setting
  • Causal research: structured and controlled; establishes the “why” through cause-and-effect relationships

What are the advantages of causal research?

  • Improve experiences

By understanding which variables have positive impacts on target variables (like sales revenue or customer loyalty), businesses can improve their processes, return on investment, and the experiences they offer customers and employees.

  • Help companies improve internally

By conducting causal research, management can make informed decisions about improving their employee experience and internal operations. For example, understanding which variables led to an increase in staff turnover.

  • Repeat experiments to enhance reliability and accuracy of results

When variables are identified, researchers can replicate cause-and-effect with ease, providing them with reliable data and results to draw insights from.

  • Test out new theories or ideas

If causal research is able to pinpoint the exact outcome of mixing together different variables, research teams have the ability to test out ideas in the same way to create viable proof of concepts.

  • Fix issues quickly

Once an undesirable effect’s cause is identified, researchers and management can take action to reduce the impact of it or remove it entirely, resulting in better outcomes.

What are the disadvantages of causal research?

  • Provides information to competitors

If you plan to publish your research, it provides information about your plans to your competitors. For example, they might use your research outcomes to identify what you are up to and enter the market before you.

  • Difficult to administer

Causal research is often difficult to administer because it’s not possible to control the effects of extraneous variables.

  • Time and money constraints

Budgetary and time constraints can make this type of research expensive to conduct and repeat. Also, if an initial attempt doesn’t provide a cause and effect relationship, the ROI is wasted and could impact the appetite for future repeat experiments.

  • Requires additional research to ensure validity

You can’t rely on the outcomes of causal research alone, as they can be inaccurate. It’s best to conduct other types of research alongside it to confirm its output.

  • Trouble establishing cause and effect

Researchers might identify that two variables are connected, but struggle to determine which is the cause and which variable is the effect.

  • Risk of contamination

There’s always the risk that people outside your market or area of study could affect the results of your research. For example, if you’re conducting a retail store study, shoppers outside your ‘test parameters’ might shop at your store and skew the results.

How can you use causal research effectively?

To better highlight how you can use causal research across functions or markets, here are a few examples:

Market and advertising research

A company might want to know if their new advertising campaign or marketing campaign is having a positive impact. So, their research team can carry out a causal research project to see which variables cause a positive or negative effect on the campaign.

For example, a cold-weather apparel company in a winter ski-resort town may see an increase in sales generated after a targeted campaign to skiers. To see if one caused the other, the research team could set up a duplicate experiment to see if the same campaign would generate sales from non-skiers. If the results reduce or change, then it’s likely that the campaign had a direct effect on skiers to encourage them to purchase products.

Improving customer experiences and loyalty levels

Customers enjoy shopping with brands that align with their own values, and they’re more likely to buy and present the brand positively to other potential shoppers as a result. So, it’s in your best interest to deliver great experiences and retain your customers.

For example, the Harvard Business Review found that an increase in customer retention rates of 5% increased profits by 25% to 95%. But let’s say you want to increase your own: how can you identify which variables contribute to it? Using causal research, you can test hypotheses about which processes, strategies or changes influence customer retention. For example, is it the streamlined checkout? What about the personalized product suggestions? Or maybe it was a new solution that solved their problem? Causal research will help you find out.

Discover how to use analytics to improve customer retention.

Improving problematic employee turnover rates

If your company has a high attrition rate, causal research can help you narrow down the variables or reasons which have the greatest impact on people leaving. This allows you to prioritize your efforts on tackling the issues in the right order, for the best positive outcomes.

For example, through causal research, you might find that employee dissatisfaction due to a lack of communication and transparency from upper management leads to poor morale, which in turn influences employee retention.

To rectify the problem, you could implement a routine feedback loop or session that enables your people to talk to your company’s C-level executives so that they feel heard and understood.

How to conduct causal research

The first steps to getting started are:

1. Define the purpose of your research

What questions do you have? What do you expect to come out of your research? Think about which variables you need to test out the theory.

2. Pick a random sampling if participants are needed

Using a technology solution to support your sampling, like a database, can help you define who you want your target audience to be, and how random or representative they should be.
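If a quick sketch helps, a simple random sample can be drawn from a participant pool with Python's standard library. The pool and sample size here are made up for illustration.

```python
import random

# Hypothetical participant pool of 100 people; we draw a simple
# random sample of 5 (the seed is fixed only for reproducibility).
random.seed(42)
pool = [f"participant_{i}" for i in range(100)]
sample = random.sample(pool, k=5)  # sampling without replacement

print(sample)
```

In practice the pool would come from your panel database, and the sample size from a power calculation rather than a hard-coded 5.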

3. Set up the controlled experiment

Once you’ve defined which variables you’d like to measure to see if they interact, think about how best to set up the experiment. This could be in-person or in-house via interviews, or it could be done remotely using online surveys.

4. Carry out the experiment

Make sure to keep all irrelevant variables the same, and only change the causal variable (the one that causes the effect) to gather the correct data. Depending on your method, you could be collecting qualitative or quantitative data, so make sure you note your findings across each regularly.

5. Analyze your findings

Either manually or using technology, analyze your data to see if any trends, patterns or correlations emerge. By looking at the data, you’ll be able to see what changes you might need to do next time, or if there are questions that require further research.

6. Verify your findings

Your first attempt gives you the baseline figures to compare the new results to. You can then run another experiment to verify your findings.
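Comparing a verification run against the baseline can be done with a simple two-sample test. The sketch below hand-rolls Welch's t statistic on hypothetical measurements; a small absolute value suggests the two runs agree.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical results: baseline experiment vs verification run.
baseline = [3.1, 2.9, 3.4, 3.0, 3.2]
repeat   = [3.0, 3.3, 3.1, 2.8, 3.2]

t = welch_t(baseline, repeat)
print(f"t = {t:.2f}")
```

A full verification would also convert t to a p-value against the appropriate degrees of freedom; the statistic alone is shown here to keep the sketch self-contained.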

7. Do follow-up or supplemental research

You can supplement your original findings by carrying out research that goes deeper into causes or explores the topic in more detail. One of the best ways to do this is to use a survey. See ‘Use surveys to help your experiment’.

Identifying causal relationships between variables

To verify if a causal relationship exists, you have to satisfy the following criteria:

  • Nonspurious association

A clear correlation exists between one cause and the effect. In other words, no ‘third’ variable that relates to both the cause and the effect should exist.

  • Temporal sequence

The cause occurs before the effect. For example, increased ad spend on product marketing would contribute to higher product sales.

  • Concomitant variation

The variation between the two variables is systematic. For example, if a company doesn’t change its IT policies or technology stack, then any changes in employee productivity were not caused by IT policies or technology.
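The temporal-sequence criterion above lends itself to a trivial programmatic check. The event names and dates below are hypothetical.

```python
from datetime import date

# Hypothetical events: a candidate cause and its supposed effect.
events = [
    {"name": "ad_spend_increase", "when": date(2024, 1, 10)},
    {"name": "sales_lift",        "when": date(2024, 2, 3)},
]

cause, effect = events
# Temporal sequence: the cause must occur before the effect.
temporal_ok = cause["when"] < effect["when"]
print(f"cause precedes effect: {temporal_ok}")
```

The other two criteria (nonspurious association, concomitant variation) need statistical evidence rather than a date comparison, but this check is a cheap first filter.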

How can surveys help your causal research experiments?

There are some surveys that are perfect for assisting researchers with understanding cause and effect. These include:

  • Employee Satisfaction Survey – An introductory employee satisfaction survey that provides you with an overview of your current employee experience.
  • Manager Feedback Survey – An introductory manager feedback survey geared toward improving your skills as a leader with valuable feedback from your team.
  • Net Promoter Score (NPS) Survey – Measure customer loyalty and understand how your customers feel about your product or service using one of the world’s best-recognized metrics.
  • Employee Engagement Survey – An entry-level employee engagement survey that provides you with an overview of your current employee experience.
  • Customer Satisfaction Survey – Evaluate how satisfied your customers are with your company, including the products and services you provide and how they are treated when they buy from you.
  • Employee Exit Interview Survey – Understand why your employees are leaving and how they’ll speak about your company once they’re gone.
  • Product Research Survey – Evaluate your consumers’ reaction to a new product or product feature across every stage of the product development journey.
  • Brand Awareness Survey – Track the level of brand awareness in your target market, including current and potential future customers.
  • Online Purchase Feedback Survey – Find out how well your online shopping experience performs against customer needs and expectations.

That covers the fundamentals of causal research and should give you a foundation for ongoing studies to assess opportunities, problems, and risks across your market, product, customer, and employee segments.

If you want to transform your research, empower your teams and get insights on tap to get ahead of the competition, maybe it’s time to leverage Qualtrics CoreXM.

Qualtrics CoreXM provides a single platform for data collection and analysis across every part of your business — from customer feedback to product concept testing. What’s more, you can integrate it with your existing tools and services thanks to a flexible API.

Qualtrics CoreXM offers you as much or as little power and complexity as you need, so whether you’re running simple surveys or more advanced forms of research, it can deliver every time.


Causal Comparative Research: Methods And Examples


Ritu was in charge of marketing a new protein drink about to be launched. The client wanted a causal-comparative study highlighting the drink’s benefits. They demanded that comparative analysis be made the main campaign design strategy. After carefully analyzing the project requirements, Ritu decided to follow a causal-comparative research design. She realized that causal-comparative research emphasizing physical development in different groups of people would lay a good foundation to establish the product.

What Is Causal Comparative Research?


Causal-comparative research is a method used to identify the cause–effect relationship between a dependent and independent variable. This relationship is usually a suggested relationship because we can’t control an independent variable completely. Unlike correlation research, this doesn’t rely on relationships. In a causal-comparative research design, the researcher compares two groups to find out whether the independent variable affected the outcome or the dependent variable.

A causal-comparative method determines whether one variable has a direct influence on the other and why. It identifies the causes of certain occurrences (or non-occurrences). It makes a study descriptive rather than experimental by scrutinizing the relationships among variables in which the independent variable has already occurred. Sometimes variables can’t be manipulated, but a link between the dependent and independent variables can still be established, and the implications of possible causes used to draw conclusions.

In a causal-comparative design, researchers study cause and effect in retrospect and determine consequences or causes of differences already existing among or between groups of people.

Let’s look at some characteristics of causal-comparative research:

  • This method tries to identify cause and effect relationships.
  • Two or more groups are included as variables.
  • Individuals aren’t selected randomly.
  • Independent variables can’t be manipulated.
  • It helps save time and money.

The main purpose of a causal-comparative study is to explore effects, consequences and causes. There are two types of causal-comparative research design. They are:

Retrospective Causal Comparative Research

For this type of research, a researcher has to investigate a particular question after the effects have occurred. They attempt to determine whether or not a variable influences another variable.

Prospective Causal Comparative Research

The researcher initiates a study, beginning with the causes and determined to analyze the effects of a given condition. This is not as common as retrospective causal-comparative research.

Usually, it’s easier to compare a variable with the known than the unknown.

Researchers use causal-comparative research to achieve research goals by comparing two variables that represent two groups. This data can include differences in opportunities, privileges exclusive to certain groups or developments with respect to gender, race, nationality or ability.

For example, to find out the difference in wages between men and women, researchers have to make a comparative study of wages earned by both genders across various professions, hierarchies and locations. None of the variables can be influenced, and the cause-effect relationship has to be established with a persuasive, logical argument. Some common variables investigated in this type of research are:

  • Achievement and other ability variables
  • Family-related variables
  • Organismic variables such as age, sex and ethnicity
  • Variables related to schools
  • Personality variables

While raw test scores, assessments and other measures (such as grade point averages) are used as data sources in this research, standardized tests, structured interviews and surveys are popular research tools.
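The wage-gap example above amounts to computing group averages within each profession and comparing them. A minimal sketch with made-up numbers:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical wage records: (profession, group, wage) -- illustrative only.
records = [
    ("engineer", "men", 95), ("engineer", "women", 90),
    ("teacher",  "men", 55), ("teacher",  "women", 54),
    ("engineer", "men", 99), ("teacher",  "women", 52),
]

# Group wages by (profession, group) so comparisons stay within a profession.
wages = defaultdict(list)
for profession, group, wage in records:
    wages[(profession, group)].append(wage)

for profession in ("engineer", "teacher"):
    gap = mean(wages[(profession, "men")]) - mean(wages[(profession, "women")])
    print(f"{profession}: mean wage gap = {gap:.1f}")
```

Comparing within professions (and, in a real study, within hierarchies and locations too) is what keeps those variables from confounding the comparison.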

However, there are drawbacks to causal-comparative research too, such as the inability to manipulate or control an independent variable and the lack of randomization. Subject-selection bias always remains a possibility and poses a threat to the internal validity of a study. Researchers can control it with statistical matching or by creating identical subgroups. They also have to look out for loss of subjects, location influences, poor attitudes of subjects and testing threats to produce a valid research study.
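Statistical matching, mentioned above as a control for subject-selection bias, can be sketched as exact matching on covariate profiles: each treated subject is paired with an untreated subject sharing the same profile. The subjects and covariates below are hypothetical.

```python
# Exact matching on covariate profiles (hypothetical subjects/covariates):
# pair each treated subject with an untreated one that shares the profile.
treated = [
    {"id": 1, "age_band": "20-29", "sex": "F"},
    {"id": 2, "age_band": "30-39", "sex": "M"},
]
untreated = [
    {"id": 10, "age_band": "30-39", "sex": "M"},
    {"id": 11, "age_band": "20-29", "sex": "F"},
    {"id": 12, "age_band": "40-49", "sex": "F"},  # no treated counterpart
]

def profile(subject):
    """The covariates a match must agree on."""
    return (subject["age_band"], subject["sex"])

pool = {profile(s): s for s in untreated}
matches = {t["id"]: pool[profile(t)]["id"] for t in treated if profile(t) in pool}
print(matches)  # {1: 11, 2: 10}
```

Real studies usually need coarser bands or propensity scores, since exact matches become rare as the number of covariates grows.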

Harappa’s Thinking Critically program is for managers who want to learn how to think effectively before making critical decisions. Learn how leaders articulate the reasons behind and implications of their decisions. Become a growth-driven manager looking to select the right strategies to outperform targets. The program is packed with problem-solving and effective-thinking tools that are essential for skill development. What’s more, it offers live learning support and the opportunity to progress at your own pace. Ask for your free demo today!

Explore Harappa Diaries to learn more about topics such as Objectives Of Research Methodology , Types Of Thinking , What Is Visualisation and Effective Learning Methods to upgrade your knowledge and skills.


Causal Comparative Research: Insights and Implications

David Costello

Diving into the realm of research methodologies, one encounters a variety of approaches tailored for specific inquiries. Causal Comparative Research, at its core, refers to a research design aimed at identifying and analyzing causal relationships between variables, specifically when the researcher does not have control over active manipulation of variables. Instead of manipulating variables as in experimental research, this method examines existing differences between or among groups to derive potential causes.

Its significance in the academic and research arena is multifaceted. For scenarios where experimental designs are either not feasible or ethical, Causal Comparative Research provides an alternative pathway to glean insights. This approach bridges the gap between mere observational studies and those requiring strict control, offering researchers a valuable tool to unearth potential causal links in a myriad of contexts. By understanding these causal links, scholars, policymakers, and professionals can make more informed decisions and theories, further enriching our collective knowledge base.

Background and evolution

Causal Comparative Research, while not as old as some other research methodologies, has roots deeply embedded in the quest for understanding relationships without direct manipulation. The method blossomed in fields such as education, sociology, and psychology during times when researchers confronted questions of causality. Over time, as the academic community acknowledged the need to investigate causal relationships in naturally occurring group differences, this method gained traction.

What sets Causal Comparative Research apart from other methodologies is its unique stance on causality without direct interference. Experimental research, often hailed as the gold standard for identifying causal relationships, involves deliberate manipulation of independent variables to gauge their effect on dependent variables. This controlled setting allows for clearer cause-and-effect assertions. On the other hand, observational studies, which are purely descriptive, steer clear of making any causal inferences and focus primarily on recording and understanding patterns or phenomena as they naturally occur.

Yet, nestled between these two methodologies, Causal Comparative Research carves its niche. It aims to identify potential causes by examining existing differences between or among groups. While it doesn't offer the direct control of an experiment, it delves deeper than a mere observational approach by trying to understand the "why" behind observed differences. In doing so, it offers a unique blend of retrospective investigation with a pursuit for causality, providing researchers with a versatile tool in their investigative arsenal.

Key characteristics

Causal Comparative Research is distinguished by a unique set of features that demarcate its approach from other research methodologies. These characteristics not only define its operational dynamics but also guide its potential applications and insights. By understanding these foundational traits, researchers can effectively harness the method's strengths and navigate its nuances.

Non-manipulation of variables

One of the foundational attributes of Causal Comparative Research is the non-manipulation of variables. Rather than actively intervening or changing conditions, researchers in this paradigm focus on studying groups as they naturally present themselves. This means the intrinsic differences between groups, which have already emerged, become the central focus.

Such a non-interventionist approach allows for real-world applicability and reduces the artificiality sometimes present in controlled experiments. However, this comes at the cost of being less definitive about causal relationships since the conditions aren't being manipulated directly by the researcher.

By studying pre-existing conditions and group differences, researchers aim to unearth potential causative factors or trends that may have otherwise gone unnoticed in a more controlled setting.

Retrospective in nature

Causal Comparative Research is inherently retrospective. Instead of setting up conditions and predicting future outcomes, researchers using this method look backward, aiming to identify what might have caused the current differences between groups.

This backward-looking approach offers a distinct vantage point. It allows researchers to harness historical data, past events, and already established patterns to discern potential causal relationships. While this method cannot deliver causative conclusions as definitive as those of prospective studies, it yields crucial insights into historical causative factors.

Understanding the past is vital in many academic fields. This retrospective nature provides a pathway to delve into historical causality, offering insights that can guide future investigations and decisions.

Relies on existing differences between or among groups

The very essence of Causal Comparative Research is rooted in the examination of existing differences. Instead of creating distinct groups through manipulation, researchers study naturally occurring group differences.

These existing distinctions can arise from a multitude of factors, be it cultural, environmental, socio-economic, or even genetic. The goal is to discern whether these differences can hint at underlying causal relationships or if they are mere coincidences.

The reliance on pre-existing differences is both a strength and a limitation. It ensures genuine applicability to real-world scenarios but also introduces potential confounding variables that researchers must be cautious of while interpreting results.

Advantages of causal comparative research

Offering a unique blend of observational and experimental techniques, Causal Comparative Research is tailored for situations demanding flexibility without compromising on the search for causal insights. Here is why many researchers consider it a crucial tool in their investigative arsenal.

Useful when experimental research is not feasible

Causal Comparative Research emerges as a strong alternative in scenarios where experimental research is unfeasible. Experimental research, while robust, often requires conditions or manipulations that might not be viable or ethical, especially in fields like psychology, sociology, or education.

In such situations, relying on naturally occurring differences provides researchers a viable avenue to still investigate potential causal relationships without directly intervening or risking harm. Thus, it offers a middle ground between pure observation and controlled experimentation, allowing for causal inquiries in challenging contexts.

Provides valuable insights in a short amount of time

One of the standout attributes of Causal Comparative Research is its efficiency. Given that it focuses on pre-existing differences, there's no need to wait for conditions to develop or results to manifest over extended periods.

This means that researchers can glean valuable insights in a relatively shorter time frame compared to longitudinal or prospective experimental designs. For pressing questions or time-sensitive scenarios, this method offers timely data and conclusions. Its swiftness does not compromise depth, ensuring that the insights derived are both timely and profound.

Can offer preliminary evidence before experimental designs are implemented

Before diving into a full-fledged experimental design, researchers often seek preliminary evidence or hints to justify their hypotheses or the feasibility of the experiment. Causal Comparative Research serves this purpose aptly.

By examining existing differences and drawing potential causal links, it provides an initial layer of evidence. This preliminary data can guide the structuring of more elaborate, controlled experiments, ensuring they're grounded in prior findings. Thus, it acts as a stepping stone, paving the way for more rigorous research designs by providing an initial overview of potential causal links.

Limitations and challenges

Every research methodology, regardless of its strengths, comes with its set of limitations and challenges. Causal Comparative Research, while flexible and versatile, is no exception. Before embracing its advantages, it's imperative for researchers to be acutely aware of its potential pitfalls and the nuances that might influence their findings.

Cannot definitively establish cause-and-effect relationships

While Causal Comparative Research offers valuable insights into potential causal relationships, it does not provide definitive cause-and-effect conclusions. Without direct manipulation of variables, it's challenging to ascertain a clear causative link. This inherent limitation means that, at best, findings can suggest probable causes but cannot confirm them with the same certainty as experimental research.

Potential for confounding variables

Given the reliance on naturally occurring group differences, there's a heightened risk of confounding variables influencing the outcomes. These are external factors that might affect the dependent variable, clouding the clarity of potential causal links. Researchers must remain vigilant, identifying and accounting for these variables to ensure the study's findings remain as untainted as possible by external influences.

Difficulty in ensuring group equivalency

Ensuring that the groups under study are equivalent is paramount in Causal Comparative Research. Any intrinsic group differences, other than the ones being studied, can skew results and interpretations. This challenge underscores the importance of careful selection and meticulous analysis to minimize the impact of non-equivalent groups on the research findings.

Steps in conducting causal comparative research

The process of Causal Comparative Research demands a systematic progression through specific stages to ensure that the research is comprehensive, accurate, and valid. Below is a step-by-step breakdown of this research methodology:

  • Identification of the Research Problem: This initial stage involves recognizing and defining the specific research problem or research question. It forms the foundation upon which the entire research process will be built, making it crucial to be clear, concise, and relevant.
  • Selection of Groups: Once the problem is identified, researchers need to select the groups they wish to compare. These groups should have existing differences relevant to the research question. The accuracy and relevance of group selection directly influence the research's validity.
  • Measurement of the Dependent Variable(s): In this phase, researchers decide on the dependent variables they'll measure. These are the outcomes or effects potentially influenced by the groups' differences. Proper operationalization and measurement scales are essential to ensure that the data collected is accurate and meaningful.
  • Data Collection and Analysis: With everything set up, the actual data collection begins. This could involve surveys, observations, or any other relevant data collection method. Post collection, the data undergoes rigorous analysis to identify patterns, differences, or potential causal links.
  • Interpretation and Reporting of Results: Once the analysis is complete, researchers need to interpret the results in the context of the research problem. This interpretation forms the basis of the research's conclusions. Finally, findings are reported, often in the form of academic papers or reports, ensuring that the insights can be shared and critiqued by the broader academic community.

By meticulously following these steps, researchers can navigate the complexities of Causal Comparative Research, ensuring that their investigations are both methodologically sound and academically valuable.
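As a minimal sketch of the data-collection-and-analysis stage described above, assuming a numeric dependent variable measured in two pre-existing groups (all scores below are invented placeholders), the first comparison might look like this in Python:

```python
import statistics

# Invented scores on a dependent variable for two pre-existing,
# non-manipulated groups (e.g., two teaching formats).
group_a = [72, 85, 78, 90, 66, 81]
group_b = [70, 88, 75, 84, 79, 77]

mean_a = statistics.mean(group_a)
mean_b = statistics.mean(group_b)
difference = mean_a - mean_b

# The observed gap can suggest, but never prove, a causal link;
# confounding variables must still be ruled out.
print(f"Group A mean: {mean_a:.2f}")
print(f"Group B mean: {mean_b:.2f}")
print(f"Observed difference: {difference:.2f}")
```

In practice this descriptive comparison would be followed by an inferential test and checks for confounders, consistent with the interpretation caveats in step five.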

Key considerations for validity

When conducting Causal Comparative Research, validity remains at the forefront. Ensuring that the research accurately captures and represents the phenomena under study is pivotal for its credibility and utility. Delving into the intricacies of validity, two primary considerations emerge: internal and external validity.

Internal validity concerns

Internal validity pertains to the degree to which the research accurately establishes a cause-and-effect relationship between variables. However, several threats can compromise it, especially in a causal-comparative setup, for instance:

  • Maturation: Refers to changes occurring naturally over time within participants, which could be misconstrued as effects of the studied variable.
  • Testing: Concerns the effects of taking a test multiple times. Participants might improve not because of the variable of interest, but due to familiarity with the test.
  • Instrumentation: Issues arise when the tools or methods used to collect data change or are inconsistent, potentially skewing results.

Addressing these concerns and others is crucial to maintain the research's integrity and ensure that the findings genuinely reflect the causal relationships under scrutiny.

External validity considerations

While internal validity focuses on the research's accuracy within its confines, external validity revolves around the generalizability of the findings. It assesses whether the study's conclusions can be applied to broader contexts, populations, or settings.

One major concern here is the representativeness of the groups studied. If they are too niche or specific, generalizing findings becomes problematic. Additionally, the conditions under which the research is conducted can influence its applicability elsewhere. If the environment, time, or setting is too unique, the findings might not hold true in different scenarios.

Ensuring robust external validity means that the research doesn't just hold academic value, but can also inform real-world practices, policies, and decisions, making its implications far-reaching and impactful.

Illustrative examples of causal comparative research

Across varied disciplines, Causal Comparative Research has been employed to address pressing questions, providing insights into causal factors without the need for direct manipulation. Let's explore a few examples that encapsulate its breadth and significance.

Comparing traditional and online learning outcomes

With the rise of digital platforms, online learning has rapidly grown as a popular alternative to traditional classroom settings. However, discerning the effectiveness of both mediums in terms of student performance and engagement is essential for educators and institutions. Causal Comparative Research provides an apt approach to explore this, without altering the learning environments, but rather examining the existing outcomes.

  • Identification of the Research Problem: The primary concern here is understanding the potential causal factors behind the differing success rates or engagement levels of students in traditional classrooms versus online learning platforms.
  • Selection of Groups: Two primary groups would be selected for this study: students who have primarily undergone traditional classroom learning and those who have predominantly experienced online learning. It would be essential to ensure these groups are as comparable as possible in other aspects, such as age, educational level, and background.
  • Measurement of the Dependent Variable(s): The dependent variables might include academic performance (grades or test scores), engagement metrics (participation in class discussions or assignments turned in), and possibly even feedback or satisfaction surveys from students regarding their learning experience.
  • Data Collection and Analysis: Data would be gathered from institutional records, online learning platforms, and potentially direct surveys. Once collected, statistical analyses would be employed to compare the performance and engagement metrics between the two groups, adjusting for any potential confounding variables.
  • Interpretation and Reporting of Results: After analysis, researchers would interpret the data to understand any significant differences in outcomes between traditional and online learners. It's crucial to report findings with the acknowledgment that the research indicates correlation and not necessarily direct causation. Recommendations could be made for educators based on the insights gathered.

In conclusion, while both traditional and online learning environments offer unique benefits, utilizing Causal Comparative Research allows institutions and educators to glean vital insights into their relative effectiveness. Such understanding can guide curriculum development, teaching methodologies, and even future educational investments.

Analysis of lifestyle factors in disease prevalence

In contemporary health studies, lifestyle factors like diet, exercise, and stress have often been cited as potential determinants of disease prevalence. With diverse populations adhering to varied lifestyles, understanding the potential influence of these factors on disease rates becomes pivotal for healthcare professionals and policymakers. Causal Comparative Research offers a path to delve into these influences by analyzing existing health outcomes against different lifestyle patterns.

  • Identification of the Research Problem: The primary goal is to determine whether specific lifestyle factors (e.g., sedentary behavior, dietary habits, tobacco use) have a significant influence on the prevalence of certain diseases, such as heart disease, diabetes, or hypertension.
  • Selection of Groups: Groups can be categorized based on distinct lifestyle patterns. For example, groups might consist of individuals who are sedentary versus those who exercise regularly, or those who adhere to a vegetarian diet versus those who consume meat regularly.
  • Measurement of the Dependent Variable(s): The dependent variable would be the prevalence or incidence of specific diseases in each group. This can be measured using health records, self-reported incidents, or clinical diagnoses.
  • Data Collection and Analysis: Data can be sourced from health databases, patient surveys, or direct health check-ups. Statistical tools can then be applied to identify any significant disparities in disease rates between the varied lifestyle groups, accounting for potential confounders like age, genetics, or socio-economic status.
  • Interpretation and Reporting of Results: After the data analysis, findings would elucidate any notable correlations between lifestyle factors and disease prevalence. It's vital to emphasize that this research would indicate associations, not direct causation. Still, such insights could be invaluable for health promotion campaigns and policy formulation.

To conclude, by leveraging Causal Comparative Research in analyzing lifestyle factors and their potential influence on disease rates, healthcare stakeholders can be better equipped with knowledge that informs public health strategies and individual lifestyle recommendations.

Resilience levels in trauma survivors vs. non-trauma individuals

Resilience, the capacity to recover quickly from difficulties and maintain mental health, has piqued the interest of psychologists, especially when comparing trauma survivors to those who haven't experienced trauma. Understanding the underlying factors contributing to resilience can pave the way for better therapeutic approaches and interventions.

  • Identification of the Research Problem: Determining whether individuals who have experienced trauma have different resilience levels compared to those who haven't.
  • Selection of Groups: One group would consist of individuals who have experienced significant traumatic events (such as natural disasters, personal assaults, or wartime experiences), and the second group would comprise individuals with no history of significant trauma.
  • Measurement of the Dependent Variable(s): Resilience levels would be the primary dependent variable, measured using standardized resilience scales like the Connor-Davidson Resilience Scale (CD-RISC).
  • Data Collection and Analysis: Participants from both groups would complete the chosen resilience scale. Data would then be analyzed to determine if there are significant differences in resilience scores between the two groups. Covariates like age, gender, socioeconomic status, and mental health history might be controlled for to enhance the study's validity.
  • Interpretation and Reporting of Results: The findings would indicate whether trauma survivors, on average, have higher, lower, or comparable resilience levels to their non-trauma counterparts. This would provide valuable insights into the potential protective factors or coping strategies that trauma survivors might develop.

The outcomes of this study can significantly influence therapeutic strategies and post-trauma interventions, ensuring that individuals who've faced traumatic events receive tailored care that acknowledges their unique psychological landscape.

Impact of family structure on child development outcomes

Family structures have undergone significant evolution over the decades. With varying family setups, from nuclear families to single-parent households to extended family living arrangements, the question arises: How do these different structures impact child development? Delving into this query provides insights crucial for educators, therapists, and policymakers.

  • Identification of the Research Problem: Investigate the potential differences in child development outcomes based on varying family structures.
  • Selection of Groups: Children would be categorized based on their family structure: nuclear families, single-parent households, extended family households, and other non-traditional structures.
  • Measurement of the Dependent Variable(s): Child development outcomes, which could include academic performance, socio-emotional development, and behavioral patterns. These would be measured using standardized tests, behavioral assessments, and teacher or caregiver reports.
  • Data Collection and Analysis: Data would be collected from schools, families, and relevant institutions. Statistical methods would then be used to determine significant differences in developmental outcomes across the different family structures, controlling for factors like socio-economic status, parental education, and location.
  • Interpretation and Reporting of Results: Findings would detail whether and how family structures play a pivotal role in shaping child development. Results could reveal, for instance, if children from extended family structures exhibit better socio-emotional skills due to increased interactions with varied age groups within the family.

Understanding the nuances of how family structure affects child development can guide interventions, curricula designs, and policies to cater better to the diverse needs of children, ensuring every child receives the support they require to thrive.
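Comparisons like this one, spanning more than two groups, are often analyzed with a one-way ANOVA. As an illustrative sketch using only Python's standard library (the development scores below are invented, and a real analysis would also report a p-value and adjust for covariates), the F statistic can be computed directly:

```python
import statistics

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group variance
    divided by within-group variance."""
    all_values = [x for g in groups for x in g]
    grand_mean = statistics.mean(all_values)
    k = len(groups)          # number of groups
    n = len(all_values)      # total observations
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented development scores for children from four family structures.
scores = [
    [78, 82, 75, 80],   # nuclear
    [70, 74, 72, 69],   # single-parent
    [81, 85, 79, 83],   # extended
    [73, 77, 75, 71],   # other
]
f_stat = one_way_anova_f(scores)
print(f"F = {f_stat:.2f}")
```

A large F suggests that at least one group mean differs from the others, though, as stressed throughout, it cannot by itself establish which structural feature caused the difference.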

Impact of organizational structures on employee productivity

As businesses evolve, they experiment with different organizational structures, from traditional hierarchies to flat structures to matrix setups. How do these varying structures influence employee productivity and satisfaction? Exploring this can provide businesses valuable insights to optimize performance and employee morale.

  • Identification of the Research Problem: Determine the effect of different organizational structures on employee productivity.
  • Selection of Groups: Employees from diverse firms, categorized based on their company's organizational structure: hierarchical, flat, matrix, and hybrid structures.
  • Measurement of the Dependent Variable(s): Employee productivity could be gauged through metrics like task completion rate, project delivery timelines, and output quality. Additionally, employee satisfaction surveys might be incorporated as secondary data.
  • Data Collection and Analysis: Data would be collected from employee performance metrics and satisfaction surveys across different companies. Advanced statistical methods would be employed to analyze potential variations in productivity and satisfaction across organizational structures, accounting for potential confounders.
  • Interpretation and Reporting of Results: Findings might indicate, for instance, that flat structures promote higher employee autonomy and satisfaction but might face challenges in larger teams due to potential communication breakdowns.

By discerning the relationship between organizational structure and employee productivity, businesses can make informed decisions on organizational design, ensuring optimal output while fostering a conducive work environment.

Best practices

Ensuring the validity and reliability of your Causal Comparative Research findings is paramount. Implementing best practices not only adds rigor to the research but also increases the trustworthiness of the results. Below are some practices to uphold when conducting Causal Comparative Research.

Ensuring representative samples

One of the primary pillars of credible research is the selection of a representative sample. A sample that genuinely mirrors the larger population ensures that findings can be more confidently generalized. In Causal Comparative Research, the groups being compared should ideally capture the broader dynamics and diversity of the populations they represent.

To ensure a representative sample, researchers should be wary of biases during selection. This includes avoiding convenience sampling unless it's justified. Stratified random sampling or quota sampling can help in ensuring that different subgroups within the population are adequately represented.

Furthermore, the size of the sample plays a crucial role. While a larger sample can often yield more reliable results, it's imperative to ensure that it remains manageable and aligns with the study's logistical and financial constraints.
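As an illustration of the stratified approach mentioned above, here is a hypothetical Python sketch; the population, strata, and sampling fraction are invented, and a real study would define strata from substantively relevant variables:

```python
import random

def stratified_sample(population, strata_key, fraction, seed=0):
    """Draw the same fraction from each stratum so subgroups
    stay proportionally represented in the sample."""
    rng = random.Random(seed)  # seeded for reproducibility
    strata = {}
    for item in population:
        strata.setdefault(strata_key(item), []).append(item)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population: 90 students tagged by school type
# (30 rural, 60 urban).
population = [{"id": i, "school": "urban" if i % 3 else "rural"}
              for i in range(90)]
sample = stratified_sample(population, lambda s: s["school"], 0.2)
print(len(sample))  # prints 18: 6 rural + 12 urban, i.e., 20% of each stratum
```

Because the fraction is applied within each stratum, the rural-to-urban ratio in the sample mirrors the population, which a simple random draw would only achieve on average.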

Controlling for extraneous variables

Extraneous variables can introduce noise into the research, obscuring the clarity of potential causal relationships. It's essential to identify potential confounders and control for them, ensuring that they don't unduly influence the outcome.

In Causal Comparative Research, since there's no direct manipulation of variables, the risk of uncontrolled extraneous variables affecting the outcome is heightened. One way to control for these variables is through matching, where participants in different groups are matched based on certain criteria, ensuring that these criteria do not interfere with the results.

Another technique involves statistical control, where advanced analytical methods, such as covariance analysis, are employed to account for the variance caused by extraneous variables.
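The matching technique described above can be sketched as a simple greedy pairing; the participants, ages, and tolerance below are invented for illustration, and production studies typically use more sophisticated schemes such as propensity-score matching:

```python
def match_pairs(group_a, group_b, key, tolerance):
    """Greedily pair each member of group_a with the closest
    unused member of group_b whose key lies within tolerance."""
    unused = list(group_b)
    pairs = []
    for a in group_a:
        candidates = [b for b in unused if abs(key(a) - key(b)) <= tolerance]
        if candidates:
            best = min(candidates, key=lambda b: abs(key(a) - key(b)))
            unused.remove(best)
            pairs.append((a, best))
    return pairs

# Hypothetical participants matched on age, tolerance of 2 years.
trauma = [{"id": "t1", "age": 25}, {"id": "t2", "age": 40}, {"id": "t3", "age": 60}]
control = [{"id": "c1", "age": 26}, {"id": "c2", "age": 41}, {"id": "c3", "age": 70}]
pairs = match_pairs(trauma, control, lambda p: p["age"], 2)
print(len(pairs))  # prints 2: t3 (age 60) has no control within 2 years
```

Unmatched participants are dropped from the comparison, which is the price matching pays for removing age as a rival explanation of group differences.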

Choosing appropriate statistical tools and techniques

The analysis phase is the heart of the research, where data comes alive and starts narrating a story. Selecting the appropriate statistical tools and techniques is pivotal in ensuring that this story is accurate and meaningful.

In Causal Comparative Research, the choice of statistical analysis largely depends on the nature of the data and the research question. For instance, if you're comparing the means of two groups, a t-test might be appropriate. However, for more than two groups, an ANOVA could be the preferred choice.

Advanced statistical models, such as regression analysis or structural equation modeling, might be employed for more complex research questions. Regardless of the chosen method, it's crucial to ensure that the assumptions of the tests are met and the data is adequately prepared for analysis.
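For the two-group case, the t statistic mentioned above can be sketched with the standard library alone. This uses Welch's formulation, which does not assume equal variances; the scores are invented, and a complete analysis would also derive degrees of freedom and a p-value:

```python
import math
import statistics

def welch_t(sample1, sample2):
    """Welch's t statistic for two independent samples
    (does not assume equal group variances)."""
    m1, m2 = statistics.mean(sample1), statistics.mean(sample2)
    v1, v2 = statistics.variance(sample1), statistics.variance(sample2)
    standard_error = math.sqrt(v1 / len(sample1) + v2 / len(sample2))
    return (m1 - m2) / standard_error

# Invented outcome scores for two pre-existing groups.
group_a = [82, 79, 88, 91, 75, 84]
group_b = [74, 70, 80, 77, 69, 73]
t_stat = welch_t(group_a, group_b)
print(f"t = {t_stat:.2f}")
```

With more than two groups, the same data-preparation steps would instead feed an ANOVA, as noted above.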

In the landscape of research methodologies, Causal Comparative Research stands out as a compelling blend of observational and quasi-experimental approaches. While it offers the advantage of examining naturally occurring differences without the need for direct manipulation, it comes with its own set of challenges and considerations. As with all research methods, its efficacy lies in the meticulous application of its principles, and the conscious effort to uphold best practices. When executed with rigor, this method provides invaluable insights, bridging the gap between observation and direct experimentation, and helping researchers navigate the complex webs of causality in varied fields.

Header image by Tom Wang.


Can J Hosp Pharm. v.67(1); Jan-Feb 2014

Research: Articulating Questions, Generating Hypotheses, and Choosing Study Designs

INTRODUCTION

Articulating a clear and concise research question is fundamental to conducting a robust and useful research study. Although “getting stuck into” the data collection is the exciting part of research, this preparation stage is crucial. Clear and concise research questions are needed for a number of reasons. Initially, they are needed to enable you to search the literature effectively. They will allow you to write clear aims and generate hypotheses. They will also ensure that you can select the most appropriate research design for your study.

This paper begins by describing the process of articulating clear and concise research questions, assuming that you have minimal experience. It then describes how to choose research questions that should be answered and how to generate study aims and hypotheses from your questions. Finally, it describes briefly how your question will help you to decide on the research design and methods best suited to answering it.

TURNING CURIOSITY INTO QUESTIONS

A research question has been described as “the uncertainty that the investigator wants to resolve by performing her study” 1 or “a logical statement that progresses from what is known or believed to be true to that which is unknown and requires validation”. 2 Developing your question usually starts with having some general ideas about the areas within which you want to do your research. These might flow from your clinical work, for example. You might be interested in finding ways to improve the pharmaceutical care of patients on your wards. Alternatively, you might be interested in identifying the best antihypertensive agent for a particular subgroup of patients. Lipowski 2 described in detail how work as a practising pharmacist can be used to great advantage to generate interesting research questions and hence useful research studies. Ideas could come from questioning received wisdom within your clinical area or the rationale behind quick fixes or workarounds, or from wanting to improve the quality, safety, or efficiency of working practice.

Alternatively, your ideas could come from searching the literature to answer a query from a colleague. Perhaps you could not find a published answer to the question you were asked, and so you want to conduct some research yourself. However, just searching the literature to generate questions is not to be recommended for novices—the volume of material can feel totally overwhelming.

Use a research notebook, where you regularly write ideas for research questions as you think of them during your clinical practice or after reading other research papers. It has been said that the best way to have a great idea is to have lots of ideas and then choose the best. The same would apply to research questions!

When you first identify your area of research interest, it is likely to be either too narrow or too broad. Narrow questions (such as “How is drug X prescribed for patients with condition Y in my hospital?”) are usually of limited interest to anyone other than the researcher. Broad questions (such as “How can pharmacists provide better patient care?”) must be broken down into smaller, more manageable questions. If you are interested in how pharmacists can provide better care, for example, you might start to narrow that topic down to how pharmacists can provide better care for one condition (such as affective disorders) for a particular subgroup of patients (such as teenagers). Then you could focus it even further by considering a specific disorder (depression) and a particular type of service that pharmacists could provide (improving patient adherence). At this stage, you could write your research question as, for example, “What role, if any, can pharmacists play in improving adherence to fluoxetine used for depression in teenagers?”

TYPES OF RESEARCH QUESTIONS

Being able to consider the type of research question that you have generated is particularly useful when deciding what research methods to use. There are 3 broad categories of question: descriptive, relational, and causal.

Descriptive

One of the most basic types of question is designed to ask systematically whether a phenomenon exists. For example, we could ask “Do pharmacists ‘care’ when they deliver pharmaceutical care?” This research would initially define the key terms (i.e., describing what “pharmaceutical care” and “care” are), and then the study would set out to look for the existence of care at the same time as pharmaceutical care was being delivered.

When you know that a phenomenon exists, you can then ask description and/or classification questions. The answers to these types of questions involve describing the characteristics of the phenomenon or creating typologies of variable subtypes. In the study above, for example, you could investigate the characteristics of the “care” that pharmacists provide. Classifications usually use mutually exclusive categories, so that various subtypes of the variable will have an unambiguous category to which they can be assigned. For example, a question could be asked as to “what is a pharmacist intervention” and a definition and classification system developed for use in further research.

When seeking further detail about your phenomenon, you might ask questions about its composition. These questions necessitate deconstructing a phenomenon (such as a behaviour) into its component parts. Within hospital pharmacy practice, you might be interested in asking questions about the composition of a new behavioural intervention to improve patient adherence, for example, “What is the detailed process that the pharmacist implicitly follows during delivery of this new intervention?”

Relational

After you have described your phenomena, you may then be interested in asking questions about the relationships between several phenomena. If you work on a renal ward, for example, you may be interested in looking at the relationship between hemoglobin levels and renal function, so your question would look something like this: “Are hemoglobin levels related to level of renal function?” Alternatively, you may have a categorical variable such as grade of doctor and be interested in the differences between them with regard to prescribing errors, so your research question would be “Do junior doctors make more prescribing errors than senior doctors?” Relational questions could also be asked within qualitative research, where a detailed understanding of the nature of the relationship between, for example, the gender and career aspirations of clinical pharmacists could be sought.

Causal

Once you have described your phenomena and have identified a relationship between them, you could ask about the causes of that relationship. You may be interested to know whether an intervention or some other activity has caused a change in your variable, and your research question would be about causality. For example, you may be interested in asking, “Does captopril treatment reduce blood pressure?” Generally, however, if you ask a causality question about a medication or any other health care intervention, it ought to be rephrased as a causality–comparative question. Without comparing what happens in the presence of an intervention with what happens in the absence of the intervention, it is impossible to attribute causality to the intervention. Although a causality question would usually be answered using a comparative research design, asking a causality–comparative question makes the research design much more explicit. So the above question could be rephrased as, “Is captopril better than placebo at reducing blood pressure?”

The acronym PICO has been used to describe the components of well-crafted causality–comparative research questions. 3 The letters in this acronym stand for Population, Intervention, Comparison, and Outcome. They remind the researcher that the research question should specify the type of participant to be recruited, the type of exposure involved, the type of control group with which participants are to be compared, and the type of outcome to be measured. Using the PICO approach, the above research question could be written as “Does captopril [intervention] decrease rates of cardiovascular events [outcome] in patients with essential hypertension [population] compared with patients receiving no treatment [comparison]?”
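The PICO slots behave like fields in a fill-in-the-blank template. As a toy sketch (the function and variable names below are hypothetical, not part of the PICO framework itself), the assembly can be expressed in Python:

```python
# Hypothetical sketch: assemble a causality-comparative research question
# from PICO components. All names here are illustrative only.
def pico_question(population, intervention, comparison, outcome):
    return (f"Does {intervention} affect {outcome} in {population} "
            f"compared with {comparison}?")

q = pico_question(
    population="patients with essential hypertension",
    intervention="captopril",
    comparison="patients receiving no treatment",
    outcome="rates of cardiovascular events",
)
print(q)
```

Spelling out each slot this way makes it easy to spot a missing component, such as a question with no comparison group.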

DECIDING WHETHER TO ANSWER A RESEARCH QUESTION

Just because a question can be asked does not mean that it needs to be answered. Not all research questions deserve to have time spent on them. One useful set of criteria is to ask whether your research question is feasible, interesting, novel, ethical, and relevant. 1 The need for research to be ethical will be covered in a later paper in the series, so is not discussed here. The literature review is crucial to finding out whether the research question fulfils the remaining 4 criteria.

Conducting a comprehensive literature review will allow you to find out what is already known about the subject and any gaps that need further exploration. You may find that your research question has already been answered. However, that does not mean that you should abandon the question altogether. It may be necessary to confirm those findings using an alternative method or to translate them to another setting. If your research question has no novelty, however, and is not interesting or relevant to your peers or potential funders, you are probably better off finding an alternative.

The literature will also help you learn about the research designs and methods that have been used previously and hence to decide whether your potential study is feasible. As a novice researcher, it is particularly important to ask if your planned study is feasible for you to conduct. Do you or your collaborators have the necessary technical expertise? Do you have the other resources that will be needed? If you are just starting out with research, it is likely that you will have a limited budget, in terms of both time and money. Therefore, even if the question is novel, interesting, and relevant, it may not be one that is feasible for you to answer.

GENERATING AIMS AND HYPOTHESES

All research studies should have at least one research question, and they should also have at least one aim. As a rule of thumb, a small research study should not have more than 2 aims as an absolute maximum. The aim of the study is a broad statement of intention and aspiration; it is the overall goal that you intend to achieve. The wording of this broad statement of intent is derived from the research question. If it is a descriptive research question, the aim will be, for example, “to investigate” or “to explore”. If it is a relational research question, then the aim should state the phenomena being correlated, such as “to ascertain the impact of gender on career aspirations”. If it is a causal research question, then the aim should include the direction of the relationship being tested, such as “to investigate whether captopril decreases rates of cardiovascular events in patients with essential hypertension, relative to patients receiving no treatment”.

The hypothesis is a tentative prediction of the nature and direction of relationships between sets of data, phrased as a declarative statement. Therefore, hypotheses are really only required for studies that address relational or causal research questions. For the study above, the hypothesis being tested would be “Captopril decreases rates of cardiovascular events in patients with essential hypertension, relative to patients receiving no treatment”. Studies that seek to answer descriptive research questions do not test hypotheses, but they can be used for hypothesis generation. Those hypotheses would then be tested in subsequent studies.

CHOOSING THE STUDY DESIGN

The research question is paramount in deciding what research design and methods you are going to use. There are no inherently bad research designs. The rightness or wrongness of the decision about the research design is based simply on whether it is suitable for answering the research question that you have posed.

It is possible to select completely the wrong research design to answer a specific question. For example, you may want to answer one of the research questions outlined above: “Do pharmacists ‘care’ when they deliver pharmaceutical care?” Although a randomized controlled study is considered by many as a “gold standard” research design, such a study would just not be capable of generating data to answer the question posed. Similarly, if your question was, “Is captopril better than placebo at reducing blood pressure?”, conducting a series of in-depth qualitative interviews would be equally incapable of generating the necessary data. However, if these designs are swapped around, we have 2 combinations (pharmaceutical care investigated using interviews; captopril investigated using a randomized controlled study) that are more likely to produce robust answers to the questions.

The language of the research question can be helpful in deciding what research design and methods to use. Subsequent papers in this series will cover these topics in detail. For example, if the question starts with “how many” or “how often”, it is probably a descriptive question to assess the prevalence or incidence of a phenomenon. An epidemiological research design would be appropriate, perhaps using a postal survey or structured interviews to collect the data. If the question starts with “why” or “how”, then it is a descriptive question to gain an in-depth understanding of a phenomenon. A qualitative research design, using in-depth interviews or focus groups, would collect the data needed. Finally, the term “what is the impact of” suggests a causal question, which would require comparison of data collected with and without the intervention (i.e., a before–after or randomized controlled study).
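The stem-to-design heuristic described above amounts to a small lookup from the question's opening words to a suggested design. A minimal Python sketch, with the function name and labels purely illustrative:

```python
# Hypothetical sketch of the heuristic described above: map the opening
# words of a research question to a suggested study design. The mapping
# mirrors the text; it is a rule of thumb, not a complete classifier.
def suggest_design(question):
    q = question.lower()
    # Check the longer stems first, since "how often" also starts with "how".
    if q.startswith(("how many", "how often")):
        return "descriptive (epidemiological survey)"
    if q.startswith(("why", "how")):
        return "descriptive (qualitative interviews/focus groups)"
    if q.startswith("what is the impact of"):
        return "causal (before-after or randomized controlled study)"
    return "unclassified"

print(suggest_design("How often are prescribing errors made?"))
```

Note the ordering: the prevalence stems must be tested before the bare "how" stem, or every "how often" question would be misrouted to a qualitative design.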

CONCLUSIONS

This paper has briefly outlined how to articulate research questions, formulate your aims, and choose your research methods. It is crucial to realize that articulating a good research question involves considerable iteration through the stages described above. It is very common for the first research question generated to bear little resemblance to the final question used in the study. The wording may change several times, for example because the first question turns out not to be feasible, or because a descriptive question was posed when a causality question was really wanted. The books listed in the “Further Reading” section provide greater detail on the material described here, as well as a wealth of other information to ensure that your first foray into conducting research is successful.

This article is the second in the CJHP Research Primer Series, an initiative of the CJHP Editorial Board and the CSHP Research Committee. The planned 2-year series is intended to appeal to relatively inexperienced researchers, with the goal of building research capacity among practising pharmacists. The articles, presenting simple but rigorous guidance to encourage and support novice researchers, are being solicited from authors with appropriate expertise.

Previous article in this series:

Bond CM. The research jigsaw: how to get started. Can J Hosp Pharm. 2014;67(1):28–30.

Competing interests: Mary Tully has received personal fees from the UK Renal Pharmacy Group to present a conference workshop on writing research questions and nonfinancial support (in the form of travel and accommodation) from the Dubai International Pharmaceuticals and Technologies Conference and Exhibition (DUPHAT) to present a workshop on conducting pharmacy practice research.

Further Reading

  • Creswell JW. Research design: qualitative, quantitative and mixed methods approaches. London (UK): Sage; 2009.
  • Haynes RB, Sackett DL, Guyatt GH, Tugwell P. Clinical epidemiology: how to do clinical practice research. 3rd ed. Philadelphia (PA): Lippincott, Williams & Wilkins; 2006.
  • Kumar R. Research methodology: a step-by-step guide for beginners. 3rd ed. London (UK): Sage; 2010.
  • Smith FJ. Conducting your pharmacy practice research project. London (UK): Pharmaceutical Press; 2005.

What is causal-comparative research: Definition, types & methods

Defne Çobanoğlu

Like most people, you probably learned about World War I and its short-term and long-term causes in school. But the nations involved issued no declaration stating their reasons. Researchers analyzed the events and drew cause-and-effect relationships between variables, and thanks to their efforts, the main causes and starting points are now clearly defined.

The research method used when investigating the causes of WWI is causal-comparative research. In this article, we explain causal-comparative research, its types and methods, its advantages and disadvantages, and some examples. Let us get started with the definition!

  • What is causal-comparative research?

Causal-comparative research is a type of research method where the researcher tries to find out if there is a cause-and-effect relationship between independent and dependent variables. In other words, the researcher using this method wants to know whether a change in one thing affects another, and if so, why.

The researcher can look at past events and try to draw conclusions and cause-and-effect relationships. Of course, sometimes that is not possible. In that case, they can collect information about a group of participants and observe the changes over the long run. Let us get into the details of the types of causal-comparative research:

  • Types of causal-comparative research

Even though the main objective of a causal-comparative study is to draw cause-and-effect relationships, how the researcher does so may vary, because practical limitations can constrain the study. Causal-comparative research designs are mainly divided into two groups:

1 - Retrospective comparative research

A retrospective comparative study examines and compares existing data to learn more about the relationships, patterns, or outcomes of past events and historical periods. In this approach, researchers collect data on past events and try to identify results and patterns. This method is mainly used when a prospective comparative study is impossible for practical, ethical, or logistical reasons.

2 - Prospective comparative research

A prospective comparative study collects information from a group of participants over a long period. The researchers make predictions about the future, then follow the participants and observe the changes, outcomes, or developments. The main goal of this study is to see how the initial conditions change and affect each other.

  • Causal-comparative research examples

The nature of the causal-comparative study design makes it possible to study and form hypotheses about all kinds of past events and occurrences. When there are multiple variables, researchers can try to make sense of how different variables affect the outcome of situations. Now let us see some examples of the causal-comparative study method:

Causal-comparative research example #1

For example, let’s imagine that a researcher wants to figure out whether classroom sizes affect students' exam results. In this case, the classroom size is the independent variable, and the effect on academic performance is the dependent variable. The researcher can compare the exam results of students from classes of varying sizes to see if there is a correlation between the two.

Causal-comparative research example #2

There may or may not be a difference in leadership styles between men and women, and it is possible to find out by looking at various examples. To gather data on the subject, the researcher can collect information on leadership methods from both female and male leaders, and then compare the two groups.

  • Advantages and disadvantages of causal-comparative research

A causal-comparative study design may be the perfect method for one researcher but not as suitable for another. Which research method to use depends on the aim of the study and the researcher’s preferences. To make an informed decision, the researcher should be aware of the advantages and disadvantages of the causal-comparative design.

Advantages of causal-comparative research

  • This type of study helps identify the causes of occurrences.
  • It is useful when experimentation is not possible.
  • Because this research type relies on existing data or natural occurrences, no experimentation is needed, making it cost-effective.
  • The findings of a causal-comparative research study are good for generating hypotheses.
  • It is an effective method for making sense of past events in order to be prepared for the future.

Disadvantages of causal-comparative research

  • Randomization is not possible in this type of study.
  • There is a lack of control over independent variables.
  • As with other research methodologies, this type of research is prone to researcher bias, and subject-selection bias may be unavoidable.
  • When preexisting characteristics and events are studied, ethical issues may arise, especially if the data is sensitive.
  • Frequently asked questions about causal-comparative research

Is causal-comparative research quantitative or qualitative?

Causal-comparative research is mostly quantitative, as it yields factual, numerical data. After all, the primary goal of causal-comparative research is to find out whether there is a statistically noticeable difference or relationship between conditions based on naturally occurring independent variables. But this method can also provide qualitative data, as it answers “why” questions.

Causal-comparative research vs. correlational research

The main difference between causal-comparative research and correlational research is that causal-comparative research studies two or more groups and one independent variable, while correlational research observes and studies two or more variables within one group.

Causal-comparative research vs. experimental research

The difference between experimental and causal-comparative study designs is significant. In an experimental study, the participants are randomly assigned. In a causal-comparative study, however, the participants are already in different groups because the events have already happened. In other instances, natural events are studied without human intervention.

Causal-comparative research vs. quasi-experimental research

Both causal-comparative studies and quasi-experimental studies are used to explore and identify cause-and-effect relationships, and both are non-experimental methods. Causal-comparative research aims to find causal connections between groups based on naturally occurring independent variables, whereas quasi-experimental research has a more experimental element, such as partial control over subjects and the use of comparison groups.

What is the sample size for causal-comparative research?

The best sample size for causal-comparative research depends on a number of factors, such as the purpose of the research, the research design, and practical limitations. There is no universally right or wrong sample size for data collection in a causal-comparative study; it can change according to the nature of the study.

What are the limitations of causal-comparative research?

As effective as it is for establishing cause-and-effect relationships between variables, causal-comparative research also has its limits. For example, randomization cannot be done, and there is a lack of control over independent variables.

  • Final words

There are times when experimentation can be used, and there are times when it is not possible for ethical or practical reasons. In that case, analyzing events and groups of people is a good way to define the cause-and-effect relationship between two different variables. The researcher can look into the details of past events to draw conclusions, or they can find a defined group to observe and study them long-term.

In this article, we have gathered information on causal-comparative research to give a good idea of the research method. For further information on different research types and for all your research needs, don’t forget to visit our other articles!

Defne is a content writer at forms.app. She is also a translator specializing in literary translation. Defne loves reading, writing, and translating professionally and as a hobby. Her expertise lies in survey research, research methodologies, content writing, and translation.


Alignment of Problem, Purpose, and Questions


In a project designed to address a local problem as is the case in the applied doctoral experience, alignment of problem, purpose, and questions is key to having a successful project outcome. To help check alignment, some students find the following activity to be helpful.

  • Activity - Aligning Problem, Purpose, and Questions Download this activity to check the alignment of your problem, purpose, and questions.

Instructions for completing the activity:

  • Copy each segment of your specific problem statement into a cell in the first column.
  • Then copy the corresponding segment of your purpose statement into the second column.
  • Finally, copy the related questions into the third column.
  • Read across to note any discrepancies.

Activity example:

For information: Please visit the NU ASC website and view the resources on constructing a problem statement. 

The problem of your study can be informed by gaps in the literature; HOWEVER, a gap in the literature is not itself the problem. A problem is a clear, distinct issue that can be empirically verified and has a consequence. NOTE: A problem statement does not suggest any action to be taken, nor does it ask a question. 

Example: “My car has a flat tire, so I cannot go to work and my livelihood is affected.” (This is a statement of fact and can be verified.)

As soon as an action is noted, it becomes a purpose statement – “I need to investigate why I have a flat tire.”

If you ask a question, it is no longer a problem statement either – “How does my flat tire affect my livelihood?”   

Your general and specific problem statements should have at least two to three current (within three years) citations.

An example problem statement format is provided below. Please use the information and templates below to construct each component based on the quantitative research design selected earlier.

Constructing the General problem and Specific Problem Statements using the Funnel Approach

The premise is that the “funnel” approach to constructing the problem statement funnels from a general problem to a specific one. 

The general problem statement. Using the funnel approach to write a problem statement, the first component developed is the general problem. The general problem represents a situation that exists that can be directly attributed to a specific problem that is the focus of the doctoral project or dissertation-in-practice. 

Exercise #3.

Based on the type of problem addressed by the doctoral project or dissertation-in-practice, write the general problem statement below.

"The general problem is (describe the situation linked to the negative outcome) (two-three citations)."

The Specific Problem Statement

Once again, using the funnel approach to write a problem statement (see the Problem Statement webinar on the NU ASC website at http://www.viddler.com/v/a70ecc81), the second component developed is the “specific problem.” The specific problem represents an undesirable or negative outcome that can be researched and is directly attributable to the general problem.

Exercise #4.

With the type of problem in mind, write the specific problem addressed by the proposed project below.

"The specific problem to be studied is when the (study population/site/program) experience/results in/causes (the general problem), (state the negative outcome) (two to three citations)."

Following the Problem Statement is the Purpose Statement. The purpose should directly align with the problem.

The Purpose Statement.

The purpose statement describes the aim of the doctoral project or dissertation-in-practice and includes the project design, method, and variables. 

Based on the design the purpose statement can be constructed slightly differently.

Correlational Design Purpose Statement

The purpose of this quantitative correlational Doctoral Project or Applied Dissertation is to examine if there is a relationship between (variable 1) and (variable 2). 

Causal Comparative Design Purpose Statement

The purpose of this quantitative causal-comparative doctoral project or dissertation-in-practice is to examine the difference in (dependent variable) between (group 1) and (group 2).

NOTE: The groups represent the independent variable. For example, you could be investigating the difference between high school and college students, so the independent variable is education level.

Exercise #5.

Based on the design write the purpose statement for the proposed doctoral project or dissertation-in-practice below.

"The purpose of this quantitative (design) doctoral project or dissertation-in-practice is to examine (connection) of (variables)."

Doctoral Project or Dissertation-in-Practice Research Questions

The type and number of research questions are dependent upon the design and purpose of the doctoral project or dissertation-in-practice.

Visit the following site to identify the appropriate structure for the proposed project: http://dissertation.laerd.com/how-to-structure-quantitative-research-questions.php

  • Causal Comparative Research Questions

RQ1. What is the difference in (dependent variable) between (group 1), (group 2), … (group n)? OR RQ1. How are/is (group 1) different from (group 2) in terms of (dependent variable) for (participants) at (research location)?

  • Correlational Research Questions

RQ1. What is the relationship of (variable 1) to (variable 2) for (participants) at (research location)?

Exercise #6.

Write the appropriate number of research question(s) based on the project design and purpose of the proposed Doctoral Project or Applied Dissertation.

Research Question: RQ1. (see examples above to complete)

Hypotheses

For each research question, there should be a null and an alternative hypothesis.

Causal Comparative Hypotheses

H10. There is no difference in (dependent variable) between (group 1) and (group 2).

H1A. There is a statistically significant difference in (dependent variable) between (group 1) and (group 2).

Correlational Hypotheses

H10. There is no relationship between (variable 1) and (variable 2).

H1A. There is a relationship between (variable 1) and (variable 2).
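The hypothesis templates can be filled in mechanically once the dependent variable and groups are chosen. A minimal Python sketch (the function name and example values are hypothetical, not from the guide):

```python
# Hypothetical sketch: fill the causal-comparative null/alternative
# hypothesis templates for a chosen dependent variable and two groups.
def causal_comparative_hypotheses(dependent_variable, group1, group2):
    h0 = (f"There is no difference in {dependent_variable} "
          f"between {group1} and {group2}.")
    ha = (f"There is a statistically significant difference in "
          f"{dependent_variable} between {group1} and {group2}.")
    return h0, ha

h0, ha = causal_comparative_hypotheses(
    "exam results", "small classes", "large classes")
print(h0)
print(ha)
```

Writing both statements from the same inputs guarantees that the null and alternative hypotheses name exactly the same variable and groups, which is the alignment the guide asks you to check.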

Please review the following site to properly construct hypotheses: https://statistics.laerd.com/statistical-guides/hypothesis-testing-3.php

Write the appropriate hypotheses for the proposed doctoral project or dissertation-in-practice below.

H10. (see examples above to complete)

H1A. (see examples above to complete)

For information: Please visit the NU ASC website and view the webinar about constructing a problem statement. 

In qualitative studies, the problem is the phenomenon under study.  

The General Problem Statement

Using the funnel approach, i.e., moving from a general to a specific problem, to write a problem statement (see the Problem Statement webinar on the NU ASC website at http://www.viddler.com/v/a70ecc81), the first component developed is the “phenomenon,” also known as the general problem. The phenomenon represents a situation that exists that can be directly attributed to a specific problem that is the focus of the proposed Doctoral Project or Applied Dissertation.

Use the script below by replacing the italicized text with the appropriate information to write a one-sentence statement representing the phenomenon, and include at least two to three current (within three years) citations to support the statement.

"The general problem is that (describe the phenomenon) (two to three current citations)."

Once again, using the funnel approach to write a problem statement (see the Problem Statement webinar on the NU ASC website at http://www.viddler.com/v/a70ecc81), the second component developed is the “specific problem.” The specific problem represents an undesirable or negative outcome that can be researched, and is directly attributable to the phenomenon of the proposed doctoral project or dissertation-in-practice.

Use the script below by replacing the italicized text with the appropriate information to write a one-sentence statement representing the specific problem, and include at least two to three current (within three years) citations to support the statement.

"The specific problem is when the (doctoral project or dissertation-in-practice participants) (experience the phenomenon), (negative/undesirable outcome) (two to three current citations)."

Often, it may be more effective to write one overarching problem statement that includes both the general and specific problems.

The Purpose Statement

The purpose statement describes the aim of the proposed doctoral project or dissertation-in-practice and includes the research methodology and design, phenomenon, and project participants.

Use the script below by replacing the italicized text with the appropriate information to write a one-sentence statement representing the purpose statement.

"The purpose of this qualitative (design) study is to explore (the phenomenon), as perceived by ( Doctoral Project or Dissertation in Practice participants)."

Research Questions

Often, one question is designed to explore the barriers or challenges related to the phenomenon, and the second question asks about how to improve the phenomenon. However, there can be more than two research questions. The questions can be constructed in several different ways; a few examples are shown in RQ1 and RQ2. The questions should always include the phenomenon and doctoral project or dissertation-in-practice participants and ask the “How,” “What,” or “Why,” as related to the phenomenon.

Use the script below by replacing the italicized text with the appropriate information to write two one-sentence research questions that together explore the phenomenon as it is perceived by the (Doctoral Project or Applied Dissertation participants).

"RQ1.  What are the challenges of the (phenomenon) from the perspectives of the (doctoral project or dissertation-in-practice participants)?" "RQ2.  How can the (phenomenon) be improved, as perceived by the ( Doctoral Project or Dissertation in Practice participants)?"

  • Last Updated: Apr 24, 2024 2:48 PM
  • URL: https://resources.nu.edu/c.php?g=1013602

National University

© Copyright 2024 National University. All Rights Reserved.

Structure of comparative research questions

There are five steps required to construct a comparative research question: (1) choose your starting phrase; (2) identify and name the dependent variable; (3) identify the groups you are interested in; (4) identify the appropriate adjoining text; and (5) write out the comparative research question. Each of these steps is discussed in turn:

Choose your starting phrase

Identify and name the dependent variable

Identify the groups you are interested in

Identify the appropriate adjoining text

Write out the comparative research question

FIRST Choose your starting phrase

Comparative research questions typically start with one of two phrases: "What is the difference in …?" or "What are the differences in …?"

These starting phrases appear at the beginning of each of the examples below:

What is the difference in the daily calorific intake of American men and women?

What is the difference in the weekly photo uploads on Facebook between British male and female university students?

What are the differences in perceptions towards Internet banking security between adolescents and pensioners?

What are the differences in attitudes towards music piracy when pirated music is freely distributed or purchased?

SECOND Identify and name the dependent variable

All comparative research questions have a dependent variable. You need to identify what this is. However, how the dependent variable is written out in a research question and what you call it are often two different things. In the examples below, we have illustrated both the name of the dependent variable and how it would be written out in the question itself.

The first three examples highlight that while the name of the dependent variable is the same, namely daily calorific intake, the way that this dependent variable is written out differs in each case.

THIRD Identify the groups you are interested in

All comparative research questions have at least two groups. You need to identify these groups. In the examples below, the groups appear at the end of each question:

What is the difference in the daily calorific intake of American men and women?

What is the difference in the weekly photo uploads on Facebook between British male and female university students?

What are the differences in perceptions towards Internet banking security between adolescents and pensioners?

What are the differences in attitudes towards music piracy when pirated music is freely distributed or purchased?

It is often easy to identify groups because they reflect different types of people (e.g., men and women, adolescents and pensioners), as highlighted by the first three examples. However, sometimes the two groups you are interested in reflect two different conditions, as highlighted by the final example. In this final example, the two conditions (i.e., groups) are pirated music that is freely distributed and pirated music that is purchased. So we are interested in how attitudes towards music piracy differ when pirated music is freely distributed as opposed to when pirated music is purchased.

FOURTH Identify the appropriate adjoining text

Before you write out the groups you are interested in comparing, you typically need to include some adjoining text. Typically, this adjoining text includes the words between or amongst, but other words may be more appropriate; in the final example above, for instance, when joins the two conditions.

FIFTH Write out the comparative research question

Once you have these details - (1) the starting phrase, (2) the name of the dependent variable, (3) the name of the groups you are interested in comparing, and (4) any potential adjoining words - you can write out the comparative research question in full, as illustrated by the four examples discussed above.
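The five steps above can be condensed into a simple template function. This is only an illustrative sketch; the function name and arguments are invented here:

```python
def comparative_question(starting_phrase, dependent_variable, adjoining_text, groups):
    """Assemble a comparative research question from its parts:
    (1) starting phrase, (2) dependent variable as written out,
    (3) adjoining text, and (4) the groups being compared."""
    return (f"{starting_phrase} {dependent_variable} "
            f"{adjoining_text} {' and '.join(groups)}?")

print(comparative_question(
    "What is the difference in",
    "the weekly photo uploads on Facebook",
    "between",
    ["British male", "female university students"],
))
```

Running this reproduces the second example question discussed in this section.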

In the section that follows, the structure of relationship-based research questions is discussed.

Structure of relationship-based research questions

There are six steps required to construct a relationship-based research question: (1) choose your starting phrase; (2) identify the independent variable(s); (3) identify the dependent variable(s); (4) identify the group(s); (5) identify the appropriate adjoining text; and (6) write out the relationship-based research question. Each of these steps is discussed in turn.

Choose your starting phrase

Identify the independent variable(s)

Identify the dependent variable(s)

Identify the group(s)

Identify the appropriate adjoining text

Write out the relationship-based research question

FIRST Choose your starting phrase

Relationship-based research questions typically start with one of two phrases, as shown in the examples below:

What is the relationship between gender and attitudes towards music piracy amongst adolescents?

What is the relationship between study time and exam scores amongst university students?

What is the relationship of career prospects, salary and benefits, and physical working conditions on job satisfaction between managers and non-managers?

SECOND Name the independent variable(s)

All relationship-based research questions have at least one independent variable. You need to identify what this is. In the example that follows, the independent variables are career prospects, salary and benefits, and physical working conditions:

What is the relationship of career prospects, salary and benefits, and physical working conditions on job satisfaction between managers and non-managers?

When doing a dissertation at the undergraduate and master's level, it is likely that your research question will only have one or two independent variables, but this is not always the case.

THIRD Name the dependent variable(s)

All relationship-based research questions also have at least one dependent variable. You also need to identify what this is. At the undergraduate and master's level, it is likely that your research question will only have one dependent variable. In the examples above, the dependent variables are attitudes towards music piracy, exam scores, and job satisfaction.

FOURTH Name the group(s)

All relationship-based research questions have at least one group, but can have multiple groups. You need to identify this group(s). In the examples below, the group(s) appear at the end of each question:

What is the relationship between gender and attitudes towards music piracy amongst adolescents?

What is the relationship between study time and exam scores amongst university students?

What is the relationship of career prospects, salary and benefits, and physical working conditions on job satisfaction between managers and non-managers?

FIFTH Identify the appropriate adjoining text

Before you write out the groups you are interested in comparing, you typically need to include some adjoining text (i.e., usually the words between or amongst), as in the examples above.

SIXTH Write out the relationship-based research question

Once you have these details - (1) the starting phrase, (2) the name of the dependent variable, (3) the name of the independent variable, (4) the name of the group(s) you are interested in, and (5) any potential adjoining words - you can write out the relationship-based research question in full, as illustrated by the examples discussed above.
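As with comparative questions, the assembled parts can be sketched as a template function. The connector between the independent and dependent variables ("and" in the "relationship between" form, "on" in the "relationship of" form) is passed in explicitly; the function name and arguments are invented for illustration:

```python
def relationship_question(starting_phrase, independent_variables, connector,
                          dependent_variable, adjoining_text, groups):
    """Assemble a relationship-based research question from its parts:
    starting phrase, independent variable(s), dependent variable,
    adjoining text, and group(s)."""
    if len(independent_variables) <= 2:
        ivs = " and ".join(independent_variables)
    else:  # serial comma for three or more independent variables
        ivs = ", ".join(independent_variables[:-1]) + ", and " + independent_variables[-1]
    return (f"{starting_phrase} {ivs} {connector} {dependent_variable} "
            f"{adjoining_text} {' and '.join(groups)}?")

print(relationship_question(
    "What is the relationship of",
    ["career prospects", "salary and benefits", "physical working conditions"],
    "on", "job satisfaction", "between", ["managers", "non-managers"],
))
```

Running this reproduces the three-independent-variable example discussed in this section.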

STEP FOUR Write out the problem or issues you are trying to address in the form of a complete research question

In the previous section, we illustrated how to write out the three types of research question (i.e., descriptive, comparative and relationship-based research questions). Whilst these rules should help you when writing out your research question(s), the main thing you should keep in mind is whether your research question(s) flow and are easy to read.

  • Open access
  • Published: 07 May 2021

The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions

  • Benjamin Hanckel 1 ,
  • Mark Petticrew 2 ,
  • James Thomas 3 &
  • Judith Green 4  

BMC Public Health volume 21, Article number: 877 (2021)


Qualitative Comparative Analysis (QCA) is a method for identifying the configurations of conditions that lead to specific outcomes. Given its potential for providing evidence of causality in complex systems, QCA is increasingly used in evaluative research to examine the uptake or impacts of public health interventions. We map this emerging field, assessing the strengths and weaknesses of QCA approaches identified in published studies, and identify implications for future research and reporting.

PubMed, Scopus and Web of Science were systematically searched for peer-reviewed studies published in English up to December 2019 that had used QCA methods to identify the conditions associated with the uptake and/or effectiveness of interventions for public health. Data relating to the interventions studied (settings/level of intervention/populations), methods (type of QCA, case level, source of data, other methods used) and reported strengths and weaknesses of QCA were extracted and synthesised narratively.

The search identified 1384 papers, of which 27 (describing 26 studies) met the inclusion criteria. Interventions evaluated ranged across: nutrition/obesity ( n  = 8); physical activity ( n  = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation ( n  = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2), and general population health ( n  = 1). The majority of studies ( n  = 24) were of interventions solely or predominantly in high income countries. Key strengths reported were that QCA provides a method for addressing causal complexity; and that it provides a systematic approach for understanding the mechanisms at work in implementation across contexts. Weaknesses reported related to data availability limitations, especially on ineffective interventions. The majority of papers demonstrated good knowledge of cases, and justification of case selection, but other criteria of methodological quality were less comprehensively met.

QCA is a promising approach for addressing the role of context in complex interventions, and for identifying causal configurations of conditions that predict implementation and/or outcomes when there is sufficiently detailed understanding of a series of comparable cases. As the use of QCA in evaluative health research increases, there may be a need to develop advice for public health researchers and journals on minimum criteria for quality and reporting.


Interest in the use of Qualitative Comparative Analysis (QCA) arises in part from growing recognition of the need to broaden methodological capacity to address causality in complex systems [ 1 , 2 , 3 ]. Guidance for researchers for evaluating complex interventions suggests process evaluations [ 4 , 5 ] can provide evidence on the mechanisms of change, and the ways in which context affects outcomes. However, this does not address the more fundamental problems with trial and quasi-experimental designs arising from system complexity [ 6 ]. As Byrne notes, the key characteristic of complex systems is ‘emergence’ [ 7 ]: that is, effects may accrue from combinations of components, in contingent ways, which cannot be reduced to any one level. Asking about ‘what works’ in complex systems is not to ask a simple question about whether an intervention has particular effects, but rather to ask: “how the intervention works in relation to all existing components of the system and to other systems and their sub-systems that intersect with the system of interest” [ 7 ]. Public health interventions are typically attempts to effect change in systems that are themselves dynamic; approaches to evaluation are needed that can deal with emergence [ 8 ]. In short, understanding the uptake and impact of interventions requires methods that can account for the complex interplay of intervention conditions and system contexts.

To build a useful evidence base for public health, evaluations thus need to assess not just whether a particular intervention (or component) causes specific change in one variable, in controlled circumstances, but whether those interventions shift systems, and how specific conditions of interventions and setting contexts interact to lead to anticipated outcomes. There have been a number of calls for the development of methods in intervention research to address these issues of complex causation [ 9 , 10 , 11 ], including calls for the greater use of case studies to provide evidence on the important elements of context [ 12 , 13 ]. One approach for addressing causality in complex systems is Qualitative Comparative Analysis (QCA): a systematic way of comparing the outcomes of different combinations of system components and elements of context (‘conditions’) across a series of cases.

The potential of qualitative comparative analysis

QCA is an approach developed by Charles Ragin [ 14 , 15 ], originating in comparative politics and macrosociology to address questions of comparative historical development. Using set theory, QCA methods explore the relationships between ‘conditions’ and ‘outcomes’ by identifying configurations of necessary and sufficient conditions for an outcome. The underlying logic is different from probabilistic reasoning, as the causal relationships identified are not inferred from the (statistical) likelihood of them being found by chance, but rather from comparing sets of conditions and their relationship to outcomes. It is thus more akin to the generative conceptualisations of causality in realist evaluation approaches [ 16 ]. QCA is a non-additive and non-linear method that emphasises diversity, acknowledging that different paths can lead to the same outcome. For evaluative research in complex systems [ 17 ], QCA therefore offers a number of benefits, including: that QCA can identify more than one causal pathway to an outcome (equifinality); that it accounts for conjunctural causation (where the presence or absence of conditions in relation to other conditions might be key); and that it is asymmetric with respect to the success or failure of outcomes. That is, that specific factors explain success does not imply that their absence leads to failure (causal asymmetry).

QCA was designed, and is typically used, to compare data from a medium N (10–50) series of cases that include those with and those without the (dichotomised) outcome. Conditions can be dichotomised in ‘crisp sets’ (csQCA) or represented in ‘fuzzy sets’ (fsQCA), where set membership is calibrated (either continuously or with cut offs) between two extremes representing fully in (1) or fully out (0) of the set. A third version, multi-value QCA (mvQCA), infrequently used, represents conditions as ‘multi-value sets’, with multinomial membership [ 18 ]. In calibrating set membership, the researcher specifies the critical qualitative anchors that capture differences in kind (full membership and full non-membership), as well as differences in degree in fuzzy sets (partial membership) [ 15 , 19 ]. Data on outcomes and conditions can come from primary or secondary qualitative and/or quantitative sources. Once data are assembled and coded, truth tables are constructed which “list the logically possible combinations of causal conditions” [ 15 ], collating the number of cases where those configurations occur to see if they share the same outcome. Analysis of these truth tables assesses first whether any conditions are individually necessary or sufficient to predict the outcome, and then whether any configurations of conditions are necessary or sufficient. Necessary conditions are assessed by examining causal conditions shared by cases with the same outcome, whilst identifying sufficient conditions (or combinations of conditions) requires examining cases with the same causal conditions to identify if they have the same outcome [ 15 ]. However, as Legewie argues, the presence of a condition, or a combination of conditions in actual datasets, are likely to be “‘quasi-necessary’ or ‘quasi-sufficient’ in that the causal relation holds in a great majority of cases, but some cases deviate from this pattern” [ 20 ]. 
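The calibration logic described above (three qualitative anchors: full non-membership, the crossover point, and full membership) can be sketched in a few lines. The anchor values below are hypothetical and real analyses use dedicated QCA software; this only illustrates the log-odds transformation at the heart of the direct method:

```python
from math import exp

def calibrate(x, non_member, crossover, full_member):
    """Map a raw value onto a fuzzy-set membership score using three
    qualitative anchors, in the spirit of Ragin's direct method: the
    anchors correspond to log-odds of about -3, 0, and +3
    (memberships of roughly 0.05, 0.5, and 0.95)."""
    if x >= crossover:
        log_odds = 3.0 * (x - crossover) / (full_member - crossover)
    else:
        log_odds = 3.0 * (x - crossover) / (crossover - non_member)
    return 1.0 / (1.0 + exp(-log_odds))

# Hypothetical anchors for, say, intervention reach (% of target population):
# fully out of the set at 10%, crossover at 50%, fully in at 90%.
print(round(calibrate(50, 10, 50, 90), 3))  # crossover -> 0.5
print(round(calibrate(90, 10, 50, 90), 3))  # full-membership anchor -> ~0.95
```

Values between the anchors receive partial membership scores, capturing the "differences in degree" the text describes.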
Following reduction of the complexity of the model, the final model is tested for coverage (the degree to which a configuration accounts for instances of an outcome in the empirical cases; the proportion of cases belonging to a particular configuration) and consistency (the degree to which the cases sharing a combination of conditions align with a proposed subset relation). The result is an analysis of complex causation, “defined as a situation in which an outcome may follow from several different combinations of causal conditions” [ 15 ] illuminating the ‘causal recipes’, the causally relevant conditions or configuration of conditions that produce the outcome of interest.
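The truth-table construction and the consistency and coverage measures described above can be sketched for the crisp-set case as follows. The cases, condition names, and outcome values are invented; a real analysis would also resolve contradictions and perform logical minimisation, which are omitted here:

```python
from collections import defaultdict

# Hypothetical crisp-set data: each case has dichotomised conditions
# (1 = in the set, 0 = out) and a dichotomised outcome.
cases = [
    {"community_engagement": 1, "adequate_funding": 1, "outcome": 1},
    {"community_engagement": 1, "adequate_funding": 1, "outcome": 1},
    {"community_engagement": 1, "adequate_funding": 0, "outcome": 1},
    {"community_engagement": 1, "adequate_funding": 0, "outcome": 0},
    {"community_engagement": 0, "adequate_funding": 1, "outcome": 0},
    {"community_engagement": 0, "adequate_funding": 0, "outcome": 0},
]
conditions = ["community_engagement", "adequate_funding"]

def truth_table(cases, conditions):
    """Group cases by configuration of conditions, counting how many cases
    show each configuration and how many of those display the outcome."""
    rows = defaultdict(lambda: {"n": 0, "positive": 0})
    for case in cases:
        config = tuple(case[c] for c in conditions)
        rows[config]["n"] += 1
        rows[config]["positive"] += case["outcome"]
    return dict(rows)

def consistency(rows, config):
    """Degree to which cases sharing this configuration show the outcome
    (a sufficiency measure)."""
    return rows[config]["positive"] / rows[config]["n"]

def coverage(rows, config):
    """Share of all outcome-positive cases accounted for by this configuration."""
    total_positive = sum(r["positive"] for r in rows.values())
    return rows[config]["positive"] / total_positive

rows = truth_table(cases, conditions)
print(consistency(rows, (1, 1)))  # both conditions present
print(coverage(rows, (1, 1)))
```

Here the configuration with both conditions present is perfectly consistent with the outcome but only covers part of the outcome-positive cases, illustrating why both measures are reported.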

QCA, then, has promise for addressing questions of complex causation, and recent calls for the greater use of QCA methods have come from a range of fields related to public health, including health research [ 17 ], studies of social interventions [ 7 ], and policy evaluation [ 21 , 22 ]. In making arguments for the use of QCA across these fields, researchers have also indicated some of the considerations that must be taken into account to ensure robust and credible analyses. There is a need, for instance, to ensure that ‘contradictions’, where cases with the same configurations show different outcomes, are resolved and reported [ 15 , 23 , 24 ]. Additionally, researchers must consider the ratio of cases to conditions, and limit the number of conditions to cases to ensure the validity of models [ 25 ]. Marx and Dusa, examining crisp set QCA, have provided some guidance to the ‘ceiling’ number of conditions which can be included relative to the number of cases to increase the probability of models being valid (that is, with a low probability of being generated through random data) [ 26 ].

There is now a growing body of published research in public health and related fields drawing on QCA methods. This is therefore a timely point to map the field and assess the potential of QCA as a method for contributing to the evidence base for what works in improving public health. To inform future methodological development of robust methods for addressing complexity in the evaluation of public health interventions, we undertook a systematic review to map existing evidence, identify gaps in, and strengths and weakness of, the QCA literature to date, and identify the implications of these for conducting and reporting future QCA studies for public health evaluation. We aimed to address the following specific questions [ 27 ]:

1. How is QCA used for public health evaluation? What populations, settings, methods used in source case studies, unit/s and level of analysis (‘cases’), and ‘conditions’ have been included in QCA studies?

2. What strengths and weaknesses have been identified by researchers who have used QCA to understand complex causation in public health evaluation research?

3. What are the existing gaps in, and strengths and weakness of, the QCA literature in public health evaluation, and what implications do these have for future research and reporting of QCA studies for public health?

This systematic review was registered with the International Prospective Register of Systematic Reviews (PROSPERO) on 29 April 2019 ( CRD42019131910 ). A protocol was prepared in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols (PRISMA-P) 2015 statement [ 28 ], and published in 2019 [ 27 ], where the methods are explained in detail. EPPI-Reviewer 4 was used to manage the process and undertake screening of abstracts [ 29 ].

Search strategy

We searched for peer-reviewed published papers in English, which used QCA methods to examine causal complexity in evaluating the implementation, uptake and/or effects of a public health intervention, in any region of the world, for any population. ‘Public health interventions’ were defined as those which aim to promote or protect health, or prevent ill health, in the population. No date exclusions were made, and papers published up to December 2019 were included.

Search strategies used the following phrases “Qualitative Comparative Analysis” and “QCA”, which were combined with the keywords “health”, “public health”, “intervention”, and “wellbeing”. See Additional file  1 for an example. Searches were undertaken on the following databases: PubMed, Web of Science, and Scopus. Additional searches were undertaken on Microsoft Academic and Google Scholar in December 2019, where the first pages of results were checked for studies that may have been missed in the initial search. No additional studies were identified. The list of included studies was sent to experts in QCA methods in health and related fields, including authors of included studies and/or those who had published on QCA methodology. This generated no additional studies within scope, but a suggestion to check the COMPASSS (Comparative Methods for Systematic Cross-Case Analysis) database; this was searched, identifying one further study that met the inclusion criteria [ 30 ]. COMPASSS ( https://compasss.org/ ) collates publications of studies using comparative case analysis.

We excluded studies where no intervention was evaluated, which included studies that used QCA to examine public health infrastructure (i.e. staff training) without a specific health outcome, and papers that report on prevalence of health issues (i.e. prevalence of child mortality). We also excluded studies of health systems or services interventions where there was no public health outcome.

After retrieval, and removal of duplicates, titles and abstracts were screened by one of two authors (BH or JG). Double screening of all records was assisted by EPPI Reviewer 4’s machine learning function. Of the 1384 papers identified after duplicates were removed, we excluded 820 after review of titles and abstracts (Fig.  1 ). The excluded studies included: a large number of papers relating to ‘quantitative coronary angioplasty’ and some which referred to the Queensland Criminal Code (both of which are also abbreviated to ‘QCA’); papers that reported methodological issues but not empirical studies; protocols; and papers that used the phrase ‘qualitative comparative analysis’ to refer to qualitative studies that compared different sub-populations or cases within the study, but did not include formal QCA methods.

figure 1

Flow Diagram

Full texts of the 51 remaining studies were screened by BH and JG for inclusion, with 10 papers double coded by both authors, with complete agreement. Uncertain inclusions were checked by the third author (MP). Of the full texts, 24 were excluded because: they did not report a public health intervention ( n  = 18); had used a methodology inspired by QCA, but had not undertaken a QCA ( n  = 2); were protocols or methodological papers only ( n  = 2); or were not published in peer-reviewed journals ( n  = 2) (see Fig.  1 ).

Data were extracted manually from the 27 remaining full texts by BH and JG. Two papers relating to the same research question and dataset were combined, such that analysis was by study ( n  = 26) not by paper. We retrieved data relating to: publication (journal, first author country affiliation, funding reported); the study setting (country/region setting, population targeted by the intervention(s)); intervention(s) studied; methods (aims, rationale for using QCA, crisp or fuzzy set QCA, other analysis methods used); data sources drawn on for cases (source [primary data, secondary data, published analyses], qualitative/quantitative data, level of analysis, number of cases, final causal conditions included in the analysis); outcome explained; and claims made about strengths and weaknesses of using QCA (see Table  1 ). Data were synthesised narratively, using thematic synthesis methods [ 31 , 32 ], with interventions categorised by public health domain and level of intervention.

Quality assessment

There are no reporting guidelines for QCA studies in public health, but there are a number of discussions of best practice in the methodological literature [ 25 , 26 , 33 , 34 ]. These discussions suggest several criteria for strengthening QCA methods that we used as indicators of methodological and/or reporting quality: evidence of familiarity of cases; justification for selection of cases; discussion and justification of set membership score calibration; reporting of truth tables; reporting and justification of solution formula; and reporting of consistency and coverage measures. For studies using csQCA, and claiming an explanatory analysis, we additionally identified whether the number of cases was sufficient for the number of conditions included in the model, using a pragmatic cut-off in line with Marx & Dusa’s guideline thresholds, which indicate how many cases are sufficient for given numbers of conditions to reject a 10% probability that models could be generated with random data [ 26 ].

Overview of scope of QCA research in public health

Twenty-seven papers reporting 26 studies were included in the review (Table  1 ). The earliest was published in 2005, and 17 were published after 2015. The majority ( n  = 19) were published in public health/health promotion journals, with the remainder published in other health science ( n  = 3) or in social science/management journals ( n  = 4). The public health domain(s) addressed by each study were broadly coded by the main area of focus. They included nutrition/obesity ( n  = 8); physical activity (PA) (n = 4); health inequalities ( n  = 3); mental health ( n  = 2); community engagement ( n  = 3); chronic condition management ( n  = 3); vaccine adoption or implementation (n = 2); programme implementation ( n  = 3); breastfeeding ( n  = 2); or general population health ( n  = 1). The majority ( n  = 24) of studies were conducted solely or predominantly in high-income countries (systematic reviews in general searched global sources, but commented that the overwhelming majority of studies were from high-income countries). Country settings included: any ( n  = 6); OECD countries ( n  = 3); USA ( n  = 6); UK ( n  = 6) and one each from Nepal, Austria, Belgium, Netherlands and Africa. These largely reflected the first author’s country affiliations in the UK ( n  = 13); USA ( n  = 9); and one each from South Africa, Austria, Belgium, and the Netherlands. All three studies primarily addressing health inequalities [ 35 , 36 , 37 ] were from the UK.

Eight of the interventions evaluated were individual-level behaviour change interventions (e.g. weight management interventions, case management, self-management for chronic conditions); eight evaluated policy/funding interventions; five explored settings-based health promotion/behaviour change interventions (e.g. schools-based physical activity intervention, store-based food choice interventions); three evaluated community empowerment/engagement interventions, and two studies evaluated networks and their impact on health outcomes.

Methods and data sets used

Fifteen studies used crisp sets (csQCA), 11 used fuzzy sets (fsQCA). No study used mvQCA. Eleven studies included additional analyses of the datasets drawn on for the QCA, including six that used qualitative approaches (narrative synthesis, case comparisons), typically to identify cases or conditions for populating the QCA; and four reporting additional statistical analyses (meta-regression, linear regression) to either identify differences overall between cases prior to conducting a QCA (e.g. [ 38 ]) or to explore correlations in more detail (e.g. [ 39 ]). One study used an additional Boolean configurational technique to reduce the number of conditions in the QCA analysis [ 40 ]. No studies reported aiming to compare the findings from the QCA with those from other techniques for evaluating the uptake or effectiveness of interventions, although some [ 41 , 42 ] were explicitly using the study to showcase the possibilities of QCA compared with other approaches in general. Twelve studies drew on primary data collected specifically for the study, with five of those additionally drawing on secondary data sets; five drew only on secondary data sets, and nine used data from systematic reviews of published research. Seven studies drew primarily on qualitative data, generally derived from interviews or observations.

Many studies were undertaken in the context of one or more trials, which provided evidence of effect. Within single trials, this was generally for a process evaluation, with cases being trial sites. Fernald et al.’s study, for instance, was in the context of a trial of a programme to support primary care teams in identifying and implementing self-management support tools for their patients, which measured patient and health care provider level outcomes [ 43 ]. Their QCA used qualitative data from the trial to identify a set of necessary conditions for health care provider practices to implement the tools successfully. In studies drawing on data from systematic reviews, cases were always at the level of intervention or intervention component, with data included from multiple trials. Harris et al., for instance, undertook a mixed-methods systematic review of school-based self-management interventions for asthma, using meta-analysis methods to identify effective interventions and QCA methods to identify which intervention features were aligned with success [ 44 ].

The largest number of studies (n = 10), including all the systematic reviews, analysed cases at the level of the intervention, or a component of the intervention; seven analysed organisational level cases (e.g. school class, network, primary care practice); five analysed sub-national region level cases (e.g. state, local authority area); and two each analysed country-level or individual-level cases. Sample sizes ranged from 10 to 131, with no study having small N (< 10) sample sizes, four having large N (> 50) sample sizes, and the majority (22) being medium N studies (in the range 10–50).

Rationale for using QCA

Most papers reported a rationale for using QCA that mentioned ‘complexity’ or ‘context’, including: noting that QCA is appropriate for addressing causal complexity or multiple pathways to outcome [ 37 , 43 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ]; noting the appropriateness of the method for providing evidence on how context impacts on interventions [ 41 , 50 ]; or the need for a method that addressed causal asymmetry [ 52 ]. Three stated that the QCA was an ‘exploratory’ analysis [ 53 , 54 , 55 ]. In addition to the empirical aims, several papers (e.g. [ 42 , 48 ]) sought to demonstrate the utility of QCA, or to develop QCA methods for health research (e.g. [ 47 ]).

Reported strengths and weaknesses of approach

There was general agreement about the strengths of QCA: specifically, that it was a useful tool to address complex causality, providing a systematic approach to understanding the mechanisms at work in implementation across contexts [ 38 , 39 , 43 , 45 , 46 , 47 , 55 , 56 , 57 ], particularly as they relate to (in)effective intervention implementation [ 44 , 51 ] and the evaluation of interventions [ 58 ], or “where it is not possible to identify linearity between variables of interest and outcomes” [ 49 ]. Authors highlighted the strengths of QCA as providing possibilities for examining complex policy problems [ 37 , 59 ]; for testing existing as well as new theory [ 52 ]; and for identifying aspects of interventions which had not previously been perceived as critical [ 41 ] or which may have been missed when drawing on statistical methods that use, for instance, linear additive models [ 42 ]. The strengths of QCA in terms of providing useful evidence for policy were flagged in a number of studies, particularly where the causal recipes suggested that conventional assumptions about effectiveness were not confirmed. Blackman et al., for instance, in a series of studies exploring why unequal health outcomes had narrowed in some areas of the UK and not others, identified poorer outcomes in settings with ‘better’ contracting [ 35 , 36 , 37 ]; Harting found, contrary to theoretical assumptions about the necessary conditions for successful implementation of public health interventions, that a multisectoral network was not a necessary condition [ 30 ].

Weaknesses reported included the limitations of QCA in general for addressing complexity, as well as specific limitations with either the csQCA or the fsQCA methods employed. One general concern discussed across a number of studies was the problem of limited empirical diversity, which resulted in: limitations in the possible number of conditions included in each study, particularly with small N studies [ 58 ]; missing data on important conditions [ 43 ]; or limited reported diversity (where, for instance, data were drawn from systematic reviews, reflecting publication biases which limit reporting of ineffective interventions) [ 41 ]. Reported methodological limitations in small and intermediate N studies included concerns about the potential that case selection could bias findings [ 37 ].

In terms of potential for addressing causal complexity, the limitations of QCA for identifying unintended consequences, tipping points, and/or feedback loops in complex adaptive systems were noted [ 60 ], as were the potential limitations (especially in csQCA studies) of reducing complex conditions, drawn from detailed qualitative understanding, to binary conditions [ 35 ]. Avoiding this reduction was the rationale for using fsQCA in one study [ 57 ], where detailed knowledge of conditions is needed to make theoretically justified calibration decisions. However, others [ 47 ] make the case that csQCA provides more appropriate findings for policy: dichotomisation forces a focus on meaningful distinctions, including those related to decisions that practitioners/policy makers can action. There is, then, a potential trade-off: dichotomisation provides ‘interpretable results’, but precludes the use of more detailed information [ 45 ]. That QCA does not deal with probabilistic causation was also noted [ 47 ].

Quality of published studies

Assessment of ‘familiarity with cases’ was made subjectively on the basis of study authors’ reports of their knowledge of the settings (empirical or theoretical) and the descriptions they provided in the published paper: overall, 14 were judged as sufficient, and 12 less than sufficient. Studies which included primary data were more likely to be judged as demonstrating familiarity (n = 10) than those drawing on secondary sources or systematic reviews, of which only two were judged as demonstrating familiarity. All studies justified how the selection of cases had been made; for those not using the full available population of cases, this was in general (appropriately) done theoretically: following previous research [ 52 ]; purposively to include a range of positive and negative outcomes [ 41 ]; or to include a diversity of cases [ 58 ]. In identifying conditions leading to effective/not effective interventions, one purposive strategy was to include a specified percentage or number of the most effective and least effective interventions (e.g. [ 36 , 40 , 51 , 52 ]). Discussion of calibration of set membership scores was judged adequate in 15 cases and inadequate in 11. The majority (n = 21) reported truth tables in the paper or supplementary material (or explicitly provided details of how to obtain them), while fewer (n = 10) reported raw data matrices. The majority (n = 21) also reported at least some detail on the coverage (the number of cases with a particular configuration) and consistency (the percentage of similar causal configurations which result in the same outcome). Only five studies met all six of these quality criteria (evidence of familiarity with cases, justification of case selection, discussion of calibration, reporting truth tables, reporting raw data matrices, reporting coverage and consistency); a further six met at least five of them.
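For readers unfamiliar with these two parameters of fit, the crisp-set versions can be sketched in a few lines using the standard set-theoretic definitions (the cases below are hypothetical, not data from any included study):

```python
# Each case records whether it displays a given configuration of conditions
# and whether the outcome is present: (in_configuration, outcome_present).
cases = [
    (1, 1), (1, 1), (1, 0), (0, 1), (0, 0), (1, 1),
]

n_config = sum(c for c, _ in cases)           # cases displaying the configuration
n_outcome = sum(o for _, o in cases)          # cases displaying the outcome
n_both = sum(1 for c, o in cases if c and o)  # overlap of the two sets

# Consistency: how reliably the configuration is accompanied by the outcome.
consistency = n_both / n_config
# Coverage: how much of the outcome the configuration accounts for.
coverage = n_both / n_outcome

print(f"consistency={consistency:.2f}, coverage={coverage:.2f}")
```

A solution can thus be highly consistent yet cover only a small share of the outcome cases, which is why reporting both figures, as the quality criteria above suggest, matters.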

Of the csQCA studies which were not reporting an exploratory analysis, four appeared to have insufficient cases for the large number of conditions entered into at least one of the models reported, with a consequent risk to the validity of the QCA models [ 26 ].

QCA has been widely used in public health research over the last decade to advance understanding of causal inference in complex systems. In this review of published evidence to date, we have identified studies using QCA to examine the configurations of conditions that lead to particular outcomes across contexts. As noted by most study authors, QCA methods promise advantages over probabilistic statistical techniques for examining causation where systems and/or interventions are complex, providing public health researchers with a method to test the multiple pathways (configurations of conditions), and necessary and sufficient conditions, that lead to desired health outcomes.

The origins of QCA approaches are in comparative policy studies. Rihoux et al.’s review of peer-reviewed journal articles using QCA methods published up to 2011 found that the majority of published examples were from political science and sociology, with fewer than 5% of the 313 studies they identified coming from health sciences [ 61 ]. They also reported few examples of the method being used in policy evaluation and implementation studies [ 62 ]. In the decade since their review of the field [ 61 ], there has been an emerging body of evaluative work in health: we identified 26 studies in the field of public health alone, with the majority published in public health journals. Across these studies, QCA has been used for evaluative questions in a range of settings and public health domains to identify the conditions under which interventions are implemented and/or have evidence of effect for improving population health. All studies included a series of cases comprising some with and some without the outcome of interest (such as behaviour change, successful programme implementation, or good vaccination uptake). The dominance of high-income countries in both intervention settings and author affiliations is disappointing, but reflects the disproportionate location of public health research in the global north more generally [ 63 ].

The largest single group of studies included were systematic reviews, using QCA to compare interventions (or intervention components) to identify successful (and non-successful) configurations of conditions across contexts. Here, the value of QCA lies in its potential for synthesis with quantitative meta-synthesis methods to identify the particular conditions or contexts in which interventions or components are effective. As Parrott et al. note, for instance, their meta-analysis could identify probabilistic effects of weight management programmes, and the QCA analysis enabled them to address the “role that the context of the [paediatric weight management] intervention has in influencing how, when, and for whom an intervention mix will be successful” [ 50 ]. However, using QCA to identify configurations of conditions that lead to effective or non-effective interventions across particular areas of population health is an application that does move away in some significant respects from the origins of the method. First, researchers drawing on evidence from systematic reviews for their data are reliant largely on published evidence for information on conditions (such as the organisational contexts in which interventions were implemented, or the types of behaviour change theory utilised). Although guidance for describing interventions [ 64 ] advises key aspects of context are included in reports, this may not include data on the full range of conditions that might be causally important, and review research teams may have limited knowledge of these ‘cases’ themselves. Second, less successful interventions are less likely to be published, potentially limiting the diversity of cases, particularly of cases with unsuccessful outcomes. A strength of QCA is the separate analysis of conditions leading to positive and negative outcomes: this is precluded where there is insufficient evidence on negative outcomes [ 50 ].
Third, when including a range of types of intervention, it can be unclear whether the cases included are truly comparable. A QCA study requires a high degree of theoretical and pragmatic case knowledge on the part of the researcher to calibrate conditions to qualitative anchors: it is reliant on deep understanding of complex contexts, and a familiarity with how conditions interact within and across contexts. Perhaps surprising is that only seven of the studies included here clearly drew on qualitative data, given that QCA is primarily seen as a method that requires thick, detailed knowledge of cases, particularly when the aim is to understand complex causation [ 8 ]. Whilst research teams conducting QCA in the context of systematic reviews may have detailed understanding in general of interventions within their spheres of expertise, they are unlikely to have this for the whole range of cases, particularly where a diverse set of contexts (countries, organisational settings) are included. Making a theoretical case for the valid comparability of such a case series is crucial. There may, then, be limitations in the portability of QCA methods for conducting studies entirely reliant on data from published evidence.

QCA was developed for small and medium N series of cases, and (as in the field more broadly, [ 61 ]), the samples in our studies predominantly had between 10 and 50 cases. However, there is increasing interest in the method as an alternative or complementary technique to regression-oriented statistical methods for larger samples [ 65 ], such as from surveys, where detailed knowledge of cases is likely to be replaced by theoretical knowledge of relationships between conditions (see [ 23 ]). The two larger N (> 100 cases) studies in our sample were an individual level analysis of survey data [ 46 , 47 ] and an analysis of intervention arms from a systematic review [ 50 ]. Larger sample sizes allow more conditions to be included in the analysis [ 23 , 26 ], although for evaluative research, where the aim is developing a causal explanation, rather than simply exploring patterns, there remains a limit to the number of conditions that can be included. As the number of conditions included increases, so too does the number of possible configurations, increasing the chance of unique combinations and of generating spurious solutions with a high level of consistency. As a rule of thumb, once the number of conditions exceeds 6–8 (with up to 50 cases) or 10 (for larger samples), the credibility of solutions may be severely compromised [ 23 ].
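The arithmetic behind this rule of thumb is simply that k crisp conditions define 2**k logically possible configurations, so the truth table quickly outgrows a medium-N sample (a sketch, with a hypothetical sample size):

```python
# k crisp-set conditions define 2**k logically possible configurations.
# Once k grows, a medium-N study cannot populate most truth-table rows:
# the 'limited diversity' problem, which invites spurious solutions.
n_cases = 50  # upper end of a medium-N sample
for k in (4, 6, 8, 10):
    rows = 2 ** k
    unobserved = max(rows - n_cases, 0)  # rows that must lack an observed case
    print(f"{k} conditions: {rows} configurations "
          f"({unobserved} necessarily unobserved with {n_cases} cases)")
```

With 10 conditions, over 1,000 configurations compete for 50 cases, so almost every observed case sits in its own row, illustrating why solutions become hard to credit beyond 6–8 conditions at this sample size.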

Strengths and weaknesses of the study

A systematic review has the potential advantages of transparency and rigour and, if not exhaustive, our search is likely to be representative of the body of research using QCA for evaluative public health research up to 2020. However, a limitation is the inevitable difficulty in operationalising a ‘public health’ intervention. Exclusions on scope are not straightforward, given that most social, environmental and political conditions impact on public health, and arguably a greater range of policy and social interventions (such as fiscal or trade policies) that have been the subject of QCA analyses could have been included, or a greater range of more clinical interventions. However, to enable a manageable number of papers to review, and restrict our focus to those papers that were most directly applicable to (and likely to be read by) those in public health policy and practice, we operationalised ‘public health interventions’ as those which were likely to be directly impacting on population health outcomes, or on behaviours (such as increased physical activity) where there was good evidence for causal relationships with public health outcomes, and where the primary research question of the study examined the conditions leading to those outcomes. This review has, of necessity, therefore excluded a considerable body of evidence likely to be useful for public health practice in terms of planning interventions, such as studies on how to better target smoking cessation [ 66 ] or foster social networks [ 67 ] where the primary research question was on conditions leading to these outcomes, rather than on conditions for outcomes of specific interventions. 
Similarly, there is a growing number of descriptive epidemiological studies using QCA to explore factors predicting outcomes across such diverse areas as lupus and quality of life [ 68 ]; length of hospital stay [ 69 ]; constellations of factors predicting injury [ 70 ]; or the role of austerity, crisis and recession in predicting public health outcomes [ 71 ]. Whilst there is undoubtedly useful information to be derived from studying the conditions that lead to particular public health problems, these studies were not directly evaluating interventions, so they were also excluded.

Restricting our search to publications in English and to peer-reviewed publications may have missed bodies of work from many regions, and has excluded research from non-governmental organisations using QCA methods in evaluation. As this is a rapidly evolving field, with relatively recent uptake in public health (all our included studies were published from 2005 onwards), our studies may not reflect the most recent advances in the area.

Implications for conducting and reporting QCA studies

This systematic review has reviewed studies that deployed an emergent methodology, which has no reporting guidelines and has had, to date, a relatively low level of awareness among many potential evidence users in public health. For this reason, many of the studies reviewed were relatively detailed on the methods used, and the rationale for utilising QCA.

We did not assess quality directly, but used indicators of good practice discussed in the QCA methodological literature, largely written for policy studies scholars, and often post-dating the publication dates of studies included in this review. It is also worth noting that, given the relatively recent development of QCA methods, methodological debate is still thriving on issues such as the reliability of causal inferences [ 72 ], alongside more general critiques of the usefulness of the method for policy decisions (see, for instance, [ 73 ]). The authors of studies included in this review also commented directly on methodological development: for instance, Thomas et al. suggest that QCA may benefit from methods development for sensitivity analyses around calibration decisions [ 42 ].

However, we selected quality criteria that, we argue, are relevant for public health research. Justifying the selection of cases, discussing and justifying the calibration of set membership, making data sets available, and reporting truth tables, consistency and coverage are all good practice in line with the usual requirements of transparency and credibility in methods. When QCA studies aim to provide explanation of outcomes (rather than exploring configurations), it is also vital that they are reported in ways that enhance the credibility of claims made, including justifying the number of conditions included relative to cases. Few of the studies published to date met all these criteria, at least in the papers included here (although additional material may have been provided in other publications). To improve the future discoverability and uptake of QCA methods in public health, and to strengthen the credibility of findings from these methods, we therefore suggest the following criteria should be considered by authors and reviewers for reporting QCA studies which aim to provide causal evidence about the configurations of conditions that lead to implementation or outcomes:

The paper title and abstract state the QCA design;

The sampling unit for the ‘case’ is clearly defined (e.g.: patient, specified geographical population, ward, hospital, network, policy, country);

The population from which the cases have been selected is defined (e.g.: all patients in a country with X condition, districts in X country, tertiary hospitals, all hospitals in X country, all health promotion networks in X province, European policies on smoking in outdoor places, OECD countries);

The rationale for selection of cases from the population is justified (e.g.: whole population, random selection, purposive sample);

There are sufficient cases to provide credible coverage across the number of conditions included in the model, and the rationale for the number of conditions included is stated;

Cases are comparable;

There is a clear justification for how choices of relevant conditions (or ‘aspects of context’) have been made;

There is sufficient transparency for replicability: in line with open science expectations, datasets should be available where possible; truth tables should be reported in publications, and reports of coverage and consistency provided.

Implications for future research

In reviewing methods for evaluating natural experiments, Craig et al. focus on statistical techniques for enhancing causal inference, noting only that what they call ‘qualitative’ techniques (the cited references for these are all QCA studies) require “further studies … to establish their validity and usefulness” [ 2 ]. The studies included in this review have demonstrated that QCA is a feasible method when there are sufficient (comparable) cases for identifying configurations of conditions under which interventions are effective (or not), or are implemented (or not). Given ongoing concerns in public health about how best to evaluate interventions across complex contexts and systems, this is promising. This review has also demonstrated the value of adding QCA methods to the tool box of techniques for evaluating interventions such as public policies, health promotion programmes, and organisational changes - whether they are implemented in a randomised way or not. Many of the studies in this review have clearly generated useful evidence: whether this evidence has had more or less impact, in terms of influencing practice and policy, or is more valid, than evidence generated by other methods is not known. Validating the findings of a QCA study is perhaps as challenging as validating the findings from any other design, given the absence of any gold standard comparators. Comparisons of the findings of QCA with those from other methods are also typically constrained by the rather different research questions asked, and the different purposes of the analysis. In our review, QCA were typically used alongside other methods to address different questions, rather than to compare methods. However, as the field develops, follow up studies, which evaluate outcomes of interventions designed in line with conditions identified as causal in prior QCAs, might be useful for contributing to validation.

This review was limited to public health evaluation research: other domains that would be useful to map include health systems/services interventions and studies used to design or target interventions. There is also an opportunity to broaden the scope of the field, particularly for addressing some of the more intractable challenges for public health research. Given the limitations in the evidence base on what works to address inequalities in health, for instance [ 74 ], QCA has potential here, to help identify the conditions under which interventions do or do not exacerbate unequal outcomes, or the conditions that lead to differential uptake or impacts across sub-population groups. It is perhaps surprising that relatively few of the studies in this review included cases at the level of country or region, the traditional level for QCA studies. There may be scope for developing international comparisons for public health policy, and using QCA methods at the case level (nation, sub-national region) of classic policy studies in the field. In the light of debate around COVID-19 pandemic response effectiveness, comparative studies across jurisdictions might shed light on issues such as differential population responses to vaccine uptake or mask use, for example, and these might in turn be considered as conditions in causal configurations leading to differential morbidity or mortality outcomes.

When should QCA be considered?

Public health evaluations typically assess the efficacy, effectiveness or cost-effectiveness of interventions and the processes and mechanisms through which they effect change. There is no perfect evaluation design for achieving these aims. As in other fields, the choice of design will in part depend on the availability of counterfactuals, the extent to which the investigator can control the intervention, and the range of potential cases and contexts [ 75 ], as well as political considerations, such as the credibility of the approach with key stakeholders [ 76 ]. There are inevitably ‘horses for courses’ [ 77 ]. The evidence from this review suggests that QCA evaluation approaches are feasible when there is a sufficient number of comparable cases with and without the outcome of interest, and when the investigators have, or can generate, sufficiently in-depth understanding of those cases to make sense of connections between conditions, and to make credible decisions about the calibration of set membership. QCA may be particularly relevant for understanding multiple causation (that is, where different configurations might lead to the same outcome), and for understanding the conditions associated with both lack of effect and effect. As a stand-alone approach, QCA might be particularly valuable for national and regional comparative studies of the impact of policies on public health outcomes. Alongside cluster randomised trials of interventions, or alongside systematic reviews, QCA approaches are especially useful for identifying core combinations of causal conditions for success and lack of success in implementation and outcome.

Conclusions

QCA is a relatively new approach for public health research, with promise for contributing to much-needed methodological development for addressing causation in complex systems. This review has demonstrated the large range of evaluation questions that have been addressed to date using QCA, including contributions to process evaluations of trials and for exploring the conditions leading to effectiveness (or not) in systematic reviews of interventions. There is potential for QCA to be more widely used in evaluative research, to identify the conditions under which interventions across contexts are implemented or not, and the configurations of conditions associated with effect or lack of evidence of effect. However, QCA will not be appropriate for all evaluations, and cannot be the only answer to addressing complex causality. For explanatory questions, the approach is most appropriate when there is a series of enough comparable cases with and without the outcome of interest, and where the researchers have detailed understanding of those cases, and conditions. To improve the credibility of findings from QCA for public health evidence users, we recommend that studies are reported with the usual attention to methodological transparency and data availability, with key details that allow readers to judge the credibility of causal configurations reported. If the use of QCA continues to expand, it may be useful to develop more comprehensive consensus guidelines for conduct and reporting.

Availability of data and materials

Full search strategies and extraction forms are available by request from the first author.

Abbreviations

Comparative Methods for Systematic Cross-Case Analysis

csQCA: crisp set QCA

fsQCA: fuzzy set QCA

mvQCA: multi-value QCA

MRC: Medical Research Council

QCA: Qualitative Comparative Analysis

RCT: randomised controlled trial

PA: physical activity

References

Green J, Roberts H, Petticrew M, Steinbach R, Goodman A, Jones A, et al. Integrating quasi-experimental and inductive designs in evaluation: a case study of the impact of free bus travel on public health. Evaluation. 2015;21(4):391–406. https://doi.org/10.1177/1356389015605205 .

Craig P, Katikireddi SV, Leyland A, Popham F. Natural experiments: an overview of methods, approaches, and contributions to public health intervention research. Annu Rev Public Health. 2017;38(1):39–56. https://doi.org/10.1146/annurev-publhealth-031816-044327 .

Shiell A, Hawe P, Gold L. Complex interventions or complex systems? Implications for health economic evaluation. BMJ. 2008;336(7656):1281–3. https://doi.org/10.1136/bmj.39569.510521.AD .

Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350(mar19 6):h1258. https://doi.org/10.1136/bmj.h1258 .

Pattyn V, Álamos-Concha P, Cambré B, Rihoux B, Schalembier B. Policy effectiveness through configurational and mechanistic lenses: lessons for concept development. J Comp Policy Anal Res Pract. 2020:1–18.

Byrne D. Evaluating complex social interventions in a complex world. Evaluation. 2013;19(3):217–28. https://doi.org/10.1177/1356389013495617 .

Gerrits L, Pagliarin S. Social and causal complexity in qualitative comparative analysis (QCA): strategies to account for emergence. Int J Soc Res Methodol. 2020:1–14. https://doi.org/10.1080/13645579.2020.1799636 .

Grant RL, Hood R. Complex systems, explanation and policy: implications of the crisis of replication for public health research. Crit Public Health. 2017;27(5):525–32. https://doi.org/10.1080/09581596.2017.1282603 .

Rutter H, Savona N, Glonti K, Bibby J, Cummins S, Finegood DT, et al. The need for a complex systems model of evidence for public health. Lancet. 2017;390(10112):2602–4. https://doi.org/10.1016/S0140-6736(17)31267-9 .

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16(1):95. https://doi.org/10.1186/s12916-018-1089-4 .

Craig P, Di Ruggiero E, Frohlich KL, Mykhalovskiy E and White M, on behalf of the Canadian Institutes of Health Research (CIHR)–National Institute for Health Research (NIHR) Context Guidance Authors Group. Taking account of context in population health intervention research: guidance for producers, users and funders of research. Southampton: NIHR Evaluation, Trials and Studies Coordinating Centre; 2018.

Paparini S, Green J, Papoutsi C, Murdoch J, Petticrew M, Greenhalgh T, et al. Case study research for better evaluations of complex interventions: rationale and challenges. BMC Med. 2020;18(1):301. https://doi.org/10.1186/s12916-020-01777-6 .

Ragin CC. The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley: University of California Press; 1987.

Ragin CC. Redesigning social inquiry: fuzzy sets and beyond. Chicago: The University of Chicago Press; 2008. https://doi.org/10.7208/chicago/9780226702797.001.0001 .

Befani B, Ledermann S, Sager F. Realistic evaluation and QCA: conceptual parallels and an empirical application. Evaluation. 2007;13(2):171–92. https://doi.org/10.1177/1356389007075222 .

Kane H, Lewis MA, Williams PA, Kahwati LC. Using qualitative comparative analysis to understand and quantify translation and implementation. Transl Behav Med. 2014;4(2):201–8. https://doi.org/10.1007/s13142-014-0251-6 .

Cronqvist L, Berg-Schlosser D. Chapter 4: Multi-Value QCA (mvQCA). In: Rihoux B, Ragin C, editors. Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. 2455 Teller Road, Thousand Oaks California 91320 United States: SAGE Publications, Inc.; 2009. p. 69–86. doi: https://doi.org/10.4135/9781452226569 .

Ragin CC. Using qualitative comparative analysis to study causal complexity. Health Serv Res. 1999;34(5 Pt 2):1225–39.

CAS   PubMed   PubMed Central   Google Scholar  

Legewie N. An introduction to applied data analysis with qualitative comparative analysis (QCA). Forum Qual Soc Res. 2013;14.  https://doi.org/10.17169/fqs-14.3.1961 .

Varone F, Rihoux B, Marx A. A new method for policy evaluation? In: Rihoux B, Grimm H, editors. Innovative comparative methods for policy analysis: beyond the quantitative-qualitative divide. Boston: Springer US; 2006. p. 213–36. https://doi.org/10.1007/0-387-28829-5_10 .

Chapter   Google Scholar  

Gerrits L, Verweij S. The evaluation of complex infrastructure projects: a guide to qualitative comparative analysis. Cheltenham: Edward Elgar Pub; 2018. https://doi.org/10.4337/9781783478422 .

Greckhamer T, Misangyi VF, Fiss PC. The two QCAs: from a small-N to a large-N set theoretic approach. In: Configurational Theory and Methods in Organizational Research. Emerald Group Publishing Ltd.; 2013. p. 49–75. https://pennstate.pure.elsevier.com/en/publications/the-two-qcas-from-a-small-n-to-a-large-n-set-theoretic-approach . Accessed 16 Apr 2021.

Rihoux B, Ragin CC. Configurational comparative methods: qualitative comparative analysis (QCA) and related techniques. SAGE; 2009, doi: https://doi.org/10.4135/9781452226569 .

Marx A. Crisp-set qualitative comparative analysis (csQCA) and model specification: benchmarks for future csQCA applications. Int J Mult Res Approaches. 2010;4(2):138–58. https://doi.org/10.5172/mra.2010.4.2.138 .

Marx A, Dusa A. Crisp-set qualitative comparative analysis (csQCA), contradictions and consistency benchmarks for model specification. Methodol Innov Online. 2011;6(2):103–48. https://doi.org/10.4256/mio.2010.0037 .

Hanckel B, Petticrew M, Thomas J, Green J. Protocol for a systematic review of the use of qualitative comparative analysis for evaluative questions in public health research. Syst Rev. 2019;8(1):252. https://doi.org/10.1186/s13643-019-1159-5 .

Shamseer L, Moher D, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ. 2015;349(1):g7647. https://doi.org/10.1136/bmj.g7647 .

EPPI-Reviewer 4.0: Software for research synthesis. UK: University College London; 2010.

Harting J, Peters D, Grêaux K, van Assema P, Verweij S, Stronks K, et al. Implementing multiple intervention strategies in Dutch public health-related policy networks. Health Promot Int. 2019;34(2):193–203. https://doi.org/10.1093/heapro/dax067 .

Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8(1):45. https://doi.org/10.1186/1471-2288-8-45 .

Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC methods Programme. 2006.

Wagemann C, Schneider CQ. Qualitative comparative analysis (QCA) and fuzzy-sets: agenda for a research approach and a data analysis technique. Comp Sociol. 2010;9:376–96.

Schneider CQ, Wagemann C. Set-theoretic methods for the social sciences: a guide to qualitative comparative analysis: Cambridge University Press; 2012. https://doi.org/10.1017/CBO9781139004244 .

Blackman T, Dunstan K. Qualitative comparative analysis and health inequalities: investigating reasons for differential Progress with narrowing local gaps in mortality. J Soc Policy. 2010;39(3):359–73. https://doi.org/10.1017/S0047279409990675 .

Blackman T, Wistow J, Byrne D. A Qualitative Comparative Analysis of factors associated with trends in narrowing health inequalities in England. Soc Sci Med 1982. 2011;72:1965–74.

Blackman T, Wistow J, Byrne D. Using qualitative comparative analysis to understand complex policy problems. Evaluation. 2013;19(2):126–40. https://doi.org/10.1177/1356389013484203 .

Glatman-Freedman A, Cohen M-L, Nichols KA, Porges RF, Saludes IR, Steffens K, et al. Factors affecting the introduction of new vaccines to poor nations: a comparative study of the haemophilus influenzae type B and hepatitis B vaccines. PLoS One. 2010;5(11):e13802. https://doi.org/10.1371/journal.pone.0013802 .

Article   CAS   PubMed   PubMed Central   Google Scholar  

Ford EW, Duncan WJ, Ginter PM. Health departments’ implementation of public health’s core functions: an assessment of health impacts. Public Health. 2005;119(1):11–21. https://doi.org/10.1016/j.puhe.2004.03.002 .

Article   CAS   PubMed   Google Scholar  

Lucidarme S, Cardon G, Willem A. A comparative study of health promotion networks: configurations of determinants for network effectiveness. Public Manag Rev. 2016;18(8):1163–217. https://doi.org/10.1080/14719037.2015.1088567 .

Melendez-Torres GJ, Sutcliffe K, Burchett HED, Rees R, Richardson M, Thomas J. Weight management programmes: re-analysis of a systematic review to identify pathways to effectiveness. Health Expect Int J Public Particip Health Care Health Policy. 2018;21:574–84.

CAS   Google Scholar  

Thomas J, O’Mara-Eves A, Brunton G. Using qualitative comparative analysis (QCA) in systematic reviews of complex interventions: a worked example. Syst Rev. 2014;3(1):67. https://doi.org/10.1186/2046-4053-3-67 .

Fernald DH, Simpson MJ, Nease DE, Hahn DL, Hoffmann AE, Michaels LC, et al. Implementing community-created self-management support tools in primary care practices: multimethod analysis from the INSTTEPP study. J Patient-Centered Res Rev. 2018;5(4):267–75. https://doi.org/10.17294/2330-0698.1634 .

Harris K, Kneale D, Lasserson TJ, McDonald VM, Grigg J, Thomas J. School-based self-management interventions for asthma in children and adolescents: a mixed methods systematic review. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD011651.pub2 .

Kahwati LC, Lewis MA, Kane H, Williams PA, Nerz P, Jones KR, et al. Best practices in the veterans health Administration’s MOVE! Weight management program. Am J Prev Med. 2011;41(5):457–64. https://doi.org/10.1016/j.amepre.2011.06.047 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) to evaluate a public health policy initiative in the north east of England. Polic Soc. 2013;32(4):289–301. https://doi.org/10.1016/j.polsoc.2013.10.002 .

Warren J, Wistow J, Bambra C. Applying qualitative comparative analysis (QCA) in public health: a case study of a health improvement service for long-term incapacity benefit recipients. J Public Health. 2014;36(1):126–33. https://doi.org/10.1093/pubmed/fdt047 .

Article   CAS   Google Scholar  

Brunton G, O’Mara-Eves A, Thomas J. The “active ingredients” for successful community engagement with disadvantaged expectant and new mothers: a qualitative comparative analysis. J Adv Nurs. 2014;70(12):2847–60. https://doi.org/10.1111/jan.12441 .

McGowan VJ, Wistow J, Lewis SJ, Popay J, Bambra C. Pathways to mental health improvement in a community-led area-based empowerment initiative: evidence from the big local ‘communities in control’ study. England J Public Health. 2019;41(4):850–7. https://doi.org/10.1093/pubmed/fdy192 .

Parrott JS, Henry B, Thompson KL, Ziegler J, Handu D. Managing Complexity in Evidence Analysis: A Worked Example in Pediatric Weight Management. J Acad Nutr Diet. 2018;118:1526–1542.e3.

Kien C, Grillich L, Nussbaumer-Streit B, Schoberberger R. Pathways leading to success and non-success: a process evaluation of a cluster randomized physical activity health promotion program applying fuzzy-set qualitative comparative analysis. BMC Public Health. 2018;18(1):1386. https://doi.org/10.1186/s12889-018-6284-x .

Lubold AM. The effect of family policies and public health initiatives on breastfeeding initiation among 18 high-income countries: a qualitative comparative analysis research design. Int Breastfeed J. 2017;12(1):34. https://doi.org/10.1186/s13006-017-0122-0 .

Bianchi F, Garnett E, Dorsel C, Aveyard P, Jebb SA. Restructuring physical micro-environments to reduce the demand for meat: a systematic review and qualitative comparative analysis. Lancet Planet Health. 2018;2(9):e384–97. https://doi.org/10.1016/S2542-5196(18)30188-8 .

Bianchi F, Dorsel C, Garnett E, Aveyard P, Jebb SA. Interventions targeting conscious determinants of human behaviour to reduce the demand for meat: a systematic review with qualitative comparative analysis. Int J Behav Nutr Phys Act. 2018;15(1):102. https://doi.org/10.1186/s12966-018-0729-6 .

Hartmann-Boyce J, Bianchi F, Piernas C, Payne Riches S, Frie K, Nourse R, et al. Grocery store interventions to change food purchasing behaviors: a systematic review of randomized controlled trials. Am J Clin Nutr. 2018;107(6):1004–16. https://doi.org/10.1093/ajcn/nqy045 .

Burchett HED, Sutcliffe K, Melendez-Torres GJ, Rees R, Thomas J. Lifestyle weight management programmes for children: a systematic review using qualitative comparative analysis to identify critical pathways to effectiveness. Prev Med. 2018;106:1–12. https://doi.org/10.1016/j.ypmed.2017.08.025 .

Chiappone A. Technical assistance and changes in nutrition and physical activity practices in the National Early Care and education learning Collaboratives project, 2015–2016. Prev Chronic Dis. 2018;15. https://doi.org/10.5888/pcd15.170239 .

Kane H, Hinnant L, Day K, Council M, Tzeng J, Soler R, et al. Pathways to program success: a qualitative comparative analysis (QCA) of communities putting prevention to work case study programs. J Public Health Manag Pract JPHMP. 2017;23(2):104–11. https://doi.org/10.1097/PHH.0000000000000449 .

Roberts MC, Murphy T, Moss JL, Wheldon CW, Psek W. A qualitative comparative analysis of combined state health policies related to human papillomavirus vaccine uptake in the United States. Am J Public Health. 2018;108(4):493–9. https://doi.org/10.2105/AJPH.2017.304263 .

Breuer E, Subba P, Luitel N, Jordans M, Silva MD, Marchal B, et al. Using qualitative comparative analysis and theory of change to unravel the effects of a mental health intervention on service utilisation in Nepal. BMJ Glob Health. 2018;3(6):e001023. https://doi.org/10.1136/bmjgh-2018-001023 .

Rihoux B, Álamos-Concha P, Bol D, Marx A, Rezsöhazy I. From niche to mainstream method? A comprehensive mapping of QCA applications in journal articles from 1984 to 2011. Polit Res Q. 2013;66:175–84.

Rihoux B, Rezsöhazy I, Bol D. Qualitative comparative analysis (QCA) in public policy analysis: an extensive review. Ger Policy Stud. 2011;7:9–82.

Plancikova D, Duric P, O’May F. High-income countries remain overrepresented in highly ranked public health journals: a descriptive analysis of research settings and authorship affiliations. Crit Public Health 2020;0:1–7, DOI: https://doi.org/10.1080/09581596.2020.1722313 .

Hoffmann TC, Glasziou PP, Boutron I, Milne R, Perera R, Moher D, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348(mar07 3):g1687. https://doi.org/10.1136/bmj.g1687 .

Fiss PC, Sharapov D, Cronqvist L. Opposites attract? Opportunities and challenges for integrating large-N QCA and econometric analysis. Polit Res Q. 2013;66:191–8.

Blackman T. Can smoking cessation services be better targeted to tackle health inequalities? Evidence from a cross-sectional study. Health Educ J. 2008;67(2):91–101. https://doi.org/10.1177/0017896908089388 .

Haynes P, Banks L, Hill M. Social networks amongst older people in OECD countries: a qualitative comparative analysis. J Int Comp Soc Policy. 2013;29(1):15–27. https://doi.org/10.1080/21699763.2013.802988 .

Rioja EC. Valero-Moreno S, Giménez-Espert M del C, Prado-Gascó V. the relations of quality of life in patients with lupus erythematosus: regression models versus qualitative comparative analysis. J Adv Nurs. 2019;75(7):1484–92. https://doi.org/10.1111/jan.13957 .

Dy SM. Garg Pushkal, Nyberg Dorothy, Dawson Patricia B., Pronovost Peter J., Morlock Laura, et al. critical pathway effectiveness: assessing the impact of patient, hospital care, and pathway characteristics using qualitative comparative analysis. Health Serv Res. 2005;40(2):499–516. https://doi.org/10.1111/j.1475-6773.2005.0r370.x .

MELINDER KA, ANDERSSON R. The impact of structural factors on the injury rate in different European countries. Eur J Pub Health. 2001;11(3):301–8. https://doi.org/10.1093/eurpub/11.3.301 .

Saltkjel T, Holm Ingelsrud M, Dahl E, Halvorsen K. A fuzzy set approach to economic crisis, austerity and public health. Part II: How are configurations of crisis and austerity related to changes in population health across Europe? Scand J Public Health. 2017;45(18_suppl):48–55.

Baumgartner M, Thiem A. Often trusted but never (properly) tested: evaluating qualitative comparative analysis. Sociol Methods Res. 2020;49(2):279–311. https://doi.org/10.1177/0049124117701487 .

Tanner S. QCA is of questionable value for policy research. Polic Soc. 2014;33(3):287–98. https://doi.org/10.1016/j.polsoc.2014.08.003 .

Mackenbach JP. Tackling inequalities in health: the need for building a systematic evidence base. J Epidemiol Community Health. 2003;57(3):162. https://doi.org/10.1136/jech.57.3.162 .

Stern E, Stame N, Mayne J, Forss K, Davies R, Befani B. Broadening the range of designs and methods for impact evaluations. Technical report. London: DfiD; 2012.

Pattyn V. Towards appropriate impact evaluation methods. Eur J Dev Res. 2019;31(2):174–9. https://doi.org/10.1057/s41287-019-00202-w .

Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health. 2003;57(7):527–9. https://doi.org/10.1136/jech.57.7.527 .

Download references

Acknowledgements

The authors would like to thank and acknowledge the support of Sara Shaw, PI of MR/S014632/1 and the rest of the Triple C project team, the experts who were consulted on the final list of included studies, and the reviewers who provided helpful feedback on the original submission.

Funding

This study was funded by MRC: MR/S014632/1 ‘Case study, context and complex interventions (Triple C): development of guidance and publication standards to support case study research’. The funder played no part in the conduct or reporting of the study. JG is supported by a Wellcome Trust Centre grant 203109/Z/16/Z.

Author information

Authors and Affiliations

Institute for Culture and Society, Western Sydney University, Sydney, Australia

Benjamin Hanckel

Department of Public Health, Environments and Society, LSHTM, London, UK

Mark Petticrew

UCL Institute of Education, University College London, London, UK

James Thomas

Wellcome Centre for Cultures & Environments of Health, University of Exeter, Exeter, UK

Judith Green


Contributions

BH - research design, data acquisition, data extraction and coding, data interpretation, paper drafting; JT – research design, data interpretation, contributing to paper; MP – funding acquisition, research design, data interpretation, contributing to paper; JG – funding acquisition, research design, data extraction and coding, data interpretation, paper drafting. All authors approved the final version.

Corresponding author

Correspondence to Judith Green.

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Competing interests

All authors declare they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Example search strategy.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Hanckel, B., Petticrew, M., Thomas, J. et al. The use of Qualitative Comparative Analysis (QCA) to address causality in complex systems: a systematic review of research on public health interventions. BMC Public Health 21 , 877 (2021). https://doi.org/10.1186/s12889-021-10926-2


Received : 03 February 2021

Accepted : 22 April 2021

Published : 07 May 2021

DOI : https://doi.org/10.1186/s12889-021-10926-2


  • Public health
  • Intervention
  • Systematic review

BMC Public Health

ISSN: 1471-2458


PhD Thesis Bangalore


Guide to ‘causal-comparative’ research design: Identifying causative relationship between an independent & dependent variable


Most often, when a researcher wants to compare groups by manipulating variables under controlled conditions, an experimental (causal) design is used. In a non-experimental setting, by contrast, a researcher who wants to identify the causes or consequences of differences that already exist between groups of individuals will typically deploy a causal-comparative design.

Causal-comparative research, also known as ex post facto (after the fact) research, is a design that attempts to identify a causative relationship between an independent variable and a dependent variable. It must be noted that the relationship between the independent and dependent variables is a suggested one, not a proven one, because the researcher does not have complete control over the independent variable.

This method seeks to establish causal relationships between events and circumstances; simply put, it sets out to determine the reasons, or causes, for specific occurrences or non-occurrences. Grounded in Mill’s canons of agreement and difference, causal-comparative research involves comparison, in contrast to correlational studies, which examine relationships.

For example, you may wish to compare the body composition of individuals who train with exercise machines versus individuals who train only with free weights. Here you do not manipulate any variables; you only investigate the impact of exercise machines and free weights on body composition. However, since factors such as training programme, diet and aerobic conditioning also affect body composition, a causal-comparative study will be scrutinised to determine how these other factors were controlled.

This research design is further segregated into:

  • Retrospective causal-comparative research – the researcher investigates a question after the effects have already occurred, aiming to determine whether one variable may have influenced another.
  • Prospective causal-comparative research – the researcher begins with the suspected causes and then investigates the possible effects of a condition.

How to conduct causal-comparative research? 

The basic outline for performing this type of research is similar to that of other research designs. The steps involved are:

  • Topic selection – identify and define a specific phenomenon of interest and consider its possible causes or consequences. This method involves selecting two groups that differ on a particular variable of interest.
  • Review the literature – assess the literature to identify the independent and dependent variables for the study. This step also helps you identify extraneous variables that could contribute to the cause-effect relationship.
  • Develop a hypothesis – the hypothesis should define the expected effect of the independent variable on the dependent variable.
  • Select comparison groups – choose groups that differ with regard to the independent variable. To control extraneous variables and reduce their impact, you can use the matching technique to find groups that differ mainly in the presence of the independent variable.
  • Choose measurement instruments and collect data – in this type of research, the researcher does not administer a treatment protocol. Data are gathered from surveys, interviews, records, etc., so that comparisons can be made between the groups.
  • Analyse the data – report a frequency or mean for each group using descriptive statistics, then test for a significant difference between the groups using inferential statistics (e.g. a t-test or chi-square test).
  • Interpret the results – be careful about stating that the independent variable causes the dependent variable. Because extraneous variables are present and participants were not randomly assigned, it is safer to state that the results suggest a possible cause or effect.
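The data-analysis step above can be sketched in Python. Everything here is an illustrative assumption, not taken from the text: the two training groups, the invented body-fat values, and the choice of SciPy's `ttest_ind` as the inferential test.

```python
import numpy as np
from scipy import stats

# Hypothetical body-fat percentages for two pre-existing groups
# (machine-trained vs. free-weight-trained); all values invented.
machines = np.array([18.2, 21.5, 19.8, 22.1, 20.4, 17.9, 23.0, 19.5])
free_weights = np.array([16.8, 19.2, 18.1, 17.5, 20.0, 16.2, 18.8, 17.9])

# Descriptive statistics: a mean (and standard deviation) per group.
for name, group in [("machines", machines), ("free weights", free_weights)]:
    print(f"{name}: mean={group.mean():.1f}, sd={group.std(ddof=1):.1f}")

# Inferential statistics: independent-samples t-test for a
# significant difference between the group means.
t_stat, p_value = stats.ttest_ind(machines, free_weights)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because the groups are pre-existing rather than randomly assigned, a small p-value here would still only suggest, not prove, an effect of the training method.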


So, when should one consider using this research design? 

Typically, a causal-comparative research design can be considered as an alternative to an experimental design because it is feasible, affordable and comparatively easy to conduct.

However, unlike in experimental research, the independent variables in a causal-comparative design cannot be manipulated. For example, if you want to investigate whether ethnicity affects self-esteem, you cannot manipulate the participants’ ethnicity. The independent variable here is already determined, so some other method must be used to establish the cause.

Threats to the internal validity of the research 

In this type of research, participants are not randomly selected and assigned to groups, which poses a threat to internal validity. Another threat to internal validity is the researcher’s inability to manipulate the independent variable.

To counter these threats and strengthen the research, use selection strategies such as matching, analysis of covariance (ANCOVA) or comparisons of homogeneous subgroups.
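As a minimal sketch of the matching strategy mentioned above, the stdlib-only example below pairs each participant with the member of the other group closest on a single covariate and compares outcomes within pairs. The "age" covariate, the group effect and all numbers are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical (age, body-composition score) records for two
# pre-existing groups; the outcome depends on age plus a group effect.
machines = [(age, 20.0 + 0.1 * age + random.gauss(0, 1))
            for age in range(20, 40)]
free_weights = [(age + 0.5, 18.0 + 0.1 * (age + 0.5) + random.gauss(0, 1))
                for age in range(20, 40)]

# Matching: pair each machine-trained participant with the
# free-weight participant closest in age, so that the compared
# records are similar on the extraneous variable (age).
diffs = []
for age_m, y_m in machines:
    age_f, y_f = min(free_weights, key=lambda rec: abs(rec[0] - age_m))
    diffs.append(y_m - y_f)

mean_diff = sum(diffs) / len(diffs)
print(f"mean matched difference: {mean_diff:.2f}")
```

For simplicity this matches with replacement (a control record can be reused); ANCOVA or restricting both groups to a homogeneous subgroup (e.g. one age band) are the alternatives named in the text.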

Causal-comparative design includes basic features such as:

  • Selection of two comparison groups (an ‘experimental’ and a ‘control’ group) to be studied
  • Comparisons between pre-existing groups with regard to the variables of interest
  • Study of variables that cannot be manipulated for practical or ethical reasons
  • Reduced time and cost

Although this approach gives you latitude to interpret the data and reach your own conclusion, you may fall into the post hoc fallacy when inferring the relationship. Pay extra attention, therefore, when moving from an observed difference to a causal conclusion.




  22. causal comparative study: Topics by Science.gov

    2013-01-01. The purpose of this quantitative causal-comparative study was to investigate the relationship between the instructional effects of the interactive whiteboard and students' proficiency levels in eighth-grade science as evidenced by the state FCAT scores. A total of 46 eighth-grade science teachers in a South Florida public school ...
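The core of a causal-comparative analysis is comparing a dependent variable across two or more pre-existing, non-randomized groups after the fact. A minimal sketch of such a comparison, using Welch's t statistic and Cohen's d effect size (the `intervention` and `comparison` score lists are hypothetical data, not taken from any study cited here):

```python
from statistics import mean, stdev

# Hypothetical data: reading scores for two pre-existing (non-randomized)
# groups, e.g. students who did vs. did not receive an intervention.
intervention = [78, 85, 90, 74, 88, 92, 81, 79]
comparison = [70, 75, 80, 68, 72, 77, 74, 71]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

def cohens_d(a, b):
    """Cohen's d effect size using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5

t = welch_t(intervention, comparison)
d = cohens_d(intervention, comparison)
print(f"Welch t = {t:.2f}, Cohen's d = {d:.2f}")
```

Because group membership was not randomly assigned, a large t or d here indicates a suggested relationship, not a proven causal effect — exactly the limitation of non-manipulated independent variables noted above.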