
Chapter 2. Research Design

Getting started.

When I teach undergraduates qualitative research methods, the final product of the course is a “research proposal” that incorporates all they have learned and enlists the knowledge they have learned about qualitative research methods in an original design that addresses a particular research question. I highly recommend you think about designing your own research study as you progress through this textbook. Even if you don’t have a study in mind yet, it can be a helpful exercise as you progress through the course. But how to start? How can one design a research study before they even know what research looks like? This chapter will serve as a brief overview of the research design process to orient you to what will be coming in later chapters. Think of it as a “skeleton” of what you will read in more detail in later chapters. Ideally, you will read this chapter both now (in sequence) and later during your reading of the remainder of the text. Do not worry if you have questions the first time you read this chapter. Many things will become clearer as the text advances and as you gain a deeper understanding of all the components of good qualitative research. This is just a preliminary map to get you on the right road.


Research Design Steps

Before you even get started, you will need to have a broad topic of interest in mind.[1] In my experience, students can confuse this broad topic with the actual research question, so it is important to clearly distinguish the two. And the place to start is the broad topic. It might be, as was the case with me, working-class college students. But what about working-class college students? What’s it like to be one? Why are there so few compared to others? How do colleges assist (or fail to assist) them? What interested me was something I could barely articulate at first and went something like this: “Why was it so difficult and lonely to be me?” And by extension, “Did others share this experience?”

Once you have a general topic, reflect on why this is important to you. Sometimes we connect with a topic and we don’t really know why. Even if you are not willing to share the real underlying reason you are interested in a topic, it is important that you know the deeper reasons that motivate you. Otherwise, it is quite possible that at some point during the research, you will find yourself turned around facing the wrong direction. I have seen it happen many times. The reason is that the research question is not the same thing as the general topic of interest, and if you don’t know the reasons for your interest, you are likely to design a study answering a research question that is beside the point—to you, at least. And this means you will be much less motivated to carry your research to completion.

Researcher Note

Why do you employ qualitative research methods in your area of study? What are the advantages of qualitative research methods for studying mentorship?

Qualitative research methods are a huge opportunity to increase access, equity, inclusion, and social justice. Qualitative research allows us to engage and examine the uniquenesses/nuances within minoritized and dominant identities and our experiences with these identities. Qualitative research allows us to explore a specific topic, and through that exploration, we can link history to experiences and look for patterns or offer up a unique phenomenon. There’s such beauty in being able to tell a particular story, and qualitative research is a great mode for that! For our work, we examined the relationships we typically use the term mentorship for but didn’t feel that was quite the right word. Qualitative research allowed us to pick apart what we did and how we engaged in our relationships, which then allowed us to more accurately describe what was unique about our mentorship relationships, which we ultimately named liberationships (McAloney and Long 2021). Qualitative research gave us the means to explore, process, and name our experiences; what a powerful tool!

How do you come up with ideas for what to study (and how to study it)? Where did you get the idea for studying mentorship?

Coming up with ideas for research, for me, is kind of like Googling a question I have, not finding enough information, and then deciding to dig a little deeper to get the answer. The idea to study mentorship actually came up in conversation with my mentorship triad. We were talking in one of our meetings about our relationship—kind of meta, huh? We discussed how we felt that mentorship was not quite the right term for the relationships we had built. One of us asked what was different about our relationships and mentorship. This all happened when I was taking an ethnography course. During the next session of class, we were discussing auto- and duoethnography, and it hit me—let’s explore our version of mentorship, which we later went on to name liberationships (McAloney and Long 2021). The idea and questions came out of being curious and wanting to find an answer. As I continue to research, I see opportunities in questions I have about my work or during conversations that, in our search for answers, end up exposing gaps in the literature. If I can’t find the answer already out there, I can study it.

—Kim McAloney, PhD, College Student Services Administration Ecampus coordinator and instructor

When you have a better idea of why you are interested in what it is that interests you, you may be surprised to learn that the obvious approaches to the topic are not the only ones. For example, let’s say you think you are interested in preserving coastal wildlife. And as a social scientist, you are interested in policies and practices that affect the long-term viability of coastal wildlife, especially around fishing communities. It would be natural then to consider designing a research study around fishing communities and how they manage their ecosystems. But when you really think about it, you realize that what interests you the most is how people whose livelihoods depend on a particular resource act in ways that deplete that resource. Or, even deeper, you contemplate the puzzle, “How do people justify actions that damage their surroundings?” Now, there are many ways to design a study that gets at that broader question, and not all of them are about fishing communities, although that is certainly one way to go. Maybe you could design an interview-based study that includes and compares loggers, fishers, and desert golfers (those who golf in arid lands that require a great deal of wasteful irrigation). Or design a case study around one particular example where resources were completely used up by a community. Without knowing what it is you are really interested in, what motivates your interest in a surface phenomenon, you are unlikely to come up with the appropriate research design.

These first stages of research design are often the most difficult, but have patience. Taking the time to consider why you are going to go through a lot of trouble to get answers will prevent a lot of wasted energy in the future.

There are distinct reasons for pursuing particular research questions, and it is helpful to distinguish between them. First, you may be personally motivated. This is probably the most important and the most often overlooked. What is it about the social world that sparks your curiosity? What bothers you? What answers do you need in order to keep living? For me, I knew I needed to get a handle on what higher education was for before I kept going at it. I needed to understand why I felt so different from my peers and whether this whole “higher education” thing was “for the likes of me” before I could complete my degree. That is the personal motivation question. Your personal motivation might also be political in nature, in that you want to change the world in a particular way. It’s all right to acknowledge this. In fact, it is better to acknowledge it than to hide it.

There are also academic and professional motivations for a particular study. If you are an absolute beginner, these may be difficult to find. We’ll talk more about this when we discuss reviewing the literature. Simply put, you are probably not the only person in the world to have thought about this question or issue and those related to it. So how does your interest area fit into what others have studied? Perhaps there is a good study out there of fishing communities, but no one has quite asked the “justification” question. You are motivated to address this to “fill the gap” in our collective knowledge. And maybe you are really not at all sure of what interests you, but you do know that [insert your topic] interests a lot of people, so you would like to work in this area too. You want to be involved in the academic conversation. That is a professional motivation and a very important one to articulate.

Practical and strategic motivations are a third kind. Perhaps you want to encourage people to take better care of the natural resources around them. If this is also part of your motivation, you will want to design your research project in a way that might have an impact on how people behave in the future. There are many ways to do this, one of which is using qualitative research methods rather than quantitative research methods, as the findings of qualitative research are often easier to communicate to a broader audience than the results of quantitative research. You might even be able to engage the community you are studying in the collecting and analyzing of data, something taboo in quantitative research but actively embraced and encouraged by qualitative researchers. But there are other practical reasons, such as getting “done” with your research in a certain amount of time or having access (or no access) to certain information. There is nothing wrong with considering constraints and opportunities when designing your study. Or maybe one of the practical or strategic goals is about learning competence in this area so that you can demonstrate the ability to conduct interviews and focus groups with future employers. Keeping that in mind will help shape your study and prevent you from getting sidetracked using a technique that you are less invested in learning about.

STOP HERE for a moment

I recommend you write a paragraph (at least) explaining your aims and goals. Include a sentence about each of the following: personal/political goals, professional/academic goals, and practical/strategic goals. Think through how all of the goals are related and can be achieved by this particular research study. If they can’t, have a rethink. Perhaps this is not the best way to go about it.

You will also want to be clear about the purpose of your study. “Wait, didn’t we just do this?” you might ask. No! Your goals are not the same as the purpose of the study, although they are related. You can think about purpose lying on a continuum from “theory” to “action” (figure 2.1). Sometimes you are doing research to discover new knowledge about the world, while other times you are doing a study because you want to measure an impact or make a difference in the world.

Figure 2.1. Purpose types along the theory-to-action continuum: Basic Research, Applied Research, Summative Evaluation, Formative Evaluation, Action Research

Basic research involves research that is done for the sake of “pure” knowledge—that is, knowledge that, at least at this moment in time, may not have any apparent use or application. Often, and this is very important, knowledge of this kind is later found to be extremely helpful in solving problems. So one way of thinking about basic research is that it is knowledge for which no use is yet known but will probably one day prove to be extremely useful. If you are doing basic research, you do not need to argue its usefulness, as the whole point is that we just don’t know yet what this might be.

Researchers engaged in basic research want to understand how the world operates. They are interested in investigating a phenomenon to get at the nature of reality with regard to that phenomenon. The basic researcher’s purpose is to understand and explain (Patton 2002:215).

Basic research is interested in generating and testing hypotheses about how the world works. Grounded Theory is one approach to qualitative research methods that exemplifies basic research (see chapter 4). Most academic journal articles publish basic research findings. If you are working in academia (e.g., writing your dissertation), the default expectation is that you are conducting basic research.

Applied research in the social sciences is research that addresses human and social problems. Unlike basic research, the researcher has expectations that the research will help contribute to resolving a problem, if only by identifying its contours, history, or context. From my experience, most students have this as their baseline assumption about research. Why do a study if not to make things better? But this is a common mistake. Students and their committee members are often working with default assumptions here—the former thinking about applied research as their purpose, the latter thinking about basic research: “The purpose of applied research is to contribute knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment. While in basic research the source of questions is the tradition within a scholarly discipline, in applied research the source of questions is in the problems and concerns experienced by people and by policymakers” (Patton 2002:217).

Applied research is less geared toward theory in two ways. First, its questions do not derive from previous literature. For this reason, applied research studies have much more limited literature reviews than those found in basic research (although they make up for this by having much more “background” about the problem). Second, it does not generate theory in the same way as basic research does. The findings of an applied research project may not be generalizable beyond the boundaries of this particular problem or context. The findings are more limited. They are useful now but may be less useful later. This is why basic research remains the default “gold standard” of academic research.

Evaluation research is research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems. We already know the problems, and someone has already come up with solutions. There might be a program, say, for first-generation college students on your campus. Does this program work? Are first-generation students who participate in the program more likely to graduate than those who do not? These are the types of questions addressed by evaluation research. There are two types of research within this broader frame, one more action-oriented than the other. In summative evaluation, an overall judgment about the effectiveness of a program or policy is made. Should we continue our first-gen program? Is it a good model for other campuses? Because the purpose of such summative evaluation is to measure success and to determine whether this success is scalable (capable of being generalized beyond the specific case), quantitative data is more often used than qualitative data. In our example, we might have “outcomes” data for thousands of students, and we might run various tests to determine if the better outcomes of those in the program are statistically significant so that we can generalize the findings and recommend similar programs elsewhere. Qualitative data in the form of focus groups or interviews can then be used for illustrative purposes, providing more depth to the quantitative analyses. In contrast, formative evaluation attempts to improve a program or policy (to help “form” or shape its effectiveness). Formative evaluations rely more heavily on qualitative data—case studies, interviews, focus groups. The findings are meant not to generalize beyond the particular but to improve this program.

If you are a student seeking to improve your qualitative research skills and you do not care about generating basic research, formative evaluation studies might be an attractive option to pursue, as there are always local programs that need evaluation and suggestions for improvement. Again, be very clear about your purpose when talking through your research proposal with your committee.

Action research takes a further step beyond evaluation, even formative evaluation, to being part of the solution itself. This is about as far from basic research as one could get and definitely falls beyond the scope of “science,” as conventionally defined. The distinction between action and research is blurry, the research methods are often in constant flux, and the only “findings” are specific to the problem or case at hand and often are findings about the process of intervention itself. Rather than evaluate a program as a whole, action research often seeks to change and improve some particular aspect that may not be working—maybe there is not enough diversity in an organization or maybe women’s voices are muted during meetings and the organization wonders why and would like to change this. In a further step, participatory action research, those women would become part of the research team, attempting to amplify their voices in the organization through participation in the action research. As action research employs methods that involve people in the process, focus groups are quite common.

If you are working on a thesis or dissertation, chances are your committee will expect you to be contributing to fundamental knowledge and theory (basic research). If your interests lie more toward the action end of the continuum, however, it is helpful to talk to your committee about this before you get started. Knowing your purpose in advance will help avoid misunderstandings during the later stages of the research process!

The Research Question

Once you have written your paragraph and clarified your purpose and truly know that this study is the best study for you to be doing right now, you are ready to write and refine your actual research question. Know that research questions are often moving targets in qualitative research and can be refined up to the very end of data collection and analysis. But you do have to have a working research question at all stages. This is your “anchor” when you get lost in the data. What are you addressing? What are you looking at and why? Your research question guides you through the thicket. It is common to have a whole host of questions about a phenomenon or case, both at the outset and throughout the study, but you should be able to pare it down to no more than two or three sentences when asked. These sentences should both clarify the intent of the research and explain why this is an important question to answer. More on refining your research question can be found in chapter 4.

Chances are, you will have already done some prior reading before coming up with your interest and your questions, but you may not have conducted a systematic literature review. This is the next crucial stage to be completed before venturing further. You don’t want to start collecting data and then realize that someone has already beaten you to the punch. A review of the literature that is already out there will let you know (1) if others have already done the study you are envisioning; (2) if others have done similar studies, which can help you out; and (3) what ideas or concepts are out there that can help you frame your study and make sense of your findings. More on literature reviews can be found in chapter 9.

In addition to reviewing the literature for similar studies to what you are proposing, it can be extremely helpful to find a study that inspires you. This may have absolutely nothing to do with the topic you are interested in but is written so beautifully or organized so interestingly or otherwise speaks to you in such a way that you want to post it somewhere to remind you of what you want to be doing. You might not understand this in the early stages—why would you find a study that has nothing to do with the one you are doing helpful? But trust me, when you are deep into analysis and writing, having an inspirational model in view can help you push through. If you are motivated to do something that might change the world, you probably have read something somewhere that inspired you. Go back to that original inspiration, read it carefully, and see how its authors managed to convey the passion that you so appreciate.

At this stage, you are still just getting started. There are a lot of things to do before setting forth to collect data! You’ll want to consider and choose a research tradition and a set of data-collection techniques that both help you answer your research question and match all your aims and goals. For example, if you really want to help migrant workers speak for themselves, you might draw on feminist theory and participatory action research models. Chapters 3 and 4 will provide you with more information on epistemologies and approaches.

Next, you have to clarify your “units of analysis.” What is the level at which you are focusing your study? Often, the unit in qualitative research methods is individual people, or “human subjects.” But your units of analysis could just as well be organizations (colleges, hospitals) or programs or even whole nations. Think about what it is you want to be saying at the end of your study—are the insights you are hoping to make about people or about organizations or about something else entirely? A unit of analysis can even be a historical period! Every unit of analysis will call for a different kind of data collection and analysis and will produce different kinds of “findings” at the conclusion of your study. [2]

Regardless of what unit of analysis you select, you will probably have to consider the “human subjects” involved in your research. [3] Who are they? What interactions will you have with them—that is, what kind of data will you be collecting? Before answering these questions, define your population of interest and your research setting. Use your research question to help guide you.

Let’s use an example from a real study. In Geographies of Campus Inequality , Benson and Lee (2020) list three related research questions: “(1) What are the different ways that first-generation students organize their social, extracurricular, and academic activities at selective and highly selective colleges? (2) how do first-generation students sort themselves and get sorted into these different types of campus lives; and (3) how do these different patterns of campus engagement prepare first-generation students for their post-college lives?” (3).

Note that we are jumping into this a bit late, after Benson and Lee have described previous studies (the literature review) and what is known about first-generation college students and what is not known. They want to know about differences within this group, and they are interested in those attending certain kinds of colleges because those colleges will be sites where academic and extracurricular pressures compete. That is the context for their three related research questions. What is the population of interest here? First-generation college students . What is the research setting? Selective and highly selective colleges . But a host of questions remain. Which students in the real world, which colleges? What about gender, race, and other identity markers? Will the students be asked questions? Are the students still in college, or will they be asked about what college was like for them? Will they be observed? Will they be shadowed? Will they be surveyed? Will they be asked to keep diaries of their time in college? How many students? How many colleges? For how long will they be observed?

Recommendation

Take a moment and write down suggestions for Benson and Lee before continuing on to what they actually did.

Have you written down your own suggestions? Good. Now let’s compare those with what they actually did. Benson and Lee drew on two sources of data: in-depth interviews with sixty-four first-generation students and survey data from a preexisting national survey of students at twenty-eight selective colleges. Let’s ignore the survey for our purposes here and focus on those interviews. The interviews were conducted between 2014 and 2016 at a single selective college, “Hilltop” (a pseudonym). They employed a “purposive” sampling strategy to ensure an equal number of male-identifying and female-identifying students as well as equal numbers of White, Black, and Latinx students. Each student was interviewed once. Hilltop is a selective liberal arts college in the northeast that enrolls about three thousand students.

How did your suggestions match up to those actually used by the researchers in this study? Were your suggestions perhaps too ambitious? Beginning qualitative researchers often make that mistake. You want a research design that is both effective (it matches your question and goals) and doable. You will never be able to collect data from your entire population of interest (unless your research question is so narrow as to be relevant to very few people!), so you will need to come up with a good sample. Define the criteria for this sample, as Benson and Lee did when deciding to interview an equal number of students by gender and race categories. Define the criteria for your sample setting too: treating Hilltop as typical of selective colleges was a research choice made by Benson and Lee. For more on sampling and sampling choices, see chapter 5.

Benson and Lee chose to employ interviews. If you also would like to include interviews, you have to think about what will be asked in them. Most interview-based research involves an interview guide, a set of questions or question areas that will be asked of each participant. The research question helps you create a relevant interview guide. You want to ask questions whose answers will provide insight into your research question. Again, your research question is the anchor you will continually come back to as you plan for and conduct your study. It may be that once you begin interviewing, you find that people are telling you something totally unexpected, and this makes you rethink your research question. That is fine. Then you have a new anchor. But you always have an anchor. More on interviewing can be found in chapter 11.

Let’s imagine Benson and Lee also observed college students as they went about doing the things college students do, both in the classroom and in the clubs and social activities in which they participate. They would have needed a plan for this. Would they sit in on classes? Which ones and how many? Would they attend club meetings and sports events? Which ones and how many? Would they participate themselves? How would they record their observations? More on observation techniques can be found in both chapters 13 and 14.

At this point, the design is almost complete. You know why you are doing this study, you have a clear research question to guide you, you have identified your population of interest and research setting, and you have a reasonable sample of each. You also have put together a plan for data collection, which might include drafting an interview guide or making plans for observations. And so you know exactly what you will be doing for the next several months (or years!). To put the project into action, there are a few more things necessary before actually going into the field.

First, you will need to make sure you have any necessary supplies, including recording technology. These days, many researchers use their phones to record interviews. Second, you will need to draft a few documents for your participants. These include informed consent forms and recruiting materials, such as posters or email texts, that explain what this study is in clear language. Third, you will draft a research protocol to submit to your institutional review board (IRB); this research protocol will include the interview guide (if you are using one), the consent form template, and all examples of recruiting material. Depending on your institution and the details of your study design, it may take weeks or even, in some unfortunate cases, months before you secure IRB approval. Make sure you plan on this time in your project timeline. While you wait, you can continue to review the literature and possibly begin drafting a section on the literature review for your eventual presentation/publication. More on IRB procedures can be found in chapter 8 and more general ethical considerations in chapter 7.

Once you have approval, you can begin!

Research Design Checklist

Before data collection begins, do the following:

  • Write a paragraph explaining your aims and goals (personal/political, practical/strategic, professional/academic).
  • Define your research question; write two to three sentences that clarify the intent of the research and why this is an important question to answer.
  • Review the literature for similar studies that address your research question or similar research questions; think laterally about some literature that might be helpful or illuminating but is not exactly about the same topic.
  • Find a written study that inspires you—it may or may not be on the research question you have chosen.
  • Consider and choose a research tradition and set of data-collection techniques that (1) help answer your research question and (2) match your aims and goals.
  • Define your population of interest and your research setting.
  • Define the criteria for your sample (How many? Why these? How will you find them, gain access, and acquire consent?).
  • If you are conducting interviews, draft an interview guide.
  • If you are making observations, create a plan for observations (sites, times, recording, access).
  • Acquire any necessary technology (recording devices/software).
  • Draft consent forms that clearly identify the research focus and selection process.
  • Create recruiting materials (posters, email, texts).
  • Apply for IRB approval (proposal plus consent form plus recruiting materials).
  • Block out time for collecting data.

[1] At the end of the chapter, you will find a “Research Design Checklist” that summarizes the main recommendations made here.

[2] For example, if your focus is society and culture, you might collect data through observation or a case study. If your focus is individual lived experience, you are probably going to be interviewing some people. And if your focus is language and communication, you will probably be analyzing text (written or visual) (Marshall and Rossman 2016:16).

[3] You may not have any “live” human subjects. There are qualitative research methods that do not require interactions with live human beings (see chapter 16, “Archival and Historical Sources”). But for the most part, you are probably reading this textbook because you are interested in doing research with people. The rest of the chapter will assume this is the case.

One of the primary methodological traditions of inquiry in qualitative research, ethnography is the study of a group or group culture, largely through observational fieldwork supplemented by interviews. It is a form of fieldwork that may include participant-observation data collection. See chapter 14 for a discussion of deep ethnography. 

A methodological tradition of inquiry and research design that focuses on an individual case (e.g., setting, institution, or sometimes an individual) in order to explore its complexity, history, and interactive parts.  As an approach, it is particularly useful for obtaining a deep appreciation of an issue, event, or phenomenon of interest in its particular context.

The controlling force in research; can be understood as lying on a continuum from basic research (knowledge production) to action research (effecting change).

In its most basic sense, a theory is a story we tell about how the world works that can be tested with empirical evidence.  In qualitative research, we use the term in a variety of ways, many of which are different from how they are used by quantitative researchers.  Although some qualitative research can be described as “testing theory,” it is more common to “build theory” from the data using inductive reasoning, as done in Grounded Theory.  There are so-called “grand theories” that seek to integrate a whole series of findings and stories into an overarching paradigm about how the world works, and much smaller theories or concepts about particular processes and relationships.  Theory can even be used to explain particular methodological perspectives or approaches, as in Institutional Ethnography, which is both a way of doing research and a theory about how the world works.

Research that is interested in generating and testing hypotheses about how the world works.

A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction.  This approach was pioneered by the sociologists Glaser and Strauss (1967).  The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences.  Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

An approach to research that is “multimethod in focus, involving an interpretative, naturalistic approach to its subject matter.  This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them.  Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives” (Denzin and Lincoln 2005:2). Contrast with quantitative research.

Research that contributes knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment.

Research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems.  There are two kinds: summative and formative.

Research in which an overall judgment about the effectiveness of a program or policy is made, often for the purpose of generalizing to other cases or programs.  Generally uses qualitative research as a supplement to primary quantitative data analyses.  Contrast formative evaluation research.

Research designed to improve a program or policy (to help “form” or shape its effectiveness); relies heavily on qualitative research methods.  Contrast summative evaluation research.

Research carried out at a particular organizational or community site with the intention of effecting change; often involves research subjects as participants of the study.  See also participatory action research.

Research in which both researchers and participants work together to understand a problematic situation and change it for the better.

The level of the focus of analysis (e.g., individual people, organizations, programs, neighborhoods).

The large group of interest to the researcher.  Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken.  For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.”  In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample.  In qualitative research, defining the population is conceptually important for clarity.

A fictional name assigned to give anonymity to a person, group, or place.  Pseudonyms are important ways of protecting the identity of research participants while still providing a “human element” in the presentation of qualitative data.  There are ethical considerations to be made in selecting pseudonyms; some researchers allow research participants to choose their own.

A requirement for research involving human participants; the documentation of informed consent.  In some cases, oral consent or assent may be sufficient, but the default standard is a single-page, easy-to-understand form that both the researcher and the participant sign and date.  Under federal guidelines, all researchers "shall seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence. The information that is given to the subject or the representative shall be in language understandable to the subject or the representative.  No informed consent, whether oral or written, may include any exculpatory language through which the subject or the representative is made to waive or appear to waive any of the subject's rights or releases or appears to release the investigator, the sponsor, the institution, or its agents from liability for negligence" (21 CFR 50.20).  Your IRB office will be able to provide a template for use in your study.

An administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated. The IRB is charged with the responsibility of reviewing all research involving human participants. The IRB is concerned with protecting the welfare, rights, and privacy of human subjects. The IRB has the authority to approve, disapprove, monitor, and require modifications in all research activities that fall within its jurisdiction as specified by both the federal regulations and institutional policy.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

Chapter 2: Determining the scope of the review and the questions it will address

James Thomas, Dylan Kneale, Joanne E McKenzie, Sue E Brennan, Soumyadeep Bhaumik

Key Points:

  • Systematic reviews should address answerable questions and fill important gaps in knowledge.
  • Developing good review questions takes time, expertise and engagement with intended users of the review.
  • Cochrane Reviews can focus on broad questions, or be more narrowly defined. There are advantages and disadvantages of each.
  • Logic models are a way of documenting how interventions, particularly complex interventions, are intended to ‘work’, and can be used to refine review questions and the broader scope of the review.
  • Using priority-setting exercises, involving relevant stakeholders, and ensuring that the review takes account of issues relating to equity can be strategies for ensuring that the scope and focus of reviews address the right questions.

Cite this chapter as: Thomas J, Kneale D, McKenzie JE, Brennan SE, Bhaumik S. Chapter 2: Determining the scope of the review and the questions it will address. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook.

2.1 Rationale for well-formulated questions

As with any research, the first and most important decision in preparing a systematic review is to determine its focus. This is best done by clearly framing the questions the review seeks to answer. The focus of any Cochrane Review should be on questions that are important to people making decisions about health or health care. These decisions will usually need to take into account both the benefits and harms of interventions (see MECIR Box 2.1.a ). Good review questions often take time to develop, requiring engagement with not only the subject area, but with a wide group of stakeholders (Section 2.4.2 ).

Well-formulated questions will guide many aspects of the review process, including determining eligibility criteria, searching for studies, collecting data from included studies, structuring the syntheses and presenting findings (Cooper 1984, Hedges 1994, Oliver et al 2017) . In Cochrane Reviews, questions are stated broadly as review ‘Objectives’, and operationalized in terms of the studies that will be eligible to answer those questions as ‘Criteria for considering studies for this review’. As well as focusing review conduct, the contents of these sections are used by readers in their initial assessments of whether the review is likely to be directly relevant to the issues they face.

The FINER criteria have been proposed as encapsulating the issues that should be addressed when developing research questions. These state that questions should be Feasible, Interesting, Novel, Ethical, and Relevant (Cummings et al 2007). All of these criteria raise important issues for consideration at the outset of a review and should be borne in mind when questions are formulated.

A feasible review is one that asks a question that the author team is capable of addressing using the evidence available. Issues concerning the breadth of a review are discussed in Section 2.3.1 , but in terms of feasibility it is important not to ask a question that will result in retrieving unmanageable quantities of information; up-front scoping work will help authors to define sensible boundaries for their reviews. Likewise, while it can be useful to identify gaps in the evidence base, review authors and stakeholders should be aware of the possibility of asking a question that may not be answerable using the existing evidence (i.e. that will result in an ‘empty’ review, see also Section 2.5.3 ).

Embarking on a review that authors are interested in is important because reviews are a significant undertaking and review authors need sufficient commitment to see the work through to its conclusion.

A novel review will address a genuine gap in knowledge, so review authors should be aware of any related or overlapping reviews. This reduces duplication of effort, and also ensures that authors understand the wider research context to which their review will contribute. Authors should check for pre-existing syntheses in the published research literature and also for ongoing reviews in the PROSPERO register of systematic reviews before beginning their own review.

Given the opportunity cost involved in undertaking an activity as demanding as a systematic review, authors should ensure that their work is relevant by: (i) involving relevant stakeholders in defining its focus and the questions it will address; and (ii) writing up the review in such a way as to facilitate the translation of its findings to inform decisions. The GRADE framework aims to achieve this, and should be considered throughout the review process, not only when it is being written up (see Chapter 14 and Chapter 15 ).

Consideration of opportunity costs is also relevant in terms of the ethics of conducting a review, though ethical issues should also be considered primarily in terms of the questions that are prioritized for answering and the way that they are framed. Research questions are often not value-neutral, and the way that a given problem is approached can have political implications which can result in, for example, the widening of health inequalities (whether intentional or not). These issues are explored in Section 2.4.3 and Chapter 16 .

MECIR Box 2.1.a Relevant expectations for conduct of intervention reviews

2.2 Aims of reviews of interventions

Systematic reviews can address any question that can be answered by a primary research study. This Handbook focuses on a subset of all possible review questions: the impact of intervention(s) implemented within a specified human population. Even within these limits, systematic reviews examining the effects of intervention(s) can vary quite markedly in their aims. Some will focus specifically on evidence of an effect of an intervention compared with a specific alternative, whereas others may examine a range of different interventions. Reviews that examine multiple interventions and aim to identify which might be the most effective can be broader and more challenging than those looking at single interventions. These can also be the most useful for end users, where decision making involves selecting from a number of intervention options. The incorporation of network meta-analysis as a core method in this edition of the Handbook (see Chapter 11 ) reflects the growing importance of these types of reviews.

As well as looking at the balance of benefit and harm that can be attributed to a given intervention, reviews within the ambit of this Handbook might also aim to investigate the relationship between the size of an intervention effect and other characteristics, such as aspects of the population, the intervention itself, how the outcome is measured, or the methodology of the primary research studies included. Such approaches might be used to investigate which components of multi-component interventions are more or less important or essential (and when). While it is not always necessary to know how an intervention achieves its effect for it to be useful, many reviews will aim to articulate an intervention’s mechanisms of action (see Section 2.5.1 ), either by making this an explicit aim of the review itself (see Chapter 17 and Chapter 21 ), or when describing the scope of the review. Understanding how an intervention works (or is intended to work) can be an important aid to decision makers in assessing the applicability of the review to their situation. These investigations can be assisted by the incorporation of results from process evaluations conducted alongside trials (see Chapter 21 ). Further, many decisions in policy and practice are at least partially constrained by the resource available, so review authors often need to consider the economic context of interventions (see Chapter 20 ).

2.3 Defining the scope of a review question

Studies comparing healthcare interventions, notably randomized trials, use the outcomes of participants to compare the effects of different interventions. Statistical syntheses (e.g. meta-analysis) focus on comparisons of interventions, such as a new intervention versus a control intervention (which may represent conditions of usual practice or care), or the comparison of two competing interventions. Throughout the Handbook we use the terminology experimental intervention versus comparator intervention. This implies a need to identify one of the interventions as experimental, and is used only for convenience since all methods apply to both controlled and head-to-head comparisons. The contrast between the outcomes of two groups treated differently is known as the ‘effect’, the ‘treatment effect’ or the ‘intervention effect’; we generally use the last of these throughout the Handbook .

A statement of the review’s objectives should begin with a precise statement of the primary objective, ideally in a single sentence (MECIR Box 2.3.a). Where possible the style should be of the form ‘To assess the effects of [intervention or comparison] for [health problem] in [types of people, disease or problem and setting if specified]’. This might be followed by one or more secondary objectives, for example relating to different participant groups, different comparisons of interventions or different outcome measures. The detailed specification of the review question(s) requires consideration of several key components (Richardson et al 1995, Counsell 1997), which can often be encapsulated by the ‘PICO’ mnemonic, an acronym for Population, Intervention, Comparison(s) and Outcome. Equal emphasis in addressing, and equal precision in defining, each PICO component is not necessary. For example, a review might concentrate on competing interventions for a particular stage of breast cancer, with stage and severity of the disease being defined very precisely; or alternatively focus on a particular drug for any stage of breast cancer, with the treatment formulation being defined very precisely.

Throughout the Handbook we make a distinction between three different stages in the review at which the PICO construct might be used. This division is helpful for understanding the decisions that need to be made:

  • The review PICO (planned at the protocol stage) is the PICO on which eligibility of studies is based (what will be included and what excluded from the review).
  • The PICO for each synthesis (also planned at the protocol stage) defines the question that each specific synthesis aims to answer, determining how the synthesis will be structured, specifying planned comparisons (including intervention and comparator groups, any grouping of outcome and population subgroups).
  • The PICO of the included studies (determined at the review stage) is what was actually investigated in the included studies.

Reaching the point where it is possible to articulate the review’s objectives in the above form – the review PICO – requires time and detailed discussion between potential authors and users of the review. It is important that those involved in developing the review’s scope and questions have a good knowledge of the practical issues that the review will address as well as the research field to be synthesized. Developing the questions is a critical part of the research process. As such, there are methodological issues to bear in mind, including: how to determine which questions are most important to answer; how to engage stakeholders in question formulation; how to account for changes in focus as the review progresses; and considerations about how broad (or narrow) a review should be.

MECIR Box 2.3.a Relevant expectations for conduct of intervention reviews

2.3.1 Broad versus narrow reviews

The questions addressed by a review may be broad or narrow in scope. For example, a review might address a broad question regarding whether antiplatelet agents in general are effective in preventing all thrombotic events in humans. Alternatively, a review might address whether a particular antiplatelet agent, such as aspirin, is effective in decreasing the risks of a particular thrombotic event, stroke, in elderly persons with a previous history of stroke. Increasingly, reviews are becoming broader, aiming, for example, to identify which intervention – out of a range of treatment options – is most effective, or to investigate how an intervention varies depending on implementation and participant characteristics.

Overviews of reviews (see Chapter V), in which multiple reviews are summarized, can be one way of addressing the need for breadth when synthesizing the evidence base, since they can summarize multiple reviews of different interventions for the same condition, or multiple reviews of the same intervention for different types of participants. It may be considered desirable to plan a series of reviews with a relatively narrow scope, alongside an Overview to summarize their findings. Alternatively, it may be more useful – particularly given the growth in support for network meta-analysis – to combine comparisons of different treatment options within the same review (see Chapter 11). When deciding whether or not an overview might be the most appropriate approach, review authors should take account of the breadth of the question being asked and the resources available. Some questions are simply too broad for a review of all relevant primary research to be practicable, and if a field has sufficient high-quality reviews, then the production of another review of primary research that duplicates the others might not be a sensible use of resources.

Some of the advantages and disadvantages of broad and narrow reviews are summarized in Table 2.3.a. While having a broad scope in terms of the range of participants has the potential to increase generalizability, the extent to which findings are ultimately applicable to broader (or different) populations will depend on the participants who have actually been recruited into research studies. Likewise, heterogeneity can be a disadvantage when the expectation is for homogeneity of effects between studies, but an advantage when the review question seeks to understand differential effects (see Chapter 10).

A distinction should be drawn between the scope of a review and the precise questions within, since it is possible to have a broad review that addresses quite narrow questions. In the antiplatelet agents for preventing thrombotic events example, a systematic review with a broad scope might include all available treatments. Rather than combining all the studies into one comparison though, specific treatments would be compared with one another in separate comparisons, thus breaking a heterogeneous set of treatments into narrower, more homogeneous groups. This relates to the three levels of PICO, outlined in Section 2.3. The review PICO defines the broad scope of the review, and the PICO for comparison defines the specific treatments that will be compared with one another; Chapter 3 elaborates on the use of PICOs.

In practice, a Cochrane Review may start (or have started) with a broad scope, and be divided up into narrower reviews as evidence accumulates and the original review becomes unwieldy. This may be done for practical and logistical reasons, for example to make updating easier as well as to make it easier for readers to see which parts of the evidence base are changing. Individual review authors must decide if there are instances where splitting a broader focused review into a series of more narrowly focused reviews is appropriate and implement appropriate methods to achieve this. If a major change is to be undertaken, such as splitting a broad review into a series of more narrowly focused reviews, a new protocol must be written for each of the component reviews that documents the eligibility criteria for each one.

Ultimately, the selected breadth of a review depends upon multiple factors including perspectives regarding a question’s relevance and potential impact; supporting theoretical, biologic and epidemiological information; the potential generalizability and validity of answers to the questions; and available resources. As outlined in Section 2.4.2 , authors should consider carefully the needs of users of the review and the context(s) in which they expect the review to be used when determining the most optimal scope for their review.

Table 2.3.a Some advantages and disadvantages of broad versus narrow reviews

2.3.2 ‘Lumping’ versus ‘splitting’

It is important not to confuse the issue of the breadth of the review (determined by the review PICO) with concerns about between-study heterogeneity and the legitimacy of combining results from diverse studies in the same analysis (determined by the PICOs for comparison).

Broad reviews have been criticized as ‘mixing apples and oranges’, and one of the inventors of meta-analysis, Gene Glass, has responded “Of course it mixes apples and oranges… comparing apples and oranges is the only endeavour worthy of true scientists; comparing apples to apples is trivial” (Glass 2015). In fact, the two concepts (‘broad reviews’ and ‘mixing apples and oranges’) are different issues. Glass argues that broad reviews, with diverse studies, provide the opportunity to ask interesting questions about the reasons for differential intervention effects.

The ‘apples and oranges’ critique refers to the inappropriate mixing of studies within a single comparison, where the purpose is to estimate an average effect. In situations where good biologic or sociological evidence suggests that various formulations of an intervention behave very differently or that various definitions of the condition of interest are associated with markedly different effects of the intervention, the uncritical aggregation of results from quite different interventions or populations/settings may well be questionable.

Unfortunately, determining the situations where studies are similar enough to combine with one another is not always straightforward, and it can depend, to some extent, on the question being asked. While the decision is sometimes characterized as ‘lumping’ (where studies are combined in the same analysis) or ‘splitting’ (where they are not) (Squires et al 2013), it is better to consider these issues on a continuum, with reviews that have greater variation in the types of included interventions, settings and populations, and study designs being towards the ‘lumped’ end, and those that include little variation in these elements being towards the ‘split’ end (Petticrew and Roberts 2006).

While specification of the review PICO sets the boundary for the inclusion and exclusion of studies, decisions also need to be made when planning the PICO for the comparisons to be made in the analysis as to whether they aim to address broader (‘lumped’) or narrower (‘split’) questions (Caldwell and Welton 2016). The degree of ‘lumping’ in the comparisons will be primarily driven by the review’s objectives, but will sometimes be dictated by the availability of studies (and data) for a particular comparison (see Chapter 9 for discussion of the latter). The former is illustrated by a Cochrane Review that examined the effects of newer-generation antidepressants for depressive disorders in children and adolescents (Hetrick et al 2012).

Newer-generation antidepressants include multiple different compounds (e.g. paroxetine, fluoxetine). The objectives of this review were to (i) estimate the overall effect of newer-generation antidepressants on depression, (ii) estimate the effect of each compound, and (iii) examine whether the compound type and age of the participants (children versus adolescents) is associated with the intervention effect. Objective (i) addresses a broad, ‘in principle’ (Caldwell and Welton 2016), question of whether newer-generation antidepressants improve depression, where the different compounds are ‘lumped’ into a single comparison. Objective (ii) seeks to address narrower, ‘split’, questions that investigate the effect of each compound on depression separately. Answers to both questions can be identified by setting up separate comparisons for each compound, or by subgrouping the ‘lumped’ comparison by compound ( Chapter 10, Section 10.11.2 ). Objective (iii) seeks to explore factors that explain heterogeneity among the intervention effects, or equivalently, whether the intervention effect varies by the factor. This can be examined using subgroup analysis or meta-regression ( Chapter 10, Section 10.11 ) but, in the case of intervention types, is best achieved using network meta-analysis (see Chapter 11 ).

There are various advantages and disadvantages to bear in mind when defining the PICO for the comparison and considering whether ‘lumping’ or ‘splitting’ is appropriate. Lumping allows for the investigation of factors that may explain heterogeneity. Results from these investigations may provide important leads as to whether an intervention operates differently in, for example, different populations (such as in children and adolescents in the example above). Ultimately, this type of knowledge is useful for clinical decision making. However, lumping is likely to introduce heterogeneity, which will not always be explained by a priori specified factors, and this may lead to a combined effect that is clinically difficult to interpret and implement. For example, when multiple intervention types are ‘lumped’ in one comparison (as in objective (i) above), and there is unexplained heterogeneity, the combined intervention effect would not enable a clinical decision as to which intervention should be selected. Splitting comparisons carries its own risk of there being too few studies to yield a useful synthesis. Inevitably, some degree of aggregation across the PICO elements is required for a meta-analysis to be undertaken (Caldwell and Welton 2016).

2.4 Ensuring the review addresses the right questions

Since systematic reviews are intended for use in healthcare decision making, review teams should ensure not only the application of robust methodology, but also that the review question is meaningful for healthcare decision making. Two approaches are discussed below:

  • Using results from existing research priority-setting exercises to define the review question.
  • In the absence of, or in addition to, existing research priority-setting exercises, engaging with stakeholders to define review questions and establish their relevance to policy and practice.

2.4.1 Using priority-setting exercises to define review questions

A research priority-setting exercise is a “collective activity for deciding which uncertainties are most worth trying to resolve through research; uncertainties considered may be problems to be understood or solutions to be developed or tested; across broad or narrow areas” (Sandy Oliver, referenced in Nasser 2018). Using research priority-setting exercises to define the scope of a review helps to prevent the waste of scarce resources for research by making the review more relevant to stakeholders (Chalmers et al 2014).

Research priority setting is always conducted in a specific context, setting and population with specific principles, values and preferences (which should be articulated). Different stakeholders’ interpretation of the scope and purpose of a ‘research question’ might vary, resulting in priorities that might be difficult to interpret. Researchers or review teams might find it necessary to translate the research priorities into an answerable PICO research question format, and may find it useful to recheck the question with the stakeholder groups to determine whether they have accurately reflected their intentions.

While Cochrane Review teams are in most cases reviewing the effects of an intervention with a global scope, they may find that the priorities identified by important stakeholders (such as the World Health Organization or other organizations or individuals in a representative health system) are informative in planning the review. Review authors may find that differences between different stakeholder groups’ views on priorities and the reasons for these differences can help them to define the scope of the review. This is particularly important for making decisions about excluding specific populations or settings, or being inclusive and potentially conducting subgroup analyses.

Whenever feasible, systematic reviews should be based on priorities identified by key stakeholders such as decision makers, patients/public, and practitioners. Cochrane has developed a list of priorities for reviews in consultation with key stakeholders, which is available on the Cochrane website. Issues relating to equity (see Chapter 16 and Section 2.4.3 ) need to be taken into account when conducting and interpreting the results from priority-setting exercises. Examples of materials to support these processes are available (Viergever et al 2010, Nasser et al 2013, Tong et al 2017).

The results of research priority-setting exercises can be searched for in electronic databases and via the websites of relevant organizations. Examples include the James Lind Alliance, the World Health Organization, organizations of health professionals (including research disciplines), and ministries of health in different countries (Viergever 2010). Examples of search strategies for identifying research priority-setting exercises are available (Bryant et al 2014, Tong et al 2015).

Other sources of questions are often found in ‘implications for future research’ sections of articles in journals and clinical practice guidelines. Some guideline developers have prioritized questions identified through the guideline development process (Sharma et al 2018), although these priorities will be influenced by the needs of health systems in which different guideline development teams are working.

2.4.2 Engaging stakeholders to help define the review questions

In the absence of a relevant research priority-setting exercise, or when a systematic review is being conducted for a very specific purpose (for example, commissioned to inform the development of a guideline), researchers should work with relevant stakeholders to define the review question. This practice is especially important when developing review questions for studying the effectiveness of health systems and policies, because of the variability between countries and regions; the significance of these differences may only become apparent through discussion with the stakeholders.

The stakeholders for a review could include consumers or patients, carers, health professionals of different kinds, policy decision makers and others ( Chapter 1, Section 1.3.1 ). Identifying the stakeholders who are critical to a particular question will depend on the question, who the answer is likely to affect, and who will be expected to implement the intervention if it is found to be effective (or to discontinue it if not).

Stakeholder engagement should, optimally, be an ongoing process throughout the life of the systematic review, from defining the question to dissemination of results (Keown et al 2008). Engaging stakeholders increases relevance, promotes mutual learning, improves uptake and decreases research waste (see Chapter 1, Section 1.3.1 and Section 1.3.2 ). However, because such engagement can be challenging and resource intensive, a one-off engagement process to define the review question might only be possible. Review questions that are conceptualized and refined by multiple stakeholders can capture much of the complexity that should be addressed in a systematic review.

2.4.3 Considering issues relating to equity when defining review questions

Deciding what should be investigated, who the participants should be, and how the analysis will be carried out can be considered political activities, with the potential for increasing or decreasing inequalities in health. For example, because researchers have chosen to investigate this issue, we now know that well-intended interventions can actually widen inequalities in health outcomes (Lorenc et al 2013). Decision makers can now take account of this knowledge when planning service provision. Authors should therefore consider the potential impact of the intervention(s) they are investigating on disadvantaged groups, and whether socio-economic inequalities in health might be affected depending on whether or how the intervention(s) are implemented.

Health equity is the absence of avoidable and unfair differences in health (Whitehead 1992). Health inequity may be experienced across characteristics defined by PROGRESS-Plus (Place of residence, Race/ethnicity/culture/language, Occupation, Gender/sex, Religion, Education, Socio-economic status, Social capital, and other characteristics (‘Plus’) such as sexual orientation, age, and disability) (O’Neill et al 2014). Issues relating to health equity should be considered when review questions are developed ( MECIR Box 2.4.a ). Chapter 16 presents detailed guidance on this issue for review authors.

MECIR Box 2.4.a Relevant expectations for conduct of intervention reviews

2.5 Methods and tools for structuring the review

It is important for authors to develop the scope of their review with care: without a clear understanding of where the review will contribute to existing knowledge – and how it will be used – it may be at risk of conceptual incoherence. It may mis-specify critical elements of how the intervention(s) interact with the context(s) within which they operate to produce specific outcomes, and become either irrelevant or possibly misleading. For example, in a systematic review about smoking cessation interventions in pregnancy, it was essential for authors to take account of the way that health service provision has changed over time. The type and intensity of ‘usual care’ in more recent evaluations was equivalent to the interventions being evaluated in older studies, and the analysis needed to take this into account. This review also found that the same intervention can have different effects in different settings depending on whether its materials are culturally appropriate in each context (Chamberlain et al 2017).

In order to protect the review against conceptual incoherence and irrelevance, review authors need to spend time at the outset developing definitions for key concepts and ensuring that they are clear about the prior assumptions on which the review depends. These prior assumptions include, for example, why particular populations should be considered inside or outside the review’s scope; how the intervention is thought to achieve its effect; and why specific outcomes are selected for evaluation. Being clear about these prior assumptions also requires review authors to consider their evidential basis and to decide which of them they can place more or less reliance on. Taken as a whole, this initial conceptual and definitional work constitutes the review’s conceptual framework. Each element of the review’s PICO raises its own definitional challenges, which are discussed in detail in Chapter 3.

In this section we consider tools that may help to define the scope of the review and the relationships between its key concepts; in particular, articulating how the intervention gives rise to the outcomes selected. In some situations, long sequences of events are expected to occur between an intervention being implemented and an outcome being observed. For example, a systematic review examining the effects of asthma education interventions in schools on children’s health and well-being needed to consider: the interplay between core intervention components and their introduction into differing school environments; different child-level effect modifiers; how the intervention then had an impact on the knowledge of the child (and their family); the child’s self-efficacy and adherence to their treatment regime; the severity of their asthma; the number of days of restricted activity; how this affected their attendance at school; and finally, the distal outcomes of education attainment and indicators of child health and well-being (Kneale et al 2015).

Several specific tools can help authors to consider issues raised when defining review questions and planning their review; these are also helpful when developing eligibility criteria and classifying included studies. These include the following.

  • Taxonomies: hierarchical structures that can be used to categorize (or group) related interventions, outcomes or populations.
  • Generic frameworks for examining and structuring the description of intervention characteristics (e.g. TIDieR for the description of interventions (Hoffmann et al 2014), iCAT_SR for describing multiple aspects of complexity in systematic reviews (Lewin et al 2017)).
  • Core outcome sets for identifying and defining agreed outcomes that should be measured for specific health conditions (described in more detail in Chapter 3 ).

Unlike these tools, which focus on particular aspects of a review, logic models provide a framework for planning and guiding synthesis at the review level (see Section 2.5.1 ).

2.5.1 Logic models

Logic models (sometimes referred to as conceptual frameworks or theories of change) are graphical representations of theories about how interventions work. They depict intervention components, mechanisms (pathways of action), outputs, and outcomes as sequential (although not necessarily linear) chains of events. Among systematic review authors, they were originally proposed as a useful tool when working with evaluations of complex social and population health programmes and interventions, to conceptualize the pathways through which interventions are intended to change outcomes (Anderson et al 2011).
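As a loose sketch of the idea (not a Cochrane tool — the node labels below are generic placeholders, and any real model would be developed with stakeholders and grounded in intervention theory), the sequential chain of events a logic model depicts can be represented as a small directed graph:

```python
# Minimal sketch of a logic model as a directed graph. Node labels are
# generic placeholders for illustration only; real logic models name the
# specific components, mechanisms and outcomes of the intervention at hand.
logic_model = {
    "intervention components": ["mechanisms of action"],
    "mechanisms of action": ["outputs"],
    "outputs": ["proximal outcomes"],
    "proximal outcomes": ["distal outcomes"],
    "distal outcomes": [],
}


def pathways(model, start, path=None):
    """Enumerate causal chains from a starting node (depth-first).

    A chain is one pathway of action from the intervention through to an
    end outcome; branching models yield several chains.
    """
    path = (path or []) + [start]
    next_nodes = model.get(start, [])
    if not next_nodes:
        return [path]
    chains = []
    for node in next_nodes:
        chains.extend(pathways(model, node, path))
    return chains
```

In this linear example there is a single chain from components through mechanisms and outputs to distal outcomes; models with branching pathways (as in the asthma education example above) would yield several chains, each a candidate focus for the synthesis.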

In reviews where intervention complexity is a key consideration (see Chapter 17), logic models can be particularly helpful. For example, in a review of psychosocial group interventions for people living with HIV, a logic model was used to show how the intervention might work (van der Heijden et al 2017). The review authors depicted proximal outcomes, such as self-esteem, but chose only to include psychological health outcomes in their review. In contrast, Bailey and colleagues included proximal outcomes in their review of computer-based interventions for sexual health promotion, using a logic model to show how outcomes were grouped (Bailey et al 2010). Finally, in a review of slum upgrading, a logic model showed the broad range of interventions and their interlinkages with health and socio-economic outcomes (Turley et al 2013), and enabled the review authors to select a specific intervention category (physical upgrading) on which to focus the review. Additional resources provide further examples of logic models and can help review authors develop and use them (Anderson et al 2011, Baxter et al 2014, Kneale et al 2015, Pfadenhauer et al 2017, Rohwer et al 2017).

Logic models can vary in their emphasis, with a distinction sometimes made between system-based and process-oriented logic models (Rehfuess et al 2018). System-based logic models have particular value in examining the complexity of the system (e.g. the geographical, epidemiological, political, socio-cultural and socio-economic features of a system) and the interactions between contextual features, participants and the intervention (see Chapter 17). Process-oriented logic models aim to capture the complexity of causal pathways by which the intervention leads to outcomes, and any factors that may modify intervention effects. However, this is not a crisp distinction: the two types are interrelated, and some logic models depict elements of both simultaneously.

The way that logic models can be represented diagrammatically (see Chapter 17 for an example) provides a valuable visual summary for readers and can be a communication tool for decision makers and practitioners. They can aid initially in the development of a shared understanding between different stakeholders of the scope of the review and its PICO, helping to support decisions taken throughout the review process, from developing the research question and setting the review parameters, to structuring and interpreting the results. They can be used in planning the PICO elements of a review as well as for determining how the synthesis will be structured (i.e. planned comparisons, including intervention and comparator groups, and any grouping of outcome and population subgroups). These models may help review authors specify the link between the intervention, proximal and distal outcomes, and mediating factors. In other words, they depict the intervention theory underpinning the synthesis plan.

Anderson and colleagues identify the main uses of logic models in systematic reviews as (Anderson et al 2011):

  • refining review questions;
  • deciding on ‘lumping’ or ‘splitting’ a review topic;
  • identifying intervention components;
  • defining and conducting the review;
  • identifying relevant study eligibility criteria;
  • guiding the literature search strategy;
  • explaining the rationale behind surrogate outcomes used in the review;
  • justifying the need for subgroup analyses (e.g. age, sex/gender, socio-economic status);
  • making the review relevant to policy and practice;
  • structuring the reporting of results;
  • illustrating how harms and feasibility are connected with interventions; and
  • interpreting results based on intervention theory and systems thinking (see Chapter 17 ).

Logic models can be useful in systematic reviews when considering whether failure to find a beneficial effect of an intervention is due to a theory failure, an implementation failure, or both (see Chapter 17 and Cargo et al 2018). Making a distinction between implementation and intervention theory can help to determine whether and how the intervention interacts with (and potentially changes) its context (see Chapter 3 and Chapter 17 for further discussion of context). This helps to elucidate situations in which variations in how the intervention is implemented have the potential to affect the integrity of the intervention and intended outcomes.

Given their potential value in conceptualizing and structuring a review, logic models are increasingly published in review protocols. Logic models may be specified a priori and remain unchanged throughout the review; it might be expected, however, that the findings of reviews produce evidence and new understandings that could be used to update the logic model in some way (Kneale et al 2015). Some reviews take a more staged approach, pre-specifying points in the review process where the model may be revised on the basis of (new) evidence (Rehfuess et al 2018) and a staged logic model can provide an efficient way to report revisions to the synthesis plan. For example, in a review of portion, package and tableware size for changing selection or consumption of food and other products, the authors presented a logic model that clearly showed changes to their original synthesis plan (Hollands et al 2015).

It is preferable to seek out existing logic models for the intervention and revise or adapt these models in line with the review focus, although this may not always be possible. More commonly, new models are developed starting with the identification of outcomes and theorizing the necessary pre-conditions to reach those outcomes. This process of theorizing and identifying the steps and necessary pre-conditions continues, working backwards from the intended outcomes, until the intervention itself is represented. As many mechanisms of action are invisible and can only be ‘known’ through theory, this process is invaluable in exposing assumptions as to how interventions are thought to work; assumptions that might then be tested in the review. Logic models can be developed with stakeholders (see Section 2.5.2 ) and it is considered good practice to obtain stakeholder input in their development.

Logic models are representations of how interventions are intended to ‘work’, but they can also provide a useful basis for thinking through the unintended consequences of interventions and identifying potential adverse effects that may need to be captured in the review (Bonell et al 2015). While logic models provide a guiding theory of how interventions are intended to work, critiques exist around their use, including their potential to oversimplify complex intervention processes (Rohwer et al 2017). Here, contributions from different stakeholders to the development of a logic model can help to articulate where complex processes may occur, to theorize unintended intervention impacts, and to represent explicitly any ambiguity within parts of the causal chain where new theory or explanation is most valuable.

2.5.2 Changing review questions

While questions should be posed in the protocol before initiating the full review, these questions should not prevent exploration of unexpected issues. Reviews are analyses of existing data that are constrained by previously chosen study populations, settings, intervention formulations, outcome measures and study designs. It is generally not possible to formulate an answerable question for a review without knowing some of the studies relevant to the question, and it may become clear that the questions a review addresses need to be modified in light of evidence accumulated in the process of conducting the review.

Although a certain fluidity and refinement of questions is to be expected in reviews as a fuller understanding of the evidence is gained, it is important to guard against bias in modifying questions. Data-driven questions can generate false conclusions based on spurious results. Any changes to the protocol that result from revising the question for the review should be documented at the beginning of the Methods section. Sensitivity analyses may be used to assess the impact of changes on the review findings (see Chapter 10, Section 10.14 ). When refining questions it is useful to ask the following questions.

  • What is the motivation for the refinement?
  • Could the refinement have been influenced by results from any of the included studies?
  • Does the refined question require a modification to the search strategy and/or reassessment of any decisions regarding study eligibility?
  • Are data collection methods appropriate to the refined question?
  • Does the refined question still meet the FINER criteria discussed in Section 2.1 ?

2.5.3 Building in contingencies to deal with sparse data

The ability to address the review questions will depend on the maturity and validity of the evidence base. When few studies are identified, there will be limited opportunity to address the question through an informative synthesis. In anticipation of this scenario, review authors may build contingencies into their protocol analysis plan that specify grouping any (or multiple) PICO elements at a broader level, thus potentially enabling synthesis of a larger number of studies. Broader groupings will generally address a less specific question, for example:

  • ‘the effect of any antioxidant supplement on …’ instead of ‘the effect of vitamin C on …’;
  • ‘the effect of sexual health promotion on biological outcomes ’ instead of ‘the effect of sexual health promotion on sexually transmitted infections ’; or
  • ‘the effect of cognitive behavioural therapy in children and adolescents on …’ instead of ‘the effect of cognitive behavioural therapy in children on …’.
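Such contingencies can be pre-specified in the protocol as a simple mapping from a specific PICO element to the broader category it would be analysed under if too few studies are found. The following sketch is a hypothetical illustration (the labels and function names are invented), extending the first example above:

```python
# Hypothetical pre-specified fallback groupings for sparse data: each
# specific intervention label maps to the broader category under which it
# would be synthesized if too few studies address the specific question.
broader_grouping = {
    "vitamin C": "any antioxidant supplement",
    "vitamin E": "any antioxidant supplement",
    "selenium": "any antioxidant supplement",
}


def synthesis_group(intervention, use_broader):
    """Return the analysis group for a study's intervention.

    When the contingency is triggered (use_broader=True), fall back to
    the pre-specified broader category; labels without a pre-specified
    fallback are kept as they are.
    """
    if use_broader:
        return broader_grouping.get(intervention, intervention)
    return intervention
```

The point of pre-specifying the mapping in the protocol, rather than improvising it after seeing the studies, is to guard against data-driven grouping decisions of the kind cautioned against in Section 2.5.2.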

However, such broader questions may be useful for identifying important leads in areas that lack effective interventions and for guiding future research. Changes in the grouping may affect the assessment of the certainty of the evidence (see Chapter 14 ).

2.5.4 Economic data

Decision makers need to consider the economic aspects of an intervention, such as whether its adoption will lead to a more efficient use of resources. Economic data such as resource use, costs or cost-effectiveness (or a combination of these) may therefore be included as outcomes in a review. It is useful to break down measures of resource use and costs to the level of specific items or categories. It is helpful to consider an international perspective in the discussion of costs. Economics issues are discussed in detail in Chapter 20 .

2.6 Chapter information

Authors: James Thomas, Dylan Kneale, Joanne E McKenzie, Sue E Brennan, Soumyadeep Bhaumik

Acknowledgements: This chapter builds on earlier versions of the Handbook . Mona Nasser, Dan Fox and Sally Crowe contributed to Section 2.4 ; Hilary J Thomson contributed to Section 2.5.1 .

Funding: JT and DK are supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care North Thames at Barts Health NHS Trust. JEM is supported by an Australian National Health and Medical Research Council (NHMRC) Career Development Fellowship (1143429). SEB’s position is supported by the NHMRC Cochrane Collaboration Funding Program. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, the Department of Health or the NHMRC.

2.7 References

Anderson L, Petticrew M, Rehfuess E, Armstrong R, Ueffing E, Baker P, Francis D, Tugwell P. Using logic models to capture complexity in systematic reviews. Research Synthesis Methods 2011; 2 : 33–42.

Bailey JV, Murray E, Rait G, Mercer CH, Morris RW, Peacock R, Cassell J, Nazareth I. Interactive computer-based interventions for sexual health promotion. Cochrane Database of Systematic Reviews 2010; 9 : CD006483.

Baxter SK, Blank L, Woods HB, Payne N, Rimmer M, Goyder E. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions. BMC Medical Research Methodology 2014; 14 : 62.

Bonell C, Jamal F, Melendez-Torres GJ, Cummins S. ‘Dark logic’: theorising the harmful consequences of public health interventions. Journal of Epidemiology and Community Health 2015; 69 : 95–98.

Bryant J, Sanson-Fisher R, Walsh J, Stewart J. Health research priority setting in selected high income countries: a narrative review of methods used and recommendations for future practice. Cost Effectiveness and Resource Allocation 2014; 12 : 23.

Caldwell DM, Welton NJ. Approaches for synthesising complex mental health interventions in meta-analysis. Evidence-Based Mental Health 2016; 19 : 16–21.

Cargo M, Harris J, Pantoja T, Booth A, Harden A, Hannes K, Thomas J, Flemming K, Garside R, Noyes J. Cochrane Qualitative and Implementation Methods Group guidance series-paper 4: methods for assessing evidence on intervention implementation. Journal of Clinical Epidemiology 2018; 97 : 59–69.

Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, Howells DW, Ioannidis JPA, Oliver S. How to increase value and reduce waste when research priorities are set. Lancet 2014; 383 : 156–165.

Chamberlain C, O’Mara-Eves A, Porter J, Coleman T, Perlen S, Thomas J, McKenzie J. Psychosocial interventions for supporting women to stop smoking in pregnancy. Cochrane Database of Systematic Reviews 2017; 2 : CD001055.

Cooper H. The problem formulation stage. In: Cooper H, editor. Integrating Research: A Guide for Literature Reviews . Newbury Park (CA) USA: Sage Publications; 1984.

Counsell C. Formulating questions and locating primary studies for inclusion in systematic reviews. Annals of Internal Medicine 1997; 127 : 380–387.

Cummings SR, Browner WS, Hulley SB. Conceiving the research question and developing the study plan. In: Hulley SB, Cummings SR, Browner WS, editors. Designing Clinical Research: An Epidemiological Approach . 4th ed. Philadelphia (PA): Lippincott Williams & Wilkins; 2007. p. 14–22.

Glass GV. Meta-analysis at middle age: a personal history. Research Synthesis Methods 2015; 6 : 221–231.

Hedges LV. Statistical considerations. In: Cooper H, Hedges LV, editors. The Handbook of Research Synthesis. New York (NY), USA: Russell Sage Foundation; 1994.

Hetrick SE, McKenzie JE, Cox GR, Simmons MB, Merry SN. Newer generation antidepressants for depressive disorders in children and adolescents. Cochrane Database of Systematic Reviews 2012; 11 : CD004851.

Hoffmann T, Glasziou P, Boutron I. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014; 348: g1687.

Hollands GJ, Shemilt I, Marteau TM, Jebb SA, Lewis HB, Wei Y, Higgins JPT, Ogilvie D. Portion, package or tableware size for changing selection and consumption of food, alcohol and tobacco. Cochrane Database of Systematic Reviews 2015; 9 : CD011045.

Keown K, Van Eerd D, Irvin E. Stakeholder engagement opportunities in systematic reviews: Knowledge transfer for policy and practice. Journal of Continuing Education in the Health Professions 2008; 28 : 67–72.

Kneale D, Thomas J, Harris K. Developing and optimising the use of logic models in systematic reviews: exploring practice and good practice in the use of programme theory in reviews. PloS One 2015; 10 : e0142187.

Lewin S, Hendry M, Chandler J, Oxman AD, Michie S, Shepperd S, Reeves BC, Tugwell P, Hannes K, Rehfuess EA, Welch V, McKenzie JE, Burford B, Petkovic J, Anderson LM, Harris J, Noyes J. Assessing the complexity of interventions within systematic reviews: development, content and use of a new tool (iCAT_SR). BMC Medical Research Methodology 2017; 17 : 76.

Lorenc T, Petticrew M, Welch V, Tugwell P. What types of interventions generate inequalities? Evidence from systematic reviews. Journal of Epidemiology and Community Health 2013; 67 : 190–193.

Nasser M, Ueffing E, Welch V, Tugwell P. An equity lens can ensure an equity-oriented approach to agenda setting and priority setting of Cochrane Reviews. Journal of Clinical Epidemiology 2013; 66 : 511–521.

Nasser M. Setting priorities for conducting and updating systematic reviews [PhD Thesis]: University of Plymouth; 2018.

O’Neill J, Tabish H, Welch V, Petticrew M, Pottie K, Clarke M, Evans T, Pardo Pardo J, Waters E, White H, Tugwell P. Applying an equity lens to interventions: using PROGRESS ensures consideration of socially stratifying factors to illuminate inequities in health. Journal of Clinical Epidemiology 2014; 67 : 56–64.

Oliver S, Dickson K, Bangpan M, Newman M. Getting started with a review. In: Gough D, Oliver S, Thomas J, editors. An Introduction to Systematic Reviews . London (UK): Sage Publications Ltd.; 2017.

Petticrew M, Roberts H. Systematic Reviews in the Social Sciences: A Practical Guide . Oxford (UK): Blackwell; 2006.

Pfadenhauer L, Gerhardus A, Mozygemba K, Lysdahl KB, Booth A, Hofmann B, Wahlster P, Polus S, Burns J, Brereton L, Rehfuess E. Making sense of complexity in context and implementation: the Context and Implementation of Complex Interventions (CICI) framework. Implementation Science 2017; 12 : 21.

Rehfuess EA, Booth A, Brereton L, Burns J, Gerhardus A, Mozygemba K, Oortwijn W, Pfadenhauer LM, Tummers M, van der Wilt GJ, Rohwer A. Towards a taxonomy of logic models in systematic reviews and health technology assessments: a priori, staged, and iterative approaches. Research Synthesis Methods 2018; 9 : 13–24.

Richardson WS, Wilson MC, Nishikawa J, Hayward RS. The well-built clinical question: a key to evidence-based decisions. ACP Journal Club 1995; 123 : A12–13.

Rohwer A, Pfadenhauer L, Burns J, Brereton L, Gerhardus A, Booth A, Oortwijn W, Rehfuess E. Series: Clinical epidemiology in South Africa. Paper 3: Logic models help make sense of complexity in systematic reviews and health technology assessments. Journal of Clinical Epidemiology 2017; 83 : 37–47.

Sharma T, Choudhury M, Rejón-Parrilla JC, Jonsson P, Garner S. Using HTA and guideline development as a tool for research priority setting the NICE way: reducing research waste by identifying the right research to fund. BMJ Open 2018; 8 : e019777.

Squires J, Valentine J, Grimshaw J. Systematic reviews of complex interventions: framing the review question. Journal of Clinical Epidemiology 2013; 66 : 1215–1222.

Tong A, Chando S, Crowe S, Manns B, Winkelmayer WC, Hemmelgarn B, Craig JC. Research priority setting in kidney disease: a systematic review. American Journal of Kidney Diseases 2015; 65 : 674–683.

Tong A, Sautenet B, Chapman JR, Harper C, MacDonald P, Shackel N, Crowe S, Hanson C, Hill S, Synnot A, Craig JC. Research priority setting in organ transplantation: a systematic review. Transplant International 2017; 30 : 327–343.

Turley R, Saith R, Bhan N, Rehfuess E, Carter B. Slum upgrading strategies involving physical environment and infrastructure interventions and their effects on health and socio-economic outcomes. Cochrane Database of Systematic Reviews 2013; 1 : CD010067.

van der Heijden I, Abrahams N, Sinclair D. Psychosocial group interventions to improve psychological well-being in adults living with HIV. Cochrane Database of Systematic Reviews 2017; 3 : CD010806.

Viergever RF. Health Research Prioritization at WHO: An Overview of Methodology and High Level Analysis of WHO Led Health Research Priority Setting Exercises . Geneva (Switzerland): World Health Organization; 2010.

Viergever RF, Olifson S, Ghaffar A, Terry RF. A checklist for health research priority setting: nine common themes of good practice. Health Research Policy and Systems 2010; 8 : 36.

Whitehead M. The concepts and principles of equity and health. International Journal of Health Services 1992; 22: 429–445.


Research Methods

Chapter 2: Introduction

Maybe you have already gained some experience in doing research, for example in your bachelor’s studies or as part of your work.

The challenge in conducting academic research at master’s level is that it is multi-faceted.

The types of activities are:

  • Finding and reviewing literature on your research topic;
  • Designing a research project that will answer your research questions;
  • Collecting relevant data from one or more sources;
  • Analyzing the data, statistically or otherwise, and
  • Writing up and presenting your findings.

Some researchers are strong on some parts but weak on others.

We do not require perfection. But we do require high quality.

Going through all stages of the research project, with the guidance of your supervisor, is a learning process.

The journey is hard at times, but in the end your thesis is considered an academic publication, and we want you to be proud of what you have achieved!

Probably the biggest challenge is: where to begin?

  • What will be your topic?
  • And once you have selected a topic, what are the questions that you want to answer, and how?

In the first chapter of the book, you will find several views on the nature and scope of business research.

Since a study in business administration derives its relevance from its application to real-life situations, an MBA thesis typically falls into the grey area between applied research and basic research.

The focus of applied research is on finding solutions to problems, and on improving (y)our understanding of existing theories of management.

Applied research that makes use of existing theories often leads to amendments or refinements of these theories. That is, the applied research feeds back to basic research.

In the early stages of your research, you will feel like you are running around in circles.

You start with an idea for a research topic. Then, after reading literature on the topic, you will revise or refine your idea. And start reading again with a clearer focus ...

A thesis research/project typically consists of two main stages.

The first stage is the research proposal.

Once the research proposal has been approved, you can start with the data collection, analysis and write-up (including conclusions and recommendations).

Stage 1, the research proposal, consists of the first three chapters of the commonly used five-chapter structure:

  • Chapter 1: Introduction
      • An introduction to the topic.
      • The research questions that you want to answer (and/or hypotheses that you want to test).
      • A note on why the research is of academic and/or professional relevance.
  • Chapter 2: Literature
      • A review of relevant literature on the topic.
  • Chapter 3: Methodology

The methodology is at the core of your research. Here, you define how you are going to do the research. What data will be collected, and how?

Your data should allow you to answer your research questions. In the research proposal, you will also provide answers to the questions of when and how much. Is it feasible to conduct the research within the given time-frame (say, 3–6 months for a typical master’s thesis)? And do you have the resources to collect and analyze the data?

In stage 2 you collect and analyze the data, and write the conclusions.

  • Chapter 4: Data Analysis and Findings
  • Chapter 5: Summary, Conclusions and Recommendations




Chapter 2 (Introducing Research)

Joining a Conversation

Typically, when students are taught about citing sources, it is in the context of the need to avoid plagiarism. While that is a valuable and worthwhile goal in its own right, it shifts the focus away from one of the original motives for source citation. The goal of referencing sources was originally to situate thoughts in a conversation and to provide support for ideas. If I learned about ethics from Kant, then I cite Kant so that people know whose understanding shaped my thinking. More than that, if they like what I have to say, they can read more from Kant to explore those ideas. Of course, if they dislike what I have to say, I can also refer them to Kant's arguments and use them to back up my own thinking.

For example, you should not take my word for it that oranges contain Vitamin C. I could not cut up an orange and extract the Vitamin C, and I’m only vaguely aware of its chemical formula. However, you don’t have to. The FDA and the CDC both support this idea, and they can provide the documentation. For that matter, I can also find formal articles that provide more information. One of the purposes of a college education is to introduce students to the larger body of knowledge that exists. For example, a student studying marketing is in no small part trying to gain access to the information that others have learned—over time—about what makes for effective marketing techniques. Chemistry students are not required to derive the periodic table on their own once every four years.

In short, college classes (and college essays) are often about joining a broader conversation on a subject. Learning, in general, is about opening one's mind to the idea that the person doing the learning is neither the beginning nor the end of all knowledge.

Remember that most college-level assignments exist so that a teacher can evaluate a student's knowledge. This means that displaying more of that knowledge and explaining more of the reasoning behind a claim typically does more to fulfill the goals of an assignment. The difference between an essay and a multiple-choice test is that an essay typically gives students more room to demonstrate a thought process in action. It is a way of having students "show their work," and so essays that jump to the end without that work are setting themselves up for failure.

Learning, Not Listing

Aristotle once claimed, "It is the mark of an educated mind to be able to entertain a thought without accepting it." Many students are familiar with the idea of search. The internet makes it tremendously easy to search for information—and misinformation. It takes a few seconds to find millions of results on almost any topic. However, that is not research. Where, after all, is the re in all of that? The Cambridge Dictionary offers the following definition:

Research (verb): to study a subject in detail, especially in order to discover new information or reach a new understanding.

Nothing about that implies a casual effort to type a couple of words into a search engine and assume that a result on the first screen is probably good enough. Nor does it imply that a weighted or biased search question, like "why is animal testing bad," will get worthwhile results. Instead, research requires that the researcher searches, learns a little, and then searches again. Additionally, the level of detail matters. Research often involves knowing enough to understand the deeper levels of the subject.

For example, if someone is researching the efficacy of animal testing, they might encounter a claim that mice share a certain percentage of their DNA with human beings. Even this is problematic, because measuring DNA by percentage isn’t as simple as it sounds. However, estimates range from 85% to 97.5%, with the latter number being the one that refers to the active or “working” DNA. Unfortunately, the casual reader still knows nothing of value about using mice for human research. Why? Because the casual reader doesn’t know if the testing being done involves the 97.5% or the 2.5%, or even if the test is one where it can be separated.

To put it bluntly, Abraham Lincoln once had a trip to the theater that started 97.5% the same as his other trips to the theater.

In order for casual readers to make sense of this single factoid, they need to know more about DNA and about the nature of the tests being performed on the animals. They probably need to understand biology at least a little. They certainly need to understand math at a high enough level to understand basic statistics. All of this, of course, assumes that the student has also decided that the source itself is worthy of trust. In other words, a search engine might turn up one activist website proclaiming that "mice are almost identical to humans" and another insisting that "mice lack 300 million base pairs that humans have." Neither source is lying, but neither helps the reader understand what is being talked about (if the sources themselves even understand).

Before a student can write a decent paper, the student needs to have decent information. Finding that information requires research, not search. Often, student writers (and other rhetors) mistakenly begin with a presupposed position that they then try to force into the confines of their rhetoric.  Argumentation requires an investigation into an issue before any claim is proffered for discussion.  The ‘thesis statement’ comes last; in many ways it is the product of extensive investigation and learning. A student should be equally open to and skeptical of all sources.

Skepticism in Research

Skepticism is not doubting everyone who disagrees with you. True skepticism is doubting all claims equally and requiring every claim to be held to the same burden of proof (not just the claims we disagree with). By far, the biggest misconception novice writers struggle with is the idea that it is okay to use a low-quality source (like a blog, or a news article, or an activist organization) because they got “just facts” from that source.

The assumption seems to be that all presentations of fact are equally presented, or that sources don’t lie. However, even leaving aside that many times people do lie in their own interests, which facts are presented and how they are presented changes immensely. There’s no such thing as “just facts.” The presentation of facts matters, as does how they are gathered. Source evaluation is a fundamental aspect of advanced academic writing.

“Lies” of Omission and Inclusion: One of the simplest ways to misrepresent information is simply to exclude material that could weaken the stance favored by the author. This tactic is frequently called stacking the deck, and it is obviously dishonest. However, there is a related problem known as observational bias, wherein the author might not have a single negative intention whatsoever. Instead, xe simply pays attention only to the evidence that supports xir cause, because that is what is relevant to xir.

  • Arguments in favor of nuclear energy as a “clean” fuel source frequently leave out the problem of what to do with the spent fuel rods (i.e. radioactive waste). Similarly, arguments against nuclear energy frequently count only dramatic failures of older plants and not the safe operation of numerous modern plants; another version is to highlight the health risks of nuclear energy without providing the context of health risks caused by equivalent fuel sources (e.g. coal or natural gas).
  • Those who rely on personal observation in support of the idea that Zoomers are lazy might count only the times they see younger people playing games or relaxing, ignoring the number of times they see people that same age working jobs or—more accurately—the times they don’t see people that age because they are too busy helping around the house or doing homework.

A source that simply lists ideas without providing evidence or justifying how the evidence supports its conclusions is likely not a source that meets the rigor needed for an academic argument. While later chapters will go into the subject in greater detail, these guidelines suggest that, in general, news media are not ideal sources. Neither are activist webpages, blogs, or government outlets. As later chapters will explore, all of these "sources" are not in fact sources of information. These documents do not create information; they simply report it. Instead, finding the original studies (performed by experts, typically controlled for bias, and reviewed by other experts before being published) is a much better alternative.

Examining Sources Using the Toulmin Model

On most issues, contradictory evidence exists and the researcher must review the options in a way that establishes one piece of evidence as more verifiable, or as otherwise preferable, to the other.  In essence, researchers must be able to compare arguments to one another.

Stephen Toulmin introduced a model of analyzing arguments that breaks arguments down into three essential components (the claim, the data that support it, and the warrant connecting the data to the claim) and three additional factors. His model provides a widely used and accessible means of both studying and drafting arguments.

[Figure: diagram of the Toulmin model of argument]

The Toulmin model can be complicated with three other components, as well: backing, rebuttals, and qualifiers. Backing represents support of the data (e.g. ‘the thermostat has always been reliable in the past’ or ‘these studies have been replicated dozens of times with many different populations’). Rebuttals, on the other hand, admit limits to the argument (e.g. ‘unless the thermostat is broken’ or ‘if you care about your long-term health’). Finally, qualifiers indicate how certain someone is about the argument (e.g. ‘it is definitely too cold in here’ is different than ‘it might be too cold in here’; likewise, ‘you might want to stop smoking’ is a lot less forceful than ‘you absolutely should stop smoking’).

At a minimum, an argument (either one made by the student or by a source being evaluated) should have all three of the primary components, even if they are incorporated together. However, most developed arguments (even short answers on tests or simple blog posts) should have all six elements in place. If they are missing, it is up to the reader to go looking for what is missing and to try to figure out why it might have been left out.

Here is an example of an underdeveloped argument that is simply phrased like an absolute claim of fact. It is a poor argument, in that it offers none of the rationale behind what it says—it just insists that it is correct:

“Other countries hate the United States for a reason.”

What other countries? What reason? Is it just one reason, or is it one reason per country?

By contrast, here is an argument that has at least some minimal development:

“In the eyes of many (Qualifier), the United States has earned the hatred of other countries (Claim). The U.S. involvement in Iranian politics alone has earned the country criticism (Data). By helping to overthrow a democratically elected leader in favor of a monarch in 1953, the U.S. acted in a manner that seemed hypocritical and self-interested (Backing). While many countries do act in favor of their own interests (Rebuttal), the U.S. publicly championing democracy while covertly acting against it serves to justify criticism of the country (Warrant).”

Is there room to disagree with this argument? Yes. However, this argument provides its rationale, it offers at least some sort of evidence for its claims, and it provides a place to begin engagement. A researcher who wishes to know more about this argument can go looking into the history of U.S.-Iran relations, for example.

When reviewing a source, or making their own arguments, researchers should consider the following questions. Is there evidence that can be verified and examined by others (in the same spirit as the scientific method)? More specifically:

  • What is the claim?
  • What data backs up this claim?
  • What assumptions do I have to make to consider this evidence to be adequate support?

The various pieces of data which support claims in the Toulmin model are often called into question.  Studies are refuted, statistics are countered with rival numbers, and their applicability to the claim in question is often murky.  Evidence—whether offered as matters of fact or as subjective considerations—does not exist in a vacuum.  Data are themselves claims.  If the supporting data are accepted as true, the argument has a generally accepted conclusion.  Such pieces of 'evidence' are contentions.

Although Toulmin distinguishes between qualifiers and rebuttal conditions, such a distinction is difficult to maintain in practice.  The important consideration—the one acknowledged by both terms—is that unconditional or absolute claims are difficult to support.  Specific fields have their own ways of hedging their bets.  Science has its error bar (Sagan) and the terminology of probability.  Statistics and polling have a margin of error.  Ethnography has its confrontation of personal bias.  When a rhetor expresses the limitations of a given claim, when the unconditional becomes conditional, claims become more than categorical propositions or thesis statements.  They become arguments.

Example 1:  Here is a minimalistic overview of one claim on the topic of traffic cameras.

  • Claim = Traffic cameras increase minor accidents
  • Evidence = David Kidwell and Alex Richards of The Chicago Tribune performed a study that was later cited by ABC News.
  • Assumptions = This study was conducted honestly and reasonably represents the reality of accidents around these cameras (i.e. I can trust the agenda and the methods of the Chicago Tribune staff).

Example 2 : And here is a second claim on the same subject.

  • Claim = The types of accidents by traffic cameras tend to be less severe
  • Evidence = The Insurance Institute for Highway Safety examined national trends and compared medical reports, police reports, and various bills, posting the results on their website.
  • Assumptions = If the IIHS has a bias, it would be toward fewer accidents, or at least less severe accidents (because this means they have to pay less money out).

As an Essay Fragment : According to some, traffic cameras actually increase accidents. A study conducted by the Chicago Tribune found that rear-end collisions increased when traffic cameras were installed, meaning that they make things worse, not better (Kidwell and Richards). However, the Insurance Institute for Highway Safety points out that while there are sometimes increases in minor collisions, the number of crashes resulting in injuries actually decreases.

  • “According to…not better (Kidwell and Richards).” This uses a parenthetical source citation to provide a “link” to the evidence and to invite readers to examine both the data and the warrants.
  • “However, the…decreases.” This uses a signal statement to introduce the source of the evidence first, often because the source carries enough credibility that the author hopes it will impress the reader.

Note that this is not a particularly powerful fragment; it is simply the minimum level of rigor that a student should offer (or look for) in an academic essay or in an academic source.

Academic arguments typically make concessions.  These concessions help define the scope of the argument and the range of the inquiry.  In Section 1, I mentioned a relatively straightforward value claim: “Plan X is bad.”  Argumentation engages such value claims and defines their scope and limits.  Who is plan X bad for?  By what standards?  Why then is anyone in favor of plan X?  A more practical approach could be “If you favor Y, then Plan X is bad.”  This is a concession, of sorts—Plan X is only bad if you favor Y.  The argument admits that if you do not, then Plan X might not be all that bad, after all.

Such a concession, worded in such a way, has added merit.  It functions as what Aristotle would have called an artistic proof, although maybe not an enthymeme.  It establishes a bond between the rhetor and the audience through the shared favoring of Y; it nurtures consubstantiality—the basis of what Burke calls identification.  Clearly, concessions can be made in a way that both prevents some counterarguments from applying and still furthers a rhetorical point.

Such phrasing is practical, and only truly cynical interrogators would consider it sinister.  An inversion of this approach is possible.  “Unless you favor Y, Plan X is bad.”  So long as Y is sufficiently negative in the minds of the audience, the rhetor loses no actual impact here.  Here, connecting Y to X might require substantiation on the part of the rhetor, because the concession has become, itself, a justification of why X is bad (it is related to or involves Y).  The additional rhetorical power—gained through positive and negative associations—often compensates for such additional effort.

Research, Evidence, and Written Arguments Copyright © by jsunderb. All Rights Reserved.


Introduction


Have you ever wondered whether the violence you see on television affects your behavior? Are you more likely to behave aggressively in real life after watching people behave violently in dramatic situations on the screen? Or, could seeing fictional violence actually get aggression out of your system, causing you to be more peaceful? How are children influenced by the media they are exposed to? A psychologist interested in the relationship between behavior and exposure to violent images might ask these very questions.

Since ancient times, humans have been concerned about the effects of new technologies on our behaviors and thinking processes. The Greek philosopher Socrates, for example, worried that writing—a new technology at that time—would diminish people’s ability to remember because they could rely on written records rather than committing information to memory. In our world of rapidly changing technologies, questions about their effects on our daily lives and their resulting long-term impacts continue to emerge. In addition to the impact of screen time (on smartphones, tablets, computers, and gaming), technology is emerging in our vehicles (such as GPS and smart cars) and residences (with devices like Alexa or Google Home and doorbell cameras). As these technologies become integrated into our lives, we are faced with questions about their positive and negative impacts. Many of us find ourselves with a strong opinion on these issues, only to find the person next to us bristling with the opposite view.

How can we go about finding answers that are supported not by mere opinion, but by evidence that we can all agree on? The findings of psychological research can help us navigate issues like this.


This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction
  • Authors: Rose M. Spielman, William J. Jenkins, Marilyn D. Lovett
  • Publisher/website: OpenStax
  • Book title: Psychology 2e
  • Publication date: Apr 22, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/psychology-2e/pages/1-introduction
  • Section URL: https://openstax.org/books/psychology-2e/pages/2-introduction

© Jan 6, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


Research Methods in Early Childhood: An Introductory Guide

Student Resources, Chapter 2: The Research Proposal

Abdulai, R.T. and Owusu-Ansah, A. (2014) ‘Essential ingredients of a good research proposal for undergraduate and postgraduate students in the social sciences’, SAGE Open , 4(3): 1–11. http://journals.sagepub.com/doi/pdf/10.1177/2158244014548178

This article is a comprehensive guide to writing research proposals. It is recommended that you read Chapters 1 and 2 in the textbook before you read this article as the article looks at both research proposals and research design. You will discover that terminology is not fixed. Different authors will use different terms to describe the same thing. For example, in this article the research question is called the research objective. Your institution may expect you to use specific terminology, but as long as you define your terms and use them consistently then these differences are of no account.

Wharewera-Mika, J., Cooper, E., Kool, B., Pereira, S. and Kelly, P. (2015) ‘Caregivers’ voices: the experiences of caregivers of children who sustained serious accidental and non-accidental head injury in early childhood’, Clinical Child Psychology and Psychiatry , 21(2): 268–86. http://journals.sagepub.com/doi/pdf/10.1177/1359104515589636

This article looks at parents' experiences of caring for children who received a head injury before the age of 5. It is a New Zealand study. Imagine you were the researchers at the beginning of the research process: they were required to present a research proposal to one of New Zealand's Health and Disability Ethics Committees. Outline what their proposal might look like (minus the literature review).



