How to Write Limitations of the Study (with examples)

This blog emphasizes the importance of recognizing and effectively writing about limitations in research. It discusses the types of limitations, their significance, and provides guidelines for writing about them, highlighting their role in advancing scholarly research.

Updated on August 24, 2023


No matter how well thought out, every research endeavor encounters challenges. There is simply no way to predict all possible variances throughout the process.

These uncharted boundaries and abrupt constraints are known as limitations in research. Identifying and acknowledging limitations is crucial for conducting rigorous studies. Limitations provide context and shed light on gaps in the prevailing inquiry and literature.

This article explores the importance of recognizing limitations and discusses how to write them effectively. By interpreting limitations in research and considering prevalent examples, we aim to reframe the perception from shameful mistakes to respectable revelations.

What are limitations in research?

In the clearest terms, research limitations are the practical or theoretical shortcomings of a study that are often outside of the researcher’s control. While these weaknesses limit the generalizability of a study’s conclusions, they also present a foundation for future research.

Sometimes limitations arise from tangible circumstances like time and funding constraints, or equipment and participant availability. Other times the rationale is more obscure and buried within the research design. Common types of limitations and their ramifications include:

  • Theoretical: limits the scope, depth, or applicability of a study.
  • Methodological: limits the quality, quantity, or diversity of the data.
  • Empirical: limits the representativeness, validity, or reliability of the data.
  • Analytical: limits the accuracy, completeness, or significance of the findings.
  • Ethical: limits the access, consent, or confidentiality of the data.

Regardless of how, when, or why they arise, limitations are a natural part of the research process and should never be ignored. Like every other aspect of a study, they serve a vital purpose.

Why is identifying limitations important?

Whether to seek acceptance or avoid struggle, humans often instinctively hide flaws and mistakes. Merging this thought process into research by attempting to hide limitations, however, is a bad idea. It has the potential to negate the validity of outcomes and damage the reputation of scholars.

By identifying and addressing limitations throughout a project, researchers strengthen their arguments and curtail the chance of peer censure based on overlooked mistakes. Pointing out these flaws shows an understanding of variable limits and a scrupulous research process.

Showing awareness of and taking responsibility for a project’s boundaries and challenges validates the integrity and transparency of a researcher. It further demonstrates that the researchers understand the applicable literature and have thoroughly evaluated their chosen research methods.

Presenting limitations also benefits readers by providing context for research findings. It guides them to interpret the project’s conclusions only within the scope of very specific conditions. By confining generalization of the findings to the study’s actual boundaries rather than letting it become too broad, limitations boost a study’s credibility.

Limitations are true assets to the research process. They highlight opportunities for future research. When researchers identify the limitations of their particular approach to a study question, they enable precise transferability and improve chances for reproducibility. 

Simply stating a project’s limitations is not adequate for spurring further research, though. To spark the interest of other researchers, these acknowledgements must come with thorough explanations regarding how the limitations affected the current study and how they can potentially be overcome with amended methods.

How to write limitations

Typically, information about a study’s limitations is situated either at the beginning of the discussion section, to provide context for readers, or at the conclusion of the discussion section, to acknowledge the need for further research. Placement varies, however, depending on the target journal or publication guidelines.

Don’t hide your limitations

Do not bury a limitation in the body of the paper unless it has a unique connection to a topic in that section. Even then, it should be reiterated with the other limitations or at the conclusion of the discussion section. Wherever it is included in the manuscript, ensure that the limitations section is prominently positioned and clearly introduced.

While transparency calls for a comprehensive approach to disclosing limitations, it is not necessary to discuss everything that could potentially have gone wrong during the research study. If the introduction made no commitment to investigate an issue, that issue is not a limitation of the research. Consider the term ‘limitations’ literally and ask, “Did this significantly change or limit the possible outcomes?” Then qualify the occurrence either as a limitation to include in the current manuscript or as an idea to note for other projects.

Writing limitations

Once the limitations are concretely identified and it is decided where they will be included in the paper, researchers are ready for the writing task. Including only what is pertinent, keeping explanations detailed but concise, and employing the following guidelines is key for crafting valuable limitations:

1) Identify and describe the limitations: Clearly introduce the limitation by classifying its form and specifying its origin. For example:

  • An unintentional bias encountered during data collection
  • An intentional use of unplanned post-hoc data analysis

2) Explain the implications: Describe how the limitation potentially influences the study’s findings and how the validity and generalizability are subsequently impacted. Provide examples and evidence to support claims of the limitations’ effects without making excuses or exaggerating their impact. Overall, be transparent and objective in presenting the limitations, without undermining the significance of the research.

3) Provide alternative approaches for future studies: Offer specific suggestions for potential improvements or avenues for further investigation. Demonstrate a proactive approach by encouraging future research that addresses the identified gaps and, therefore, expands the knowledge base.

Whether presenting limitations as an individual section within the manuscript or as a subtopic in the discussion area, authors should use clear headings and straightforward language to facilitate readability. There is no need to complicate limitations with jargon, computations, or complex datasets.

Examples of common limitations

Limitations are generally grouped into two categories: methodology and research process.

Methodology limitations

Methodology may include limitations due to:

  • Sample size
  • Lack of available or reliable data
  • Lack of prior research studies on the topic
  • Measure used to collect the data
  • Self-reported data

Methodology limitation example: the researcher addresses how the large sample size requires a reassessment of the measures used to collect and analyze the data.

Research process limitations

Limitations during the research process may arise from:

  • Access to information
  • Longitudinal effects
  • Cultural and other biases
  • Language fluency
  • Time constraints

Research process limitation example: the author points out that the model’s estimates are based on potentially biased observational studies.

Final thoughts

Successfully proving theories and touting great achievements are only two very narrow goals of scholarly research. The true passion and greatest efforts of researchers come more in the form of confronting assumptions and exploring the obscure.

In many ways, recognizing and sharing the limitations of a research study both allows for and encourages this type of discovery that continuously pushes research forward. By using limitations to provide a transparent account of the project's boundaries and to contextualize the findings, researchers pave the way for even more robust and impactful research in the future.

Charla Viera, MS



Organizing Your Social Sciences Research Paper: Limitations of the Study (USC Libraries Research Guides)

The limitations of the study are those characteristics of design or methodology that impacted or influenced the interpretation of the findings from your research. Study limitations are the constraints placed on the ability to generalize from the results, to further describe applications to practice, and/or related to the utility of findings. They may result from the ways in which you initially chose to design the study, from the method used to establish internal and external validity, or from unanticipated challenges that emerged during the study.

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Theofanidis, Dimitrios and Antigoni Fountouki. "Limitations and Delimitations in the Research Process." Perioperative Nursing 7 (September-December 2018): 155-163.

Importance of...

Always acknowledge a study's limitations. It is far better that you identify and acknowledge your study’s limitations than to have them pointed out by your professor and have your grade lowered because you appeared to have ignored them or didn't realize they existed.

Keep in mind that acknowledgment of a study's limitations is an opportunity to make suggestions for further research. If you do connect your study's limitations to suggestions for further research, be sure to explain the ways in which these unanswered questions may become more focused because of your study.

Acknowledgment of a study's limitations also provides you with opportunities to demonstrate that you have thought critically about the research problem, understood the relevant literature published about it, and correctly assessed the methods chosen for studying the problem. A key objective of the research process is not only discovering new knowledge but also to confront assumptions and explore what we don't know.

Claiming limitations is a subjective process because you must evaluate the impact of those limitations . Don't just list key weaknesses and the magnitude of a study's limitations. To do so diminishes the validity of your research because it leaves the reader wondering whether, or in what ways, limitation(s) in your study may have impacted the results and conclusions. Limitations require a critical, overall appraisal and interpretation of their impact. You should answer the question: do these problems with errors, methods, validity, etc. eventually matter and, if so, to what extent?

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com.

Descriptions of Possible Limitations

All studies have limitations . However, it is important that you restrict your discussion to limitations related to the research problem under investigation. For example, if a meta-analysis of existing literature is not a stated purpose of your research, it should not be discussed as a limitation. Do not apologize for not addressing issues that you did not promise to investigate in the introduction of your paper.

Here are examples of limitations related to methodology and the research process that you may need to describe and discuss in terms of how they possibly impacted your results. Note that descriptions of limitations should be stated in the past tense because they were discovered after you completed your research.

Possible Methodological Limitations

  • Sample size -- the number of the units of analysis you use in your study is dictated by the type of research problem you are investigating. Note that, if your sample size is too small, it will be difficult to find significant relationships in the data, as statistical tests normally require a larger sample to be considered representative of the groups of people to whom results will be generalized or transferred. Note that sample size is generally less relevant in qualitative research if explained in the context of the research problem.
  • Lack of available and/or reliable data -- a lack of data or of reliable data will likely require you to limit the scope of your analysis, the size of your sample, or it can be a significant obstacle in finding a trend and a meaningful relationship. You need to not only describe these limitations but provide cogent reasons why you believe data is missing or is unreliable. However, don’t just throw up your hands in frustration; use this as an opportunity to describe a need for future research based on designing a different method for gathering data.
  • Lack of prior research studies on the topic -- citing prior research studies forms the basis of your literature review and helps lay a foundation for understanding the research problem you are investigating. Depending on the currency or scope of your research topic, there may be little, if any, prior research on your topic. Before assuming this to be true, though, consult with a librarian! In cases when a librarian has confirmed that there is little or no prior research, you may be required to develop an entirely new research typology [for example, using an exploratory rather than an explanatory research design ]. Note again that discovering a limitation can serve as an important opportunity to identify new gaps in the literature and to describe the need for further research.
  • Measure used to collect the data -- sometimes it is the case that, after completing your interpretation of the findings, you discover that the way in which you gathered data inhibited your ability to conduct a thorough analysis of the results. For example, you regret not including a specific question in a survey that, in retrospect, could have helped address a particular issue that emerged later in the study. Acknowledge the deficiency by stating a need for future researchers to revise the specific method for gathering data.
  • Self-reported data -- whether you are relying on pre-existing data or you are conducting a qualitative research study and gathering the data yourself, self-reported data is limited by the fact that it rarely can be independently verified. In other words, you have to take the accuracy of what people say, whether in interviews, focus groups, or on questionnaires, at face value. However, self-reported data can contain several potential sources of bias that you should be alert to and note as limitations. These biases become apparent if they are incongruent with data from other sources. These are: (1) selective memory [remembering or not remembering experiences or events that occurred at some point in the past]; (2) telescoping [recalling events that occurred at one time as if they occurred at another time]; (3) attribution [the act of attributing positive events and outcomes to one's own agency, but attributing negative events and outcomes to external forces]; and, (4) exaggeration [the act of representing outcomes or embellishing events as more significant than is actually suggested from other data].
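The sample-size point above can be made concrete with a quick simulation (a hypothetical Python sketch, not part of the original guide, using numpy and scipy): given the same true effect, small samples routinely fail to reach statistical significance, while larger samples detect it reliably.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def detection_rate(n_per_group, effect=0.5, trials=2000, alpha=0.05):
    """Fraction of simulated two-group studies whose t-test detects a
    true mean difference of `effect` (in standard-deviation units)."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n_per_group)   # control group
        b = rng.normal(effect, 1.0, n_per_group)  # group with a real effect
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / trials

# Same true effect, very different odds of "finding" it:
print(f"n = 10 per group:  effect detected in {detection_rate(10):.0%} of studies")
print(f"n = 100 per group: effect detected in {detection_rate(100):.0%} of studies")
```

With 10 participants per group the simulated studies miss the real effect most of the time; with 100 per group they almost always find it. This is exactly the kind of quantified statement a limitations section can make about an underpowered sample.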

Possible Limitations of the Researcher

  • Access -- if your study depends on having access to people, organizations, data, or documents and, for whatever reason, access is denied or limited in some way, the reasons for this need to be described. Also, include an explanation of why being denied or limited access did not prevent you from following through on your study.
  • Longitudinal effects -- unlike your professor, who can literally devote years [even a lifetime] to studying a single topic, the time available to investigate a research problem and to measure change or stability over time is constrained by the due date of your assignment. Be sure to choose a research problem that does not require an excessive amount of time to complete the literature review, apply the methodology, and gather and interpret the results. If you're unsure whether you can complete your research within the confines of the assignment's due date, talk to your professor.
  • Cultural and other types of bias -- we all have biases, whether we are conscious of them or not. Bias is when a person, place, event, or thing is viewed or shown in a consistently inaccurate way. Bias is usually negative, though one can have a positive bias as well, especially if that bias reflects your reliance on research that only supports your hypothesis. When proofreading your paper, be especially critical in reviewing how you have stated a problem, selected the data to be studied, what may have been omitted, the manner in which you have ordered events, people, or places, how you have chosen to represent a person, place, or thing, or to name a phenomenon, and which words you have used for their positive or negative connotations. NOTE: If you detect bias in prior research, it must be acknowledged and you should explain what measures were taken to avoid perpetuating that bias. For example, if a previous study only used boys to examine how music education supports effective math skills, describe how your research expands the study to include girls.
  • Fluency in a language -- if your research focuses, for example, on measuring the perceived value of after-school tutoring among Mexican-American ESL [English as a Second Language] students and you are not fluent in Spanish, you are limited in being able to read and interpret Spanish language research studies on the topic or to speak with these students in their primary language. This deficiency should be acknowledged.

Aguinis, Herman and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Senunyeme, Emmanuel K. Business Research Methods. Powerpoint Presentation. Regent University of Science and Technology; ter Riet, Gerben et al. “All That Glitters Isn't Gold: A Survey on Acknowledgment of Limitations in Biomedical Studies.” PLOS One 8 (November 2013): 1-6.

Structure and Writing Style

Information about the limitations of your study are generally placed either at the beginning of the discussion section of your paper so the reader knows and understands the limitations before reading the rest of your analysis of the findings, or, the limitations are outlined at the conclusion of the discussion section as an acknowledgement of the need for further study. Statements about a study's limitations should not be buried in the body [middle] of the discussion section unless a limitation is specific to something covered in that part of the paper. If this is the case, though, the limitation should be reiterated at the conclusion of the section.

If you determine that your study is seriously flawed due to important limitations , such as, an inability to acquire critical data, consider reframing it as an exploratory study intended to lay the groundwork for a more complete research study in the future. Be sure, though, to specifically explain the ways that these flaws can be successfully overcome in a new study.

But, do not use this as an excuse for not developing a thorough research paper! Review the tab in this guide for developing a research topic . If serious limitations exist, it generally indicates a likelihood that your research problem is too narrowly defined or that the issue or event under study is too recent and, thus, very little research has been written about it. If serious limitations do emerge, consult with your professor about possible ways to overcome them or how to revise your study.

When discussing the limitations of your research, be sure to:

  • Describe each limitation in detailed but concise terms;
  • Explain why each limitation exists;
  • Provide the reasons why each limitation could not be overcome using the method(s) chosen to acquire or gather the data [cite to other studies that had similar problems when possible];
  • Assess the impact of each limitation in relation to the overall findings and conclusions of your study; and,
  • If appropriate, describe how these limitations could point to the need for further research.

Remember that the method you chose may be the source of a significant limitation that has emerged during your interpretation of the results [for example, you didn't interview a group of people that you later wish you had]. If this is the case, don't panic. Acknowledge it, and explain how applying a different or more robust methodology might address the research problem more effectively in a future study. An underlying goal of scholarly research is not only to show what works, but to demonstrate what doesn't work or what needs further clarification.

Aguinis, Herman and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Ioannidis, John P.A. "Limitations are not Properly Acknowledged in the Scientific Literature." Journal of Clinical Epidemiology 60 (2007): 324-329; Pasek, Josh. Writing the Empirical Social Science Research Paper: A Guide for the Perplexed. January 24, 2012. Academia.edu; Structure: How to Structure the Research Limitations Section of Your Dissertation. Dissertations and Theses: An Online Textbook. Laerd.com; What Is an Academic Paper? Institute for Writing and Rhetoric. Dartmouth College; Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.

Writing Tip

Don't Inflate the Importance of Your Findings!

After all the hard work and long hours devoted to writing your research paper, it is easy to get carried away with attributing unwarranted importance to what you’ve done. We all want our academic work to be viewed as excellent and worthy of a good grade, but it is important that you understand and openly acknowledge the limitations of your study. Inflating the importance of your study's findings could be perceived by your readers as an attempt to hide its flaws or to encourage a biased interpretation of the results. A small measure of humility goes a long way!

Another Writing Tip

Negative Results are Not a Limitation!

Negative evidence refers to findings that unexpectedly challenge rather than support your hypothesis. If you didn't get the results you anticipated, it may mean your hypothesis was incorrect and needs to be reformulated. Or, perhaps you have stumbled onto something unexpected that warrants further study. Moreover, the absence of an effect may be very telling in many situations, particularly in experimental research designs. In any case, your results may very well be of importance to others even though they did not support your hypothesis. Do not fall into the trap of thinking that results contrary to what you expected are a limitation of your study. If you carried out the research well, they are simply your results and only require additional interpretation.

Lewis, George H. and Jonathan F. Lewis. “The Dog in the Night-Time: Negative Evidence in Social Research.” The British Journal of Sociology 31 (December 1980): 544-558.

Yet Another Writing Tip

Sample Size Limitations in Qualitative Research

Sample sizes are typically smaller in qualitative research because, as the study goes on, acquiring more data does not necessarily lead to more information. This is because one occurrence of a piece of data, or a code, is all that is necessary to ensure that it becomes part of the analysis framework. However, it remains true that sample sizes that are too small cannot adequately support claims of having achieved valid conclusions and sample sizes that are too large do not permit the deep, naturalistic, and inductive analysis that defines qualitative inquiry. Determining adequate sample size in qualitative research is ultimately a matter of judgment and experience in evaluating the quality of the information collected against the uses to which it will be applied and the particular research method and purposeful sampling strategy employed. If the sample size is found to be a limitation, it may reflect your judgment about the methodological technique chosen [e.g., single life history study versus focus group interviews] rather than the number of respondents used.

Boddy, Clive Roland. "Sample Size for Qualitative Research." Qualitative Market Research: An International Journal 19 (2016): 426-432; Huberman, A. Michael and Matthew B. Miles. "Data Management and Analysis Methods." In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 428-444; Blaikie, Norman. "Confounding Issues Related to Determining Sample Size in Qualitative Research." International Journal of Social Research Methodology 21 (2018): 635-641; Oppong, Steward Harrison. "The Problem of Sampling in Qualitative Research." Asian Journal of Management Sciences and Education 2 (2013): 202-210.

Source: USC Libraries, https://libguides.usc.edu/writingguide (last updated Sep 27, 2024)


21 Research Limitations Examples

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.



Research limitations refer to the potential weaknesses inherent in a study. All studies have limitations of some sort, meaning declaring limitations doesn’t necessarily need to be a bad thing, so long as your declaration of limitations is well thought-out and explained.

Rarely is a study perfect. Researchers have to make trade-offs when developing their studies, which are often based upon practical considerations such as time and monetary constraints, weighing the breadth of participants against the depth of insight, and choosing one methodology or another.

In research, studies can have limitations such as limited scope, researcher subjectivity, and lack of available research tools.

Acknowledging the limitations of your study should be seen as a strength. It demonstrates transparency, humility, and adherence to the scientific method, and it can bolster the integrity of the study. It can also inform future research direction.

Typically, scholars will explore the limitations of their study in either their methodology section, their conclusion section, or both.

Research Limitations Examples

Qualitative and quantitative research offer different perspectives and methods in exploring phenomena, each with its own strengths and limitations. So, I’ve split the limitations examples sections into qualitative and quantitative below.

Qualitative Research Limitations

Qualitative research seeks to understand phenomena in-depth and in context. It focuses on the ‘why’ and ‘how’ questions.

It’s often used to explore new or complex issues, and it provides rich, detailed insights into participants’ experiences, behaviors, and attitudes. However, these strengths also create certain limitations, as explained below.

1. Subjectivity

Qualitative research often requires the researcher to interpret subjective data. One researcher may examine a text and identify different themes or concepts as more dominant than others.

Close qualitative readings of texts are necessarily subjective – and while this may be a limitation, qualitative researchers argue this is the best way to deeply understand everything in context.

Suggested Solution and Response: To minimize subjectivity bias, you could consider cross-checking your own readings of themes and data against other scholars’ readings and interpretations. This may involve giving the raw data to a supervisor or colleague and asking them to code the data separately, then coming together to compare and contrast results.

2. Researcher Bias

The concept of researcher bias is related to, but slightly different from, subjectivity.

Researcher bias refers to the perspectives and opinions you bring with you when doing your research.

For example, a researcher who is explicitly of a certain philosophical or political persuasion may bring that persuasion to bear when interpreting data.

In many scholarly traditions, we will attempt to minimize researcher bias through the utilization of clear procedures that are set out in advance or through the use of statistical analysis tools.

However, in other traditions, such as in postmodern feminist research, declaration of bias is expected, and acknowledgment of bias is seen as a positive because, in those traditions, it is believed that bias cannot be eliminated from research, so instead, it is a matter of integrity to present it upfront.

Suggested Solution and Response: Acknowledge the potential for researcher bias and, depending on your theoretical framework , accept this, or identify procedures you have taken to seek a closer approximation to objectivity in your coding and analysis.

3. Generalizability

If you’re struggling to find a limitation to discuss in your own qualitative research study, then this one is for you: all qualitative research, of all persuasions and perspectives, cannot be generalized.

This is a core feature that sets qualitative data and quantitative data apart.

The point of qualitative data is to select case studies and similarly small corpora and dig deep through in-depth analysis and thick description of data.

Often, this will also mean that you have a non-randomized sample.

While this is a positive – you’re going to get some really deep, contextualized, interesting insights – it also means that the findings may not be generalizable to a larger population, because your small sample may not be representative of that population.

Suggested Solution and Response: Suggest future studies that take a quantitative approach to the question.

4. The Hawthorne Effect

The Hawthorne effect refers to the phenomenon where research participants change their behavior when they’re aware that they are being observed.

This effect was first identified by Elton Mayo, who conducted studies of the effects of various factors on workers’ productivity. He noticed that no matter what he did – turning up the lights, turning down the lights, etc. – worker outputs increased compared to before the study took place.

Mayo realized that the mere act of observing the workers made them work harder – his observation was what was changing behavior.

So, if you’re looking for a potential limitation to name for your observational research study , highlight the possible impact of the Hawthorne effect (and how you could reduce your footprint or visibility in order to decrease its likelihood).

Suggested Solution and Response: Highlight ways you have attempted to reduce your footprint while in the field, and guarantee anonymity to your research participants.

5. Replicability

Quantitative research has a great benefit in that the studies are replicable – a researcher can get a similar sample size, duplicate the variables, and re-test a study. But you can’t do that in qualitative research.

Qualitative research relies heavily on context – a specific case study or specific variables that make a certain instance worthy of analysis. As a result, it’s often difficult to re-enter the same setting with the same variables and repeat the study.

Furthermore, the individual researcher’s interpretation is more influential in qualitative research, meaning that even if a new researcher enters an environment and makes observations, their observations may be different because subjectivity comes into play much more. This doesn’t necessarily make the research bad (great insights can be made in qualitative research), but it certainly does demonstrate a weakness of qualitative research.

6. Limited Scope

“Limited scope” is perhaps one of the most common limitations listed by researchers – and while this is often a catch-all way of saying, “well, I’m not studying that in this study”, it’s also a valid point.

No study can explore everything related to a topic. At some point, we have to make decisions about what’s included in the study and what is excluded from the study.

So, you could say that a limitation of your study is that it doesn’t look at an extra variable or concept that’s certainly worthy of study but will have to be explored in your next project because this project has a clearly and narrowly defined goal.

Suggested Solution and Response: Be clear about what’s in and out of the study when writing your research question.

7. Time Constraints

This is also a catch-all claim you can make about your research project: that you would have included more people in the study, looked at more variables, and so on. But you’ve got to submit this thing by the end of next semester! You’ve got time constraints.

And time constraints are a recognized reality in all research.

But this means you’ll need to explain how time has limited your decisions. As with “limited scope”, this may mean that you had to study a smaller group of subjects, limit the amount of time you spent in the field, and so forth.

Suggested Solution and Response: Suggest future studies that will build on your current work, possibly as a PhD project.

8. Resource Intensiveness

Qualitative research can be expensive due to the cost of transcription, the involvement of trained researchers, and potential travel for interviews or observations.

So, resource intensiveness is similar to the time constraints concept. If you don’t have the funds, you have to make decisions about which tools to use, which statistical software to employ, and how many research assistants you can dedicate to the study.

Suggested Solution and Response: Suggest future studies that will gain more funding on the back of this ‘ exploratory study ‘.

9. Coding Difficulties

Data analysis in qualitative research often involves coding, which can be subjective and complex, especially when dealing with ambiguous or contradictory data.

After naming this as a limitation in your research, it’s important to explain how you’ve attempted to address this. Some ways to ‘limit the limitation’ include:

  • Triangulation: Have 2 other researchers code the data as well and cross-check your results with theirs to identify outliers that may need to be re-examined, debated with the other researchers, or removed altogether.
  • Procedure: Use a clear coding procedure to demonstrate reliability in your coding process. I personally use the thematic network analysis method outlined in this academic article by Attride-Stirling (2001).

Suggested Solution and Response: Triangulate your coding findings with colleagues, and follow a thematic network analysis procedure.
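If you do ask colleagues to code the data independently, you can also quantify how closely your codings agree before debating discrepancies. As an illustrative sketch (my addition, not a method described in this article, and the theme labels are hypothetical), Cohen's kappa measures agreement between two coders while correcting for chance:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Agreement between two coders' labels, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labeled the same.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical theme labels assigned by two coders to six excerpts:
coder_1 = ["belonging", "belonging", "identity", "power", "identity", "belonging"]
coder_2 = ["belonging", "identity", "identity", "power", "identity", "belonging"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.74
```

By common rules of thumb, kappa values above roughly 0.6–0.8 indicate substantial agreement; lower values suggest the coding scheme needs refinement before analysis proceeds.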

10. Risk of Non-Responsiveness

There is always a risk in research that participants will be unwilling to share, or uncomfortable sharing, their genuine thoughts and feelings in the study.

This is particularly true when you’re conducting research on sensitive topics, politicized topics, or topics where the participant is expressing vulnerability .

This is similar to the Hawthorne effect (aka participant bias), where participants change their behaviors in your presence; but it goes a step further, where participants actively hide their true thoughts and feelings from you.

Suggested Solution and Response: One way to manage this is to try to include a wider group of people with the expectation that there will be non-responsiveness from some participants.

11. Risk of Attrition

Attrition refers to the process of losing research participants throughout the study.

This occurs most commonly in longitudinal studies , where a researcher must return to conduct their analysis over spaced periods of time, often over a period of years.

Things happen to people over time – they move overseas, their life experiences change, they get sick, change their minds, and even die. The more time that passes, the greater the risk of attrition.

Suggested Solution and Response: One way to manage this is to try to include a wider group of people with the expectation that there will be attrition over time.
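That "wider group" can be estimated with simple arithmetic. As a rough sketch (the attrition rate and sample figures below are hypothetical), you can work backwards from the final sample you need:

```python
import math

def recruitment_target(final_n, annual_attrition, years):
    """How many participants to recruit so that, after compounding
    annual attrition, roughly final_n remain at the study's end."""
    retention = (1 - annual_attrition) ** years
    return math.ceil(final_n / retention)

# Hypothetical: to finish a 5-year longitudinal study with 30
# participants, assuming 10% drop out each year:
print(recruitment_target(30, 0.10, 5))  # → 51
```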

12. Difficulty in Maintaining Confidentiality and Anonymity

Given the detailed nature of qualitative data , ensuring participant anonymity can be challenging.

If you have a sensitive topic in a specific case study, even anonymizing research participants sometimes isn’t enough. People might be able to deduce who you’re talking about.

Sometimes, this will mean you have to exclude some interesting data that you collected from your final report. Confidentiality and anonymity come before your findings in research ethics – and this is a necessary limiting factor.

Suggested Solution and Response: Highlight the efforts you have taken to anonymize data, and accept that confidentiality and anonymity place extremely important constraints on academic research.

13. Difficulty in Finding Research Participants

A study that looks at a very specific phenomenon or even a specific set of cases within a phenomenon means that the pool of potential research participants can be very low.

Add to this the fact that many people you approach may choose not to participate, and you could end up with a very small corpus of subjects to explore. This may limit your ability to draw robust findings, even in a quantitative sense.

You may need to therefore limit your research question and objectives to something more realistic.

Suggested Solution and Response: Highlight that this is going to limit the study’s generalizability significantly.

14. Ethical Limitations

Ethical limitations refer to the things you cannot do based on ethical concerns identified either by yourself or your institution’s ethics review board.

This might include threats to the physical or psychological well-being of your research subjects, the potential of releasing data that could harm a person’s reputation, and so on.

Furthermore, even if your study follows all expected standards of ethics, you still, as an ethical researcher, need to allow a research participant to withdraw at any point in time, after which you cannot use their data. This demonstrates an overlap between ethical constraints and participant attrition.

Suggested Solution and Response: Highlight that these ethical limitations are inevitable but important to sustain the integrity of the research.

For more on qualitative research, explore my Qualitative Research Guide

Quantitative Research Limitations

Quantitative research focuses on quantifiable data and statistical, mathematical, or computational techniques. It’s often used to test hypotheses, assess relationships and causality, and generalize findings across larger populations.

Quantitative research is widely respected for its ability to provide reliable, measurable, and generalizable data (if done well!). Its structured methodology has strengths over qualitative research, such as the fact that it allows for replication of the study, which underpins the validity of the research.

However, this approach is not without its limitations, explained below.

1. Over-Simplification

Quantitative research is powerful because it allows you to measure and analyze data in a systematic and standardized way. However, one of its limitations is that it can sometimes simplify complex phenomena or situations.

In other words, it might miss the subtleties or nuances of the research subject.

For example, if you’re studying why people choose a particular diet, a quantitative study might identify factors like age, income, or health status. But it might miss other aspects, such as cultural influences or personal beliefs, that can also significantly impact dietary choices.

When writing about this limitation, you can say that your quantitative approach, while providing precise measurements and comparisons, may not capture the full complexity of your subjects of study.

Suggested Solution and Response: Suggest a follow-up case study using the same research participants in order to gain additional context and depth.

2. Lack of Context

Another potential issue with quantitative research is that it often focuses on numbers and statistics at the expense of context or qualitative information.

Let’s say you’re studying the effect of classroom size on student performance. You might find that students in smaller classes generally perform better. However, this doesn’t take into account other variables, like teaching style , student motivation, or family support.

When describing this limitation, you might say, “Although our research provides important insights into the relationship between class size and student performance, it does not incorporate the impact of other potentially influential variables. Future research could benefit from a mixed-methods approach that combines quantitative analysis with qualitative insights.”

3. Applicability to Real-World Settings

Oftentimes, experimental research takes place in controlled environments to limit the influence of outside factors.

This control is great for isolation and understanding the specific phenomenon but can limit the applicability or “external validity” of the research to real-world settings.

For example, if you conduct a lab experiment to see how sleep deprivation impacts cognitive performance, the sterile, controlled lab environment might not reflect real-world conditions where people are dealing with multiple stressors.

Therefore, when explaining the limitations of your quantitative study in your methodology section, you could state:

“While our findings provide valuable information about [topic], the controlled conditions of the experiment may not accurately represent real-world scenarios where extraneous variables will exist. As such, the direct applicability of our results to broader contexts may be limited.”

Suggested Solution and Response: Suggest future studies that will engage in real-world observational research, such as ethnographic research.

4. Limited Flexibility

Once a quantitative study is underway, it can be challenging to make changes to it. Unlike in grounded theory research, you put your study design in place in advance and can’t easily make changes partway through.

Your study design, data collection methods, and analysis techniques need to be decided upon before you start collecting data.

For example, if you are conducting a survey on the impact of social media on teenage mental health, and halfway through, you realize that you should have included a question about their screen time, it’s generally too late to add it.

When discussing this limitation, you could write something like, “The structured nature of our quantitative approach allows for consistent data collection and analysis but also limits our flexibility to adapt and modify the research process in response to emerging insights and ideas.”

Suggested Solution and Response: Suggest future studies that will use mixed-methods or qualitative research methods to gain additional depth of insight.

5. Risk of Survey Error

Surveys are a common tool in quantitative research, but they carry risks of error.

There can be measurement errors (if a question is misunderstood), coverage errors (if some groups aren’t adequately represented), non-response errors (if certain people don’t respond), and sampling errors (if your sample isn’t representative of the population).

For instance, if you’re surveying college students about their study habits , but only daytime students respond because you conduct the survey during the day, your results will be skewed.

In discussing this limitation, you might say, “Despite our best efforts to develop a comprehensive survey, there remains a risk of survey error, including measurement, coverage, non-response, and sampling errors. These could potentially impact the reliability and generalizability of our findings.”

Suggested Solution and Response: Suggest future studies that will use other survey tools to compare and contrast results.
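Of the four error types, sampling error is the most straightforward to quantify. As a hedged sketch (assuming a simple random sample and the usual normal approximation, which real surveys often violate, and using hypothetical figures), the 95% margin of error shows how sample size bounds the precision of an estimated proportion:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical: 60% of 400 surveyed students report studying nightly.
print(round(margin_of_error(0.60, 400), 3))  # → 0.048, i.e. about ±5 points
```

Doubling precision requires roughly quadrupling the sample, which is one reason sampling error is rarely eliminated and usually just reported.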

6. Limited Ability to Probe Answers

With quantitative research, you typically can’t ask follow-up questions or delve deeper into participants’ responses like you could in a qualitative interview.

For instance, imagine you are surveying 500 students about study habits in a questionnaire. A respondent might indicate that they study for two hours each night. You might want to follow up by asking them to elaborate on what those study sessions involve or how effective they feel their habits are.

However, quantitative research generally disallows this in the way a qualitative semi-structured interview could.

When discussing this limitation, you might write, “Given the structured nature of our survey, our ability to probe deeper into individual responses is limited. This means we may not fully understand the context or reasoning behind the responses, potentially limiting the depth of our findings.”

Suggested Solution and Response: Suggest future studies that engage in mixed-method or qualitative methodologies to address the issue from another angle.

7. Reliance on Instruments for Data Collection

In quantitative research, the collection of data heavily relies on instruments like questionnaires, surveys, or machines.

The limitation here is that the data you get is only as good as the instrument you’re using. If the instrument isn’t designed or calibrated well, your data can be flawed.

For instance, if you’re using a questionnaire to study customer satisfaction and the questions are vague, confusing, or biased, the responses may not accurately reflect the customers’ true feelings.

When discussing this limitation, you could say, “Our study depends on the use of questionnaires for data collection. Although we have put significant effort into designing and testing the instrument, it’s possible that inaccuracies or misunderstandings could potentially affect the validity of the data collected.”

Suggested Solution and Response: Suggest future studies that will use different instruments but examine the same variables to triangulate results.

8. Time and Resource Constraints (Specific to Quantitative Research)

Quantitative research can be time-consuming and resource-intensive, especially when dealing with large samples.

It often involves systematic sampling, rigorous design, and sometimes complex statistical analysis.

If resources and time are limited, it can restrict the scale of your research, the techniques you can employ, or the extent of your data analysis.

For example, you may want to conduct a nationwide survey on public opinion about a certain policy. However, due to limited resources, you might only be able to survey people in one city.

When writing about this limitation, you could say, “Given the scope of our research and the resources available, we are limited to conducting our survey within one city, which may not fully represent the nationwide public opinion. Hence, the generalizability of the results may be limited.”

Suggested Solution and Response: Suggest future studies that will have more funding or longer timeframes.

How to Discuss Your Research Limitations

1. In Your Research Proposal and Methodology Section

In the research proposal, which will become the methodology section of your dissertation, I would recommend taking the four following steps, in order:

  • Be Explicit about your Scope – If you limit the scope of your study in your research question, aims, and objectives, then you can set yourself up well later in the methodology to say that certain questions are “outside the scope of the study.” For example, you may identify the fact that the study doesn’t address a certain variable, but you can follow up by stating that the research question is specifically focused on the variable that you are examining, so this limitation would need to be looked at in future studies.
  • Acknowledge the Limitation – Acknowledging the limitations of your study demonstrates reflexivity and humility and can make your research more reliable and valid. It also pre-empts questions the people grading your paper may have, so instead of down-grading you for your limitations, they will congratulate you on explaining them and how you have addressed them!
  • Explain your Decisions – You may have chosen your approach (despite its limitations) for a very specific reason. This might be because your approach remains, on balance, the best one to answer your research question. Or, it might be because of time and monetary constraints that are outside of your control.
  • Highlight the Strengths of your Approach – Conclude your limitations section by strongly demonstrating that, despite limitations, you’ve worked hard to minimize the effects of the limitations and that you have chosen your specific approach and methodology because it’s also got some terrific strengths. Name the strengths.

Overall, you’ll want to acknowledge your own limitations but also explain that the limitations don’t detract from the value of your study as it stands.

2. In the Conclusion Section or Chapter

In the conclusion of your study, it is generally expected that you return to a discussion of the study’s limitations. Here, I recommend the following steps:

  • Acknowledge issues faced – After completing your study, you will be increasingly aware of issues you may have faced that, if you re-did the study, you may have addressed earlier in order to avoid those issues. Acknowledge these issues as limitations, and frame them as recommendations for subsequent studies.
  • Suggest further research – Scholarly research aims to fill gaps in the current literature and knowledge. Having established your expertise through your study, suggest lines of inquiry for future researchers. You could state that your study had certain limitations, and “future studies” can address those limitations.
  • Suggest a mixed methods approach – Qualitative and quantitative research each have pros and cons. So, note the ‘cons’ of your approach, then suggest that the next study could use the opposite methodology, or a mixed-methods approach that combines the breadth of quantitative studies with the nuanced insights of an embedded qualitative case study.

Overall, be clear about both your limitations and how those limitations can inform future studies.

In sum, each type of research method has its own strengths and limitations. Qualitative research excels in exploring depth, context, and complexity, while quantitative research excels in examining breadth, generalizability, and quantifiable measures. Despite their individual limitations, each method contributes unique and valuable insights, and researchers often use them together to provide a more comprehensive understanding of the phenomenon being studied.

Attride-Stirling, J. (2001). Thematic networks: An analytic tool for qualitative research. Qualitative Research, 1(3), 385–405.

Atkinson, P., Delamont, S., Cernat, A., Sakshaug, J., & Williams, R. A. (2021). SAGE research methods foundations. London: Sage Publications.

Clark, T., Foster, L., Bryman, A., & Sloan, L. (2021). Bryman’s social research methods. Oxford: Oxford University Press.

Köhler, T., Smith, A., & Bhakoo, V. (2022). Templates in qualitative research methods: Origins, limitations, and new directions. Organizational Research Methods, 25(2), 183–210.

Lenger, A. (2019). The rejection of qualitative research methods in economics. Journal of Economic Issues, 53(4), 946–965.

Taherdoost, H. (2022). What are different research approaches? Comprehensive review of qualitative, quantitative, and mixed method research, their applications, types, and limitations. Journal of Management Science & Engineering Research, 5(1), 53–63.

Walliman, N. (2021). Research methods: The basics. New York: Routledge.


How to present limitations in research

Last updated: 30 January 2024

Limitations don’t invalidate or diminish your results, but it’s best to acknowledge them. This will enable you to address any questions your study failed to answer because of them.

In this guide, learn how to recognize, present, and overcome limitations in research.

  • What is a research limitation?

Research limitations are weaknesses in your research design or execution that may have impacted outcomes and conclusions. Uncovering limitations doesn’t necessarily indicate poor research design—it just means you encountered challenges you couldn’t have anticipated that limited your research efforts.

Does basic research have limitations?

Basic research aims to provide more information about your research topic . It requires the same standard research methodology and data collection efforts as any other research type, and it can also have limitations.

  • Common research limitations

Researchers encounter common limitations when embarking on a study. Limitations can occur in relation to the methods you apply or the research process you design. They could also be connected to you as the researcher.

Methodology limitations

Not having access to data or reliable information can impact the methods used to facilitate your research. A lack of data or reliability may limit the parameters of your study area and the extent of your exploration.

Your sample size may also be affected because you won’t have clear direction on how big or small it should be, or on who or what to include. Having too few participants won’t adequately represent the population or groups of people needed to draw meaningful conclusions.

Research process limitations

The study’s design can impose constraints on the process. For example, as you’re conducting the research, issues may arise that don’t conform to the data collection methodology you developed. You may not realize until well into the process that you should have incorporated more specific questions or comprehensive experiments to generate the data you need to have confidence in your results.

Constraints on resources can also have an impact. Being limited on participants or participation incentives may limit your sample sizes. Insufficient tools, equipment, and materials to conduct a thorough study may also be a factor.

Common researcher limitations

Here are some of the common researcher limitations you may encounter:

Time: some research areas require multi-year longitudinal approaches, but you might not be able to dedicate that much time. Imagine you want to measure how much memory a person loses as they age. This may involve conducting multiple tests on a sample of participants over 20–30 years, which may be impossible.

Bias: researchers can consciously or unconsciously apply bias to their research. Biases can contribute to relying on research sources and methodologies that will only support your beliefs about the research you’re embarking on. You might also omit relevant issues or participants from the scope of your study because of your biases.

Limited access to data : you may need to pay to access specific databases or journals that would be helpful to your research process. You might also need to gain information from certain people or organizations but have limited access to them. These cases require readjusting your process and explaining why your findings are still reliable.

  • Why is it important to identify limitations?

Identifying limitations adds credibility to research and provides a deeper understanding of how you arrived at your conclusions.

Constraints may have prevented you from collecting specific data or information you hoped would prove or disprove your hypothesis or provide a more comprehensive understanding of your research topic.

However, identifying the limitations contributing to your conclusions can inspire further research efforts that help gather more substantial information and data.

  • Where to put limitations in a research paper

A research paper is broken up into different sections that appear in the following order:

Introduction

Methodology

Results

Discussion

The discussion portion of your paper explores your findings and puts them in the context of the overall research. Either place research limitations at the beginning of the discussion section before the analysis of your findings or at the end of the section to indicate that further research needs to be pursued.

What not to include in the limitations section

Evidence that doesn’t support your hypothesis is not a limitation, so you shouldn’t include it in the limitations section. Don’t just list limitations and their degree of severity without further explanation.

  • How to present limitations

You’ll want to present the limitations of your study in a way that doesn’t diminish the validity of your research or leave the reader wondering whether your results and conclusions have been compromised.

Include only the limitations that directly relate to and impact how you addressed your research questions. Following a specific format enables the reader to develop an understanding of the weaknesses within the context of your findings without doubting the quality and integrity of your research.

Identify the limitations specific to your study

You don’t have to identify every possible limitation that might have occurred during your research process. Only identify those that may have influenced the quality of your findings and your ability to answer your research question.

Explain study limitations in detail

This explanation should be the most significant portion of your limitation section.

Link each limitation with an interpretation and appraisal of its impact on the study. You’ll have to evaluate and explain whether the error, method, or validity issue influenced the study’s outcome, and how.

Propose a direction for future studies and present alternatives

In this section, suggest how researchers can avoid the pitfalls you experienced during your research process.

If an issue with methodology was a limitation, propose alternate methods that may help with a smoother and more conclusive research project . Discuss the pros and cons of your alternate recommendation.

Describe steps taken to minimize each limitation

You probably took steps to try to address or mitigate limitations when you noticed them throughout the course of your research project. Describe these steps in the limitation section.

  • Limitation example

“Approaches like stem cell transplantation and vaccination in AD [Alzheimer’s disease] work on a cellular or molecular level in the laboratory. However, translation into clinical settings will remain a challenge for the next decade.”

The authors are saying that even though these methods showed promise in helping people with memory loss when conducted in the lab (in other words, using animal studies), more studies are needed. These may be controlled clinical trials, for example. 

However, the short life span of stem cells outside the lab and the vaccination’s severe inflammatory side effects are limitations. Researchers won’t be able to conduct clinical trials until these issues are overcome.

  • How to overcome limitations in research

You’ve already started on the road to overcoming limitations in research by acknowledging that they exist. However, you need to ensure readers don’t mistake weaknesses for errors within your research design.

To do this, you’ll need to justify and explain your rationale for the methods, research design, and analysis tools you chose and how you noticed they may have presented limitations.

Your readers need to know that even when limitations presented themselves, you followed best practices and the ethical standards of your field. You didn’t violate any rules and regulations during your research process.

You’ll also want to reinforce the validity of your conclusions and results with multiple sources, methods, and perspectives. This prevents readers from assuming your findings were derived from a single or biased source.

  • Learning and improving starts with limitations in research

Dealing with limitations with transparency and integrity helps identify areas for future improvements and developments. It’s a learning process, providing valuable insights into how you can improve methodologies, expand sample sizes, or explore alternate approaches to further support the validity of your findings.

Limitations of the Study – How to Write & Examples


What are the limitations of a study?

Study limitations essentially detail any flaws or shortcomings in the methodology or study design that may affect the interpretation of your research results. Study limitations can exist due to constraints on research design, methodology, materials, etc., and these factors may impact the findings of your study. However, researchers are often reluctant to discuss the limitations of their study in their papers, feeling that bringing up limitations may undermine its research value in the eyes of readers and reviewers.

In spite of the impact they might have (and perhaps because of it), you should clearly acknowledge any limitations in your research paper in order to show readers—whether journal editors, other researchers, or the general public—that you are aware of these limitations and to explain how they affect the conclusions that can be drawn from the research.

In this article, we provide some guidelines for writing about research limitations, show examples of some frequently seen study limitations, and recommend techniques for presenting this information. And after you have finished drafting and have received manuscript editing for your work, you still might want to follow this up with academic editing before submitting your work to your target journal.

Why do I need to include limitations of research in my paper?

Although limitations address the potential weaknesses of a study, writing about them toward the end of your paper actually strengthens your study by identifying any problems before other researchers or reviewers find them.

Furthermore, pointing out study limitations shows that you’ve considered the impact of research weaknesses thoroughly and have an in-depth understanding of your research topic. Since all studies face limitations, being honest and detailing these limitations will impress researchers and reviewers more than ignoring them.


Where should I put the limitations of the study in my paper?

Some limitations might be evident to researchers before the start of the study, while others might become clear while you are conducting the research. Whether these limitations are anticipated or not, and whether they are due to research design or to methodology, they should be clearly identified and discussed in the discussion section—the final section of your paper. Most journals now require you to include a discussion of potential limitations of your work, and many journals now ask you to place this “limitations section” at the very end of your article.

Some journals ask you to also discuss the strengths of your work in this section, and some allow you to freely choose where to include that information in your discussion section—make sure to always check the author instructions of your target journal before you finalize a manuscript and submit it for peer review.

Limitations of the Study Examples

There are several reasons why limitations of research might exist. The two main categories of limitations are those that result from the methodology and those that result from issues with the researcher(s).

Methodological limitations:

1. Issues with research samples and selection
2. Insufficient sample size for statistical measurements
3. Lack of previous research studies on the topic
4. Methods/instruments/techniques used to collect the data

Researcher limitations:

1. Limited access to data
2. Time constraints
3. Conflicts arising from cultural bias and other personal issues

Common Methodological Limitations of Studies

Limitations of research due to methodological problems can be addressed by clearly and directly identifying the potential problem and suggesting ways in which this could have been addressed—and SHOULD be addressed in future studies. The following are some major potential methodological issues that can impact the conclusions researchers can draw from the research.

1. Issues with research samples and selection

Sampling errors occur when a probability sampling method is used to select a sample, but that sample does not reflect the general population or appropriate population concerned. This results in limitations of your study known as “sample bias” or “selection bias.”

For example, if you conducted a survey to obtain your research results, your samples (participants) were asked to respond to the survey questions. However, you might have had limited ability to gain access to the appropriate type or geographic scope of participants. In this case, the people who responded to your survey questions may not truly be a random sample.

2. Insufficient sample size for statistical measurements

When conducting a study, it is important to have a sufficient sample size in order to draw valid conclusions. The larger the sample, the more precise your results will be. If your sample size is too small, it will be difficult to identify significant relationships in the data.

Normally, statistical tests require a larger sample size to ensure that the sample is considered representative of a population and that the statistical result can be generalized to a larger population. It is a good idea to understand how to choose an appropriate sample size before you conduct your research by using scientific calculation tools—in fact, many journals now require such estimation to be included in every manuscript that is sent out for review.
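To make the sample size estimation above concrete, here is a minimal sketch of the kind of calculation such tools perform, using the standard normal-approximation formula for a two-group comparison of means. The function name and its default significance and power levels are illustrative, not taken from any particular tool.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-group comparison of means (effect_size = Cohen's d)."""
    z = NormalDist()                    # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # value needed to reach the target power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) at alpha = .05 and 80% power needs ~63 per group;
# halving the detectable effect roughly quadruples the required sample.
print(sample_size_per_group(0.5))
```

Note how quickly the required sample grows as the effect you hope to detect shrinks; running this calculation before recruitment is exactly the kind of planning many journals now expect to see reported.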

3. Lack of previous research studies on the topic

Citing and referencing prior research studies constitutes the basis of the literature review for your thesis or study, and these prior studies provide the theoretical foundations for the research question you are investigating. However, depending on the scope of your research topic, prior research studies that are relevant to your thesis might be limited.

When there is very little or no prior research on a specific topic, you may need to develop an entirely new research typology. In this case, discovering a limitation can be considered an important opportunity to identify literature gaps and to present the need for further development in the area of study.

4. Methods/instruments/techniques used to collect the data

After you complete your analysis of the research findings (in the discussion section), you might realize that the manner in which you have collected the data or the ways in which you have measured variables has limited your ability to conduct a thorough analysis of the results.

For example, you might realize that you should have addressed your survey questions from another viable perspective, or that you were not able to include an important question in the survey. In these cases, you should acknowledge the deficiency or deficiencies by stating a need for future researchers to revise their specific methods for collecting data that includes these missing elements.

Common Limitations of the Researcher(s)

Study limitations that arise from situations relating to the researcher or researchers (whether the direct fault of the individuals or not) should also be addressed and dealt with, and remedies to decrease these limitations—both hypothetically in your study, and practically in future studies—should be proposed.

1. Limited access to data

If your research involved surveying certain people or organizations, you might have faced the problem of having limited access to these respondents. Due to this limited access, you might need to redesign or restructure your research in a different way. In this case, explain the reasons for limited access and be sure that your finding is still reliable and valid despite this limitation.

2. Time constraints

Just as students have deadlines to turn in their class papers, academic researchers might also have to meet deadlines for submitting a manuscript to a journal or face other time constraints related to their research (e.g., participants are only available during a certain period; funding runs out; collaborators move to a new institution). The time available to study a research problem and to measure change over time might be constrained by such practical issues. If time constraints negatively impacted your study in any way, acknowledge this impact by mentioning a need for a future study (e.g., a longitudinal study) to answer this research problem.

3. Conflicts arising from cultural bias and other personal issues

Researchers might hold biased views due to their cultural backgrounds or perspectives on certain phenomena, and this can affect a study’s legitimacy. It is also possible that researchers will be biased toward data and results that support only their hypotheses or arguments. To avoid these problems, the author(s) of a study should examine whether the research problem was stated appropriately and whether the data-gathering process was carried out appropriately.

Steps for Organizing Your Study Limitations Section

When you discuss the limitations of your study, don’t simply list and describe your limitations—explain how these limitations have influenced your research findings. There might be multiple limitations in your study, but you only need to point out and explain those that directly relate to and impact how you address your research questions.

We suggest that you divide your limitations section into three steps: (1) identify the study limitations; (2) explain how they impact your study in detail; and (3) propose a direction for future studies and present alternatives. By following this sequence when discussing your study’s limitations, you will be able to clearly demonstrate your study’s weakness without undermining the quality and integrity of your research.

Step 1. Identify the limitation(s) of the study

  • This part should comprise around 10-20% of your discussion of study limitations.

The first step is to identify the particular limitation(s) that affected your study. There are many possible limitations of research that can affect your study, but you don’t need to write a long review of all possible study limitations. A 200-500 word critique is an appropriate length for a research limitations section. In the beginning of this section, identify what limitations your study has faced and how important these limitations are.

You only need to identify limitations that had the greatest potential impact on: (1) the quality of your findings, and (2) your ability to answer your research question.


Step 2. Explain these study limitations in detail

  • This part should comprise around 60-70% of your discussion of limitations.

After identifying your research limitations, it’s time to explain the nature of the limitations and how they potentially impacted your study. For example, when you conduct quantitative research, a lack of probability sampling is an important issue that you should mention. On the other hand, when you conduct qualitative research, the inability to generalize the research findings could be an issue that deserves mention.

Explain the role these limitations played in the results and implications of the research, and justify the choices you made in using this “limiting” methodology or other action in your research. Also, make sure that these limitations didn’t undermine the quality of your dissertation.


Step 3. Propose a direction for future studies and present alternatives (optional)

  • This part should comprise around 10-20% of your discussion of limitations.

After acknowledging the limitations of the research, you need to discuss some possible ways to overcome these limitations in future studies. One way to do this is to present alternative methodologies and ways to avoid, or “fill in the gaps of,” the limitations you have presented. Discuss both the pros and cons of these alternatives and clearly explain why researchers should choose these approaches.

Make sure you are current on the approaches used by prior studies and the impacts they have had on their findings. Cite review articles or scientific bodies that have recommended these approaches and explain why they did so. This might be evidence in support of the approach you chose, or it might be the reason you consider your choices to be limitations. This process can act as a justification for your approach and a defense of your decision to take it, while acknowledging the feasibility of other approaches.

Phrases and Tips for Introducing Your Study Limitations in the Discussion Section

The following phrases are frequently used to introduce the limitations of the study:

  • “There may be some possible limitations in this study.”
  • “The findings of this study have to be seen in light of some limitations.”
  •  “The first is the…The second limitation concerns the…”
  •  “The empirical results reported herein should be considered in the light of some limitations.”
  • “This research, however, is subject to several limitations.”
  • “The primary limitation to the generalization of these results is…”
  • “Nonetheless, these results must be interpreted with caution and a number of limitations should be borne in mind.”
  • “As with the majority of studies, the design of the current study is subject to limitations.”
  • “There are two major limitations in this study that could be addressed in future research. First, the study focused on …. Second ….”

For more articles on research writing and the journal submissions and publication process, visit Wordvice’s Academic Resources page.

And be sure to receive professional English editing and proofreading services , including paper editing services , for your journal manuscript before submitting it to journal editors.

Wordvice Resources

Proofreading & Editing Guide

Writing the Results Section for a Research Paper

How to Write a Literature Review

Research Writing Tips: How to Draft a Powerful Discussion Section

How to Captivate Journal Readers with a Strong Introduction

Tips That Will Make Your Abstract a Success!

APA In-Text Citation Guide for Research Writing

Additional Resources

  • Diving Deeper into Limitations and Delimitations (PhD student)
  • Organizing Your Social Sciences Research Paper: Limitations of the Study (USC Library)
  • Research Limitations (Research Methodology)
  • How to Present Limitations and Alternatives (UMASS)



Research Limitations 101 📖

A Plain-Language Explainer (With Practical Examples)

By: Derek Jansen (MBA) | Expert Reviewer: Dr. Eunice Rautenbach | May 2024

Research limitations are one of those things that students tend to avoid digging into, and understandably so. No one likes to critique their own study and point out weaknesses. Nevertheless, being able to understand the limitations of your study – and, just as importantly, the implications thereof – is a critically important skill.

In this post, we’ll unpack some of the most common research limitations you’re likely to encounter, so that you can approach your project with confidence.

Overview: Research Limitations 101

  • What are research limitations?
  • Access-based limitations
  • Temporal & financial limitations
  • Sample & sampling limitations
  • Design limitations
  • Researcher limitations
  • Key takeaways

What (exactly) are “research limitations”?

At the simplest level, research limitations (also referred to as “the limitations of the study”) are the constraints and challenges that will invariably influence your ability to conduct your study and draw reliable conclusions.

Research limitations are inevitable. Absolutely no study is perfect and limitations are an inherent part of any research design. These limitations can stem from a variety of sources, including access to data, methodological choices, and the more mundane constraints of budget and time. So, there’s no use trying to escape them – what matters is that you can recognise them.

Acknowledging and understanding these limitations is crucial, not just for the integrity of your research, but also for your development as a scholar. That probably sounds a bit rich, but realistically, having a strong understanding of the limitations of any given study helps you handle the inevitable obstacles professionally and transparently, which in turn builds trust with your audience and academic peers.

Simply put, recognising and discussing the limitations of your study demonstrates that you know what you’re doing, and that you’ve considered the results of your project within the context of these limitations. In other words, discussing the limitations is a sign of credibility and strength – not weakness. Contrary to the common misconception, highlighting your limitations (or rather, your study’s limitations) will earn you (rather than cost you) marks.

So, with that foundation laid, let’s have a look at some of the most common research limitations you’re likely to encounter – and how to go about managing them as effectively as possible.


Limitation #1: Access To Information

One of the first hurdles you might encounter is limited access to necessary information. For example, you may have trouble getting access to specific literature or niche data sets. This situation can manifest due to several reasons, including paywalls, copyright and licensing issues or language barriers.

To minimise situations like these, it’s useful to try to leverage your university’s resource pool to the greatest extent possible. In practical terms, this means engaging with your university’s librarian and/or potentially utilising interlibrary loans to get access to restricted resources. If this sounds foreign to you, have a chat with your librarian 🙃

In emerging fields or highly specific study areas, you might find that there’s very little existing research (i.e., literature) on your topic. This scenario, while challenging, also offers a unique opportunity to contribute significantly to your field, as it indicates that there’s a significant research gap.

All of that said, be sure to conduct an exhaustive search using a variety of keywords and Boolean operators before assuming that there’s a lack of literature. Also, remember to snowball your literature base. In other words, scan the reference lists of the handful of papers that are directly relevant and then scan those references for more sources. You can also consider using tools like Litmaps and Connected Papers.

Limitation #2: Time & Money

Almost every researcher will face time and budget constraints at some point. Naturally, these limitations can affect the depth and breadth of your research – but they don’t need to be a death sentence.

Effective planning is crucial to managing both the temporal and financial aspects of your study. In practical terms, utilising tools like Gantt charts can help you visualise and plan your research timeline realistically, thereby reducing the risk of any nasty surprises. Always take a conservative stance when it comes to timelines, especially if you’re new to academic research. As a rule of thumb, things will generally take twice as long as you expect – so, prepare for the worst-case scenario.

If budget is a concern, you might want to consider exploring small research grants or adjusting the scope of your study so that it fits within a realistic budget. Trimming back might sound unattractive, but keep in mind that a smaller, well-planned study can often be more impactful than a larger, poorly planned project.

If you find yourself in a position where you’ve already run out of cash, don’t panic. There’s usually a pivot opportunity hidden somewhere within your project. Engage with your research advisor or faculty to explore potential solutions – don’t make any major changes without first consulting your institution.


Limitation #3: Sample Size & Composition

As we’ve discussed before, the size and representativeness of your sample are crucial, especially in quantitative research where the robustness of your conclusions often depends on these factors. All too often though, students run into issues achieving a sufficient sample size and composition.

To ensure adequacy in terms of your sample size, it’s important to plan for potential dropouts by oversampling from the outset. In other words, if you aim for a final sample size of 100 participants, aim to recruit 120-140 to account for unexpected challenges. If you still find yourself short on participants, consider whether you could complement your dataset with secondary data or data from an adjacent sample – for example, participants from another city or country. That said, be sure to engage with your research advisor before making any changes to your approach.
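The oversampling arithmetic above reduces to a one-liner: to end up with a target number of completers after an expected dropout rate, recruit target / (1 − dropout), rounded up. The numbers below are the post’s own example; the function itself is purely illustrative.

```python
from math import ceil

def recruitment_target(final_n, expected_dropout):
    """How many participants to recruit so that, after the expected
    dropout fraction, roughly final_n remain."""
    return ceil(final_n / (1 - expected_dropout))

# Planning for 100 completers with 20% expected dropout
# lands within the 120-140 recruitment range suggested above.
print(recruitment_target(100, 0.20))
```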

A related issue that you may run into is sample composition. In other words, you may have trouble securing a random sample that’s representative of your population of interest. In cases like this, you might again want to look at ways to complement your dataset with other sources, but if that’s not possible, it’s not the end of the world. As with all limitations, you’ll just need to recognise this limitation in your final write-up and be sure to interpret your results accordingly. In other words, don’t claim generalisability of your results if your sample isn’t random.

Limitation #4: Methodological Limitations

As we alluded to earlier, every methodological choice comes with its own set of limitations. For example, you can’t claim causality if you’re using a descriptive or correlational research design. Similarly, as we saw in the previous example, you can’t claim generalisability if you’re using a non-random sampling approach.

Making good methodological choices is all about understanding (and accepting) the inherent trade-offs. In the vast majority of cases, you won’t be able to adopt the “perfect” methodology – and that’s okay. What’s important is that you select a methodology that aligns with your research aims and research questions, as well as the practical constraints at play (e.g., time, money, equipment access, etc.). Just as importantly, you must recognise and articulate the limitations of your chosen methods, and justify why they were the most suitable, given your specific context.

Limitation #5: Researcher (In)experience 

A discussion about research limitations would not be complete without mentioning the researcher (that’s you!). Whether we like to admit it or not, researcher inexperience and personal biases can subtly (and sometimes not so subtly) influence the interpretation and presentation of data within a study. This is especially true when it comes to dissertations and theses, as these are most commonly undertaken by first-time (or relatively fresh) researchers.

When it comes to dealing with this specific limitation, it’s important to remember the adage “we don’t know what we don’t know”. In other words, recognise and embrace your (relative) ignorance and subjectivity – and interpret your study’s results within that context. Simply put, don’t be overly confident in drawing conclusions from your study – especially when they contradict existing literature.

Cultivating a culture of reflexivity within your research practices can help reduce subjectivity and keep you a bit more “rooted” in the data. In practical terms, this simply means making an effort to become aware of how your perspectives and experiences may have shaped the research process and outcomes.

As with any new endeavour in life, it’s useful to garner as many outside perspectives as possible. Of course, your university-assigned research advisor will play a large role in this respect, but it’s also a good idea to seek out feedback and critique from other academics. To this end, you might consider approaching other faculty at your institution, joining an online group, or even working with a private coach.


Key Takeaways

Understanding and effectively navigating research limitations is key to conducting credible and reliable academic work. By acknowledging and addressing these limitations upfront, you not only enhance the integrity of your research, but also demonstrate your academic maturity and professionalism.

Whether you’re working on a dissertation, thesis or any other type of formal academic research, remember the five most common research limitations and interpret your data while keeping them in mind.

  • Access to Information (literature and data)
  • Time and money
  • Sample size and composition
  • Research design and methodology
  • Researcher (in)experience and bias



CPS Online Graduate Studies Research Paper (UNH Manchester Library): Limitations of the Study


Limitations of the Study


The limitations of the study are those characteristics of design or methodology that impacted or influenced the interpretation of the findings from your research. They are the constraints on generalizability, applications to practice, and/or utility of findings that are the result of the ways in which you initially chose to design the study and/or the method used to establish internal and external validity.

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67.

Always acknowledge a study's limitations. It is far better that you identify and acknowledge your study’s limitations than to have them pointed out by your professor and be graded down because you appear to have ignored them.

Keep in mind that acknowledgement of a study's limitations is an opportunity to make suggestions for further research. If you do connect your study's limitations to suggestions for further research, be sure to explain the ways in which these unanswered questions may become more focused because of your study.

Acknowledgement of a study's limitations also provides you with an opportunity to demonstrate that you have thought critically about the research problem, understood the relevant literature published about it, and correctly assessed the methods chosen for studying the problem. A key objective of the research process is not only discovering new knowledge but to also confront assumptions and explore what we don't know.

Claiming limitations is a subjective process because you must evaluate the impact of those limitations . Don't just list key weaknesses and the magnitude of a study's limitations. To do so diminishes the validity of your research because it leaves the reader wondering whether, or in what ways, limitation(s) in your study may have impacted the results and conclusions. Limitations require a critical, overall appraisal and interpretation of their impact. You should answer the question: do these problems with errors, methods, validity, etc. eventually matter and, if so, to what extent?

Price, James H. and Judy Murnan. “Research Limitations and the Necessity of Reporting Them.” American Journal of Health Education 35 (2004): 66-67; Structure: How to Structure the Research Limitations Section of Your Dissertation . Dissertations and Theses: An Online Textbook. Laerd.com.

Descriptions of Possible Limitations

All studies have limitations . However, it is important that you restrict your discussion to limitations related to the research problem under investigation. For example, if a meta-analysis of existing literature is not a stated purpose of your research, it should not be discussed as a limitation. Do not apologize for not addressing issues that you did not promise to investigate in the introduction of your paper.

Here are examples of limitations related to methodology and the research process that you may need to describe and discuss in terms of how they possibly impacted your results. Descriptions of limitations should be stated in the past tense because they were discovered after you completed your research.

Possible Methodological Limitations

  • Sample size -- the number of units of analysis you use in your study is dictated by the type of research problem you are investigating. Note that, if your sample size is too small, it will be difficult to find significant relationships in the data, because statistical tests normally require a larger sample to be considered representative of the groups of people to whom the results will be generalized or transferred. Note that sample size is less relevant in qualitative research.
  • Lack of available and/or reliable data -- a lack of data or of reliable data will likely require you to limit the scope of your analysis, the size of your sample, or it can be a significant obstacle in finding a trend and a meaningful relationship. You need to not only describe these limitations but to offer reasons why you believe data is missing or is unreliable. However, don’t just throw up your hands in frustration; use this as an opportunity to describe the need for future research.
  • Lack of prior research studies on the topic -- citing prior research studies forms the basis of your literature review and helps lay a foundation for understanding the research problem you are investigating. Depending on the currency or scope of your research topic, there may be little, if any, prior research on your topic. Before assuming this to be true, though, consult with a librarian. In cases when a librarian has confirmed that there is no prior research, you may be required to develop an entirely new research typology [for example, using an exploratory rather than an explanatory research design]. Note again that discovering a limitation can serve as an important opportunity to identify new gaps in the literature and to describe the need for further research.
  • Measure used to collect the data -- sometimes it is the case that, after completing your interpretation of the findings, you discover that the way in which you gathered data inhibited your ability to conduct a thorough analysis of the results. For example, you regret not including a specific question in a survey that, in retrospect, could have helped address a particular issue that emerged later in the study. Acknowledge the deficiency by stating a need for future researchers to revise the specific method for gathering data.
  • Self-reported data -- whether you are relying on pre-existing data or you are conducting a qualitative research study and gathering the data yourself, self-reported data is limited by the fact that it rarely can be independently verified. In other words, you have to take what people say, whether in interviews, focus groups, or on questionnaires, at face value. However, self-reported data can contain several potential sources of bias that you should be alert to and note as limitations. These biases become apparent if they are incongruent with data from other sources. These are: (1) selective memory [remembering or not remembering experiences or events that occurred at some point in the past]; (2) telescoping [recalling events that occurred at one time as if they occurred at another time]; (3) attribution [the act of attributing positive events and outcomes to one's own agency but attributing negative events and outcomes to external forces]; and, (4) exaggeration [the act of representing outcomes or embellishing events as more significant than is actually suggested from other data].
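The sample-size point above can be made concrete with a short illustrative calculation (not part of the original guide; the numbers are hypothetical): the margin of error of an estimated mean shrinks only with the square root of the sample size, which is why small samples make it hard to detect significant relationships.

```python
import math

def ci_half_width(sd, n, z=1.96):
    """Half-width (margin of error) of an approximate 95% confidence
    interval for a mean, given sample standard deviation sd and size n."""
    return z * sd / math.sqrt(n)

# With a standard deviation of 15, the margin of error shrinks slowly
# as the sample grows:
for n in (10, 100, 1000):
    print(f"n={n:>4}: margin of error = +/-{ci_half_width(15.0, n):.2f}")
# -> +/-9.30, +/-2.94, +/-0.93
```

Because of the square-root relationship, quadrupling the sample size only halves the margin of error, which is one reason underpowered studies so often fail to reach statistical significance.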

Possible Limitations of the Researcher

  • Access -- if your study depends on having access to people, organizations, or documents and, for whatever reason, access is denied or limited in some way, the reasons for this need to be described.
  • Longitudinal effects -- unlike your professor, who can literally devote years [even a lifetime] to studying a single topic, the time available to investigate a research problem and to measure change or stability over time is pretty much constrained by the due date of your assignment. Be sure to choose a research problem that does not require an excessive amount of time to complete the literature review, apply the methodology, and gather and interpret the results. If you're unsure whether you can complete your research within the confines of the assignment's due date, talk to your professor.
  • Cultural and other types of bias -- we all have biases, whether we are conscious of them or not. Bias is when a person, place, or thing is viewed or shown in a consistently inaccurate way. Bias is usually negative, though one can have a positive bias as well, especially if that bias reflects your reliance on research that only supports your hypothesis. When proofreading your paper, be especially critical in reviewing how you have stated a problem, selected the data to be studied, what may have been omitted, the manner in which you have ordered events, people, or places, and how you have chosen to represent a person, place, or thing, to name a phenomenon, or to use words with a positive or negative connotation.

NOTE:   If you detect bias in prior research, it must be acknowledged and you should explain what measures were taken to avoid perpetuating that bias.

  • Fluency in a language -- if your research focuses on measuring the perceived value of after-school tutoring among Mexican-American ESL [English as a Second Language] students, for example, and you are not fluent in Spanish, you are limited in being able to read and interpret Spanish language research studies on the topic. This deficiency should be acknowledged.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Senunyeme, Emmanuel K. Business Research Methods . Powerpoint Presentation. Regent University of Science and Technology; ter Riet, Gerben et al. “All That Glitters Isn't Gold: A Survey on Acknowledgment of Limitations in Biomedical Studies.” PLOS One 8 (November 2013): 1-6.

Structure and Writing Style


Information about the limitations of your study is generally placed either at the beginning of the discussion section of your paper, so the reader knows and understands the limitations before reading the rest of your analysis of the findings, or at the conclusion of the discussion section as an acknowledgement of the need for further study. Statements about a study's limitations should not be buried in the body [middle] of the discussion section unless a limitation is specific to something covered in that part of the paper. If this is the case, though, the limitation should be reiterated at the conclusion of the section.

If you determine that your study is seriously flawed due to important limitations, such as an inability to acquire critical data, consider reframing it as an exploratory study intended to lay the groundwork for a more complete research study in the future. Be sure, though, to specifically explain the ways that these flaws can be successfully overcome in a new study.

But, do not use this as an excuse for not developing a thorough research paper! Review the tab in this guide for developing a research topic . If serious limitations exist, it generally indicates a likelihood that your research problem is too narrowly defined or that the issue or event under study is too recent and, thus, very little research has been written about it. If serious limitations do emerge, consult with your professor about possible ways to overcome them or how to revise your study.

When discussing the limitations of your research, be sure to:

  • Describe each limitation in detailed but concise terms;
  • Explain why each limitation exists;
  • Provide the reasons why each limitation could not be overcome using the method(s) chosen to acquire or gather the data [cite to other studies that had similar problems when possible];
  • Assess the impact of each limitation in relation to the overall findings and conclusions of your study; and,
  • If appropriate, describe how these limitations could point to the need for further research.

Remember that the method you chose may be the source of a significant limitation that has emerged during your interpretation of the results [for example, you didn't interview a group of people that you later wish you had]. If this is the case, don't panic. Acknowledge it, and explain how applying a different or more robust methodology might address the research problem more effectively in a future study. An underlying goal of scholarly research is not only to show what works, but to demonstrate what doesn't work or what needs further clarification.

Aguinis, Hermam and Jeffrey R. Edwards. “Methodological Wishes for the Next Decade and How to Make Wishes Come True.” Journal of Management Studies 51 (January 2014): 143-174; Brutus, Stéphane et al. "Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations." Journal of Management 39 (January 2013): 48-75; Ioannidis, John P.A. "Limitations are not Properly Acknowledged in the Scientific Literature." Journal of Clinical Epidemiology 60 (2007): 324-329; Pasek, Josh. Writing the Empirical Social Science Research Paper: A Guide for the Perplexed . January 24, 2012. Academia.edu; Structure: How to Structure the Research Limitations Section of Your Dissertation . Dissertations and Theses: An Online Textbook. Laerd.com; What Is an Academic Paper? Institute for Writing Rhetoric. Dartmouth College; Writing the Experimental Report: Methods, Results, and Discussion . The Writing Lab and The OWL. Purdue University.

  • Last Updated: Nov 6, 2023 1:43 PM
  • URL: https://libraryguides.unh.edu/cpsonlinegradpaper

Sacred Heart University Library

Organizing Academic Research Papers: Limitations of the Study


The limitations of the study are those characteristics of design or methodology that impacted or influenced the application or interpretation of the results of your study. They are the constraints on generalizability and utility of findings that are the result of the ways in which you chose to design the study and/or the method used to establish internal and external validity.

Importance of...

Always acknowledge a study's limitations. It is far better for you to identify and acknowledge your study’s limitations than to have them pointed out by your professor and be graded down because you appear to have ignored them.


Writing Tip

Don't Inflate the Importance of Your Findings! After all the hard work and long hours devoted to writing your research paper, it is easy to get carried away with attributing unwarranted importance to what you’ve done. We all want our academic work to be viewed as excellent and worthy of a good grade, but it is important that you understand and openly acknowledge the limitations of your study. Inflating the importance of your study's findings in an attempt to hide its flaws is a big turn off to your readers. A measure of humility goes a long way!

Another Writing Tip

Negative Results are Not a Limitation!

Negative evidence refers to findings that unexpectedly challenge rather than support your hypothesis. If you didn't get the results you anticipated, it may mean your hypothesis was incorrect and needs to be reformulated, or perhaps you have stumbled onto something unexpected that warrants further study. Moreover, the absence of an effect may be very telling in many situations, particularly in experimental research designs. In any case, your results may be of importance to others even though they did not support your hypothesis. Do not fall into the trap of thinking that results contrary to what you expected are a limitation of your study. If you carried out the research well, they are simply your results and only require additional interpretation.

Yet Another Writing Tip

A Note about Sample Size Limitations in Qualitative Research

Sample sizes are typically smaller in qualitative research because, as the study goes on, acquiring more data does not necessarily lead to more information. This is because one occurrence of a piece of data, or a code, is all that is necessary to ensure that it becomes part of the analysis framework. However, it remains true that sample sizes that are too small cannot adequately support claims of having achieved valid conclusions and sample sizes that are too large do not permit the deep, naturalistic, and inductive analysis that defines qualitative inquiry. Determining adequate sample size in qualitative research is ultimately a matter of judgment and experience in evaluating the quality of the information collected against the uses to which it will be applied and the particular research method and purposeful sampling strategy employed. If the sample size is found to be a limitation, it may reflect your judgement about the methodological technique chosen [e.g., single life history study versus focus group interviews] rather than the number of respondents used.

Huberman, A. Michael and Matthew B. Miles. Data Management and Analysis Methods. In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 428-444.

  • Last Updated: Jul 18, 2023 11:58 AM
  • URL: https://library.sacredheart.edu/c.php?g=29803

Enago Academy

Writing Limitations of Research Study — 4 Reasons Why It Is Important!


It is not unusual for researchers to come across the term limitations of research during their academic paper writing. More often than not, it is interpreted as something negative. However, when it comes to a research study, limitations can help structure the study better. Therefore, do not underestimate the significance of the limitations of your research study.

Allow us to take you through how to evaluate the limits of your research and relate them meaningfully to your results.


What Are the Limitations of a Research Study?

Every research study has its limits, and these limitations arise due to restrictions in methodology or research design. This could impact your entire research project or the paper you wish to publish. Unfortunately, many researchers choose not to discuss their limitations of research, fearing it will affect the value of their article in the eyes of readers.

However, it is very important to discuss your study's limitations and present them to your target audience (other researchers, journal editors, peer reviewers, etc.). Explain how your research limitations may affect the conclusions and opinions drawn from your research. Moreover, when you as an author state the limitations of your research, it shows that you have investigated all the weaknesses of your study and have a deep understanding of the subject. Being honest could impress your readers and mark your study as a sincere effort in research.


Why and Where Should You Include the Research Limitations?

The main goal of your research is to address your research objectives: conduct experiments, obtain and explain results, and finally answer your research question. It is best to mention the limitations of your research in the discussion section of your article.

At the very beginning of this paragraph, immediately after highlighting the strengths of the research methodology, you should write down your limitations. You can discuss specific points from your research limitations as suggestions for further research in the conclusion of your thesis.

1. Common Limitations of the Researchers

Limitations related to the researcher must be mentioned. This will help you gain transparency with your readers. Furthermore, you could suggest ways of reducing these limitations in your future studies.

2. Limited Access to Information

Your work may involve certain institutions and individuals, and sometimes you may have problems accessing them. In that case, you may need to redesign and rewrite parts of your work, and you must explain to your readers the reason for the limited access.

3. Limited Time

All researchers are bound by deadlines when it comes to completing their studies. Sometimes, time constraints can affect your research negatively. The best practice is to acknowledge this and note that future studies are needed to address the research problem more thoroughly.

4. Conflict over Biased Views and Personal Issues

Biased views can affect research when researchers select only the results and data that support their main argument, setting aside the loose ends of the study.

Types of Limitations of Research

Before beginning your research study, know that there are certain limitations to what you are testing or possible research results. There are different types that researchers may encounter, and they all have unique characteristics, such as:

1. Research Design Limitations

Certain restrictions on your research or the available procedures may affect your final results or research outputs. For example, you may have formulated your research goals and objectives too broadly. Recognizing this can help you narrow the formulation of your goals and objectives, thereby increasing the focus of your study.

2. Impact Limitations

Even if your research has excellent statistics and a strong design, it can suffer from the influence of the following factors:

  • an evolving body of findings in the area being researched
  • being specific to a particular population
  • having a strong regional focus

3. Data or Statistical Limitations

In some cases, it is impossible to collect sufficient data for research, or access to the data is very difficult to obtain. This could lead to incomplete conclusions in your study. Moreover, insufficient data may itself be an outcome of your study design: an unclear research outline can produce further problems in interpreting your findings.

How to Correctly Structure Your Research Limitations?

There are established guidelines for narrowing down research questions, within which you can justify and explain the potential weaknesses of your academic paper. You can follow these basic steps to structure your research limitations clearly:

  • Declare that you wish to identify the limitations of your research and explain their importance.
  • Provide the necessary depth, explain their nature, and justify your study choices.
  • Suggest how it may be possible to overcome them in the future.

In this section, your readers will see that you are aware of the potential weaknesses in your study, that you understand them, and that you offer effective solutions. Clarifying all the limitations of your research to your target audience will strengthen your article.

Know that you cannot be perfect and there is no individual without flaws. You could use the limitations of research as a great opportunity to take on a new challenge and improve the future of research. In a typical academic paper, research limitations may relate to:

1. Formulating your goals and objectives

If you formulate your goals and objectives too broadly, your work will have some shortcomings. In this case, specify effective ways to narrow down the formulation of your goals and objectives and increase the focus of your study.

2. Application of your data collection methods in research

If you do not have experience in primary data collection, there is a risk of flaws in the implementation of your methods. It is necessary to acknowledge this and to educate yourself on the relevant data collection methods.

3. Sample sizes

This depends on the nature of the problem you choose. Sample size is of greater importance in quantitative studies than in qualitative ones. If your sample size is too small, statistical tests may fail to identify significant relationships within a given data set.

You could point out that other researchers should base the same study on a larger sample size to get more accurate results.
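To see why a small sample limits what a statistical test can detect, consider a rough power calculation. The sketch below uses only Python's standard library and a normal approximation for a two-sample t-test; the effect sizes, significance level, and power target are hypothetical values chosen for illustration, not prescriptions.

```python
# Sketch: approximate per-group sample size for a two-sample comparison,
# using the normal approximation. All numeric inputs are hypothetical.
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group to detect a standardized mean difference
    `effect_size` at two-sided significance `alpha` with the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to the power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) needs roughly 63 participants per group;
# a small effect (d = 0.2) needs nearly 400.
print(n_per_group(0.5))  # 63
print(n_per_group(0.2))  # 393
```

The point for a limitations section: if your study enrolled, say, 30 participants per group, you can state plainly that effects smaller than roughly d = 0.5 were unlikely to reach significance, and recommend a larger sample for future work.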

4. The absence of previous studies in the field you have chosen

Writing a literature review is an important step in any scientific study because it helps researchers determine the scope of existing work in the chosen field. It provides a foundation that researchers build on to achieve their specific goals or objectives.

However, if you are focused on a very recent, evolving, or narrow research problem, there may be very little prior research on your topic. For example, if you choose to explore the role of Bitcoin as the currency of the future, you may not find many scientific papers addressing the research problem, as Bitcoin is a relatively new phenomenon.

It is important that you learn to identify examples of research limitations at each step. Whatever field you choose, do not hesitate to state the shortcomings of your work, especially if you do not yet have many years of experience writing scientific papers or completing complex projects; the depth and scope of your discussion may be limited compared with that of seasoned academics. Include specific points from your limitations of research and use them as suggestions for future studies.

Have you ever faced the challenge of writing the limitations of a research study in your paper? How did you overcome it? What approaches did you follow, and were they beneficial? Let us know in the comments below!

Frequently Asked Questions

Setting limitations in our study helps to clarify the outcomes drawn from our research and enhance understanding of the subject. Moreover, it shows that the author has investigated all the weaknesses in the study.

Scope refers to the range of a research project, set to define its boundaries. Limitations are the impacts on the overall study arising from constraints on the research design.

Limitation in research is an impact of a constraint on the research design in the overall study. They are the flaws or weaknesses in the study, which may influence the outcome of the research.

Limitations in research can be written as follows:

1. Formulate your goals and objectives.
2. Analyze the chosen data collection method and the sample sizes.
3. Identify the limitations of your research and explain their importance.
4. Provide the necessary depth, explain their nature, and justify your study choices.
5. Suggest how it is possible to overcome them in the future.



How to Present the Limitations of a Study in Research?

The limitations of the study convey to the reader how and under which conditions your study results will be evaluated. Scientific research involves investigating research topics, both known and unknown, which inherently includes an element of risk. The risk could arise due to human errors, barriers to data gathering, limited availability of resources, and researcher bias. Researchers are encouraged to discuss the limitations of their research to enhance the process of research, as well as to allow readers to gain an understanding of the study’s framework and value.

Limitations of the research are the constraints placed on the ability to generalize from the results and to further describe applications to practice. They relate to the utility of the findings, given how you initially chose to design the study and the methods used to establish internal and external validity, or they result from unanticipated challenges that emerged during the study. Knowing about these limitations and their impact helps explain how they can affect the conclusions and thoughts drawn from your research. 1


What are the limitations of a study

Researchers are often cautious about acknowledging the limitations of their research for fear of undermining the validity of the findings. Yet no research can be faultless or cover all possible conditions. Limitations typically arise from constraints on methodology or research design and influence the interpretation of the ultimate findings. 2 They restrict the generalizability and usability of the findings that emerge from the design of the research and/or the methods employed to ensure internal and external validity. Such limitations can affect the whole study or research paper. Nevertheless, most researchers prefer not to discuss the different types of limitations in research for fear of decreasing the value of their paper among reviewers or readers.


Importance of limitations of a study

Writing the limitations of a research paper is often assumed to require a lot of effort. However, identifying the limitations of the study can help structure the research better. Therefore, do not underestimate the importance of research study limitations. 3

  • Opportunity to make suggestions for further research. Suggestions for future research and avenues for further exploration can be developed based on the limitations of the study.
  • Opportunity to demonstrate critical thinking. A key objective of the research process is to discover new knowledge while questioning existing assumptions and exploring what is new in the particular field. Describing the limitation of the research shows that you have critically thought about the research problem, reviewed relevant literature, and correctly assessed the methods chosen for studying the problem.
  • Demonstrate a subjective learning process. Writing the limitations of the research helps to critically evaluate their impact, assess the strength of the research, and consider alternative explanations or interpretations. This subjective evaluation contributes to a more nuanced and comprehensive understanding of the issue under study.

Why should I include limitations of research in my paper

All studies have limitations to some extent. Including limitations of the study in your paper demonstrates the researchers’ comprehensive and holistic understanding of the research process and topic. The major advantages are the following:

  • Understand the study conditions and challenges encountered . It establishes a complete and potentially logical depiction of the research. The boundaries of the study can be established, and realistic expectations for the findings can be set. They can also help to clarify what the study is not intended to address.
  • Improve the quality and validity of the research findings. Mentioning limitations of the research creates opportunities for the original author and other researchers to undertake future studies to improve the research outcomes.
  • Transparency and accountability. Including limitations of the research helps maintain mutual integrity and promote further progress in similar studies.
  • Identify potential bias sources.  Identifying the limitations of the study can help researchers identify potential sources of bias in their research design, data collection, or analysis. This can help to improve the validity and reliability of the findings.

Where do I need to add the limitations of the study in my paper

The limitations of your research can be stated at the beginning of the discussion section, which allows the reader to understand them before reading the rest of your findings, or at the end of the discussion section, as an acknowledgment of the need for further research.

Types of limitations in research

There are different types of limitations in research that researchers may encounter. These are listed below:

  • Research Design Limitations : Restrictions on your research or available procedures may affect the research outputs. If the research goals and objectives are too broad, explain how they should be narrowed down to enhance the focus of your study. If there was a selection bias in your sample, explain how this may affect the generalizability of your findings. This can help readers understand the limitations of the study in terms of their impact on the overall validity of your research.
  • Impact Limitations : Your study might be limited by a strong regional-, national-, or species-based impact or population- or experimental-specific impact. These inherent limitations on impact affect the extendibility and generalizability of the findings.
  • Data or statistical limitations : Data or statistical limitations in research are extremely common in experimental (such as medicine, physics, and chemistry) or field-based (such as ecology and qualitative clinical research) studies. Sometimes, it is either extremely difficult to acquire sufficient data or gain access to the data. These limitations of the research might also be the result of your study’s design and might result in an incomplete conclusion to your research.

Limitations of study examples

Not all possible limitations of the study can be included in the discussion section of a research paper or dissertation; they will vary greatly depending on the type and nature of the study. The examples below cover limitations related to the methodology and the research process, as well as those related to the researcher, all of which you need to describe along with a discussion of how they may have impacted your results.

Common methodological limitations of the study

Limitations of research due to methodological problems are addressed by identifying the potential problem and suggesting ways in which this should have been addressed. Some potential methodological limitations of the study are as follows. 1

  • Sample size: The sample size 4 is dictated by the type of research problem investigated. If the sample size is too small, finding a significant relationship from the data will be difficult, as statistical tests require a large sample size to ensure a representative population distribution and generalize the study findings.
  • Lack of available/reliable data: A lack of available/reliable data will limit the scope of your analysis and the size of your sample or present obstacles in finding a trend or meaningful relationship. So, when writing about the limitations of the study, give convincing reasons why you feel data is absent or untrustworthy and highlight the necessity for a future study focused on developing a new data-gathering strategy.
  • Lack of prior research studies: Citing prior research studies is required to help understand the research problem being investigated. If there is little or no prior research, an exploratory rather than an explanatory research design will be required. Also, discovering the limitations of the study presents an opportunity to identify gaps in the literature and describe the need for additional study.
  • Measure used to collect the data: Sometimes, the data gathered will be insufficient to conduct a thorough analysis of the results. A limitation of the study example, for instance, is identifying in retrospect that a specific question could have helped address a particular issue that emerged during data analysis. You can acknowledge the limitation of the research by stating the need to revise the specific method for gathering data in the future.
  • Self-reported data: Self-reported data cannot be independently verified and can contain several potential bias sources, such as selective memory, attribution, and exaggeration. These biases become apparent if they are incongruent with data from other sources.

General limitations of researchers

Limitations related to the researcher can also influence the study outcomes. These should be addressed, and related remedies should be proposed.

  • Limited access to data : If your study requires access to people, organizations, data, or documents whose access is denied or limited, the reasons need to be described. An additional explanation stating why this limitation of research did not prevent you from following through on your study is also needed.
  • Time constraints : Researchers might also face challenges in meeting research deadlines due to a lack of timely participant availability or funds, among others. The impacts of time constraints must be acknowledged by mentioning the need for a future study addressing this research problem.
  • Conflicts due to biased views and personal issues : Differences in culture or personal views can contribute to researcher bias, as they focus only on the results and data that support their main arguments. To avoid this, pay attention to the problem statement and data gathering.

Steps for structuring the limitations section

Limitations are an inherent part of any research study. Issues may vary, ranging from sampling and literature review to methodology and bias. However, there is a structure for identifying these elements, discussing them, and offering insight or alternatives on how the limitations of the study can be mitigated. This enhances the process of the research and helps readers gain a comprehensive understanding of a study’s conditions.

  • Identify the research constraints : Identify those limitations having the greatest impact on the quality of the research findings and your ability to effectively answer your research questions and/or hypotheses. These include sample size, selection bias, measurement error, or other issues affecting the validity and reliability of your research.
  • Describe their impact on your research : Reflect on the nature of the identified limitations and justify the choices made during the research to identify their impact on the research outcomes. Explanations can be offered if needed, but without being defensive or exaggerating them. Provide context for the limitations of your research so they can be understood in a broader setting. Any specific limitations due to real-world considerations should be pointed out critically rather than justified by appeal to what other author groups have done.
  • Mention the opportunity for future investigations : Suggest ways to overcome the limitations of the present study through future research. This can help readers understand how the research fits into the broader context and offer a roadmap for future studies.

Frequently Asked Questions

  • Should I mention all the limitations of my study in the research report?

Restrict limitations to what is pertinent to the research question under investigation. The specific limitations you include will depend on the nature of the study, the research question investigated, and the data collected.

  • Can the limitations of a study affect its credibility?

Stating the limitations of the research is considered favorable by editors and peer reviewers. Connecting your study’s limitations with future possible research can help increase the focus of unanswered questions in this area. In addition, admitting limitations openly and validating that they do not affect the main findings of the study increases the credibility of your study. However, if you determine that your study is seriously flawed, explain ways to successfully overcome such flaws in a future study. For example, if your study fails to acquire critical data, consider reframing the research question as an exploratory study to lay the groundwork for more complete research in the future.

  • How can I mitigate the limitations of my study?

Strategies to minimize limitations of the research should focus on convincing reviewers and readers that the limitations do not affect the conclusions of the study by showing that the methods are appropriate and that the logic is sound. Here are some steps to follow to achieve this:

  • Use data that are valid.
  • Use appropriate methods and sound logic to draw inferences.
  • Use adequate statistical methods for drawing inferences from the data.
  • Show that studies with similar limitations have been published before.

Admit limitations openly and, at the same time, show how they do not affect the main conclusions of the study.

  • Can the limitations of a study impact its publication chances?

Limitations in your research can arise owing to restrictions in methodology or research design. Although this could impact your chances of publishing your research paper, it is critical to explain your study’s limitations to your intended audience. For example, it can explain how your study constraints may impact the results and views generated from your investigation. It also shows that you have researched the flaws of your study and have a thorough understanding of the subject.

  • How can limitations in research be used for future studies?

The limitations of a study give you an opportunity to offer suggestions for further research. Your study’s limitations, including problems experienced during the study and the additional study perspectives developed, are a great opportunity to take on a new challenge and help advance knowledge in a particular field.

References:

  • Brutus, S., Aguinis, H., & Wassmer, U. (2013). Self-reported limitations and future directions in scholarly reports: Analysis and recommendations.  Journal of Management ,  39 (1), 48-75.
  • Ioannidis, J. P. (2007). Limitations are not properly acknowledged in the scientific literature.  Journal of Clinical Epidemiology ,  60 (4), 324-329.
  • Price, J. H., & Murnan, J. (2004). Research limitations and the necessity of reporting them.  American Journal of Health Education ,  35 (2), 66.
  • Boddy, C. R. (2016). Sample size for qualitative research.  Qualitative Market Research: An International Journal ,  19 (4), 426-432.


Scientific Research and Methodology : An introduction to quantitative research and statistics

8 Research design limitations

So far, you have learnt to ask a RQ and design research studies. In this chapter, you will learn to identify limitations related to a study being:

  • internally valid.
  • externally valid.
  • ecologically valid.


8.1 Introduction

The type of study and the research design determine how the results of the study should be interpreted. Ideally, a study would be perfectly externally and internally valid; in practice, this is very difficult to achieve. Practically every study has limitations. The results of a study should be interpreted in light of these limitations. Limitations are not necessarily problems.

Limitations generally can be discussed through three components:

  • Internal validity (Sect.  3.1 ): Discuss any limitations to internal validity due to the research design (such as identifying possible confounding variables). This is related to the effectiveness of the study within the sample (Sect.  8.2 ).
  • External validity (Sect.  6.1 ): Discuss how well the sample represents the intended population. This is related to the generalisability of the study to the intended population (Sect.  8.3 ).
  • Ecological validity : Discuss how well the study methods, materials and context approximate the real situation of interest. This is related to the practicality of the results to real life (Sect.  8.4 ).

The type of study often introduces some of these limitations (Chap.  4 ). All these issues should be addressed when considering the study limitations.

Almost every study has limitations. Identifying potential limitations, and discussing the likely impact they have on the interpretation of the study results, is important and ethical.

Different types of research studies have different limitations. Experimental studies, in general, have higher internal validity than observational studies, since more of the research design is under the control of the researchers; for example, random allocation of treatments is possible to minimise confounding.

Only well-conducted experimental studies can show cause-and-effect relationships.
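Random allocation of the kind described above can be sketched in a few lines. This is a minimal illustration using Python's standard library; the participant IDs, group sizes, and seed are hypothetical.

```python
# Sketch: randomly allocating participants to treatment or control,
# the design feature that underpins the internal validity of experiments.
# Participant IDs and the seed are hypothetical.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical IDs
random.seed(42)           # fixed seed so this allocation is reproducible
random.shuffle(participants)

half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

# Every participant ends up in exactly one group, by chance alone, so
# known and unknown confounders are balanced between groups on average.
print(sorted(treatment))
print(sorted(control))
```

In an observational study, by contrast, the "allocation" is made by circumstance rather than by the shuffle above, which is precisely why confounding is harder to rule out.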

However, experimental studies may suffer from poor ecological validity; for instance, laboratory experiments are often conducted under controlled temperature and humidity. Many experiments also require that people be told they are in a study (due to ethics), and so internal validity may be compromised (the Hawthorne effect).

Example 8.1 (Retrofitting) Giandomenico, Papineau, and Rivers ( 2022 ) studied retro-fitting houses with energy-saving devices, and found large discrepancies in savings between observational studies (12.2%) and experimental studies (6.2%). The authors say that 'this finding reinforces the importance of using study designs with high internal validity to evaluate program savings' (p. 692).

8.2 Limitations related to internal validity

Internal validity refers to the extent to which a cause-and-effect relationship can be established in a study, eliminating other possible explanations (Sect. 3.1); that is, the effectiveness of the study within the sample. A discussion of the limitations of internal validity should cover, as appropriate: possible confounding and lurking variables; the impact of the Hawthorne, observer, placebo and carry-over effects; and the impact of any other design decisions.

If any of these issues are likely to compromise internal validity, the implications on the interpretation of the results should be discussed. For example, if the participants were not blinded, this should be clearly stated, and the conclusion should indicate that the individuals in the study may have behaved differently than usual.


Example 8.2 (Study limitations) Axmann et al. ( 2020 ) randomly allocated Ugandan farmers to receive, or not receive, hybrid maize seeds. One potential threat to internal validity was that farmers receiving the hybrid seeds could share their seeds with their neighbours.

Hence, the researchers contacted the 75 farmers allocated to receive the hybrid seeds; none of the contacted farmers reported selling or giving seeds to other farmers. This extra step increased the internal validity of the study.

Maximizing internal validity in observational studies is more difficult than in experimental studies (e.g., random allocation is not possible). The internal validity of experimental studies involving people is often compromised because people must be informed that they are participating in a study.


Example 8.3 (Internal validity) In a study of the hand-hygiene practices of paramedics ( Barr et al. 2017 ) , self-reported hand-hygiene practices were very different from those reported by peers. That is, how people self-report their behaviours may not align with how they actually behave, which influenced the internal validity of the study.

A study evaluated the use of a new therapy in elderly men, and listed some limitations of the study:

... the researcher was not blinded and had prior knowledge of the research aims, disease status, and intervention. As such, these could all have influenced data recording [...] The potential of reporting bias and observer bias could be reduced by implementing blinding in future studies. --- Kabata-Piżuch et al. ( 2021 ) , p. 10

8.3 Limitations related to external validity


External validity refers to the ability to generalise the findings made from the sample to the entire intended population (Sect. 6.1). For a study to be externally valid, it must first be internally valid: if the study is not effective in the sample studied (i.e., not internally valid), the results may not apply to the intended population either.

External validity refers to how well the sample is likely to represent the intended population in the RQ.

If the population is Californians, then the study is externally valid if the sample is representative of Californians. The results do not have to apply to people in the rest of the United States (though this can be commented on, too) for the study to be externally valid; the intended population is Californians.

External validity depends on how the sample was obtained. Results from random samples (Sect.  6.5 ) are likely to generalise to the population and be externally valid. (The analyses in this book assume all samples are simple random samples .) Furthermore, results from approximately representative samples (Sect.  6.6 ) may generalise to the population and be externally valid if those in the study are not obviously different than those not in the study.
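
The idea of a simple random sample, which the analyses above assume, can be sketched in a few lines of Python. This is a toy illustration only: the function name and the frame of 10 000 hypothetical unit IDs are invented for the example, not taken from the text.

```python
import random

def simple_random_sample(frame, n, seed=None):
    """Draw a simple random sample of size n from a sampling frame:
    every unit has the same probability of selection, without replacement."""
    rng = random.Random(seed)  # seeded for reproducibility
    return rng.sample(frame, n)

# Hypothetical sampling frame of 10,000 unit IDs (e.g. residents).
frame = list(range(10_000))
sample = simple_random_sample(frame, n=100, seed=42)
print(len(sample))       # 100
print(len(set(sample)))  # 100 (no unit selected twice)
```

Because every unit has the same selection probability, results from such a sample are likely to generalise to the frame it was drawn from; the harder practical problem is making the frame itself match the intended population.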

Any inclusion criteria, exclusion criteria or control variables may also limit the external validity of the study.

Example 8.4 (External validity) A New Zealand study ( Gammon et al. 2012 ) identified (for well-documented reasons) a population of interest: 'women of South Asian origin living in New Zealand' (p. 21). The women in the sample were 'women of South Asian origin [...] recruited using a convenience sample method throughout Auckland' (p. 21).

The results may not generalise to the intended population ( all women of South Asian origin living in New Zealand) because all the women in the sample came from Auckland, and the sample was not a random sample from this population anyway. The study was still useful, however, since we have still learnt about the population that is represented by the sample, which may be similar to the intended population.

Example 8.5 (Using biochar) Farrar et al. ( 2018 ) studied growing ginger using biochar on one farm at Mt Mellum, Australia. The results may only generalise to growing ginger at Mt Mellum, but since ginger is usually grown in similar types of climates and soils, the results may apply to other ginger farms also.

8.4 Limitations related to ecological validity

The likely practicality of the study results in the real world should also be discussed. This is called ecological validity .

Definition 8.1 (Ecological validity) A study is ecologically valid if the study methods, materials and context closely approximate the real situation of interest.

Studies don't need to be ecologically valid to be useful; much can be learnt under special conditions, as long as the potential limitations are understood when applying the results to the real world. The ecological validity of experimental studies may be compromised because the experimental conditions are sometimes highly controlled (for good reason).

Example 8.6 (Ecological validity) Consider a study to determine the proportion of people that buy coffee in a reusable cup. People could be asked about their behaviour. This study may not be ecologically valid, as what people do may not align with what they say they will do.

An alternative study could watch people buy coffee at various coffee shops, and record what people do in practice. This second study is more likely to be ecologically valid, as real-world behaviour is observed.

A study observed the effect of using high-mounted rear brake lights ( Kahane and Hertz 1998 ) , which are now commonplace. The American study showed that such lights reduced rear-end collisions by about \(50\)%. However, after making these lights mandatory, rear-end collisions reduced by only \(5\)%. Why?

8.5 Chapter summary

The limitations in a study need to be identified, and may be related to:

  • internal validity (effectiveness): how well the study is conducted within the sample, isolating the relationship of interest.
  • external validity (generalisability): how well the sample results are likely to apply to the intended population.
  • ecological validity (practicality): how well the results may apply to the real-world situation of interest.

8.6 Quick review questions

Are the following statements true or false ?

  • When interpreting the results of a study, the steps taken to maximize internal validity should be evaluated.
  • If studies are not externally valid, then they are not useful.
  • When interpreting the results of a study, the steps taken to maximize external validity do not need to be evaluated.
  • When interpreting the results of a study, ecological validity is about the impact of the study on the environment.

8.7 Exercises

Answers to odd-numbered exercises are available in App.  E .

Exercise 8.1 A research study examined how people can save energy through lighting choices ( Gentile 2022 ) . The study states (p. 9) that the results 'are limited to the specific study and cannot be easily projected to other similar settings'.

What type of validity is being discussed here?

Exercise 8.2 Fill the blanks with the correct word: internal , external or ecological .

When interpreting the results of studies, we consider the practicality (internal / external / ecological validity), the generalizability (internal / external / ecological validity) and the effectiveness (internal / external / ecological validity).

Exercise 8.3 A student project asked if 'the percentage of word retention is higher in male students than female students'. When discussing external validity , the students stated:

We cannot say whether or not the general public have better or worse word retention compared to the students that we will be studying.

Why is the statement not relevant in a discussion of external validity?

Exercise 8.4 Yeh et al. ( 2018 ) conducted an experimental study to 'determine if using a parachute prevents death or major traumatic injury when jumping from an aircraft'.

The researchers randomised \(23\) volunteers into one of two groups: wearing a parachute, or wearing an empty backpack. The response variable was a measurement of death or major traumatic injury upon landing. From the study, death or major injury was the same in both groups (0% for each group). However, the study used 'small stationary aircraft on the ground, suggesting cautious extrapolation to high altitude jumps' (p. 1).

Discuss the internal, external and ecological validity based on this information.

Exercise 8.5 A study examined how well hospital patients sleep at night ( Delaney et al. 2018 ) . The researchers state that 'convenience sampling was used to recruit patients' (p. 2). Later, the researchers state (p. 7):

... while most healthy individuals sleep primarily or exclusively at night, it is important to consider that patients requiring hospitalization will likely require some daytime nap periods. This study looks at sleep only in the night-time period \(22\):\(00\)--\(07\):\(00\) h, without the context of daytime sleep considered.

Exercise 8.6 Botelho et al. ( 2019 ) examined the food choices made when subjects were asked to shop for ingredients to make a last-minute meal. Half were told to prepare a 'healthy meal', and the other half told just to prepare a 'meal'. The authors stated (p. 436):

Another limitation is that results report findings from a simulated purchase. As participants did not have to pay for their selection, actual choices could be different. Participants may also have not behaved in their usual manner since they were taking part in a research study...

Exercise 8.7 D. Johnson et al. ( 2018 ) studied the use of over-the-counter menthol cough-drops in people with a cough. One conclusion from the observational study of \(548\) people was that taking 'too many cough drops [...] may actually make coughs more severe', as one author explained in an interview about the study. Critique this statement.

Exercise 8.8 Suppose a student group was studying this RQ:

Among Australians, is the average serum cholesterol concentration different for smokers and non-smokers?

The students gave the following information about their study. Explain why each of these statements is incorrect.

  • The design is observational, as we cannot manipulate each person's serum cholesterol.
  • The Outcome is 'the average serum cholesterol concentration for smokers and non-smokers'.
  • The study is not externally valid, as the results may not apply to all people in the world.
  • The response variable is serum cholesterol.
  • In this experiment, the population is 'Australians'.
  • The data file will have two columns: one for smokers, and one for non-smokers.
  • 'Whether or not the person owns a cat' is likely to be a confounding variable.
  • The observer effect is not relevant, as the participants will know they are involved in a study.

Exercise 8.9 Delarue et al. ( 2019 ) discuss studies where subjects rate the taste of new food products. They note that taste-testing studies should be externally and internally valid (p. 78):

However, even with good internal and external validity, these studies often result in a 'high rate of failures of new launched products'.

BMC Med Res Methodol

A tutorial on methodological studies: the what, when, how and why

Lawrence Mbuagbaw

1 Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON Canada

2 Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario L8N 4A6 Canada

3 Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Daeria O. Lawson

Livia Puljak

4 Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia

David B. Allison

5 Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN 47405 USA

Lehana Thabane

6 Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON Canada

7 Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON Canada

8 Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON Canada

Associated Data

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 – 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 – 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1: Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 – 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as pre-cursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese Journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been at the cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
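
As a rough sketch of the stratified approach described above (the helper name and the toy sampling frame of Cochrane vs non-Cochrane reviews are invented for illustration, loosely echoing the Kahale et al. design rather than reproducing it), equal-sized groups can be drawn with Python's standard library:

```python
import random

def stratified_sample(reports, strata_of, n_per_stratum, seed=None):
    """Draw an equal-sized simple random sample from each stratum,
    so that small groups are not underrepresented."""
    rng = random.Random(seed)
    strata = {}
    for r in reports:
        strata.setdefault(strata_of(r), []).append(r)
    return {s: rng.sample(members, n_per_stratum)
            for s, members in strata.items()}

# Hypothetical sampling frame: 60 Cochrane and 400 non-Cochrane reviews.
reports = [{"id": i, "cochrane": i < 60} for i in range(460)]
sample = stratified_sample(reports, lambda r: r["cochrane"],
                           n_per_stratum=30, seed=1)
print({s: len(v) for s, v in sample.items()})  # {True: 30, False: 30}
```

A simple random sample of 60 from this frame would be expected to contain only about 8 Cochrane reviews; stratifying guarantees 30 in each comparison group.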

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals, could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
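
As a sketch of one common precision-based calculation (the normal-approximation formula n = z^2 * p(1 - p) / E^2 for estimating a proportion to within a chosen margin of error; the function and hard-coded z-values are illustrative, and this is not El Dib et al.'s exact method):

```python
import math

def sample_size_for_proportion(p_expected, margin, confidence=0.95):
    """Articles needed to estimate a proportion to within +/- margin,
    using the normal approximation n = z^2 * p * (1 - p) / margin^2."""
    # z-values for common confidence levels (hard-coded to stay stdlib-only)
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n = z**2 * p_expected * (1 - p_expected) / margin**2
    return math.ceil(n)

# e.g. expecting ~50% of trials to report an item, with a 95% CI
# no wider than +/- 5 percentage points:
print(sample_size_for_proportion(0.5, 0.05))  # 385
```

Using p = 0.5 is the conservative choice, since p(1 - p) is largest there; a narrower margin or higher confidence level increases the required number of articles.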

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
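
One standard back-of-envelope check, not taken from this paper, is the Kish design effect, which quantifies how much clustering inflates variance when articles within journals are wrongly treated as independent:

```python
def design_effect(mean_cluster_size, icc):
    """Kish design effect: variance inflation from analysing clustered data
    (e.g. articles within journals) as if observations were independent.
    DEFF = 1 + (m - 1) * ICC, where m is the mean cluster size and ICC is
    the intra-cluster correlation."""
    return 1 + (mean_cluster_size - 1) * icc

def effective_sample_size(n_articles, mean_cluster_size, icc):
    """Number of independent observations the clustered sample is worth."""
    return n_articles / design_effect(mean_cluster_size, icc)

# e.g. 300 articles drawn from journals averaging 10 articles each,
# with a modest within-journal correlation of 0.05:
print(round(design_effect(10, 0.05), 2))                # 1.45
print(round(effective_sample_size(300, 10, 0.05), 1))   # 206.9
```

Even a small intra-cluster correlation can shrink the effective sample size appreciably, which is why confidence intervals that ignore clustering tend to be unduly narrow.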

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, however, this area will likely see rapid new advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. Even so, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but its intrinsic value in methodological studies is not obvious. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [50], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [51].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

  • Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
  • Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
  • Source of funding and conflicts of interest: Some studies have found that funded studies report better [56, 57], while others have found no such association [53, 58]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrants assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [59]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better [60]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [61]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [62].
  • Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 – 67 ].
  • Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
  • Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
  • Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
  • Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher there. However, restricting to high-JIF journals may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, methodological standards vary.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though most methodological research to date has examined quantitative studies, methodological studies of qualitative research are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings specific to methodological research (e.g. “research methodology”). Alternatively, one could conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [71]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [72]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [73]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journal endorsement of reporting guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [73]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [26]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [74]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [16].
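As a minimal sketch of statistical adjustment (not the model from any cited study), one could regress an outcome such as complete reporting on a funding indicator while adjusting for journal guideline endorsement. The variable names and data below are entirely invented:

```python
# Hedged sketch: logistic regression adjusting for a confounder.
# 'funded', 'endorses', 'complete' and all values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "funded":   [1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "endorses": [1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1],
    "complete": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1],
})

# The coefficient on 'funded' is the log-odds ratio for complete
# reporting, holding journal guideline endorsement fixed.
fit = smf.logit("complete ~ funded + endorses", data=df).fit(disp=0)
print(fit.params)
```

A real analysis would additionally need to consider clustering of articles within journals, as discussed elsewhere in this tutorial.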

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be made explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. In all cases, investigators must ensure that their sample truly represents the target population, either by (a) conducting a comprehensive and exhaustive search, or (b) drawing an appropriate, justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance on what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

  • 1. What is the aim?

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [80]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [81]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [82].

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

In addition to the aims described above, there may exist other types of methodological studies not captured here.

  • 2. What is the design?

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
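The descriptive summaries mentioned above, counts (percent) and medians (interquartile range), can be computed with nothing more than the Python standard library. The extracted values below are invented for illustration:

```python
# Hedged sketch: descriptive summaries typical of methodological studies.
# The "extracted" values below are invented for illustration.
import statistics

n_authors = [3, 5, 4, 8, 6, 2, 7, 5, 4, 6]          # authors per review
reported = [True, True, False, True, False,
            True, True, False, True, True]           # recommendation reported?

count = sum(reported)
percent = 100 * count / len(reported)
med = statistics.median(n_authors)
q1, _, q3 = statistics.quantiles(n_authors, n=4)     # quartile cut points
print(f"Reported recommendations: {count} ({percent:.0f}%)")
print(f"Authors per review: median {med} (IQR {q1}-{q3})")
```

Means (standard deviation) would be reported instead of medians (IQR) when the underlying distribution is approximately symmetric.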

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
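A test of the null hypothesis that two proportions are equal, as in the Tricco et al. example, can be sketched as a two-proportion z-test. The counts below are invented for illustration, not the actual data from that study:

```python
# Hedged sketch: two-sided z-test for equality of two proportions.
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Invented counts: "positive" conclusions in non-Cochrane (80/100)
# vs Cochrane (60/100) reviews; illustration only.
z, p = two_proportion_z(80, 100, 60, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With clustered data (e.g. reviews grouped by review team or journal), this naive test would need to be replaced with an approach that accounts for the correlation, as discussed in the clustering question above.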

  • 3. What is the sampling strategy?

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Many methodological studies use random samples of the target population [33, 91, 92]. Alternatively, purposeful sampling may be used, limiting the sample to research-related reports published within a certain time period, in journals with a certain ranking, or on a particular topic. Systematic sampling can also be used when random sampling may be challenging to implement.
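A systematic sample, every i-th record after a random start, can be drawn from a sampling frame in a few lines. The record IDs below are hypothetical:

```python
# Hedged sketch: systematic sampling from a frame of record IDs.
import random

def systematic_sample(frame, k, seed=0):
    """Select every i-th record after a random start (systematic sampling).

    For simplicity this sketch assumes len(frame) is a multiple of k."""
    interval = len(frame) // k
    start = random.Random(seed).randrange(interval)
    return frame[start::interval][:k]

# Hypothetical sampling frame of 1000 record IDs; sample 50 of them.
frame = [f"PMID{i:04d}" for i in range(1000)]
sample = systematic_sample(frame, 50)
print(len(sample), sample[:3])
```

The fixed seed makes the sample reproducible, which supports the transparency goals discussed elsewhere in this tutorial.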

  • 4. What is the unit of analysis?

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Acknowledgements

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials
EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe
GRADE: Grading of Recommendations, Assessment, Development and Evaluations
PICOT: Participants, Intervention, Comparison, Outcome, Timeframe
PRISMA: Preferred Reporting Items of Systematic reviews and Meta-Analyses
SWAR: Studies Within a Review
SWAT: Studies Within a Trial

Authors’ contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Funding

This work did not receive any dedicated funding.

Availability of data and materials

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open access | Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw (ORCID: orcid.org/0000-0001-5855-5461) 1,2,3,
  • Daeria O. Lawson 1,
  • Livia Puljak 4,
  • David B. Allison 5 &
  • Lehana Thabane 1,2,6,7,8

BMC Medical Research Methodology, volume 20, Article number: 226 (2020)


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some key aspects of methodological studies, such as what they are and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: Is it necessary to publish a study protocol? How should relevant research reports and databases be selected for a methodological study? What approaches to data extraction and statistical analysis should be considered? What are potential threats to validity, and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [19], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [20]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme, operated through the Hub for Trials Methodology Research, as a potentially useful resource for further reading on these types of experimental studies [21]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling, for which some guidance already exists [22], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items of Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [23, 24]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise the quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [25]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [26]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [27]; and Hopewell et al. describe the effect of editors' implementation of CONSORT guidelines on reporting of abstracts over time [28]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines, including the highly cited CONSORT statement [5].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
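The sampling options above can be sketched in code. The following is a minimal illustration, assuming a hypothetical sampling frame of article records tagged with a group label (the identifiers and group names are invented for the example):

```python
import random

random.seed(1)  # fix the seed so the selection is reproducible

# Hypothetical sampling frame: 500 article records tagged with a group.
frame = [(f"art{i:03d}", "Cochrane" if i % 10 == 0 else "non-Cochrane")
         for i in range(500)]

def simple_random_sample(frame, n):
    """Every research report has the same probability of selection."""
    return random.sample(frame, n)

def stratified_sample(frame, n_per_group):
    """Draw equal-sized random samples within each group, so that a
    small stratum (here, Cochrane reviews) is not underrepresented."""
    sample = []
    for group in sorted({g for _, g in frame}):
        members = [rec for rec in frame if rec[1] == group]
        sample.extend(random.sample(members, n_per_group))
    return sample

srs = simple_random_sample(frame, 60)   # mirrors the frame's 10% imbalance
strat = stratified_sample(frame, 30)    # exactly 30 reports per group
```

Documenting the seed and the frame alongside the protocol makes the selection process transparent and reproducible, in line with the recommendations above.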

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (as of 21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback on the planned study and easy retrieval by searching databases such as PubMed. The disadvantages include delays associated with manuscript handling and peer review, as well as costs: few journals publish study protocols, and those that do mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal can deposit it in a publicly available repository, such as the Open Science Framework ( https://osf.io/ ).

Q: How should I appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
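The confidence-interval approach can be sketched with the standard formula for estimating a single proportion, n = z²·p(1 − p)/d², where d is the desired margin of error. The proportion and margin below are illustrative choices, not values taken from El Dib et al.:

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_proportion(p, margin, confidence=0.95):
    """Number of research reports needed to estimate a proportion p
    within +/- margin, via the normal approximation:
    n = z^2 * p * (1 - p) / margin^2."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 for 95%
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Illustrative: estimating the prevalence of a reporting item assumed
# to be near 50%, to within +/- 5 percentage points at 95% confidence.
n = sample_size_for_proportion(0.5, 0.05)  # -> 385 reports
```

Assuming p = 0.5 is conservative, as it maximizes p(1 − p) and hence the required number of articles; a narrower margin or higher confidence increases n further.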

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
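A toy simulation (not drawn from the cited studies) illustrates why ignoring clustering understates uncertainty: when articles share a journal-level component, the effective number of independent units is closer to the number of journals than to the number of articles.

```python
import random
import statistics

random.seed(7)

# Simulated data: 20 journals ("clusters") with 15 articles each. Each
# article's reporting score shares a journal-level component, so articles
# within the same journal are correlated.
journals = []
for _ in range(20):
    journal_effect = random.gauss(0, 2)          # shared within a journal
    journals.append([journal_effect + random.gauss(0, 1) for _ in range(15)])

scores = [s for journal in journals for s in journal]

# Naive SE treats all 300 articles as independent observations.
naive_se = statistics.stdev(scores) / len(scores) ** 0.5

# A cluster-aware SE uses the variability of the 20 journal means,
# i.e. the (roughly) independent units.
journal_means = [statistics.mean(j) for j in journals]
cluster_se = statistics.stdev(journal_means) / len(journal_means) ** 0.5

# With strong within-journal correlation, cluster_se exceeds naive_se:
# the naive analysis overstates the precision of the overall mean.
```

Regression-based approaches such as GEE or mixed models generalize this idea while allowing adjustment for covariates, as in the Kosa et al. example above.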

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and should therefore be mitigated. Duplicate data extraction should be the default in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances as machine learning and natural language processing technologies support researchers with screening and data extraction [ 47 , 48 ]. In the meantime, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but its intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies are better reported [ 56 , 57 ], while others have not [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrants assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better reported [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards will be higher. However, restricting the sample to high-JIF journals may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, methodological standards vary.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though much methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: No guideline covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, and it works well for studies that aim to include the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. In the absence of formal guidance, the general requirements of scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection bias and confounding. Investigators must ensure that the methods used to select articles do not make the sample differ systematically from the set of articles to which they would like to make inferences. For example, attempting to extrapolate to all journals after analyzing only high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
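As a concrete sketch of statistical adjustment, a stratified (Mantel–Haenszel) odds ratio pools 2×2 tables across levels of a confounder, e.g. funding versus complete reporting, stratified by whether the journal endorses a reporting guideline. The counts below are entirely hypothetical:

```python
def mantel_haenszel_or(strata):
    """Pooled odds ratio across 2x2 tables, each given as
    (a, b, c, d) = (exposed & outcome, exposed & no outcome,
                    unexposed & outcome, unexposed & no outcome)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical counts: funded vs unfunded trials with complete reporting,
# stratified by journal endorsement of a reporting guideline.
strata = [
    (30, 10, 15, 10),  # endorsing journals
    (10, 20, 5, 25),   # non-endorsing journals
]
adjusted_or = mantel_haenszel_or(strata)  # about 2.2 in this example
```

In practice, regression models (as in the Zhang et al. example) achieve the same adjustment while accommodating multiple confounders at once.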

With regard to external validity, researchers conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies of trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. Investigators must also ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate, justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the types of methodological studies described above, there may exist other types not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Methodological studies that are analytical

Some methodological studies are analytical, wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease” [ 89 ]. In the case of methodological studies, all of these investigations are possible. For example, Kosa et al. investigated the association between agreement in the primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews reporting positive results are equal [ 90 ].
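A comparison like the Tricco et al. example amounts to a test of equality of two proportions. A minimal sketch using the pooled two-proportion z-test, with hypothetical counts rather than the actual study data:

```python
from math import sqrt, erfc

def two_proportion_z_test(x1, n1, x2, n2):
    """Test H0: p1 == p2 using the pooled two-proportion z statistic."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)            # common proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))          # two-sided normal tail
    return z, p_value

# Hypothetical: 60/100 non-Cochrane vs 40/100 Cochrane reviews
# reporting positive findings.
z, p = two_proportion_z_test(60, 100, 40, 100)  # z ≈ 2.83, p ≈ 0.005
```

In practice, a chi-squared test or a logistic regression model (allowing adjustment for the covariates discussed earlier) would give equivalent or richer inference.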

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig.  2 .

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.


Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.


Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L: A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals Can J Anaesthesia 2018, 65(11):1180–1195.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

CAS   PubMed   Google Scholar  

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Google Scholar  

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

CAS   Google Scholar  

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

CAS   PubMed   PubMed Central   Google Scholar  

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA: Assessing the Quality of Reporting of Harms in Randomized Controlled Trials Published in High Impact Cardiovascular Journals. Eur Heart J Qual Care Clin Outcomes 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Download references

Acknowledgements

This work did not receive any dedicated funding.


Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7


Research-Methodology

Research Limitations

It is certain that your research will have some limitations, and that is normal. However, you should strive to minimize the scope of those limitations throughout the research process, and you need to acknowledge them honestly in the conclusions chapter.

It is always better to identify and acknowledge the shortcomings of your work yourself than to have them pointed out by your dissertation assessor. When discussing your research limitations, do not simply list and describe the shortcomings of your work; explain how these limitations have affected your research findings.

Your research may have multiple limitations, but you need to discuss only those that directly relate to your research problem. For example, if conducting a meta-analysis of secondary data was not stated as one of your research objectives, there is no need to mention it as a limitation.

Research limitations in a typical dissertation may relate to the following points:

1. Formulation of research aims and objectives . You might have formulated your research aims and objectives too broadly. You can specify the ways in which this formulation could be narrowed to sharpen the focus of the study.

2. Implementation of data collection method . Because you do not have extensive experience in primary data collection (otherwise you would not be reading this), there is a good chance that the implementation of your data collection method is flawed.

3. Sample size. Sample size depends on the nature of the research problem. If the sample size is too small, statistical tests will be unable to detect significant relationships within the data set. You can state that basing your study on a larger sample size could have generated more accurate results. Sample size matters more in quantitative studies than in qualitative ones.

4. Lack of previous studies in the research area . A literature review is an important part of any research, because it identifies the scope of work done so far in the research area. Literature review findings serve as the foundation on which the researcher builds to achieve the research objectives.

However, there may be little, if any, prior research on your topic if you have focused on a very new, evolving, or narrow research problem. For example, if you have chosen to explore the role of Bitcoin as a future currency, you may not find many scholarly papers addressing the problem, because Bitcoin is a recent phenomenon.

5. Scope of discussions . You can include this point as a limitation of your research regardless of your research area. Because you most likely do not have many years of experience conducting research and producing academic works of this size, the scope and depth of the discussion in your paper will be compromised at many levels compared to the work of experienced scholars.
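To make point 3 above concrete, the number of participants needed to detect an effect can be estimated before data collection. The sketch below is an illustrative aside, not drawn from the original text: the helper `required_n` is hypothetical and uses the standard Fisher z approximation with hard-coded normal quantiles for the conventional two-sided alpha of 0.05 and 80% power.

```python
import math

# Conventional standard-normal quantiles (assumed, not from the text):
Z_ALPHA = 1.959964  # two-sided alpha = 0.05
Z_BETA = 0.841621   # power = 0.80 (beta = 0.20)

def required_n(r: float) -> int:
    """Approximate sample size needed to detect a Pearson correlation r,
    using the Fisher z transformation (rounded to whole participants)."""
    c = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z of r
    return round(((Z_ALPHA + Z_BETA) / c) ** 2 + 3)

# Smaller effects demand much larger samples:
print(required_n(0.5))  # 29
print(required_n(0.3))  # 85
```

Even a moderate correlation (r = 0.3) already calls for roughly 85 participants under these conventions, which illustrates why an underpowered sample is such a common dissertation limitation.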

You can present certain points from your research limitations as suggestions for further research in the conclusions chapter of your dissertation.

My e-book,  The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step assistance , offers practical help for completing a dissertation with minimal stress. The e-book covers all stages of writing a dissertation, from selecting the research area to submitting the completed work by the deadline. John Dudovskiy


Psychology: Research and Review

  • Open access
  • Published: 25 January 2017

Scale development: ten main limitations and recommendations to improve future research practices

  • Fabiane F. R. Morgado,
  • Juliana F. F. Meireles,
  • Clara M. Neves,
  • Ana C. S. Amaral &
  • Maria E. C. Ferreira

Psicologia: Reflexão e Crítica, volume 30, Article number: 3 (2018)


An Erratum to this article was published on 03 March 2017

The scale development process is critical to building knowledge in human and social sciences. The present paper aimed (a) to provide a systematic review of the published literature regarding current practices of the scale development process, (b) to assess the main limitations reported by the authors in these processes, and (c) to provide a set of recommendations for best practices in future scale development research. Papers were selected in September 2015, with the search terms “scale development” and “limitations” from three databases: Scopus, PsycINFO, and Web of Science, with no time restriction. We evaluated 105 studies published between 1976 and 2015. The analysis considered the three basic steps in scale development: item generation, theoretical analysis, and psychometric analysis. The study identified ten main types of limitation in these practices reported in the literature: sample characteristic limitations, methodological limitations, psychometric limitations, qualitative research limitations, missing data, social desirability bias, item limitations, brevity of the scale, difficulty controlling all variables, and lack of manual instructions. Considering these results, various studies analyzed in this review clearly identified methodological weaknesses in the scale development process (e.g., smaller sample sizes in psychometric analysis), but only a few researchers recognized and recorded these limitations. We hope that a systematic knowledge of the difficulties usually reported in scale development will help future researchers to recognize their own limitations and especially to make the most appropriate choices among different conceptions and methodological strategies.

Introduction

In recent years, numerous measurement scales have been developed to assess attitudes, techniques, and interventions in a variety of scientific applications (Meneses et al. 2014). Measurement is a fundamental activity of science, since it enables researchers to acquire knowledge about people, objects, events, and processes. Measurement scales are useful tools to attribute scores in some numerical dimension to phenomena that cannot be measured directly. They consist of sets of items revealing levels of theoretical variables otherwise unobservable by direct means (DeVellis 2003).

A variety of authors (Clark and Watson 1995; DeVellis 2003; Nunnally 1967; Pasquali 2010) have agreed that the scale development process involves complex and systematic procedures that require theoretical and methodological rigor. According to these authors, the scale development process can be carried out in three basic steps.

In the first step, commonly referred to as “item generation,” the researcher provides theoretical support for the initial item pool (Hutz et al. 2015). Methods for initial item generation can be classified as deductive, inductive, or a combination of the two. Deductive methods involve item generation based on an extensive literature review and pre-existing scales (Hinkin 1995). Inductive methods, on the other hand, base item development on qualitative information regarding a construct, obtained from opinions gathered from the target population, e.g., via focus groups, interviews, expert panels, and qualitative exploratory research methodologies (Kapuscinski and Masters 2010). The researcher is also concerned with a variety of parameters that regulate the setting of each item and of the scale as a whole: for example, suitable scale instructions, an appropriate number of items, an adequate display format, and appropriate item wording (all items should be simple, clear, specific, ensure variability of response, remain unbiased, etc.), among other parameters (DeVellis 2003; Pasquali 2010).

In the second step, usually referred to as the “theoretical analysis,” the researcher assesses the content validity of the new scale, ensuring that the initial item pool reflects the desired construct (Arias et al. 2014). A content validity assessment is required, since inferences are made based on the final scale items. The item content must be deemed valid to instill confidence in all consequent inferences. In order to ensure content validity, the researcher seeks other opinions about the operationalized items. The opinions can be those of expert judges (experts in scale development or in the target construct) or target population judges (potential users of the scale), enabling the researcher to ensure that the hypothesis elaborated in the research appropriately represents the construct of interest (Nunnally 1967).
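The expert-judging step described above is often summarized with a simple statistic. One widely used option, mentioned here only as an illustrative aside (the paragraph itself does not name it), is the item-level content validity index: the proportion of judges who rate an item as relevant, e.g., 3 or 4 on a 4-point relevance scale. A minimal sketch:

```python
def item_cvi(ratings):
    """Item-level content validity index: the share of judges who
    rate the item as relevant (3 or 4 on a 4-point scale)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

# Five hypothetical judges rate one draft item:
print(item_cvi([4, 4, 3, 2, 4]))  # 0.8
```

Items whose index falls below a pre-chosen cutoff are typically revised or dropped before psychometric testing begins.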

In the last step, psychometric analysis, the researcher should assess whether the new scale has construct validity and reliability. Construct validity is most directly related to the question of what the instrument is in fact measuring: what construct, trait, or concept underlies an individual’s performance or score on a measure (Churchill 1979). This refers to the degree to which inferences can legitimately be made from the observed scores to the theoretical constructs about which these observations are supposed to contain information (Podsakoff et al. 2013). Construct validity can be assessed with the use of exploratory factor analysis (EFA), confirmatory factor analysis (CFA), or with convergent, discriminant, predictive/nomological, criterion, internal, and external validity. In turn, reliability is a measure of score consistency, usually assessed via internal consistency, test-retest reliability, split-half reliability, item-total correlation/inter-item reliability, and inter-observer reliability (DeVellis 2003). To ensure construct validity and reliability, the data should be collected from a large and appropriately representative sample of the target population. A common rule of thumb is that there should be at least 10 participants for each item of the scale, with ratios of 15:1 or 20:1 considered ideal (Clark and Watson 1995; DeVellis 2003; Hair Junior et al. 2009).
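Both ideas in the paragraph above, internal consistency and the participants-per-item rule of thumb, can be sketched in a few lines. The snippet below is a hypothetical illustration, not taken from the article: it computes Cronbach's alpha (one common internal-consistency estimate) from a respondents-by-items score matrix and checks the 10:1 rule against invented data.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents' item-score rows."""
    k = len(scores[0])                       # number of items
    items = list(zip(*scores))               # one column per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

def meets_rule_of_thumb(n_participants, n_items, ratio=10):
    """At least `ratio` participants per scale item."""
    return n_participants >= ratio * n_items

# Hypothetical 6 respondents x 3 items (Likert 1-5):
data = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [1, 2, 2],
]
print(round(cronbach_alpha(data), 2))  # 0.95
print(meets_rule_of_thumb(len(data), 3))  # False: 6 < 30
```

The second check makes the sample-size limitation visible at a glance: six respondents for a three-item scale falls far short of the 30 the 10:1 rule of thumb suggests, regardless of how high alpha happens to be.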

Although the literature on theoretical and methodological care in scale development is extensive, many limitations have been identified in the process. These include failure to adequately define the construct domain, failure to correctly specify the measurement model, underutilization of some techniques that are helpful in establishing construct validity (MacKenzie et al. 2011), relatively weak psychometric properties, applicability to only a single form of treatment or manual, extensive time required to fill out the questionnaire (Hilsenroth et al. 2005), inappropriate item wording, too few items and participants in the construction and analysis, an imbalance between items that assess positive beliefs and those that assess negative beliefs (Prados 2007), and social desirability bias (King and Bruner 2000), among others.

These limitations in the scale development process weaken the obtained psychometric results, limiting the future applicability of the new scale and hindering its generalizability. In this sense, knowledge of the most often reported limitations is fundamental in providing essential information to help develop best practices for future research in this area. The purpose of this article is threefold: (a) to provide a systematic review of the published literature regarding some current practices of the scale development process, (b) to assess the main limitations reported by the authors in this process, and (c) to provide a set of recommendations for best practices in future scale development research.

This systematic review identified and selected papers from three databases: Scopus, PsycINFO, and Web of Science. There was no time restriction in the literature search, which was completed on September 1, 2015. The search term “scale development” was applied to “Any Field” (PsycINFO), “Article Title, Abstract, Keywords” (Scopus), or any “Topic” (Web of Science). In addition, we used an advanced search (“search within results”) to filter the articles, with the term “limitations” applied to “Any Field” in all databases. Both terms were used in English only. Four reviewers evaluated the papers independently and blindly. Any disagreements on the eligibility of a particular study were resolved through consensus among the reviewers.

Figure 1 shows a flowchart summarizing the strategy adopted for identification and selection of studies. We used a single inclusion criterion for the evaluation of the studies: articles that aim to develop and validate self-administered measurement scales for humans. We excluded (a) papers whose full text was unavailable in the analyzed databases, (b) papers in languages other than English, Portuguese, or Spanish, (c) articles that were not clearly aimed at the development of a new scale (i.e., we excluded articles investigating only the reliability, validity, or revision of existing scales, and studies describing the validation of instruments in other languages), (d) papers with unvalidated scales, and (e) articles that did not declare the limitations of the study.

Flowchart summarizing the systematic process of identifying and selecting articles

In all, this systematic review evaluated 105 studies published between 1976 and 2015. Most (88.5%) were published between 2005 and 2015, and only two studies date from the last century. We analyzed two major issues: (a) current practices of the scale development process—considering the three steps usually reported in the literature (step 1—item generation, step 2—theoretical analysis, step 3—psychometric analysis), the number of participants in step 3, the number of items in the beginning scale, and the number of items in the final scale; and (b) main limitations reported by the authors in the scale development process—considering the limitations observed and recorded by the authors during the scale development process. The description of these results can be found in Table 1.

Current practices of the scale development process

Step 1—item generation.

In the first step, 35.2% ( n  = 37) of the studies reported using exclusively deductive methods to write items, 7.6% ( n  = 8) used only inductive methods, and 56.2% ( n  = 59) combined deductive and inductive strategies. The majority of the studies used a literature review (84.7%, n  = 89) as the deductive method in item generation. In inductive methods, 26.6% of studies ( n  = 28) chose to conduct an interview.

Step 2—theoretical analysis

In order to theoretically refine the items, several studies used opinions of experts (74.2%, n  = 78), whereas others used target population opinions (43.8%, n  = 46). In addition, 63.8% ( n  = 67) of the studies used only one of these approaches (expert or population judges).

Step 3—psychometric analysis

The most common analyses that have been used to assess construct validity are EFA (88.6%, n  = 93), CFA (72.3%, n  = 76), convergent validity (72.3%, n  = 76), and discriminant validity (56.2%, n  = 59). Most studies opted to combine EFA and CFA (65.7%, n  = 69). Only 4.7% ( n  = 5) failed to use factor analysis in their research. In relation to study reliability, internal consistency checks were used by all studies and test-retest reliability was the second most commonly used technique (22.8%, n  = 24).

Sample size in step 3 and number of items

Interestingly, 50.4% (n = 53) of the studies used sample sizes smaller than the rule of thumb of a minimum of 10 participants for each item in the scale. Regarding number of items, nearly half of the studies (49.6%, n = 52) lost more than 50% of the initial item pool during the validation process.

Table  2 summarizes and provides more details on our findings regarding the current practices in the scale development.

Main limitations reported in the scale development process

As a result of this systematic review, we found ten main limitations commonly referenced in the scale development process: (1) sample characteristic limitations—cited by 81% of the studies, (2) methodological limitations—33.2%, (3) psychometric limitations—30.4%, (4) qualitative research limitations—5.6%, (5) missing data—2.8%, (6) social desirability bias—1.9%, (7) item limitations—1.9%, (8) brevity of the scale—1.9%, (9) difficulty controlling all variables—0.9%, and (10) lack of manual instructions—0.9%. Table 3 summarizes these findings.

This systematic review was primarily directed at identifying the published literature regarding current practices of the scale development. The results show a variety of practices that have been used to generate and assess items, both theoretically and psychometrically. We evaluated these current practices, considering three distinct steps (item generation, theoretical analysis, and psychometric analysis). We also considered the relationship between sample size and number of items, since this is considered an important methodological aspect to be evaluated during the scale development process. The results are discussed together with recommendations for best practices in future scale development research.

Current practices of the scale development process—findings and research implications

Regarding step 1, item generation, our results show that, although several studies used exclusively deductive methods (e.g., Henderson-King and Henderson-King 2005; Kim et al. 2011), the majority (e.g., Bakar and Mustaffa 2013; Uzunboylu and Ozdamli 2011) combined deductive and inductive methods, a combination consistent with the recommended strategy for the creation of new measures (DeVellis 2003). These findings, however, differ from previous critical reviews of scale development practices, which found that most of the reported studies used exclusively deductive methods (Hinkin 1995; Kapuscinski and Masters 2010; Ladhari 2010). This is particularly important since the quality of generated items depends on the way the construct is defined. Failing to adequately define the conceptual domain of a construct causes several problems, for example: (a) confusion about what the construct does and does not refer to, including the similarities and differences between it and other constructs that already exist in the field; (b) indicators that may be either deficient or contaminated; and (c) invalid conclusions about relationships with other constructs (MacKenzie et al. 2011). Considering that item generation may be the most important part of the scale development process, future measures should be developed using an appropriate definition of the conceptual domain based on the combination of both deductive and inductive approaches.

Our results suggest that literature review was the most widely used deductive method (e.g., Bolton and Lane 2012 ; Henderson-King and Henderson-King 2005 ). This is consistent with the views of several other researchers who have systematically reviewed scales (Bastos et al. 2010 ; Ladhari 2010 ; Sveinbjornsdottir and Thorsteinsson 2008 ). Nevertheless, this finding differs from another study (Kapuscinski and Masters 2010 ) that found that the most common deductive strategies were reading works by spiritual leaders, theory written by psychologists, and discussion among authors. Literature review should be considered central for the enumeration of the constructs. It also serves to clarify the nature and variety of the target construct content. In addition, literature reviews help to identify existing measures that can be used as references to create new scales (Clark and Watson 1995 ; DeVellis 2003 ). In this sense, future research should consider the literature review as the initial and necessary deductive step foundational to building a new scale.

This review also highlights the fact that interviews and focus groups were the most widely used inductive methods (e.g., Lin and Hsieh 2011 ; Sharma 2010 ). Similar results were found in the systematic review by Kapuscinski and Masters ( 2010 ), Sveinbjornsdottir and Thorsteinsson ( 2008 ), and Ladhari ( 2010 ). These findings have particular relevance to future researchers, since they emphasize the importance of using methodological strategies that consider the opinions of the target population. Despite the fact that a panel of experts contributes widely to increasing the researchers’ confidence in the content validity of the new scale, it is important to also consider the most original and genuine information about the construct of interest, which can be best obtained through reports obtained from interviews and focus groups with the target population.

Related to step 2, theoretical analysis, the results of this review indicate that expert judges have been the most widely utilized tool for analyzing content validity (e.g., Uzunboylu and Ozdamli 2011; Zheng et al. 2010). Previous studies have also found expert opinion to be the most common qualitative method for the elimination of unsuitable items (Kapuscinski and Masters 2010; Ladhari 2010). In the literature review conducted by Hardesty and Bearden (2004), the authors highlighted the importance of having experts carefully analyze the initial item pool. They suggested that any research using new, changed, or previously unexamined scale items should at a minimum have those items judged by a panel of experts. However, the authors also pointed out an apparent lack of consistency in the literature in how researchers use the opinions of expert judges when deciding whether or not to retain items for a scale. Given this inconsistency, the authors developed guidelines on the decision rules to apply for item retention. For example, the “sumscore decision rule,” defined as the total score for an item across all judges, is considered by the authors to be the most effective in predicting whether an item should be included in a scale and therefore appears to be a reasonable rule for researchers to employ.
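
The sumscore decision rule can be sketched in a few lines: each judge rates every candidate item, the ratings are summed per item, and items below a chosen cutoff are dropped. The ratings, scale, and cutoff below are hypothetical, not taken from the reviewed studies:

```python
# Hypothetical sketch of the "sumscore decision rule" for item retention.
def sumscore_retain(ratings_by_item, cutoff):
    """ratings_by_item: {item: [one rating per judge]}; keep items with sum >= cutoff."""
    return {item: sum(r) for item, r in ratings_by_item.items() if sum(r) >= cutoff}

ratings = {
    "item_1": [3, 3, 2],  # three judges rating relevance on a 1-3 scale
    "item_2": [1, 2, 1],
    "item_3": [3, 2, 3],
}
print(sumscore_retain(ratings, cutoff=7))  # {'item_1': 8, 'item_3': 8}
```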

Future research in developing scales should be concerned not only with the opinions of experts but also with the opinions of the target population. The results of this review show that only a minority of studies considered the review of the scales’ items by members of the target population (e.g., Uzunboylu and Ozdamli 2011; Zheng et al. 2010). In addition, a smaller minority combined the two approaches in the assessment of item content (e.g., Mahudin et al. 2012; Morgado et al. 2014). The limited use of target population opinions is a problem: a previous systematic review of scale development found that the opinion of these individuals is the basis for content validity (Bastos et al. 2010). As highlighted by Clark and Watson (1995) and Malhotra (2004), it is essential for the new scale to undergo prior review by members of the target population. Pre-test or pilot study procedures make it possible to determine respondents’ opinions of, and reactions to, each item on the scale, enabling researchers to identify and eliminate potential problems before the scale is applied on a large scale.

Another problem noted in this systematic review was that some studies failed to clearly report how they performed the theoretical analysis of the items (e.g., Glynn et al. 2015 ; Gottlieb et al. 2014 ). We hypothesized that the authors either did not perform this analysis or found it unimportant to record. Future research should consider this analysis, as well as all subsequent analyses, necessary and relevant for reporting.

Almost all studies (95.3%) reported using at least one type of factor analysis—EFA or CFA—in step 3, psychometric analysis (e.g., Sewitch et al. 2003; Tanimura et al. 2011). Clark and Watson (1995) observed that “unfortunately, many test developers are hesitant to use factor analysis, either because it requires a relatively large number of respondents or because it involves several perplexing decisions” (p. 17). They emphasized the researcher’s need to understand and apply this analysis: “it is important that test developers either learn about the technique or consult with a psychometrician during the scale development process” (Clark and Watson 1995, p. 17). This concern appears largely to have been overcome in recent studies, since the vast majority of the analyzed studies used factor analysis.

Among the studies that used factor analysis, the majority chose to use EFA (e.g., Bakar and Mustaffa 2013; Turker 2009). Similar to our findings, Bastos et al. (2010) and Ladhari (2010) found EFA to be the more commonly utilized construct validity method when compared to CFA. EFA is widely valued because it is effective in identifying the underlying latent variables or factors of a measure by exploring relationships among observed variables. However, it allows for more subjectivity in the decision-making process than many other statistical procedures, which can be a problem (Roberson et al. 2014).

For more consistent results on the psychometric indices of the new scale, DeVellis ( 2003 ) indicates the combined use of EFA and CFA, as was performed with most studies evaluated in this review. In CFA, the specific hypothesized factor structure proposed in EFA (including the correlations among the factors) is statistically evaluated. If the estimated model fits the data, then a researcher concludes that the factor structure replicates. If not, the modification indices are used to identify where constraints placed on the factor pattern are causing a misfit (Reise et al. 2000 ). Future studies should consider the combined use of EFA and CFA during the evaluation of construct validity of the new measure, and should also apply a combination of multiple fit indices (e.g., modification indices) in order to provide more consistent psychometric results.

After EFA and CFA, convergent validity was the preferred technique in the vast majority of the studies included in this review (e.g., Brun et al. 2014; Cicero et al. 2010). This finding is consistent with prior research (Bastos et al. 2010). Convergent validity consists of examining whether a scale’s score is associated with other variables and measures of the same construct to which it should be related. It is verified either by calculating the average variance extracted (AVE) for each factor—convergence is supported when the shared variance accounts for 0.50 or more of the total variance—or by correlating the scale with a measure of overall quality (Ladhari 2010). After convergent validity, the following methods were identified as the most frequently used in the assessment of construct validity: discriminant validity (the extent to which the scale’s score does not correlate with unrelated constructs) (e.g., Coker et al. 2011), predictive/nomological validity (the extent to which the scores of one construct are empirically related to the scores of other conceptually related constructs) (e.g., Sharma 2010), criterion validity (the empirical association of the new scale with a gold standard criterion concerned with the prediction of a certain behavior) (e.g., Tanimura et al. 2011), internal validity (whether the study results and conclusions are valid for the study population), and external validity (the generalizability of the study) (e.g., Bolton and Lane 2012; Khorsan and Crawford 2014). Considering the importance of validity for ensuring the quality of the collected data and the generalizability of the new instrument, future studies should employ multiple ways of assessing the validity of the new scale, thus increasing the psychometric rigor of the analysis.
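
The AVE criterion mentioned above reduces to a simple formula—the mean of the squared standardized loadings of a factor’s items. As a minimal sketch (the loadings below are invented for illustration):

```python
# Minimal sketch of average variance extracted (AVE): the mean of the
# squared standardized factor loadings. An AVE of 0.50 or more is
# commonly taken to support convergent validity.
def average_variance_extracted(loadings):
    return sum(l * l for l in loadings) / len(loadings)

factor_loadings = [0.82, 0.75, 0.68, 0.71]  # hypothetical loadings of one factor
ave = average_variance_extracted(factor_loadings)
print(round(ave, 2), ave >= 0.50)  # 0.55 True
```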

With regard to reliability, all studies reported internal consistency statistics (Cronbach’s alpha) for all subscales and/or the final version of the full scale (e.g., Schlosser and McNaughton 2009; Sewitch et al. 2003). These findings are consistent with those of previous review studies (Bastos et al. 2010; Kapuscinski and Masters 2010). DeVellis (2003) explains that internal consistency, which concerns the homogeneity of the items within a scale, is the most widely used measure of reliability. Given its importance, future studies should consider alpha evaluation a central point of measurement reliability and, as much as possible, supplement internal consistency with other measures of reliability. After internal consistency, the following methods were identified by this review: test-retest reliability (analysis of temporal stability; items are applied on two separate occasions and the scores correlated) (e.g., Forbush et al. 2013), item-total/inter-item correlation reliability (analysis of the correlation of each item with the total score of the scale or subscales/analysis of the correlation of each item with every other item) (e.g., Rodrigues and Bastos 2012), split-half reliability (the scale is split in half and the first half of the items is compared to the second half) (e.g., Uzunboylu and Ozdamli 2011), and inter-judge reliability (analysis of the consistency between two different observers assessing the same measure in the same individual) (e.g., Akter et al. 2013; DeVellis 2003; Nunnally 1967).
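
As a rough illustration of the statistic every study reported, Cronbach’s alpha can be computed directly from a respondents-by-items matrix; the responses below are invented:

```python
# Stdlib-only sketch of Cronbach's alpha (internal consistency):
# alpha = (k / (k - 1)) * (1 - sum of item variances / total-score variance).
from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])  # number of items
    item_vars = [pvariance([row[i] for row in rows]) for i in range(k)]
    total_var = pvariance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [  # 5 hypothetical respondents x 4 items (1-5 Likert responses)
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(data), 2))  # 0.95
```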

Regarding sample size in step 3 and number of items, a particularly noteworthy finding was that most studies utilized sample sizes smaller than the rule-of-thumb minimum ratio of 10:1 (e.g., Turker 2009; Zheng et al. 2010). DeVellis (2003) and Hair Junior et al. (2009) comment that the sample size should be as large as possible to ensure factor stability, with an observations-to-variables ratio of 15:1, or even 20:1, considered ideal. However, most of the studies included in this review failed to adopt this rule. Some studies sought justification in evidence concerning the effectiveness of much smaller observations-to-variables ratios. For example, Nagy et al. (2014) justified the small sample size used in their investigation with the findings of Barrett and Kline (1981), who concluded that the difference between ratios of 1.25:1 and 31:1 was not a significant contributor to factor stability. Additionally, Arrindell and van der Ende (1985) concluded that ratios of 1.3:1 and 19.8:1 did not impact factor stability. Although the rules of thumb vary enormously, ten participants per item has widely been considered a safe recommendation (Sveinbjornsdottir and Thorsteinsson 2008).

Finally, several studies had their final number of items reduced by more than 50%. For example, Flight et al. (2011) developed an initial item pool of 122 items and finished the scale with only 43. Pommer et al. (2013) developed 391 initial items and finished with only 18. Our findings clearly indicate that a substantial number of items can be lost during the development of a new scale. These results are consistent with previous literature, which states both that the initial number of items should be twice the desired number in the final scale, since many items may be excluded as inadequate during item analysis (Nunnally 1967), and that the initial set of items should be three or four times the number of items desired, as a good way to ensure the internal consistency of the scale (DeVellis 2003). Future research should consider these issues and expect significant loss of items during the scale development process.

Ten main limitations reported in the scale development process—findings and research implications

In addition to identifying the current practices of the scale development process, this review also aims to assess the main limitations reported by the authors. Ten limitations were found, which will be discussed together with recommendations for best practices in future scale development research (Table  3 ).

Sample characteristic limitations

Limitations related to sample characteristics were recorded in the majority of the studies, in two main ways. The first and most representative was related to the sample type. Several studies used homogeneous sampling (e.g., Forbush et al. 2013; Morean et al. 2012), whereas others used convenience sampling (e.g., Coker et al. 2011; Flight et al. 2011). Both homogeneous and convenience samples were associated with limitations of generalization. For example, Atkins and Kim (2012) pointed out that “the participants for all stages of the study were US consumers; therefore, this study cannot be generalized to other cultural contexts.” Similarly, “convenience samples are weaknesses of this study, as they pose generalizability questions,” as highlighted by Blankson et al. (2012). Nunnally (1967) suggested that, to extend the generalizability of the new scale, sample diversification should be considered in data collection, particularly in the psychometric evaluation step. Future studies should consider this suggestion, recruiting heterogeneous and truly random samples for the evaluation of construct validity and the reliability of the new measure.

The second was related to small sample size. As previously described, most of the analyzed studies utilized sample sizes below the 10:1 ratio, and only some of the authors recognized this flaw. For example, Nagy et al. (2014) reported that “the sample size employed in conducting the exploratory factor analysis is another potential limitation of the study,” Rosenthal (2011) described, “the current study was limited by the relatively small nonprobability sample of university students,” and Ho and Lin (2010) recognized that “the respondent sample size was small.” Based on these results, we emphasize that future research should seek a larger sample size (a minimum ratio of 10:1) to increase the credibility of the results and thus obtain a more precise outcome in the psychometric analysis.

Methodological limitations

Cross-sectional design was the methodological limitation most often reported (e.g., Schlosser and McNaughton 2009; Tombaugh et al. 2011). Data collected under a cross-sectional study design carry the typical limitation associated with this type of research methodology, namely the inability to determine causal relationships. If cross-sectional methods are used to estimate models whose parameters do in fact vary over time, the resulting estimation may fail to yield statistically valid results, fail to identify the true model parameters, and produce inefficient estimates (Bowen and Wiersema 1999). Accordingly, different authors (e.g., Akter et al. 2013; Boyar et al. 2014) recognized that employing instruments at one point in time limits the ability to assess causal relationships. With the goal of remediating these issues and gaining a deeper understanding of the construct of interest, different studies (e.g., Morean et al. 2012; Schlosser and McNaughton 2009) suggest conducting a longitudinal study during scale development. Longitudinal designs may also allow assessment of the scale’s predictive validity, since they evaluate whether the proposed interpretation of test scores can predict outcomes of interest over time. Therefore, future studies should consider a longitudinal approach to scale development, both to facilitate greater understanding of the analyzed variables and to assess predictive validity.

Self-reporting methodologies were also cited as limitations in some studies (e.g., Fisher et al. 2014 ; Pan et al. 2013 ). Mahudin et al. ( 2012 ) clarified that the self-reporting nature of quantitative studies raises the possibility of participant bias, social desirability, demand characteristics, and response sets. Such possibilities may, in turn, affect the validity of the findings. We agree with the authors’ suggestion that future research may also incorporate other objective or independent measures to supplement the subjective evaluation of the variables studied in the development of the new scale and to improve the interpretation of findings.

In addition, web-based surveys were another methodological limitation reported in some studies (e.g., Kim et al. 2011 ; Reed et al. 2011 ). Although this particular method has time- and cost-saving elements for data collection, its limitations are also highlighted. Researchers have observed that important concerns include coverage bias (bias due to sampled individuals not having—or choosing not to access—the Internet) and nonresponse bias (bias due to participants of a survey differing from those who did not respond in terms of demographic or attitudinal variables) (Kim et al. 2011 ). Alternatives to minimize the problem in future research would be in-person surveys or survey interviews. Although more costly and more time consuming, these methods reduce problems related to concerns about confidentiality and the potential for coverage and nonresponse bias (Reed et al. 2011 ). Therefore, whenever possible, in-person surveys or survey interviews should be given priority in future research rather than web surveys.

Psychometric limitations

Consistent with previous reports (MacKenzie et al. 2011; Prados 2007), this systematic review found distinct psychometric limitations reported in the scale development process. The lack of a more robust demonstration of construct validity and/or reliability was the limitation mentioned most often. For example, Alvarado-Herrera et al. (2015) reported the lack of a more robust demonstration of predictive validity, whereas Kim et al. (2011) noted the same for nomological validity. Caro and Garcia (2007) noted that the relationships of the scale with other constructs were not analyzed. Saxena et al. (2015) and Pan et al. (2013) described the lack of demonstrable temporal stability (e.g., test-retest reliability). Imprecise or incomplete psychometric procedures employed during scale development are likely to obscure the outcome. Therefore, future research must consider the adverse consequences for the reliability and validity of any construct caused by poor test-theoretical practices. Only through detailed information and explanation of the rationale for statistical choices can new measures be shown to have sufficient psychometric adjustment (Sveinbjornsdottir and Thorsteinsson 2008).
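
Temporal stability, whose absence several of these studies flagged, is typically estimated by administering the scale twice to the same respondents and correlating the two sets of scores. A minimal sketch, with invented scores:

```python
# Hedged sketch of test-retest reliability as a Pearson correlation
# between two administrations of the same scale to the same respondents.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time_1 = [22, 30, 18, 27, 25]  # hypothetical total scores, first administration
time_2 = [24, 29, 17, 28, 24]  # same respondents, some weeks later
print(round(pearson_r(time_1, time_2), 2))  # 0.95
```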

Additionally, the inadequate choice of the instruments or variables to be correlated with the variable of interest was another psychometric limitation cited in some studies (e.g., Bakar and Mustaffa 2013 ; Tanimura et al. 2011 ). This kind of limitation directly affects the convergent validity, which is a problem since, as has already been shown in this review, this type of validity has been one of the most recurrent practices in scale development. One hypothesis for this limitation may be the lack of gold standard measures to assess similar constructs as those of a new scale. In such cases, a relatively recent study by Morgado et al. ( 2014 ) offers a valid alternative. The authors used information collected on sociodemographic questionnaires (e.g., level of education and intensity of physical activity) to correlate with the constructs of interest. Future researchers should seek support from the literature on the constructs that would be theoretically associated with the construct of interest, searching for alternatives in information collected on, for example, sociodemographic questionnaires, to assess the convergent validity of the new scale.

Another psychometric limitation reported in some studies was related to factor analysis. These limitations took five main forms: (1) EFA and CFA were conducted using data from the same sample (Zheng et al. 2010)—when this occurs, good model fit in the CFA is expected, and as a consequence the added strength of the CFA in testing a hypothesized structure on a new data set, based on theory or previous findings, is lost (Khine 2008); (2) lack of CFA (Bolton and Lane 2012)—in this case, the researcher loses the possibility of assigning items to factors, testing the hypothesized structure of the data, and statistically comparing alternative models (Khine 2008); (3) a certain amount of subjectivity was necessary in identifying and labeling factors in EFA (Lombaerts et al. 2009)—since a factor is qualitative, it is common practice to label each factor based on an interpretation of the variables loading most heavily on it; the problem is that these labels are subjective in nature, represent the authors’ interpretation, and the loading cutoffs used typically vary from 0.30 to 0.50 (Gottlieb et al. 2014; Khine 2008); (4) an initial unsatisfactory factor analysis output (Lombaerts et al. 2009); and (5) lack of a more robust CFA level (Jong et al. 2014)—when the study results fall short of the statistical benchmarks expected for EFA (e.g., KMO, Bartlett’s test of sphericity) and/or CFA (e.g., CFI, GFI, RMSEA), this is an important limitation, since the tested exploratory and theoretical models cannot be considered valid (Khine 2008). Given these results, future studies should consider the use of separate samples for EFA and CFA, the combination of EFA and CFA, the definition of objective parameters for labeling factors, and, when EFA or CFA results are unsatisfactory, the pursuit of alternatives that better fit the model.

Qualitative research limitations

This review also found reported limitations in the qualitative approach of the analyzed studies. The first was related to the exclusive use of the deductive method to generate items. It is noteworthy that, although several studies included in this review used exclusively deductive methods to generate items, only two recognized this as a limitation (Coleman et al. 2011; Song et al. 2011). Both studies used only a literature review to generate and operationalize the initial item pool. The authors recognized the importance of this deductive method for theoretically operationalizing the target construct, but they noted that, “for further research, more diverse views should be considered to reflect more comprehensive perspectives of human knowledge-creating behaviors to strengthen the validity of the developed scales” (Song et al. 2011, p. 256) and, “a qualitative stage could have been used to generate additional items […]. This could also have reduced measurement error by using specific language the population used to communicate” (Coleman et al. 2011, p. 1069). Thus, the combination of deductive and inductive approaches (e.g., focus groups or interviews) in item generation is again suggested for future research.

In addition, the researcher must also consider the quality of the reviewed literature. Napoli et al. (2014, p. 1096) reported limitations related to the lack of a more robust literature review, suggesting that the scale developed in the study may have been incorrectly operationalized: “Yet some question remains as to whether cultural symbolism should form part of this scale. Perhaps the way in which the construct was initially conceptualized and operationalized was incorrect.” Incorrect operationalization of the construct compromises the psychometric results of the scale and its applicability in future studies.

Another limitation involves the subjective nature of qualitative analysis. Fisher et al. (2014, p. 488) pointed out that the qualitative methods (literature reviews and interviews) used to develop and conceptualize the construct were the main weakness of their study: "this research is limited by […] the nature of qualitative research in which the interpretations of one researcher may not reflect those of another." The authors explained that, because of the potential for researcher bias when interpreting data, credible results can be difficult to achieve. Nevertheless, subjective analysis is the essence of qualitative studies. Some precautions can be taken in future studies to limit potential researcher bias, such as deliberate attempts at neutrality. This is not always possible, however, and this limitation will remain a common feature of any qualitative study.

In turn, Sewitch et al. (2003, p. 260) reported the failure to formally assess content validity as a limitation, attributing it to budgetary constraints. It is worth remembering that content validity assessment is an important step in ensuring confidence in any inferences made using the final scale form. It should therefore be part of any scale development process.

An additional limitation, reported by Lucas-Carrasco et al. (2011), concerned the recruitment of a large number of interviewers, which may have affected the quality of the data collected. To minimize this limitation, the authors reported that "all interviewers had sufficient former education, received training on the study requirements, and were provided with a detailed guide" (p. 1223). Future studies planning to use multiple interviewers should consider the potential for resulting bias.

Missing data

Missing data were another issue reported by some studies included in this systematic review (e.g., Glynn et al. 2015; Ngorsuraches et al. 2007), and one that occurs across different fields of scientific research. Missing data include values that have been grouped, aggregated, rounded, censored, or truncated, resulting in partial loss of information (Schafer and Graham 2002). Collins et al. (2001) clarified that researchers confronted with missing data run an increased risk of reaching incorrect conclusions, because missing data may bias parameter estimates, inflate type I and type II error rates, and degrade the performance of confidence intervals. The authors also explained that, "because a loss of data is nearly always accompanied by a loss of information, missing values may dramatically reduce statistical power" (p. 330). Future researchers who wish to mitigate these risks during scale development must therefore pay close attention to missing data in their analyses and choose their strategy carefully.

Statistical methods for handling missing data have improved significantly, as demonstrated by Schafer and Graham (2002), although misconceptions remain abundant. Those authors reviewed several methods for dealing with missing data, raised outstanding issues, and offered advice where questions remain unresolved. Since a detailed discussion of the statistics of missing data is beyond the scope of this article, readers are referred to Schafer and Graham (2002) for more details on missing data analysis.
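The power loss that Collins et al. (2001) describe is easy to see with listwise deletion, the default behavior in much statistical software: even modest item-level missingness removes many whole cases. A small illustration with simulated data (the 10% missingness rate and dimensions are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(size=(300, 6))          # 300 respondents, 6 items

# Knock out roughly 10% of individual responses, missing completely at random.
data[rng.random(data.shape) < 0.10] = np.nan

# Listwise deletion: keep only respondents with no missing item at all.
complete = data[~np.isnan(data).any(axis=1)]

# With 10% missing per item, only about 0.9**6 ≈ 53% of cases survive,
# so roughly half of the sample (and its statistical power) is discarded.
retained = complete.shape[0] / data.shape[0]
```

This is why more principled strategies (e.g., the modern procedures compared by Collins et al. 2001 and Schafer and Graham 2002) are generally preferable to simply dropping incomplete cases.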

Social desirability bias

Another limitation reported in some studies (Bova et al. 2006; Ngorsuraches et al. 2007) and identified in this systematic review is social desirability bias. This type of bias is a systematic error in self-report measures resulting from the desire of respondents to avoid embarrassment and project a favorable image to others (Fisher 1993). According to King and Bruner (2000), social desirability bias is an important threat to the validity of research employing multi-item scales: socially desirable responding in self-reported data may produce spurious correlations between variables, as well as suppress or moderate relationships between the constructs of interest. Thus, one aspect of scale validity that should be of particular concern to researchers is the potential for contamination due to social desirability response bias. To remedy this problem, we agree with the authors that it is incumbent upon researchers to identify situations in which data may be systematically biased toward respondents' perceptions of what is socially acceptable, to determine the extent to which this represents contamination of the data, and to implement the most appropriate methods of control. Details on methods for identifying, testing for, and/or preventing social desirability bias are beyond the scope of this article but can be found in King and Bruner (2000).
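One standard statistical device for gauging such contamination—a generic technique, not one prescribed by King and Bruner (2000) specifically—is to check whether a correlation between two scale scores survives after partialling out a social desirability score. A minimal numpy sketch with entirely hypothetical data:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear influence of z
    (e.g., a social desirability score) from both variables."""
    def residuals(a, b):
        slope = np.cov(a, b, ddof=1)[0, 1] / np.var(b, ddof=1)
        intercept = a.mean() - slope * b.mean()
        return a - (slope * b + intercept)
    return np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]

# Hypothetical scores: two unrelated constructs both inflated by
# socially desirable responding.
rng = np.random.default_rng(1)
sd = rng.normal(size=500)                      # social desirability score
construct_a = 0.8 * sd + rng.normal(size=500)
construct_b = 0.8 * sd + rng.normal(size=500)

raw_r = np.corrcoef(construct_a, construct_b)[0, 1]   # inflated, spurious
adj_r = partial_corr(construct_a, construct_b, sd)    # near zero
```

A raw correlation that collapses once social desirability is controlled is exactly the kind of spurious relationship the authors warn about.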

Item limitations

Consistent with at least one previous study (Prados 2007), our findings reflect some potential item limitations. First, ambiguous or difficult-to-answer items were the main weakness reported by Gottlieb et al. (2014). On this issue, the literature on the care required in wording items is extensive. For example, items must clearly define the problem being addressed, be as simple as possible, express a single idea, and use common words that reflect the vocabulary level of the target population. Items should not be leading or rest on alternative or underlying assumptions. They must be free of generalizations and estimates, and be written to ensure variability of responses. In writing the items, the researcher should avoid fashionable expressions, colloquialisms, and other words or phrases that impair understanding for groups of varying ages, ethnicities, religions, or genders. Furthermore, the items should be organized properly: the opening questions should be simple and interesting to win the trust of the subjects, while the most delicate, complex, or dull questions should be asked at the end of the sequence (Clark and Watson 1995; Malhotra 2004; Pasquali 2010).

Furthermore, Cicero et al. (2010) reported that the main limitation of their study was that none of the items were reverse-scored. Although some methodologists claim that reverse scoring is necessary to avoid acquiescence among participants, this advice should be taken with caution. There are reports that reverse-scored items may confuse participants, that the opposite of a construct captured by a reverse-scored item may be fundamentally different from the construct itself, that reverse-scored items tend to be the worst-fitting items in factor analyses, and that the factor structure of a scale can split into a straightforwardly worded factor and a reverse-scored factor (Cicero et al. 2010). Awareness of these issues is necessary for future researchers choosing between avoiding acquiescence and preventing the other problems associated with reverse scoring.
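For reference, the mechanics of reverse scoring are simple—for a Likert item, the recoded value is (low + high) − score—and the debate above concerns whether to use it at all, not how. A minimal sketch (function name and scale bounds are illustrative):

```python
import numpy as np

def reverse_score(scores, low=1, high=5):
    """Recode a reverse-worded Likert item so that high values again
    indicate more of the construct: score' = (low + high) - score."""
    return (low + high) - np.asarray(scores)

# A respondent answering 5 ("strongly agree") to a negatively worded
# item is recoded to 1 before items are summed into a scale score.
recoded = reverse_score([1, 2, 3, 4, 5])   # -> [5, 4, 3, 2, 1]
```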

Brevity of the scale

Limitations related to scale length were also identified in this review. Studies by Negra and Mzoughi (2012) and Tombaugh et al. (2011) mentioned the short version of the scale as their main limitation; in both studies, the final version of the new scale included only five items. Generally, short scales are convenient because they require less time from respondents. However, very short scales can seriously compromise the reliability of the instrument (Raykov 2008): as items are removed from a scale, Cronbach's alpha tends to decrease. It is worth remembering that the minimum acceptable alpha is generally 0.7, while values between 0.8 and 0.9 are considered ideal; scales with more items tend to be more reliable, with higher alpha values (DeVellis 2003). In this context, future researchers should aim for scales with enough items to keep alpha within the acceptable range. Since many items may be lost during theoretical and psychometric analysis, an alternative already mentioned in this study is to begin with an initial item pool containing at least twice as many items as desired in the final scale.
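The length–reliability trade-off discussed above follows directly from the standard formula for Cronbach's alpha, which can be computed in a few lines. A sketch with simulated data (the single-factor data-generating model is our illustration, not from any reviewed study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated scale: 10 items, each driven by one common factor plus noise.
rng = np.random.default_rng(3)
factor = rng.normal(size=(500, 1))
items10 = factor + rng.normal(size=(500, 10))

# Dropping items lowers alpha, all else being equal: 10 items vs. 5.
alpha10 = cronbach_alpha(items10)
alpha5 = cronbach_alpha(items10[:, :5])
```

With an average inter-item correlation of about 0.5, the 10-item scale lands comfortably above 0.8 while the 5-item half falls noticeably lower, illustrating why very short scales risk dipping below the 0.7 threshold.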

Difficulty controlling all variables

In addition to all the limitations reported, Gottlieb et al. (2014) mentioned one common to many research fields—the difficulty of controlling all the variables that could influence the central construct of the study. The authors reported that "it may be that there are other variables that influence visitors' perception of trade show effectiveness that were not uncovered in the research" and suggested that "future research might yield insights that are not provided here" (p. 104). This limitation calls attention to the importance of the first step of the scale development process—item generation. A possible remedy would be to come to know the target construct in detail during item generation, so that all potentially important variables can be investigated and controlled. However, this is not always possible: even using both inductive and deductive approaches to generate items (literature review and interviews), the authors still reported that limitation. In this light, future researchers must take care in hypothesizing and testing the variables that could be controlled during the scale development process.

Lack of manual instructions

Finally, this review found a reported weakness concerning the lack of manualized instructions to regulate data collection and analysis. Saxena et al. (2015, p. 492) pointed out that the initial version of their new scale "did not contain manualized instructions for raters, so it lacked objective anchor points for choosing specific ratings on many of its questions". An important detail that should therefore receive the attention of future researchers is the set of instructions governing how the new scale is applied. Pasquali (2010) suggests that, when drafting the instructions, the researcher should define the operational strategies that will enable application of the instrument, the format in which it will be presented, how the subject's response will be given for each item, and how the respondent should answer each item. The researcher should also define how the scale scores will be analyzed. In addition, the instructions should be as short as possible without confusing the subjects of the target population, should contain one or more examples of how the items should be answered, and should help ensure that the subject is free of any related tension or anxiety.

Study limitations and strengths

This review is itself subject to some limitations that should be taken into consideration. First, during the selection of articles for analysis, we may have missed some studies that could have been identified by using other terms related to "scale development," which may have affected our findings. However, use of this term alone was supported by its widespread adoption among researchers in the area (Clark and Watson 1995; DeVellis 2003; Hinkin 1995; Nunnally 1967) and by the large number of publications identified with this descriptor in the period evaluated, compared with those retrieved by related terms (e.g., "development of questionnaire" and "development of measure"). Similarly, we may also have missed studies that, despite recording their weaknesses, did not have the search term "limitations" indexed in the analyzed databases. We could have reduced this limitation by also searching for "weakness" or similar words, allowing the inclusion of additional articles. However, a larger number of included studies would have hindered the operationalization of our findings.

Second, particularly regarding the analysis of items and reliability, we did not capture information about the basic theories that support the scale development process: classical test theory (CTT)—known as classical psychometrics—and item response theory (IRT)—known as modern psychometrics (Pasquali 2010). Although examining these theories was beyond the scope of this article, information on which of the two each study employed could contribute to a deeper understanding of the main limitations. Future studies could focus on CTT and IRT, compare their applicability, and identify their main limitations in the scale development process.

Finally, our review covers studies published through September 2015. As new evidence emerges on current practices and limitations reported in the scale development process, future revisions to this systematic review and practice guideline will be required.

Despite its weaknesses, the strengths of this study should be highlighted. First, it reviews updated and consistent literature on scale development practices, not only within a specific field of knowledge, as in most systematic reviews, but across various fields. With this variety of conceptions, we hope to help future researchers in different areas of the human and social sciences make the most appropriate choices among strategies.

Second, this study differs from most reviews of scale development because it primarily considers the authors' own conceptions of the main difficulties and mistakes made during the scale development process in their studies. We hope that knowledge of these previous mistakes will support the efforts of future researchers. While several weaknesses in scale development research were identified, specific recommendations for future research relevant to the particular dimensions previously discussed were embedded within the appropriate sections throughout the article.

We observe that, although some weaknesses in scale development practices were clearly evident in many studies, only a few researchers recognized and recorded these limitations. This is evidenced by the large number of studies that used exclusively deductive approaches to generate the initial item pool against the limited number that recognized this as a limitation, and by the large number of studies that used smaller sample sizes than recommended in the literature for psychometric analysis against the limited number that reported this as a limitation. Considering the observed distance between a limitation and its recognition, it is important that future researchers become comfortable with the detailed process of developing a new measure, especially as it pertains to avoiding theoretical and/or methodological mistakes, or at least to reporting them as limitations when they do occur.

Conclusions

In conclusion, the present research reviewed numerous studies that both proposed current practices for the scale development process and reported its main limitations. A variety of conceptions and methodological strategies, as well as ten main limitations, were identified and discussed along with suggestions for future research. We believe this paper makes important contributions to the literature, especially because it provides a comprehensive set of recommendations to increase the quality of future practices in the scale development process.

Aagja, J. P., & Garg, R. (2010). Measuring perceived service quality for public hospitals (PubHosQual) in the Indian context. International Journal of Pharmaceutical and Healthcare Marketing, 4 (10), 60–83. http://dx.doi.org/10.1108/17506121011036033 .


Ahmad, N., Awan, M. U., Raouf, A., & Sparks, L. (2009). Development of a service quality scale for pharmaceutical supply chains. International Journal of Pharmaceutical and Healthcare Marketing, 3 (1), 26–45. http://dx.doi.org/10.1108/17506120910948494 .

Akter, S., D’Ambra, J., & Ray, P. (2013). Development and validation of an instrument to measure user perceived service quality of mHealth. Information and Management, 50 , 181–195. http://dx.doi.org/10.1016/j.im.2013.03.001 .

Alvarado-Herrera, A., Bigne, E., Aldas-Manzano, J., & Curras-Perez, R. (2015). A scale for measuring consumer perceptions of corporate social responsibility following the sustainable development paradigm. Journal of Business Ethics, 1–20. http://dx.doi.org/10.1007/s10551-015-2654-9 .

Arias, M. R. M., Lloreda, M. J. H., & Lloreda, M. V. H. (2014). Psicometría . Alianza Editorial, S.A.

Armfield, J. M. (2010). Development and psychometric evaluation of the Index of Dental Anxiety and Fear (IDAF-4C + ). Psychological Assessment, 22 (2), 279–287. http://dx.doi.org/10.1037/a0018678 .


Arrindell, W. A., & van der Ende, J. (1985). An empirical-test of the utility of the observations-to-variables ratio in factor and components analysis. Applied Psychological Measurement, 9 (2), 165–178. http://dx.doi.org/10.1177/014662168500900205 .

Atkins, K. G., & Kim, Y. (2012). Smart shopping: conceptualization and measurement. International Journal of Retail and Distribution Management, 40 (5), 360–375. http://dx.doi.org/10.1108/09590551211222349 .

Bagdare, S., & Jain, R. (2013). Measuring retail customer experience. International Journal of Retail and Distribution Management, 41 (10), 790–804. http://dx.doi.org/10.1108/IJRDM-08-2012-0084 .

Bakar, H. A., & Mustaffa, C. S. (2013). Organizational communication in Malaysia organizations. Corporate Communications: An International Journal, 18 (1), 87–109. http://dx.doi.org/10.1108/13563281311294146 .

Barrett, P. T., & Kline, P. (1981). The observation to variable ratio in factor analysis. Personality Study and Group Behavior, 1 , 23–33.


Bastos, J. L., Celeste, R. K., Faerstein, E., & Barros, A. J. D. (2010). Racial discrimination and health: a systematic review of scales with a focus on their psychometric properties. Social Science and Medicine, 70 , 1091–1099. http://dx.doi.org/10.1016/j.socscimed.2009.12.20 .

Beaudreuil, J., Allard, A., Zerkak, D., Gerber, R. A., Cappelleri, J. C., Quintero, N., Lasbleiz, S., … Bardin, T. (2011). Unité Rhumatologique des Affections de la Main (URAM) Scale: development and validation of a tool to assess Dupuytren’s disease–specific disability. Arthritis Care & Research, 63 (10), 1448–1455. http://dx.doi.org/10.1002/acr.20564 .

Bhattacherjee, A. (2002). Individual trust in online firms: scale development and initial test. Journal of Management Information Systems, 19 (1), 211–241. http://dx.doi.org/10.1080/07421222.2002.11045715 .

Blankson, C., Cheng, J. M., & Spears, N. (2007). Determinants of banks selection in USA, Taiwan and Ghana. International Journal of Bank Marketing, 25 (7), 469–489. http://dx.doi.org/10.1108/02652320710832621 .

Blankson, C., Paswan, A., & Boakye, K. G. (2012). College students’ consumption of credit cards. International Journal of Bank Marketing, 30 (7), 567–585. http://dx.doi.org/10.1108/02652321211274327 .

Bolton, D. L., & Lane, M. D. (2012). Individual entrepreneurial orientation: development of a measurement instrument. Education + Training, 54 (2/3), 219–233. http://dx.doi.org/10.1108/00400911211210314 .

Bova, C., Fennie, K. P., Watrous, E., Dieckhaus, K., & Williams, A. B. (2006). The health care relationship (HCR) trust scale: development and psychometric evaluation. Research in Nursing and Health, 29 , 477–488. http://dx.doi.org/10.1002/nur.20158 .

Bowen, H. P., & Wiersema, M. F. (1999). Matching method to paradigm in strategy research: limitations of cross-sectional analysis and some methodological alternatives. Strategic Management Journal, 20 , 625–636.

Boyar, S. L., Campbell, N. S., Mosley, D. C., Jr., & Carson, C. M. (2014). Development of a work/family social support measure. Journal of Managerial Psychology, 29 (7), 901–920. http://dx.doi.org/10.1108/JMP-06-2012-0189 .

Brock, J. K., & Zhou, Y. (2005). Organizational use of the internet. Internet Research, 15 (1), 67–87. http://dx.doi.org/10.1108/10662240510577077 .

Brun, I., Rajaobelina, L., & Ricard, L. (2014). Online relationship quality: scale development and initial testing. International Journal of Bank Marketing, 32 (1), 5–27. http://dx.doi.org/10.1108/IJBM-02-2013-0022 .

Butt, M. M., & Run, E. C. (2010). Private healthcare quality: applying a SERVQUAL model. International Journal of Health Care Quality Assurance, 23 (7), 658–673. http://dx.doi.org/10.1108/09526861011071580 .

Caro, L. M., & García, J. A. M. (2007). Measuring perceived service quality in urgent transport service. Journal of Retailing and Consumer Services, 14 , 60–72. http://dx.doi.org/10.1016/j.jretconser.2006.04.001 .

Chahal, H., & Kumari, N. (2012). Consumer perceived value. International Journal of Pharmaceutical and Healthcare Marketing, 6 (2), 167–190. http://dx.doi.org/10.1108/17506121211243086 .

Chen, H., Tian, Y., & Daugherty, P. J. (2009). Measuring process orientation. The International Journal of Logistics Management, 20 (2), 213–227. http://dx.doi.org/10.1108/09574090910981305 .

Choi, S. W., Victorson, D. E., Yount, S., Anton, S., & Cella, D. (2011). Development of a conceptual framework and calibrated item banks to measure patient-reported dyspnea severity and related functional limitations. Value in Health, 14 , 291–306. http://dx.doi.org/10.1016/j.jval.2010.06.001 .

Christophersen, T., & Konradt, U. (2012). Development and validation of a formative and a reflective measure for the assessment of online store usability. Behaviour and Information Technology, 31 (9), 839–857. http://dx.doi.org/10.1080/0144929X.2010.529165 .

Churchill, G. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16 (1), 64–73. http://dx.doi.org/10.2307/3150876 .

Cicero, D. C., Kerns, J. G., & McCarthy, D. M. (2010). The Aberrant Salience Inventory: a new measure of psychosis proneness. Psychological Assessment, 22 (3), 688–701. http://dx.doi.org/10.1037/a0019913 .

Clark, L. A., & Watson, D. (1995). Constructing validity: basic issues in objective scale development. Psychological Assessment, 7 (3), 309–319. http://dx.doi.org/10.1037/1040-3590.7.3.309 .

Coker, B. L. S., Ashill, N. J., & Hope, B. (2011). Measuring internet product purchase risk. European Journal of Marketing, 45 (7/8), 1130–1151. http://dx.doi.org/10.1108/03090561111137642 .

Coleman, D., Chernatony, L., & Christodoulides, G. (2011). B2B service brand identity: scale development and validation. Industrial Marketing Management, 40 , 1063–1071. http://dx.doi.org/10.1016/j.indmarman.2011.09.010 .

Collins, L. M., Schafer, J. L., & Kam, C.-M. (2001). A comparison of inclusive and restrictive strategies in modern missing data procedures. Psychological Methods, 6 (4), 330–351. http://dx.doi.org/10.1037/1082-989X.6.4.330 .

Colwell, S. R., Aung, M., Kanetkar, V., & Holden, A. L. (2008). Toward a measure of service convenience: multiple-item scale development and empirical test. Journal of Services Marketing, 22 (2), 160–169. http://dx.doi.org/10.1108/08876040810862895 .

Cossette, S., Cara, C., Ricard, N., & Pepin, J. (2005). Assessing nurse–patient interactions from a caring perspective: report of the development and preliminary psychometric testing of the Caring Nurse–Patient Interactions Scale. International Journal of Nursing Studies, 42 , 673–686. http://dx.doi.org/10.1016/j.ijnurstu.2004.10.004 .

Dennis, R. S., & Bocarnea, M. (2005). Development of the servant leadership assessment instrument. Leadership and Organization Development Journal, 26 (8), 600–615. http://dx.doi.org/10.1108/01437730510633692 .

DeVellis, R. F. (2003). Scale development: theory and applications (2nd ed.). Newbury Park: Sage Publications.

Devlin, J. F., Roy, S. K., & Sekhon, H. (2014). Perceptions of fair treatment in financial services. European Journal of Marketing, 48 (7/8), 1315–1332. http://dx.doi.org/10.1108/EJM-08-2012-0469 .

Dunham, A., & Burt, C. (2014). Understanding employee knowledge: the development of an organizational memory scale. The Learning Organization, 21 (2), 126–145. http://dx.doi.org/10.1108/TLO-04-2011-0026 .

Edwards, J. R., Knight, D. K., Broome, K. M., & Flynn, P. M. (2010). The development and validation of a transformational leadership survey for substance use treatment programs. Substance Use and Misuse, 45 , 1279–1302. http://dx.doi.org/10.3109/10826081003682834 .


Feuerstein, M., Nicholas, R. A., Huang, G. D., Haufler, A. J., Pransky, G., & Robertson, M. (2005). Workstyle: development of a measure of response to work in those with upper extremity pain. Journal of Occupational Rehabilitation, 15 (2), 87–104. http://dx.doi.org/10.1007/s10926-005-3420-0 .

Fisher, R. J. (1993). Social desirability bias and the validity of indirect questioning. Journal of Consumer Research, 20 (2), 303–315. http://dx.doi.org/10.1086/209351 .

Fisher, R., Maritz, A., & Lobo, A. (2014). Evaluating entrepreneurs’ perception of success. International Journal of Entrepreneurial Behavior and Research, 20 (5), 478–492. http://dx.doi.org/10.1108/IJEBR-10-2013-0157 .

Flight, R. L., D’Souza, G., & Allaway, A. W. (2011). Characteristics-based innovation adoption: scale and model validation. Journal of Product and Brand Management, 20 (5), 343–355. http://dx.doi.org/10.1108/10610421111157874 .

Forbush, K. T., Wildes, J. E., Pollack, L. O., Dunbar, D., Luo, J., Patterson, P., Petruzzi, L., … Watson, D. (2013). Development and validation of the Eating Pathology Symptoms Inventory (EPSI). Psychological Assessment, 25 (3), 859–878. http://dx.doi.org/10.1037/a0032639 .

Foster, J. D., McCain, J. L., Hibberts, M. F., Brunell, A. B., & Johnson, B. (2015). The grandiose narcissism scale: a global and facet-level measure of grandiose narcissism. Personality and Individual Differences, 73 , 12–16. http://dx.doi.org/10.1016/j.paid.2014.08.042 .

Franche, R., Corbière, M., Lee, H., Breslin, F. C., & Hepburn, G. (2007). The readiness for return-to-work (RRTW) scale: development and validation of a self-report staging scale in lost-time claimants with musculoskeletal disorders. Journal of Occupational Rehabilitation, 17 , 450–472. http://dx.doi.org/10.1007/s10926-007-9097-9 .

Gesten, E. L. (1976). A health resources inventory: the development of a measure of the personal and social competence of primary-grade children. Journal of Consulting and Clinical Psychology, 44 (5), 775–786. http://dx.doi.org/10.1037/0022-006X.44.5.775 .

Gibbons, C. J., Kenning, C., Coventry, P. A., Bee, P., Bundy, C., Fisher, L., & Bower, P. (2013). Development of a Multimorbidity Illness Perceptions Scale (MULTIPleS). PloS One, 8 (12), e81852. http://dx.doi.org/10.1371/journal.pone.0081852 .

Gligor, D. M., & Holcomb, M. (2014). The road to supply chain agility: an RBV perspective on the role of logistics capabilities. The International Journal of Logistics Management, 25 (1), 160–179. http://dx.doi.org/10.1108/IJLM-07-2012-0062 .

Glynn, N. W., Santanasto, A. J., Simonsick, E. M., Boudreau, R. M., Beach, S. R., Schulz, R., & Newman, A. B. (2015). The Pittsburgh fatigability scale for older adults: development and validation. Journal of American Geriatrics Society, 63 , 130–135. http://dx.doi.org/10.1111/jgs.13191 .

Gottlieb, U., Brown, M., & Ferrier, L. (2014). Consumer perceptions of trade show effectiveness. European Journal of Marketing, 48 (1/2), 89–107. http://dx.doi.org/10.1108/EJM-06-2011-0310 .

Hair Junior, J. F., Black, W. C., Babin, N. J., Anderson, R. E., & Tatham, R. L. (2009). Análise multivariada de dados (6th ed.). São Paulo: Bookman.

Hall, M. A., Camacho, F., Dugan, E., & Balkrishnan, R. (2002). Trust in the medical profession: conceptual and measurement issues. Health Services Research, 37 (5), 1419–1439. http://dx.doi.org/10.1111/1475-6773.01070 .

Han, H., Back, K., & Kim, Y. (2011). A multidimensional scale of switching barriers in the full-service restaurant industry. Cornell Hospitality Quarterly, 52 (1), 54–63. http://dx.doi.org/10.1177/1938965510389261 .

Hardesty, D. M., & Bearden, W. O. (2004). The use of expert judges in scale development: implications for improving face validity of measures of unobservable constructs. Journal of Business Research, 57 , 98–107. http://dx.doi.org/10.1016/S0148-2963(01)00295-8 .

Henderson-King, D., & Henderson-King, E. (2005). Acceptance of cosmetic surgery: scale development and validation. Body Image, 2 , 137–149. http://dx.doi.org/10.1016/j.bodyim.2005.03.003 .

Hernandez, J. M. C., & Santos, C. C. (2010). Development-based trust: proposing and validating a new trust measurement model for buyer-seller relationships. Brazilian Administration Review, 7 (2), 172–197. http://dx.doi.org/10.1590/S1807-76922010000200005 .

Hildebrandt, T., Langenbucher, J., & Schlundt, D. G. (2004). Muscularity concerns among men: development of attitudinal and perceptual measures. Body Image, 1 , 169–181. http://dx.doi.org/10.1016/j.bodyim.2004.01.001 .

Hilsenroth, M. J., Blagys, M. D., Ackerman, S. J., Bonge, D. R., & Blais, M. A. (2005). Measuring psychodynamic-interpersonal and cognitive-behavioral techniques: development of the comparative psychotherapy process scale. Psychotherapy: Theory, Research, Practice, Training, 42 (3), 340–356. http://dx.doi.org/10.1037/0033-3204.42.3.340 .

Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21 (5), 967–988. http://dx.doi.org/10.1177/014920639502100509 .

Ho, C. B., & Lin, W. (2010). Measuring the service quality of internet banking: scale development and validation. European Business Review, 22 (1), 5–24. http://dx.doi.org/10.1108/09555341011008981 .

Hutz, C. S., Bandeira, D. R., & Trentini, C. M. (Orgs.). (2015). Psicometria . Porto Alegre: Artmed.

Jong, N., Van Leeuwen, R. G. J., Hoekstra, H. A., & van der Zee, K. I. (2014). CRIQ: an innovative measure using comparison awareness to avoid self-presentation tactics. Journal of Vocational Behavior, 84 , 199–214. http://dx.doi.org/10.1016/j.jvb.2014.01.003 .

Kapuscinski, A. N., & Masters, K. S. (2010). The current status of measures of spirituality: a critical review of scale development. Psychology of Religion and Spirituality, 2 (4), 191–205. http://dx.doi.org/10.1037/a0020498 .

Khine, M. S. (2008). Knowing, knowledge and beliefs: epistemological studies across diverse cultures . New York: Springer.


Khorsan, R., & Crawford, C. (2014). External validity and model validity: a conceptual approach for systematic review methodology. Evidence-Based Complementary and Alternative Medicine, 2014 , Article ID 694804, 12 pages. http://dx.doi.org/10.1155/2014/694804 .

Kim, S., Cha, J., Knutson, B. J., & Beck, J. A. (2011). Development and testing of the Consumer Experience Index (CEI). Managing Service Quality: An International Journal, 21 (2), 112–132. http://dx.doi.org/10.1108/09604521111113429 .

Kim, D., Lee, Y., Lee, J., Nam, J. K., & Chung, Y. (2014). Development of Korean smartphone addiction proneness scale for youth. PloS One, 9 (5), e97920. http://dx.doi.org/10.1371/journal.pone.0097920 .

King, M. F., & Bruner, G. C. (2000). Social desirability bias: a neglected aspect of validity testing. Psychology and Marketing, 17 (2), 79–103. http://dx.doi.org/10.1002/(SICI)1520-6793(200002)17:2<79::AID-MAR2>3.0.CO;2-0 .

Kwon, W., & Lennon, S. J. (2011). Assessing college women’s associations of American specialty apparel brands. Journal of Fashion Marketing and Management: An International Journal, 15 (2), 242–256. http://dx.doi.org/10.1108/13612021111132663 .

Ladhari, R. (2010). Developing e-service quality scales: a literature review. Journal of Retailing and Consumer Services, 17 , 464–477. http://dx.doi.org/10.1016/j.jretconser.2010.06.003 .

Lin, J. C., & Hsieh, P. (2011). Assessing the self-service technology encounters: development and validation of SSTQUAL scale. Journal of Retailing, 87 (2), 194–206. http://dx.doi.org/10.1016/j.jretai.2011.02.006 .

Lombaerts, K., Backer, F., Engels, N., Van Braak, J., & Athanasou, J. (2009). Development of the self-regulated learning teacher belief scale. European Journal of Psychology of Education, 24 (1), 79–96. http://dx.doi.org/10.1007/BF03173476 .

Lucas-Carrasco, R., Eser, E., Hao, Y., McPherson, K. M., Green, A., & Kullmann, L. (2011). The quality of care and support (QOCS) for people with disability scale: development and psychometric properties. Research in Developmental Disabilities, 32 , 1212–1225. http://dx.doi.org/10.1016/j.ridd.2010.12.030 .

MacKenzie, S. B., Podsakoff, P. M., & Podsakoff, N. P. (2011). Construct measurement and validation procedures in MIS and behavioral research: integrating new and existing techniques. MIS Quarterly, 35 (2), 293–334.

Mahudin, N. D. M., Cox, T., & Griffiths, A. (2012). Measuring rail passenger crowding: scale development and psychometric properties. Transportation Research Part, F 15 , 38–51. http://dx.doi.org/10.1016/j.trf.2011.11.006 .

Malhotra, N. K. (2004). Pesquisa de marketing: Uma orientação aplicada (4th ed.). Porto Alegre: Bookman.

Medina-Pradas, C., Navarro, J. B., López, S. R., Grau, A., & Obiols, J. E. (2011). Further development of a scale of perceived expressed emotion and its evaluation in a sample of patients with eating disorders. Psychiatry Research, 190 , 291–296. http://dx.doi.org/10.1016/j.psychres.2011.06.011 .

Meneses, J., Barrios, M., Bonillo, A., Cosculluela, A., Lozano, L. M., Turbany, J., & Valero, S. (2014). Psicometría . Barcelona: Editorial UOC.

Morean, M. E., Corbin, W. R., & Treat, T. A. (2012). The anticipated effects of alcohol scale: development and psychometric evaluation of a novel assessment tool for measuring alcohol expectancies. Psychological Assessment, 24 (4), 1008–1023. http://dx.doi.org/10.1037/a0028982 .

Morgado, F. F. R., Campana, A. N. N. B., & Tavares, M. C. G. C. F. (2014). Development and validation of the self-acceptance scale for persons with early blindness: the SAS-EB. PloS One, 9 (9), e106848. http://dx.doi.org/10.1371/journal.pone.0106848 .

Nagy, B. G., Blair, E. S., & Lohrke, F. T. (2014). Developing a scale to measure liabilities and assets of newness after start-up. International Entrepreneurship and Management Journal, 10 , 277–295. http://dx.doi.org/10.1007/s11365-012-0219-2 .

Napoli, J., Dickinson, S. J., Beverland, M. B., & Farrelly, F. (2014). Measuring consumer-based brand authenticity. Journal of Business Research, 67 , 1090–1098. http://dx.doi.org/10.1016/j.jbusres.2013.06.001 .

Negra, A., & Mzoughi, M. N. (2012). How wise are online procrastinators? A scale development. Internet Research, 22 (4), 426–442. http://dx.doi.org/10.1108/10662241211250971 .

Ngorsuraches, S., Lerkiatbundit, S., Li, S. C., Treesak, C., Sirithorn, R., & Korwiwattanakarn, M. (2007). Development and validation of the patient trust in community pharmacists (TRUST-Ph) scale: results from a study conducted in Thailand. Research in Social and Administrative Pharmacy, 4 , 272–283. http://dx.doi.org/10.1016/j.sapharm.2007.10.002 .

Nunnally, J. C. (1967). Psychometric theory . New York: McGraw Hill.

Oh, H. (2005). Measuring affective reactions to print apparel advertisements: a scale development. Journal of Fashion Marketing and Management: An International Journal, 9 (3), 283–305. http://dx.doi.org/10.1108/13612020510610426 .

Olaya, B, Marsà, F, Ochoa, S, Balanzá-Martínez, V, Barbeito, S, González-Pinto, A, … Haro, JM. (2012). Development of the insight scale for affective disorders (ISAD): modification from the scale to assess unawareness of mental disorder . Journal of Affective Disorders, 142 , 65-71. doi: http://dx.doi.org/10.1016/j.jad.2012.03.041 .

Omar, N. A., & Musa, R. (2011). Measuring service quality in retail loyalty programmes (LPSQual). International Journal of Retail and Distribution Management, 39 (10), 759–784. http://dx.doi.org/10.1108/09590551111162257 .

Pan, J., Wong, D. F. K., & Ye, S. (2013). Post-migration growth scale for Chinese international students: development and validation. Journal of Happiness Studies, 14 , 1639–1655. http://dx.doi.org/10.1007/s10902-012-9401-z .

Pasquali, L. (2010). Instrumentação psicológica: fundamentos e práticas . Porto Alegre: Artmed.

Patwardhan, H., & Balasubramanian, S. K. (2011). Brand romance: a complementary approach to explain emotional attachment toward brands. Journal of Product and Brand Management, 20 (4), 297–308. http://dx.doi.org/10.1108/10610421111148315 .

Pimentel, C. E., Gouveia, V. V., & Pessoa, V. S. (2007). Escala de Preferência Musical: construção e comprovação da sua estrutura fatorial. Psico-USF, 12 (2), 145–155.

Podsakoff, N. P., Podsakoff, P. M., MacKenzie, S. B., & Klinger, R. L. (2013). Are we really measuring what we say we’re measuring? Using video techniques to supplement traditional construct validation procedures. Journal of Applied Psychology, 98 (1), 99–113. http://dx.doi.org/10.1037/a0029570 .

Pommer, AM, Prins, L, van Ranst, D, Meijer, J, Hul, AV, Janssen, J, … Pop, VJM. (2013). Development and validity of the Patient-Centred COPD Questionnaire (PCQ). Journal of Psychosomatic Research, 75 , 563-571. doi: http://dx.doi.org/10.1016/j.jpsychores.2013.10.001

Prados, J. M. (2007). Development of a new scale of beliefs about the worry consequences. Annals of Psychology, 23 (2), 226–230.

Raykov, T. (2008). Alpha if item deleted: a note on loss of criterion validity in scale development if maximizing coefficient alpha. British Journal of Mathematical and Statistical Psychology, 61 , 275–285. http://dx.doi.org/10.1348/000711007X188520 .

Reed, L. L., Vidaver-Cohen, D., & Colwell, S. R. (2011). A new scale to measure executive servant leadership: development, analysis, and implications for research. Journal of Business Ethics, 101 , 415–434. http://dx.doi.org/10.1007/s10551-010-0729-1 .

Reise, S. P., Waller, N. G., & Comrey, A. L. (2000). Factor analysis and scale revision. Psychological Assessment, 12 (3), 287–297. http://dx.doi.org/10.1037//1040-3590.12.3.287 .

Rice, S. M., Fallon, B. J., Aucote, H. M., & Möller-Leimkühler, A. M. (2013). Development and preliminary validation of the male depression risk scale: Furthering the assessment of depression in men. Journal of Affective Disorders, 151 , 950–958. http://dx.doi.org/10.1016/j.jad.2013.08.013 .

Riedel, M., Spellmann, I., Schennach-Wolff, R., Obermeier, M., & Musil, R. (2011). The RSM-scale: a pilot study on a new specific scale for self- and observer-rated quality of life in patients with schizophrenia. Quality of Life Research, 20 , 263–272. http://dx.doi.org/10.1007/s11136-010-9744-z .

Roberson, R. B., III, Elliott, T. R., Chang, J. E., & Hill, J. N. (2014). Exploratory factor analysis in rehabilitation psychology: a content analysis. Rehabilitation Psychology, 59 (4), 429–438. http://dx.doi.org/10.1037/a0037899 .

Rodrigues, A. C. A., & Bastos, A. V. B. (2012). Organizational entrenchment: scale development and validation. Psicologia: Reflexão e Crítica, 25 (4), 688–700. http://dx.doi.org/10.1590/S0102-79722012000400008 .

Rodríguez, I., Kozusznik, M. W., & Peiró, J. M. (2013). Development and validation of the Valencia Eustress-Distress Appraisal Scale. International Journal of Stress Management, 20 (4), 279–308. http://dx.doi.org/10.1037/a0034330 .

Rosenthal, S. (2011). Measuring knowledge of indoor environmental hazards. Journal of Environmental Psychology, 31 , 137–146. http://dx.doi.org/10.1016/j.jenvp.2010.08.003 .

Saxena, S., Ayers, C. R., Dozier, M. E., & Maidment, K. M. (2015). The UCLA Hoarding Severity Scale: development and validation. Journal of Affective Disorders, 175 , 488–493. http://dx.doi.org/10.1016/j.jad.2015.01.030 .

Schafer, J. L., & Graham, J. W. (2002). Missing data: our view of the state of the Art. Psychological Methods, 7 (2), 147–177. http://dx.doi.org/10.1037//1082-989X.7.2.147 .

Schlosser, F. K., & McNaughton, R. B. (2009). Using the I-MARKOR scale to identify market-oriented individuals in the financial services sector. Journal of Services Marketing, 23 (4), 236–248. http://dx.doi.org/10.1108/08876040910965575 .

Sewitch, M. J., Abrahamowicz, M., Dobkin, P. L., & Tamblyn, R. (2003). Measuring differences between patients’ and physicians’ health perceptions: the patient–physician discordance scale. Journal of Behavioral Medicine, 26 (3), 245–263. http://dx.doi.org/10.1023/A:1023412604715 .

Sharma, P. (2010). Measuring personal cultural orientations: scale development and validation. Journal of the Academy of Marketing Science, 38 , 787–806. http://dx.doi.org/10.1007/s11747-009-0184-7 .

Sharma, D., & Gassenheimer, J. B. (2009). Internet channel and perceived cannibalization. European Journal of Marketing, 43 (7/8), 1076–1091. http://dx.doi.org/10.1108/03090560910961524 .

Shawyer, F., Ratcliff, K., Mackinnon, A., Farhall, J., Hayes, S. C., & Copolov, D. (2007). The Voices Acceptance and Action Scale (VAAS): pilot data. Journal of Clinical Psychology, 63 (6), 593–606. http://dx.doi.org/10.1002/jclp.20366 .

Sin, L. Y. M., Tse, A. C. B., & Yim, F. H. K. (2005). CRM: conceptualization and scale development. European Journal of Marketing, 39 (11/12), 1264–1290. http://dx.doi.org/10.1108/03090560510623253 .

Sohn, D., & Choi, S. M. (2014). Measuring expected interactivity: scale development and validation. New Media and Society, 16 (5), 856–870. http://dx.doi.org/10.1177/1461444813495808 .

Song, J. H., Uhm, D., & Yoon, S. W. (2011). Organizational knowledge creation practice. Leadership and Organization Development Journal, 32 (3), 243–259. http://dx.doi.org/10.1108/01437731111123906 .

Staines, Z. (2013). Managing tacit investigative knowledge: measuring “investigative thinking styles”. Policing: An International Journal of Police Strategies and Management, 36 (3), 604–619. http://dx.doi.org/10.1108/PIJPSM-07-2012-0072 .

Sultan, P., & Wong, H. (2010). Performance-based service quality model: an empirical study on Japanese universities. Quality Assurance in Education, 18 (2), 126–143. http://dx.doi.org/10.1108/09684881011035349 .

Sveinbjornsdottir, S., & Thorsteinsson, E. B. (2008). Adolescent coping scales: a critical psychometric review. Scandinavian Journal of Psychology, 49 (6), 533–548. http://dx.doi.org/10.1111/j.1467-9450.2008.00669.x .

Swaid, S. I., & Wigand, R. T. (2009). Measuring the quality of E-Service: scale development and initial validation. Journal of Electronic Commerce Research, 10 (1), 13–28.

Tanimura, C., Morimoto, M., Hiramatsu, K., & Hagino, H. (2011). Difficulties in the daily life of patients with osteoarthritis of the knee: scale development and descriptive study. Journal of Clinical Nursing, 20 , 743–753. http://dx.doi.org/10.1111/j.1365-2702.2010.03536.x .

Taute, H. A., & Sierra, J. (2014). Brand tribalism: an anthropological perspective. Journal of Product and Brand Management, 23 (1), 2–15. http://dx.doi.org/10.1108/JPBM-06-2013-0340 .

Tombaugh, J. R., Mayfield, C., & Durand, R. (2011). Spiritual expression at work: exploring the active voice of workplace spirituality. International Journal of Organizational Analysis, 19 (2), 146–170. http://dx.doi.org/10.1108/19348831111135083 .

Turker, D. (2009). Measuring corporate social responsibility: a scale development study. Journal of Business Ethics, 85 , 411–427. http://dx.doi.org/10.1007/s10551-008-9780-6 .

Uzunboylu, H., & Ozdamli, F. (2011). Teacher perception for m-learning: scale development and teachers’ perceptions. Journal of Computer Assisted Learning, 27 , 544–556. http://dx.doi.org/10.1111/j.1365-2729.2011.00415.x .

Van der Gaag, M, Schütz, C, ten Napel, A, Landa, Y, Delespaul, P, Bak, M, … Hert, M. (2013). Development of the Davos Assessment of Cognitive Biases Scale (DACOBS). Schizophrenia Research, 144 , 63-71. doi: http://dx.doi.org/10.1016/j.schres.2012.12.010

Von Steinbüchel, N, Wilson, L, Gibbons, H, Hawthorne, G, Höfer, S, Schmidt, S, … Truelle, J. (2010). Journal of Neurotrauma, 27 , 1167-1185. doi: http://dx.doi.org/10.1089/neu.2009.1076

Voon, B. H., Abdullah, F., Lee, N., & Kueh, K. (2014). Developing a HospiSE scale for hospital service excellence. International Journal of Quality and Reliability Management, 31 (3), 261–280. http://dx.doi.org/10.1108/IJQRM-10-2012-0143 .

Walshe, M., Peach, R. K., & Miller, N. (2009). Dysarthria Impact Profile: development of a scale to measure psychosocial effects. International Journal of Language and Communication Disorders, 44 (5), 693–715. http://dx.doi.org/10.1080/13682820802317536 .

Wang, C. L., & Mowen, J. C. (1997). The separateness-connectedness self-schema: scale development and application to message construction. Psychology and Marketing, 14 (2), 185–207. http://dx.doi.org/10.1002/(SICI)1520-6793(199703)14:2<185::AID-MAR5>3.0.CO;2-9 .

Wepener, M., & Boshoff, C. (2015). An instrument to measure the customer-based corporate reputation of large service organizations. Journal of Services Marketing, 29 (3), 163–172. http://dx.doi.org/10.1108/JSM-01-2014-0026 .

Williams, Z., Ponder, N., & Autry, C. W. (2009). Supply chain security culture: measure development and validation. The International Journal of Logistics Management, 20 (2), 243–260. http://dx.doi.org/10.1108/09574090910981323 .

Wilson, N. L., & Holmvall, C. M. (2013). The development and validation of the incivility from customers scale. Journal of Occupational Health Psychology, 18 (3), 310–326. http://dx.doi.org/10.1037/a0032753 .

Yang, M., Weng, S., & Hsiao, P. (2014). Measuring blog service innovation in social media services. Internet Research, 24 (1), 110–128. http://dx.doi.org/10.1108/IntR-12-2012-0253 .

Zhang, X., & Hu, D. (2011). Farmer-buyer relationships in China: the effects of contracts, trust and market environment. China Agricultural Economic Review, 3 (1), 42–53. http://dx.doi.org/10.1108/17561371111103534 .

Zheng, J, You, L, Lou, T, Chen, N, Lai, D, Liang, Y, … Zhai, C. (2010). Development and psychometric evaluation of the dialysis patient-perceived exercise benefits and barriers scale. International Journal of Nursing Studies, 47 , 166-180. doi: http://dx.doi.org/10.1016/j.ijnurstu.2009.05.023


Authors’ contributions

FFRM is responsible for all parts of this manuscript, from its conception to the final writing. JFFM, CMN, ACSA and MECF participated in the data collection, analysis and interpretation of data and critical review of the manuscript. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Authors and affiliations

Institute of Education, Universidade Federal Rural do Rio de Janeiro, BR-465, km 7, Seropédica, Rio de Janeiro, 23890-000, Brazil

Fabiane F. R. Morgado

Faculty of Psychology, Universidade Federal de Juiz de Fora, Rua José Lourenço Kelmer, s/n—Campus Universitário Bairro São Pedro, Juiz de Fora, Minas Gerais, 36036-900, Brazil

Juliana F. F. Meireles, Clara M. Neves & Maria E. C. Ferreira

Faculty of Physical Education of the Instituto Federal de Educação, Ciência e Tecnologia do Sudeste de Minas Gerais, Av. Luz Interior, n 360, Estrela Sul, Juiz de Fora, Minas Gerais, 36030-776, Brazil

Ana C. S. Amaral


Corresponding author

Correspondence to Fabiane F. R. Morgado .

Additional information

An erratum to this article is available at http://dx.doi.org/10.1186/s41155-017-0059-7 .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article.

Morgado, F.F.R., Meireles, J.F.F., Neves, C.M. et al. Scale development: ten main limitations and recommendations to improve future research practices. Psicol. Refl. Crít. 30 , 3 (2018). https://doi.org/10.1186/s41155-016-0057-1


Received : 03 August 2016

Accepted : 22 December 2016

Published : 25 January 2017

DOI : https://doi.org/10.1186/s41155-016-0057-1


  • Measurement
  • Psychometrics
  • Reliability

  • Open access
  • Published: 29 September 2024

Streamlining pediatric vital sign assessment: innovations and insights

  • Seayoung Goo   ORCID: orcid.org/0009-0002-9649-9565 1 ,
  • Wonjin Jang   ORCID: orcid.org/0009-0000-3885-2928 1 ,
  • You Sun Kim   ORCID: orcid.org/0000-0002-7687-2687 1 ,
  • Seungbae Ji   ORCID: orcid.org/0009-0003-8730-3142 2 ,
  • Taewoo Park   ORCID: orcid.org/0009-0009-9298-4337 2 ,
  • June Dong Park   ORCID: orcid.org/0000-0001-8113-1384 1 , 3 &
  • Bongjin Lee   ORCID: orcid.org/0000-0001-7878-9644 1 , 4  

Scientific Reports volume  14 , Article number:  22542 ( 2024 )


  • Medical research
  • Paediatric research

Accurate assessment of pediatric vital signs is critical for detecting abnormalities and guiding medical interventions, but interpretation is challenging due to age-dependent physiological variations. Therefore, this study aimed to develop age-specific centile curves for blood pressure, heart rate, and respiratory rate in pediatric patients and create a user-friendly web-based application for easy access to these data. We conducted a retrospective cross-sectional observational study analyzing 3,779,482 records from the National Emergency Department Information System of Korea, focusing on patients under 15 years old admitted between January 2016 and December 2017. After applying exclusion criteria to minimize the impact of patients’ symptoms on vital signs, 1,369,608 records were used for final analysis. The box–cox power exponential distribution and Lambda–Mu–Sigma (LMS) method were used to generate blood pressure centile charts, while heart rate and respiratory rate values were drawn from previously collected LMS values. We developed comprehensive age-specific centile curves for systolic, diastolic, and mean blood pressure, heart rate, and respiratory rate. These were integrated into a web-based application ( http://centile.research.or.kr ), allowing users to input patient data and promptly obtain centile and z-score information for vital signs. Our study provides an accessible system for pediatric vital sign evaluation, addressing previous limitations and offering a practical solution for clinical assessment. Future research should validate these centile curves in diverse populations.

Introduction

Vital signs such as blood pressure (BP), heart rate (HR), and respiratory rate (RR) serve as fundamental indicators of pediatric health, offering critical insights into the physiological state of pediatric patients. Accurate determination of BP plays a pivotal role in identifying potential complications: elevated BP may herald serious conditions such as hypertensive brain hemorrhage, whereas low BP can indicate shock states, including hypovolemic and septic shock. Prompt recognition and management of these BP abnormalities are crucial, as early intervention can substantially mitigate the risk of pediatric mortality 1 . HR and RR complement these essential diagnostics: HR serves as a crucial indicator of compensatory status in shock and, together with changes in BP, as a key point for differentiating types of shock. For instance, in patients with septic shock, tachycardia may manifest as the compensatory mechanism attempts to increase cardiac output 2 , 3 . Respiratory difficulty is monitored primarily through RR, and early detection is critical because children face a greater risk than adults of respiratory failure progressing to cardiac arrest 4 .

Although interpreting vital signs is crucial in pediatric care, normal hemodynamic ranges in children, unlike those in adults, vary with age, posing challenges in interpretation 5 . To address this issue, numerous studies have created centile curves for systolic BP (SBP), diastolic BP (DBP), HR, and RR, aiming to provide a more intuitive framework for evaluating pediatric vital signs 6 . One study derived pediatric BP reference values from height and age 7 . It presented centile curves for SBP and DBP that primarily covered ranges above the median, without providing specific curves for ranges below it 7 . Another study, centered on BP distribution in children under 10 years old, likewise based its analysis on height and age 8 . For HR and RR, several studies have derived centile curves from large populations. Unlike the BP studies, these provided centile curves across the full range, and their cohorts included healthy children 9 , hospitalized children 10 , and children visiting emergency departments (EDs) 11 .

However, the centile curves mentioned above, whether covering the full range or only part of it, are of limited clinical use because consulting them is complex and time-consuming, which is impractical in busy clinical settings. Referring to centile curves may be feasible during academic work at a desk, but determining where a patient’s vital signs fall within the distribution is far harder elsewhere, especially when dealing with unstable patients. Even at a desk, calculating the patient’s age, locating each measured vital sign on the charts, and retrieving the patient’s weight or height at the time of measurement is neither quick nor convenient. Furthermore, determining a centile based on the patient’s weight or height becomes impossible if any of the necessary information is missing.

To address these inconveniences and limitations, we designed this study with the ultimate goal of developing a comprehensive system that provides full-range centile curves for pediatric vital signs. Toward that goal, we pursued two practical objectives. The primary objective was to generate centile curves for SBP, DBP, mean BP (MBP), HR, and RR based solely on age. The secondary objective was to develop a user-friendly web-based application that provides easy access to these vital sign centiles.

Study setting and data source

This retrospective cross-sectional observational study utilized data from the National Emergency Department Information System (NEDIS) of Korea. NEDIS aggregates real-time information from emergency medical facilities nationwide, enabling access for research purposes through a data request process available on their website ( https://dw.nemc.or.kr ). Accessing NEDIS data requires submission of requisite forms, including Institutional Review Board (IRB) approval from the research institution and an official letter from its head. It is noteworthy that the provided data undergoes a stringent anonymization process, mitigating risks associated with personal data exposure.

The research design and protocol underwent review by the IRB of Seoul National University Hospital. Since the study solely relies on anonymized data from NEDIS to ensure individuals’ identities remain undisclosed, the IRB considered it to pose minimal risk to research subjects. As a result, both written consent and the overall research protocol were exempted from formal review (IRB approval number: E-2401-128-1505). Moreover, the study was conducted in accordance with the principles of the Declaration of Helsinki.

Data collection and pre-processing

The study included patients under 15 years old registered in NEDIS from January 2016 to December 2017. The collected data comprised SBP, DBP, body temperature (BT), age, and Korean Triage and Acuity Scale (KTAS) levels. KTAS categorizes patients into 5 urgency levels (1: resuscitation; 2: emergent; 3: urgent; 4: less urgent; and 5: non-urgent) 12 . Exclusion criteria included cases with missing vital signs or unassigned KTAS levels during ED stays. The study focused exclusively on KTAS levels 4 and 5, indicative of stable conditions, as levels 1, 2, and 3 denote severe emergencies. KTAS is age-dependent, with pediatric KTAS for individuals under 15 years old and adult KTAS for those aged 15 and above. Limiting the population to pediatric KTAS ensures consistency, aligning with our exclusive focus on individuals under the age of 15.

Meanwhile, centile curves and charts for HR and RR were taken directly from a previous study of a population identical to ours 13 . In line with that study’s methodology, this study also excluded cases with BT outside the range of 36–38 °C at the time of BP measurement, because BT has a well-documented, significant effect on HR and RR 13 .
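The exclusion logic described above (KTAS level 4 or 5 only, BT within 36–38 °C, no missing vital signs or triage level) can be sketched as a simple record filter. This is an illustrative sketch, not the study's actual pipeline, and the field names are hypothetical placeholders rather than the real NEDIS schema:

```python
def is_eligible(record: dict) -> bool:
    """Apply the study's exclusion criteria to one ED visit record.

    Keeps only stable visits: KTAS level 4 or 5, body temperature
    within 36-38 degrees C, and no missing values. The key names are
    illustrative placeholders, not the actual NEDIS field names.
    """
    required = ("sbp", "dbp", "bt", "ktas")
    if any(record.get(key) is None for key in required):
        return False  # missing vital sign or unassigned triage level
    return record["ktas"] in (4, 5) and 36.0 <= record["bt"] <= 38.0


records = [
    {"sbp": 104, "dbp": 62, "bt": 36.8, "ktas": 4},    # kept
    {"sbp": 98,  "dbp": 55, "bt": 38.6, "ktas": 5},    # febrile: excluded
    {"sbp": 110, "dbp": 70, "bt": 37.1, "ktas": 2},    # emergent: excluded
    {"sbp": 100, "dbp": None, "bt": 37.0, "ktas": 4},  # missing DBP: excluded
]
eligible = [r for r in records if is_eligible(r)]
```

Applying each criterion as an independent boolean check makes the flow-chart counts (records excluded per criterion) straightforward to reproduce.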

The primary objective of this study is to develop age-specific centile curves for SBP, DBP, and MBP. As a secondary objective, previously derived centile curves for HR and RR will be integrated to create a web-based application. In addition to presenting centile curves and charts in this manuscript, our aim is to make age-specific centiles and z-scores for vital signs readily accessible to everyone through a website.

Data analysis

Continuous variables were presented as medians (interquartile range), while categorical variables were expressed as numbers (%). As MBP is not directly available from NEDIS, it was calculated from SBP and DBP using the standard formula MBP = DBP + (SBP − DBP)/3 14 .
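Assuming the conventional arterial-pressure formula MBP = DBP + (SBP − DBP)/3, equivalently (SBP + 2·DBP)/3, the MBP computation is a one-liner:

```python
def mean_bp(sbp: float, dbp: float) -> float:
    """Mean blood pressure (mmHg) from systolic and diastolic values,
    using the conventional formula MBP = DBP + (SBP - DBP) / 3."""
    return dbp + (sbp - dbp) / 3.0


mbp = mean_bp(120, 80)  # 80 + 40/3 = 93.33... mmHg
```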

The Box–Cox power exponential distribution and Lambda–Mu–Sigma (LMS) method were employed to generate centile charts for SBP, DBP, and MBP based on age. The resulting curves were smoothed using B-splines and the Generalized Additive Models for Location, Scale, and Shape (GAMLSS) package in R 15 . HR and RR values were drawn from the LMS values collected in the previous study 13 . The sitar package was used to derive centile values or z-scores for individual vital signs from the derived centile charts. Centile curves were fitted to data from individuals under 15 years old, and curves for ages 15–18 were extrapolated from these fits. All analyses were conducted in R version 4.3.2 16 .
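Given fitted L (skewness), M (median), and S (coefficient of variation) values for a given age, a measurement's z-score and centile follow from the standard three-parameter LMS (Box–Cox Cole–Green) transformation. The sketch below illustrates that arithmetic only; the LMS values shown are made-up placeholders, not the study's fitted parameters:

```python
import math

def lms_z(x: float, L: float, M: float, S: float) -> float:
    """z-score of measurement x under the LMS method (Cole & Green):
    z = ((x/M)**L - 1) / (L*S) for L != 0, and z = ln(x/M) / S for L == 0."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

def z_to_centile(z: float) -> float:
    """Percentile corresponding to a z-score, via the standard normal CDF."""
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))


# Hypothetical LMS values for illustration only.
z = lms_z(110.0, L=1.0, M=100.0, S=0.1)  # ((110/100)**1 - 1)/(1*0.1) = 1.0
centile = z_to_centile(z)                # about the 84th centile
```

Inverting the same transformation, x = M·(1 + L·S·z)^(1/L), is how a centile line (fixed z) is traced across ages to draw a curve.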

Web application development

The web application’s front-end was developed using PrimeVue and related technologies 17 . Chart.js was integrated for dynamic visualization of age-specific centile curves, with features like zooming and mouse-hover interactions 18 . The back-end was built with FastAPI, and rpy2 was used for seamless Python-R integration within Docker containers 19 , 20 , 21 .

The user interface has two sections: the left for inputting data (birthdate, measurement date, and analysis period), and the right for displaying vital sign data on interactive centile charts. These charts include lines marking centiles (1st, 3rd, 10th, 25th, 50th, 75th, 90th, 97th, and 99th) to help users understand the position of a subject’s vital signs relative to population standards.
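Turning a birthdate and measurement date into the age used to look up a centile chart is the kind of bookkeeping the application automates. A minimal helper for completed months of age might look like the following (a hypothetical sketch, not the application's actual code):

```python
from datetime import date

def age_in_months(birth: date, measured: date) -> int:
    """Completed months of age at the measurement date."""
    months = (measured.year - birth.year) * 12 + (measured.month - birth.month)
    if measured.day < birth.day:
        months -= 1  # the current month is not yet complete
    return months


age = age_in_months(date(2020, 3, 15), date(2024, 9, 29))  # 54 months
```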

Baseline characteristics

During the study period, a total of 3,779,482 records were collected. Following the application of the exclusion criteria, 1,369,608 records were deemed suitable for final analysis (Fig.  1 ). The median age of the subjects was 4 (2–7) years, with females comprising 41.5% of the cohort. The distribution of vital signs and KTAS among the subjects included in the analysis is presented in Table 1 .

figure 1

Flow chart of the study. BT, body temperature; KTAS, Korean triage and acuity scale.

Main outcomes

The BP centile curves derived from this study are depicted in Fig.  2 , with SBP shown in Fig.  2 A, DBP in Fig.  2 B, and MBP in Fig.  2 C. Additionally, each centile chart can be found in supplementary Tables S1 , S2 , and S3 , respectively. The centile curves for HR and RR, imported from the previous study, are displayed in Fig.  3 A and B, respectively. Corresponding centile charts for HR and RR can be found in supplementary Tables S4 and S5 , respectively. Centile values for individuals under 15 years of age were derived based on actual measured values to construct the centile curve, while for those aged 15 and above, extrapolation was conducted using the derived LMS values. The BP curves demonstrate an increasing trend with age, whereas the HR and RR curves show a decreasing trend as age increases.

figure 2

Age-specific centile curves for blood pressure (BP). Centile curves of ( A ) systolic BP, ( B ) diastolic BP, and ( C ) mean BP are displayed. The gray shaded boxes indicate extrapolated data.

figure 3

Age-specific centile curves for heart rate and respiratory rate. Centile curves of ( A ) heart rate, and ( B ) respiratory rate are displayed. The gray shaded boxes indicate extrapolated data.

To cover the entire age range, the figures above display curves in 1-year increments. However, to examine detailed changes in early stages such as newborns and infants, supplementary figures provide more granular data: Figures S1 – S5 illustrate SBP, DBP, MBP, HR, and RR graphs for 0–24 months in 1-month increments, while Figs. S6 – S10 present the same parameters for 0–12 weeks in 1-week increments. The authors endeavor to support further research efforts by furnishing both the centile charts derived from this study and the LMS data as distinct resources. The LMS values corresponding to each vital sign across various age groups are detailed in the supplementary file, “Supplementary information.xlsx.”

The secondary outcome involved developing a user-friendly web-based application utilizing the centile curves and charts. The website is accessible at https://centile.research.or.kr . This application enables users to input a patient’s age, SBP, DBP, MBP, HR, and RR to promptly obtain centile and z-score information.

Our study developed comprehensive age-specific centile curves for BP, HR, and RR in pediatric patients. These curves were based solely on age, addressing the objective of creating an easily accessible system for vital sign evaluation. We successfully integrated these curves into a web-based application, providing a user-friendly tool for clinicians. Our analysis included 1,369,608 pediatric emergency department visits, representing a large-scale dataset for deriving these centile curves.

Pediatric vital sign assessment plays a crucial role in clinical care, serving as a fundamental tool for monitoring children’s health and guiding appropriate medical interventions 1 , 2 , 3 , 4 , 22 . The development of our age-specific centile curves for BP, HR, and RR in pediatric patients represents a significant step towards more accessible and comprehensive vital sign assessment tools. Our approach, which relies solely on age for deriving these curves, aligns with recent trends in simplifying pediatric vital sign evaluation, as evidenced by the American Heart Association’s 2010 update to the Pediatric Advanced Life Support guidelines 23 , 24 .

A key strength of our study is the provision of full-range centile data, including percentiles below the median. This contrasts with many previous studies that primarily focused on percentiles above the median 25 , 26 , 27 , 28 , particularly for hypertension measurements 7 , 8 , 29 . By offering a complete picture of vital sign distributions, our tool enables clinicians to assess both elevated and decreased vital signs with equal precision, potentially improving the detection of various clinical conditions. The results of this study address several limitations identified in previous research. Unlike many existing resources that offer only partial distributions or require additional variables like height and weight, our system provides comprehensive centile data with z-scores for BP, HR, and RR based exclusively on age. This approach potentially enhances the utility of our tool across various clinical settings, particularly in scenarios requiring rapid assessment.

Additionally, we have provided not only the centile curves and charts derived from the study’s results but also the LMS data. Rather than keeping the findings proprietary, we have made them publicly accessible to contribute to the work of other researchers. This openness facilitates subsequent research and enables comparisons with other datasets.

Despite the strengths and significance of our research, there are several limitations to consider. Firstly, our study cohort comprises patients admitted to the ED, which means that the centiles derived may not be directly applicable to healthy children. This caveat should be taken into account when interpreting the results. To mitigate this limitation, we specifically analyzed relatively non-emergent and mild cases categorized as KTAS level 4 or 5. Additionally, prior studies focusing on HR and RR within identical study cohorts have demonstrated distributions comparable to those derived from other population groups 12 . However, further investigation is warranted to compare the BP distributions of our study cohort with other populations.

Secondly, centile curves were extrapolated for age groups older than 15 years based on data derived from subjects under 15 years old. Consequently, there are limitations in guaranteeing the reliability of the extrapolated vital sign distributions. Nevertheless, our findings for the older age group closely align with typical adult normal vital sign values.

Thirdly, the analysis of BP distributions in this study only considered age variation, omitting factors such as sex, weight, or height, which are traditionally necessary components in creating pediatric BP charts. Consequently, the omission of variations in sex, weight, and height might result in subtle discrepancies when compared to previously reported results. However, this deliberate exclusion was driven by a focus on simplicity, intuitiveness, and practicality.

In conclusion, our study provides a comprehensive, accessible system for pediatric vital sign evaluation, addressing key limitations of previous approaches. While acknowledging the inherent complexities in interpreting pediatric vital signs, our evidence-based method offers a practical solution for clinical settings requiring quick and efficient assessment. Future research should focus on validating these centile curves in diverse populations and clinical contexts, further refining the balance between simplicity and precision in pediatric vital sign evaluation.

Data availability

The data underlying this study belong to the National Emergency Medical Center (NEMC) of Korea. NEMC provides de-identified National Emergency Department Information System (NEDIS) data to researchers for nonprofit academic research. Researchers who propose a study subject and plan using a standardized proposal form, and who are approved by the NEMC review committee on research support, can access the raw data. Details of this process and a provision guide are available at the NEMC website ( https://dw.nemc.or.kr ) or through the NEMC review committee's contact point ([email protected]). The authors accessed the data used in this study in the same manner that they expect future researchers to do and did not receive special privileges from the NEMC of Korea.

Code availability

The code used in this study is not publicly available; however, it may be provided by the corresponding author upon reasonable request.

Sepanski, R. J., Godambe, S. A. & Zaritsky, A. L. Pediatric vital sign distribution derived from a multi-centered emergency department database. Front. Pediatr. 6 , 66. https://doi.org/10.3389/fped.2018.00066 (2018).

Eisenberg, M. A. & Balamuth, F. Pediatric sepsis screening in US hospitals. Pediatr. Res. 91 , 351–358. https://doi.org/10.1038/s41390-021-01708-y (2022).

Evans, I. V. R. et al. Association between the New York sepsis care mandate and in-hospital mortality for pediatric sepsis. JAMA 320 , 358–367. https://doi.org/10.1001/jama.2018.9071 (2018).

Lomez, J. et al. Airway management during a respiratory arrest in a clinical simulation scenario. Experience at a pediatric residency program. Arch. Argent. Pediatr. 122 , e202310172. https://doi.org/10.5546/aap.2023-10172.eng (2024).

National High Blood Pressure Education Program Working Group on High Blood Pressure in Children and Adolescents. The fourth report on the diagnosis, evaluation, and treatment of high blood pressure in children and adolescents. Pediatrics 114 , 555–576 (2004).

Banker, A., Bell, C., Gupta-Malhotra, M. & Samuels, J. Blood pressure percentile charts to identify high or low blood pressure in children. BMC Pediatr. 16 , 98. https://doi.org/10.1186/s12887-016-0633-7 (2016).

Kim, S. H. et al. Blood pressure reference values for normal weight Korean children and adolescents: Data from The Korea national health and nutrition examination survey 1998–2016: The Korean working group of pediatric hypertension. Korean Circ. J. 49 , 1167–1180. https://doi.org/10.4070/kcj.2019.0075 (2019).

Lee, H. A. et al. Blood pressure curve for children less than 10 years of age: Findings from the Ewha birth and growth cohort study. J Korean Med. Sci. 35 , e91. https://doi.org/10.3346/jkms.2020.35.e91 (2020).

Fleming, S. et al. Normal ranges of heart rate and respiratory rate in children from birth to 18 years of age: A systematic review of observational studies. Lancet 377 , 1011–1018. https://doi.org/10.1016/s0140-6736(10)62226-x (2011).

Bonafide, C. P. et al. Development of heart and respiratory rate percentile curves for hospitalized children. Pediatrics 131 , e1150-1157. https://doi.org/10.1542/peds.2012-2443 (2013).

O’Leary, F., Hayen, A., Lockie, F. & Peat, J. Defining normal ranges and centiles for heart and respiratory rates in infants and children: A cross-sectional study of patients attending an Australian tertiary hospital paediatric emergency department. Arch. Dis. Child. 100 , 733–737. https://doi.org/10.1136/archdischild-2014-307401 (2015).

Lee, B., Kim, D. K., Park, J. D. & Kwak, Y. H. Clinical considerations when applying vital signs in pediatric Korean triage and acuity scale. J. Korean Med. Sci. 32 , 1702–1707. https://doi.org/10.3346/jkms.2017.32.10.1702 (2017).

Bae, W., Kim, K. & Lee, B. Distribution of pediatric vital signs in the emergency department: A nationwide study. Children (Basel) https://doi.org/10.3390/children7080089 (2020).

Geddes, L. A., Voelz, M., Combs, C., Reiner, D. & Babbs, C. F. Characterization of the oscillometric method for measuring indirect blood pressure. Ann. Biomed. Eng. 10 , 271–280. https://doi.org/10.1007/bf02367308 (1982).

Stasinopoulos, D. M. & Rigby, R. A. Generalized additive models for location scale and shape (GAMLSS) in R. J. Stat. Softw. 23 , 1–46 (2008).

R: Documentation—The R Project for Statistical Computing. https://www.r-project.org (2024).

PrimeTek. PrimeVue - Vue UI Component Library. https://www.primevue.org (2024).

Chart.js Official Documentation. https://www.chartjs.org (2024).

FastAPI Official Documentation. https://fastapi.tiangolo.com (2024).

rpy2 Official Documentation. https://rpy2.github.io (2024).

Docker Official Documentation. https://docs.docker.com (2024).

Baruteau, A. E., Perry, J. C., Sanatani, S., Horie, M. & Dubin, A. M. Evaluation and management of bradycardia in neonates and children. Eur. J. Pediatr. 175 , 151–161. https://doi.org/10.1007/s00431-015-2689-z (2016).

Kleinman, M. E. et al. Part 10: Pediatric basic and advanced life support: 2010 international consensus on cardiopulmonary resuscitation and emergency cardiovascular care science with treatment recommendations. Circulation 122 , S466-515. https://doi.org/10.1161/circulationaha.110.971093 (2010).

Haque, I. U. & Zaritsky, A. L. Analysis of the evidence for the lower limit of systolic and mean arterial pressure in children. Pediatr. Crit. Care Med. 8 , 138–144. https://doi.org/10.1097/01.Pcc.0000257039.32593.Dc (2007).

El-Shafie, A. M. et al. Establishment of blood pressure nomograms representative for Egyptian children and adolescents: A cross-sectional study. BMJ Open 8 , e020609. https://doi.org/10.1136/bmjopen-2017-020609 (2018).

Shypailo, R. Age-based Pediatric Blood Pressure Reference Charts. Baylor College of Medicine, Children’s Nutrition Research Center, Body Composition Laboratory. https://www.bcm.edu/bodycomplab/BPappZjs/BPvAgeAPPz.html (2021).

Flynn, J. T. et al. Clinical practice guideline for screening and management of high blood pressure in children and adolescents. Pediatrics https://doi.org/10.1542/peds.2017-1904 (2017).

Rosner, B., Cook, N., Portman, R., Daniels, S. & Falkner, B. Determination of blood pressure percentiles in normal-weight children: Some methodological issues. Am. J. Epidemiol. 167 , 653–666. https://doi.org/10.1093/aje/kwm348 (2008).

Schwandt, P., Scholze, J. E., Bertsch, T., Liepold, E. & Haas, G. M. Blood pressure percentiles in 22,051 German children and adolescents: The PEP family heart study. Am. J. Hypertens. 28 , 672–679. https://doi.org/10.1093/ajh/hpu208 (2015).

Acknowledgements

This research was conducted without any funding or support.

Author information

Authors and Affiliations

Department of Pediatrics, Seoul National University Hospital, 101, Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea

Seayoung Goo, Wonjin Jang, You Sun Kim, June Dong Park & Bongjin Lee

HUINNO AIM Co., Ltd., Seoul, Republic of Korea

Seungbae Ji & Taewoo Park

Department of Pediatrics, Seoul National University College of Medicine, Seoul, Republic of Korea

June Dong Park

Innovative Medical Technology Research Institute, Seoul National University Hospital, Seoul, Republic of Korea

Bongjin Lee

Contributions

Study concept and design: B.L. Data collection, analysis and cleaning: S.G. and B.L. Interpretation of data: S.G. and B.L. Web application implementation: S.J. and T.P. Drafting and revising the manuscript: S.G., W.J., Y.S.K., S.J., T.P., J.D.P., and B.L. Critical editing: W.J. and B.L. Supervision of the original draft and article: B.L. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Bongjin Lee .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information 1. Supplementary Tables. Supplementary Figures.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article.

Goo, S., Jang, W., Kim, Y.S. et al. Streamlining pediatric vital sign assessment: innovations and insights. Sci Rep 14 , 22542 (2024). https://doi.org/10.1038/s41598-024-73148-7

Download citation

Received : 27 March 2024

Accepted : 13 September 2024

Published : 29 September 2024

DOI : https://doi.org/10.1038/s41598-024-73148-7


TRIO McNair Undergraduate Research Guide: Limitations of the Study


The limitations of the study are those characteristics of design or methodology that impacted or influenced the application or interpretation of the results of your study. They are the constraints on generalizability and utility of findings that are the result of the ways in which you chose to design the study and/or the method used to establish internal and external validity. 

Importance of...

Always acknowledge a study's limitations. It is far better for you to identify and acknowledge your study’s limitations than to have them pointed out by your professor and be graded down because you appear to have ignored them. 

Keep in mind that acknowledgement of a study's limitations is an opportunity to make suggestions for further research . If you do connect your study's limitations to suggestions for further research, be sure to explain the ways in which these unanswered questions may become more focused because of your study. 

Acknowledgement of a study's limitations also provides you with an opportunity to demonstrate to your professor that you have thought critically about the research problem, understood the relevant literature published about it, and correctly assessed the methods chosen for studying the problem. A key objective of the research process is not only discovering new knowledge but also to confront assumptions and explore what we don't know. 

Claiming limitations is a subjective process because you must evaluate the impact of those limitations. Don't just list key weaknesses without assessing the magnitude of the study's limitations. To do so diminishes the validity of your research because it leaves the reader wondering whether, or in what ways, limitation(s) in your study may have impacted the findings and conclusions. Limitations require a critical, overall appraisal and interpretation of their impact. You should answer the question: do these problems with errors, methods, validity, etc. ultimately matter and, if so, to what extent?

Structure: How to Structure the Research Limitations Section of Your Dissertation . Dissertations and Theses: An Online Textbook. Laerd.com.

Descriptions of Possible Limitations

All studies have limitations. However, it is important that you restrict your discussion to limitations related to the research problem under investigation. For example, if a meta-analysis of existing literature is not a stated purpose of your research, it should not be discussed as a limitation. Do not apologize for not addressing issues that you did not promise to investigate in your paper. 

Here are examples of limitations you may need to describe and to discuss how they possibly impacted your findings. Descriptions of limitations should be stated in the past tense. 

Possible Methodological Limitations 

Sample size -- the number of units of analysis you use in your study is dictated by the type of research problem you are investigating. Note that, if your sample size is too small, it will be difficult to find significant relationships in the data, as statistical tests normally require a larger sample size to ensure a representative distribution of the population and to support generalization to the groups of people to whom results will be transferred.

Lack of available and/or reliable data -- a lack of data, or of reliable data, will likely require you to limit the scope of your analysis or the size of your sample, or it can be a significant obstacle to finding a trend or a meaningful relationship. You need not only to describe these limitations but also to offer reasons why you believe the data is missing or unreliable. However, don't just throw up your hands in frustration; use this as an opportunity to describe the need for future research.

Lack of prior research studies on the topic -- citing prior research studies forms the basis of your literature review and helps lay a foundation for understanding the research problem you are investigating. Depending on the currency or scope of your research topic, there may be little, if any, prior research on your topic. Before assuming this to be true, consult with a librarian! In cases when a librarian has confirmed that there is a lack of prior research, you may be required to develop an entirely new research typology [for example, using an exploratory rather than an explanatory research design]. Note that this limitation can serve as an important opportunity to describe the need for further research. 

Measure used to collect the data -- sometimes it is the case that, after completing your interpretation of the findings, you discover that the way in which you gathered data inhibited your ability to conduct a thorough analysis of the results. For example, you regret not including a specific question in a survey that, in retrospect, could have helped address a particular issue that emerged later in the study. Acknowledge the deficiency by stating a need in future research to revise the specific method for gathering data. 

Self-reported data -- whether you are relying on pre-existing self-reported data or you are conducting a qualitative research study and gathering the data yourself, self-reported data is limited by the fact that it rarely can be independently verified. In other words, you must take what people say, whether in interviews, focus groups, or on questionnaires, at face value. However, self-reported data contain several potential sources of bias that should be noted as limitations: (1) selective memory [remembering or not remembering experiences or events that occurred at some point in the past]; (2) telescoping [recalling events that occurred at one time as if they occurred at another time]; (3) attribution [the act of attributing positive events and outcomes to one's own agency but attributing negative events and outcomes to external forces]; and (4) exaggeration [the act of representing outcomes or embellishing events as more significant than is actually suggested by other data].

Possible Limitations of the Researcher 

Access -- if your study depends on having access to people, organizations, or documents and, for whatever reason, access is denied or otherwise limited, the reasons for this need to be described. 

Longitudinal effects -- unlike your professor, who can devote years [even a lifetime] to studying a single research problem, the time available to investigate a research problem and to measure change or stability within a sample is constrained by the due date of your assignment. Be sure to choose a topic that does not require an excessive amount of time to complete the literature review, apply the methodology, and gather and interpret the results. If you're unsure, talk to your professor. 

Cultural and other types of bias -- we all have biases, whether we are conscious of them or not. Bias is when a person, place, or thing is viewed or shown in a consistently inaccurate way. It is usually negative, though one can have a positive bias as well. When proofreading your paper, be especially critical in reviewing how you have stated a problem, selected the data to be studied, what may have been omitted, the way you have ordered events, people, or places, and how you have chosen to represent a person, place, or thing, to name a phenomenon, or to use words with a positive or negative connotation. Note that if you detect bias in prior research, it must be acknowledged, and you should explain what measures were taken to avoid perpetuating that bias.

Fluency in a language -- if your research focuses on measuring the perceived value of after-school tutoring among Mexican American ESL [English as a Second Language] students, for example, and you are not fluent in Spanish, you are limited in being able to read and interpret Spanish language research studies on the topic. This deficiency should be acknowledged. 

Brutus, Stéphane et al. Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations.  Journal of Management  39 (January 2013): 48-75; Senunyeme, Emmanuel K.  Business Research Methods . Powerpoint Presentation. Regent University of Science and Technology.

Structure and Writing Style

Information about the limitations of your study is generally placed either at the beginning of the discussion section of your paper so the reader knows and understands the limitations before reading the rest of your analysis of the findings, or the limitations are outlined at the conclusion of the discussion section as an acknowledgement of the need for further study. Statements about a study's limitations should not be buried in the body [middle] of the discussion section unless a limitation is specific to something covered in that part of the paper. If this is the case, though, the limitation should be reiterated at the conclusion of the section. 

If you determine that your study is seriously flawed due to important limitations, such as an inability to acquire critical data, consider reframing it as a pilot study intended to lay the groundwork for a more complete research study in the future. Be sure, though, to specifically explain the ways that these flaws can be successfully overcome in later studies. 

But do not use this as an excuse for not developing a thorough research paper! Review the tab in this guide for developing a research topic. If serious limitations exist, it generally indicates a likelihood that your research problem is too narrowly defined or that the issue or event under study is too recent and, thus, very little research has been written about it. If serious limitations do emerge, consult with your professor about possible ways to overcome them or how to reframe your study. 

When discussing the limitations of your research, be sure to:  

Describe each limitation in detailed but concise terms; 

Explain why each limitation exists; 

Provide the reasons why each limitation could not be overcome using the method(s) chosen to gather the data [cite to other studies that had similar problems when possible]; 

Assess the impact of each limitation in relation to the overall findings and conclusions of your study; and, 

If appropriate, describe how these limitations could point to the need for further research. 

Remember that the method you chose may be the source of a significant limitation that has emerged during your interpretation of the results [for example, you didn't ask a particular question in a survey that you later wish you had]. If this is the case, don't panic. Acknowledge it and explain how applying a different or more robust methodology might address the research problem more effectively in any future study. An underlying goal of scholarly research is not only to prove what works, but to demonstrate what doesn't work or what needs further clarification. 

Brutus, Stéphane et al. Self-Reported Limitations and Future Directions in Scholarly Reports: Analysis and Recommendations.  Journal of Management  39 (January 2013): 48-75; Ioannidis, John P.A. Limitations are not Properly Acknowledged in the Scientific Literature. Journal of Clinical Epidemiology 60 (2007): 324-329; Pasek, Josh.  Writing the Empirical Social Science Research Paper: A Guide for the Perplexed . January 24, 2012. Academia.edu;  Structure: How to Structure the Research Limitations Section of Your Dissertation . Dissertations and Theses: An Online Textbook. Laerd.com;  What Is an Academic Paper?  Institute for Writing Rhetoric. Dartmouth College; Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.

Writing Tip

Don't Inflate the Importance of Your Findings!    After all the hard work and long hours devoted to writing your research paper, it is easy to get carried away with attributing unwarranted importance to what you’ve done. We all want our academic work to be viewed as excellent and worthy of a good grade, but it is important that you understand and openly acknowledge the limitations of your study. Inflating the importance of your study's findings in an attempt to hide its flaws is a big turn off to your readers. A measure of humility goes a long way! 

Another Writing Tip

Negative Results are Not a Limitation! 

Negative evidence refers to findings that unexpectedly challenge rather than support your hypothesis. If you didn't get the results you anticipated, it may mean your hypothesis was incorrect and needs to be reformulated, or perhaps you have stumbled onto something unexpected that warrants further study. Moreover, the absence of an effect may be very telling in many situations, particularly in experimental research designs. In any case, your results may be of importance to others even though they did not support your hypothesis. Do not fall into the trap of thinking that results contrary to what you expected are a limitation of your study. If you carried out the research well, they are simply your results and only require additional interpretation.

Yet Another Writing Tip

A Note about Sample Size Limitations in Qualitative Research 

Sample sizes are typically smaller in qualitative research because, as the study goes on, acquiring more data does not necessarily lead to more information. This is because one occurrence of a piece of data, or a code, is all that is necessary to ensure that it becomes part of the analysis framework. However, it remains true that sample sizes that are too small cannot adequately support claims of having achieved valid conclusions and sample sizes that are too large do not permit the deep, naturalistic, and inductive analysis that defines qualitative inquiry. Determining adequate sample size in qualitative research is ultimately a matter of judgment and experience in evaluating the quality of the information collected against the uses to which it will be applied, and the particular research method and purposeful sampling strategy employed. If the sample size is found to be a limitation, it may reflect your judgement about the methodological technique chosen [e.g., single life history study versus focus group interviews] rather than the number of respondents used. 

Huberman, A. Michael and Matthew B. Miles. Data Management and Analysis Methods. In Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 428-444.


Critical appraisal of the chorioallantoic membrane model for studying angiogenesis in preclinical research

  • Published: 28 September 2024
  • Volume 51 , article number  1026 , ( 2024 )


  • Madhura Shekatkar 1 ,
  • Supriya Kheur 1 ,
  • Shantanu Deshpande 2 ,
  • Swapnali Sakhare 3 ,
  • Avinash Sanap 3 ,
  • Mohit Kheur 4 &
  • Ramesh Bhonde 3  

Angiogenesis, the biological mechanism by which new blood vessels are generated from existing ones, plays a vital role in growth and development. Effective preclinical screening is necessary for the development of medications that may enhance or inhibit angiogenesis in the setting of different disorders. Traditional in vitro and in vivo models of angiogenesis are laborious and time-consuming, necessitating advanced infrastructure for embryo culture.

A challenge encountered by researchers studying angiogenesis is the lack of appropriate techniques to evaluate the impact of regulators on the angiogenic response. An ideal test should possess reliability, technical simplicity, easy quantifiability, and, most importantly, physiological relevance. The CAM model, leveraging the extraembryonic membrane of the chicken embryo, offers a unique combination of accessibility, low cost, and rapid development, making it an attractive option for angiogenesis assays. This review evaluates the strengths and limitations of the CAM model in the context of its anatomical and physiological properties, and its relevance to human pathophysiological conditions. Its abundant capillary network makes it a common choice for studying angiogenesis. The CAM assay serves as a substitute for animal models and offers a natural setting for developing blood vessels and the many elements involved in the intricate interaction with the host. Despite its advantages, the CAM model’s limitations are notable. These include species-specific responses that may not always extrapolate to humans and the ethical considerations of using avian embryos. We discuss methodological adaptations that can mitigate some of these limitations and propose future directions to enhance the translational relevance of this model. This review underscores the CAM model’s valuable role in angiogenesis research and aims to guide researchers in optimizing its use for more predictive and robust preclinical studies.

The highly vascularized chorioallantoic membrane (CAM) of fertilized chicken eggs is a cost-effective and easily available method for screening angiogenesis, in comparison to other animal models.


Data availability

No datasets were generated or analysed during the current study.



Acknowledgements

The authors thank Regenerative Medicine Laboratory, Dr. D. Y. Patil Dental College and Hospital, Dr. D.Y. Patil Vidyapeeth, Pimpri, Pune, India, for their support.


Author information

Authors and affiliations

Department of Oral Pathology and Microbiology, Dr. D. Y. Patil Dental College and Hospital, Dr. D. Y. Patil Vidyapeeth, Pimpri, Pune, Maharashtra, India

Madhura Shekatkar & Supriya Kheur

Department of Pediatric and Preventive Dentistry, Bharati Vidyapeeth (Deemed to be University), Dental College and Hospital, Navi Mumbai, India

Shantanu Deshpande

Regenerative Medicine Laboratory, Dr. D. Y. Patil Dental College and Hospital, Dr. D. Y. Patil Vidyapeeth, Pune, Maharashtra, India

Swapnali Sakhare, Avinash Sanap & Ramesh Bhonde

Department of Prosthodontics, M.A. Rangoonwala College of Dental Sciences and Research Centre, Pune, Maharashtra, India

Mohit Kheur


Contributions

Substantial contributions to the conception or design of the work: MK, RB, SK. Acquisition, analysis, or interpretation of data for the work: MS, SK, SD, SS, AS. Drafting the work: MS, SD, SS. Revising it critically for important intellectual content: AS, MK, RB. Final approval of the version to be published: SK, RB, MK, AS. All authors reviewed the manuscript.

Corresponding author

Correspondence to Supriya Kheur.

Ethics declarations

Ethics approval and consent to participate; consent for publication; competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Shekatkar, M., Kheur, S., Deshpande, S. et al. Critical appraisal of the chorioallantoic membrane model for studying angiogenesis in preclinical research. Mol Biol Rep 51, 1026 (2024). https://doi.org/10.1007/s11033-024-09956-x

Download citation

Received: 01 March 2024

Accepted: 18 September 2024

Published: 28 September 2024


  • Chorioallantoic membrane assay
  • Angiogenesis
  • Chick embryo
  • In ovo yolk sac membrane (YSM)

Managing the exponential growth of Mendelian randomization studies

Tobacco Control (BMJ), Volume 33, Issue 5
Marcus R Munafo (University of Bristol, Bristol, UK), Jamie Brown (Public Health and Epidemiology, University College London, London, UK), Marita Hefler (Menzies School of Health Research, Charles Darwin University, Casuarina, Northern Territory, Australia), and George Davey Smith (Department of Social Medicine, MRC Centre for Causal Analyses in Translational Epidemiology, Bristol, UK)

Correspondence to Dr Marcus R Munafo; marcus.munafo{at}bristol.ac.uk

https://doi.org/10.1136/tc-2024-058987


  • Carcinogens
  • Tobacco industry

Much of the research we publish relates to questions of cause and effect. In an ideal world, we would subject these questions to experimentation, randomising study participants to different conditions. However, in many cases – particularly in the context of addiction – such randomisation is simply not possible. We cannot randomise tobacco-naïve children to use e-cigarettes, for example, to determine whether or not vaping acts as a ‘gateway’ to subsequent smoking. In these cases, we have to rely on observational methods, which suffer from well-described problems of confounding, including reverse causality.

Several methods exist for strengthening causal inference in such cases, from the use of prospective data and statistical adjustment for confounding, through to propensity score matching and the use of natural experiments. One method, in particular, has experienced exponential growth recently – Mendelian randomization (MR). 1 2 This approach uses genetic variants as proxies for an exposure of interest, effectively as a form of instrumental variable analysis. If relevant assumptions hold, this should protect against confounding, including reverse causality, due to the random allocation of genotype at meiosis and the fact that environmental exposures cannot directly alter germline DNA sequence. 3
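The instrumental-variable logic described above can be sketched numerically. In the single-variant case, the estimator is the Wald ratio: the SNP-outcome association divided by the SNP-exposure association. The function and the summary statistics below are illustrative, not taken from any real GWAS.

```python
# Single-variant Mendelian randomization via the Wald ratio (illustrative sketch).
# beta_gx: SNP-exposure association; beta_gy: SNP-outcome association.
# If the instrumental-variable assumptions hold, the causal effect of the
# exposure on the outcome is beta_gy / beta_gx; a first-order (delta-method)
# standard error is se_gy / |beta_gx|, ignoring uncertainty in beta_gx.

def wald_ratio(beta_gx, beta_gy, se_gy):
    estimate = beta_gy / beta_gx
    se = se_gy / abs(beta_gx)
    return estimate, se

# Hypothetical summary statistics: a variant shifting the exposure by 0.10
# units per allele and the outcome by 0.02 units per allele.
est, se = wald_ratio(beta_gx=0.10, beta_gy=0.02, se_gy=0.005)  # est ≈ 0.2
```

In practice the exposure and outcome associations come from two separate GWAS (two-sample MR), which is what makes the method feasible with summary data alone.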

As summary data from a vast range of genome-wide association studies (GWAS) have become widely and freely available, it has become possible to run every permutation of the vast number of exposure-outcome relationships that exist using MR methods. This, in principle, is a good thing. Although MR is not without its limitations (and critics), it is a potentially powerful tool that has provided important evidence – for example suggesting a causal effect of cigarette smoking on some adverse mental health outcomes. 4 However, with the advent of platforms such as MR Base ( https://www.mrbase.org ), all bivariate relationships tractable via MR using summary GWAS data can be considered to have been conducted. 5
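The mechanical pipeline just described, combining summary GWAS associations across many variants, typically rests on the inverse-variance weighted (IVW) estimator. A minimal sketch, with made-up numbers rather than real GWAS output:

```python
# Inverse-variance weighted (IVW) pooling of per-variant Wald ratios,
# the workhorse of summary-data MR pipelines (illustrative sketch).
# Each input pair is (wald_ratio, standard_error) for one genetic variant.

def ivw_estimate(ratios_and_ses):
    weights = [1 / se ** 2 for _, se in ratios_and_ses]
    num = sum(w * r for w, (r, _) in zip(weights, ratios_and_ses))
    den = sum(weights)
    return num / den, (1 / den) ** 0.5  # pooled estimate and its SE

# Two hypothetical variants: the more precise ratio dominates the pooled result.
ivw_est, ivw_se = ivw_estimate([(0.2, 0.05), (0.3, 0.1)])  # ivw_est ≈ 0.22
```

Precisely because this computation is so mechanical, running it over every available exposure-outcome pair requires no new data and little thought, which is the concern the following paragraphs develop.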

Unfortunately, GWAS with ever-larger samples have enabled us to detect genetic variants associated with an exposure of interest that are also associated with a range of other exposures (and not via the exposure of interest), effectively reintroducing confounding when using MR. For example, genetic variants associated with smoking initiation are also associated with behavioural outcomes in young children, at an age before any exposure to smoking, suggesting that these variants may not be uniquely capturing smoking initiation but instead some broad risk-taking phenotype. 6

In other words, while the exposure of interest may be smoking initiation, the primary phenotype most proximal to the genetic proxies used may be risk taking. This means that using these variants as a proxy for smoking initiation may introduce genetic confounding. Dynastic effects may also be operating, whereby offspring genetic variants become associated with particular environmental exposures due to parental genotypes influencing these exposures and (of course) offspring genotype. This is likely to be a particular issue in the context of substance use.

Taken together, this means that using MR in the context of complex behavioural exposures requires careful thought – the use of negative controls to exclude alternative pathways to the outcome of interest, and ideally triangulation of evidence using multiple study designs, combined with a detailed understanding of the plausible biological pathways where possible. 7

Despite this, we are unfortunately seeing an ever-increasing number of MR studies that simply use summary GWAS data, and lack negative controls or evidence from other study designs and methodologies to strengthen inference. These often investigate causal pathways that are already known (eg, whether smoking causes coronary artery disease), or exposures that simply do not lend themselves to genetic instrumentation within an MR framework (eg, skipping breakfast). Ultimately, these studies either do not advance knowledge (because the answer is already known) or offer little more than a conventional observational study. Indeed, they may offer less, and in fact have negative utility, in that they come packaged with causal claims in a way that conventional observational studies typically do not. In this way they may actually serve to degrade knowledge.

What is driving this increase? Ultimately, it is down to current incentive structures that reward publication over knowledge.

An indicator of this is that there are now relatively few studies applying MR methods that report null results. This is ironic. A key early application of MR was to establish whether widely reported conventional observational associations were causal. For example, circulating C-reactive protein (CRP) is associated with coronary heart disease (CHD), and novel therapeutics were developed to target CRP because of its presumed causal role; however, MR analyses established that CRP does not cause CHD, with the likely reason for the observed association being that early stages of the atherosclerotic disease process increase CRP levels, as do many established causes of CHD such as cigarette smoking and elevated adiposity. 8 9 Another key early null MR finding related to HDL cholesterol, which was widely considered to reduce CHD risk; MR and RCTs concurred in demonstrating the lack of benefit of higher HDL. 10 Such null results are critical in correcting erroneous findings in observational epidemiology.

There are other ways to use Mendelian randomization that genuinely add to the sum of human knowledge. For example, Khouja and colleagues used multivariable MR to attempt to dissect the effects of nicotine and non-nicotine constituents of tobacco smoke on outcomes known to be caused by smoking. 11 And Davies and colleagues triangulated evidence from MR and the natural experiment of the raising of the school leaving age in the UK to understand the causal effects of educational attainment on smoking initiation. 12

MR analyses have been already used to good effect in a range of areas relevant to addictive behaviours. For example, in conventional analyses, ‘moderate’ alcohol intake is associated with reduced cardiovascular risk, when compared with abstinence or heavier drinking. These findings, repeatedly reported over several decades, achieved widespread recognition.

However, using genetic variants that strongly influence alcohol consumption, together with the natural experiment created by few women drinking in east Asian countries, genotype-predicted alcohol intake was shown to be linearly related to blood pressure; critically, this was only observed in men. The lack of a relationship among women (who drank virtually no alcohol) indicated that the genetic variants did not have an effect on blood pressure except through their relationship with alcohol intake. 13 Using the same approach, alcohol was shown to have a continuous linear adverse effect on stroke risk. 14 As mentioned above, MR was also used to demonstrate that HDL cholesterol, which was supposed to mediate the protective effect of alcohol on CHD, was not actually protective. Thus, the evidence from MR studies and that from randomised controlled trials triangulated to clarify one of the most controversial issues in cardiovascular epidemiology.

We therefore suggest that MR studies that simply use summary GWAS data to estimate the causal effect of X on Y should be given a low priority for publication unless they genuinely advance knowledge. This could be achieved by exploring complex exposures and/or outcomes (including testing for possible mediation via multivariable MR), incorporating negative controls and/or evidence from other study designs such as natural experiments, and collaborating with biologists to advance plausible biological mechanisms. MR studies should also conform to the STROBE-MR reporting guidelines. 15

We do not wish to be overly prescriptive, but ultimately if a study offers nothing more than the mechanical application of a statistical package to publicly available summary data, then it may not warrant the use of editorial and reviewer time, and journal space.
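To make concrete how mechanical such an analysis can be, here is a minimal sketch (with entirely invented summary statistics) of the fixed-effect inverse-variance-weighted (IVW) estimator commonly applied to two-sample summary GWAS data. The numbers are hypothetical and purely illustrative; a real analysis would require the negative controls, sensitivity analyses, and triangulation discussed above.

```python
# Minimal two-sample MR inverse-variance-weighted (IVW) estimate.
# beta_x: per-variant SNP-exposure effects; beta_y: SNP-outcome effects;
# se_y: standard errors of the SNP-outcome effects. All values are made up.

def ivw_estimate(beta_x, beta_y, se_y):
    """Fixed-effect IVW causal estimate: a weighted average of per-variant
    Wald ratios (beta_y / beta_x), with weights (beta_x / se_y)**2."""
    weights = [(bx / sy) ** 2 for bx, sy in zip(beta_x, se_y)]
    ratios = [by / bx for bx, by in zip(beta_x, beta_y)]
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

# Hypothetical summary statistics for three independent variants:
beta_x = [0.10, 0.20, 0.15]   # SNP-exposure associations
beta_y = [0.03, 0.07, 0.04]   # SNP-outcome associations
se_y   = [0.01, 0.01, 0.01]   # SEs of the SNP-outcome associations

print(round(ivw_estimate(beta_x, beta_y, se_y), 3))
```

The editorial's point stands: the computation itself is a few lines, so the inferential value of an MR study must come from the design and triangulation around it, not from the arithmetic.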


Contributors MRM (conceptualization [equal], writing—original draft [lead], writing—review & editing [equal]), JB (conceptualization [equal], writing—review & editing [equal]), MH (writing—review & editing [equal]), and GDS (conceptualization [equal], writing—review & editing [equal]).

Competing interests MRM leads an MRC Programme that conducts a substantial amount of MR research (MC_UU_00032/07). JB has received unrestricted funding to study smoking cessation from J&J and Pfizer, who manufacture medically licensed smoking cessation pharmacotherapies. MH has no competing interests to declare. GDS coauthored the first extended exposition of Mendelian randomization, and therefore has considerable intellectual investment in the approach and has received funding for MR studies over many years. He directs an MRC Unit that conducts a substantial amount of MR research (MC_UU_00032/01).


Color Matters: A Study Exploring the Influence of Packaging Colors on University Students’ Perceptions and Willingness to Pay for Organic Pasta





Sample characteristics:

| Characteristic | Category | n | % |
|---|---|---|---|
| Gender | Male | 32 | 31% |
| | Female | 70 | 69% |
| Age | 18–25 | 99 | 97% |
| | 26–35 | 3 | 3% |
| Place of living | Capital city | 44 | 43% |
| | City/town | 37 | 36% |
| | Village | 21 | 21% |
| Education | High school | 93 | 91% |
| | Diploma | 9 | 9% |
| Income | Low | 11 | 11% |
| | Average | 50 | 49% |
| | High | 41 | 40% |
| Organic purchase frequency | Never/almost never | 32 | 31% |
| | Less than once per month | 35 | 34% |
| | 1–2 times per month | 25 | 25% |
| | Once per week | 7 | 7% |
| | Several times per week | 3 | 3% |
Mean perception scores by packaging color:

| Packaging color | Trust | Sustainability | Premiumness | Healthiness |
|---|---|---|---|---|
| White | 5.08 | 4.64 | 4.33 | 5.21 |
| Black | 5.05 | 4.61 | 4.92 | 5.15 |
| Green | 5.21 | 4.70 | 4.42 | 5.20 |
| Blue | 5.07 | 4.61 | 4.44 | 5.04 |
Willingness to pay (WTP) by packaging color:

| Packaging color | WTP | Std. Dev. | Min | Max |
|---|---|---|---|---|
| White | 544.58 | 185.76 | 0 | 1000 |
| Black | 570.87 | 198.04 | 0 | 1000 |
| Green | 543.59 | 192.50 | 0 | 1000 |
| Blue | 538.71 | 194.41 | 0 | 1000 |
Predictors of WTP by packaging color (full sample):

| | White WTP | Black WTP | Green WTP | Blue WTP |
|---|---|---|---|---|
| Age | 0.98 | 0.98 | 0.16 | 0.92 |
| Gender | 0.92 | −0.10 | −0.98 | 0.23 |
| Education | −0.46 | −0.81 | 0.19 | −0.62 |
| Income | 0.09 | −0.28 | 0.39 | 0.32 |
| Place of living | −0.61 | −0.56 | −0.08 | −0.16 |
| Organic purchase | 0.82 | 1.35 | 0.71 | 0.34 |
| Trust | 2.74 ** | 3.43 ** | 2.49 ** | 2.73 ** |
| Sustainability | 0.24 | 0.61 | 0.48 | 1.06 |
| Premiumness | 4.17 ** | 2.73 ** | 2.21 ** | 2.64 ** |
| Healthiness | 2.03 ** | 2.27 ** | 1.16 | 2.35 ** |
| Price consciousness | 1.47 | 2.02 ** | 1.28 | 1.12 |
| Quality consciousness | 1.61 | 0.89 | −0.33 | 0.75 |
| General health interest | 0.23 | 0.01 | 1.79 * | 1.02 |
| Natural product interest | −0.74 | 0.34 | −0.24 | −1.05 |
| Food responsibility | −0.24 | −0.84 | −0.58 | 0.77 |
| Constant | −1.08 | −0.66 | −0.21 | −0.86 |
| R² | 0.161 | 0.271 | 0.195 | 0.261 |
| Chi² | 76.53 | 106.30 | 53.62 | 94.16 |
| p | 0.000 | 0.000 | 0.000 | 0.000 |
Predictors of WTP by buyer group:

| | White WTP | Black WTP | Green WTP | Blue WTP |
|---|---|---|---|---|
| Non-organic buyers (n = 32) | | | | |
| Trust | 2.00 ** | 2.76 ** | 3.00 ** | 2.79 ** |
| Sustainability | 0.09 | 0.16 | −1.28 | −0.07 |
| Premiumness | 2.01 ** | 1.49 | 1.45 | 1.39 |
| Healthiness | 0.05 | 0.02 | 1.80 * | 0.89 |
| Price consciousness | −0.45 | −0.66 | 0.05 | −1.40 |
| Constant | 1.46 | 1.49 | 0.52 | 1.17 |
| R² | −0.069 | 0.157 | 0.066 | 0.372 |
| Chi² | 15.53 | 36.94 | 31.32 | 45.16 |
| p | 0.008 | 0.000 | 0.000 | 0.000 |
| Organic buyers (n = 70) | | | | |
| Trust | 3.52 ** | 3.11 ** | 1.51 | 2.97 ** |
| Sustainability | −0.05 | 0.57 | 0.14 | 0.74 |
| Premiumness | 4.23 ** | 2.66 ** | 2.38 ** | 3.86 ** |
| Healthiness | 1.22 | 1.33 | 0.30 | 0.70 |
| Price consciousness | 1.92 * | 2.49 ** | 1.81 * | 2.22 ** |
| Constant | −0.13 | −0.64 | 0.41 | −0.11 |
| R² | 0.207 | 0.325 | 0.174 | 0.221 |
| Chi² | 67.62 | 63.89 | 24.75 | 70.32 |
| p | 0.000 | 0.000 | 0.000 | 0.000 |

Share and Cite

Nagy, L.B.; Temesi, Á. Color Matters: A Study Exploring the Influence of Packaging Colors on University Students’ Perceptions and Willingness to Pay for Organic Pasta. Foods 2024 , 13 , 3112. https://doi.org/10.3390/foods13193112


  • Open access
  • Published: 27 September 2024

A study of medical students’ experiences at Shiraz University of Medical Sciences from the implementation of integration in medical education: a qualitative study

  • Fariba Khanipoor   ORCID: orcid.org/0000-0001-8424-494X 1 ,
  • Leila Bazrafkan   ORCID: orcid.org/0000-0002-9741-3981 2 ,
  • Sadegh Aramesh   ORCID: orcid.org/0000-0001-9653-1962 3 ,
  • Mehrnaz Shojaei   ORCID: orcid.org/0000-0002-0463-0566 4 &
  • Afsaneh Ghasemi   ORCID: orcid.org/0000-0001-6643-5056 5  

BMC Medical Education volume  24 , Article number:  1042 ( 2024 ) Cite this article


The basic science course is the foundation of medical knowledge, and how this course is taught is a vital issue. A successful curriculum should include everything medical students need in their future careers. Basic science education should enable students to clearly understand the relationship between the content and its application in clinical practice. The need to change the curriculum of the general medicine course, especially the basic science course, towards integrated content and layout is therefore felt more than before. This study was designed to explain the experiences of medical students at Shiraz University of Medical Sciences since the implementation of integration in 2020.

The present study was qualitative research using a conventional content analysis method. Participants were selected purposefully for interview. They included 12 medical students from basic and clinical sciences and 5 faculty members. Data were collected through semi-structured interviews and analyzed using content analysis. The four criteria of Guba and Lincoln were used to evaluate the trustworthiness of the data.

After summarizing and analyzing the data, 221 codes were extracted. They were divided into seven subcategories and, finally, three main themes: enjoyable experiences (advantages of the integration system), upsetting experiences (disadvantages of the integration system), and resolutions for solving integration problems. Overall, the findings indicated a positive evaluation of the integration system by medical students of Shiraz University of Medical Sciences, who stated that integration creates interaction between basic and clinical sciences and also increases students’ motivation.

The findings indicated a positive evaluation of the integrated system by Shiraz’s medical students. According to the results of this research, the use of horizontal and vertical integration in medical education improves the quality of education compared with traditional methods. The integration of basic and clinical science is important because it can be a powerful tool for learning and acquiring skills; it can also promote students’ professional development, motivate them to study more interactively, and encourage lifelong learning. The study generally showed that combining theoretical and practical courses has both advantages and disadvantages, but the advantages predominate. Paying attention to the shortcomings, especially in the supply of human resources and professors, and reforming the program through continuous revision are issues that education managers and medical educators of the general medical course should address.


The modern history of medicine begins with the Flexner intervention of 1910, which produced the dichotomous (preclinical/clinical) traditional medical curriculum [ 1 , 2 ]. The dichotomous strategy led to the separation of basic sciences from clinical sciences, and it continues in some medical schools to this day [ 3 ]. This gave birth to the innovative integrated curriculum [ 4 ]. The idea of “the integration ladder,” presented by Harden, is a useful tool for medical teachers to improve education in medical sciences [ 5 , 6 , 7 ]. Integration means combining content areas or subjects that, in traditional educational systems, are included in the curriculum separately and in isolation from each other [ 3 ]. The ladder has 11 steps, from a subject-based curriculum to an integrated teaching and learning curriculum: Isolation, Awareness, Harmonization, Nesting, Temporal coordination, Sharing, Correlation, Complementary, Multi-disciplinary, Inter-disciplinary, and Trans-disciplinary; in the first four steps, the emphasis remains on the subjects or disciplines [ 4 ]. The integration ladder can be used as an aid in planning, implementing, and evaluating the medical curriculum [ 8 ].

Some advantages of integration are: reducing the fragmentation of courses and creating unity and communication between disciplines; increasing student motivation; more effective teaching; raising the level of educational goals (from knowledge retention to application and problem-solving skills); communication and cooperation among professors; rationalization of educational resources; and increased self-confidence, positive attitude, and ability in learning. Based on the studies conducted, the integration program also has disadvantages, such as not fully covering the content and basic principles of a field, unintentionally omitting some topics, professors’ lesser proficiency in integrated education compared with traditional teaching, and higher cost [ 9 , 10 , 11 ].

Recent evidence in Iran and other parts of the world shows that we can improve the medical school curriculum by integrating basic and clinical sciences [ 10 , 12 , 13 , 14 , 15 ]. Ebrahimzadeh et al. (2021) evaluated the effectiveness of Integrated Teaching on Students’ Learning by a quasi-experimental study in the infectious disease ward. The integrated teaching approach was adopted in the intervention group by four professors of epidemiology, microbiology, infectious diseases and pharmacology. The results reveal that the integration of basic and clinical subjects helps medical students to better comprehend the pathophysiology of diseases and increases their satisfaction [ 13 ].

Rosenberg and Hartley (2024), in a mixed-method study titled “Continuity of Changed Attitudes Among Students in an Integrated Anatomy Curriculum” in the United States, asked students about attitude changes in the anatomy course. The results showed the persistence of specific attitudinal differences between groups with blocked anatomy versus integrated anatomy in learning anatomy and confidence in this learning [ 14 ]. In the book A Practical Guide for Medical Teachers, Harden explained the importance of adding social sciences to the curriculum of medical schools and emphasized that social and behavioral sciences in medical school curricula are core subjects in medical education, and they should be integrated into different courses [ 15 ].

In the traditional medical education curriculum at Shiraz University of Medical Sciences (SUMS), subjects were taught as separate and independent packages, with an emphasis on basic sciences in the first years and clinical experiences in the later years. This separation hindered understanding the contents and creating connections between basic and clinical sciences. One of the solutions offered to address the lack of connection between the different parts of theoretical courses and the real field of medicine is the horizontal and vertical integration of basic and clinical courses, which has received special attention from the Ministry of Health in recent years [ 16 ].

The integrated program at Shiraz University of Medical Sciences includes the horizontal integration of basic science courses and 36 months of rotation in clinical departments, which started in 2009 [ 15 ]. Since improving quality is one of the basic goals in higher education worldwide, the evaluation of higher education cannot rely solely on quantitative indicators but must also be comprehensive, containing both quantitative and qualitative criteria and internal and external evaluation. Although the positive and negative impacts of this program have been expressed through the quantitative study of Ruhal Amini and colleagues using the CIPP model, we are not aware of the deeper opinions of the students who have experienced this program [ 17 ].

In this study, we aimed to identify the themes that explain medical students’ experiences at Shiraz University of Medical Sciences regarding the implementation of horizontal and vertical integration of basic medical sciences in sharing levels in medical education. Additionally, we aimed to develop a conceptual framework to explain the challenges and problems through the perspectives of both students and professors.

The current research is qualitative, of the conventional content analysis type, and was conducted on Shiraz medical students with integration experience in 2020. Through semi-structured interviews, it examines the experiences, meanings, understandings, and interpretations that the participants have about the implementation of integration in medical education, and explores new perspectives and concepts of its implementation [ 18 , 19 ].

This study was conducted at Shiraz University of Medical Sciences (SUMS). The university currently includes more than 10,000 students, 200 majors, 782 faculty members, 54 research centers, and 13 educational hospitals, with a history of 70 years [ 20 ].

Participants and sampling

The sample included 12 general medical students and 5 medical faculty professors in basic and clinical sciences. Participants were selected through purposive sampling of students with maximum variation in academic semester and level, and sampling continued until data saturation, when no new code emerged during interviews and previous categories and codes were repeated.

Accordingly, we selected 12 medical students, from year 1 to senior interns, and 5 faculty members (FM) across a wide range of academic ranks from Shiraz University of Medical Sciences in 2020 and 2024. The inclusion criteria for students were completion of the informed consent form, being a medical student, interest in participating in the research, and allocating time for interviewing and reviewing the material; the exclusion criterion was unwillingness to continue participation at any stage of the study. The inclusion criterion for faculty members in the medical education field was a minimum of ten years’ teaching experience; the exclusion criterion was unwillingness to participate in the study.

Initially, we collected data from a medical student well known for his effort in learning; we then continued data gathering from the other students, faculty members, and medical education experts until data saturation was reached, when no new code emerged during interviews and previous categories and codes were repeated [ 18 , 21 ].

Tools/instruments

After obtaining permission from the university and coordinating with the participants, the researcher collected data at the medical school of Shiraz University of Medical Sciences; the research environment included the library, teaching halls, sports field, and coffee shop. The environment of qualitative research is the place where the experiences of the people in question take place. In this study, the main method of data collection was semi-structured individual interviews. The interview queries focused on participants’ experiences of the implementation of integration in medical education. The interviews started with questions such as: What is your description of integration in medical education? Tell me about your experiences of integration in basic or clinical sciences and in different situations, including the classroom, laboratory, and clinical setting. According to the contributors’ answers, we asked probing questions, and based on the results we added faculty members to the research. Each interview took 20 to 60 min, with an average of 35 min. Data were collected and analyzed using Microsoft OneNote 2010 (Microsoft, Redmond, WA, USA). We listened to each recorded interview to get an overall understanding. To keep the data confidential, no names were used in the interviews; conversations were recorded, each person was assigned a specific code, and all information was kept confidential.

Data analysis

The applied qualitative content analysis approach was conventional. The purpose of content analysis in this research is to provide knowledge and understanding of the concept of integration: new insight, a picture of reality, and guidance for the practice of integrating courses in the Shiraz Medical School.

Conventional content analysis is usually used in the design of studies whose purpose is to describe a phenomenon; therefore, through induction, categories emerge from the data [ 18 ]. Data analysis was performed simultaneously with data collection. In this process, the data were divided into the smallest units of meaning. New data were compared for similarities and differences and classified through repeated reviews and merging of similar data. The data were evaluated using conventional content analysis with the approach of Graneheim and Lundman (2004) [ 22 ], in five steps: (1) transcribing the entire interview immediately after completion, (2) reading the entire text to reach an overall understanding of the content, (3) determining meaning units and primary codes, (4) categorizing similar initial codes into more comprehensive categories, and (5) determining the main theme of the categories.

In other words, the information collected from the interviews was analyzed through association of meaning. Data analysis started by reading the transcripts repeatedly to gain a complete understanding of them. Then, based on his perception and understanding of the studied text, the researcher wrote a preliminary analysis to create a background for the emergence of codes, allowing codes to emerge from the text. The meaning unit was determined step by step, and its condensation continued until the code was determined. The codes were then categorized based on their similarities and differences, by organizing and grouping them into meaningful clusters [ 19 ]. Considering the quality of the relationships between subcategories, the researcher combined and organized subcategories into fewer categories. The general concept resulting from summarizing these categories (the theme) was then obtained. To maintain reliability, the content was reviewed in two stages, one after 10–50% of the categorizations were completed and the other at the end of the work.
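As an illustration only (the study's actual coding was done manually), the condensation logic described above can be sketched as a toy data structure; the meaning units, codes, and category names below are invented for the example:

```python
# Toy illustration of conventional content analysis condensation:
# meaning unit -> code -> category -> theme. All example data are invented.

from collections import defaultdict

# Step 3: meaning units extracted from transcripts, each assigned a primary code.
coded_units = [
    ("I finally saw why anatomy matters on the ward", "basic-clinical link"),
    ("Lectures felt connected to real patients", "basic-clinical link"),
    ("There were too few professors for the blocks", "staff shortage"),
]

# Step 4: similar codes are grouped into more comprehensive categories.
code_to_category = {
    "basic-clinical link": "advantages of integration",
    "staff shortage": "disadvantages of integration",
}

categories = defaultdict(list)
for unit, code in coded_units:
    categories[code_to_category[code]].append(code)

# Step 5: categories are summarized into main themes (here, trivially the
# sorted category names).
themes = sorted(categories)
print(themes)  # prints ['advantages of integration', 'disadvantages of integration']
```

The real process is interpretive and iterative rather than mechanical; the sketch only shows the direction of condensation from many units to few themes.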

Rigor and trustworthiness

The four criteria of Guba and Lincoln were used to confirm the correctness of the data: credibility, transferability, dependability, and confirmability, which were addressed in this study as follows [ 18 , 23 ]. For credibility, various methods were used, including prolonged engagement, continuous observation, triangulation, consulting colleagues, and member checks. The researchers sought to create an intimate and trusting atmosphere between researcher and participant to strengthen and enrich the data. Through the supervisors' revisions, the supplementary comments of colleagues (peer check), and the review of transcripts by participants (member check), suggested corrections were incorporated and steps were taken to increase the validity of the research.

Transferability refers to the extent to which results obtained from the interview sample can be generalized to the entire population [ 18 , 24 ]. In this study, we described all details of the research, from sampling to data collection and analysis, so that there would be no ambiguity about transferability. For dependability, an external observer experienced in qualitative research examined and confirmed the basis of the analysis process. The final criterion, confirmability, means that readers can be assured that the conclusions and interpretations derive directly from the data sources [ 23 ]. The external observer had access to the interview recordings, transcripts, notes, analyzed data, study findings, extracted meanings, codes, themes and classifications, details of the study process, the initial intention of the study, the initial proposal, the interview questions, and all other study details, which supported the confirmability as well as the dependability of the research. The strategies used for rigor and trustworthiness are summarized in Table  1 .

The demographic characteristics of the study participants are shown in Table  2 .

After analyzing and summarizing the information, about 221 propositions were extracted. These propositions were grouped into seven categories, from which the following three main and central themes emerged (Table  3 ).

According to the participants, enjoyable experiences made up the greater part of the students' and faculty members' experiences of the implementation of integration in medical education. In this process, the educator and the student act rationally and intentionally and are aware of the factors that make curriculum integration enjoyable or upsetting for them.

Theme 1: enjoyable experiences

Based on the experiences of the student and faculty member participants, they had enjoyable experiences during the implementation of the integration plan, which fell into two main parts. Students said that the large volume of courses had been reduced because material was no longer repeated across different courses, and that the pressure and strictness of the basic sciences curriculum had decreased. They experienced the joy of group work in classrooms and a better understanding of course material with two professors present. Creativity and mental preparation for future medical tasks, through examining clinical cases in vertical integration, was another benefit of implementing the integration plan.

The professors also described pleasant experiences: dividing responsibility for different subjects when compiling textbooks, becoming aware of the experiences of colleagues in other educational groups, and holding interactive classrooms instead of traditional, dry ones. In their view, the integration method saves time and avoids repetition of material across courses. One of the students said in this regard: “… I enjoyed it when the professor of surgery explained problems related to the abdomen together with the professor of anatomy and asked us various questions (for example, they said, who has experienced pain related to the appendix? How was it? Come and show me).” (Student no. 4)

Students’ experiences: Students believed that teaching basic sciences together with clinical education is one of the best learning methods. Their earlier attitude toward basic science courses was that they were ineffective and unnecessary for their future careers, which had created a negative attitude in them; however, they acknowledged that integrating theoretical and clinical courses can help meet the needs of the general practitioner and allow students to gain more skill in medical knowledge. Therefore, regardless of how the program was implemented, most students believed that combining course content could create continuity in the mind, increase self-confidence, and foster a better attitude toward the integration program. In general, students’ attitude toward integration was positive. One of the professors said in this regard: “I have seen that when we explain different materials from a clinically important aspect, or when I explain the reason for conducting biochemistry experiments in physiology, they (the students) see how anemia causes hypoxia. Students listen with interest and participate in discussions…” (FM no. 7).

Interaction between basic and clinical sciences: Students felt that basic science courses were forgotten in the subsequent years of study. Teaching basic science alongside clinical science, with bedside exposure in teaching hospitals, can therefore support deeper learning of the basic sciences. They stated that integration creates an interaction between basic and clinical sciences and produces subject harmony in the student’s mind. Students considered this interaction to facilitate learning, deepen their understanding of the material, and improve their knowledge.

Increasing students’ motivation: Students believed that the implementation of an integration program motivates them to participate more in academic discussions, and that if the program’s objectives are explained to them at the beginning, the effect on motivation is even greater. Ultimately, proper implementation of the integration program can prepare students to provide appropriate treatment and comprehensive, community-based care. One of the students remarked on interactive education:

“… This system is much better than trying to examine the tissue completely or read the anatomy all at once and…” (Student no. 2).

The advantages of integration: From the students’ point of view, these advantages are: “Presenting all the materials related to a subject does not confuse the person’s mind and gives a better understanding of the subject. The physiopathology course also enables students to understand almost the entire pathology, pharmacology, and epidemiology of the subjects, better and with more time, and there is no evasion of material. The integration system creates more interest and desire to study in students. Students become more oriented, especially when they are interns. In this case, the student can more easily read for the department.” (Student no. 9)

“… It covers the material simply, gives a better clinical view, and in practice it works more strongly. Presenting clinical cases together with theoretical topics makes for a deeper understanding of the material…”

An example of one of the students’ statements in this regard: “… This feeling of responsibility that slowly rises during the externship is very good! Because it happens after the studentship and before the internship, and not suddenly all at once during the internship…” (Student no. 3).

Or, student number 1 said: “Presenting all the materials related to the subject of the course gives a person a better understanding of that subject, and in the physiopathology course it enables people to read and understand almost all the topics of pathology, pharmacology, and epidemiology better and with more time, and there is no evasion of material…” (Student no. 1).

Theme 2: upsetting experiences

In the current study, based on the experiences of students and professors, the participants were dissatisfied with some problems and considered them disturbances in the implementation of the integration plan. These are classified into two main parts: the lack of preparation of the structure, including the shortage of human resources, especially professors, and problems related to the implementation of laws and guidelines.

Irregular attendance of clinical professors, failure to revise the comprehensive textbooks of some courses, mismatch between exam questions and course volume in some exams, inadequacy of course grades, and failure to observe the proper sequencing of courses were among these challenges.

Other challenges included the long time spent by 6th-year medical students in the clinical departments of hospitals, which consumes the students’ study time; placing too much value on some basic science courses that are not very practical; clinical professors’ use of a large number of PowerPoint slides and of terms unfamiliar to students; and limited access to books suitable for integration. An example from the students:

“… One of the disadvantages of this system in the basic sciences is that, for example, if the subjects of the nervous system (anatomy, histology, embryology, and physiology) are presented together, during the exam the grades are all taken into account, so most of the average students can use this escape route: they escape from the units with more difficult anatomy, pass the unit with the physiology score, and do not read the contents completely…” (Student no. 5).

Theme 3: resolutions to solving integration problems

Provision of a suitable physical space for implementing the plan; attention to the density of theoretical and practical courses at the beginning of the academic semester; solutions concerning additional facilities for the plan; increasing the number of faculty members in the groups involved; holding an annual conference to share the experiences of professors involved in the integration plan across the country; and revising the plan based on the experiences of expert professors can all yield good results. Based on the experiences of some professors and their confusion with the plan, the insufficient number of professors, which reduces their efficiency given the density of courses at the beginning of the academic semester, should be compensated for by employing new professors and completing the educational curriculum.

Presentation of common cases alongside theoretical content: From the students’ point of view, the simultaneous presentation of theoretical material and clinical cases can greatly help increase intrinsic motivation and improve students’ perspective on basic science courses. The use of practical educational activities, laboratory activities, and new teaching and learning methods, such as drawing pictures and watching videos, is effective in increasing students’ retention of learned material and should receive attention from teachers of the basic sciences.

Also, increasing practical and laboratory work in terms of the quantity and quality of teaching cases, increasing the number of training sessions, presenting more clinical material and reducing memorization material, professors’ use of new and varied teaching tools (for example, more educational videos, mock-ups, and teaching with photos), student-centered education (questions and answers in class and student participation in discussions), simultaneous presentation of course material by clinical and basic science professors with greater mastery of the material, and offering courses online can all make the integration system more effective.

Student participation in designing and revising the curriculum: Based on the experiences of professors and students, drawing on students’ opinions and experiences can help upgrade and revise the curriculum. Better interaction and participation of students in designing and revising the curriculum solves integration problems more effectively. Students stated that if professors prepared specific reference material for a given clinical stage, or a book that students could consult directly, the integrated system would be more effective for learning. One of the students said: “It would be much more useful if the professors would come to a consensus and prepare a book or a pamphlet, or specify some reference material so that only those parts need to be read.” Or: “References are heavy for a student who has just entered the clinical stage and just got rid of basic sciences, and practically no one can, or has time to, read references for exams!” (Student no. 4).

The present study was conducted to explain the experiences of medical students with integration in the general medical course in Shiraz. According to the participants, enjoyable experiences made up the greater part of the students’ and faculty members’ experiences of the implementation of integration in medical education. This result is in line with Shojaei et al. (2022), whose blacksmith approach, a strategy for teaching and learning in the medical anatomy course, found that students were satisfied with the presentation of anatomy lessons in the integrated system [ 25 ]. Likewise, the study by Dehghan et al. (2017) on an early clinical exposure program in learning renal physiology revealed that students were satisfied with this vertical integration [ 26 ]. These two studies show student satisfaction in a single course, whereas the results of the present study concern the whole integration system in the education of medical students.

One of the topics of this research was interactive education. Research at Isfahan University of Medical Sciences showed that the integration of practical and basic topics in the bacteriology course has led to a change in students’ attitude towards this course as a useful and practical unit so that they feel the need to pass this course more [ 27 ].

Other studies related to anatomy and physiology courses showed an improvement in students’ attitudes toward these courses after implementation of the integration plan [ 28 , 29 ], and these results are consistent with the findings of this study. The present study showed that the interaction between basic and clinical sciences can support deeper learning of the basic sciences. Boon’s study showed that the lack of connection between basic and clinical sciences can discourage students from studying basic science courses [ 30 ]. Baghdady also showed that teaching basic sciences along with clinical sciences to dental students is more effective than teaching basic sciences alone [ 31 ]; these results are in line with the findings of the present study.

According to the results of this study, the implementation of integration encourages students to participate more in course topics and increases their motivation. Khazaei concluded that being in the hospital and dealing with patients while theory is being taught can greatly increase students’ motivation [ 29 ]. Dehghan’s study of early clinical exposure showed that most medical students believed this exposure increased their motivation to study basic science courses [ 26 ]. The studies by Knowlton and Rooholamini on the integration of basic sciences with clinical courses showed, respectively, that students found more enthusiasm for studying and that the integration plan encouraged them to participate more actively in class [ 32 , 33 ]. Integration was thus found to be a factor in increasing students’ motivation.

Another topic of this study was the advantages and disadvantages of integration in the field of medicine. From the students’ point of view, integration has many benefits, such as creating an attractive and effective educational environment for professors and students, and it leads to student satisfaction and personal development. It also offers advantages such as mental coherence about the material, greater interest and desire to study among students, and better orientation of students entering the departments, with a deeper understanding of the material. Among the disadvantages of the integration plan, from the students’ point of view, were giving high value to some basic science courses that are not very practical, and the lack of access to books suitable for the integration plan.

Jalilian’s study of the physiopathology course integration program likewise concluded that, given benefits such as increased student motivation in learning, raising the educational level from memorization to application, greater communication and cooperation between students and professors, and rationalization of educational resources, it is appropriate to extend integration to other courses where it has not yet started [ 34 ]. Rehman also showed that complete integration of basic science courses for medical students promotes educational goals and facilitates learning [ 35 ]. Kasarla’s study revealed that integrating the curriculum of medical and dental students leads to a range of positive outcomes, including enhanced interdisciplinary collaboration, improved clinical reasoning skills, and increased student satisfaction [ 36 ]. Sethi’s findings likewise reveal that curriculum integration offers numerous benefits, including enhanced student engagement, improved critical thinking skills, and a more holistic understanding of interconnected knowledge domains [ 37 ]. The results of these studies are consistent with the findings on the advantages and disadvantages of integration in this research.

The results of the present study showed that students positively evaluated the effect of presenting clinical cases in class, and of stating references, on their level of preparation before entering the hospital. They believed that presenting theoretical content and clinical cases simultaneously can greatly increase intrinsic motivation and improve students’ perspective on basic science courses. In Kumaravel’s research, integration of the medical curriculum, its review, and its use before clinical exposure increased students’ knowledge and skills [ 38 ]. The findings of Husain et al. underscore the importance of integrating various disciplines, such as basic sciences, clinical knowledge, and professional skills, to create a comprehensive and cohesive medical curriculum. Their research demonstrates that integrated curricula facilitate a deeper understanding of the interconnectedness of medical disciplines, allowing students to develop a holistic approach to patient care; by integrating basic sciences with clinical knowledge, students gain a more comprehensive understanding of disease processes and are better equipped to make informed clinical decisions [ 39 ]. In another study, Schmidt showed that students trained in an integrated program, with clinical cases presented simultaneously with theory, achieve better diagnoses and more favorable learning outcomes than students trained in a traditional program [ 40 ].

Cowan’s study also showed that the integration plan is very useful but that there are not enough suitable books for integrating courses [ 41 ], which is in line with the results of the present study. In medical education, the most important aspect of learning is the relationship between theoretical and practical knowledge, and integration is an important educational strategy in this field [ 7 ]. Wijnen-Meijer et al. showed that by integrating basic sciences with clinical knowledge, students develop a deep understanding of the underlying principles of medicine and are better equipped to apply this knowledge in diagnosing and managing patient conditions [ 42 ]. In many courses, the integration of basic and clinical sciences has helped students better understand the material; if basic science is taught so that students clearly understand the relationship between the material and its application in clinical practice, the material will remain in their memory better [ 16 ].

Reform of medical education, especially in the area of the curriculum, is a global concern, and many medical schools have undertaken investigations and experimented with interventions such as horizontal and vertical integration [ 43 , 44 ]. Similarly, an active and integrated curriculum for teaching and learning has been emphasized in all aspects of medical students’ education.

The strengths and limitations of the study

One of the strengths of this study is that it considers integration in general (horizontal and vertical) and across all courses, rather than focusing on a single course as some studies do. Medical students from all levels (basic sciences, physiopathology, externship, and internship) were present in this study, and this diverse spectrum added to its richness. Another strength, from the participants’ perspective, was that the applicability of students’ learning experiences and faculty members’ teaching experiences to the context and workplace conditions was one of the factors that changed the view of anatomy as a merely introductory course in medicine. As this study was conducted in a single medical school, further studies gathering insights from other participants are suggested to increase the generalizability of the findings. Additional work should also be done to provide more practical guidance in the educational system regarding horizontal and vertical integration.

Lessons learned and implications for policy-makers and future researchers

Managers, practitioners, and professors can use the elements of the integration strategy to increase the effectiveness and productivity of medical education by improving students’ attitudes, knowledge, and skills. Since integration at Shiraz University of Medical Sciences has progressed to stage 6 (sharing) of Harden’s ladder, it is recommended that proper planning and a platform be provided for implementing the remaining stages, and that researchers then study those steps. It is also suggested that the implementation of integration be planned for other educational fields, as well as for common interdisciplinary topics.

In this study, three central themes were extracted: enjoyable experiences (advantages of the integration system), upsetting experiences (disadvantages of the integration system), and resolutions for solving integration problems. In general, the findings indicated a positive evaluation of the integrated system by Shiraz’s medical students. According to the results of this research, the horizontal and vertical use of integration in medical education improves the quality of education compared with traditional methods. The integration of basic and clinical science is important in that it can be a powerful tool for learning and acquiring skills; it can also promote students’ professional development, motivate them to study more interactively, and encourage lifelong learning. The study generally showed that combining theoretical and practical courses has both advantages and disadvantages, but the advantages predominate. Attending to the shortcomings, especially in the supply of human resources and professors, and reforming the program through continuous revision are issues that education managers and medical educators of the general medical course should address.

Based on constructivism theory, integration is an important part of the SPICES strategy for improving learning and teaching in medical education. The best learning happens when there is a connection between new information and previous knowledge; students are encouraged to integrate newly learned knowledge with prior knowledge and to apply theoretical knowledge in clinical situations, even to solve problems for which they have not been trained.

It is also suggested that workshops be held to familiarize students with the integration process, and that the facilities of the dissection and moulage hall be reviewed for integration. In line with the educational goals of the integration plan, relevant books and lesson plans should be prepared and made available to students, and the sequencing of courses should be regulated, with presentation ordered by priority: histology, embryology, anatomy, and physiology.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

FM: Faculty Members

Achike FI. The challenges of integration in an innovative modern medical curriculum. Med Sci Educ. 2016;26:153–8.


Page D, Baranchuk A. The Flexner report: 100 years later. Int J Med Educ. 2010;1:74–5.

Dominguez I, Zumwalt AC. Integrating the basic sciences in medical curricula: focus on the basic scientists. Adv Physiol Educ. 2020;44(2):119–23.

Brauer DG, Ferguson KJ. The integrated curriculum in medical education: AMEE Guide 96. Med Teach. 2015;37(4):312–22.

Atwa HS, Gouda EM. Curriculum integration in medical education: a theoretical review. Intel Prop Rights. 2014;2(2):113.

Matinho D, Pietrandrea M, Echeverria C, Helderman R, Masters M, Regan D, Shu S, Moreno R, McHugh D. A systematic review of integrated learning definitions, frameworks, and practices in recent health professions education literature. Educ Sci. 2022;12(3):165.

Harden RM. The integration ladder: a tool for curriculum planning and evaluation. Med Educ. 2000;34(7):551–7. https://doi.org/10.1046/j.1365-2923.2000.00697.x .

Irby DM, Cooke M, O’Brien BC. Calls for reform of medical education by the Carnegie Foundation for the advancement of teaching: 1910 and 2010. Acad Med. 2010;85(2):220–7.

Bazrafcan L, Kojuri J, Amini M. Using SPICES educational strategy for undergraduate curricular reform at Shiraz Medical School. Med Teach. 2019;41(9):1091.

Chengai F, Janani F, Farzan B, Shirkhani S. Clarification of the opinion of professors of Khorramabad Medical School regarding the plan of horizontal integration of basic medical science courses. J Qualitative Res Health Sci. 2017;7(2):299–308. https://www.sid.ir/paper/215503/fa . [Persian]


Quintero GA, Vergel J, Arredondo M, Ariza MC, Gómez P, Pinzon-Barrios AM. Integrated medical curriculum: advantages and disadvantages. J Med Educ Curric Dev. 2016;3:JMECD–S18920.

Hashemy SI, Mastour H. Towards Basic sciences Curriculum Reform in General Medicine at Mashhad University of Medical Sciences. Future Med Educ J. 2020;10(4):25–31. https://doi.org/10.22038/fmej.2020.50008.1341 .

Ebrahimzadeh A, Abedini MR, Ramazanzade K, Bijari B, Aramjoo H, Zare Bidaki M. Effect of Integrated Teaching on Students’ Learning. Strides Dev Med Educ. 2021;18(1):1–6.

Rosenberg MJ, Hartley RS. Persistence of changed attitudes among students in an integrated anatomy curriculum. Anat Sci Educ. 2024 Apr 5.

Harden J. Social and behavioural sciences in medical school curricula. In: A Practical Guide for Medical Teachers, E-Book. 2021:189.

Amini M, Kojuri J, Mahbudi A, Lotfi F, Seghatoleslam A, Karimian Z, Shams M. Implementation and evolution of the horizontal integration at Shiraz medical school. J Adv Med Educ Professionalism. 2013;1(1):21–7. https://jamp.sums.ac.ir/article_40869.html .

Rooholamini A, Amini M, Bazrafkan L, Dehghani MR, Esmaeilzadeh Z, Nabeiei P, Rezaee R, Kojuri J. Program evaluation of an integrated basic science medical curriculum in Shiraz Medical School, using CIPP evaluation model. J Adv Med Educ Professionalism. 2017;5(3):148.

Guba EG, Lincoln YS. Epistemological and methodological bases of naturalistic inquiry. ECTJ. 1982;30:233–52. https://doi.org/10.1007/BF02765185 .

Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Kavousipour S, Noorafshan A, Pourahmad S, Dehghani-Nazhvani AL. Achievement motivation level in students of Shiraz University of Medical Sciences and its influential factors. J Adv Med Educ Professionalism. 2015;3(1):26.

Whitehead D, Whitehead L. Data collection and sampling in qualitative research. In: Nursing and Midwifery Research: Methods and Appraisal for Evidence-Based Practice. 6th ed. Sydney: Elsevier; 2020. p. 118–35.

Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105–12.

Johnson JL, Adkins D, Chauvin S. A review of the quality indicators of rigor in qualitative research. Am J Pharm Educ. 2020;84(1):7120.

Tabatabai A, Hassani P, Mortazavi H, Tabatabaichehr M. Strategies to enhance rigor in qualitative research. J North Khorasan Univ Med Sci. 2013;5(3):663–70. http://journal.nkums.ac.ir/article-1-403-fa.html . [Persian]

Shojaei A, Feili A, Kojuri J, Norafshan A, Bazrafkan L. The blacksmith approach: a strategy for teaching and learning in the medical anatomy course (a qualitative study). BMC Med Educ. 2022;22(1):728.

Dehghan A, Amini M, Sagheb MM, Shidmoosavi SM, Nabeiei P. Early clinical exposure program in learning renal physiology. J Adv Med Educ Prof. 2017;5(4):172–6. PMCID: PMC5611426.

Fazeli H, Hosseini NS, Narimani T. Teaching practical medical bacteriology accommodate with job analysis. Iran J Med Educ. 2011;10(5):1102–09. http://journals.mui.ac.ir . [Persian]

Dehghan M, Anvari M, Hosseini Sharifabad M, Talebi A, Nahangi H, Abbasi A, et al. The viewpoints of Medical students in Yazd University of Medical Sciences toward Horizontal Integration Teaching Method in Anatomical sciences Courses. Strides Dev Med Educ. 2011;8(1):81–7. https://sdme.kmu.ac.ir/article_90207.html . [Persian].

Khazaei M. Effects of integrating physiology lessons to clinical and para-clinical findings on medical students’ attitude and motivation toward physiology lesson. Iran J Med Educ. 2011;10(5):609–13. https://ijme.mui.ac.ir/article-1-1481-en.html . [Persian].

Boon JM, Meiring JH, Richards PA, Jacobs CJ. Evaluation of clinical relevance of problem-oriented teaching in undergraduate anatomy at the University of Pretoria. Surg Radiol Anat. 2001;23(1):57–60. https://doi.org/10.1007/s00276-001-0057-3 .

Baghdady MT, Carnahan H, Lam EW, Woods NN. Integration of basic sciences and clinical sciences in oral radiology education for dental students. J Dent Educ. 2013;77(6):757–63. PMID: 23740912.

Knowlton AA, Rainwater JA, Chiamvimonvat N, Bonham AC, Robbins JA, Henderson S, et al. Training the translational research teams of the future: UC Davis-HH Integrating Medicine into Basic Science program. Clin Transl Sci. 2013;6(5):339. https://doi.org/10.1111/cts.12068 .

Rooholamini A, Amini M, Bazrafkan L, Dehghani MR, Esmaeilzadeh Z, Nabeiei P, et al. Program evaluation of an Integrated Basic Science Medical Curriculum in Shiraz Medical School, using CIPP evaluation model. J Adv Med Educ Prof. 2017;5(3):148–54. PMCID: PMC5522906.

Jalilian N, Jalilian N, Rezaei M, Deh Haghi A. Evaluating the satisfaction of Kermanshah University of Medical Sciences students of Physiopathology course integration. Horizon Med Educ Dev. 2011;4(3):33–7. magiran.com/p903124 .

Rehman R, Iqbal A, Syed S, Kamran A. Evaluation of integrated learning program of undergraduate medical students. Pak J Physiol. 2011;7(2):37–41. http://www.pps.org.pk/PJP/7-2/Rehana.pdf .

Kasarla RR, Pathak L. Curriculum integration for medical and dental students. J Univers Coll Med Sci. 2021;9(1):p82. https://doi.org/10.3126/jucms.v9i01.37989 .

Sethi A, Ahmed Khan R. Curriculum integration: from ladder to Ludo. Med Teach. 2019. https://doi.org/10.1080/0142159X.2019.1707176 .

Kumaravel B, Jenkins H, Chepkin S, Kirisnathas S, Hearn J, Stocker CJ, et al. A prospective study evaluating the integration of a multifaceted evidence-based medicine curriculum into early years in an undergraduate medical school. BMC Med Educ. 2020;20(1):278. https://doi.org/10.1186/s12909-020-02140-2 .

Husain M, Khan S, Badyal D. Integration in Medical Education. Indian Pediatr. 2020;57(9):842–7. PMID: 32999111.

Peile E, Integrated learning. BMJ. 2006;332(7536):278. https://doi.org/10.1136/bmj.332.7536.278 .

Cowan M, Arain NN, Assale TS, Assi AH, Albar RA, Ganguly PK. Student-centered integrated anatomy resource sessions at Alfaisal University. Anat Sci Educ. 2010;3(5):272–5. https://doi.org/10.1002/ase.176 .

Wijnen-Meijer M, van den Broek S, Koens F, et al. Vertical integration in medical education: the broader perspective. BMC Med Educ. 2020;20:509. https://doi.org/10.1186/s12909-020-02433-6 .

Badrawi N, Hosny S, Ragab L, Ghaly M, Eldeek B, Tawdi AF, Makhlouf AM, Said ZN, Mohsen L, Waly AH, El-Wazir Y. Radical reform of the undergraduate medical education program in a developing country: the Egyptian experience. BMC Med Educ. 2023;23(1):143.

Kapitonova MY, Gupalo SP, Dydykin SS, Vasil’Ev Yu L, Mandrikov VB, Klauchek SV, Fedorova OV. Is it time for transition from the subject-based to the integrated preclinical medical curriculum? Russian Open Med J. 2020;9(2):213.

Download references

Acknowledgements

The authors appreciate the cooperation of the participants and all those who assisted in conducting this research.

This research was financially supported by Shiraz University of Medical Sciences, Shiraz, Iran (grant number 21560).

Author information

Authors and Affiliations

Department of E-Learning in Medical Sciences, Virtual School & Center of Excellence in E-Learning, Shiraz University of Medical Sciences, Shiraz, Iran

Fariba Khanipoor

Department of Medical Education, Shiraz University of Medical Sciences, Shiraz, Iran

Leila Bazrafkan

Doctor of Medicine (MD), Department of Medical Education, Shiraz University of Medical Sciences, Shiraz, Iran

Sadegh Aramesh

Shahid Sadoughi University of Medical Sciences, Yazd, Iran

Mehrnaz Shojaei

Department of Public Health, School of Health, Fasa University of Medical Sciences, Fasa, Iran

Afsaneh Ghasemi


Contributions

F.Kh. designed the study, analyzed the data, and wrote the main manuscript text. S.A. conceptualized and designed the study, and collected and analyzed the data. A.Gh. collected and analyzed the data. M.Sh. collected and analyzed the data. L.B. conceptualized and designed the study and supervised the manuscript. All authors contributed to the preparation of the manuscript, met the authorship criteria, and approved the final manuscript.

Corresponding author

Correspondence to Afsaneh Ghasemi .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Ethics Committee of Shiraz University of Medical Sciences (IR.SUMS.REC.1397.492). Informed consent to participate in the research was obtained from the participants, and they were assured that all their information would remain confidential.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Khanipoor, F., Bazrafkan, L., Aramesh, S. et al. A study of medical students’ experiences at Shiraz University of Medical Sciences from the implementation of integration in medical education: a qualitative study. BMC Med Educ 24, 1042 (2024). https://doi.org/10.1186/s12909-024-05983-1


Received: 14 May 2024

Accepted: 03 September 2024

Published: 27 September 2024

DOI: https://doi.org/10.1186/s12909-024-05983-1


Keywords

  • Integration
  • Medical education
  • Qualitative research
  • Medical students

BMC Medical Education

ISSN: 1472-6920
