Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts

Jaime Jordan

1 Department of Emergency Medicine, David Geffen School of Medicine at UCLA, Los Angeles, California, USA

2 Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, California, USA

Laura R. Hopson

3 Department of Emergency Medicine, University of Michigan, Ann Arbor, Michigan, USA

Caroline Molins

4 AdventHealth Emergency Medicine Residency, Orlando, Florida, USA

Suzanne K. Bentley

5 Icahn School of Medicine at Mount Sinai, New York, New York, USA

Nicole M. Deiorio

6 Virginia Commonwealth University School of Medicine, Richmond, Virginia, USA

Sally A. Santen

7 University of Cincinnati College of Medicine, Cincinnati, Ohio, USA

Lalena M. Yarris

8 Department of Emergency Medicine, Oregon Health & Science University, Portland, Oregon, USA

Wendy C. Coates

Michael A. Gisondi

9 Department of Emergency Medicine, Stanford University, Palo Alto, California, USA


Objectives

Research abstracts are submitted for presentation at scientific conferences; however, criteria for judging abstracts are variable. We sought to develop two rigorous abstract scoring rubrics for education research submissions reporting (1) quantitative data and (2) qualitative data and then to collect validity evidence to support score interpretation.

Methods

We used a modified Delphi method to achieve expert consensus for scoring rubric items to optimize content validity. Eight education research experts participated in two separate modified Delphi processes, one to generate quantitative research items and one for qualitative. Modifications were made between rounds based on item scores and expert feedback. Homogeneity of ratings in the Delphi process was calculated using Cronbach's alpha, with increasing homogeneity considered an indication of consensus. Rubrics were piloted by scoring abstracts from 22 quantitative publications from AEM Education and Training “Critical Appraisal of Emergency Medicine Education Research” (11 highlighted for excellent methodology and 11 that were not) and 10 qualitative publications (five highlighted for excellent methodology and five that were not). Intraclass correlation coefficient (ICC) estimates of reliability were calculated.

Results

Each rubric required three rounds of a modified Delphi process. The resulting quantitative rubric contained nine items: quality of objectives, appropriateness of methods, outcomes, data analysis, generalizability, importance to medical education, innovation, quality of writing, and strength of conclusions (Cronbach's α for the third round = 0.922, ICC for total scores during piloting = 0.893). The resulting qualitative rubric contained seven items: quality of study aims, general methods, data collection, sampling, data analysis, writing quality, and strength of conclusions (Cronbach's α for the third round = 0.913, ICC for the total scores during piloting = 0.788).

Conclusions

We developed scoring rubrics to assess quality in quantitative and qualitative medical education research abstracts to aid in selection for presentation at scientific meetings. Our tools demonstrated high reliability.


The scientific abstract is the standard method for researchers to communicate brief written summaries of their findings. The written abstract is the gatekeeper for selection for presentation at professional society meetings. 1 A research presentation serves many purposes including dissemination of new knowledge, an opportunity for feedback, and the prospect of fostering an investigator's academic reputation. Beyond the presentation, abstracts, as written evidence of scientific conference proceedings, often endure through publication in peer‐reviewed journals. Because of the above, abstracts may be assessed in a number of potentially high‐stakes situations.

Abstracts are selected for presentation at conferences through a competitive process based on factors such as study rigor, importance of research findings, and relevance to the sponsoring professional society. Prior literature has shown poor observer agreement in the abstract selection process. 2 Scoring rubrics are often used to guide abstract reviewers in an attempt to standardize the process, reduce bias, support equity, and promote quality. 3 There are limited data describing the development and validity evidence of such scoring rubrics but the data available suggest that rubrics may be based on quality scoring tools for full research reports and published guidelines for abstracts. 2 , 4 , 5 Medical conferences often apply rubrics designed for judging clinical or basic science submissions, which reflect standard hypothesis‐testing methods and often use a single subjective Gestalt rating for quality decisions. 6 This may result in the systematic exclusion of studies that employ alternate, but equally rigorous methods, such as research in medical education. Existing scoring systems, commonly designed for biomedical research, may not accurately assess the scope, methods, and types of results commonly reported in medical education research abstracts, which may lead to a disproportionately high rate of rejection of these abstracts. There are additional challenges in reviewing qualitative research abstracts using a standard hypothesis‐testing rubric. In these qualitative studies, word‐count constraints may limit the author's ability to convey the study's outcome appropriately. 7 It is problematic for qualitative studies to be constrained to a standard quantitative abstract template, which may lead to low scores by those applying the rubric and a potential systematic bias against qualitative research.

Prior literature has described tools to assess quality in medical education research manuscripts, such as the Medical Education Research Study Quality Instrument (MERSQI) and the Newcastle‐Ottawa Scale–Education (NOS‐E). 8 A limited attempt to utilize the MERSQI tool to retrospectively assess internal medicine medical education abstracts that achieved manuscript publication showed increased scores for the journal abstract relative to the conference abstract. 4 However, the MERSQI and similar tools were not developed specifically for judging abstracts, and there is a lack of published validity evidence to support score interpretation based on these tools. To equitably assess the quality of education research abstracts submitted to scholarly venues, which may have downstream effects on researcher scholarship, advancement, and reputation, there is a need for a rigorously developed abstract scoring rubric that is based on a validity evidence framework. 9 , 10

The aim of this paper is to describe the development and pilot testing of a dedicated rubric to assess the quality of both quantitative and qualitative medical education research studies. We describe the development process, which aimed to optimize content and response process validity, and initial internal structure and relation to other variables validity evidence to support score interpretation using these instruments. The rubrics may be of use to researchers developing studies and abstract and paper reviewers and may be applied to medical education research assessment in other specialties.

Study design

We utilized a modified Delphi technique to achieve consensus on items for a scoring rubric to assess quality of emergency medicine (EM) education research abstracts. The modified Delphi technique is a systematic group consensus strategy designed to increase content validity. 11 Through this method we developed individual rubrics to assess quantitative and qualitative EM medical education research abstracts. This study was approved by the institutional review board of the David Geffen School of Medicine at UCLA.

Study setting and population

The first author identified eight EM education researchers with successful publication records from diverse regions across the United States and invited them to participate in the Delphi panel. Previous work has suggested that six to 10 experts is an appropriate number for obtaining stable results in the modified Delphi method. 12 , 13 , 14 All invited panelists agreed to participate. The panel included one assistant professor, two associate professors, and five professors. All panelists serve as reviewers for medical education journals and four hold editorial positions. We collected data in September and October 2020.

Study protocol

We followed Messick's framework for validity, which includes five types of validity evidence: content, response process, internal structure, relation to other variables, and consequential. 15 Our study team drafted initial items for the scoring rubrics after a review of the literature and existing research abstract scoring rubrics to optimize content validity. We created separate items for research abstracts reporting quantitative and qualitative data. We sent the draft items to the Society for Academic Emergency Medicine (SAEM) education committee for review and comment to gather stakeholder feedback and for further content and response process validity evidence. 16 One author (JJ) who was not a member of the Delphi panel then revised the initial lists of items based on committee feedback to create the initial Delphi surveys. We used an electronic survey platform (SurveyMonkey) to administer and collect data from the Delphi surveys. 17 Experts on the Delphi panel rated the importance of including each item in a scoring rubric on a 1 to 9 Likert scale, with 1 labeled as “not at all important” and 9 labeled as “extremely important.” The experts were invited to provide additional written comments, edits, and suggestions for each item. They were also encouraged to suggest additional items that they felt were important but not currently listed. We determined a priori that items with a mean score of 7 or greater advanced to the next round and items with a mean score of 3 or below were eliminated. The Delphi panel moderator (JJ) applied discretion for items scoring between 4 and 6, with the aim of both adhering to the opinions of the experts and creating a comprehensive scoring rubric. For example, if an item received a middle score but had comments supporting inclusion in a revised form, the moderator would make the suggested revisions and include the item in the next round.
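The a priori advancement rule described above is simple to express in code. Below is a minimal Python sketch; the item names and panelist ratings are hypothetical examples, not study data.

```python
# Sketch of the a priori Delphi advancement rule described above.
# Item names and expert ratings are hypothetical examples, not study data.
from statistics import mean

ratings = {
    "quality of objectives": [9, 8, 7, 9, 8, 7, 8, 9],   # eight panelists, 1-9 scale
    "novelty of setting":    [3, 2, 4, 2, 3, 3, 2, 3],
    "innovation":            [6, 5, 6, 4, 5, 6, 5, 5],
}

advanced, eliminated, moderator_review = [], [], []
for item, scores in ratings.items():
    m = mean(scores)
    if m >= 7:
        advanced.append(item)          # mean of 7 or greater: advances to next round
    elif m <= 3:
        eliminated.append(item)        # mean of 3 or below: removed
    else:
        moderator_review.append(item)  # mean of 4-6: moderator discretion
```

With these illustrative ratings, the first item advances, the second is eliminated, and the third goes to moderator review.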

Each item consisted of a stem and anchored choices with associated point‐value assignments. Panelists commented on the stems, content, and assigned point value of choices and provided narrative unstructured feedback. The moderator made modifications between rounds based on item scores and expert feedback. After each round, we provided panelists with aggregate mean item scores, written comments, and an edited version of the item list derived from the responses in the previous round. The panelists were then asked to rate the revised items and provide additional edits or suggestions.

We considered homogeneity of ratings in the Delphi process to be an indication of consensus. After consensus was achieved, we created final scoring rubrics for quantitative and qualitative medical education research abstracts. We then piloted the scoring rubrics to gather internal structure and further response process validity evidence. Five raters from the study group (JJ, LH, MG, CM, SB) participated in piloting. We piloted the final quantitative research rubric by scoring abstracts from publications identified in the most recent critical appraisal of EM education research by Academic Emergency Medicine / AEM Education and Training, “Critical Appraisal of Emergency Medicine Education Research: The Best Publications of 2016”. 18 All 11 papers highlighted for excellent methodology in this issue were included in the pilot. 18 Additionally, we included an equal number of randomly selected citations that were included in the issue but not selected as top papers, for a total of 22 quantitative publications. 18 Given the limited number of qualitative studies cited in this issue of the critical appraisal series, we chose to pilot the qualitative rubric on publications from this series from the last 5 years available (2012–2016). 18 , 19 , 20 , 21 , 22 We randomly selected one qualitative publication that was highlighted for excellent methodology and one that was not from each year for a total of 10 qualitative publications. 18 , 19 , 20 , 21 , 22 The same five raters who performed the quantitative pilot also conducted the qualitative pilot.
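The quantitative pilot sampling scheme above (all papers highlighted for excellent methodology, plus an equal number of randomly selected non-highlighted citations) can be sketched as follows; the paper identifiers are hypothetical placeholders, not the actual 2016 citations.

```python
# Sketch of the quantitative pilot sampling described above: every paper
# highlighted for excellent methodology, plus an equal number of randomly
# selected non-highlighted citations from the same issue.
import random

highlighted = [f"top_paper_{i}" for i in range(1, 12)]  # 11 highlighted papers
others = [f"other_paper_{i}" for i in range(1, 40)]     # remaining citations (count is illustrative)

random.seed(0)  # reproducible draw for this sketch only
pilot_set = highlighted + random.sample(others, k=len(highlighted))
```

This yields a pilot set of 22 publications, matching the design described in the text.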

Data analysis

We calculated and reported descriptive statistics for item scoring during Delphi rounds. We used Cronbach's alpha to assess homogeneity of ratings in the Delphi process. Increasing homogeneity was considered to be an indication of consensus among the expert panelists. We used intraclass correlation coefficient (ICC) estimates to assess reliability among raters during piloting based on a mean-rating (k = 5), absolute-agreement, two‐way random‐effects model. We performed all analyses in SPSS (IBM SPSS Statistics for Windows, Version 27.0).
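For readers without SPSS, both reliability statistics can be reproduced with a short script. The following is a minimal NumPy sketch of Cronbach's alpha and of ICC(2,k) (two-way random effects, absolute agreement, mean of k raters), assuming scores are arranged as a subjects-by-raters matrix; the example values are illustrative only, not the study's data.

```python
# Minimal sketches of the two reliability statistics reported in this study.
# Example values are illustrative only, not the study's data.
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()  # variance of each item/column
    total_var = x.sum(axis=1).var(ddof=1)    # variance of subjects' total scores
    return k / (k - 1) * (1 - item_vars / total_var)

def icc_2k(scores):
    """ICC(2,k): two-way random effects, absolute agreement, mean of k raters."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects (rows)
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters (columns)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Perfect agreement between two raters yields alpha = ICC = 1.0
perfect = [[1, 1], [2, 2], [3, 3]]
```

With rater disagreement, both statistics fall below 1; for example, icc_2k([[1, 2], [2, 1], [3, 3]]) evaluates to 0.75 under this formula.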

Quantitative rubric

Three Delphi rounds were completed, each with a 100% response rate. Mean item scores for each round are depicted in Table 1. After the first round, three items were deleted, one item was added, and five items underwent wording changes. After the second round, one item was deleted and eight items underwent wording changes. After the third round, items were reordered for flow and ease of use, but no further changes were made to content or wording. Cronbach's alpha for the third round was 0.922, indicating high internal consistency. The final rubric contained nine items: quality of objectives, appropriateness of methods, outcomes, data analysis, generalizability, importance to medical education, innovation, quality of writing, and strength of conclusions (Data Supplement S1, Appendix S1, available as supporting information in the online version of this paper, which is available at http://onlinelibrary.wiley.com/doi/10.1002/aet2.10654/full ). The ICC for the total scores during piloting was 0.893, indicating excellent agreement. ICCs for individual rubric items ranged from 0.406 to 0.878 (Table 3).

Items and mean scores of expert review during Delphi process for quantitative scoring rubric

Inter‐rater reliability results during piloting

Qualitative rubric

Three Delphi rounds were completed, each with a 100% response rate. Mean item scores for each round are depicted in Table 2. After the first round, two items were deleted, one item was added, and nine items underwent wording changes. After the second round, three items were deleted and four underwent wording changes. After the third round, no further changes were made. The resulting tool contained seven items reflecting the domains of quality of study aims, general methods, data collection, sampling, data analysis, writing quality, and strength of conclusions (Appendix S2). Cronbach's alpha for the third round was 0.913, indicating high internal consistency. The ICC for the total scores during piloting was 0.788, indicating good agreement. The item on writing quality had an ICC of –0.301, likely due to the item's small scale and the small sample size, which limited variance. ICCs for the remainder of the items ranged from 0.176 to 0.897 (Table 3).

Items and mean scores of expert review during Delphi process for qualitative scoring rubric

We developed novel and distinct abstract scoring rubrics for assessing quantitative and qualitative medical education abstract quality through a Delphi process. It is important to evaluate medical education research abstracts that utilize accepted education methods as a distinctly different class than basic, clinical, and translational research. Through our Delphi and piloting processes we have provided multiple types of validity evidence in support of these rubrics aligned with Messick's framework including content, response process, and internal structure. 15 Similar to other tools assessing quality in medical education research, our rubrics assess aspects such as study design, sampling, data analysis, and outcomes that represent the underpinnings of rigorous research. 8 , 23 , 24 , 25 , 26 Unlike many medical education research assessments published in the literature, our tool was designed specifically for the assessment of abstracts rather than full‐text manuscripts, and therefore the specific item domains and characteristics reflect this unique purpose.

We deliberately created separate rubrics for abstracts reporting quantitative and qualitative data because each has unique methods. When designing a study, education researchers must decide the best method to address their questions. Often, in the exploratory phase of inquiry, a qualitative study is the most appropriate choice to identify key topics that merit further study. These often may be narrow in scope and may employ one or more qualitative methods (e.g., ethnography, focus groups, personal interviews). The careful and rigorous analysis may reveal points that can be studied later via quantitative methods to test a hypothesis gleaned during the qualitative phase. 27 Specific standards for reporting on qualitative research have been widely disseminated and are distinct from standards for reporting quantitative research. 28 Even an impeccably designed and executed qualitative study would fail to meet major criteria for excellent quantitative studies. For example, points may be subtracted for lack of generalizability or conduct of the qualitative study in multiple institutions as well as for the absence of common quantitative statistical analytics. The qualitative abstract itself may necessarily lack the common structure of a quantitative report and lead to a lower score. The obvious problem is that a well‐conducted study might not be shared with the relevant research community if it is judged according to quantitative standards. A similar outcome would occur if quantitative work were judged by qualitative standards; therefore, we advocate for using scoring rubrics specific to the type of research being assessed.

Our work has several possible applications. The rubrics we developed may be adopted as scoring tools for medical education research studies that are submitted for presentation to scientific conferences. The presence of specific scoring rubrics for medical education research may address disparities in acceptance rates and ensure presentation of rigorously conducted medical education research at scientific conferences. Further, publication of abstract scoring rubrics such as ours sets expectations for certain elements to be included and defines an acceptable level of submission quality. Dissemination and usage of the rubrics may therefore help improve research excellence. The rubrics themselves can serve as educational tools in resident and faculty training. For example, the rubrics could serve as illustrations or practice material in teaching how to prepare a strong abstract for submission. The inclusive wording of the items allows the rubrics to be adapted to medical education work in any medical specialty. Medical educators may also benefit from using the methods described here to create their own scoring rubrics or provide evidence‐based best practice approaches for other venues. Finally, this study provides a tool that could lay the groundwork for future scholarship on assessing the quality of educational research.


Our study has several limitations. First, the modified Delphi technique is a consensus technique that can force agreement of respondents, and the existence of consensus does not denote a correct response. 11 Since the method is implemented electronically, there is limited discussion and elaboration. Second, the team of experts were all researchers in EM; therefore, the rubrics may not generalize to other specialties. The rubrics were intended for quantitative and qualitative education research abstract submissions, so they may not perform well for abstracts that include both quantitative and qualitative data or those focused on early work, innovations, instrument development, validity evidence, or program evaluation. Finally, there are two limitations to the pilot testing. An a priori power calculation to determine sample size was not possible since the rubrics were novel. The ICCs of individual items on the scoring rubrics were variable, and we chose not to eliminate items with low ICCs given the small sample size during piloting and a desire to create a tool comprehensive of key domains. Future studies of use of these tools incorporating larger samples may provide data for additional refinement. Faculty who piloted the rubrics were familiar with the constructs and rubrics, and it is not known how the rubrics would have performed with general abstract reviewers nor what training might be required. The success of separate rubrics may rely on the expertise of the reviewers in the methodology being assessed.

We offer two medical education abstract scoring rubrics with supporting preliminary reliability and validity evidence. Future studies could add additional validity evidence including use with trained and untrained reviewers and relationship to other variables, e.g., a comparison between rubric scores and expert judgment. Additional studies could be performed to provide consequential validity evidence by comparing the number and quality of accepted medical education abstracts before and after the rubric's implementation or whether the number of abstracts that eventually lead to publication increases.


Using the modified Delphi technique for consensus building, we developed two scoring rubrics to assess quality in quantitative and qualitative medical education research abstracts with supporting validity evidence. Application of these rubrics demonstrated high reliability.


The authors have no potential conflicts to disclose.


Jaime Jordan and Michael A. Gisondi conceived the study. Jaime Jordan, Michael A. Gisondi, Laura R. Hopson, Caroline Molins, and Suzanne K. Bentley contributed to the design of the study. Jaime Jordan, Laura R. Hopson, Caroline Molins, Suzanne K. Bentley, Nicole M. Deiorio, Sally A. Santen, Lalena M. Yarris, Wendy C. Coates, and Michael A. Gisondi contributed to data collection. Jaime Jordan analyzed the data. Jaime Jordan, Laura R. Hopson, Caroline Molins, Suzanne K. Bentley, Nicole M. Deiorio, Sally A. Santen, Lalena M. Yarris, Wendy C. Coates, and Michael A. Gisondi contributed to drafting of the manuscript and critical revision.

Supporting information

Data Supplement S1 . Supplemental material.


The authors acknowledge that this project originated to meet an SAEM Education Committee Objective and thank all the committee members for their support of this work.

Jordan J, Hopson LR, Molins C, et al. Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts. AEM Educ Train. 2021;5:e10654. doi: 10.1002/aet2.10654

Presented at Society for Academic Emergency Medicine Virtual Meeting, May 13, 2021.

Supervising Editor: Esther H. Chen, MD.

Create a Qualitative Rubric

Video guide

Create a qualitative Turnitin rubric (YouTube, 2m 26s)

For information on the different types of rubrics available in Turnitin, refer to the  Marks / Rubrics / Grading Forms Overview guide

Note: Turnitin rubrics are different to Blackboard rubrics. It is not possible to use a Blackboard rubric in Turnitin.

Note: The availability of rubrics is based on who is logged on, not which Blackboard course Turnitin is accessed from. Your tutors will be able to use the rubric you select for marking.

To pass a rubric on to another staff member, you need to export the rubric/form and they will need to import it into Turnitin (refer to the Export / Import a Rubric/Form guide).

Another option would be to share the spreadsheet the rubric is based on.

Note: The below rubric is an example of how a qualitative rubric may be structured.

Add a rubric

The recommended option is that rubrics be created in a spreadsheet and uploaded to Turnitin. The advantage of this is that rubrics can then be easily copied into your Course Profile and assignment instructions.

Download the spreadsheet template

  • Right-click the link below and save the spreadsheet template.

Turnitin rubric template   

Complete the rubric

Note: The criteria percentage weightings and standard marks are not included in the spreadsheet.

  • Copy and paste rows to add additional criteria.
  • Copy and paste columns to add additional standards.
  • Change the criterion titles, standard titles (or delete them) and criterion/standard descriptions.


Note: The criterion titles are limited to 13 characters (including spaces). If the criterion title is too long, leave it as Criterion X and enter the title underneath as the criterion description.

Tip: If you are unsure of character limits we suggest you do not change the criterion titles or the scale titles and do this after you have uploaded the rubric.
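If you build the rubric in a spreadsheet first, the 13-character limit can be checked before uploading. The following is a minimal sketch, assuming the template is saved as a CSV with criterion titles in the first column (the actual Turnitin template layout may differ).

```python
# Hypothetical pre-upload check for the 13-character criterion-title limit.
# Assumes the rubric spreadsheet is saved as CSV with criterion titles in
# the first column; adjust to match the actual template layout.
import csv

def over_limit_titles(path, limit=13):
    """Return first-column titles longer than `limit` characters (spaces count)."""
    with open(path, newline="") as f:
        return [row[0] for row in csv.reader(f) if row and len(row[0]) > limit]
```

Any titles this returns can be shortened, or left as Criterion X with the full wording moved into the criterion description, as suggested above.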

Import the spreadsheet into Turnitin

A rubric can be added when you first set up your Turnitin assignment under Optional settings or by editing an existing assignment.

Refer to the guides Create a Turnitin Assignment (text based), Create a Turnitin Assignment (non-text based), Create a Turnitin Assignment (no file submission) or Reuse a Turnitin Assignment.

  • Navigate to the required assignment link.
  • Click on the assignment title.
  • Click on the cog button.


  • Expand Optional settings and check the Attach a rubric checkbox.


  • The Launch Rubric Manager panel will be displayed. Click on the Launch Rubric Manager button.


  • Click on the Export/Import button.
  • Select Import from the drop-down list.


  • Click on the Select files button.
  • Browse to and select the completed rubric template.
  • Click on the View button.


  • Enter a name for the rubric.
  • Click on the Qualitative rubric icon at the bottom of the screen.

Note: Not all criterion/standard “cells” need to be used.

  • Click on the SAVE button.
  • Click on the CLOSE button.


  • Select the required rubric from the Rubric drop-down list.



How to appraise qualitative research

Volume 22, Issue 1

  • Calvin Moorley 1
  • Xabi Cathala 2
  • 1 Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London, UK
  • 2 Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK
  • Correspondence to Dr Calvin Moorley, Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London SE1 0AA, UK; Moorleyc{at}lsbu.ac.uk




In order to make a decision about implementing evidence into practice, nurses need to be able to critically appraise research. Nurses also have a professional responsibility to maintain up-to-date practice. 1 This paper provides a guide on how to critically appraise a qualitative research paper.

What is qualitative research?


Useful terms

Some of the qualitative approaches used in nursing research include grounded theory, phenomenology, ethnography, case study (can lend itself to mixed methods) and narrative analysis. The data collection methods used in qualitative research include in depth interviews, focus groups, observations and stories in the form of diaries or other documents. 3


Title, keywords, authors and abstract

In a previous paper, we discussed how the title, keywords, authors’ positions and affiliations and abstract can influence the authenticity and readability of quantitative research papers. 4 The same applies to qualitative research. However, other areas such as the purpose of the study and the research question, theoretical and conceptual frameworks, sampling and methodology also need consideration when appraising a qualitative paper.

Purpose and question

The topic under investigation in the study should be guided by a clear research question or a statement of the problem or purpose. An example of a statement can be seen in table 2 . Unlike most quantitative studies, qualitative research does not seek to test a hypothesis. The research statement should be specific to the problem and should be reflected in the design. This will inform the reader of what will be studied and justify the purpose of the study. 5

Example of research question and problem statement

An appropriate literature review should have been conducted and summarised in the paper. It should be linked to the subject, using peer-reviewed primary research which is up to date. We suggest papers with an age limit of 5–8 years, excluding original work. The literature review should give the reader a balanced view of what has been written on the subject. It is worth noting that for some qualitative approaches some literature reviews are conducted after the data collection to minimise bias, for example, in grounded theory studies. In phenomenological studies, the review sometimes occurs after the data analysis. If this is the case, the author(s) should make this clear.

Theoretical and conceptual frameworks

Most authors use the terms theoretical and conceptual frameworks interchangeably. Usually, a theoretical framework is used when research is underpinned by one theory that aims to help predict, explain and understand the topic investigated. A theoretical framework is the blueprint that can hold or scaffold a study’s theory. Conceptual frameworks are based on concepts from various theories and findings which help to guide the research. 6 It is the researcher’s understanding of how different variables are connected in the study, for example, the literature review and research question. Theoretical and conceptual frameworks connect the researcher to existing knowledge and these are used in a study to help to explain and understand what is being investigated. A framework is the design or map for a study. When you are appraising a qualitative paper, you should be able to see how the framework helped with (1) providing a rationale and (2) the development of research questions or statements. 7 You should be able to identify how the framework, research question, purpose and literature review all complement each other.

There remains an ongoing debate about what an appropriate sample size for a qualitative study should be. We hold the view that qualitative research does not seek statistical power, and that a sample size can be as small as one (eg, a single case study) or any number above one (eg, a grounded theory study), provided that it is appropriate and answers the research problem. Shorten and Moorley 8 explain that three main types of sampling exist in qualitative research: (1) convenience, (2) judgement or (3) theoretical. The paper should state the sample size and give a clear rationale for how it was decided.


Qualitative research encompasses a variety of methods and designs. Based on the chosen method or design, the findings may be reported in a variety of different formats. Table 3 provides the main qualitative approaches used in nursing with a short description.

Different qualitative approaches

The authors should make clear why they are using a qualitative methodology and the chosen theoretical approach or framework. The paper should provide details of participant inclusion and exclusion criteria, as well as the recruitment sites from which the sample was drawn, for example, urban, rural, hospital inpatient or community. Methods of data collection should be identified and be appropriate for the research statement/question.

Data collection

Overall there should be a clear trail of data collection. The paper should explain when and how the study was advertised and how participants were recruited and consented; it should also state when and where the data collection took place. Data collection methods include interviews, which may be structured or unstructured and conducted in depth either one to one or in groups. 9 Group interviews are often referred to as focus group interviews; these are typically voice recorded and transcribed verbatim. It should be clear whether interviews were conducted face to face, by telephone or via any other type of media. Table 3 includes some data collection methods. Other methods not included in table 3 are observation, diaries, video recording, photographs, documents or objects (artefacts). The schedule of questions for interviews, or the protocol for non-interview data collection, should be provided, available or discussed in the paper. Some authors may state that ‘recruitment ended once data saturation was reached’. This simply means that the researchers were not gaining any new information at subsequent interviews, so they stopped data collection.
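The idea of data saturation can be made concrete with a toy tally: collection stops once an interview contributes no codes that earlier interviews have not already produced. The sketch below is illustrative only; the interview data and code names are invented and do not come from any study discussed here.

```python
# Toy illustration (invented data): stop recruiting once an interview
# yields no codes that earlier interviews have not already produced.
def saturation_point(interviews):
    """Return the 1-based index of the first interview adding no new codes."""
    seen = set()
    for i, codes in enumerate(interviews, start=1):
        new = set(codes) - seen
        if not new and i > 1:
            return i  # no new information at this interview
        seen |= new
    return None  # saturation not reached

interviews = [
    {"isolation", "coping"},       # interview 1
    {"coping", "family support"},  # interview 2: adds a new code
    {"stigma"},                    # interview 3: adds a new code
    {"coping", "stigma"},          # interview 4: nothing new, so stop
]
print(saturation_point(interviews))  # → 4
```

In practice saturation is a judgement about meaning rather than a simple count of codes, but the tally conveys why authors can claim that later interviews added nothing new.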

The data collection section should include details of the ethical approval gained to carry out the study, for example, the strategies used to gain participants’ consent to take part. The authors should make clear whether any ethical issues arose and how these were resolved or managed.

The approach to data analysis (see ref  10 ) needs to be clearly articulated: for example, was more than one person responsible for analysing the data, and how were any discrepancies in findings resolved? An audit trail of how the data were analysed, including their management, should be documented. If member checking was used, this should also be reported. This level of transparency contributes to the trustworthiness and credibility of qualitative research. Some researchers provide a diagram of how they approached data analysis to demonstrate the rigour applied ( figure 1 ).


Example of data analysis diagram.
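Where more than one person codes the same data, teams sometimes quantify their agreement before resolving discrepancies. The article does not prescribe any particular statistic; as an illustrative sketch only, Cohen's kappa corrects raw agreement for agreement expected by chance (the raters and code labels below are invented).

```python
# Illustrative sketch (not from the paper): quantifying agreement between
# two analysts who independently assigned one code to each excerpt.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters assigning one code per excerpt."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: proportion of excerpts coded identically.
    po = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement: chance that both raters pick the same code.
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    pe = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

rater1 = ["coping", "coping", "stigma", "support", "stigma", "coping"]
rater2 = ["coping", "stigma", "stigma", "support", "stigma", "coping"]
print(round(cohens_kappa(rater1, rater2), 2))  # → 0.74
```

A kappa near 1 indicates near-perfect agreement; values around 0 indicate agreement no better than chance. Many qualitative teams instead resolve discrepancies through discussion alone, which is equally acceptable provided the process is documented in the audit trail.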

Validity and rigour

The study’s validity relies on the statement of the question/problem, the theoretical/conceptual framework, design, method, sample and data analysis. When critiquing qualitative research, these elements will help you to determine the study’s reliability. Noble and Smith 11 explain that validity is the integrity of the data and methods applied, and that findings should accurately reflect the data. Rigour should acknowledge the researcher’s role and involvement, as well as any biases; essentially, it should focus on truth value, consistency, neutrality and applicability. 11 The authors should discuss whether they used triangulation (see table 2 ) to develop the best possible understanding of the phenomena.

Themes and interpretations and implications for practice

In qualitative research no hypothesis is tested, so there is no specific result; instead, qualitative findings are often reported as themes based on the data analysed. The findings should be clearly linked to, and reflect, the data, which contributes to the soundness of the research. 11 The researchers should make clear how they arrived at their interpretations of the findings. The theoretical or conceptual framework used should be discussed, aiding the rigour of the study. The implications of the findings need to be made clear and, where appropriate, their applicability or transferability should be identified. 12

Discussions, recommendations and conclusions

The discussion should relate to the research findings, as the authors seek to make connections with the literature reviewed earlier in the paper to contextualise their work. A strong discussion will connect the research aims and objectives to the findings and will be supported with literature where possible. A paper that seeks to influence nursing practice will include a recommendations section for clinical practice and research. A good conclusion will focus on the findings and the discussion of the phenomena investigated.

Qualitative research has much to offer nursing and healthcare in terms of understanding patients’ experience of illness, treatment and recovery; it can also help us to better understand areas of healthcare practice. However, it must be done with rigour, and this paper provides some guidance for appraising such research. Further guidance for critiquing a qualitative research paper is provided in table 4 .

Some guidance for critiquing qualitative research

  • ↵ Nursing and Midwifery Council. The code: standards of conduct, performance and ethics for nurses and midwives. 2015. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf (accessed 21 Aug 2018).

Patient consent for publication Not required.

Competing interests None declared.

Provenance and peer review Commissioned; internally peer reviewed.


