U.S. flag

An official website of the United States government

The .gov means it's official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you're on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings
  • Browse Titles

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Research Council; Division of Behavioral and Social Sciences and Education; Commission on Behavioral and Social Sciences and Education; Committee on Basic Research in the Behavioral and Social Sciences; Gerstein DR, Luce RD, Smelser NJ, et al., editors. The Behavioral and Social Sciences: Achievements and Opportunities. Washington (DC): National Academies Press (US); 1988.

Cover of The Behavioral and Social Sciences: Achievements and Opportunities

The Behavioral and Social Sciences: Achievements and Opportunities.

  • Hardcopy Version at National Academies Press

5 Methods of Data Collection, Representation, and Analysis

This chapter concerns research on collecting, representing, and analyzing the data that underlie behavioral and social sciences knowledge. Such research, methodological in character, includes ethnographic and historical approaches, scaling, axiomatic measurement, and statistics, with its important relatives, econometrics and psychometrics. The field can be described as including the self-conscious study of how scientists draw inferences and reach conclusions from observations. Since statistics is the largest and most prominent of methodological approaches and is used by researchers in virtually every discipline, statistical work draws the lion’s share of this chapter’s attention.

Problems of interpreting data arise whenever inherent variation or measurement fluctuations create challenges to understand data or to judge whether observed relationships are significant, durable, or general. Some examples: Is a sharp monthly (or yearly) increase in the rate of juvenile delinquency (or unemployment) in a particular area a matter for alarm, an ordinary periodic or random fluctuation, or the result of a change or quirk in reporting method? Do the temporal patterns seen in such repeated observations reflect a direct causal mechanism, a complex of indirect ones, or just imperfections in the data? Is a decrease in auto injuries an effect of a new seat-belt law? Are the disagreements among people describing some aspect of a subculture too great to draw valid inferences about that aspect of the culture?

Such issues of inference are often closely connected to substantive theory and specific data, and to some extent it is difficult and perhaps misleading to treat methods of data collection, representation, and analysis separately. This report does so, as do all sciences to some extent, because the methods developed often are far more general than the specific problems that originally gave rise to them. There is much transfer of new ideas from one substantive field to another—and to and from fields outside the behavioral and social sciences. Some of the classical methods of statistics arose in studies of astronomical observations, biological variability, and human diversity. The major growth of the classical methods occurred in the twentieth century, greatly stimulated by problems in agriculture and genetics. Some methods for uncovering geometric structures in data, such as multidimensional scaling and factor analysis, originated in research on psychological problems, but have been applied in many other sciences. Some time-series methods were developed originally to deal with economic data, but they are equally applicable to many other kinds of data.

  • In economics: large-scale models of the U.S. economy; effects of taxation, money supply, and other government fiscal and monetary policies; theories of duopoly, oligopoly, and rational expectations; economic effects of slavery.
  • In psychology: test calibration; the formation of subjective probabilities, their revision in the light of new information, and their use in decision making; psychiatric epidemiology and mental health program evaluation.
  • In sociology and other fields: victimization and crime rates; effects of incarceration and sentencing policies; deployment of police and fire-fighting forces; discrimination, antitrust, and regulatory court cases; social networks; population growth and forecasting; and voting behavior.

Even such an abridged listing makes clear that improvements in methodology are valuable across the spectrum of empirical research in the behavioral and social sciences as well as in application to policy questions. Clearly, methodological research serves many different purposes, and there is a need to develop different approaches to serve those different purposes, including exploratory data analysis, scientific inference about hypotheses and population parameters, individual decision making, forecasting what will happen in the event or absence of intervention, and assessing causality from both randomized experiments and observational data.

This discussion of methodological research is divided into three areas: design, representation, and analysis. The efficient design of investigations must take place before data are collected because it involves how much, what kind of, and how data are to be collected. What type of study is feasible: experimental, sample survey, field observation, or other? What variables should be measured, controlled, and randomized? How extensive a subject pool or observational period is appropriate? How can study resources be allocated most effectively among various sites, instruments, and subsamples?

The construction of useful representations of the data involves deciding what kind of formal structure best expresses the underlying qualitative and quantitative concepts that are being used in a given study. For example, cost of living is a simple concept to quantify if it applies to a single individual with unchanging tastes in stable markets (that is, markets offering the same array of goods from year to year at varying prices), but as a national aggregate for millions of households and constantly changing consumer product markets, the cost of living is not easy to specify clearly or measure reliably. Statisticians, economists, sociologists, and other experts have long struggled to make the cost of living a precise yet practicable concept that is also efficient to measure, and they must continually modify it to reflect changing circumstances.

Data analysis covers the final step of characterizing and interpreting research findings: Can estimates of the relations between variables be made? Can some conclusion be drawn about correlation, cause and effect, or trends over time? How uncertain are the estimates and conclusions and can that uncertainty be reduced by analyzing the data in a different way? Can computers be used to display complex results graphically for quicker or better understanding or to suggest different ways of proceeding?

Advances in analysis, data representation, and research design feed into and reinforce one another in the course of actual scientific work. The intersections between methodological improvements and empirical advances are an important aspect of the multidisciplinary thrust of progress in the behavioral and social sciences.

  • Designs for Data Collection

Four broad kinds of research designs are used in the behavioral and social sciences: experimental, survey, comparative, and ethnographic.

Experimental designs, in either the laboratory or field settings, systematically manipulate a few variables while others that may affect the outcome are held constant, randomized, or otherwise controlled. The purpose of randomized experiments is to ensure that only one or a few variables can systematically affect the results, so that causes can be attributed. Survey designs include the collection and analysis of data from censuses, sample surveys, and longitudinal studies and the examination of various relationships among the observed phenomena. Randomization plays a different role here than in experimental designs: it is used to select members of a sample so that the sample is as representative of the whole population as possible. Comparative designs involve the retrieval of evidence that is recorded in the flow of current or past events in different times or places and the interpretation and analysis of this evidence. Ethnographic designs, also known as participant-observation designs, involve a researcher in intensive and direct contact with a group, community, or population being studied, through participation, observation, and extended interviewing.

Experimental Designs

Laboratory experiments.

Laboratory experiments underlie most of the work reported in Chapter 1 , significant parts of Chapter 2 , and some of the newest lines of research in Chapter 3 . Laboratory experiments extend and adapt classical methods of design first developed, for the most part, in the physical and life sciences and agricultural research. Their main feature is the systematic and independent manipulation of a few variables and the strict control or randomization of all other variables that might affect the phenomenon under study. For example, some studies of animal motivation involve the systematic manipulation of amounts of food and feeding schedules while other factors that may also affect motivation, such as body weight, deprivation, and so on, are held constant. New designs are currently coming into play largely because of new analytic and computational methods (discussed below, in “Advances in Statistical Inference and Analysis”).

Two examples of empirically important issues that demonstrate the need for broadening classical experimental approaches are open-ended responses and lack of independence of successive experimental trials. The first concerns the design of research protocols that do not require the strict segregation of the events of an experiment into well-defined trials, but permit a subject to respond at will. These methods are needed when what is of interest is how the respondent chooses to allocate behavior in real time and across continuously available alternatives. Such empirical methods have long been used, but they can generate very subtle and difficult problems in experimental design and subsequent analysis. As theories of allocative behavior of all sorts become more sophisticated and precise, the experimental requirements become more demanding, so the need to better understand and solve this range of design issues is an outstanding challenge to methodological ingenuity.

The second issue arises in repeated-trial designs when the behavior on successive trials, even if it does not exhibit a secular trend (such as a learning curve), is markedly influenced by what has happened in the preceding trial or trials. The more naturalistic the experiment and the more sensitive the meas urements taken, the more likely it is that such effects will occur. But such sequential dependencies in observations cause a number of important conceptual and technical problems in summarizing the data and in testing analytical models, which are not yet completely understood. In the absence of clear solutions, such effects are sometimes ignored by investigators, simplifying the data analysis but leaving residues of skepticism about the reliability and significance of the experimental results. With continuing development of sensitive measures in repeated-trial designs, there is a growing need for more advanced concepts and methods for dealing with experimental results that may be influenced by sequential dependencies.

Randomized Field Experiments

The state of the art in randomized field experiments, in which different policies or procedures are tested in controlled trials under real conditions, has advanced dramatically over the past two decades. Problems that were once considered major methodological obstacles—such as implementing randomized field assignment to treatment and control groups and protecting the randomization procedure from corruption—have been largely overcome. While state-of-the-art standards are not achieved in every field experiment, the commitment to reaching them is rising steadily, not only among researchers but also among customer agencies and sponsors.

The health insurance experiment described in Chapter 2 is an example of a major randomized field experiment that has had and will continue to have important policy reverberations in the design of health care financing. Field experiments with the negative income tax (guaranteed minimum income) conducted in the 1970s were significant in policy debates, even before their completion, and provided the most solid evidence available on how tax-based income support programs and marginal tax rates can affect the work incentives and family structures of the poor. Important field experiments have also been carried out on alternative strategies for the prevention of delinquency and other criminal behavior, reform of court procedures, rehabilitative programs in mental health, family planning, and special educational programs, among other areas.

In planning field experiments, much hinges on the definition and design of the experimental cells, the particular combinations needed of treatment and control conditions for each set of demographic or other client sample characteristics, including specification of the minimum number of cases needed in each cell to test for the presence of effects. Considerations of statistical power, client availability, and the theoretical structure of the inquiry enter into such specifications. Current important methodological thresholds are to find better ways of predicting recruitment and attrition patterns in the sample, of designing experiments that will be statistically robust in the face of problematic sample recruitment or excessive attrition, and of ensuring appropriate acquisition and analysis of data on the attrition component of the sample.

Also of major significance are improvements in integrating detailed process and outcome measurements in field experiments. To conduct research on program effects under field conditions requires continual monitoring to determine exactly what is being done—the process—how it corresponds to what was projected at the outset. Relatively unintrusive, inexpensive, and effective implementation measures are of great interest. There is, in parallel, a growing emphasis on designing experiments to evaluate distinct program components in contrast to summary measures of net program effects.

Finally, there is an important opportunity now for further theoretical work to model organizational processes in social settings and to design and select outcome variables that, in the relatively short time of most field experiments, can predict longer-term effects: For example, in job-training programs, what are the effects on the community (role models, morale, referral networks) or on individual skills, motives, or knowledge levels that are likely to translate into sustained changes in career paths and income levels?

Survey Designs

Many people have opinions about how societal mores, economic conditions, and social programs shape lives and encourage or discourage various kinds of behavior. People generalize from their own cases, and from the groups to which they belong, about such matters as how much it costs to raise a child, the extent to which unemployment contributes to divorce, and so on. In fact, however, effects vary so much from one group to another that homespun generalizations are of little use. Fortunately, behavioral and social scientists have been able to bridge the gaps between personal perspectives and collective realities by means of survey research. In particular, governmental information systems include volumes of extremely valuable survey data, and the facility of modern computers to store, disseminate, and analyze such data has significantly improved empirical tests and led to new understandings of social processes.

Within this category of research designs, two major types are distinguished: repeated cross-sectional surveys and longitudinal panel surveys. In addition, and cross-cutting these types, there is a major effort under way to improve and refine the quality of survey data by investigating features of human memory and of question formation that affect survey response.

Repeated cross-sectional designs can either attempt to measure an entire population—as does the oldest U.S. example, the national decennial census—or they can rest on samples drawn from a population. The general principle is to take independent samples at two or more times, measuring the variables of interest, such as income levels, housing plans, or opinions about public affairs, in the same way. The General Social Survey, collected by the National Opinion Research Center with National Science Foundation support, is a repeated cross sectional data base that was begun in 1972. One methodological question of particular salience in such data is how to adjust for nonresponses and “don’t know” responses. Another is how to deal with self-selection bias. For example, to compare the earnings of women and men in the labor force, it would be mistaken to first assume that the two samples of labor-force participants are randomly selected from the larger populations of men and women; instead, one has to consider and incorporate in the analysis the factors that determine who is in the labor force.

In longitudinal panels, a sample is drawn at one point in time and the relevant variables are measured at this and subsequent times for the same people. In more complex versions, some fraction of each panel may be replaced or added to periodically, such as expanding the sample to include households formed by the children of the original sample. An example of panel data developed in this way is the Panel Study of Income Dynamics (PSID), conducted by the University of Michigan since 1968 (discussed in Chapter 3 ).

Comparing the fertility or income of different people in different circumstances at the same time to find correlations always leaves a large proportion of the variability unexplained, but common sense suggests that much of the unexplained variability is actually explicable. There are systematic reasons for individual outcomes in each person’s past achievements, in parental models, upbringing, and earlier sequences of experiences. Unfortunately, asking people about the past is not particularly helpful: people remake their views of the past to rationalize the present and so retrospective data are often of uncertain validity. In contrast, generation-long longitudinal data allow readings on the sequence of past circumstances uncolored by later outcomes. Such data are uniquely useful for studying the causes and consequences of naturally occurring decisions and transitions. Thus, as longitudinal studies continue, quantitative analysis is becoming feasible about such questions as: How are the decisions of individuals affected by parental experience? Which aspects of early decisions constrain later opportunities? And how does detailed background experience leave its imprint? Studies like the two-decade-long PSID are bringing within grasp a complete generational cycle of detailed data on fertility, work life, household structure, and income.

Advances in Longitudinal Designs

Large-scale longitudinal data collection projects are uniquely valuable as vehicles for testing and improving survey research methodology. In ways that lie beyond the scope of a cross-sectional survey, longitudinal studies can sometimes be designed—without significant detriment to their substantive interests—to facilitate the evaluation and upgrading of data quality; the analysis of relative costs and effectiveness of alternative techniques of inquiry; and the standardization or coordination of solutions to problems of method, concept, and measurement across different research domains.

Some areas of methodological improvement include discoveries about the impact of interview mode on response (mail, telephone, face-to-face); the effects of nonresponse on the representativeness of a sample (due to respondents’ refusal or interviewers’ failure to contact); the effects on behavior of continued participation over time in a sample survey; the value of alternative methods of adjusting for nonresponse and incomplete observations (such as imputation of missing data, variable case weighting); the impact on response of specifying different recall periods, varying the intervals between interviews, or changing the length of interviews; and the comparison and calibration of results obtained by longitudinal surveys, randomized field experiments, laboratory studies, onetime surveys, and administrative records.

It should be especially noted that incorporating improvements in methodology and data quality has been and will no doubt continue to be crucial to the growing success of longitudinal studies. Panel designs are intrinsically more vulnerable than other designs to statistical biases due to cumulative item non-response, sample attrition, time-in-sample effects, and error margins in repeated measures, all of which may produce exaggerated estimates of change. Over time, a panel that was initially representative may become much less representative of a population, not only because of attrition in the sample, but also because of changes in immigration patterns, age structure, and the like. Longitudinal studies are also subject to changes in scientific and societal contexts that may create uncontrolled drifts over time in the meaning of nominally stable questions or concepts as well as in the underlying behavior. Also, a natural tendency to expand over time the range of topics and thus the interview lengths, which increases the burdens on respondents, may lead to deterioration of data quality or relevance. Careful methodological research to understand and overcome these problems has been done, and continued work as a component of new longitudinal studies is certain to advance the overall state of the art.

Longitudinal studies are sometimes pressed for evidence they are not designed to produce: for example, in important public policy questions concerning the impact of government programs in such areas as health promotion, disease prevention, or criminal justice. By using research designs that combine field experiments (with randomized assignment to program and control conditions) and longitudinal surveys, one can capitalize on the strongest merits of each: the experimental component provides stronger evidence for casual statements that are critical for evaluating programs and for illuminating some fundamental theories; the longitudinal component helps in the estimation of long-term program effects and their attenuation. Coupling experiments to ongoing longitudinal studies is not often feasible, given the multiple constraints of not disrupting the survey, developing all the complicated arrangements that go into a large-scale field experiment, and having the populations of interest overlap in useful ways. Yet opportunities to join field experiments to surveys are of great importance. Coupled studies can produce vital knowledge about the empirical conditions under which the results of longitudinal surveys turn out to be similar to—or divergent from—those produced by randomized field experiments. A pattern of divergence and similarity has begun to emerge in coupled studies; additional cases are needed to understand why some naturally occurring social processes and longitudinal design features seem to approximate formal random allocation and others do not. The methodological implications of such new knowledge go well beyond program evaluation and survey research. These findings bear directly on the confidence scientists—and others—can have in conclusions from observational studies of complex behavioral and social processes, particularly ones that cannot be controlled or simulated within the confines of a laboratory environment.

Memory and the Framing of Questions

A very important opportunity to improve survey methods lies in the reduction of nonsampling error due to questionnaire context, phrasing of questions, and, generally, the semantic and social-psychological aspects of surveys. Survey data are particularly affected by the fallibility of human memory and the sensitivity of respondents to the framework in which a question is asked. This sensitivity is especially strong for certain types of attitudinal and opinion questions. Efforts are now being made to bring survey specialists into closer contact with researchers working on memory function, knowledge representation, and language in order to uncover and reduce this kind of error.

Memory for events is often inaccurate, biased toward what respondents believe to be true—or should be true—about the world. In many cases in which data are based on recollection, improvements can be achieved by shifting to techniques of structured interviewing and calibrated forms of memory elicitation, such as specifying recent, brief time periods (for example, in the last seven days) within which respondents recall certain types of events with acceptable accuracy.

  • “Taking things altogether, how would you describe your marriage? Would you say that your marriage is very happy, pretty happy, or not too happy?”
  • “Taken altogether how would you say things are these days—would you say you are very happy, pretty happy, or not too happy?”

Presenting this sequence in both directions on different forms showed that the order affected answers to the general happiness question but did not change the marital happiness question: responses to the specific issue swayed subsequent responses to the general one, but not vice versa. The explanations for and implications of such order effects on the many kinds of questions and sequences that can be used are not simple matters. Further experimentation on the design of survey instruments promises not only to improve the accuracy and reliability of survey research, but also to advance understanding of how people think about and evaluate their behavior from day to day.

Comparative Designs

Both experiments and surveys involve interventions or questions by the scientist, who then records and analyzes the responses. In contrast, many bodies of social and behavioral data of considerable value are originally derived from records or collections that have accumulated for various nonscientific reasons, quite often administrative in nature, in firms, churches, military organizations, and governments at all levels. Data of this kind can sometimes be subjected to careful scrutiny, summary, and inquiry by historians and social scientists, and statistical methods have increasingly been used to develop and evaluate inferences drawn from such data. Some of the main comparative approaches are cross-national aggregate comparisons, selective comparison of a limited number of cases, and historical case studies.

Among the more striking problems facing the scientist using such data are the vast differences in what has been recorded by different agencies whose behavior is being compared (this is especially true for parallel agencies in different nations), the highly unrepresentative or idiosyncratic sampling that can occur in the collection of such data, and the selective preservation and destruction of records. Means to overcome these problems form a substantial methodological research agenda in comparative research. An example of the method of cross-national aggregative comparisons is found in investigations by political scientists and sociologists of the factors that underlie differences in the vitality of institutions of political democracy in different societies. Some investigators have stressed the existence of a large middle class, others the level of education of a population, and still others the development of systems of mass communication. In cross-national aggregate comparisons, a large number of nations are arrayed according to some measures of political democracy and then attempts are made to ascertain the strength of correlations between these and the other variables. In this line of analysis it is possible to use a variety of statistical cluster and regression techniques to isolate and assess the possible impact of certain variables on the institutions under study. While this kind of research is cross-sectional in character, statements about historical processes are often invoked to explain the correlations.

More limited selective comparisons, applied by many of the classic theorists, involve asking similar kinds of questions but over a smaller range of societies. Why did democracy develop in such different ways in America, France, and England? Why did northeastern Europe develop rational bourgeois capitalism, in contrast to the Mediterranean and Asian nations? Modern scholars have turned their attention to explaining, for example, differences among types of fascism between the two World Wars, and similarities and differences among modern state welfare systems, using these comparisons to unravel the salient causes. The questions asked in these instances are inevitably historical ones.

Historical case studies involve only one nation or region, and so they may not be geographically comparative. However, insofar as they involve tracing the transformation of a society’s major institutions and the role of its main shaping events, they involve a comparison of different periods of a nation’s or a region’s history. The goal of such comparisons is to give a systematic account of the relevant differences. Sometimes, particularly with respect to the ancient societies, the historical record is very sparse, and the methods of history and archaeology mesh in the reconstruction of complex social arrangements and patterns of change on the basis of few fragments.

Like all research designs, comparative ones have distinctive vulnerabilities and advantages: One of the main advantages of using comparative designs is that they greatly expand the range of data, as well as the amount of variation in those data, for study. Consequently, they allow for more encompassing explanations and theories that can relate highly divergent outcomes to one another in the same framework. They also contribute to reducing any cultural biases or tendencies toward parochialism among scientists studying common human phenomena.

One main vulnerability in such designs arises from the problem of achieving comparability. Because comparative study involves studying societies and other units that are dissimilar from one another, the phenomena under study usually occur in very different contexts—so different that in some cases what is called an event in one society cannot really be regarded as the same type of event in another. For example, a vote in a Western democracy is different from a vote in an Eastern bloc country, and a voluntary vote in the United States means something different from a compulsory vote in Australia. These circumstances make for interpretive difficulties in comparing aggregate rates of voter turnout in different countries.

The problem of achieving comparability appears in historical analysis as well. For example, changes in laws and enforcement and recording procedures over time change the definition of what is and what is not a crime, and for that reason it is difficult to compare the crime rates over time. Comparative researchers struggle with this problem continually, working to fashion equivalent measures; some have suggested the use of different measures (voting, letters to the editor, street demonstration) in different societies for common variables (political participation), to try to take contextual factors into account and to achieve truer comparability.

A second vulnerability is controlling variation. Traditional experiments make conscious and elaborate efforts to control the variation of some factors and thereby assess the causal significance of others. In surveys as well as experiments, statistical methods are used to control sources of variation and assess suspected causal significance. In comparative and historical designs, this kind of control is often difficult to attain because the sources of variation are many and the number of cases few. Scientists have made efforts to approximate such control in these cases of “many variables, small N.” One is the method of paired comparisons. If an investigator isolates 15 American cities in which racial violence has been recurrent in the past 30 years, for example, it is helpful to match them with 15 cities of similar population size, geographical region, and size of minorities—such characteristics are controls—and then search for systematic differences between the two sets of cities. Another method is to select, for comparative purposes, a sample of societies that resemble one another in certain critical ways, such as size, common language, and common level of development, thus attempting to hold these factors roughly constant, and then seeking explanations among other factors in which the sampled societies differ from one another.

Ethnographic Designs

Traditionally identified with anthropology, ethnographic research designs are playing increasingly significant roles in most of the behavioral and social sciences. The core of this methodology is participant-observation, in which a researcher spends an extended period of time with the group under study, ideally mastering the local language, dialect, or special vocabulary, and participating in as many activities of the group as possible. This kind of participant-observation is normally coupled with extensive open-ended interviewing, in which people are asked to explain in depth the rules, norms, practices, and beliefs through which (from their point of view) they conduct their lives. A principal aim of ethnographic study is to discover the premises on which those rules, norms, practices, and beliefs are built.

The use of ethnographic designs by anthropologists has contributed significantly to the building of knowledge about social and cultural variation. And while these designs continue to center on certain long-standing features—extensive face-to-face experience in the community, linguistic competence, participation, and open-ended interviewing—there are newer trends in ethnographic work. One major trend concerns its scale. Ethnographic methods were originally developed largely for studying small-scale groupings known variously as village, folk, primitive, preliterate, or simple societies. Over the decades, these methods have increasingly been applied to the study of small groups and networks within modern (urban, industrial, complex) society, including the contemporary United States. The typical subjects of ethnographic study in modern society are small groups or relatively small social networks, such as outpatient clinics, medical schools, religious cults and churches, ethnically distinctive urban neighborhoods, corporate offices and factories, and government bureaus and legislatures.

As anthropologists moved into the study of modern societies, researchers in other disciplines—particularly sociology, psychology, and political science—began using ethnographic methods to enrich and focus their own insights and findings. At the same time, studies of large-scale structures and processes have been aided by the use of ethnographic methods, since most large-scale changes work their way into the fabric of community, neighborhood, and family, affecting the daily lives of people. Ethnographers have studied, for example, the impact of new industry and new forms of labor in “backward” regions; the impact of state-level birth control policies on ethnic groups; and the impact on residents in a region of building a dam or establishing a nuclear waste dump. Ethnographic methods have also been used to study a number of social processes that lend themselves to its particular techniques of observation and interview—processes such as the formation of class and racial identities, bureaucratic behavior, legislative coalitions and outcomes, and the formation and shifting of consumer tastes.

Advances in structured interviewing (see above) have proven especially powerful in the study of culture. Techniques for understanding kinship systems, concepts of disease, color terminologies, ethnobotany, and ethnozoology have been radically transformed and strengthened by coupling new interviewing methods with modem measurement and scaling techniques (see below). These techniques have made possible more precise comparisons among cultures and identification of the most competent and expert persons within a culture. The next step is to extend these methods to study the ways in which networks of propositions (such as boys like sports, girls like babies) are organized to form belief systems. Much evidence suggests that people typically represent the world around them by means of relatively complex cognitive models that involve interlocking propositions. The techniques of scaling have been used to develop models of how people categorize objects, and they have great potential for further development, to analyze data pertaining to cultural propositions.

Ideological Systems

Perhaps the most fruitful area for the application of ethnographic methods in recent years has been the systematic study of ideologies in modern society. Earlier studies of ideology were in small-scale societies that were rather homogeneous. In these studies researchers could report on a single culture, a uniform system of beliefs and values for the society as a whole. Modern societies are much more diverse both in origins and number of subcultures, related to different regions, communities, occupations, or ethnic groups. Yet these subcultures and ideologies share certain underlying assumptions or at least must find some accommodation with the dominant value and belief systems in the society.

The challenge is to incorporate this greater complexity of structure and process into systematic descriptions and interpretations. One line of work carried out by researchers has tried to track the ways in which ideologies are created, transmitted, and shared among large populations that have traditionally lacked the social mobility and communications technologies of the West. This work has concentrated on large-scale civilizations such as China, India, and Central America. Gradually, the focus has generalized into a concern with the relationship between the great traditions—the central lines of cosmopolitan Confucian, Hindu, or Mayan culture, including aesthetic standards, irrigation technologies, medical systems, cosmologies and calendars, legal codes, poetic genres, and religious doctrines and rites—and the little traditions, those identified with rural, peasant communities. How are the ideological doctrines and cultural values of the urban elites, the great traditions, transmitted to local communities? How are the little traditions, the ideas from the more isolated, less literate, and politically weaker groups in society, transmitted to the elites?

India and southern Asia have been fruitful areas for ethnographic research on these questions. The great Hindu tradition was present in virtually all local contexts through the presence of high-caste individuals in every community. It operated as a pervasive standard of value for all members of society, even in the face of strong little traditions. The situation is surprisingly akin to that of modern, industrialized societies. The central research questions are the degree and the nature of penetration of dominant ideology, even in groups that appear marginal and subordinate and have no strong interest in sharing the dominant value system. In this connection the lowest and poorest occupational caste—the untouchables—serves as an ultimate test of the power of ideology and cultural beliefs to unify complex hierarchical social systems.

Historical Reconstruction

Another current trend in ethnographic methods is its convergence with archival methods. One joining point is the application of descriptive and interpretative procedures used by ethnographers to reconstruct the cultures that created historical documents, diaries, and other records, to interview history, so to speak. For example, a revealing study showed how the Inquisition in the Italian countryside between the 1570s and 1640s gradually worked subtle changes in an ancient fertility cult in peasant communities; the peasant beliefs and rituals assimilated many elements of witchcraft after learning them from their persecutors. A good deal of social history—particularly that of the family—has drawn on discoveries made in the ethnographic study of primitive societies. As described in Chapter 4 , this particular line of inquiry rests on a marriage of ethnographic, archival, and demographic approaches.

Other lines of ethnographic work have focused on the historical dimensions of nonliterate societies. A strikingly successful example in this kind of effort is a study of head-hunting. By combining an interpretation of local oral tradition with the fragmentary observations that were made by outside observers (such as missionaries, traders, colonial officials), historical fluctuations in the rate and significance of head-hunting were shown to be partly in response to such international forces as the great depression and World War II. Researchers are also investigating the ways in which various groups in contemporary societies invent versions of traditions that may or may not reflect the actual history of the group. This process has been observed among elites seeking political and cultural legitimation and among hard-pressed minorities (for example, the Basque in Spain, the Welsh in Great Britain) seeking roots and political mobilization in a larger society.

Ethnography is a powerful method to record, describe, and interpret the system of meanings held by groups and to discover how those meanings affect the lives of group members. It is a method well adapted to the study of situations in which people interact with one another and the researcher can interact with them as well, so that information about meanings can be evoked and observed. Ethnography is especially suited to exploration and elucidation of unsuspected connections; ideally, it is used in combination with other methods—experimental, survey, or comparative—to establish with precision the relative strengths and weaknesses of such connections. By the same token, experimental, survey, and comparative methods frequently yield connections, the meaning of which is unknown; ethnographic methods are a valuable way to determine them.

  • Models for Representing Phenomena

The objective of any science is to uncover the structure and dynamics of the phenomena that are its subject, as they are exhibited in the data. Scientists continuously try to describe possible structures and ask whether the data can, with allowance for errors of measurement, be described adequately in terms of them. Over a long time, various families of structures have recurred throughout many fields of science; these structures have become objects of study in their own right, principally by statisticians, other methodological specialists, applied mathematicians, and philosophers of logic and science. Methods have evolved to evaluate the adequacy of particular structures to account for particular types of data. In the interest of clarity we discuss these structures in this section and the analytical methods used for estimation and evaluation of them in the next section, although in practice they are closely intertwined.

A good deal of mathematical and statistical modeling attempts to describe the relations, both structural and dynamic, that hold among variables that are presumed to be representable by numbers. Such models are applicable in the behavioral and social sciences only to the extent that appropriate numerical measurement can be devised for the relevant variables. In many studies the phenomena in question and the raw data obtained are not intrinsically numerical, but qualitative, such as ethnic group identifications. The identifying numbers used to code such questionnaire categories for computers are no more than labels, which could just as well be letters or colors. One key question is whether there is some natural way to move from the qualitative aspects of such data to a structural representation that involves one of the well-understood numerical or geometric models or whether such an attempt would be inherently inappropriate for the data in question. The decision as to whether or not particular empirical data can be represented in particular numerical or more complex structures is seldom simple, and strong intuitive biases or a priori assumptions about what can and cannot be done may be misleading.

Recent decades have seen rapid and extensive development and application of analytical methods attuned to the nature and complexity of social science data. Examples of nonnumerical modeling are increasing. Moreover, the widespread availability of powerful computers is probably leading to a qualitative revolution, it is affecting not only the ability to compute numerical solutions to numerical models, but also to work out the consequences of all sorts of structures that do not involve numbers at all. The following discussion gives some indication of the richness of past progress and of future prospects although it is by necessity far from exhaustive.

In describing some of the areas of new and continuing research, we have organized this section on the basis of whether the representations are fundamentally probabilistic or not. A further useful distinction is between representations of data that are highly discrete or categorical in nature (such as whether a person is male or female) and those that are continuous in nature (such as a person’s height). Of course, there are intermediate cases involving both types of variables, such as color stimuli that are characterized by discrete hues (red, green) and a continuous luminance measure. Probabilistic models lead very naturally to questions of estimation and statistical evaluation of the correspondence between data and model. Those that are not probabilistic involve additional problems of dealing with and representing sources of variability that are not explicitly modeled. At the present time, scientists understand some aspects of structure, such as geometries, and some aspects of randomness, as embodied in probability models, but do not yet adequately understand how to put the two together in a single unified model. Table 5-1 outlines the way we have organized this discussion and shows where the examples in this section lie.

Table 5-1. A Classification of Structural Models.

A Classification of Structural Models.

Probability Models

Some behavioral and social sciences variables appear to be more or less continuous, for example, utility of goods, loudness of sounds, or risk associated with uncertain alternatives. Many other variables, however, are inherently categorical, often with only two or a few values possible: for example, whether a person is in or out of school, employed or not employed, identifies with a major political party or political ideology. And some variables, such as moral attitudes, are typically measured in research with survey questions that allow only categorical responses. Much of the early probability theory was formulated only for continuous variables; its use with categorical variables was not really justified, and in some cases it may have been misleading. Recently, very significant advances have been made in how to deal explicitly with categorical variables. This section first describes several contemporary approaches to models involving categorical variables, followed by ones involving continuous representations.

Log-Linear Models for Categorical Variables

Many recent models for analyzing categorical data of the kind usually displayed as counts (cell frequencies) in multidimensional contingency tables are subsumed under the general heading of log-linear models, that is, linear models in the natural logarithms of the expected counts in each cell in the table. These recently developed forms of statistical analysis allow one to partition variability due to various sources in the distribution of categorical attributes, and to isolate the effects of particular variables or combinations of them.

Present log-linear models were first developed and used by statisticians and sociologists and then found extensive application in other social and behavioral sciences disciplines. When applied, for instance, to the analysis of social mobility, such models separate factors of occupational supply and demand from other factors that impede or propel movement up and down the social hierarchy. With such models, for example, researchers discovered the surprising fact that occupational mobility patterns are strikingly similar in many nations of the world (even among disparate nations like the United States and most of the Eastern European socialist countries), and from one time period to another, once allowance is made for differences in the distributions of occupations. The log-linear and related kinds of models have also made it possible to identify and analyze systematic differences in mobility among nations and across time. As another example of applications, psychologists and others have used log-linear models to analyze attitudes and their determinants and to link attitudes to behavior. These methods have also diffused to and been used extensively in the medical and biological sciences.

Regression Models for Categorical Variables

Models that permit one variable to be explained or predicted by means of others, called regression models, are the workhorses of much applied statistics; this is especially true when the dependent (explained) variable is continuous. For a two-valued dependent variable, such as alive or dead, models and approximate theory and computational methods for one explanatory variable were developed in biometry about 50 years ago. Computer programs able to handle many explanatory variables, continuous or categorical, are readily available today. Even now, however, the accuracy of the approximate theory on given data is an open question.

Using classical utility theory, economists have developed discrete choice models that turn out to be somewhat related to the log-linear and categorical regression models. Models for limited dependent variables, especially those that cannot take on values above or below a certain level (such as weeks unemployed, number of children, and years of schooling) have been used profitably in economics and in some other areas. For example, censored normal variables (called tobits in economics), in which observed values outside certain limits are simply counted, have been used in studying decisions to go on in school. It will require further research and development to incorporate information about limited ranges of variables fully into the main multivariate methodologies. In addition, with respect to the assumptions about distribution and functional form conventionally made in discrete response models, some new methods are now being developed that show promise of yielding reliable inferences without making unrealistic assumptions; further research in this area promises significant progress.

One problem arises from the fact that many of the categorical variables collected by the major data bases are ordered. For example, attitude surveys frequently use a 3-, 5-, or 7-point scale (from high to low) without specifying numerical intervals between levels. Social class and educational levels are often described by ordered categories. Ignoring order information, which many traditional statistical methods do, may be inefficient or inappropriate, but replacing the categories by successive integers or other arbitrary scores may distort the results. (For additional approaches to this question, see sections below on ordered structures.) Regression-like analysis of ordinal categorical variables is quite well developed, but their multivariate analysis needs further research. New log-bilinear models have been proposed, but to date they deal specifically with only two or three categorical variables. Additional research extending the new models, improving computational algorithms, and integrating the models with work on scaling promise to lead to valuable new knowledge.

Models for Event Histories

Event-history studies yield the sequence of events that respondents to a survey sample experience over a period of time; for example, the timing of marriage, childbearing, or labor force participation. Event-history data can be used to study educational progress, demographic processes (migration, fertility, and mortality), mergers of firms, labor market behavior, and even riots, strikes, and revolutions. As interest in such data has grown, many researchers have turned to models that pertain to changes in probabilities over time to describe when and how individuals move among a set of qualitative states.

Much of the progress in models for event-history data builds on recent developments in statistics and biostatistics for life-time, failure-time, and hazard models. Such models permit the analysis of qualitative transitions in a population whose members are undergoing partially random organic deterioration, mechanical wear, or other risks over time. With the increased complexity of event-history data that are now being collected, and the extension of event-history data bases over very long periods of time, new problems arise that cannot be effectively handled by older types of analysis. Among the problems are repeated transitions, such as between unemployment and employment or marriage and divorce; more than one time variable (such as biological age, calendar time, duration in a stage, and time exposed to some specified condition); latent variables (variables that are explicitly modeled even though not observed); gaps in the data; sample attrition that is not randomly distributed over the categories; and respondent difficulties in recalling the exact timing of events.

Models for Multiple-Item Measurement

For a variety of reasons, researchers typically use multiple measures (or multiple indicators) to represent theoretical concepts. Sociologists, for example, often rely on two or more variables (such as occupation and education) to measure an individual’s socioeconomic position; educational psychologists ordinarily measure a student’s ability with multiple test items. Despite the fact that the basic observations are categorical, in a number of applications this is interpreted as a partitioning of something continuous. For example, in test theory one thinks of the measures of both item difficulty and respondent ability as continuous variables, possibly multidimensional in character.

Classical test theory and newer item-response theories in psychometrics deal with the extraction of information from multiple measures. Testing, which is a major source of data in education and other areas, results in millions of test items stored in archives each year for purposes ranging from college admissions to job-training programs for industry. One goal of research on such test data is to be able to make comparisons among persons or groups even when different test items are used. Although the information collected from each respondent is intentionally incomplete in order to keep the tests short and simple, item-response techniques permit researchers to reconstitute the fragments into an accurate picture of overall group proficiencies. These new methods provide a better theoretical handle on individual differences, and they are expected to be extremely important in developing and using tests. For example, they have been used in attempts to equate different forms of a test given in successive waves during a year, a procedure made necessary in large-scale testing programs by legislation requiring disclosure of test-scoring keys at the time results are given.

An example of the use of item-response theory in a significant research effort is the National Assessment of Educational Progress (NAEP). The goal of this project is to provide accurate, nationally representative information on the average (rather than individual) proficiency of American children in a wide variety of academic subjects as they progress through elementary and secondary school. This approach is an improvement over the use of trend data on university entrance exams, because NAEP estimates of academic achievements (by broad characteristics such as age, grade, region, ethnic background, and so on) are not distorted by the self-selected character of those students who seek admission to college, graduate, and professional programs.

Item-response theory also forms the basis of many new psychometric instruments, known as computerized adaptive testing, currently being implemented by the U.S. military services and under additional development in many testing organizations. In adaptive tests, a computer program selects items for each examinee based upon the examinee’s success with previous items. Generally, each person gets a slightly different set of items and the equivalence of scale scores is established by using item-response theory. Adaptive testing can greatly reduce the number of items needed to achieve a given level of measurement accuracy.

Nonlinear, Nonadditive Models

Virtually all statistical models now in use impose a linearity or additivity assumption of some kind, sometimes after a nonlinear transformation of variables. Imposing these forms on relationships that do not, in fact, possess them may well result in false descriptions and spurious effects. Unwary users, especially of computer software packages, can easily be misled. But more realistic nonlinear and nonadditive multivariate models are becoming available. Extensive use with empirical data is likely to force many changes and enhancements in such models and stimulate quite different approaches to nonlinear multivariate analysis in the next decade.

Geometric and Algebraic Models

Geometric and algebraic models attempt to describe underlying structural relations among variables. In some cases they are part of a probabilistic approach, such as the algebraic models underlying regression or the geometric representations of correlations between items in a technique called factor analysis. In other cases, geometric and algebraic models are developed without explicitly modeling the element of randomness or uncertainty that is always present in the data. Although this latter approach to behavioral and social sciences problems has been less researched than the probabilistic one, there are some advantages in developing the structural aspects independent of the statistical ones. We begin the discussion with some inherently geometric representations and then turn to numerical representations for ordered data.

Although geometry is a huge mathematical topic, little of it seems directly applicable to the kinds of data encountered in the behavioral and social sciences. A major reason is that the primitive concepts normally used in geometry—points, lines, coincidence—do not correspond naturally to the kinds of qualitative observations usually obtained in behavioral and social sciences contexts. Nevertheless, since geometric representations are used to reduce bodies of data, there is a real need to develop a deeper understanding of when such representations of social or psychological data make sense. Moreover, there is a practical need to understand why geometric computer algorithms, such as those of multidimensional scaling, work as well as they apparently do. A better understanding of the algorithms will increase the efficiency and appropriateness of their use, which becomes increasingly important with the widespread availability of scaling programs for microcomputers.

Over the past 50 years several kinds of well-understood scaling techniques have been developed and widely used to assist in the search for appropriate geometric representations of empirical data. The whole field of scaling is now entering a critical juncture in terms of unifying and synthesizing what earlier appeared to be disparate contributions. Within the past few years it has become apparent that several major methods of analysis, including some that are based on probabilistic assumptions, can be unified under the rubric of a single generalized mathematical structure. For example, it has recently been demonstrated that such diverse approaches as nonmetric multidimensional scaling, principal-components analysis, factor analysis, correspondence analysis, and log-linear analysis have more in common in terms of underlying mathematical structure than had earlier been realized.

Nonmetric multidimensional scaling is a method that begins with data about the ordering established by subjective similarity (or nearness) between pairs of stimuli. The idea is to embed the stimuli into a metric space (that is, a geometry with a measure of distance between points) in such a way that distances between points corresponding to stimuli exhibit the same ordering as do the data. This method has been successfully applied to phenomena that, on other grounds, are known to be describable in terms of a specific geometric structure; such applications were used to validate the procedures. Such validation was done, for example, with respect to the perception of colors, which are known to be describable in terms of a particular three-dimensional structure known as the Euclidean color coordinates. Similar applications have been made with Morse code symbols and spoken phonemes. The technique is now used in some biological and engineering applications, as well as in some of the social sciences, as a method of data exploration and simplification.

One question of interest is how to develop an axiomatic basis for various geometries using as a primitive concept an observable such as the subject’s ordering of the relative similarity of one pair of stimuli to another, which is the typical starting point of such scaling. The general task is to discover properties of the qualitative data sufficient to ensure that a mapping into the geometric structure exists and, ideally, to discover an algorithm for finding it. Some work of this general type has been carried out: for example, there is an elegant set of axioms based on laws of color matching that yields the three-dimensional vectorial representation of color space. But the more general problem of understanding the conditions under which the multidimensional scaling algorithms are suitable remains unsolved. In addition, work is needed on understanding more general, non-Euclidean spatial models.

Ordered Factorial Systems

One type of structure common throughout the sciences arises when an ordered dependent variable is affected by two or more ordered independent variables. This is the situation to which regression and analysis-of-variance models are often applied; it is also the structure underlying the familiar physical identities, in which physical units are expressed as products of the powers of other units (for example, energy has the unit of mass times the square of the unit of distance divided by the square of the unit of time).

There are many examples of these types of structures in the behavioral and social sciences. One example is the ordering of preference of commodity bundles—collections of various amounts of commodities—which may be revealed directly by expressions of preference or indirectly by choices among alternative sets of bundles. A related example is preferences among alternative courses of action that involve various outcomes with differing degrees of uncertainty; this is one of the more thoroughly investigated problems because of its potential importance in decision making. A psychological example is the trade-off between delay and amount of reward, yielding those combinations that are equally reinforcing. In a common, applied kind of problem, a subject is given descriptions of people in terms of several factors, for example, intelligence, creativity, diligence, and honesty, and is asked to rate them according to a criterion such as suitability for a particular job.

In all these cases and a myriad of others like them the question is whether the regularities of the data permit a numerical representation. Initially, three types of representations were studied quite fully: the dependent variable as a sum, a product, or a weighted average of the measures associated with the independent variables. The first two representations underlie some psychological and economic investigations, as well as a considerable portion of physical measurement and modeling in classical statistics. The third representation, averaging, has proved most useful in understanding preferences among uncertain outcomes and the amalgamation of verbally described traits, as well as some physical variables.

For each of these three cases—adding, multiplying, and averaging—researchers know what properties or axioms of order the data must satisfy for such a numerical representation to be appropriate. On the assumption that one or another of these representations exists, and using numerical ratings by subjects instead of ordering, a scaling technique called functional measurement (referring to the function that describes how the dependent variable relates to the independent ones) has been developed and applied in a number of domains. What remains problematic is how to encompass at the ordinal level the fact that some random error intrudes into nearly all observations and then to show how that randomness is represented at the numerical level; this continues to be an unresolved and challenging research issue.

During the past few years considerable progress has been made in understanding certain representations inherently different from those just discussed. The work has involved three related thrusts. The first is a scheme of classifying structures according to how uniquely their representation is constrained. The three classical numerical representations are known as ordinal, interval, and ratio scale types. For systems with continuous numerical representations and of scale type at least as rich as the ratio one, it has been shown that only one additional type can exist. A second thrust is to accept structural assumptions, like factorial ones, and to derive for each scale the possible functional relations among the independent variables. And the third thrust is to develop axioms for the properties of an order relation that leads to the possible representations. Much is now known about the possible nonadditive representations of both the multifactor case and the one where stimuli can be combined, such as combining sound intensities.

Closely related to this classification of structures is the question: What statements, formulated in terms of the measures arising in such representations, can be viewed as meaningful in the sense of corresponding to something empirical? Statements here refer to any scientific assertions, including statistical ones, formulated in terms of the measures of the variables and logical and mathematical connectives. These are statements for which asserting truth or falsity makes sense. In particular, statements that remain invariant under certain symmetries of structure have played an important role in classical geometry, dimensional analysis in physics, and in relating measurement and statistical models applied to the same phenomenon. In addition, these ideas have been used to construct models in more formally developed areas of the behavioral and social sciences, such as psychophysics. Current research has emphasized the communality of these historically independent developments and is attempting both to uncover systematic, philosophically sound arguments as to why invariance under symmetries is as important as it appears to be and to understand what to do when structures lack symmetry, as, for example, when variables have an inherent upper bound.

Many subjects do not seem to be correctly represented in terms of distances in continuous geometric space. Rather, in some cases, such as the relations among meanings of words—which is of great interest in the study of memory representations—a description in terms of tree-like, hierarchial structures appears to be more illuminating. This kind of description appears appropriate both because of the categorical nature of the judgments and the hierarchial, rather than trade-off, nature of the structure. Individual items are represented as the terminal nodes of the tree, and groupings by different degrees of similarity are shown as intermediate nodes, with the more general groupings occurring nearer the root of the tree. Clustering techniques, requiring considerable computational power, have been and are being developed. Some successful applications exist, but much more refinement is anticipated.

Network Models

Several other lines of advanced modeling have progressed in recent years, opening new possibilities for empirical specification and testing of a variety of theories. In social network data, relationships among units, rather than the units themselves, are the primary objects of study: friendships among persons, trade ties among nations, cocitation clusters among research scientists, interlocking among corporate boards of directors. Special models for social network data have been developed in the past decade, and they give, among other things, precise new measures of the strengths of relational ties among units. A major challenge in social network data at present is to handle the statistical dependence that arises when the units sampled are related in complex ways.

  • Statistical Inference and Analysis

As was noted earlier, questions of design, representation, and analysis are intimately intertwined. Some issues of inference and analysis have been discussed above as related to specific data collection and modeling approaches. This section discusses some more general issues of statistical inference and advances in several current approaches to them.

Causal Inference

Behavioral and social scientists use statistical methods primarily to infer the effects of treatments, interventions, or policy factors. Previous chapters included many instances of causal knowledge gained this way. As noted above, the large experimental study of alternative health care financing discussed in Chapter 2 relied heavily on statistical principles and techniques, including randomization, in the design of the experiment and the analysis of the resulting data. Sophisticated designs were necessary in order to answer a variety of questions in a single large study without confusing the effects of one program difference (such as prepayment or fee for service) with the effects of another (such as different levels of deductible costs), or with effects of unobserved variables (such as genetic differences). Statistical techniques were also used to ascertain which results applied across the whole enrolled population and which were confined to certain subgroups (such as individuals with high blood pressure) and to translate utilization rates across different programs and types of patients into comparable overall dollar costs and health outcomes for alternative financing options.

A classical experiment, with systematic but randomly assigned variation of the variables of interest (or some reasonable approach to this), is usually considered the most rigorous basis from which to draw such inferences. But random samples or randomized experimental manipulations are not always feasible or ethically acceptable. Then, causal inferences must be drawn from observational studies, which, however well designed, are less able to ensure that the observed (or inferred) relationships among variables provide clear evidence on the underlying mechanisms of cause and effect.

Certain recurrent challenges have been identified in studying causal inference. One challenge arises from the selection of background variables to be measured, such as the sex, nativity, or parental religion of individuals in a comparative study of how education affects occupational success. The adequacy of classical methods of matching groups in background variables and adjusting for covariates needs further investigation. Statistical adjustment of biases linked to measured background variables is possible, but it can become complicated. Current work in adjustment for selectivity bias is aimed at weakening implausible assumptions, such as normality, when carrying out these adjustments. Even after adjustment has been made for the measured background variables, other, unmeasured variables are almost always still affecting the results (such as family transfers of wealth or reading habits). Analyses of how the conclusions might change if such unmeasured variables could be taken into account is essential in attempting to make causal inferences from an observational study, and systematic work on useful statistical models for such sensitivity analyses is just beginning.

The third important issue arises from the necessity for distinguishing among competing hypotheses when the explanatory variables are measured with different degrees of precision. Both the estimated size and significance of an effect are diminished when it has large measurement error, and the coefficients of other correlated variables are affected even when the other variables are measured perfectly. Similar results arise from conceptual errors, when one measures only proxies for a theoretical construct (such as years of education to represent amount of learning). In some cases, there are procedures for simultaneously or iteratively estimating both the precision of complex measures and their effect on a particular criterion.

Although complex models are often necessary to infer causes, once their output is available, it should be translated into understandable displays for evaluation. Results that depend on the accuracy of a multivariate model and the associated software need to be subjected to appropriate checks, including the evaluation of graphical displays, group comparisons, and other analyses.

New Statistical Techniques

Internal resampling.

One of the great contributions of twentieth-century statistics was to demonstrate how a properly drawn sample of sufficient size, even if it is only a tiny fraction of the population of interest, can yield very good estimates of most population characteristics. When enough is known at the outset about the characteristic in question—for example, that its distribution is roughly normal—inference from the sample data to the population as a whole is straightforward, and one can easily compute measures of the certainty of inference, a common example being the 95 percent confidence interval around an estimate. But population shapes are sometimes unknown or uncertain, and so inference procedures cannot be so simple. Furthermore, more often than not, it is difficult to assess even the degree of uncertainty associated with complex data and with the statistics needed to unravel complex social and behavioral phenomena.

Internal resampling methods attempt to assess this uncertainty by generating a number of simulated data sets similar to the one actually observed. The definition of similar is crucial, and many methods that exploit different types of similarity have been devised. These methods provide researchers the freedom to choose scientifically appropriate procedures and to replace procedures that are valid under assumed distributional shapes with ones that are not so restricted. Flexible and imaginative computer simulation is the key to these methods. For a simple random sample, the “bootstrap” method repeatedly resamples the obtained data (with replacement) to generate a distribution of possible data sets. The distribution of any estimator can thereby be simulated and measures of the certainty of inference be derived. The “jackknife” method repeatedly omits a fraction of the data and in this way generates a distribution of possible data sets that can also be used to estimate variability. These methods can also be used to remove or reduce bias. For example, the ratio-estimator, a statistic that is commonly used in analyzing sample surveys and censuses, is known to be biased, and the jackknife method can usually remedy this defect. The methods have been extended to other situations and types of analysis, such as multiple regression.

There are indications that under relatively general conditions, these methods, and others related to them, allow more accurate estimates of the uncertainty of inferences than do the traditional ones that are based on assumed (usually, normal) distributions when that distributional assumption is unwarranted. For complex samples, such internal resampling or subsampling facilitates estimating the sampling variances of complex statistics.

An older and simpler, but equally important, idea is to use one independent subsample in searching the data to develop a model and at least one separate subsample for estimating and testing a selected model. Otherwise, it is next to impossible to make allowances for the excessively close fitting of the model that occurs as a result of the creative search for the exact characteristics of the sample data—characteristics that are to some degree random and will not predict well to other samples.

Robust Techniques

Many technical assumptions underlie the analysis of data. Some, like the assumption that each item in a sample is drawn independently of other items, can be weakened when the data are sufficiently structured to admit simple alternative models, such as serial correlation. Usually, these models require that a few parameters be estimated. Assumptions about shapes of distributions, normality being the most common, have proved to be particularly important, and considerable progress has been made in dealing with the consequences of different assumptions.

More recently, robust techniques have been designed that permit sharp, valid discriminations among possible values of parameters of central tendency for a wide variety of alternative distributions by reducing the weight given to occasional extreme deviations. It turns out that by giving up, say, 10 percent of the discrimination that could be provided under the rather unrealistic assumption of normality, one can greatly improve performance in more realistic situations, especially when unusually large deviations are relatively common.

These valuable modifications of classical statistical techniques have been extended to multiple regression, in which procedures of iterative reweighting can now offer relatively good performance for a variety of underlying distributional shapes. They should be extended to more general schemes of analysis.

In some contexts—notably the most classical uses of analysis of variance—the use of adequate robust techniques should help to bring conventional statistical practice closer to the best standards that experts can now achieve.

Many Interrelated Parameters

In trying to give a more accurate representation of the real world than is possible with simple models, researchers sometimes use models with many parameters, all of which must be estimated from the data. Classical principles of estimation, such as straightforward maximum-likelihood, do not yield reliable estimates unless either the number of observations is much larger than the number of parameters to be estimated or special designs are used in conjunction with strong assumptions. Bayesian methods do not draw a distinction between fixed and random parameters, and so may be especially appropriate for such problems.

A variety of statistical methods have recently been developed that can be interpreted as treating many of the parameters as or similar to random quantities, even if they are regarded as representing fixed quantities to be estimated. Theory and practice demonstrate that such methods can improve the simpler fixed-parameter methods from which they evolved, especially when the number of observations is not large relative to the number of parameters. Successful applications include college and graduate school admissions, where quality of previous school is treated as a random parameter when the data are insufficient to separately estimate it well. Efforts to create appropriate models using this general approach for small-area estimation and undercount adjustment in the census are important potential applications.

Missing Data

In data analysis, serious problems can arise when certain kinds of (quantitative or qualitative) information is partially or wholly missing. Various approaches to dealing with these problems have been or are being developed. One of the methods developed recently for dealing with certain aspects of missing data is called multiple imputation: each missing value in a data set is replaced by several values representing a range of possibilities, with statistical dependence among missing values reflected by linkage among their replacements. It is currently being used to handle a major problem of incompatibility between the 1980 and previous Bureau of Census public-use tapes with respect to occupation codes. The extension of these techniques to address such problems as nonresponse to income questions in the Current Population Survey has been examined in exploratory applications with great promise.

Computer Packages and Expert Systems

The development of high-speed computing and data handling has fundamentally changed statistical analysis. Methodologies for all kinds of situations are rapidly being developed and made available for use in computer packages that may be incorporated into interactive expert systems. This computing capability offers the hope that much data analyses will be more carefully and more effectively done than previously and that better strategies for data analysis will move from the practice of expert statisticians, some of whom may not have tried to articulate their own strategies, to both wide discussion and general use.

But powerful tools can be hazardous, as witnessed by occasional dire misuses of existing statistical packages. Until recently the only strategies available were to train more expert methodologists or to train substantive scientists in more methodology, but without the updating of their training it tends to become outmoded. Now there is the opportunity to capture in expert systems the current best methodological advice and practice. If that opportunity is exploited, standard methodological training of social scientists will shift to emphasizing strategies in using good expert systems—including understanding the nature and importance of the comments it provides—rather than in how to patch together something on one’s own. With expert systems, almost all behavioral and social scientists should become able to conduct any of the more common styles of data analysis more effectively and with more confidence than all but the most expert do today. However, the difficulties in developing expert systems that work as hoped for should not be underestimated. Human experts cannot readily explicate all of the complex cognitive network that constitutes an important part of their knowledge. As a result, the first attempts at expert systems were not especially successful (as discussed in Chapter 1 ). Additional work is expected to overcome these limitations, but it is not clear how long it will take.

Exploratory Analysis and Graphic Presentation

The formal focus of much statistics research in the middle half of the twentieth century was on procedures to confirm or reject precise, a priori hypotheses developed in advance of collecting data—that is, procedures to determine statistical significance. There was relatively little systematic work on realistically rich strategies for the applied researcher to use when attacking real-world problems with their multiplicity of objectives and sources of evidence. More recently, a species of quantitative detective work, called exploratory data analysis, has received increasing attention. In this approach, the researcher seeks out possible quantitative relations that may be present in the data. The techniques are flexible and include an important component of graphic representations. While current techniques have evolved for single responses in situations of modest complexity, extensions to multiple responses and to single responses in more complex situations are now possible.

Graphic and tabular presentation is a research domain in active renaissance, stemming in part from suggestions for new kinds of graphics made possible by computer capabilities, for example, hanging histograms and easily assimilated representations of numerical vectors. Research on data presentation has been carried out by statisticians, psychologists, cartographers, and other specialists, and attempts are now being made to incorporate findings and concepts from linguistics, industrial and publishing design, aesthetics, and classification studies in library science. Another influence has been the rapidly increasing availability of powerful computational hardware and software, now available even on desktop computers. These ideas and capabilities are leading to an increasing number of behavioral experiments with substantial statistical input. Nonetheless, criteria of good graphic and tabular practice are still too much matters of tradition and dogma, without adequate empirical evidence or theoretical coherence. To broaden the respective research outlooks and vigorously develop such evidence and coherence, extended collaborations between statistical and mathematical specialists and other scientists are needed, a major objective being to understand better the visual and cognitive processes (see Chapter 1 ) relevant to effective use of graphic or tabular approaches.

Combining Evidence

Combining evidence from separate sources is a recurrent scientific task, and formal statistical methods for doing so go back 30 years or more. These methods include the theory and practice of combining tests of individual hypotheses, sequential design and analysis of experiments, comparisons of laboratories, and Bayesian and likelihood paradigms.

There is now growing interest in more ambitious analytical syntheses, which are often called meta-analyses. One stimulus has been the appearance of syntheses explicitly combining all existing investigations in particular fields, such as prison parole policy, classroom size in primary schools, cooperative studies of therapeutic treatments for coronary heart disease, early childhood education interventions, and weather modification experiments. In such fields, a serious approach to even the simplest question—how to put together separate estimates of effect size from separate investigations—leads quickly to difficult and interesting issues. One issue involves the lack of independence among the available studies, due, for example, to the effect of influential teachers on the research projects of their students. Another issue is selection bias, because only some of the studies carried out, usually those with “significant” findings, are available and because the literature search may not find out all relevant studies that are available. In addition, experts agree, although informally, that the quality of studies from different laboratories and facilities differ appreciably and that such information probably should be taken into account. Inevitably, the studies to be included used different designs and concepts and controlled or measured different variables, making it difficult to know how to combine them.

Rich, informal syntheses, allowing for individual appraisal, may be better than catch-all formal modeling, but the literature on formal meta-analytic models is growing and may be an important area of discovery in the next decade, relevant both to statistical analysis per se and to improved syntheses in the behavioral and social and other sciences.

  • Opportunities and Needs

This chapter has cited a number of methodological topics associated with behavioral and social sciences research that appear to be particularly active and promising at the present time. As throughout the report, they constitute illustrative examples of what the committee believes to be important areas of research in the coming decade. In this section we describe recommendations for an additional $16 million annually to facilitate both the development of methodologically oriented research and, equally important, its communication throughout the research community.

Methodological studies, including early computer implementations, have for the most part been carried out by individual investigators with small teams of colleagues or students. Occasionally, such research has been associated with quite large substantive projects, and some of the current developments of computer packages, graphics, and expert systems clearly require large, organized efforts, which often lie at the boundary between grant-supported work and commercial development. As such research is often a key to understanding complex bodies of behavioral and social sciences data, it is vital to the health of these sciences that research support continue on methods relevant to problems of modeling, statistical analysis, representation, and related aspects of behavioral and social sciences data. Researchers and funding agencies should also be especially sympathetic to the inclusion of such basic methodological work in large experimental and longitudinal studies. Additional funding for work in this area, both in terms of individual research grants on methodological issues and in terms of augmentation of large projects to include additional methodological aspects, should be provided largely in the form of investigator-initiated project grants.

Ethnographic and comparative studies also typically rely on project grants to individuals and small groups of investigators. While this type of support should continue, provision should also be made to facilitate the execution of studies using these methods by research teams and to provide appropriate methodological training through the mechanisms outlined below.

Overall, we recommend an increase of $4 million in the level of investigator-initiated grant support for methodological work. An additional $1 million should be devoted to a program of centers for methodological research.

Many of the new methods and models described in the chapter, if and when adopted to any large extent, will demand substantially greater amounts of research devoted to appropriate analysis and computer implementation. New user interfaces and numerical algorithms will need to be designed and new computer programs written. And even when generally available methods (such as maximum-likelihood) are applicable, model application still requires skillful development in particular contexts. Many of the familiar general methods that are applied in the statistical analysis of data are known to provide good approximations when sample sizes are sufficiently large, but their accuracy varies with the specific model and data used. To estimate the accuracy requires extensive numerical exploration. Investigating the sensitivity of results to the assumptions of the models is important and requires still more creative, thoughtful research. It takes substantial efforts of these kinds to bring any new model on line, and the need becomes increasingly important and difficult as statistical models move toward greater realism, usefulness, complexity, and availability in computer form. More complexity in turn will increase the demand for computational power. Although most of this demand can be satisfied by increasingly powerful desktop computers, some access to mainframe and even supercomputers will be needed in selected cases. We recommend an additional $4 million annually to cover the growth in computational demands for model development and testing.

Interaction and cooperation between the developers and the users of statistical and mathematical methods need continual stimulation—both ways. Efforts should be made to teach new methods to a wider variety of potential users than is now the case. Several ways appear effective for methodologists to communicate to empirical scientists: running summer training programs for graduate students, faculty, and other researchers; encouraging graduate students, perhaps through degree requirements, to make greater use of the statistical, mathematical, and methodological resources at their own or affiliated universities; associating statistical and mathematical research specialists with large-scale data collection projects; and developing statistical packages that incorporate expert systems in applying the methods.

Methodologists, in turn, need to become more familiar with the problems actually faced by empirical scientists in the laboratory and especially in the field. Several ways appear useful for communication in this direction: encouraging graduate students in methodological specialties, perhaps through degree requirements, to work directly on empirical research; creating postdoctoral fellowships aimed at integrating such specialists into ongoing data collection projects; and providing for large data collection projects to engage relevant methodological specialists. In addition, research on and development of statistical packages and expert systems should be encouraged to involve the multidisciplinary collaboration of experts with experience in statistical, computer, and cognitive sciences.

A final point has to do with the promise held out by bringing different research methods to bear on the same problems. As our discussions of research methods in this and other chapters have emphasized, different methods have different powers and limitations, and each is designed especially to elucidate one or more particular facets of a subject. An important type of interdisciplinary work is the collaboration of specialists in different research methodologies on a substantive issue, examples of which have been noted throughout this report. If more such research were conducted cooperatively, the power of each method pursued separately would be increased. To encourage such multidisciplinary work, we recommend increased support for fellowships, research workshops, and training institutes.

Funding for fellowships, both pre-and postdoctoral, should be aimed at giving methodologists experience with substantive problems and at upgrading the methodological capabilities of substantive scientists. Such targeted fellowship support should be increased by $4 million annually, of which $3 million should be for predoctoral fellowships emphasizing the enrichment of methodological concentrations. The new support needed for research workshops is estimated to be $1 million annually. And new support needed for various kinds of advanced training institutes aimed at rapidly diffusing new methodological findings among substantive scientists is estimated to be $2 million annually.

  • Cite this Page National Research Council; Division of Behavioral and Social Sciences and Education; Commission on Behavioral and Social Sciences and Education; Committee on Basic Research in the Behavioral and Social Sciences; Gerstein DR, Luce RD, Smelser NJ, et al., editors. The Behavioral and Social Sciences: Achievements and Opportunities. Washington (DC): National Academies Press (US); 1988. 5, Methods of Data Collection, Representation, and Analysis.
  • PDF version of this title (16M)

In this Page

Other titles in this collection.

  • The National Academies Collection: Reports funded by National Institutes of Health

Recent Activity

  • Methods of Data Collection, Representation, and Analysis - The Behavioral and So... Methods of Data Collection, Representation, and Analysis - The Behavioral and Social Sciences: Achievements and Opportunities

Your browsing activity is empty.

Activity recording is turned off.

Turn recording back on

Connect with NLM

National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894

Web Policies FOIA HHS Vulnerability Disclosure

Help Accessibility Careers

statistics

  • Privacy Policy

Research Method

Home » Data Collection – Methods Types and Examples

Data Collection – Methods Types and Examples

Table of Contents

Data collection

Data Collection

Definition:

Data collection is the process of gathering and collecting information from various sources to analyze and make informed decisions based on the data collected. This can involve various methods, such as surveys, interviews, experiments, and observation.

In order for data collection to be effective, it is important to have a clear understanding of what data is needed and what the purpose of the data collection is. This can involve identifying the population or sample being studied, determining the variables to be measured, and selecting appropriate methods for collecting and recording data.

Types of Data Collection

Types of Data Collection are as follows:

Primary Data Collection

Primary data collection is the process of gathering original and firsthand information directly from the source or target population. This type of data collection involves collecting data that has not been previously gathered, recorded, or published. Primary data can be collected through various methods such as surveys, interviews, observations, experiments, and focus groups. The data collected is usually specific to the research question or objective and can provide valuable insights that cannot be obtained from secondary data sources. Primary data collection is often used in market research, social research, and scientific research.

Secondary Data Collection

Secondary data collection is the process of gathering information from existing sources that have already been collected and analyzed by someone else, rather than conducting new research to collect primary data. Secondary data can be collected from various sources, such as published reports, books, journals, newspapers, websites, government publications, and other documents.

Qualitative Data Collection

Qualitative data collection is used to gather non-numerical data such as opinions, experiences, perceptions, and feelings, through techniques such as interviews, focus groups, observations, and document analysis. It seeks to understand the deeper meaning and context of a phenomenon or situation and is often used in social sciences, psychology, and humanities. Qualitative data collection methods allow for a more in-depth and holistic exploration of research questions and can provide rich and nuanced insights into human behavior and experiences.

Quantitative Data Collection

Quantitative data collection is a used to gather numerical data that can be analyzed using statistical methods. This data is typically collected through surveys, experiments, and other structured data collection methods. Quantitative data collection seeks to quantify and measure variables, such as behaviors, attitudes, and opinions, in a systematic and objective way. This data is often used to test hypotheses, identify patterns, and establish correlations between variables. Quantitative data collection methods allow for precise measurement and generalization of findings to a larger population. It is commonly used in fields such as economics, psychology, and natural sciences.

Data Collection Methods

Data Collection Methods are as follows:

Surveys involve asking questions to a sample of individuals or organizations to collect data. Surveys can be conducted in person, over the phone, or online.

Interviews involve a one-on-one conversation between the interviewer and the respondent. Interviews can be structured or unstructured and can be conducted in person or over the phone.

Focus Groups

Focus groups are group discussions that are moderated by a facilitator. Focus groups are used to collect qualitative data on a specific topic.

Observation

Observation involves watching and recording the behavior of people, objects, or events in their natural setting. Observation can be done overtly or covertly, depending on the research question.

Experiments

Experiments involve manipulating one or more variables and observing the effect on another variable. Experiments are commonly used in scientific research.

Case Studies

Case studies involve in-depth analysis of a single individual, organization, or event. Case studies are used to gain detailed information about a specific phenomenon.

Secondary Data Analysis

Secondary data analysis involves using existing data that was collected for another purpose. Secondary data can come from various sources, such as government agencies, academic institutions, or private companies.

How to Collect Data

The following are some steps to consider when collecting data:

  • Define the objective : Before you start collecting data, you need to define the objective of the study. This will help you determine what data you need to collect and how to collect it.
  • Identify the data sources : Identify the sources of data that will help you achieve your objective. These sources can be primary sources, such as surveys, interviews, and observations, or secondary sources, such as books, articles, and databases.
  • Determine the data collection method : Once you have identified the data sources, you need to determine the data collection method. This could be through online surveys, phone interviews, or face-to-face meetings.
  • Develop a data collection plan : Develop a plan that outlines the steps you will take to collect the data. This plan should include the timeline, the tools and equipment needed, and the personnel involved.
  • Test the data collection process: Before you start collecting data, test the data collection process to ensure that it is effective and efficient.
  • Collect the data: Collect the data according to the plan you developed in step 4. Make sure you record the data accurately and consistently.
  • Analyze the data: Once you have collected the data, analyze it to draw conclusions and make recommendations.
  • Report the findings: Report the findings of your data analysis to the relevant stakeholders. This could be in the form of a report, a presentation, or a publication.
  • Monitor and evaluate the data collection process: After the data collection process is complete, monitor and evaluate the process to identify areas for improvement in future data collection efforts.
  • Ensure data quality: Ensure that the collected data is of high quality and free from errors. This can be achieved by validating the data for accuracy, completeness, and consistency.
  • Maintain data security: Ensure that the collected data is secure and protected from unauthorized access or disclosure. This can be achieved by implementing data security protocols and using secure storage and transmission methods.
  • Follow ethical considerations: Follow ethical considerations when collecting data, such as obtaining informed consent from participants, protecting their privacy and confidentiality, and ensuring that the research does not cause harm to participants.
  • Use appropriate data analysis methods : Use appropriate data analysis methods based on the type of data collected and the research objectives. This could include statistical analysis, qualitative analysis, or a combination of both.
  • Record and store data properly: Record and store the collected data properly, in a structured and organized format. This will make it easier to retrieve and use the data in future research or analysis.
  • Collaborate with other stakeholders : Collaborate with other stakeholders, such as colleagues, experts, or community members, to ensure that the data collected is relevant and useful for the intended purpose.

Applications of Data Collection

Data collection methods are widely used in different fields, including social sciences, healthcare, business, education, and more. Here are some examples of how data collection methods are used in different fields:

  • Social sciences : Social scientists often use surveys, questionnaires, and interviews to collect data from individuals or groups. They may also use observation to collect data on social behaviors and interactions. This data is often used to study topics such as human behavior, attitudes, and beliefs.
  • Healthcare : Data collection methods are used in healthcare to monitor patient health and track treatment outcomes. Electronic health records and medical charts are commonly used to collect data on patients’ medical history, diagnoses, and treatments. Researchers may also use clinical trials and surveys to collect data on the effectiveness of different treatments.
  • Business : Businesses use data collection methods to gather information on consumer behavior, market trends, and competitor activity. They may collect data through customer surveys, sales reports, and market research studies. This data is used to inform business decisions, develop marketing strategies, and improve products and services.
  • Education : In education, data collection methods are used to assess student performance and measure the effectiveness of teaching methods. Standardized tests, quizzes, and exams are commonly used to collect data on student learning outcomes. Teachers may also use classroom observation and student feedback to gather data on teaching effectiveness.
  • Agriculture : Farmers use data collection methods to monitor crop growth and health. Sensors and remote sensing technology can be used to collect data on soil moisture, temperature, and nutrient levels. This data is used to optimize crop yields and minimize waste.
  • Environmental sciences : Environmental scientists use data collection methods to monitor air and water quality, track climate patterns, and measure the impact of human activity on the environment. They may use sensors, satellite imagery, and laboratory analysis to collect data on environmental factors.
  • Transportation : Transportation companies use data collection methods to track vehicle performance, optimize routes, and improve safety. GPS systems, on-board sensors, and other tracking technologies are used to collect data on vehicle speed, fuel consumption, and driver behavior.

Examples of Data Collection

Examples of Data Collection are as follows:

  • Traffic Monitoring: Cities collect real-time data on traffic patterns and congestion through sensors on roads and cameras at intersections. This information can be used to optimize traffic flow and improve safety.
  • Social Media Monitoring : Companies can collect real-time data on social media platforms such as Twitter and Facebook to monitor their brand reputation, track customer sentiment, and respond to customer inquiries and complaints in real-time.
  • Weather Monitoring: Weather agencies collect real-time data on temperature, humidity, air pressure, and precipitation through weather stations and satellites. This information is used to provide accurate weather forecasts and warnings.
  • Stock Market Monitoring : Financial institutions collect real-time data on stock prices, trading volumes, and other market indicators to make informed investment decisions and respond to market fluctuations in real-time.
  • Health Monitoring : Medical devices such as wearable fitness trackers and smartwatches can collect real-time data on a person’s heart rate, blood pressure, and other vital signs. This information can be used to monitor health conditions and detect early warning signs of health issues.

Purpose of Data Collection

The purpose of data collection can vary depending on the context and goals of the study, but generally, it serves to:

  • Provide information: Data collection provides information about a particular phenomenon or behavior that can be used to better understand it.
  • Measure progress : Data collection can be used to measure the effectiveness of interventions or programs designed to address a particular issue or problem.
  • Support decision-making : Data collection provides decision-makers with evidence-based information that can be used to inform policies, strategies, and actions.
  • Identify trends : Data collection can help identify trends and patterns over time that may indicate changes in behaviors or outcomes.
  • Monitor and evaluate : Data collection can be used to monitor and evaluate the implementation and impact of policies, programs, and initiatives.

When to use Data Collection

Data collection is used when there is a need to gather information or data on a specific topic or phenomenon. It is typically used in research, evaluation, and monitoring and is important for making informed decisions and improving outcomes.

Data collection is particularly useful in the following scenarios:

  • Research : When conducting research, data collection is used to gather information on variables of interest to answer research questions and test hypotheses.
  • Evaluation : Data collection is used in program evaluation to assess the effectiveness of programs or interventions, and to identify areas for improvement.
  • Monitoring : Data collection is used in monitoring to track progress towards achieving goals or targets, and to identify any areas that require attention.
  • Decision-making: Data collection is used to provide decision-makers with information that can be used to inform policies, strategies, and actions.
  • Quality improvement : Data collection is used in quality improvement efforts to identify areas where improvements can be made and to measure progress towards achieving goals.

Characteristics of Data Collection

Data collection can be characterized by several important characteristics that help to ensure the quality and accuracy of the data gathered. These characteristics include:

  • Validity : Validity refers to the accuracy and relevance of the data collected in relation to the research question or objective.
  • Reliability : Reliability refers to the consistency and stability of the data collection process, ensuring that the results obtained are consistent over time and across different contexts.
  • Objectivity : Objectivity refers to the impartiality of the data collection process, ensuring that the data collected is not influenced by the biases or personal opinions of the data collector.
  • Precision : Precision refers to the degree of accuracy and detail in the data collected, ensuring that the data is specific and accurate enough to answer the research question or objective.
  • Timeliness : Timeliness refers to the efficiency and speed with which the data is collected, ensuring that the data is collected in a timely manner to meet the needs of the research or evaluation.
  • Ethical considerations : Ethical considerations refer to the ethical principles that must be followed when collecting data, such as ensuring confidentiality and obtaining informed consent from participants.

Advantages of Data Collection

There are several advantages of data collection that make it an important process in research, evaluation, and monitoring. These advantages include:

  • Better decision-making : Data collection provides decision-makers with evidence-based information that can be used to inform policies, strategies, and actions, leading to better decision-making.
  • Improved understanding: Data collection helps to improve our understanding of a particular phenomenon or behavior by providing empirical evidence that can be analyzed and interpreted.
  • Evaluation of interventions: Data collection is essential in evaluating the effectiveness of interventions or programs designed to address a particular issue or problem.
  • Identifying trends and patterns: Data collection can help identify trends and patterns over time that may indicate changes in behaviors or outcomes.
  • Increased accountability: Data collection increases accountability by providing evidence that can be used to monitor and evaluate the implementation and impact of policies, programs, and initiatives.
  • Validation of theories: Data collection can be used to test hypotheses and validate theories, leading to a better understanding of the phenomenon being studied.
  • Improved quality: Data collection is used in quality improvement efforts to identify areas where improvements can be made and to measure progress towards achieving goals.

Limitations of Data Collection

While data collection has several advantages, it also has some limitations that must be considered. These limitations include:

  • Bias : Data collection can be influenced by the biases and personal opinions of the data collector, which can lead to inaccurate or misleading results.
  • Sampling bias : Data collection may not be representative of the entire population, resulting in sampling bias and inaccurate results.
  • Cost : Data collection can be expensive and time-consuming, particularly for large-scale studies.
  • Limited scope: Data collection is limited to the variables being measured, which may not capture the entire picture or context of the phenomenon being studied.
  • Ethical considerations : Data collection must follow ethical principles to protect the rights and confidentiality of the participants, which can limit the type of data that can be collected.
  • Data quality issues: Data collection may result in data quality issues such as missing or incomplete data, measurement errors, and inconsistencies.
  • Limited generalizability : Data collection may not be generalizable to other contexts or populations, limiting the generalizability of the findings.

About the author

' src=

Muhammad Hassan

Researcher, Academic Writer, Web developer

You may also like

Delimitations

Delimitations in Research – Types, Examples and...

Research Process

Research Process – Steps, Examples and Tips

Research Design

Research Design – Types, Methods and Examples

Institutional Review Board (IRB)

Institutional Review Board – Application Sample...

Evaluating Research

Evaluating Research – Process, Examples and...

Research Questions

Research Questions – Types, Examples and Writing...

Logo for Open Educational Resources

Chapter 20. Presentations

Introduction.

If a tree falls in a forest, and no one is around to hear it, does it make a sound? If a qualitative study is conducted, but it is not presented (in words or text), did it really happen? Perhaps not. Findings from qualitative research are inextricably tied up with the way those findings are presented. These presentations do not always need to be in writing, but they need to happen. Think of ethnographies, for example, and their thick descriptions of a particular culture. Witnessing a culture, taking fieldnotes, talking to people—none of those things in and of themselves convey the culture. Or think about an interview-based phenomenological study. Boxes of interview transcripts might be interesting to read through, but they are not a completed study without the intervention of hours of analysis and careful selection of exemplary quotes to illustrate key themes and final arguments and theories. And unlike much quantitative research in the social sciences, where the final write-up neatly reports the results of analyses, the way the “write-up” happens is an integral part of the analysis in qualitative research. Once again, we come back to the messiness and stubborn unlinearity of qualitative research. From the very beginning, when designing the study, imagining the form of its ultimate presentation is helpful.

Because qualitative researchers are motivated by understanding and conveying meaning, effective communication is not only an essential skill but a fundamental facet of the entire research project. Ethnographers must be able to convey a certain sense of verisimilitude, the appearance of true reality. Those employing interviews must faithfully depict the key meanings of the people they interviewed in a way that rings true to those people, even if the end result surprises them. And all researchers must strive for clarity in their publications so that various audiences can understand what was found and why it is important. This chapter will address how to organize various kinds of presentations for different audiences so that your results can be appreciated and understood.

In the world of academic science, social or otherwise, the primary audience for a study’s results is usually the academic community, and the primary venue for communicating to this audience is the academic journal. Journal articles are typically fifteen to thirty pages in length (8,000 to 12,000 words). Although qualitative researchers often write and publish journal articles—indeed, there are several journals dedicated entirely to qualitative research [1] —the best writing by qualitative researchers often shows up in books. This is because books, running from 80,000 to 150,000 words in length, allow the researcher to develop the material fully. You have probably read some of these in various courses you have taken, not realizing what they are. I have used examples of such books throughout this text, beginning with the three profiles in the introductory chapter. In some instances, the chapters in these books began as articles in academic journals (another indication that the journal article format somewhat limits what can be said about the study overall).

While the article and the book are “final” products of qualitative research, there are actually a few other presentation formats that are used along the way. At the very beginning of a research study, it is often important to have a written research proposal not just to clarify to yourself what you will be doing and when but also to justify your research to an outside agency, such as an institutional review board (IRB; see chapter 12), or to a potential funder, which might be your home institution, a government funder (such as the National Science Foundation, or NSF), or a private foundation (such as the Gates Foundation). As you get your research underway, opportunities will arise to present preliminary findings to audiences, usually through presentations at academic conferences. These presentations can provide important feedback as you complete your analyses. Finally, if you are completing a degree and looking to find an academic job, you will be asked to provide a “job talk,” usually about your research. These job talks are similar to conference presentations but can run significantly longer.

All the presentations mentioned so far are (mostly) for academic audiences. But qualitative research is also unique in that many of its practitioners don’t want to confine their presentation only to other academics. Qualitative researchers who study particular contexts or cultures might want to report back to the people and places they observed. Those working in the critical tradition might want to raise awareness of a particular issue to as large an audience as possible. Many others simply want everyday, nonacademic people to read their work, because they think it is interesting and important. To reach a wide audience, the final product can look like almost anything—it can be a poem, a blog, a podcast, even a science fiction short story. And if you are very lucky, it can even be a national or international bestseller.

In this chapter, we are going to stick with the more basic quotidian presentations—the academic paper / research proposal, the conference slideshow presentation / job talk, and the conference poster. We’ll also spend a bit of time on incorporating universal design into your presentations and how to create some especially attractive and impactful visual displays.

Researcher Note

What is the best piece of advice you’ve ever been given about conducting qualitative research?

The best advice I’ve received came from my adviser, Alford Young Jr. He told me to find the “Jessi Streib” answer to my research question, not the “Pierre Bourdieu” answer to my research question. In other words, don’t just say how a famous theorist would answer your question; say something original, something coming from you.

—Jessi Streib, author of The Power of the Past and Privilege Lost 

Writing about Your Research

The journal article and the research proposal.

Although the research proposal is written before you have actually done your research and the article is written after all data collection and analysis is complete, there are actually many similarities between the two in terms of organization and purpose. The final article will (probably—depends on how much the research question and focus have shifted during the research itself) incorporate a great deal of what was included in a preliminary research proposal. The average lengths of both a proposal and an article are quite similar, with the “front sections” of the article abbreviated to make space for the findings, discussion of findings, and conclusion.

Figure 20.1 shows one model for what to include in an article or research proposal, comparing the elements of each with a default word count for each section. Please note that you will want to follow whatever specific guidelines you have been provided by the venue you are submitting the article/proposal to: the IRB, the NSF, the Journal of Qualitative Research . In fact, I encourage you to adapt the default model as needed by swapping out expected word counts for each section and adding or varying the sections to match expectations for your particular publication venue. [2]

You will notice a few things about the default model guidelines. First, while half of the proposal is spent discussing the research design, this section is shortened (but still included) for the article. There are a few elements that only show up in the proposal (e.g., the limitations section is in the introductory section here—it will be more fully developed in the conclusory section in the article). Obviously, you don’t have findings in the proposal, so this is an entirely new section for the article. Note that the article does not include a data management plan or a timeline—two aspects that most proposals require.

It might be helpful to find and maintain examples of successfully written sections that you can use as models for your own writing. I have included a few of these throughout the textbook and have included a few more at the end of this chapter.

Make an Argument

Some qualitative researchers, particularly those engaged in deep ethnographic research, focus their attention primarily if not exclusively on describing the data. They might even eschew the notion that they should make an “argument” about the data, preferring instead to use thick descriptions to convey interpretations. Bracketing the contrast between interpretation and argument for the moment, most readers will expect you to provide an argument about your data, and this argument will be in answer to whatever research question you eventually articulate (remember, research questions are allowed to shift as you get further into data collection and analysis). It can be frustrating to read a well-developed study with clear and elegant descriptions and no argument. The argument is the point of the research, and if you do not have one, 99 percent of the time, you are not finished with your analysis. Calarco ( 2020 ) suggests you imagine a pyramid, with all of your data forming the basis and all of your findings forming the middle section; the top/point of the pyramid is your argument, “what the patterns in your data tell us about how the world works or ought to work” ( 181 ).

The academic community to which you belong will be looking for an argument that relates to or develops theory. This is the theoretical generalizability promise of qualitative research. An academic audience will want to know how your findings relate to previous findings, theories, and concepts (the literature review; see chapter 9). It is thus vitally important that you go back to your literature review (or develop a new one) and draw those connections in your discussion and/or conclusion. When writing to other audiences, you will still want an argument, although it may not be written as a theoretical one. What do I mean by that? Even if you are not referring to previous literature or developing new theories or adapting older ones, a simple description of your findings is like dumping a lot of leaves in the lap of your audience. They still deserve to know about the shape of the forest. Maybe provide them a road map through it. Do this by telling a clear and cogent story about the data. What is the primary theme, and why is it important? What is the point of your research? [3]

A beautifully written piece of research based on participant observation [and/or] interviews brings people to life, and helps the reader understand the challenges people face. You are trying to use vivid, detailed and compelling words to help the reader really understand the lives of the people you studied. And you are trying to connect the lived experiences of these people to a broader conceptual point—so that the reader can understand why it matters. ( Lareau 2021:259 )

Do not hide your argument. Make it the focal point of your introductory section, and repeat it as often as needed to ensure the reader remembers it. I am always impressed when I see researchers do this well (see, e.g., Zelizer 1996 ).

Here are a few other suggestions for writing your article: Be brief. Do not overwhelm the reader with too many words; make every word count. Academics are particularly prone to “overwriting” as a way of demonstrating proficiency. Don’t. When writing your methods section, think about it as a “recipe for your work” that allows other researchers to replicate if they so wish ( Calarco 2020:186 ). Convey all the necessary information clearly, succinctly, and accurately. No more, no less. [4] Do not try to write from “beginning to end” in that order. Certain sections, like the introductory section, may be the last ones you write. I find the methods section the easiest, so I often begin there. Calarco ( 2020 ) begins with an outline of the analysis and results section and then works backward from there to outline the contribution she is making, then the full introduction that serves as a road map for the writing of all sections. She leaves the abstract for the very end. Find what order best works for you.

Presenting at Conferences and Job Talks

Students and faculty are primarily called upon to publicly present their research in two distinct contexts—the academic conference and the “job talk.” By convention, conference presentations usually run about fifteen minutes and, at least in sociology and other social sciences, rely primarily on the use of a slideshow (PowerPoint Presentation or PPT) presentation. You are usually one of three or four presenters scheduled on the same “panel,” so it is an important point of etiquette to ensure that your presentation falls within the allotted time and does not crowd into that of the other presenters. Job talks, on the other hand, conventionally require a forty- to forty-five-minute presentation with a fifteen- to twenty-minute question and answer (Q&A) session following it. You are the only person presenting, so if you run over your allotted time, it means less time for the Q&A, which can disturb some audience members who have been waiting for a chance to ask you something. It is sometimes possible to incorporate questions during your presentation, which allows you to take the entire hour, but you might end up shorting your presentation this way if the questions are numerous. It’s best for beginners to stick to the “ask me at the end” format (unless there is a simple clarifying question that can easily be addressed and makes the presentation run more smoothly, as in the case where you simply forgot to include information on the number of interviews you conducted).

For slideshows, you should allot two or even three minutes for each slide, never less than one minute. And those slides should be clear, concise, and limited. Most of what you say should not be on those slides at all. The slides are simply the main points or a clear image of what you are speaking about. Include bulleted points (words, short phrases), not full sentences. The exception is illustrative quotations from transcripts or fieldnotes. In those cases, keep to one illustrative quote per slide, and if it is long, bold or otherwise, highlight the words or passages that are most important for the audience to notice. [5]

Figure 20.2 provides a possible model for sections to include in either a conference presentation or a job talk, with approximate times and approximate numbers of slides. Note the importance (in amount of time spent) of both the research design and the findings/results sections, both of which have been helpfully starred for you. Although you don’t want to short any of the sections, these two sections are the heart of your presentation.

Fig 20.2. Suggested Slideshow Times and Number of Slides

Should you write out your script to read along with your presentation? I have seen this work well, as it prevents presenters from straying off topic and keeps them to the time allotted. On the other hand, these presentations can seem stiff and wooden. Personally, although I have a general script in advance, I like to speak a little more informally and engagingly with each slide, sometimes making connections with previous panelists if I am at a conference. This means I have to pay attention to the time, and I sometimes end up breezing through one section more quickly than I would like. Whatever approach you take, practice in advance. Many times. With an audience. Ask for feedback, and pay attention to any presentation issues that arise (e.g., Do you speak too fast? Are you hard to hear? Do you stumble over a particular word or name?).

Even though there are rules and guidelines for what to include, you will still want to make your presentation as engaging as possible in the little amount of time you have. Calarco ( 2020:274 ) recommends trying one of three story structures to frame your presentation: (1) the uncertain explanation , where you introduce a phenomenon that has not yet been fully explained and then describe how your research is tackling this; (2) the uncertain outcome , where you introduce a phenomenon where the consequences have been unclear and then you reveal those consequences with your research; and (3) the evocative example , where you start with some interesting example from your research (a quote from the interview transcripts, for example) or the real world and then explain how that example illustrates the larger patterns you found in your research. Notice that each of these is a framing story. Framing stories are essential regardless of format!

A Word on Universal Design

Please consider accessibility issues during your presentation, and incorporate elements of universal design into your slideshow. The basic idea behind universal design in presentations is that to the greatest extent possible, all people should be able to view, hear, or otherwise take in your presentation without needing special individual adaptations. If you can make your presentation accessible to people with visual impairment or hearing loss, why not do so? For example, one in twelve men is color-blind, unable to differentiate between certain colors, red/green being the most common problem. So if you design a graphic that relies on red and green bars, some of your audience members may not be able to properly identify which bar means what. Simple contrasts of black and white are much more likely to be visible to all members of your audience. There are many other elements of good universal design, but the basic foundation of all of them is that you consider how to make your presentation as accessible as possible at the outset. For example, include captions whenever possible, both as descriptions on slides and as images on slides and for any audio or video clips you are including; keep font sizes large enough to read from the back of the room; and face the audience when you are.

Poster Design

Undergraduate students who present at conferences are often encouraged to present at “poster sessions.” This usually means setting up a poster version of your research in a large hall or convention space at a set period of time—ninety minutes is common. Your poster will be one of dozens, and conference-goers will wander through the space, stopping intermittently at posters that attract them. Those who stop by might ask you questions about your research, and you are expected to be able to talk intelligently for two or three minutes. It’s a fairly easy way to practice presenting at conferences, which is why so many organizations hold these special poster sessions.

Null

A good poster design will be immediately attractive to passersby and clearly and succinctly describe your research methods, findings, and conclusions. Some students have simply shrunk down their research papers to manageable sizes and then pasted them on a poster, all twelve to fifteen pages of them. Don’t do that! Here are some better suggestions: State the main conclusion of your research in large bold print at the top of your poster, on brightly colored (contrasting) paper, and paste in a QR code that links to your full paper online ( Calarco 2020:280 ). Use the rest of the poster board to provide a couple of highlights and details of the study. For an interview-based study, for example, you will want to put in some details about your sample (including number of interviews) and setting and then perhaps one or two key quotes, also distinguished by contrasting color background.

Incorporating Visual Design in Your Presentations

In addition to ensuring that your presentation is accessible to as large an audience as possible, you also want to think about how to display your data in general, particularly how to use charts and graphs and figures. [6] The first piece of advice is, use them! As the saying goes, a picture is worth a thousand words. If you can cut to the chase with a visually stunning display, do so. But there are visual displays that are stunning, and then there are the tired, hard-to-see visual displays that predominate at conferences. You can do better than most presenters by simply paying attention here and committing yourself to a good design. As with model section passages, keep a file of visual displays that work as models for your own presentations. Find a good guidebook to presenting data effectively (Evergreen 2018 , 2019 ; Schwabisch 2021) , and refer to it often.

Let me make a few suggestions here to get you started. First, test every visual display on a friend or colleague to find out how quickly they can understand the point you are trying to convey. As with reading passages aloud to ensure that your writing works, showing someone your display is the quickest way to find out if it works. Second, put the point in the title of the display! When writing for an academic journal, there will be specific conventions of what to include in the title (full description including methods of analysis, sample, dates), but in a public presentation, there are no limiting rules. So you are free to write as your title “Working-Class College Students Are Three Times as Likely as Their Peers to Drop Out of College,” if that is the point of the graphic display. It certainly helps the communicative aspect. Third, use the themes available to you in Excel for creating graphic displays, but alter them to better fit your needs . Consider adding dark borders to bars and columns, for example, so that they appear crisper for your audience. Include data callouts and labels, and enlarge them so they are clearly visible. When duplicative or otherwise unnecessary, drop distracting gridlines and labels on the y-axis (the vertical one). Don’t go crazy adding different fonts, however—keep things simple and clear. Sans serif fonts (those without the little hooks on the ends of letters) read better from a distance. Try to use the same color scheme throughout, even if this means manually changing the colors of bars and columns. For example, when reporting on working-class college students, I use blue bars, while I reserve green bars for wealthy students and yellow bars for students in the middle. I repeat these colors throughout my presentations and incorporate different colors when talking about other items or factors. You can also try using simple grayscale throughout, with pops of color to indicate a bar or column or line that is of the most interest. These are just some suggestions. The point is to take presentation seriously and to pay attention to visual displays you are using to ensure they effectively communicate what you want them to communicate. I’ve included a data visualization checklist from Evergreen ( 2018 ) here.

Ethics of Presentation and Reliability

Until now, all the data you have collected have been yours alone. Once you present the data, however, you are sharing sometimes very intimate information about people with a broader public. You will find yourself balancing between protecting the privacy of those you’ve interviewed and observed and needing to demonstrate the reliability of the study. The more information you provide to your audience, the more they can understand and appreciate what you have found, but this also may pose risks to your participants. There is no one correct way to go about finding the right balance. As always, you have a duty to consider what you are doing and must make some hard decisions.

Null

The most obvious place we see this paradox emerge is when you mask your data to protect the privacy of your participants. It is standard practice to provide pseudonyms, for example. It is such standard practice that you should always assume you are being given a pseudonym when reading a book or article based on qualitative research. When I was a graduate student, I tried to find information on how best to construct pseudonyms but found little guidance. There are some ethical issues here, I think. [7] Do you create a name that has the same kind of resonance as the original name? If the person goes by a nickname, should you use a nickname as a pseudonym? What about names that are ethnically marked (as in, almost all of them)? Is there something unethical about reracializing a person? (Yes!) In her study of adolescent subcultures, Wilkins ( 2008 ) noted, “Because many of the goths used creative, alternative names rather than their given names, I did my best to reproduce the spirit of their chosen names” ( 24 ).

Your reader or audience will want to know all the details about your participants so that they can gauge both your credibility and the reliability of your findings. But how many details are too many? What if you change the name but otherwise retain all the personal pieces of information about where they grew up, and how old they were when they got married, and how many children they have, and whether they made a splash in the news cycle that time they were stalked by their ex-boyfriend? At some point, those details are going to tip over into the zone of potential unmasking. When you are doing research at one particular field site that may be easily ascertained (as when you interview college students, probably at the institution at which you are a student yourself), it is even more important to be wary of providing too many details. You also need to think that your participants might read what you have written, know things about the site or the population from which you drew your interviews, and figure out whom you are talking about. This can all get very messy if you don’t do more than simply pseudonymize the people you interviewed or observed.

There are some ways to do this. One, you can design a study with all of these risks in mind. That might mean choosing to conduct interviews or observations at multiple sites so that no one person can be easily identified. Another is to alter some basic details about your participants to protect their identity or to refuse to provide all the information when selecting quotes . Let’s say you have an interviewee named “Anna” (a pseudonym), and she is a twenty-four-year-old Latina studying to be an engineer. You want to use a quote from Anna about racial discrimination in her graduate program. Instead of attributing the quote to Anna (whom your reader knows, because you’ve already told them, is a twenty-four-year-old Latina studying engineering), you might simply attribute the quote to “Latina student in STEM.” Taking this a step further, you might leave the quote unattributed, providing a list of quotes about racial discrimination by “various students.”

The problem with masking all the identifiers, of course, is that you lose some of the analytical heft of those attributes. If it mattered that Anna was twenty-four (not thirty-four) and that she was a Latina and that she was studying engineering, taking out any of those aspects of her identity might weaken your analysis. This is one of those “hard choices” you will be called on to make! A rather radical and controversial solution to this dilemma is to create composite characters , characters based on the reality of the interviews but fully masked because they are not identifiable with any one person. My students are often very queasy about this when I explain it to them. The more positivistic your approach and the more you see individuals rather than social relationships/structure as the “object” of your study, the more employing composites will seem like a really bad idea. But composites “allow researchers to present complex, situated accounts from individuals” without disclosing personal identities ( Willis 2019 ), and they can be effective ways of presenting theory narratively ( Hurst 2019 ). Ironically, composites permit you more latitude when including “dirty laundry” or stories that could harm individuals if their identities became known. Rather than squeezing out details that could identify a participant, the identities are permanently removed from the details. Great difficulty remains, however, in clearly explaining the theoretical use of composites to your audience and providing sufficient information on the reliability of the underlying data.

There are a host of other ethical issues that emerge as you write and present your data. This is where being reflective throughout the process will help. How and what you share of what you have learned will depend on the social relationships you have built, the audiences you are writing or speaking to, and the underlying animating goals of your study. Be conscious about all of your decisions, and then be able to explain them fully, both to yourself and to those who ask.

Our research is often close to us. As a Black woman who is a first-generation college student and a professional with a poverty/working-class origin, each of these pieces of my identity creates nuances in how I engage in my research, including how I share it out. Because of this, it’s important for us to have people in our lives who we trust who can help us, particularly, when we are trying to share our findings. As researchers, we have been steeped in our work, so we know all the details and nuances. Sometimes we take this for granted, and we might not have shared those nuances in conversation or writing or taken some of this information for granted. As I share my research with trusted friends and colleagues, I pay attention to the questions they ask me or the feedback they give when we talk or when they read drafts.

—Kim McAloney, PhD, College Student Services Administration Ecampus coordinator and instructor

Final Comments: Preparing for Being Challenged

Once you put your work out there, you must be ready to be challenged. Science is a collective enterprise and depends on a healthy give and take among researchers. This can be both novel and difficult as you get started, but the more you understand the importance of these challenges, the easier it will be to develop the kind of thick skin necessary for success in academia. Scientists’ authority rests on both the inherent strength of their findings and their ability to convince other scientists of the reliability and validity and value of those findings. So be prepared to be challenged, and recognize this as simply another important aspect of conducting research!

Considering what challenges might be made as you design and conduct your study will help you when you get to the writing and presentation stage. Address probable challenges in your final article, and have a planned response to probable questions in a conference presentation or job talk. The following is a list of common challenges of qualitative research and how you might best address them:

  • Questions about generalizability . Although qualitative research is not statistically generalizable (and be prepared to explain why), qualitative research is theoretically generalizable. Discuss why your findings here might tell us something about related phenomena or contexts.
  • Questions about reliability . You probably took steps to ensure the reliability of your findings. Discuss them! This includes explaining the use and value of multiple data sources and defending your sampling and case selections. It also means being transparent about your own position as researcher and explaining steps you took to ensure that what you were seeing was really there.
  • Questions about replicability. Although qualitative research cannot strictly be replicated because the circumstances and contexts will necessarily be different (if only because the point in time is different), you should be able to provide as much detail as possible about how the study was conducted so that another researcher could attempt to confirm or disconfirm your findings. Also, be very clear about the limitations of your study, as this allows other researchers insight into what future research might be warranted.

None of this is easy, of course. Writing beautifully and presenting clearly and cogently require skill and practice. If you take anything from this chapter, it is to remember that presentation is an important and essential part of the research process and to allocate time for this as you plan your research.

Data Visualization Checklist for Slideshow (PPT) Presentations

Adapted from Evergreen ( 2018 )

Text checklist

  • Short catchy, descriptive titles (e.g., “Working-class students are three times as likely to drop out of college”) summarize the point of the visual display
  • Subtitled and annotations provide additional information (e.g., “note: male students also more likely to drop out”)
  • Text size is hierarchical and readable (titles are largest; axes labels smallest, which should be at least 20points)
  • Text is horizontal. Audience members cannot read vertical text!
  • All data labeled directly and clearly: get rid of those “legends” and embed the data in your graphic display
  • Labels are used sparingly; avoid redundancy (e.g., do not include both a number axis and a number label)

Arrangement checklist

  • Proportions are accurate; bar charts should always start at zero; don’t mislead the audience!
  • Data are intentionally ordered (e.g., by frequency counts). Do not leave ragged alphabetized bar graphs!
  • Axis intervals are equidistant: spaces between axis intervals should be the same unit
  • Graph is two-dimensional. Three-dimensional and “bevelled” displays are confusing
  • There is no unwanted decoration (especially the kind that comes automatically through the PPT “theme”). This wastes your space and confuses.

Color checklist

  • There is an intentional color scheme (do not use default theme)
  • Color is used to identify key patterns (e.g., highlight one bar in red against six others in greyscale if this is the bar you want the audience to notice)
  • Color is still legible when printed in black and white
  • Color is legible for people with color blindness (do not use red/green or yellow/blue combinations)
  • There is sufficient contrast between text and background (black text on white background works best; be careful of white on dark!)

Lines checklist

  • Be wary of using gridlines; if you do, mute them (grey, not black)
  • Allow graph to bleed into surroundings (don’t use border lines)
  • Remove axis lines unless absolutely necessary (better to label directly)

Overall design checklist

  • The display highlights a significant finding or conclusion that your audience can ‘”see” relatively quickly
  • The type of graph (e.g., bar chart, pie chart, line graph) is appropriate for the data. Avoid pie charts with more than three slices!
  • Graph has appropriate level of precision; if you don’t need decimal places
  • All the chart elements work together to reinforce the main message

Universal Design Checklist for Slideshow (PPT) Presentations

  • Include both verbal and written descriptions (e.g., captions on slides); consider providing a hand-out to accompany the presentation
  • Microphone available (ask audience in back if they can clearly hear)
  • Face audience; allow people to read your lips
  • Turn on captions when presenting audio or video clips
  • Adjust light settings for visibility
  • Speak slowly and clearly; practice articulation; don’t mutter or speak under your breath (even if you have something humorous to say – say it loud!)
  • Use Black/White contrasts for easy visibility; or use color contrasts that are real contrasts (do not rely on people being able to differentiate red from green, for example)
  • Use easy to read font styles and avoid too small font sizes: think about what an audience member in the back row will be able to see and read.
  • Keep your slides simple: do not overclutter them; if you are including quotes from your interviews, take short evocative snippets only, and bold key words and passages. You should also read aloud each passage, preferably with feeling!

Supplement: Models of Written Sections for Future Reference

Data collection section example.

Interviews were semi structured, lasted between one and three hours, and took place at a location chosen by the interviewee. Discussions centered on four general topics: (1) knowledge of their parent’s immigration experiences; (2) relationship with their parents; (3) understanding of family labor, including language-brokering experiences; and (4) experiences with school and peers, including any future life plans. While conducting interviews, I paid close attention to respondents’ nonverbal cues, as well as their use of metaphors and jokes. I conducted interviews until I reached a point of saturation, as indicated by encountering repeated themes in new interviews (Glaser and Strauss 1967). Interviews were audio recorded, transcribed with each interviewee’s permission, and conducted in accordance with IRB protocols. Minors received permission from their parents before participation in the interview. ( Kwon 2022:1832 )

Justification of Case Selection / Sample Description Section Example

Looking at one profession within one organization and in one geographic area does impose limitations on the generalizability of our findings. However, it also has advantages. We eliminate the problem of interorganizational heterogeneity. If multiple organizations are studied simultaneously, it can make it difficult to discern the mechanisms that contribute to racial inequalities. Even with a single occupation there is considerable heterogeneity, which may make understanding how organizational structure impacts worker outcomes difficult. By using the case of one group of professionals in one religious denomination in one geographic region of the United States, we clarify how individuals’ perceptions and experiences of occupational inequality unfold in relation to a variety of observed and unobserved occupational and contextual factors that might be obscured in a larger-scale study. Focusing on a specific group of professionals allows us to explore and identify ways that formal organizational rules combine with informal processes to contribute to the persistence of racial inequality. ( Eagle and Mueller 2022:1510–1511 )

Ethics Section Example

I asked everyone who was willing to sit for a formal interview to speak only for themselves and offered each of them a prepaid Visa Card worth $25–40. I also offered everyone the opportunity to keep the card and erase the tape completely at any time they were dissatisfied with the interview in any way. No one asked for the tape to be erased; rather, people remarked on the interview being a really good experience because they felt heard. Each interview was professionally transcribed and for the most part the excerpts are literal transcriptions. In a few places, the excerpts have been edited to reduce colloquial features of speech (e.g., you know, like, um) and some recursive elements common to spoken language. A few excerpts were placed into standard English for clarity. I made this choice for the benefit of readers who might otherwise find the insights and ideas harder to parse in the original. However, I have to acknowledge this as an act of class-based violence. I tried to keep the original phrasing whenever possible. ( Pascale 2021:235 )

Further Readings

Calarco, Jessica McCrory. 2020. A Field Guide to Grad School: Uncovering the Hidden Curriculum . Princeton, NJ: Princeton University Press. Don’t let the unassuming title mislead you—there is a wealth of helpful information on writing and presenting data included here in a highly accessible manner. Every graduate student should have a copy of this book.

Edwards, Mark. 2012. Writing in Sociology . Thousand Oaks, CA: SAGE. An excellent guide to writing and presenting sociological research by an Oregon State University professor. Geared toward undergraduates and useful for writing about either quantitative or qualitative research or both.

Evergreen, Stephanie D. H. 2018. Presenting Data Effectively: Communicating Your Findings for Maximum Impact . Thousand Oaks, CA: SAGE. This is one of my very favorite books, and I recommend it highly for everyone who wants their presentations and publications to communicate more effectively than the boring black-and-white, ragged-edge tables and figures academics are used to seeing.

Evergreen, Stephanie D. H. 2019. Effective Data Visualization 2 . Thousand Oaks, CA: SAGE. This is an advanced primer for presenting clean and clear data using graphs, tables, color, font, and so on. Start with Evergreen (2018), and if you graduate from that text, move on to this one.

Schwabisch, Jonathan. 2021. Better Data Visualizations: A Guide for Scholars, Researchers, and Wonks . New York: Columbia University Press. Where Evergreen’s (2018, 2019) focus is on how to make the best visual displays possible for effective communication, this book is specifically geared toward visual displays of academic data, both quantitative and qualitative. If you want to know when it is appropriate to use a pie chart instead of a stacked bar chart, this is the reference to use.

  • Some examples: Qualitative Inquiry , Qualitative Research , American Journal of Qualitative Research , Ethnography , Journal of Ethnographic and Qualitative Research , Qualitative Report , Qualitative Sociology , and Qualitative Studies . ↵
  • This is something I do with every article I write: using Excel, I write each element of the expected article in a separate row, with one column for “expected word count” and another column for “actual word count.” I fill in the actual word count as I write. I add a third column for “comments to myself”—how things are progressing, what I still need to do, and so on. I then use the “sum” function below each of the first two columns to keep a running count of my progress relative to the final word count. ↵
  • And this is true, I would argue, even when your primary goal is to leave space for the voices of those who don’t usually get a chance to be part of the conversation. You will still want to put those voices in some kind of choir, with a clear direction (song) to be sung. The worst thing you can do is overwhelm your audience with random quotes or long passages with no key to understanding them. Yes, a lot of metaphors—qualitative researchers love metaphors! ↵
  • To take Calarco’s recipe analogy further, do not write like those food bloggers who spend more time discussing the color of their kitchen or the experiences they had at the market than they do the actual cooking; similarly, do not write recipes that omit crucial details like the amount of flour or the size of the baking pan used or the temperature of the oven. ↵
  • The exception is the “compare and contrast” of two or more quotes, but use caution here. None of the quotes should be very long at all (a sentence or two each). ↵
  • Although this section is geared toward presentations, many of the suggestions could also be useful when writing about your data. Don’t be afraid to use charts and graphs and figures when writing your proposal, article, thesis, or dissertation. At the very least, you should incorporate a tabular display of the participants, sites, or documents used. ↵
  • I was so puzzled by these kinds of questions that I wrote one of my very first articles on it ( Hurst 2008 ). ↵

The visual presentation of data or information through graphics such as charts, graphs, plots, infographics, maps, and animation.  Recall the best documentary you ever viewed, and there were probably excellent examples of good data visualization there (for me, this was An Inconvenient Truth , Al Gore’s film about climate change).  Good data visualization allows more effective communication of findings of research, particularly in public presentations (e.g., slideshows).

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

Data Collection, Presentation and Analysis

  • First Online: 25 May 2023

Cite this chapter

data collection and analysis in research methodology ppt

  • Uche M. Mbanaso 4 ,
  • Lucienne Abrahams 5 &
  • Kennedy Chinedu Okafor 6  

549 Accesses

This chapter covers the topics of data collection, data presentation and data analysis. It gives attention to data collection for studies based on experiments, on data derived from existing published or unpublished data sets, on observation, on simulation and digital twins, on surveys, on interviews and on focus group discussions. One of the interesting features of this chapter is the section dealing with using measurement scales in quantitative research, including nominal scales, ordinal scales, interval scales and ratio scales. It explains key facets of qualitative research including ethical clearance requirements. The chapter discusses the importance of data visualization as key to effective presentation of data, including tabular forms, graphical forms and visual charts such as those generated by Atlas.ti analytical software.

This is a preview of subscription content, log in via an institution to check access.

Access this chapter

  • Available as EPUB and PDF
  • Read on any device
  • Instant download
  • Own it forever
  • Durable hardcover edition
  • Dispatched in 3 to 5 business days
  • Free shipping worldwide - see info

Tax calculation will be finalised at checkout

Purchases are for personal use only

Institutional subscriptions

Bibliography

Abdullah, M. F., & Ahmad, K. (2013). The mapping process of unstructured data to structured data. Proceedings of the 2013 International Conference on Research and Innovation in Information Systems (ICRIIS) , Malaysia , 151–155. https://doi.org/10.1109/ICRIIS.2013.6716700

Adnan, K., & Akbar, R. (2019). An analytical study of information extraction from unstructured and multidimensional big data. Journal of Big Data, 6 , 91. https://doi.org/10.1186/s40537-019-0254-8

Article   Google Scholar  

Alsheref, F. K., & Fattoh, I. E. (2020). Medical text annotation tool based on IBM Watson Platform. Proceedings of the 2020 6th international conference on advanced computing and communication systems (ICACCS) , India , 1312–1316. https://doi.org/10.1109/ICACCS48705.2020.9074309

Cinque, M., Cotroneo, D., Della Corte, R., & Pecchia, A. (2014). What logs should you look at when an application fails? Insights from an industrial case study. Proceedings of the 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks , USA , 690–695. https://doi.org/10.1109/DSN.2014.69

Gideon, L. (Ed.). (2012). Handbook of survey methodology for the social sciences . Springer.

Google Scholar  

Leedy, P., & Ormrod, J. (2015). Practical research planning and design (12th ed.). Pearson Education.

Madaan, A., Wang, X., Hall, W., & Tiropanis, T. (2018). Observing data in IoT worlds: What and how to observe? In Living in the Internet of Things: Cybersecurity of the IoT – 2018 (pp. 1–7). https://doi.org/10.1049/cp.2018.0032

Chapter   Google Scholar  

Mahajan, P., & Naik, C. (2019). Development of integrated IoT and machine learning based data collection and analysis system for the effective prediction of agricultural residue/biomass availability to regenerate clean energy. Proceedings of the 2019 9th International Conference on Emerging Trends in Engineering and Technology – Signal and Information Processing (ICETET-SIP-19) , India , 1–5. https://doi.org/10.1109/ICETET-SIP-1946815.2019.9092156 .

Mahmud, M. S., Huang, J. Z., Salloum, S., Emara, T. Z., & Sadatdiynov, K. (2020). A survey of data partitioning and sampling methods to support big data analysis. Big Data Mining and Analytics, 3 (2), 85–101. https://doi.org/10.26599/BDMA.2019.9020015

Miswar, S., & Kurniawan, N. B. (2018). A systematic literature review on survey data collection system. Proceedings of the 2018 International Conference on Information Technology Systems and Innovation (ICITSI) , Indonesia , 177–181. https://doi.org/10.1109/ICITSI.2018.8696036

Mosina, C. (2020). Understanding the diffusion of the internet: Redesigning the global diffusion of the internet framework (Research report, Master of Arts in ICT Policy and Regulation). LINK Centre, University of the Witwatersrand. https://hdl.handle.net/10539/30723

Nkamisa, S. (2021). Investigating the integration of drone management systems to create an enabling remote piloted aircraft regulatory environment in South Africa (Research report, Master of Arts in ICT Policy and Regulation). LINK Centre, University of the Witwatersrand. https://hdl.handle.net/10539/33883

QuestionPro. (2020). Survey research: Definition, examples and methods . https://www.questionpro.com/article/survey-research.html

Rajanikanth, J. & Kanth, T. V. R. (2017). An explorative data analysis on Bangalore City Weather with hybrid data mining techniques using R. Proceedings of the 2017 International Conference on Current Trends in Computer, Electrical, Electronics and Communication (CTCEEC) , India , 1121-1125. https://doi/10.1109/CTCEEC.2017.8455008

Rao, R. (2003). From unstructured data to actionable intelligence. IT Professional, 5 , 29–35. https://www.researchgate.net/publication/3426648_From_Unstructured_Data_to_Actionable_Intelligence

Schulze, P. (2009). Design of the research instrument. In P. Schulze (Ed.), Balancing exploitation and exploration: Organizational antecedents and performance effects of innovation strategies (pp. 116–141). Gabler. https://doi.org/10.1007/978-3-8349-8397-8_6

Usanov, A. (2015). Assessing cybersecurity: A meta-analysis of threats, trends and responses to cyber attacks . The Hague Centre for Strategic Studies. https://www.researchgate.net/publication/319677972_Assessing_Cyber_Security_A_Meta-analysis_of_Threats_Trends_and_Responses_to_Cyber_Attacks

Van de Kaa, G., De Vries, H. J., van Heck, E., & van den Ende, J. (2007). The emergence of standards: A meta-analysis. Proceedings of the 2007 40th Annual Hawaii International Conference on Systems Science (HICSS’07) , USA , 173a–173a. https://doi.org/10.1109/HICSS.2007.529

Download references

Author information

Authors and affiliations.

Centre for Cybersecurity Studies, Nasarawa State University, Keffi, Nigeria

Uche M. Mbanaso

LINK Centre, University of the Witwatersrand, Johannesburg, South Africa

Lucienne Abrahams

Department of Mechatronics Engineering, Federal University of Technology, Owerri, Nigeria

Kennedy Chinedu Okafor

You can also search for this author in PubMed   Google Scholar

Rights and permissions

Reprints and permissions

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Mbanaso, U.M., Abrahams, L., Okafor, K.C. (2023). Data Collection, Presentation and Analysis. In: Research Techniques for Computer Science, Information Systems and Cybersecurity. Springer, Cham. https://doi.org/10.1007/978-3-031-30031-8_7

Download citation

DOI : https://doi.org/10.1007/978-3-031-30031-8_7

Published : 25 May 2023

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-30030-1

Online ISBN : 978-3-031-30031-8

eBook Packages : Engineering Engineering (R0)

Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research

SlidePlayer

  • My presentations

Auth with social network:

Download presentation

We think you have liked this presentation. If you wish to download it, please recommend it to your friends in any social system. Share buttons are a little bit lower. Thank you!

Presentation is loading. Please wait.

Data Collection and Analysis

Published by Martha Perry Modified over 8 years ago

Similar presentations

Presentation on theme: "Data Collection and Analysis"— Presentation transcript:

Data Collection and Analysis

Census and Statistics Department Introduction to Sample Surveys.

data collection and analysis in research methodology ppt

Sampling techniques as applied to environmental and earth sciences

data collection and analysis in research methodology ppt

Determining How to Select a Sample

data collection and analysis in research methodology ppt

King Fahd University of Petroleum & Minerals Department of Management and Marketing MKT 345 Marketing Research Dr. Alhassan G. Abdul-Muhmin Questionnaire.

data collection and analysis in research methodology ppt

Quantitative Research Design Backdrop to Multivariate Analysis.

data collection and analysis in research methodology ppt

MKTG 3342 Fall 2008 Professor Edward Fox

data collection and analysis in research methodology ppt

Chapter 16 Sampling Designs and Sampling Procedures © 2010 South-Western/Cengage Learning. All rights reserved. May not be scanned, copied or duplicated,

data collection and analysis in research methodology ppt

MISUNDERSTOOD AND MISUSED

data collection and analysis in research methodology ppt

Sample Design (Click icon for audio) Dr. Michael R. Hyman, NMSU.

data collection and analysis in research methodology ppt

SAMPLING DESIGN AND PROCEDURE

data collection and analysis in research methodology ppt

sampling Dr Majed El-Farra

data collection and analysis in research methodology ppt

Who and How And How to Mess It up

data collection and analysis in research methodology ppt

Sampling Prepared by Dr. Manal Moussa. Sampling Prepared by Dr. Manal Moussa.

data collection and analysis in research methodology ppt

11 Populations and Samples.

data collection and analysis in research methodology ppt

Determining the Sample Plan

data collection and analysis in research methodology ppt

Chapter 4 Selecting a Sample Gay, Mills, and Airasian

data collection and analysis in research methodology ppt

SAMPLING METHODS. Reasons for Sampling Samples can be studied more quickly than populations. A study of a sample is less expensive than studying an entire.

data collection and analysis in research methodology ppt

Sampling Design.

data collection and analysis in research methodology ppt

Sampling Designs and Sampling Procedures

About project

© 2024 SlidePlayer.com Inc. All rights reserved.

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • Data Collection Methods | Step-by-Step Guide & Examples

Data Collection Methods | Step-by-Step Guide & Examples

Published on 4 May 2022 by Pritha Bhandari .

Data collection is a systematic process of gathering observations or measurements. Whether you are performing research for business, governmental, or academic purposes, data collection allows you to gain first-hand knowledge and original insights into your research problem .

While methods and aims may differ between fields, the overall process of data collection remains largely the same. Before you begin collecting data, you need to consider:

  • The  aim of the research
  • The type of data that you will collect
  • The methods and procedures you will use to collect, store, and process the data

To collect high-quality data that is relevant to your purposes, follow these four steps.

Table of contents

Step 1: define the aim of your research, step 2: choose your data collection method, step 3: plan your data collection procedures, step 4: collect the data, frequently asked questions about data collection.

Before you start the process of data collection, you need to identify exactly what you want to achieve. You can start by writing a problem statement : what is the practical or scientific issue that you want to address, and why does it matter?

Next, formulate one or more research questions that precisely define what you want to find out. Depending on your research questions, you might need to collect quantitative or qualitative data :

  • Quantitative data is expressed in numbers and graphs and is analysed through statistical methods .
  • Qualitative data is expressed in words and analysed through interpretations and categorisations.

If your aim is to test a hypothesis , measure something precisely, or gain large-scale statistical insights, collect quantitative data. If your aim is to explore ideas, understand experiences, or gain detailed insights into a specific context, collect qualitative data.

If you have several aims, you can use a mixed methods approach that collects both types of data.

  • Your first aim is to assess whether there are significant differences in perceptions of managers across different departments and office locations.
  • Your second aim is to gather meaningful feedback from employees to explore new ideas for how managers can improve.

Prevent plagiarism, run a free check.

Based on the data you want to collect, decide which method is best suited for your research.

  • Experimental research is primarily a quantitative method.
  • Interviews , focus groups , and ethnographies are qualitative methods.
  • Surveys , observations, archival research, and secondary data collection can be quantitative or qualitative methods.

Carefully consider what method you will use to gather data that helps you directly answer your research questions.

When you know which method(s) you are using, you need to plan exactly how you will implement them. What procedures will you follow to make accurate observations or measurements of the variables you are interested in?

For instance, if you’re conducting surveys or interviews, decide what form the questions will take; if you’re conducting an experiment, make decisions about your experimental design .

Operationalisation

Sometimes your variables can be measured directly: for example, you can collect data on the average age of employees simply by asking for dates of birth. However, often you’ll be interested in collecting data on more abstract concepts or variables that can’t be directly observed.

Operationalisation means turning abstract conceptual ideas into measurable observations. When planning how you will collect data, you need to translate the conceptual definition of what you want to study into the operational definition of what you will actually measure.

  • You ask managers to rate their own leadership skills on 5-point scales assessing the ability to delegate, decisiveness, and dependability.
  • You ask their direct employees to provide anonymous feedback on the managers regarding the same topics.

You may need to develop a sampling plan to obtain data systematically. This involves defining a population , the group you want to draw conclusions about, and a sample, the group you will actually collect data from.

Your sampling method will determine how you recruit participants or obtain measurements for your study. To decide on a sampling method you will need to consider factors like the required sample size, accessibility of the sample, and time frame of the data collection.

Standardising procedures

If multiple researchers are involved, write a detailed manual to standardise data collection procedures in your study.

This means laying out specific step-by-step instructions so that everyone in your research team collects data in a consistent way – for example, by conducting experiments under the same conditions and using objective criteria to record and categorise observations.

This helps ensure the reliability of your data, and you can also use it to replicate the study in the future.

Creating a data management plan

Before beginning data collection, you should also decide how you will organise and store your data.

  • If you are collecting data from people, you will likely need to anonymise and safeguard the data to prevent leaks of sensitive information (e.g. names or identity numbers).
  • If you are collecting data via interviews or pencil-and-paper formats, you will need to perform transcriptions or data entry in systematic ways to minimise distortion.
  • You can prevent loss of data by having an organisation system that is routinely backed up.

Finally, you can implement your chosen methods to measure or observe the variables you are interested in.

The closed-ended questions ask participants to rate their manager’s leadership skills on scales from 1 to 5. The data produced is numerical and can be statistically analysed for averages and patterns.

To ensure that high-quality data is recorded in a systematic way, here are some best practices:

  • Record all relevant information as and when you obtain data. For example, note down whether or how lab equipment is recalibrated during an experimental study.
  • Double-check manual data entry for errors.
  • If you collect quantitative data, you can assess the reliability and validity to get an indication of your data quality.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g., understanding the needs of your consumers or user testing your website).
  • You can control and standardise the process for high reliability and validity (e.g., choosing appropriate measurements and sampling methods ).

However, there are also some drawbacks: data collection can be time-consuming, labour-intensive, and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research , you also have to consider the internal and external validity of your experiment.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Bhandari, P. (2022, May 04). Data Collection Methods | Step-by-Step Guide & Examples. Scribbr. Retrieved 6 May 2024, from https://www.scribbr.co.uk/research-methods/data-collection-guide/

Is this article helpful?

Pritha Bhandari

Pritha Bhandari

Other students also liked, qualitative vs quantitative research | examples & methods, triangulation in research | guide, types, examples, what is a conceptual framework | tips & examples.

Academia.edu no longer supports Internet Explorer.

To browse Academia.edu and the wider internet faster and more securely, please take a few seconds to  upgrade your browser .

Enter the email address you signed up with and we'll email you a reset link.

  • We're Hiring!
  • Help Center

paper cover thumbnail

Lecture 5 Data Collection Methods

Profile image of Niyompano dodos

Related Papers

data collection and analysis in research methodology ppt

APTISI Transactions on Management

APTISI Transactions on Management , George Iwan

Writing can mean lowering or describing graphic symbols that describe a language understood by someone. For a researcher, management of research preparation is a very important step because this step greatly determines the success or failure of all research activities. Before a person starts with research activities, he must make a written plan commonly referred to as the management of research data collection. In the process of collecting research data, of course we can do the management of questionnaires as well as the preparation of interview guidelines to disseminate and obtain accurate information. With the arrangement of planning and conducting interviews: the ethics of conducting interviews, the advantages and disadvantages of interviews, the formulation of interview questions, the schedule of interviews, group and focus group interviews, interviews using recording devices, and interview bias. making a questionnaire must be designed with very good management by giving to the information needed, in accordance with the problem and all that does not cause problems at the stage of analysis and interpretation.

Edrine Wanyama

Questionnaire construction has overtime evolved with consistency and rarely, it has been skipped in the world’s researches. Questionnaires form the basis for which most pieces of information can be obtained. In the very light, response rates to questions and accuracy of data findings are possible through the use of questionnaire usage. Where a questionnaire is poorly constructed, one faces the risk of missing out vital information which could be forming the basis for research. This paper discusses the relevance and importance of questionnaire construction in data collection and research. An attempt is made to show questionnaire usage in social research and other research processes. Ultimately, questionnaire construction is considered just as important as any other research process used while collecting data. Some key recommendations that could make questionnaire usage in research better are also briefly considered.

Amanda Hunn

De Wet Schutte

Abstract All empirical research involves some form of data collection. One of the approaches commonly used the human sciences, is survey research. This article focuses on the various forms of interviews and using the questionnaire technique as a data collection instrument often associated with surveys. It puts the different interview types on a continuum, ranging from structured to unstructured interviews into perspective against two underlying types of data, namely qualitative and quantitative data. The article sensitises the prospective researcher for some pitfalls when using the interview as a data collection technique and includes some hints for this protective researcher when using the interview data collection technique in practice. It also attempts to bring order into the vocabulary when using the concepts: procedure and technique.

Leann Delos Reyes

Elise Paradis

Ebenezer Consultan

João Bandeira De Melo Papelo

Sapthami KKM

RELATED PAPERS

Bulletin of marine science

David Ballantine

Ljiljana Nikolić

European Journal of Social Sciences Education and Research

Svjetlana Kolić-vehovec

BOIS & FORETS DES TROPIQUES

Yeboa Koffi

Hannu Aronen

Taweedej SIRITHANAPIPAT

Christophe Godowski

Nordic Journal of Vocational Education and Training

Rønnaug Lyckander

Muhammad Riduan

Sharmistha Mondal

Radiation Oncology

Tomáš Jonszta

Medical Science Monitor

Mustafa Yıldırım

Marine Policy

Frances Humber

Tạp chí Y học Việt Nam

Đoàn Nguyễn

The Journal of Positive Psychology

Claire Prade

Jurnal Riset dan Inovasi Pembelajaran

Abdul Rahman Rahim

Amelia Martínez-Alonso

Aylin Rodríguez

Regulatory Toxicology and Pharmacology

Khushbu Sharma

Adesoji Adelaja

The Oncologist

Fatima Cardoso

The Cochrane library

Nai Ming Lai

Amit Kalmanovich

International Journal of Plastics Technology

SUNIL KUTTY

  •   We're Hiring!
  •   Help Center
  • Find new research papers in:
  • Health Sciences
  • Earth Sciences
  • Cognitive Science
  • Mathematics
  • Computer Science
  • Academia ©2024

Monograph Matters

Qualitative analysis: process and examples | powerpoint – 85.2.

Authors Laura Wray-Lake and Laura Abrams describe qualitative data analysis, with illustrative examples from their SRCD monograph,  Pathways to Civic Engagement Among Urban Youth of Color . This PowerPoint document includes presenter notes, making it an ideal resource for researchers learning about qualitative analysis and for instructors teaching about it in upper-level undergraduate or graduate courses.

Created by Laura Wray-Lake and Laura S. Abrams. All rights reserved.

Citation: Wray-Lake, L. & Abrams, L. S. (2020) Qualitative Analysis: Process and Examples [PowerPoint]. Retrieved from https://monographmatters.srcd.org/2020/05/12/teachingresources-qualitativeanalysis-powerpoint-85-2

Share this:

SlideTeam

Researched by Consultants from Top-Tier Management Companies

Banner Image

Powerpoint Templates

Icon Bundle

Kpi Dashboard

Professional

Business Plans

Swot Analysis

Gantt Chart

Business Proposal

Marketing Plan

Project Management

Business Case

Business Model

Cyber Security

Business PPT

Digital Marketing

Digital Transformation

Human Resources

Product Management

Artificial Intelligence

Company Profile

Acknowledgement PPT

PPT Presentation

Reports Brochures

One Page Pitch

Interview PPT

All Categories

[Updated 2023] Top 20 PowerPoint Templates to Devise a Systematic Research Methodology

[Updated 2023] Top 20 PowerPoint Templates to Devise a Systematic Research Methodology

Kritika Saini

author-user

Developing a systematic research methodology is essential for conducting effective investigations. It ensures clarity, rigor, validity, replicability, ethical integrity, and efficiency in the research process. It serves as a roadmap that guides researchers through the study, enabling them to generate reliable findings and contribute to the advancement of knowledge in their respective fields.

Research Methodology Templates to Conduct Rigorous and Reliable Research

By following a well-structured approach, you can enhance the efficiency of your research and produce meaningful results. Therefore, SlideTeam brings you a collection of content-ready and custom-made PPT templates to help you save time by providing pre-designed structures and frameworks for research methodologies. You can customize these templates to fit your specific projects, eliminating the need to create a methodology from scratch. 

This time-saving aspect allows you to focus more on the actual research process. Secondly, these ready-made templates provide you with consistency and standardization in methodologies. They ensure that essential elements are included and organized in a logical manner, making it easier for readers and reviewers to understand and evaluate the research. They also serve as a helpful guide, ensuring that researchers cover all necessary components and follow best practices. They provide a clear and structured format for learning about research methodologies and help researchers develop a systematic approach to their work. Overall, research methodology templates streamline the process, enhance consistency, and serve as educational resources for researchers at various levels of expertise.

Browse the collection below and ensure that your methodology is comprehensive and well-written. 

Let's begin!

Want to elevate your creativity? Check out this blog.  

Template 1: Research method PPT Template

Save time and ensure consistency with our research methodology template. Designed to streamline your research process, our content-ready template provides a pre-designed structure and framework for developing your methodology section. Use this actionable PPT to focus more on conducting your research while ensuring that all essential elements are covered and organized in a logical manner. Enhance your efficiency and maintain consistency with our research methodology template. 

ResearchMethod

Download now

Template 2: Research Methodology Process Analysis Template

This is a content-ready PowerPoint template to maximize the effectiveness of your research. This professional and appealing template guides you step-by-step through the research process, from defining your research question to analyzing and interpreting data. With a structured framework in place, you can ensure that your methodology is comprehensive, rigorous, and adheres to best practices. Save time and maintain consistency by using our research methodology process template, empowering you to conduct high-quality research and generate meaningful insights.

Research Methodology

Template 3: Business Research Design and Methodology Template

Accelerate your business research endeavors with our business research methodology proposal template. This comprehensive e template provides a solid framework for crafting a well-structured and persuasive research proposal. Streamline the proposal development process by leveraging our template's pre-designed sections, including problem statement, research objectives, methodology, timeline, and budget. Present your proposal with confidence, knowing that you have followed a proven format and incorporated essential elements. Take your business research to the next level with our business research methodology proposal template. 

Business Research Design and Methodology Proposal

Template 4: Market Share Research Methodology Template

Wish to uncover valuable market insights? Deploy this ready-made PowerPoint template that simplifies the process of analyzing market share data, allowing you to assess your company's performance in relation to competitors. With pre-designed sections for data collection, analysis, and visualization, easily track market trends, identify growth opportunities, and make data-driven decisions. Save time and enhance your market research efforts with our market share research template, empowering you to stay ahead in a competitive business landscape. 

Market Share Research Methodology with Six Pentagonal Steps

Template 5: PESTEL Analysis Research Methodology PPT Template

Gain a comprehensive understanding of your business environment with our pre-designed PESTEL analysis research methodology template. This versatile template provides a structured framework for conducting a thorough analysis of the political, economic, social, technological, environmental, and legal factors impacting your industry or market. Easily identify key trends, opportunities, and risks by utilizing our pre-designed sections and guidance. Streamline your research process and make informed strategic decisions using our PESTEL Analysis research methodology template, ensuring your business stays ahead of the curve.

Pestel Analysis Research Methodology Chart Sample File

Template 6: Research Methodology with 3 Step Process Map PPT Template 

Looking for ways to create a research methodology process? Achieve research success with our content-ready PPT template which simplifies the research journey into three steps. Collect data, conduct research, and evaluate your findings to draw meaningful conclusions. With our template, you'll stay organized and ensure consistency throughout your research process. Maximize your research potential and achieve impactful results using our premium PPT slide.

Research Methodology with 3 Step Process Map

Template 7: Rational Sections Research Methodology Template

This is a well-structured PowerPoint template that features distinct sections that guide you through every aspect of your research. From clearly defining research objectives to selecting appropriate data collection methods, analyzing data, and interpreting results, this PPT slide ensures you cover all essential components. With pre-designed sections for literature review, research design, data analysis, and more, you can streamline your research process and maintain consistency. Harness the potential of each section in our research methodology template to conduct rigorous and impactful studies. 

Rational Sections Research Methodology Supplementary Program

Template 8: Research Methodology with Analysis PPT Template 

Unleash the power of data-driven insights with our ready-made PPT template. This all-inclusive template integrates research methodology and data analysis, providing a comprehensive framework for conducting robust studies. From defining research objectives to data collection, cleaning, and analysis, our template guides you through each step of the research process. With pre-designed sections for statistical analysis, visualizations, and interpretation, uncover meaningful patterns and trends in your data. Elevate your research endeavors with this actionable template and unlock valuable insights for informed decision-making.

Research Methodology with Analysis and Online Survey

Template 9: Research Methodology Workflow PPT Template 

Wish to optimize your research workflow? Use this content-ready PPT template that simplifies the process of planning, executing, and documenting your research methodology. With pre-designed sections for each stage, including research question formulation, data collection, analysis, and reporting, this pre-designed template ensures a structured and organized approach. Streamline your workflow, enhance collaboration, and maintain consistency throughout your research project with our professional and appealing PPT slide. 

Research Methodology Showing Identify Aims Test Workflow

Template 10: Research Methodology with Literature Review PPT Template

Deploy this content-ready PowerPoint template to elevate your research that showcases crucial elements of literature review, providing a seamless framework for conducting rigorous investigations. With this pre-designed PPT template exhibiting research objectives, appropriate methods, a thorough literature review, and findings with existing knowledge, you can save time, maintain consistency, and produce impactful research. Leverage our PPT template to uncover valuable insights and contribute to the advancement of knowledge in your field.

Research Methodology with Literature Review and Report Findings

Template 11: Framework of Exploratory Research Methodology PPT Template  

Embark on a journey of discovery and provide a structured framework for conducting exploratory research using our content-ready template. Delve into uncharted territories and uncover new insights by incorporating this premium template. Use this PPT slide to identify problem, data collection methods, analysis techniques, and interpretation. This PowerPoint template guides you through the exploratory research process. Unlock novel perspectives, generate hypotheses, and fuel innovation using our ready-made slide.

Framework of Exploratory Research Methodology

Template 12: 5 Steps Indicating Research Methodology Process PPT Template

Looking for ways to streamline your research journey? Deploy this content-ready PowerPoint template to simplify the research process into five clear and manageable steps: Define, Design, Collect, Analyze, and Report. Each step is accompanied by pre-designed sections, ensuring a systematic approach to your research project. From formulating research questions to presenting your findings, this premium template provides a structured framework for success. Save time, stay organized, and achieve research excellence with this ready-made template.

5 Steps indicating Research Methodology Process

Template 13: Graph of Primary Research Methodology PPT Template  

Experience the power of data-driven insights with this professional and appealing PPT template. Designed for primary research, this template offers a comprehensive framework that includes field trials, observations, interviews, focus groups, and surveys. Easily visualize and navigate through each stage of your research process, from data collection to analysis. Organize and document your findings to maximize the effectiveness of your primary research and make informed decisions using our ready to use PowerPoint template. 

Graph of Primary Research Methodology

Template 14: Research Methodology Framework of Market Analysis PPT Template 

Use this content-ready PPT template tailored specifically for market analysis to guide your research process. From defining research objectives to selecting appropriate data collection methods, analyzing market trends, and drawing meaningful conclusions, our template covers all essential aspects. Streamline your market analysis, maintain consistency, and make data-driven decisions with ease using our Research Methodology Framework for Market Analysis template. Stay ahead of the competition and capitalize on market opportunities. 

Research Methodology Framework of Market Analysis

Template 15: Four Steps Process of Research Methodology PPT Template 

This is a ready to use PPT template that provides you a structured and organized approach for your research methodology It includes a four-step process: Project Design, Data Acquisition, Data Analysis, and Strategy Recommendation to plan your research project, gather relevant data, analyze it using appropriate techniques, and derive actionable strategy recommendations. Save time and enhance the effectiveness of your research with our premium template, empowering you to make informed decisions and achieve impactful results.

Four Steps Process of Research Methodology

Template 16: Market Research Methodology and Techniques PPT Template 

This comprehensive template equips you with a range of methodologies and techniques to effectively study and understand your target market. From surveys and interviews to focus groups and data analysis, this premium template covers a wide array of research methods. It provides pre-designed sections for each technique, guiding you through the research process and ensuring consistency.

Market Research Methodology and Techniques

Template 17: Quantitative Market Research Methodology Framework PPT Template 

This template serves as a guide to direct your market research endeavors. Showcasing each stage of the research process, including research design, data collection methods, analysis techniques, and reporting, this template ensures a systematic approach to quantitative market research. Create professional and engaging presentations, highlighting your research methodology with ease.

Quantitative Market Research Methodology Framework

Template 18: Process Tree for Research Methodology PPT Template 

Use this content-ready PPT template that outlines the sequential steps involved in conducting a research study. It serves as a roadmap, depicting the flow of activities from research question formulation to data collection, analysis, and interpretation. Like the branches of a tree, each step branches out into sub-steps and tasks, highlighting the interconnectedness and dependencies. Grab this ready-made PowerPoint template that provides you with a clear and engaging overview, ensuring researchers stay organized and follow a systematic approach throughout their research journey.

Process Tree for Research Methodology

Template 19: Flowchart for Research Methodology PPT Template

Deploy this pre-designed PPT that illustrates the logical flow of steps and decisions involved in conducting a research study. Similar to a roadmap, it presents a series of interconnected boxes or shapes connected by arrows, representing the sequential progression of activities. Each box represents a specific task or process, and the arrows indicate the direction of the flow. Incorporate this PPT slide to help your audience understand the research process at a glance, making it engaging and crisp to follow the logical progression of their study.

Flowchart for Research Methodology with Design and Development

Template 20: Eleven Stage Process for Research Methodology PPT Template  

Unleash the power of simplicity in research methodology using our PPT template that eliminates complexity and guides you through each step effortlessly. From defining objectives to data analysis, we've got you covered. Simplify your research journey and unlock meaningful insights with ease.

Eleven Stage Process for Research Methodology

Our content-ready and custom-made templates empower researchers to streamline their work, save time, and maintain consistency. With its comprehensive structure and pre-designed sections, it simplifies the research process, ensuring all essential components are covered. Maximize your research potential and achieve impactful results with our user-friendly template.

Download now!

FAQs on Research Methodology

What are the four types of research methodology.

The four types of research methodology commonly used in academic and scientific studies are:

Descriptive Research: This type aims to describe and document the characteristics, behavior, and phenomena of a particular subject or population. It focuses on gathering information and providing an accurate portrayal of the research topic.

Experimental Research: This approach involves the manipulation and control of variables to establish cause-and-effect relationships. It often includes the use of control groups and random assignment to test hypotheses and draw conclusions.

Correlational Research: This methodology examines the statistical relationship between two or more variables without direct manipulation. It aims to identify patterns and associations between variables to understand their degree of relationship.

Qualitative Research: This approach focuses on exploring and understanding the subjective experiences, perspectives, and meanings attributed by individuals or groups. It involves methods such as interviews, observations, and analysis of textual or visual data to uncover insights and interpretations.

What are the 3 main methodological types of research?

The three main methodological types of research are:

Quantitative Research: This approach involves the collection and analysis of numerical data to uncover patterns, relationships, and statistical trends. It focuses on objective measurements, often utilizing surveys, experiments, and statistical analysis to quantify and generalize findings.

Qualitative Research: This methodology aims to understand the subjective experiences, meanings, and social contexts associated with a research topic. It relies on non-numerical data, such as interviews, observations, and textual analysis, to explore in-depth perspectives, motivations, and behavior.

Mixed-Methods Research: This type of research integrates both quantitative and qualitative approaches, combining the strengths of both methodologies. It involves collecting and analyzing both numerical and non-numerical data to gain a comprehensive understanding of the research problem. Mixed-methods research can provide a more nuanced picture by capturing both statistical trends and rich contextual information.

What are the 7 basic research methods?

There are several research methods commonly used in academic and scientific studies. While the specific categorization may vary, here are seven basic research methods:

Experimental Research: Involves controlled manipulation of variables to establish cause-and-effect relationships.

Survey Research: Utilizes questionnaires or interviews to collect data from a sample population to gather insights and opinions.

Observational Research: Involves systematic observation of subjects in their natural environment to gather qualitative or quantitative data.

Case Study Research: In-depth analysis of a particular individual, group, or phenomenon to gain insights and generate detailed descriptions.

Correlational Research: Examines the statistical relationship between variables to identify patterns and associations.

Qualitative Research: Focuses on understanding subjective experiences, meanings, and social contexts through interviews, observations, and textual analysis.

Action Research: Involves collaboration between researchers and participants to address real-world problems and generate practical solutions.

Related posts:

  • 10 Most Impactful Ways of Writing a Research Proposal: Examples and Sample Templates (Free PDF Attached)
  • Must-have Marketing Research Proposal Example Templates with Samples
  • How Financial Management Templates Can Make a Money Master Out of You
  • [Updated 2023] Top 10 Winning Case Study Competition Presentations [and 10 Vexing Business Issues They Can Help You Solve]

Liked this blog? Please recommend us

data collection and analysis in research methodology ppt

Top 15 System Development Life Cycle Templates to Build Robust Business Applications

Top 15 Matrix Management Templates to Boost Collaboration

Top 15 Matrix Management Templates to Boost Collaboration

This form is protected by reCAPTCHA - the Google Privacy Policy and Terms of Service apply.

digital_revolution_powerpoint_presentation_slides_Slide01

Digital revolution powerpoint presentation slides

sales_funnel_results_presentation_layouts_Slide01

Sales funnel results presentation layouts

3d_men_joinning_circular_jigsaw_puzzles_ppt_graphics_icons_Slide01

3d men joinning circular jigsaw puzzles ppt graphics icons

Business Strategic Planning Template For Organizations Powerpoint Presentation Slides

Business Strategic Planning Template For Organizations Powerpoint Presentation Slides

Future plan powerpoint template slide

Future plan powerpoint template slide

project_management_team_powerpoint_presentation_slides_Slide01

Project Management Team Powerpoint Presentation Slides

Brand marketing powerpoint presentation slides

Brand marketing powerpoint presentation slides

Launching a new service powerpoint presentation with slides go to market

Launching a new service powerpoint presentation with slides go to market

agenda_powerpoint_slide_show_Slide01

Agenda powerpoint slide show

Four key metrics donut chart with percentage

Four key metrics donut chart with percentage

Engineering and technology ppt inspiration example introduction continuous process improvement

Engineering and technology ppt inspiration example introduction continuous process improvement

Meet our team representing in circular format

Meet our team representing in circular format

Google Reviews

  • Open access
  • Published: 09 May 2024

Exploring factors affecting the unsafe behavior of health care workers’ in using respiratory masks during COVID-19 pandemic in Iran: a qualitative study

  • Azadeh Tahernejad 1 ,
  • Sanaz Sohrabizadeh   ORCID: orcid.org/0000-0002-9170-178X 1 &
  • Somayeh Tahernejad 2  

BMC Health Services Research volume  24 , Article number:  608 ( 2024 ) Cite this article

Metrics details

The use of respiratory masks has been one of the most important measures to prevent the spread of COVID-19 among health care workers during the COVID-19 pandemic. Therefore, correct and safe use of breathing masks is vital. The purpose of this study was to exploring factors affecting the unsafe behavior of health care workers’ in using respiratory masks during the COVID-19 pandemic in Iran.

This study was carried out using the conventional qualitative content analysis. Participants were the number of 26 health care workers selected by purposive sampling method. Data collection was conducted through in-depth semi-structured interviews. Data analysis was done using the content analysis approach of Graneheim and Lundman. This study aligns with the Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist and was conducted between December 2021 and April 2022.

The factors affecting the unsafe behavior of health care workers while using respiratory masks were divided into 3 main categories and 8 sub-categories. Categories included discomfort and pain (four sub-categories of headache and dizziness, skin discomfort, respiratory discomfort, feeling hot and thirsty), negative effect on performance (four sub-categories of effect on physical function, effect on cognitive function, system function vision, and hearing), and a negative effect on the mental state (two subcategories of anxiety and depression).

The findings can help identify and analyze possible scenarios to reduce unsafe behaviors at the time of using breathing masks. The necessary therapeutic and preventive interventions regarding the complications of using masks, as well as planning to train personnel for the correct use of masks with minimal health effects are suggested.

Peer Review reports

The COVID-19 pandemic has brought unprecedented challenges to healthcare systems worldwide, requiring Health Care Workers (HCWs) to adopt strict infection control measures to protect themselves [ 1 ]. Among these measures, the proper use of respiratory masks plays a crucial role in preventing the transmission of the virus [ 2 ]. Iran was among the initial countries impacted by COVID-19. In Iran, as in many other countries, HCWs have been at the forefront of the battle against COVID-19, facing various challenges in utilizing respiratory masks effectively [ 3 ]. Over 7.6 million Iranians have been infected by the SARS-CoV-2 virus, with more than 146,480 reported deaths as of August 2023 [ 4 ]. Amid the COVID-19 pandemic, Iran’s healthcare system experienced significant impacts as well [ 5 ].

Despite the passage of several years since the onset of the COVID-19 pandemic, new variant of the virus continues to emerge worldwide. It is crucial to be prepared for future pandemics and similar biological disasters.

Due to the SARS-CoV-2 virus transmission via respiratory droplets, the use of masks and personal protective equipment is essential [ 6 ]. The World Health Organization recommended the use of medical masks, such as surgical masks, for HCWs during the COVID-19 pandemic [ 7 ]. These masks are designed to provide a barrier to respiratory droplets and help reduce the transmission of the virus [ 8 ].

Few studies have been devoted to negative aspects of using respiratory masks in human being. The physiological and adverse effects of using PPE have been investigated in a systematic review study [ 9 ]. In another review study, of skin problems related to the use of respiratory masks were studied [ 10 ]. Also, in some studies, a significant relationship has been found between the time of using masks and the severity of the adverse effects of using masks [ 11 ]. In all the above studies, questionnaires have been used to check the prevalence of these adverse effects among HCWs.

Incorrect use of masks is considered as the unsafe behaviors of HCWs. In some studies, unsafe behaviors are defined as disobeying an accepted safe method while working with the capability of causing an accident [ 12 ]. Since the reasons for unsafe behavior are complex and multifaceted, their prevention requires a clear understanding of important and influential factors. In various studies about the prevalence of unsafe behaviors in work environments, several factors such as individual characteristics, psychological aspects, safety conditions, perceived risk, and stress have been introduced as effective factors in demonstrating the unsafe behaviors [ 12 , 13 , 14 ]. However, the findings are still unable to provide a deep understanding of the underlying causes and motivations contributing to unsafe behaviors.

In the present study, unsafe behaviors while using respiratory masks is defined as the behaviors that are seen by some HCWs, which reduce the effectiveness of respiratory masks due to improper placement on the face or hand contact with the mask [ 15 ]. Some researchers in their studies indicated that other unknown factors are also effective in the unsafe behaviors [ 14 ]. However, the findings are still unable to provide a deep understanding of the underlying causes and motivations contributing to unsafe behaviors. Qualitative studies are needed to answer these questions and determine its causes. Hence, the present study is aimed to explore the factors affecting the unsafe behavior of HCWs while using respiratory masks during the COVID-19 pandemic through a qualitative study.

Study design

This study was carried out using conventional qualitative content analysis (item 9 in COREQ checklist). The interviews explored HCWs’ experiences regarding factors affecting the unsafe behavior in using respiratory masks during covid-19 pandemic in Iran. This research adheres to the guidelines outlined in the Consolidated Criteria for Reporting Qualitative Research (COREQ).

This study was conducted in government and non-government hospitals in Tehran, Mashhad and Rafsanjan that admitted patients with COVID-19 disease. The authors’ place of work and access to participants were important causes of choosing the settings. Moreover, these hospitals experienced a large amount of patients seeking healthcare during the Covid-19 pandemic. This study was performed between December 2021 and April 2022.

Participants

In this study, interviews were performed with healthcare workers (HCWs) including nurses, physicians and hospital workers who had direct contact with patients that used masks for more than 4 h in each work shift. Also, participants frequently utilized surgical masks. Among them, few employed filter masks or a combination of both types. The inclusion criteria were people with experience of using respiratory masks for more than one year and the ability to express their experiences and point of views. The sole exclusion criterion of the current study was a lack of interest in further participation. The participants were selected using purposive sampling method (item 10 in COREQ checklist) in which the researcher selected the most informed people who could explain their experiences regarding the research topic [ 16 ]. The number of participants was determined based on the data saturation principle in which no new concepts were obtained. Data saturation was achieved after 24 interviews, and to ensure saturation, two more interviews were also performed. Finally, the total number of participants was 26 people (items 12–13 in COREQ checklist).

Data gathering

Data collection was performed through in-depth face to face (item 11 in COREQ checklist) semi-structured interviews. The first author, who received training in qualitative research methods, conducted all the interviews (items 1–5 in COREQ checklist). The participants were presented with information about the research topic, objectives, and the researchers’ identities. The researcher thoroughly described the study procedure to those who consented to participate, and written informed consent was obtained from all participants (items 6–8 in COREQ checklist). The data was gathered in the workplace of the participants. Additionally, demographic data of the participants was documented (items 14–16 in COREQ checklist). At first, 5 unstructured interviews were done to extract the primary concept, and then, 21 semi-structured interviews were conducted using the interview guide. The interviews were done in a quiet and comfortable place. The interviews started with simple and general topics and were gradually directed to specific questions based on the answers. Some of the questions were: Based on your experience, what factors are effective in not using your mask safely?

New concepts were extracted from each interview, and this process continued until data saturation was reached. After obtaining permission from the participants to record the interviews, the implementation of the interviews was done immediately after the completion of each interview to increase the accuracy of the obtained data. The duration of the interviews was between 15 and 40 min (30 min on average). Field notes were made during or after the interview and transcripts were returned to participants for the comments and corrections (items 17–23 in COREQ checklist).

Data analysis

Data analysis was done using the five-step content analysis approach of Graneheim and Lundman [ 17 ]. Immediately after conducting each interview, the recorded file of the interview was transcribed in Word software. The interview text was read several times and based on the research question, all the content related to the participants’ experiences were extracted in the form of meaning units. In addition, notes were written in the margins of the text and then, the abstracted meaning units were designated as the code. Subsequently, the compiled codes were categorized into subcategories according to similarities. This process was repeated for all transcribed interviews until the main categories were established. The whole data analysis process was carried out by the researchers. Direct quotes from the interviews included in the results section to elucidate the codes, categories, and themes. (items 24–32 in COREQ checklist).

Trustworthiness

The strategies of transferability, dependability, credibility outlined by Lincoln and Guba were employed to achieve data trustworthiness [ 18 ]. Credibility and dependability were established through data triangulation approach, which involved interviews and field notes. Furthermore, peer check and member check were applied for ensuring credibility. To obtain member check, the transcribed interviews and codes were shared with some participants to receive their feedbacks. In the case of peer check, the research team and independent experts were verified the extracted codes and sub-categories. Data transferability and Confirmability were met through the detailed explanation of the research stages and process.

Women were 50% of all participants and the highest frequency of education was bachelor’s degree ( n  = 17). Furthermore, the highest amount of work experience was 22 years (Table  1 ).

In the present study, 689 initial codes were identified in the initial writing, and after removing duplicate codes and cleaning, the number of final codes included 132 codes. After reviewing and analyzing the data, the factors affecting the unsafe behavior of HCWs while using respiratory masks were divided into 3 main categories and 8 sub-categories (Table  2 ). Categories included discomfort and pain (four sub-categories of headache and dizziness, skin discomfort, respiratory discomfort, feeling hot and thirsty), negative effect on performance (four sub-categories of effect on physical function, effect on cognitive function, system function vision and hearing), and a negative effect on the mental state (two subcategories of anxiety and depression).

Pain and discomfort

Some of the participants reported that the reason for improper and unsafe use of the mask is feeling pain and discomfort, and the reasons include the four subcategories of headache and dizziness, skin discomfort, respiratory discomfort, discomfort caused by heat and thirst.

Skin disorders

The side effects of the mask on the skin are of the important factors in this category. Thus, some participants, due effects of the mask to their skin, limited the use of the mask or did not use it correctly. Among the skin problems experienced by the participants were acne and skin sensitivities, which in some cases required drug treatments. The subcategory of skin sensitivities such as itching and burning was mentioned by more than 70% of the samples as the most important cause of discomfort.

“…I can’t help touching my mask. After half an hour when I put on the new mask, my face, especially my nose, starts to itch badly and I often have to blow my nose from under the mask or over the mask with my fingers, palm or the back of my hand…” (P1)

Respiratory disorders

Most of the participants in the study noted to problems such as difficulty in breathing, heart palpitations, carbon dioxide and unpleasant smell inside the mask as the most important respiratory problems. Therefore, it can be one of the important reasons for removing the mask and unsafe behavior in using the mask.

“… at any opportunity, I remove my mask to take a breath…” (P15)

Feeling hot and thirsty

Temperature discomfort, especially in long-term use and when people had to use two masks, was mentioned as an annoying factor.

“… the heat inside the mask bothers me a lot, I sweat and the mask gets wet… no matter how much water I drink, I still feel thirsty…” (P6)

Unfitness of mask with the individual’s face

Another important point extracted from the interviews was the importance of when to use the mask. In this way, as the time of using the mask increased, the person’s feeling of discomfort due to the mismatch between the belt and the mask increased, because the feeling of pressure and pain on the nose, behind the ears, and the face usually occurs several hours after wearing the mask. Several participants reported experiencing discomfort and headaches after wearing the mask. Although These headaches were often short-term and didn’t have long-term complications according to the participants’ reports, they could affect the work performance of HCWs and their behavior in the correct use of respiratory masks.

“…. After a while, the mask puts pressure on my nose and parts of my head and face. Sometimes I touch and move it unintentionally…” (P3) “… if I don’t move the mask on my face, I get a headache because the mask strap puts pressure on my head and nose…” (P21)

Effects on performance

The participants reported that wearing a mask for a long time is one of their important problems in performing their duties, and one of the main categories extracted from this study is the effects on performance, which includes the physical, cognitive, vision and hearing performance.

Effects on physical performance

The effect on the physical performance of HCWs had less effect on their unsafe behavior in using masks than other cases. But when masks were used for a long time and people were more physically tired, sometimes people removed the mask to increase their ability to perform physical work.

“…when I wear a mask, it becomes difficult for me to walk and do physical work, as if I am short of breath…” (P17)

Effects on cognitive function

It was the most frequent subcategory. Because when people feel uncomfortable, their attention decreases and part of the working memory is involved in feeling uncomfortable. Of course, it should be noted that many of the participants in the present study reported the decrease in alertness to be an effective factor in reducing their cognitive performance.

“…When I take off the mask, I can focus better on my work. Especially when I wear it in longer times, I get tired. Many times, I move the mask to finish my job faster…” (P8)

Based on the participants’ point of views, data perception (understanding information through the visual and auditory systems) decreases while using the mask. However, the negative effect of mask on the visual performance affects the unsafe behavior of the HCWs in the incorrect use of the mask and moving it on the face more than other cases. Most of the people who used glasses reported the steam condensation under the glasses as an important cause of discomfort and interference of the mask with their work duties.

“…Using glasses with a mask is really annoying. I have eye pain and burning, and there is always a fog in front of my eyes…” (P2)

Effects on mental status

Among the other main categories extracted in this study is the effects on mental status, which includes the subcategories of depression and anxiety. The negative effect of the mask on the mental state unconsciously affects the person’s behavior in using the respiratory mask.

Some of the participants in this study reported feeling anxious while wearing the mask for various reasons. Therefore, they refuse to wear masks, although they have no justification for doing so. In many cases, the participants in this study expressed that during higher psychological stress, they suffer more from wearing masks and tend to wear them improperly.

“… Sometimes I distractedly take off my mask so that the other person hears my voice better. However, there are many patients, So I am afraid of getting infected. Sometimes I have to speak loudly and this makes me furious … I worry about making a mistake or misunderstanding the conversation, and …” (P4)

One of the most important factors mentioned as a cause of depression was harder communication with colleagues and patients while wearing a mask. This occurs by increasing the physical and mental workload and placing people in social isolation. In this situation, HCWs sometimes consciously take off their masks, so that they can communicate with each other more conveniently.

“…When I wear a mask, I get tired when talking to others. I prefer not to talk to my colleague. Sometimes I don’t pay attention, I take the mask down so they can understand me …” (P5)

To the best of our knowledge, this research is one of the first qualitative studies to extract the experiences of HCWs for explaining the factors affecting the unsafe behavior of HCWs in using respiratory masks during the COVID-19 pandemic in Iran. Although many reasons can cause the unsafe behavior of HCWs in the correct use of respiratory masks in the hospital, according to the present results, three main categories include discomfort and pain, effects on performance, effects on mental status. Skin and respiratory discomforts and the negative effect of the mask on cognitive functions are among the most important factors affecting the unsafe behavior of HCWs in the field of correct use of respiratory masks.

Based on the present study, the participants experienced discomfort and pain while using the mask, and this was one of the important factors of unsafe use of respiratory masks. Discomfort while wearing masks has been confirmed in several studies [ 19 ]. Additionally, in a similar study, researchers found that wearing face masks during the COVID-19 era heightens the discomfort experienced by HCWs [ 20 ]. Some studies have delved into these discomforts in greater detail. For example, the prevalence of skin disorders among HCWs using PPE during the COVID-19 pandemic was reported to be significant [ 21 ]. Some researchers also reported significant prevalence of respiratory disorders and headaches when using PPE [ 22 ]. The findings of a study suggested that a novel form of headache has emerged among HCWs when using a mask during the COVID-19 pandemic. Both exacerbation of existing headaches and the onset of new headaches have been observed to rise with mask usage, irrespective of the use duration [ 23 ]. In some studies, a significant percentage of people reported feeling thirsty and dehydrated after long-term use of respiratory masks [ 24 ]. Several studies reported disturbing rates of perspiration from prolonged use of respiratory masks [ 25 , 26 , 27 ]. A similar study reported that prolonged exposure to masks and protective gear, especially among HCWs, can lead to various issues such as acne, skin irritation, cognitive impairment, and headaches [ 28 ]. According to the results of the present study, discomfort often causes HCWs to move the mask and disturb the correct fitness of the mask on their face.

The results of the present study indicated that respiratory masks have the ability to hinder the work performance of their users. Various studies have confirmed the adverse effect of respiratory masks on HCWs performance. A similar research indicated that respiratory masks reduce physical performance [ 29 ]. Several studies have highlighted the issue of mask users’ ability to see and read being hindered by fogging of glasses [ 22 , 27 , 30 ]. The feel of weakness to perform cognitive tasks has also been reported in various studies [ 31 , 32 ]. An increase in physical fatigue has been mentioned in some studies as an adverse effect of respiratory masks [ 27 , 31 ]. A research showed the effect of respiratory mask on hearing and visual performance [ 33 ]. Another study reported that high-protection respiratory masks reduced physiological and psychological ability, especially if the workers perform physical work [ 34 ].

The third category is related to the negative impact on the psychological state of HCWs. Some studies noted the use of some PPE, including respiratory masks, as one of the possible reasons for the increase of mental health problems among HCWs [ 35 , 36 ]. Before the prevalence of the COVID-19 virus, the hypothesis of the negative effect of respiratory masks on the mental state of people was investigated and confirmed by some studies [ 37 ]. Furthermore, one study reported that wearing respiratory masks leads to an increase in anxiety [ 38 ].

The non-ergonomic nature of respiratory masks (the lack of suitability of masks for people for long-term use) can affect the effectiveness of respiratory masks by encouraging people to perform unsafe behaviors in using respiratory masks [ 39 ]. An important point was that the attitude and knowledge of health care works regarding the use of respiratory masks were not identified as the cause of unsafe behavior of HCWs. However, this factor has been reported in some previous studies as a reason for people not using PPE properly [ 40 ]. The COVID-19 pandemic situation and the extensive information collected about this pandemic may improve the level of awareness and the attitude of the HCWs.

The escalation in infection rates among HCWs, despite receiving training and utilizing personal protective equipment, served as a catalyst for this research endeavor. So far, there has been a deficiency in the context-specific research that could offer a more profound understanding of this issue. Therefore, the outcomes of this qualitative study may prove beneficial in enhancing the design and execution of respiratory protection programs for HCWs in infectious hospital departments or during similar pandemics.

Implications for nursing practice

It is expected that the findings of this study can provide a better understanding of the factors influencing the unsafe behavior of HCWs while using masks. Furthermore, it can be used as a preliminary study to evaluate the effectiveness of safety and infection control programs in hospitals in the COVID-19 pandemic and similar disasters in the future.

Discomfort and pain, effects on performance, and effects on mental status are important factors for unsafe behavior of HCWs’ in using respiratory masks. Our results could contribute to the identification and analysis of possible scenarios to reduce unsafe behaviors in the use of respiratory masks. Accordingly, it is recommended to provide the necessary therapeutic and preventive interventions regarding the complications of using masks. Planning to reduce the side effects of masks and training personnel on the correct use of masks with minimal health effects are recommended as well.

Limitations

The physical and cognitive workload of HCWs which increased during the COVID-19 pandemic [ 41 ], had possible impacts on the work ability of the staff [ 42 ]. Therefore, their explanation about the negative effects of wearing masks may be affected by their specific working conditions.

Data availability

The datasets used during the current study are available from the corresponding author on reasonable request.

Al-Tawfiq JA, Temsah M-H. Perspective on the challenges of COVID-19 facing healthcare workers. Infection. 2023;51(2):541–4.

Article   CAS   PubMed   Google Scholar  

SeyedAlinaghi S, Karimi A, Afsahi AM, Mirzapour P, Varshochi S, Mojdeganlou H et al. The effectiveness of face masks in preventing covid-19 transmission: a systematic review. Infectious disorders-drug targets (formerly current drug targets-infectious disorders). 2023;23(8):19–29.

Carvalho T, Krammer F, Iwasaki A. The first 12 months of COVID-19: a timeline of immunological insights. Nat Rev Immunol. 2021;21(4):245–56.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Razimoghadam M, Yaseri M, Rezaee M, Fazaeli A, Daroudi R. Non-COVID-19 hospitalization and mortality during the COVID-19 pandemic in Iran: a longitudinal assessment of 41 million people in 2019–2022. BMC Public Health. 2024;24(1):380.

Article   PubMed   PubMed Central   Google Scholar  

Takian A, Aarabi SS, Semnani F, Rayati Damavandi A. Preparedness for future pandemics: lessons learned from the COVID-19 pandemic in Iran. Int J Public Health. 2022;67:1605094.

Toksoy CK, Demirbaş H, Bozkurt E, Acar H, Börü ÜT. Headache related to mask use of healthcare workers in COVID-19 pandemic. Korean J pain. 2021;34(2):241–5.

Article   CAS   PubMed Central   Google Scholar  

Matusiak Ł, Szepietowska M, Krajewski P, Białynicki-Birula R, Szepietowski JC. Inconveniences due to the use of face masks during the COVID‐19 pandemic: a survey study of 876 young people. Dermatol Ther. 2020;33(4).

Seresirikachorn K, Phoophiboon V, Chobarporn T, Tiankanon K, Aeumjaturapat S, Chusakul S, et al. Decontamination and reuse of surgical masks and N95 filtering facepiece respirators during the COVID-19 pandemic: a systematic review. Infect Control Hosp Epidemiol. 2021;42(1):25–30.

Article   PubMed   Google Scholar  

Ha JF. The COVID-19 pandemic, personal protective equipment and respirator: a narrative review. Int J Clin Pract. 2020;74(10):e13578.

Johnson AT. Respirator masks protect health but impact performance: a review. J Biol Eng. 2016;10(1):1–12.

Article   Google Scholar  

Shubhanshu K, Singh A. Prolonged use of N95 mask a boon or bane to healthcare workers during covid–19 pandemic. Indian J Otolaryngol Head Neck Surg. 2021:1–4.

Arghami S, Pouya Kian M, Mohammadfam I. Effects of safety signs on the modification of unsafe behaviours. J Adv Med Biomedical Res. 2009;17(68):93–8.

Google Scholar  

Hashemi Nejad N, Mohammad Fam I, Jafari Nodoshan R, Dortaj Rabori E, Kakaei H. Assessment of unsafe behavior types by safety behavior sampling method in oil refinery workers in 2009 and suggestions for control. Occup Med Q J. 2012;4(1):25–33.

Asadi Z, Akbari H, Ghiyasi S, Dehdashti A, Motalebi Kashani M. Survey of unsafe acts and its influencing factors in metal smelting industry workers in Kashan, 2016. Iran Occup Health. 2018;15(1):55–64.

Khandan M, Koohpaei A, Mobinizadeh V. The relationship between emotional intelligence with general health and safety behavior among workers of a manufacturing industry in 2014-15. J Sabzevar Univ Med Sci. 2017;24(1):63–70.

Rahmanian E, Nekoei-Moghadam M, Mardani M. Factors affecting futures studies in hospitals: a qualitative study. J Qualitative Res Health Sci. 2020;7(4):361–71.

Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105–12.

Korstjens I, Moser A, Series. Practical guidance to qualitative research. Part 4: trustworthiness and publishing. Eur J Gen Pract. 2018;24(1):120–4.

Shenal BV, Radonovich LJ Jr, Cheng J, Hodgson M, Bender BS. Discomfort and exertion associated with prolonged wear of respiratory protection in a health care setting. J Occup Environ Hyg. 2012;9(1):59–64.

Nwosu ADG, Ossai EN, Onwuasoigwe O, Ahaotu F. Oxygen saturation and perceived discomfort with face mask types, in the era of COVID-19: a hospital-based cross-sectional study. Pan Afr Med J. 2021;39(1).

Montero-Vilchez T, Cuenca‐Barrales C, Martinez‐Lopez A, Molina‐Leyva A, Arias‐Santiago S. Skin adverse events related to personal protective equipment: a systematic review and meta‐analysis. J Eur Acad Dermatol Venereol. 2021;35(10):1994–2006.

Jose S, Cyriac MC, Dhandapani M. Health problems and skin damages caused by personal protective equipment: experience of frontline nurses caring for critical COVID-19 patients in intensive care units. Indian J Crit care Medicine: peer-reviewed Official Publication Indian Soc Crit Care Med. 2021;25(2):134.

Article   CAS   Google Scholar  

Dargahi A, Jeddi F, Ghobadi H, Vosoughi M, Karami C, Sarailoo M, et al. Evaluation of masks’ internal and external surfaces used by health care workers and patients in coronavirus-2 (SARS-CoV-2) wards. Environ Res. 2021;196:110948.

Tabah A, Ramanan M, Laupland KB, Buetti N, Cortegiani A, Mellinghoff J, et al. Personal protective equipment and intensive care unit healthcare worker safety in the COVID-19 era (PPE-SAFE): an international survey. J Crit Care. 2020;59:70–5.

Davey SL, Lee BJ, Robbins T, Randeva H, Thake CD. Heat stress and PPE during COVID-19: impact on healthcare workers’ performance, safety and well-being in NHS settings. J Hosp Infect. 2021;108:185–8.

Bansal K, Saji S, Mathur VP, Rahul M, Tewari N. A survey of self-perceived physical discomforts and health behaviors related to personal protective equipment of Indian dental professionals during COVID-19 pandemic. Int J Clin Pediatr Dentistry. 2021;14(6):784.

Agarwal A, Agarwal S, Motiani P. Difficulties encountered while using PPE kits and how to overcome them: an Indian perspective. Cureus. 2020;12(11).

Rosner E. Adverse effects of prolonged mask use among healthcare professionals during COVID-19. J Infect Dis Epidemiol. 2020;6(3):130.

Engeroff T, Groneberg DA, Niederer D. The impact of ubiquitous face masks and filtering face piece application during rest, work and exercise on gas exchange, pulmonary function and physical performance: a systematic review with meta-analysis. Sports medicine-open. 2021;7:1–20.

Arif A, Bhatti AM, Iram M, Masud M, Hadi O, Inam S. Compliance and difficulties faced by health care providers with variants of face masks, eye protection and face shield. Pakistan J Med Health Sci. 2021;15:94–7.

Garra GM, Parmentier D, Garra G. Physiologic effects and symptoms associated with extended-use medical mask and N95 respirators. Annals Work Exposures Health. 2021;65(7):862–7.

Sahebi A, Hasheminejad N, Shohani M, Yousefi A, Tahernejad S, Tahernejad A. Personal protective equipment-associated headaches in health care workers during COVID-19: a systematic review and meta-analysis. Front Public Health. 2022;10.

Unoki T, Sakuramoto H, Sato R, Ouchi A, Kuribara T, Furumaya T, et al. Adverse effects of personal protective equipment among intensive care unit healthcare professionals during the COVID-19 pandemic: a scoping review. SAGE Open Nurs. 2021;7:23779608211026164.

PubMed   PubMed Central   Google Scholar  

AlGhamri AA. The effects of personal protective respirators on human motor, visual, and cognitive skills. 2012.

Chew NW, Lee GK, Tan BY, Jing M, Goh Y, Ngiam NJ, et al. A multinational, multicentre study on the psychological outcomes and associated physical symptoms amongst healthcare workers during COVID-19 outbreak. Brain Behav Immun. 2020;88:559–65.

Sharif S, Amin F, Hafiz M, Benzel E, Peev N, Dahlan RH, et al. COVID 19–depression and neurosurgeons. World Neurosurg. 2020;140:e401–10.

Maison N, Herbrüggen H, Schaub B, Schauberger C, Foth S, Grychtol R, et al. Impact of imposed social isolation and use of face masks on asthma course and mental health in pediatric and adult patients with recurrent wheeze and asthma. Allergy Asthma Clin Immunol. 2021;17(1):93.

CAS   PubMed   PubMed Central   Google Scholar  

Johnson AT, Dooly CR, Blanchard CA, Brown EY. Influence of anxiety level on work performance with and without a respirator mask. Am Ind Hyg Assoc J. 1995;56(9):858–65.

Jazani RK, Seyedmehdi SM, Kavousi A, Javazm ST. A novel questionnaire to ergonomically assess respirators among health care staff: development and validation. Tanaffos. 2018;17(4):257.

Winter S, Thomas JH, Stephens DP, Davis JS. Particulate face masks for protection against airborne pathogens-one size does not fit all: an observational study. Crit Care Resusc. 2010;12(1):24–7.

PubMed   Google Scholar  

de Oliveira Souza D. Health of nursing professionals: workload during the COVID-19 pandemic. Revista Brasileira De Med Do Trabalho. 2020;18(4):464.

Amirmahani M, Hasheminejad N, Tahernejad S, Nik HRT. Evaluation of work ability index and its association with job stress and musculoskeletal disorders among midwives during the Covid-19 pandemic. La Medicina Del Lavoro. 2022;113(4).

Download references

Acknowledgements

We would like to appreciate all participants who accepted our invitations for interviews and shared their valuable experiences with us.

Not applicable.

Author information

Authors and affiliations.

Department of Health in Disasters and Emergencies, School of Public Health and Safety, Shahid Beheshti University of Medical Sciences, Tehran, 1983535511, Iran

Azadeh Tahernejad & Sanaz Sohrabizadeh

Department of Occupational Health Engineering and Safety at Work, School of Public Health, Kerman University of Medical Sciences, Kerman, Iran

Somayeh Tahernejad

You can also search for this author in PubMed   Google Scholar

Contributions

All authors have read and approved the manuscript. AT, SS, ST are responsible for the overall conceptualization and oversight of the study, including study design, data interpretation, and manuscript write-up. AT is responsible for the first draft. All authors reviewed and provided feedback on the manuscript prior to submission.

Corresponding author

Correspondence to Sanaz Sohrabizadeh .

Ethics declarations

Ethics approval and consent to participate.

This study was approved by the ethics committee of the Shahid Beheshti University of Medical Sciences, Tehran, Iran (ethical code: IR.SBMU.PHNS.REC.1401.108). All the participants signed the written informed consent. Accordingly, all participants were informed about the research objectives, confidentiality of their personal information, and the possibility of their leaving or declining the interview sessions at any time. In addition, all methods were carried out in accordance with relevant guidelines and regulations in the Declaration of Helsinki.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary material 2, rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Tahernejad, A., Sohrabizadeh, S. & Tahernejad, S. Exploring factors affecting the unsafe behavior of health care workers’ in using respiratory masks during COVID-19 pandemic in Iran: a qualitative study. BMC Health Serv Res 24 , 608 (2024). https://doi.org/10.1186/s12913-024-11000-4

Download citation

Received : 03 September 2023

Accepted : 16 April 2024

Published : 09 May 2024

DOI : https://doi.org/10.1186/s12913-024-11000-4

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Respiratory mask
  • Health care workers

BMC Health Services Research

ISSN: 1472-6963

data collection and analysis in research methodology ppt

University of Manchester

File(s) stored somewhere else

Please note: Linked content is NOT stored on University of Manchester and we can ' t guarantee its availability, quality, security or accept any liability.

Slides – Tomasz Janus – Open Research Conference 2024

Slides used by Tomasz Janus for the University of Manchester Open Research Conference 2024

Title: Development of transparent low-emission hydropower expansion strategies for Myanmar using bespoke open-source software and explainable AI

Abstract: We present the outcomes of our 3-year-long project on the development of open-source software and a transparent methodology for automated estimation of reservoir greenhouse gas (GHG) emissions using global and publicly accessible geospatial data. We developed two pieces of software that are currently in public domain. (1) GeoCARET (https://github.com/tomjanus/geocaret) is a Geospatial CAtchment and REservoir analysis Tool. It automates the process of delineating reservoirs and catchments and deriving reservoir and catchment specific properties from global datasets such as elevation maps, land cover maps, and various remote-sensing data, (2) RE-Emission (https://github.com/tomjanus/reemission) is our software for estimating GHG emissions from reservoirs. It relies on substantial amounts of input data that need to be sourced manually, or alternatively, can be calculated with GEOCaret. We applied our software to estimate emissions of 200+ reservoirs in Myanmar. We also developed a methodology for interpreting the emission results using explainable AI (xAI). The methodology is shared as a collection of Python and R scripts on GitHub. We present the key components of our research and highlight the opportunities, challenges and potential threats arising from research carried out in a fully transparent and open manner. We give our perspective on sharing code and significant volumes of data stemming from large projects such as ours, that require different sharing platforms and approaches. Since our software relies on many input datasets that come with their own licenses, we would like to share our experience in addressing licensing issues during software release. Finally, we would like to address the problem that is common in research software, i.e. the often complex installation and execution procedures that form a barrier for adoption by people with no or little IT background. In particular, we will talk about packaging and running software requiring multiple dependencies using Docker containers.

Usage metrics

The University of Manchester Library

  • Research, science and technology policy
  • Accessible computing
  • Computational modelling and simulation in earth sciences

CC BY 4.0

medRxiv

Long-Term Follow-Up Defines the Population That Benefits from Early Interception in a High-Risk Smoldering Multiple Myeloma Clinical Trial Using the Combination of Ixazomib, Lenalidomide, and Dexamethasone

  • Find this author on Google Scholar
  • Find this author on PubMed
  • Search for this author on this site
  • For correspondence: [email protected] [email protected]
  • Info/History
  • Supplementary material
  • Preview PDF

Background Early therapeutic intervention in high-risk SMM (HR-SMM) has demonstrated benefit in previous studies of lenalidomide with or without dexamethasone. Triplets and quadruplet studies have been examined in this same population. However, to date, none of these studies examined the impact of depth of response on long-term outcomes of participants treated with lenalidomide-based therapy, and whether the use of the 20/2/20 model or the addition of genomic alterations can further define the population that would benefit the most from early therapeutic intervention. Here, we present the results of the phase II study of the combination of ixazomib, lenalidomide, and dexamethasone in patients with HR-SMM with long-term follow-up and baseline single-cell tumor and immune sequencing that help refine the population to be treated for early intervention studies.

Methods This is a phase II trial of ixazomib, lenalidomide, and dexamethasone (IRD) in HR-SMM. Patients received 9 cycles of induction therapy with ixazomib 4mg on days 1, 8, and 15; lenalidomide 25mg on days 1-21; and dexamethasone 40mg on days 1, 8, 15, and 22. The induction phase was followed by maintenance with ixazomib 4mg on days 1, 8, and 15; and lenalidomide 15mg d1-21 for 15 cycles for 24 months of treatment. The primary endpoint was progression-free survival after 2 years of therapy. Secondary endpoints included depth of response, biochemical progression, and correlative studies included single-cell RNA sequencing and/or whole-genome sequencing of the tumor and single-cell sequencing of immune cells at baseline.

Results Fifty-five patients, with a median age of 64, were enrolled in the study. The overall response rate was 93%, with 31% of patients achieving a complete response and 45% achieving a very good partial response or better. The most common grade 3 or greater treatment-related hematologic toxicities were neutropenia (16 patients; 29%), leukopenia (10 patients; 18%), lymphocytopenia (8 patients; 15%), and thrombocytopenia (4 patients; 7%). Non-hematologic grade 3 or greater toxicities included hypophosphatemia (7 patients; 13%), rash (5 patients; 9%), and hypokalemia (4 patients; 7%). After a median follow-up of 50 months, the median progression-free survival (PFS) was 48.6 months (95% CI: 39.9 – not reached; NR) and median overall survival has not been reached. Patients achieving VGPR or better had a significantly better progression-free survival (p<0.001) compared to those who did not achieve VGPR (median PFS 58.2 months vs. 31.3 months). Biochemical progression preceded or was concurrent with the development of SLiM-CRAB criteria in eight patients during follow-up, indicating that biochemical progression is a meaningful endpoint that correlates with the development of end-organ damage. High-risk 20/2/20 participants had the worst PFS compared to low- and intermediate-risk participants. The use of whole genome or single-cell sequencing of tumor cells identified high-risk aberrations that were not identified by FISH alone and aided in the identification of participants at risk of progression. scRNA-seq analysis revealed a positive correlation between MHC class I expression and response to proteasome inhibition and at the same time a decreased proportion of GZMB+ T cells within the clonally expanded CD8+ T cell population correlated with suboptimal response.

Conclusions Ixazomib, lenalidomide and dexamethasone in HR-SMM demonstrates significant clinical activity with an overall favorable safety profile. Achievement of VGPR or greater led to significant improvement in time to progression, suggesting that achieving deep response is beneficial in HR-SMM. Biochemical progression correlates with end-organ damage. Patients with high-risk FISH and lack of deep response had poor outcomes. ClinicalTrials.gov identifier: ( NCT02916771 )

Competing Interest Statement

ON: Research support from Takeda and Janssen; Advisory board participation: Bristol Myers Squibb, Janssen, Sanofi, Takeda, GPCR therapeutics. Honorarium: Pfizer M.P.A: No conflicts of interest exist. R.A.R.: No conflicts of interest exist. M.T: No conflicts of interest exist. S.M.: No conflicts of interest exist. J.B.A. No conflicts of interest exist. L.B.: No conflicts of interest exist. A.K.D. No conflicts of interest exist. H.E.: No conflicts of interest exist. M.B.: Consultancy with Janssen, BMS, Takeda, Epizyme, Karyopharm, Menarini Biosystems, and Adaptive. E.D.L.: No conflicts of interest exist. J.P.L.: No conflicts of interest exist. G.B.: Consultancy: Prothena E.O.: Advisory Board/Honoraria: Janssen, BMS, Sanofi, Pfizer, Exact Consulting–Takeda Steering Committee: Natera T.W.: No conflicts of interest exist. J.T.: No conflicts of interest exist. K.A.: Consultant: AstraZeneca, Janssen, Pfizer, Board/ Stock Options: Dynamic Cell Therapies, C4 Therapeutics, Next RNA, Oncopep, Starton, Window G.G.: No conflicts of interest exist. L.T.: No conflicts of interest exist. P.G.R.: Advisory Boards/Consulting: Celgene/BMS, GSK, Karyopharm, Oncopeptides, Regeneron, Sanofi, Takeda. Research Grants: Oncopeptides, Karyopharm R.S.P.: Co–founder, equity holder, and consultant on pre-seed stage startup. I.M.G.: Consulting/Advisory role: AbbVie, Adaptive, Amgen, Aptitude Health, Bristol Myers Squibb, GlaxoSmithKline, Huron Consulting, Janssen, Menarini Silicon Biosystems, Oncopeptides, Pfizer, Sanofi, Sognef, Takeda, The Binding Site, and Window Therapeutics; Speaker fees: Vor Biopharma, Veeva Systems, Inc.; I.M.G.’s spouse is CMO and an equity holder of Disc Medicine.

Clinical Trial

NCT02916771

Funding Statement

Takeda and Celgene (Bristol Myers Squibb) provided support for the clinical trial. These funders reviewed the final manuscript and approved its publication; they were not involved in the conceptualization, design, data collection, analysis, or preparation of the manuscript. Funding was also provided by the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation and the NIH (R35CA263817 awarded to I.M.G.).

Author Declarations

I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.

The details of the IRB/oversight body that provided approval or exemption for the research described are given below:

The research study was approved by the Dana-Farber/Harvard Cancer Center institutional review board (protocol number DFCI 16-313) and complied with all relevant ethical and legal regulations.

I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.

I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).

I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.

↵ * Co-first author

Data availability

Single-cell RNA and TCR-sequencing raw data generated for this study will be deposited in dbGaP (study site pending). Gene expression matrices can be accessed on Mendeley: https://data.mendeley.com/preview/z56k3y8cdg?a=6945f72a-b190-4fb1-b0fe-31c12a70a0d4 (reserved DOI:10.17632/z56k3y8cdg.1)

View the discussion thread.

Supplementary Material

Thank you for your interest in spreading the word about medRxiv.

NOTE: Your email address is requested solely to identify you as the sender of this article.

Reddit logo

Citation Manager Formats

  • EndNote (tagged)
  • EndNote 8 (xml)
  • RefWorks Tagged
  • Ref Manager
  • Tweet Widget
  • Facebook Like
  • Google Plus One

Subject Area

  • Addiction Medicine (322)
  • Allergy and Immunology (626)
  • Anesthesia (162)
  • Cardiovascular Medicine (2356)
  • Dentistry and Oral Medicine (286)
  • Dermatology (206)
  • Emergency Medicine (377)
  • Endocrinology (including Diabetes Mellitus and Metabolic Disease) (832)
  • Epidemiology (11744)
  • Forensic Medicine (10)
  • Gastroenterology (699)
  • Genetic and Genomic Medicine (3713)
  • Geriatric Medicine (347)
  • Health Economics (632)
  • Health Informatics (2383)
  • Health Policy (928)
  • Health Systems and Quality Improvement (891)
  • Hematology (340)
  • HIV/AIDS (776)
  • Infectious Diseases (except HIV/AIDS) (13296)
  • Intensive Care and Critical Care Medicine (767)
  • Medical Education (364)
  • Medical Ethics (104)
  • Nephrology (397)
  • Neurology (3473)
  • Nursing (197)
  • Nutrition (522)
  • Obstetrics and Gynecology (669)
  • Occupational and Environmental Health (661)
  • Oncology (1809)
  • Ophthalmology (534)
  • Orthopedics (218)
  • Otolaryngology (286)
  • Pain Medicine (232)
  • Palliative Medicine (66)
  • Pathology (445)
  • Pediatrics (1026)
  • Pharmacology and Therapeutics (426)
  • Primary Care Research (417)
  • Psychiatry and Clinical Psychology (3164)
  • Public and Global Health (6120)
  • Radiology and Imaging (1268)
  • Rehabilitation Medicine and Physical Therapy (741)
  • Respiratory Medicine (824)
  • Rheumatology (379)
  • Sexual and Reproductive Health (371)
  • Sports Medicine (320)
  • Surgery (398)
  • Toxicology (50)
  • Transplantation (171)
  • Urology (145)

IMAGES

  1. Unleashing Insights: Mastering the Art of Research and Data Analysis

    data collection and analysis in research methodology ppt

  2. What is Data Analysis in Research

    data collection and analysis in research methodology ppt

  3. 5 Steps of the Data Analysis Process

    data collection and analysis in research methodology ppt

  4. PPT

    data collection and analysis in research methodology ppt

  5. Quantitative Research Methods PowerPoint Template

    data collection and analysis in research methodology ppt

  6. 6 Step Analysis Process With Data Collection Plan

    data collection and analysis in research methodology ppt

VIDEO

  1. RESEARCH METHODOLOGY (PRESENTATION)

  2. Presentation of Data (lec. and SGT)

  3. Data Collection & Analysis Chapter 5 Business Research

  4. Presenting Written Research Methodology

  5. Series of Webinar on “Research Methodology

  6. International Conference: COM 4.0 Inaugural Session: Eudoxia Research University

COMMENTS

  1. Methods of data collection and analysis

    Methods of data collection and analysis. Dec 30, 2017 • Download as PPTX, PDF •. 3 likes • 369 views. S. Shimanta Dutta. Research is the most mandatory term for higher education. Data is essential for any research. so, for the purpose of collecting and analysis of data this presentation will help to any students/. Read more.

  2. Analysis of data in research

    Analysis of data in research. Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making. 1. Presented by Abhijeet Birari UNIT V ANALYSIS OF DATA. 2.

  3. Chapter 10-DATA ANALYSIS & PRESENTATION

    1 of 38. Chapter 10-DATA ANALYSIS & PRESENTATION. 1. Data Analysis and Presentation. 2. PLANNING FOR DATA ANALYSIS. 3. Data Analysis The purpose To answer the research questions and to help determine the trends and relationships among the variables. 4.

  4. PDF Chapter 2

    Keep adding until there are k classes Step 5: Find the upper class limit Step 7: Find the class boundaries by subtracting 0.5 from each lower class limit and adding 0.5 to the UCL as shown. step 8:Tally the data step 9: Write the numeric values for the frequency column Step 10: Find cumulative frequency.

  5. Methods of Data Collection, Representation, and Analysis

    This chapter concerns research on collecting, representing, and analyzing the data that underlie behavioral and social sciences knowledge. Such research, methodological in character, includes ethnographic and historical approaches, scaling, axiomatic measurement, and statistics, with its important relatives, econometrics and psychometrics. The field can be described as including the self ...

  6. Data Collection

    Data Collection | Definition, Methods & Examples. Published on June 5, 2020 by Pritha Bhandari.Revised on June 21, 2023. Data collection is a systematic process of gathering observations or measurements. Whether you are performing research for business, governmental or academic purposes, data collection allows you to gain first-hand knowledge and original insights into your research problem.

  7. PDF Presenting Methodology and Research Approach

    out your research from data collection through data analysis. The two sections that follow elaborate in greater detail on the methods of data collection and the process of data analysis. The narrative in this section is often augmented by a flowchart or diagram that provides an illustration of the various steps involved. 5: Data-Collection Methods

  8. Data analysis and research presentation (Part 4)

    In this section the aim is to discuss quantitative and qualitative analysis and how to present research. You have already been advised to read widely on the method and techniques of your choice, and this is emphasized even more in this section. It is outwith the scope of this text to provide the detail and depth necessary for you to master any ...

  9. (PDF) Data Collection Methods and Tools for Research; A Step-by-Step

    PDF | Learn how to choose the best data collection methods and tools for your research project, with examples and tips from ResearchGate experts. | Download and read the full-text PDF.

  10. Methods & Tools of Data Collection

    8 Tools & Methods of Data Collection. A device/ instrument used by the researcher to collect data (to measure the concept of interest ) Methods Various steps or strategies used for gathering and analyzing data in a research 4/21/2015. 9 Types of Data Collection Methods. Self reports Interview Unstructured Semi structured Structured ...

  11. Data Collection

    Data collection is the process of gathering and collecting information from various sources to analyze and make informed decisions based on the data collected. This can involve various methods, such as surveys, interviews, experiments, and observation. In order for data collection to be effective, it is important to have a clear understanding ...

  12. Chapter 20. Presentations

    Writing about Your Research The Journal Article and the Research Proposal. Although the research proposal is written before you have actually done your research and the article is written after all data collection and analysis is complete, there are actually many similarities between the two in terms of organization and purpose.

  13. Data Collection, Presentation and Analysis

    Abstract. This chapter covers the topics of data collection, data presentation and data analysis. It gives attention to data collection for studies based on experiments, on data derived from existing published or unpublished data sets, on observation, on simulation and digital twins, on surveys, on interviews and on focus group discussions.

  14. Data Collection and Analysis

    Presentation on theme: "Data Collection and Analysis"— Presentation transcript: 1 Data Collection and Analysis. Dr. Fred Mugambi Mwirigi JKUAT. 2 Introduction Data collection is perhaps the most important component of Monitoring and Evaluation Without good data we can not tell if or not the project is beneficial as expected Mechanisms must be ...

  15. Chapter 8 (procedure of data collection)

    Chapter 8 (procedure of data collection) Dec 19, 2017 • Download as PPT, PDF •. 41 likes • 18,109 views. B. BoreyThai1. research. Education. Download now. Chapter 8 (procedure of data collection) - Download as a PDF or view online for free.

  16. (PDF) Qualitative Data Collection, Analysis and Presentation: A

    qualitative analysis is the production of visual displays. Laying out data in table or matrix form, and drawing theories. out in the form of a flow chart or map, helps to understand. what the ...

  17. Data Collection Methods

    Step 2: Choose your data collection method. Based on the data you want to collect, decide which method is best suited for your research. Experimental research is primarily a quantitative method. Interviews, focus groups, and ethnographies are qualitative methods. Surveys, observations, archival research, and secondary data collection can be ...

  18. (PPT) Lecture 5 Data Collection Methods

    The article sensitises the prospective researcher for some pitfalls when using the interview as a data collection technique and includes some hints for this protective researcher when using the interview data collection technique in practice. It also attempts to bring order into the vocabulary when using the concepts: procedure and technique.

  19. Qualitative Analysis: Process and Examples

    By Monograph Matters May 12, 2020 Teaching and Research Resources. Authors Laura Wray-Lake and Laura Abrams describe qualitative data analysis, with illustrative examples from their SRCD monograph, Pathways to Civic Engagement Among Urban Youth of Color. This PowerPoint document includes presenter notes, making it an ideal resource for ...

  20. [Updated 2023] Top 20 PowerPoint Templates for a Systematic Research

    Template 2: Research Methodology Process Analysis Template. This is a content-ready PowerPoint template to maximize the effectiveness of your research. This professional and appealing template guides you step-by-step through the research process, from defining your research question to analyzing and interpreting data.

  21. Data Collection Methods and Strategies for Effective Analytics

    This method is particularly useful for academic research, policy analysis, or understanding the evolution of a specific industry. It complements primary data collection methods by providing context. 9. Crowdsourced Data ... Tailor data collection methods to the research questions and objectives. A mix of quantitative and qualitative methods can ...

  22. Self-Presentation Processes in Job Analysis: A Field Experiment

    Although job analysis is a widely used organizational data collection technique, little research has investigated the extent to which job analysis information is affected by self-presentation processes. This study represents the first direct test of the propositions offered by F. P. Morgeson and M. A. Campion (1997) concerning self-presentation in job analysis measurement.

  23. Research Methodology Data Collection

    4. INTRODUCTION Research is a highly specialised activity that is more than just collecting information or writing a description in a targeted fashion, which is further analysed thoroughly to lead to answers of research questions and evaluate results. The collection of data is the heart of any research design, irrespective of the field of study. Data collection is the process of gathering the ...

  24. Exploring factors affecting the unsafe behavior of health care workers

    This study was carried out using the conventional qualitative content analysis. Participants were the number of 26 health care workers selected by purposive sampling method. Data collection was conducted through in-depth semi-structured interviews. Data analysis was done using the content analysis approach of Graneheim and Lundman.

  25. Slides

    Slides used by Tomasz Janus for the University of Manchester Open Research Conference 2024Title: Development of transparent low-emission hydropower expansion strategies for Myanmar using bespoke open-source software and explainable AIAbstract: We present the outcomes of our 3-year-long project on the development of open-source software and a transparent methodology for automated estimation of ...

  26. Long-Term Follow-Up Defines the Population That Benefits from Early

    These funders reviewed the final manuscript and approved its publication; they were not involved in the conceptualization, design, data collection, analysis, or preparation of the manuscript. Funding was also provided by the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation and the NIH (R35CA263817 awarded to I.M.G.).

  27. Methods of Data Collection: Primary and Secondary Sources

    6. Methods of data Collection :Primary Data • 1) OBSERVATION METHOD : Observation method is a method under which data from the field is collected with the help of observation by the observer or by personally going to the field. • In the words of P.V. Young, "Observation may be defined as systematic viewing, coupled with consideration of ...

  28. AutoTA: Galaxy Workflows for Reproducible and Automated ...

    Concomitant to advancement in sequencing techniques, analysis methods and softwares have also grown to be sophisticated and efficient. Qiime2 is a collection of python scripts which enables end-to- end analysis of metagenomic data. However, usage of latest and more complex databases for classification is hindered by requirement of high compute ...

  29. Data collection and analysis

    Data collection and analysis. This document provides an overview of key concepts for data gathering and analysis in interaction design. It discusses techniques for interviews, questionnaires, observations, and the analysis of both qualitative and quantitative data. The goal is to understand users and inform the design process.