What is survey research?

Find out everything you need to know about survey research, from what it is and how it works to the different methods and tools you can use to ensure you’re successful.

Survey research is the process of collecting data from a predefined group (e.g. customers or potential customers) with the ultimate goal of uncovering insights about your products, services, or brand overall.

As a quantitative data collection method, survey research can provide you with a goldmine of information that can inform crucial business and product decisions. But survey research needs careful planning and execution to get the results you want.

So if you’re thinking about using surveys to carry out research, read on.


Types of survey research

Calling these methods ‘survey research’ slightly underplays the complexity of this type of information gathering. From the expertise required to carry out each activity to the analysis of the data and its eventual application, a considerable amount of effort is required.

As for how you can carry out your research, there are several options to choose from — face-to-face interviews, telephone surveys, focus groups (though these are closer to interviews than surveys), online surveys, and panel surveys.

Typically, the survey method you choose will largely be guided by who you want to survey, the size of your sample, your budget, and the type of information you’re hoping to gather.

Here are a few of the most-used survey types:

Face-to-face interviews

Before technology made it possible to conduct research using online surveys, telephone and mail were the most popular methods for survey research. However, face-to-face interviews were considered the gold standard — the only reason they weren’t as popular was their highly prohibitive cost.

When it came to face-to-face interviews, organizations would use highly trained researchers who knew when to probe or follow up on vague or problematic answers. They also knew when to offer assistance to respondents when they seemed to be struggling. The result was that these interviewers could get sample members to participate and engage in surveys in the most effective way possible, leading to higher response rates and better quality data.

Telephone surveys

While phone surveys have been popular in the past, particularly for measuring general consumer behavior or beliefs, response rates have been declining since the 1990s.

Phone surveys are usually conducted using a random dialing system and software that a researcher can use to record responses.

This method is beneficial when you want to survey a large population but don’t have the resources to conduct face-to-face research surveys or run focus groups, or when you want to ask both multiple-choice and open-ended questions.

The downsides are that phone surveys can take a long time to complete depending on the response rate, and you may have to do a lot of cold-calling to get the information you need.

You also run the risk of respondents not being completely honest. Instead, they’ll answer your survey questions quickly just to get off the phone.

Focus groups (interviews — not surveys)

Focus groups are a separate qualitative methodology rather than surveys — even though they’re often bunched together. They’re normally used for survey pretesting and design, but they’re also a great way to generate opinions and data from a diverse range of people.

Focus groups involve putting a cohort of demographically or socially diverse people in a room with a moderator and engaging them in a discussion on a particular topic, such as your product, brand, or service.

They remain a highly popular method for market research, but they’re expensive and require a lot of administration to conduct and analyze the data properly.

You also run the risk of more dominant members of the group taking over the discussion and swaying the opinions of other people — potentially providing you with unreliable data.

Online surveys

Online surveys have become one of the most popular survey methods because they are cost-effective and enable researchers to survey a large population quickly and accurately.

Online surveys can essentially be used by anyone for any research purpose – we’ve all seen the increasing popularity of polls on social media (although these are not scientific).

Using an online survey allows you to ask a series of different question types and collect data instantly that’s easy to analyze with the right software.

There are also several methods for running and distributing online surveys that allow you to get your questionnaire in front of a large population at a fraction of the cost of face-to-face interviews or focus groups.

This is particularly true when it comes to mobile surveys as most people with a smartphone can access them online.

However, you have to be aware of the potential dangers of using online surveys, particularly when it comes to the survey respondents. The biggest risk is that, because online surveys require access to a computer or mobile device to complete, they could exclude elderly members of the population who don’t have access to the technology — or don’t know how to use it.

It could also exclude those from poorer socio-economic backgrounds who can’t afford a computer or consistent internet access. This could mean the data collected is more biased towards a certain group and can lead to less accurate data when you’re looking for a representative population sample.


Panel surveys

A panel survey involves recruiting respondents who have specifically signed up to answer questionnaires and who are put on a list by a research company. This could be a workforce of a small company or a major subset of a national population. Usually, these groups are carefully selected so that they represent a sample of your target population — giving you balance across criteria such as age, gender, background, and so on.

Panel surveys give you access to the respondents you need and are usually provided by the research company in question. As a result, it’s much easier to get access to the right audiences as you just need to tell the research company your criteria. They’ll then determine the right panels to use to answer your questionnaire.

However, there are downsides. The main one is that if the research company offers its panels incentives, e.g. discounts, coupons, or money, respondents may answer a lot of questionnaires just for the benefits.

This might mean they rush through your survey without providing considered and truthful answers. As a consequence, this can damage the credibility of your data and potentially ruin your analyses.

What are the benefits of using survey research?

Depending on the research method you use, there are lots of benefits to conducting survey research for data collection. Here, we cover a few:

1.   They’re relatively easy to do

Most research surveys are easy to set up, administer and analyze. As long as the planning and survey design are thorough and you target the right audience, the data collection is usually straightforward regardless of which survey type you use.

2.   They can be cost-effective

Survey research can be relatively cheap depending on the type of survey you use.

Generally, qualitative research methods that require access to people in person or over the phone are more expensive and require more administration.

Online surveys or mobile surveys are often more cost-effective for market research and can give you access to the global population for a fraction of the cost.

3.   You can collect data from a large sample

Again, depending on the type of survey, you can obtain survey results from an entire population at a relatively low price. You can also administer a large variety of survey types to fit the project you’re running.

4.   You can use survey software to analyze results immediately

Using survey software, you can use advanced statistical analysis techniques to gain insights into your responses immediately.

Analysis can be conducted using a variety of parameters to determine the validity and reliability of your survey data at scale.
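As one concrete example of a reliability check (not specific to any particular survey software), the sketch below computes Cronbach’s alpha, a standard internal-consistency measure, for a few made-up rating items in Python:

```python
import pandas as pd

# Made-up responses to three related rating questions (1-5 scale)
items = pd.DataFrame({
    "q1": [4, 2, 5, 3, 4, 1],
    "q2": [5, 2, 4, 3, 4, 2],
    "q3": [4, 1, 5, 2, 5, 1],
})

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Internal-consistency reliability of a set of survey items."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1)
    total_variance = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(round(cronbach_alpha(items), 2))  # values above ~0.7 are usually considered acceptable
```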

5.   Surveys can collect any type of data

While most people view surveys as a quantitative research method, they can just as easily be adapted to gain qualitative information by simply including open-ended questions or conducting interviews face to face.

How to measure concepts with survey questions

While surveys are a great way to obtain data, that data on its own is useless unless it can be analyzed and developed into actionable insights.

The easiest and most effective way to measure survey results is to use a dedicated research tool that puts all of your survey results into one place.

When it comes to survey measurement, there are four measurement types to be aware of that will determine how you treat your different survey results:

Nominal scale

With a nominal scale, you can only keep track of how many respondents chose each option from a question, and which response generated the most selections.

An example of this would be simply asking a respondent to choose a product or brand from a list.

You could find out which brand was chosen the most but have no insight as to why.

Ordinal scale

Ordinal scales are used to judge an order of preference. They do provide some level of quantitative value because you’re asking respondents to express a preference for one option over another.

Ratio scale

Ratio scales can be used to judge the order and difference between responses. For example, asking respondents how much they spend on their weekly shopping on average.

Interval scale

In an interval scale, values are ordered and the difference between any two values is meaningful, but there is no true zero point — for example, measuring temperature or a credit score between one value and another.
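To make these distinctions concrete, here is a minimal sketch in Python (using pandas, with made-up data and column names) showing how each measurement type constrains the statistics you can sensibly report:

```python
import pandas as pd

# Made-up responses from a short survey, one column per measurement type
responses = pd.DataFrame({
    "brand":        ["A", "B", "A", "C", "A"],            # nominal: labels only
    "preference":   [1, 3, 2, 1, 2],                      # ordinal: rank of a favourite option
    "weekly_spend": [42.50, 18.00, 63.20, 30.00, 25.75],  # ratio: true zero, so ratios are meaningful
    "temperature":  [18, 21, 19, 24, 20],                 # interval: differences meaningful, no true zero
})

# Nominal data only supports counts and modes
print(responses["brand"].value_counts())

# Ordinal data supports order-based statistics such as the median
print(responses["preference"].median())

# Interval and ratio data support means and differences (and, for ratio data, ratios)
print(responses[["weekly_spend", "temperature"]].mean())
```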

Step by step: How to conduct surveys and collect data

Conducting a survey and collecting data is relatively straightforward, but it does require some careful planning and design to ensure it results in reliable data.

Step 1 – Define your objectives

What do you want to learn from the survey? How is the data going to help you? Having a hypothesis or series of assumptions about survey responses will allow you to create the right questions to test them.

Step 2 – Create your survey questions

Once you’ve got your hypotheses or assumptions, write out the questions you need answering to test your theories or beliefs. Be wary about framing questions that could lead respondents or inadvertently create biased responses.

Step 3 – Choose your question types

Your survey should include a variety of question types and should aim to obtain quantitative data with some qualitative responses from open-ended questions. Using a mix of questions (simple Yes/No, multiple-choice, rank in order, etc) not only increases the reliability of your data but also reduces survey fatigue and respondents simply answering questions quickly without thinking.


Step 4 – Test your questions

Before sending your questionnaire out, you should test it (e.g. have a random internal group do the survey) and carry out A/B tests to ensure you’ll gain accurate responses.

Step 5 – Choose your target and send out the survey

Depending on your objectives, you might want to target the general population with your survey or a specific segment of the population. Once you’ve narrowed down who you want to target, it’s time to send out the survey.

After you’ve deployed the survey, keep an eye on the response rate to ensure you’re getting the number you expected. If your response rate is low, you might need to send the survey out to a second group to obtain a large enough sample — or do some troubleshooting to work out why your response rates are so low. This could be down to your questions, delivery method, selected sample, or otherwise.

Step 6 – Analyze results and draw conclusions

Once you’ve got your results back, it’s time for the fun part.

Break down your survey responses using the parameters you’ve set in your objectives and analyze the data to compare to your original assumptions. At this stage, a research tool or software can make the analysis a lot easier — and that’s somewhere Qualtrics can help.
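As a rough illustration, if your survey tool can export responses to a CSV file, a simple analysis comparing the results against an assumption might look like the sketch below (the file name and the age_group and would_recommend columns are hypothetical):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical export of survey responses, one row per respondent
df = pd.read_csv("survey_responses.csv")  # assumed columns: "age_group", "would_recommend"

# Break the results down by a parameter from your objectives
counts = pd.crosstab(df["age_group"], df["would_recommend"])
print(counts)

# Test the original assumption that recommendation rates differ by age group
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```

A small p-value would suggest the recommendation rate really does vary by age group; how you act on that depends on the objectives you set in Step 1.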

Get reliable insights with survey software from Qualtrics

Gaining feedback from customers and leads is critical for any business. Data gathered from surveys can prove invaluable for understanding your products and your market position, and with survey software from Qualtrics, it couldn’t be easier.

Used by more than 13,000 brands and supporting more than 1 billion surveys a year, Qualtrics empowers everyone in your organization to gather insights and take action. No coding required — and your data is housed in one system.

Get feedback from more than 125 sources on a single platform and view and measure your data in one place to create actionable insights and gain a deeper understanding of your target customers.

Automatically run complex text and statistical analysis to uncover exactly what your survey data is telling you, so you can react in real-time and make smarter decisions.

We can help you with survey management, too. From designing your survey and finding your target respondents to getting your survey in the field and reporting back on the results, we can help you every step of the way.

And for expert market researchers and survey designers, Qualtrics features custom programming to give you total flexibility over question types, survey design, embedded data, and other variables.

No matter what type of survey you want to run, what target audience you want to reach, or what assumptions you want to test or answers you want to uncover, we’ll help you design, deploy and analyze your survey with our team of experts.

Ready to find out more about Qualtrics CoreXM?

Get started with our free survey maker tool today


Survey Research – Types, Methods, Examples


Definition:

Survey Research is a quantitative research method that involves collecting standardized data from a sample of individuals or groups through the use of structured questionnaires or interviews. The data collected is then analyzed statistically to identify patterns and relationships between variables, and to draw conclusions about the population being studied.

Survey research can be used to answer a variety of questions, including:

  • What are people’s opinions about a certain topic?
  • What are people’s experiences with a certain product or service?
  • What are people’s beliefs about a certain issue?

Survey Research Methods

Common survey research methods include the following:

  • Telephone surveys: A survey research method where questions are administered to respondents over the phone, often used in market research or political polling.
  • Face-to-face surveys: A survey research method where questions are administered to respondents in person, often used in social or health research.
  • Mail surveys: A survey research method where questionnaires are sent to respondents through mail, often used in customer satisfaction or opinion surveys.
  • Online surveys: A survey research method where questions are administered to respondents through online platforms, often used in market research or customer feedback.
  • Email surveys: A survey research method where questionnaires are sent to respondents through email, often used in customer satisfaction or opinion surveys.
  • Mixed-mode surveys: A survey research method that combines two or more survey modes, often used to increase response rates or reach diverse populations.
  • Computer-assisted surveys: A survey research method that uses computer technology to administer or collect survey data, often used in large-scale surveys or data collection.
  • Interactive voice response surveys: A survey research method where respondents answer questions through a touch-tone telephone system, often used in automated customer satisfaction or opinion surveys.
  • Mobile surveys: A survey research method where questions are administered to respondents through mobile devices, often used in market research or customer feedback.
  • Group-administered surveys: A survey research method where questions are administered to a group of respondents simultaneously, often used in education or training evaluation.
  • Web-intercept surveys: A survey research method where questions are administered to website visitors, often used in website or user experience research.
  • In-app surveys: A survey research method where questions are administered to users of a mobile application, often used in mobile app or user experience research.
  • Social media surveys: A survey research method where questions are administered to respondents through social media platforms, often used in social media or brand awareness research.
  • SMS surveys: A survey research method where questions are administered to respondents through text messaging, often used in customer feedback or opinion surveys.
  • IVR surveys: A survey research method where questions are administered to respondents through an interactive voice response system, often used in automated customer feedback or opinion surveys.
  • Mixed-method surveys: A survey research method that combines both qualitative and quantitative data collection methods, often used in exploratory or mixed-method research.
  • Drop-off surveys: A survey research method where respondents are provided with a survey questionnaire and asked to return it at a later time or through a designated drop-off location.
  • Intercept surveys: A survey research method where respondents are approached in public places and asked to participate in a survey, often used in market research or customer feedback.
  • Hybrid surveys: A survey research method that combines two or more survey modes, data sources, or research methods, often used in complex or multi-dimensional research questions.

Types of Survey Research

There are several types of survey research that can be used to collect data from a sample of individuals or groups. The following are common types of survey research:

  • Cross-sectional survey: A type of survey research that gathers data from a sample of individuals at a specific point in time, providing a snapshot of the population being studied.
  • Longitudinal survey: A type of survey research that gathers data from the same sample of individuals over an extended period of time, allowing researchers to track changes or trends in the population being studied.
  • Panel survey: A type of longitudinal survey research that tracks the same sample of individuals over time, typically collecting data at multiple points in time.
  • Epidemiological survey: A type of survey research that studies the distribution and determinants of health and disease in a population, often used to identify risk factors and inform public health interventions.
  • Observational survey: A type of survey research that collects data through direct observation of individuals or groups, often used in behavioral or social research.
  • Correlational survey: A type of survey research that measures the degree of association or relationship between two or more variables, often used to identify patterns or trends in data.
  • Experimental survey: A type of survey research that involves manipulating one or more variables to observe the effect on an outcome, often used to test causal hypotheses.
  • Descriptive survey: A type of survey research that describes the characteristics or attributes of a population or phenomenon, often used in exploratory research or to summarize existing data.
  • Diagnostic survey: A type of survey research that assesses the current state or condition of an individual or system, often used in health or organizational research.
  • Explanatory survey: A type of survey research that seeks to explain or understand the causes or mechanisms behind a phenomenon, often used in social or psychological research.
  • Process evaluation survey: A type of survey research that measures the implementation and outcomes of a program or intervention, often used in program evaluation or quality improvement.
  • Impact evaluation survey: A type of survey research that assesses the effectiveness or impact of a program or intervention, often used to inform policy or decision-making.
  • Customer satisfaction survey: A type of survey research that measures the satisfaction or dissatisfaction of customers with a product, service, or experience, often used in marketing or customer service research.
  • Market research survey: A type of survey research that collects data on consumer preferences, behaviors, or attitudes, often used in market research or product development.
  • Public opinion survey: A type of survey research that measures the attitudes, beliefs, or opinions of a population on a specific issue or topic, often used in political or social research.
  • Behavioral survey: A type of survey research that measures actual behavior or actions of individuals, often used in health or social research.
  • Attitude survey: A type of survey research that measures the attitudes, beliefs, or opinions of individuals, often used in social or psychological research.
  • Opinion poll: A type of survey research that measures the opinions or preferences of a population on a specific issue or topic, often used in political or media research.
  • Ad hoc survey: A type of survey research that is conducted for a specific purpose or research question, often used in exploratory research or to answer a specific research question.

Types Based on Methodology

Based on methodology, survey research is divided into two types:

  • Quantitative Survey Research
  • Qualitative Survey Research

Quantitative survey research is a method of collecting numerical data from a sample of participants through the use of standardized surveys or questionnaires. The purpose of quantitative survey research is to gather empirical evidence that can be analyzed statistically to draw conclusions about a particular population or phenomenon.

In quantitative survey research, the questions are structured and pre-determined, often utilizing closed-ended questions, where participants are given a limited set of response options to choose from. This approach allows for efficient data collection and analysis, as well as the ability to generalize the findings to a larger population.

Quantitative survey research is often used in market research, social sciences, public health, and other fields where numerical data is needed to make informed decisions and recommendations.

Qualitative survey research is a method of collecting non-numerical data from a sample of participants through the use of open-ended questions or semi-structured interviews. The purpose of qualitative survey research is to gain a deeper understanding of the experiences, perceptions, and attitudes of participants towards a particular phenomenon or topic.

In qualitative survey research, the questions are open-ended, allowing participants to share their thoughts and experiences in their own words. This approach allows for a rich and nuanced understanding of the topic being studied, and can provide insights that are difficult to capture through quantitative methods alone.

Qualitative survey research is often used in social sciences, education, psychology, and other fields where a deeper understanding of human experiences and perceptions is needed to inform policy, practice, or theory.

Data Analysis Methods

There are several Survey Research Data Analysis Methods that researchers may use, including:

  • Descriptive statistics: This method is used to summarize and describe the basic features of the survey data, such as the mean, median, mode, and standard deviation. These statistics can help researchers understand the distribution of responses and identify any trends or patterns.
  • Inferential statistics: This method is used to make inferences about the larger population based on the data collected in the survey. Common inferential statistical methods include hypothesis testing, regression analysis, and correlation analysis.
  • Factor analysis: This method is used to identify underlying factors or dimensions in the survey data. This can help researchers simplify the data and identify patterns and relationships that may not be immediately apparent.
  • Cluster analysis: This method is used to group similar respondents together based on their survey responses. This can help researchers identify subgroups within the larger population and understand how different groups may differ in their attitudes, behaviors, or preferences.
  • Structural equation modeling: This method is used to test complex relationships between variables in the survey data. It can help researchers understand how different variables may be related to one another and how they may influence one another.
  • Content analysis: This method is used to analyze open-ended responses in the survey data. Researchers may use software to identify themes or categories in the responses, or they may manually review and code the responses.
  • Text mining: This method is used to analyze text-based survey data, such as responses to open-ended questions. Researchers may use software to identify patterns and themes in the text, or they may manually review and code the text.
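As a small illustration of two of these methods, the sketch below (Python, with made-up satisfaction ratings and assuming pandas and scikit-learn are installed) computes descriptive statistics and then runs a simple cluster analysis to group respondents with similar response patterns:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Made-up survey data: satisfaction ratings on a 1-5 scale
df = pd.DataFrame({
    "price_satisfaction":   [5, 2, 4, 1, 3, 5, 2, 4],
    "quality_satisfaction": [4, 2, 5, 1, 3, 5, 1, 4],
    "support_satisfaction": [5, 1, 4, 2, 3, 4, 2, 5],
})

# Descriptive statistics: mean, standard deviation, and quartiles for each question
print(df.describe())

# Cluster analysis: group respondents with similar response patterns
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
df["segment"] = kmeans.fit_predict(df)
print(df.groupby("segment").mean())
```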

Applications of Survey Research

Here are some common applications of survey research:

  • Market Research: Companies use survey research to gather insights about customer needs, preferences, and behavior. These insights are used to create marketing strategies and develop new products.
  • Public Opinion Research: Governments and political parties use survey research to understand public opinion on various issues. This information is used to develop policies and make decisions.
  • Social Research: Survey research is used in social research to study social trends, attitudes, and behavior. Researchers use survey data to explore topics such as education, health, and social inequality.
  • Academic Research: Survey research is used in academic research to study various phenomena. Researchers use survey data to test theories, explore relationships between variables, and draw conclusions.
  • Customer Satisfaction Research: Companies use survey research to gather information about customer satisfaction with their products and services. This information is used to improve customer experience and retention.
  • Employee Surveys: Employers use survey research to gather feedback from employees about their job satisfaction, working conditions, and organizational culture. This information is used to improve employee retention and productivity.
  • Health Research: Survey research is used in health research to study topics such as disease prevalence, health behaviors, and healthcare access. Researchers use survey data to develop interventions and improve healthcare outcomes.

Examples of Survey Research

Here are some real-time examples of survey research:

  • COVID-19 Pandemic Surveys: Since the outbreak of the COVID-19 pandemic, surveys have been conducted to gather information about public attitudes, behaviors, and perceptions related to the pandemic. Governments and healthcare organizations have used this data to develop public health strategies and messaging.
  • Political Polls During Elections: During election seasons, surveys are used to measure public opinion on political candidates, policies, and issues in real-time. This information is used by political parties to develop campaign strategies and make decisions.
  • Customer Feedback Surveys: Companies often use real-time customer feedback surveys to gather insights about customer experience and satisfaction. This information is used to improve products and services quickly.
  • Event Surveys: Organizers of events such as conferences and trade shows often use surveys to gather feedback from attendees in real-time. This information can be used to improve future events and make adjustments during the current event.
  • Website and App Surveys: Website and app owners use surveys to gather real-time feedback from users about the functionality, user experience, and overall satisfaction with their platforms. This feedback can be used to improve the user experience and retain customers.
  • Employee Pulse Surveys: Employers use real-time pulse surveys to gather feedback from employees about their work experience and overall job satisfaction. This feedback is used to make changes in real-time to improve employee retention and productivity.

Purpose of Survey Research

The purpose of survey research is to gather data and insights from a representative sample of individuals. Survey research allows researchers to collect data quickly and efficiently from a large number of people, making it a valuable tool for understanding attitudes, behaviors, and preferences.

Here are some common purposes of survey research:

  • Descriptive Research: Survey research is often used to describe characteristics of a population or a phenomenon. For example, a survey could be used to describe the characteristics of a particular demographic group, such as age, gender, or income.
  • Exploratory Research: Survey research can be used to explore new topics or areas of research. Exploratory surveys are often used to generate hypotheses or identify potential relationships between variables.
  • Explanatory Research: Survey research can be used to explain relationships between variables. For example, a survey could be used to determine whether there is a relationship between educational attainment and income.
  • Evaluation Research: Survey research can be used to evaluate the effectiveness of a program or intervention. For example, a survey could be used to evaluate the impact of a health education program on behavior change.
  • Monitoring Research: Survey research can be used to monitor trends or changes over time. For example, a survey could be used to monitor changes in attitudes towards climate change or political candidates over time.

When to use Survey Research

There are certain circumstances where survey research is particularly appropriate. Here are some situations where survey research may be useful:

  • When the research question involves attitudes, beliefs, or opinions: Survey research is particularly useful for understanding attitudes, beliefs, and opinions on a particular topic. For example, a survey could be used to understand public opinion on a political issue.
  • When the research question involves behaviors or experiences: Survey research can also be useful for understanding behaviors and experiences. For example, a survey could be used to understand the prevalence of a particular health behavior.
  • When a large sample size is needed: Survey research allows researchers to collect data from a large number of people quickly and efficiently. This makes it a useful method when a large sample size is needed to ensure statistical validity.
  • When the research question is time-sensitive: Survey research can be conducted quickly, which makes it a useful method when the research question is time-sensitive. For example, a survey could be used to understand public opinion on a breaking news story.
  • When the research question involves a geographically dispersed population: Survey research can be conducted online, which makes it a useful method when the population of interest is geographically dispersed.

How to Conduct Survey Research

Conducting survey research involves several steps that need to be carefully planned and executed. Here is a general overview of the process:

  • Define the research question: The first step in conducting survey research is to clearly define the research question. The research question should be specific, measurable, and relevant to the population of interest.
  • Develop a survey instrument: The next step is to develop a survey instrument. This can be done using various methods, such as online survey tools or paper surveys. The survey instrument should be designed to elicit the information needed to answer the research question, and should be pre-tested with a small sample of individuals.
  • Select a sample: The sample is the group of individuals who will be invited to participate in the survey. The sample should be representative of the population of interest, and the size of the sample should be sufficient to ensure statistical validity.
  • Administer the survey: The survey can be administered in various ways, such as online, by mail, or in person. The method of administration should be chosen based on the population of interest and the research question.
  • Analyze the data: Once the survey data is collected, it needs to be analyzed. This involves summarizing the data using statistical methods, such as frequency distributions or regression analysis.
  • Draw conclusions: The final step is to draw conclusions based on the data analysis. This involves interpreting the results and answering the research question.

Advantages of Survey Research

There are several advantages to using survey research, including:

  • Efficient data collection: Survey research allows researchers to collect data quickly and efficiently from a large number of people. This makes it a useful method for gathering information on a wide range of topics.
  • Standardized data collection: Surveys are typically standardized, which means that all participants receive the same questions in the same order. This ensures that the data collected is consistent and reliable.
  • Cost-effective: Surveys can be conducted online, by mail, or in person, which makes them a cost-effective method of data collection.
  • Anonymity: Participants can remain anonymous when responding to a survey. This can encourage participants to be more honest and open in their responses.
  • Easy comparison: Surveys allow for easy comparison of data between different groups or over time. This makes it possible to identify trends and patterns in the data.
  • Versatility: Surveys can be used to collect data on a wide range of topics, including attitudes, beliefs, behaviors, and preferences.

Limitations of Survey Research

Here are some of the main limitations of survey research:

  • Limited depth: Surveys are typically designed to collect quantitative data, which means that they do not provide much depth or detail about people’s experiences or opinions. This can limit the insights that can be gained from the data.
  • Potential for bias: Surveys can be affected by various biases, including selection bias, response bias, and social desirability bias. These biases can distort the results and make them less accurate.
  • Limited validity: Surveys are only as valid as the questions they ask. If the questions are poorly designed or ambiguous, the results may not accurately reflect the respondents’ attitudes or behaviors.
  • Limited generalizability: Survey results are only generalizable to the population from which the sample was drawn. If the sample is not representative of the population, the results may not be generalizable to the larger population.
  • Limited ability to capture context: Surveys typically do not capture the context in which attitudes or behaviors occur. This can make it difficult to understand the reasons behind the responses.
  • Limited ability to capture complex phenomena: Surveys are not well-suited to capture complex phenomena, such as emotions or the dynamics of interpersonal relationships.

Following is an example of a Survey Sample:

Welcome to our Survey Research Page! We value your opinions and appreciate your participation in this survey. Please answer the questions below as honestly and thoroughly as possible.

1. What is your age?

  • A) Under 18
  • G) 65 or older

2. What is your highest level of education completed?

  • A) Less than high school
  • B) High school or equivalent
  • C) Some college or technical school
  • D) Bachelor’s degree
  • E) Graduate or professional degree

3. What is your current employment status?

  • A) Employed full-time
  • B) Employed part-time
  • C) Self-employed
  • D) Unemployed

4. How often do you use the internet per day?

  •  A) Less than 1 hour
  • B) 1-3 hours
  • C) 3-5 hours
  • D) 5-7 hours
  • E) More than 7 hours

5. How often do you engage in social media per day?

6. Have you ever participated in a survey research study before?

7. If you have participated in a survey research study before, how was your experience?

  • A) Excellent
  • E) Very poor

8. What are some of the topics that you would be interested in participating in a survey research study about?

……………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………………….

9. How often would you be willing to participate in survey research studies?

  • A) Once a week
  • B) Once a month
  • C) Once every 6 months
  • D) Once a year

10. Any additional comments or suggestions?

Thank you for taking the time to complete this survey. Your feedback is important to us and will help us improve our survey research efforts.



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes. Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research.

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies, where you collect data just once, and longitudinal studies, where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
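To show roughly what such a sample calculator does, here is a minimal sketch in Python of Cochran’s formula with a finite population correction; the 95% confidence level, 5% margin of error, and population figure are example assumptions:

```python
import math

def sample_size(population: int, margin_of_error: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Cochran's formula with a finite population correction.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the most
    conservative assumption about how varied the responses will be.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    n = n0 / (1 + (n0 - 1) / population)                  # adjust for the finite population
    return math.ceil(n)

# Example: surveying university students in the UK (population assumed ~2.9 million)
print(sample_size(2_900_000))  # about 385 responses for a 5% margin of error
```

With these assumptions the function returns about 385, which matches the familiar rule of thumb for a 5% margin of error at 95% confidence.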

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire, where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research, due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms.

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Interviews

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data: the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data: the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree)
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree)
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research. They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations.

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis, which is especially suitable for analysing interviews.

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.
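As a rough sketch of what processing, cleansing, and coding can look like outside a dedicated statistics package, the Python example below assumes a hypothetical CSV export with two rating questions and one open-ended question; the keyword-to-theme mapping is purely illustrative:

```python
import pandas as pd

# Hypothetical raw export from an online survey tool
raw = pd.read_csv("raw_responses.csv")  # assumed columns: "q1_rating", "q2_rating", "q3_open"

# Cleanse the data: drop incomplete responses and out-of-range ratings
clean = raw.dropna(subset=["q1_rating", "q2_rating"])
clean = clean[clean["q1_rating"].between(1, 5) & clean["q2_rating"].between(1, 5)].copy()

# Code the open-ended answers by assigning simple keyword-based labels (themes)
themes = {"price": "cost", "expensive": "cost", "staff": "service", "helpful": "service"}

def code_response(text: str) -> str:
    text = str(text).lower()
    for keyword, label in themes.items():
        if keyword in text:
            return label
    return "other"

clean["q3_theme"] = clean["q3_open"].apply(code_response)
print(clean["q3_theme"].value_counts())
```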

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation, or research paper.

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey, you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data, because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
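A minimal sketch of this distinction in Python, with made-up responses to five Likert-type items: the individual items are summarised with order-based statistics, while the combined scale score is treated as interval data:

```python
import pandas as pd

# Made-up responses to five Likert-type items (1 = strongly disagree ... 5 = strongly agree)
items = pd.DataFrame({
    "item_1": [4, 2, 5, 3],
    "item_2": [5, 1, 4, 3],
    "item_3": [4, 2, 5, 2],
    "item_4": [3, 2, 4, 3],
    "item_5": [5, 1, 5, 3],
})

# Individual items are ordinal, so report order-based statistics such as the median
print(items.median())

# The overall scale score (the items combined) is often treated as interval data
scale_score = items.sum(axis=1)
print(scale_score.describe())
```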

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.


Survey Research: Types, Examples & Methods

busayo.longe

Surveys have been proven to be one of the most effective methods of conducting research. They help you to gather relevant data from a large audience, which helps you to arrive at a valid and objective conclusion. 

Just like other research methods, survey research has to be conducted the right way to be effective. In this article, we’ll dive into the nitty-gritty of survey research and show you how to get the most out of it.

What is Survey Research? 

Survey research is simply a systematic investigation conducted via a survey. In other words, it is a type of research carried out by administering surveys to respondents. 

Surveys already serve as a great method of opinion sampling and finding out what people think about different contexts and situations. Applying this to research means you can gather first-hand information from persons affected by specific contexts. 

Survey research proves useful in numerous primary research scenarios. Consider the case whereby a restaurant wants to gather feedback from its customers on its new signature dish. A good way to do this is to conduct survey research on a defined customer demographic.

By doing this, the restaurant is better able to gather primary data from the customers (respondents) with regards to what they think and feel about the new dish across multiple facets. This means they’d have more valid and objective information to work with. 

Why Conduct Survey Research?  

One of the strongest arguments for survey research is that it helps you gather the most authentic data sets in a systematic investigation. Survey research is a gateway to collecting specific information from defined respondents, first-hand.

Surveys combine different question types that make it easy for you to collect numerous information from respondents. When you come across a questionnaire for survey research, you’re likely to see a neat blend of close-ended and open-ended questions, together with other survey response scale questions. 

Apart from what we’ve discussed so far, here are some other reasons why survey research is important: 

  • It gives you insights into respondents’ behaviors and preferences, which is valuable in any systematic investigation.
  • Many times, survey research is structured in an interactive manner which makes it easier for respondents to communicate their thoughts and experiences. 
  • It allows you to gather important data that proves useful for product improvement; especially in market research. 

Characteristics of Survey Research

  • Usage: Survey research is mostly deployed in the field of social science; especially to gather information about human behavior in different social contexts.
  • Systematic: Like other research methods, survey research is systematic. This means that it is usually conducted in line with empirical methods and follows specific processes.
  • Replicable: In survey research, applying the same methods often translates to achieving similar results.
  • Types: Survey research can be conducted using forms (offline and online) or via structured, semi-structured, and unstructured interviews.
  • Data: The data gathered from survey research is mostly quantitative, although it can be qualitative.
  • Impartial Sampling: The data sample in survey research is random and not subject to avoidable biases.
  • Ecological Validity: Survey research often makes use of data samples obtained from real-world occurrences.

Types of Survey Research

Survey research can be subdivided into different types based on its objectives, data source, and methodology. 

Types of Survey Research Based on Objective

  • Exploratory Survey Research

Exploratory survey research is aimed at finding out more about the research context. Here, the survey research pays attention to discovering new ideas and insights about the research subject(s) or contexts. 

Exploratory survey research is usually made up of open-ended questions that allow respondents to fully communicate their thoughts and varying perspectives on the subject matter. In many cases, systematic investigation kicks off with an exploratory research survey. 

  • Predictive Survey Research

This type of research is also referred to as causal survey research because it focuses on the causative relationship between the variables in the study. In other words, predictive survey research examines existing patterns to explain the relationship between two variables.

It can also be referred to as conclusive research because it allows you to identify causal variables and resultant variables, that is, cause and effect. Predictive variables let you determine the nature of the relationship between the causal variables and the effect to be predicted.

  • Descriptive Survey Research

Unlike predictive research, descriptive survey research is largely observational. It is ideal for quantitative research because it helps you gather numeric data.

The questions in a descriptive survey help you uncover new insights into the actions, thoughts, and feelings of respondents. With this data, you can gauge how widespread different conditions are among these subjects.

Types of Survey Research Based on Data Source

  • Secondary Data

Survey research can be designed to collect and process secondary data. Secondary data has been collected from primary sources in the past and is readily available for use; in other words, it already exists.

Since secondary data is gathered from third-party sources, it is mostly generic, unlike primary data, which is specific to the research context. Common sources of secondary data in survey research include books, data collected through other surveys, online data, government archives, and libraries.

  • Primary Data

This is research data collected directly, that is, data collected from first-hand sources. Primary data is usually tailored to a specific research context so that it reflects the aims and objectives of the systematic investigation.

One of the strongest points of primary data over its secondary counterpart is validity. Because it is collected directly from first-hand sources, primary data typically yields more objective research findings.

You can collect primary data via interviews, surveys and questionnaires, and observation.

Types of Survey Research Based on Methodology

  • Quantitative Research

Quantitative research is a common research method used to gather numerical data in a systematic investigation. It is often deployed in research contexts that require statistical information to arrive at valid results, such as the social or natural sciences.

For instance, an organization looking to find out how many people use its product in a particular location can administer survey research to collect useful quantitative data. Other quantitative research methods include polls, face-to-face interviews, and systematic observation.
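
To make this concrete, here is a minimal sketch of how such quantitative responses might be tallied once collected; the field names, cities, and answers are hypothetical and not tied to any particular survey tool.

```python
from collections import Counter

# Hypothetical responses: each dict holds one respondent's answers to
# "Do you use the product?" and "Which city are you in?".
responses = [
    {"uses_product": "Yes", "city": "Lagos"},
    {"uses_product": "No",  "city": "Lagos"},
    {"uses_product": "Yes", "city": "Abuja"},
    {"uses_product": "Yes", "city": "Lagos"},
]

# Count product users per location -- a simple quantitative summary.
users_by_city = Counter(r["city"] for r in responses if r["uses_product"] == "Yes")
print(users_by_city)  # Counter({'Lagos': 2, 'Abuja': 1})
```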

  • Qualitative Research

This is a method of systematic investigation used to collect non-numerical data from research participants. In other words, it is a research method that allows you to gather open-ended information from your target audience.

Typically, organizations deploy qualitative research methods when they need to gather descriptive data from their customers, for example, when collecting customer feedback during product evaluation. Qualitative research methods include one-on-one interviews, observation, case studies, and focus groups.

Survey Research Scales

  • Nominal Scale

This is a type of survey research scale that uses numbers to label the different answer options in a survey. On a nominal scale, the numbers have no value in themselves; they simply serve as labels for qualitative variables in the survey.

When a nominal scale is used for identification, there is a one-to-one correspondence between the numeric value and the variable it represents. When it is used for classification, each number on the scale serves as a label or tag for a category.

Examples of Nominal Scale in Survey Research 

1. How would you describe your complexion? 

2. Have you used this product?
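
To illustrate that nominal codes are labels rather than quantities, here is a minimal sketch; the Yes/No coding for the second example question is a hypothetical choice, not a prescribed one.

```python
from collections import Counter

# Hypothetical nominal coding for "Have you used this product?"
codes = {1: "Yes", 2: "No"}
answers = [1, 2, 1, 1, 2]  # numeric labels recorded from five respondents

# Counting per label is meaningful on a nominal scale...
print(Counter(codes[a] for a in answers))  # Counter({'Yes': 3, 'No': 2})

# ...but arithmetic on the codes is not: an "average answer" of 1.4 means nothing.
print(sum(answers) / len(answers))  # 1.4 -- a number without interpretation
```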

  • Ordinal Scale

This is a type of variable measurement scale that arranges answer options in a specific ranking order without indicating the degree of variation between those options. Ordinal data is qualitative and can be named, ranked, or grouped.

On an ordinal scale, the distances between the answer options are unknown; the scale identifies, describes, and ranks the different variables. This makes it easier for researchers to measure the degree of agreement or disagreement with different statements.

With ordinal scales, you can measure non-numerical attributes such as the degree of happiness, agreement, or opposition of respondents in specific contexts. Using an ordinal scale makes it easy to compare variables and process survey responses accordingly.

Examples of Ordinal Scale in Survey Research

1. How often do you use this product?

  • Prefer not to say

2. How much do you agree with our new policies? 

  • Totally agree
  • Somewhat agree
  • Totally disagree
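
Because ordinal options carry order but unknown spacing, rank-based statistics such as the median or mode are the safer summaries. Here is a minimal sketch using the agreement question above; the responses are invented for illustration.

```python
import statistics

# Hypothetical ranking of the ordinal options from the example above.
rank = {"Totally disagree": 1, "Somewhat agree": 2, "Totally agree": 3}

responses = ["Totally agree", "Somewhat agree", "Totally agree", "Totally disagree"]
ranks = sorted(rank[r] for r in responses)

# The median respects order without assuming equal distances between options.
print(statistics.median(ranks))  # 2.5 -- between "Somewhat agree" and "Totally agree"
```
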
  • Interval Scale

This is a type of survey scale used to measure variables that exist at equal intervals along a common scale. It combines attributes of the nominal and ordinal scales, since it is used where there is both order and a meaningful, known difference between any two values.

With an interval scale, you can quantify the difference in value between two variables in survey research. You can also carry out further mathematical operations, such as calculating the mean and median of research variables.

Examples of Interval Scale in Survey Research

1. Our customer support team was very effective. 

  • Completely agree
  • Neither agree nor disagree
  • Somewhat disagree
  • Completely disagree 

2. I enjoyed using this product.

Another example of an interval scale can be seen in the Net Promoter Score.
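
For instance, the Net Promoter Score is computed from 0 to 10 "likelihood to recommend" ratings by subtracting the percentage of detractors (0 to 6) from the percentage of promoters (9 and 10). A minimal sketch with invented ratings:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 8, 7, 6, 3, 9]  # hypothetical responses
print(round(net_promoter_score(ratings)))  # 14 (3 promoters, 2 detractors out of 7)
```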

  • Ratio Scale

Just like the interval scale, the ratio scale is quantitative, and it is used when you need to compare intervals or differences in survey research. It is the highest level of measurement and combines properties of the other survey scales.

A unique feature of the ratio scale is that it has a true zero and equal intervals between the values on the scale. The zero indicates a complete absence of the variable being measured. Common uses of ratio scales include measurements of distance (length), area, and population.

Examples of Ratio Scale in Survey Research

1. How old are you?

  • Below 18 years
  • 41 and above

2. How many times do you shop in a week?

  • Less than twice
  • Three times
  • More than four times
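
Since a ratio scale has a true zero, both averages and ratios of the underlying values are meaningful. A minimal sketch with hypothetical weekly shopping counts (the raw counts behind the banded options above):

```python
# Hypothetical ratio-scale data: shopping trips per week for six respondents.
trips = [0, 1, 2, 2, 4, 6]

print(sum(trips) / len(trips))  # 2.5 trips per week on average

# A true zero makes ratios meaningful: 4 trips is exactly twice as often as 2 trips,
# and 0 genuinely means "no shopping at all".
print(trips[4] / trips[3])  # 2.0
```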

Uses of Survey Research

  • Health Surveys

Survey research is used by health practitioners to gather useful data from patients in different medical and safety contexts. It helps you to gather primary and secondary data about medical conditions and risk factors of multiple diseases and infections. 

In addition to this, administering health surveys regularly helps you to monitor the overall health status of your population; whether in the workplace, school, or community. This kind of data can be used to help prevent outbreaks and minimize medical emergencies in these contexts. 

  • Polls

Survey research is also useful when conducting polls, whether online or offline. A poll is a data collection tool that helps you gather public opinion about a particular subject from a well-defined research sample.

By administering survey research, you can gather valid data from a well-defined research sample and use the findings for decision-making. For example, during elections, individuals can be asked to choose their preferred candidate via questionnaires administered as part of survey research.

  • Customer Satisfaction

Customer satisfaction is core to every organization, as it is directly concerned with how well your product or service meets the needs of your clients. Survey research is an effective way to measure customer satisfaction at different intervals.

As a restaurant, for example, you can send out online surveys to customers immediately after they patronize your business. In these surveys, encourage them to provide feedback on their experience and on how your service delivery can be improved.

  • Census

Survey research also makes data collection and analysis easy during a census. With an online survey tool like Formplus, you can seamlessly gather census data from a single location. Formplus has multiple sharing options that help you collect information without stress.

Survey Research Methods

Survey research can be done using different online and offline methods. Let’s examine a few of them here.

  • Telephone Surveys

This is a means of conducting survey research via phone calls. In a telephone survey, the researcher calls the survey respondents and gathers information from them by asking questions about the research context under consideration.

A telephone survey simulates part of the face-to-face survey experience, since it involves speaking directly with respondents to gather and process valid data. The major drawbacks of this method are that it is expensive and time-consuming.

  • Online Surveys

An online survey is administered over the internet, using a data collection tool like Formplus to create and distribute the questionnaire. Online surveys often work better than paper forms and other offline survey methods because you can easily gather and process data from a large sample.

  • Face-to-Face Interviews

Face-to-face interviews for survey research can be structured, semi-structured, or unstructured, depending on the research context and the type of data you want to collect. If you want to gather qualitative data, unstructured and semi-structured interviews are the way to go.

On the other hand, if you want to collect quantifiable information from your research sample, a structured interview is the better choice. Face-to-face interviews can be time-consuming and cost-intensive, yet they remain one of the most widely used methods of survey data collection.

How to Conduct Research Surveys on Formplus 

With Formplus, you can create forms for survey research without any hassle. Follow this step-by-step guide to create and administer online surveys for research via Formplus.

1. Sign up at www.formpl.us to create your Formplus account, or log in if you already have one.

5. Use the form customization options to change the appearance of your survey. You can add your organization’s logo to the survey, change the form font and layout, and insert preferred background images.

Advantages of Survey Research

  • It is inexpensive: survey research avoids the cost of in-person interviews, and it is easy to collect data because you can share surveys online and reach a large demographic.
  • It is one of the fastest ways to get a large amount of first-hand data.
  • Surveys let you compare results through charts and graphs.
  • It is versatile and can be used for almost any research topic.
  • Surveys work well when respondents need to remain anonymous.

Disadvantages of Survey Research

  • Some questions may go unanswered.
  • Respondents may interpret survey questions differently.
  • It may not be the best option for respondents with visual or hearing impairments, or for groups with low literacy.
  • Respondents can give dishonest answers in survey research.

Conclusion 

In this article, we’ve discussed survey research extensively, touching on its most important aspects. As a researcher, organization, individual, or student, understanding how survey research works helps you use it effectively and get the most from this method of systematic investigation.

As we’ve already stated, conducting survey research online is one of the most effective methods of data collection, as it allows you to gather valid data from a large group of respondents. If you’re looking to kick off your survey research, you can start by signing up for a Formplus account.


Survey Research — Types, Methods and Example Questions



The world of research is vast and complex, but with the right tools and understanding, it's an open field of discovery. Welcome to a journey into the heart of survey research.

What is survey research?

Survey research is the lens through which we view the opinions, behaviors, and experiences of a population. Think of it as the research world's detective, cleverly sleuthing out the truths hidden beneath layers of human complexity.

Why is survey research important?

Survey research is a Swiss Army Knife in a researcher's toolbox. It’s adaptable, reliable, and incredibly versatile, but its real power? It gives voice to the silent majority. Whether it's understanding customer preferences or assessing the impact of a social policy, survey research is the bridge between unanswered questions and insightful data.

Let's embark on this exploration, armed with the spirit of openness, a sprinkle of curiosity, and the thirst for making knowledge accessible. As we journey further into the realm of survey research, we'll delve deeper into the diverse types of surveys, innovative data collection methods, and the rewards and challenges that come with them.

Types of survey research

Survey research is like an artist's palette, offering a variety of types to suit your unique research needs. Each type paints a different picture, giving us fascinating insights into the world around us.

  • Cross-Sectional Surveys: Capture a snapshot of a population at a specific moment in time. They're your trusty Polaroid camera, freezing a moment for analysis and understanding.
  • Longitudinal Surveys: Track changes over time, much like a time-lapse video. They help to identify trends and patterns, offering a dynamic perspective of your subject.
  • Descriptive Surveys: Draw a detailed picture of the current state of affairs. They're your magnifying glass, examining the prevalence of a phenomenon or attitudes within a group.
  • Analytical Surveys: Deep dive into the reasons behind certain outcomes. They're the research world's version of Sherlock Holmes, unraveling the complex web of cause and effect.

But, what method should you choose for data collection? The plot thickens, doesn't it? Let's unravel this mystery in our next section.

Survey research and data collection methods

Data collection in survey research is an art form, and there's no one-size-fits-all method. Think of it as your paintbrush, each stroke represents a different way of capturing data.

  • Online Surveys: In the digital age, online surveys have surged in popularity. They're fast, cost-effective, and can reach a global audience. But like a mysterious online acquaintance, respondents may not always be who they say they are.
  • Mail Surveys: Like a postcard from a distant friend, mail surveys have a certain charm. They're great for reaching respondents without internet access. However, they’re slower and have lower response rates. They’re a test of patience and persistence.
  • Telephone Surveys: With the sound of a ringing phone, the human element enters the picture. Great for reaching a diverse audience, they bring a touch of personal connection. But, remember, not all are fans of unsolicited calls.
  • Face-to-Face Surveys: These are the heart-to-heart conversations of the survey world. While they require more resources, they're the gold standard for in-depth, high-quality data.

As we journey further, let’s weigh the pros and cons of survey research.

Advantages and disadvantages of survey research

Every hero has its strengths and weaknesses, and survey research is no exception. Let's unwrap the gift box of survey research to see what lies inside.

Advantages:

  • Versatility: Like a superhero with multiple powers, surveys can be adapted to different topics, audiences, and research needs.
  • Accessibility: With online surveys, geographical boundaries dissolve. We can reach out to the world from our living room.
  • Anonymity: Like a confessional booth, surveys allow respondents to share their views without fear of judgment.

Disadvantages:

  • Response Bias: Ever met someone who says what you want to hear? Survey respondents can be like that too.
  • Limited Depth: Like a puddle after a rainstorm, some surveys only skim the surface of complex issues.
  • Nonresponse: Sometimes, potential respondents play hard to get, skewing the data.

Survey research may have its challenges, but it also presents opportunities to learn and grow. As we forge ahead on our journey, we dive into the design process of survey research.

Limitations of survey research

Every research method has its limitations, like bumps on the road to discovery. But don't worry, with the right approach, these challenges become opportunities for growth.

Misinterpretation: Sometimes, respondents might misunderstand your questions, like a badly translated novel. To overcome this, keep your questions simple and clear.

Social Desirability Bias: People often want to present themselves in the best light. They might answer questions in a way that portrays them positively, even if it's not entirely accurate. Overcome this by ensuring anonymity and emphasizing honesty.

Sample Representation: If your survey sample isn't representative of the population you're studying, it can skew your results. Aiming for a diverse sample can mitigate this.

Now that we're aware of the limitations, let's delve into the world of survey design.


Survey research design

Designing a survey is like crafting a roadmap to discovery. It's an intricate process that involves careful planning, innovative strategies, and a deep understanding of your research goals. Let's get started.

Approach and Strategy

Your approach and strategy are the compasses guiding your survey research. Clear objectives, defined research questions, and an understanding of your target audience lay the foundation for a successful survey.

Panel

The panel is the heartbeat of your survey, the respondents who breathe life into your research. Selecting a representative panel ensures your research is accurate and inclusive.

9 Tips on Building the Perfect Survey Research Questionnaire

  • Keep It Simple: Clear and straightforward questions lead to accurate responses.
  • Make It Relevant: Ensure every question ties back to your research objectives.
  • Order Matters: Start with easy questions to build rapport and save sensitive ones for later.
  • Avoid Double-Barreled Questions: Stick to one idea per question.
  • Offer a Balanced Scale: For rating scales, provide an equal number of positive and negative options.
  • Provide a ‘Don't Know’ Option: This prevents guessing and keeps your data accurate.
  • Pretest Your Survey: A pilot run helps you spot any issues before the final launch.
  • Keep It Short: Respect your respondents' time.
  • Make It Engaging: Keep your respondents interested with a mix of question types.

Survey research examples and questions

Examples serve as a bridge connecting theoretical concepts to real-world scenarios. Let's consider a few practical examples of survey research across various domains.

User Experience (UX)

Imagine being a UX designer at a budding tech start-up. Your app is gaining traction, but to keep your user base growing and engaged, you must ensure that your app's UX is top-notch. In this case, a well-designed survey could be a beacon, guiding you toward understanding user behavior, preferences, and pain points.

Here's an example of how such a survey could look:

  • "On a scale of 1 to 10, how would you rate the ease of navigating our app?"
  • "How often do you encounter difficulties while using our app?"
  • "What features do you use most frequently in our app?"
  • "What improvements would you suggest for our app?"
  • "What features would you like to see in future updates?"

This line of questioning, while straightforward, provides invaluable insights. It enables the UX designer to identify strengths to capitalize on and weaknesses to improve, ultimately leading to a product that resonates with users.

Psychology and Ethics in survey research

The realm of survey research is not just about data and numbers, but it's also about understanding human behavior and treating respondents ethically.

Psychology: In-depth understanding of cognitive biases and social dynamics can profoundly influence survey design. Let's take the 'Recency Effect,' a psychological principle stating that people tend to remember recent events more vividly than those in the past. While framing questions about user experiences, this insight could be invaluable.

For example, a question like "Can you recall an instance in the past week when our customer service exceeded your expectations?" is likely to fetch more accurate responses than asking about an event several months ago.

Ethics: On the other hand, maintaining privacy, confidentiality, and informed consent is more than ethical - it's fundamental to the integrity of the research process.

Imagine conducting a sensitive survey about workplace culture. Assuring respondents that their responses will remain confidential and anonymous can encourage more honest answers. An introductory note stating these assurances, along with a clear outline of the survey's purpose, can help build trust with your respondents.

Survey research software

In the age of digital information, survey research software has become a trusted ally for researchers. It simplifies complex processes like data collection, analysis, and visualization, democratizing research and making it more accessible to a broad audience.

LimeSurvey, our innovative, user-friendly tool, brings this vision to life. It stands at the crossroads of simplicity and power, embodying the essence of accessible survey research.

Whether you're a freelancer exploring new market trends, a psychology student curious about human behavior, or an HR officer aiming to improve company culture, LimeSurvey empowers you to conduct efficient, effective research. Its suite of features and intuitive design matches your research pace, allowing your curiosity to take the front seat.

For instance, consider you're a researcher studying consumer behavior across different demographics. With LimeSurvey, you can easily design demographic-specific questions, distribute your survey across various channels, collect responses in real-time, and visualize your data through intuitive dashboards. This synergy of tools and functionalities makes LimeSurvey a perfect ally in your quest for knowledge.

Conclusion

If you've come this far, we can sense your spark of curiosity. Are you eager to take the reins and conduct your own survey research? Are you ready to embrace the simple yet powerful tool that LimeSurvey offers? If so, we can't wait to see where your journey takes you next!

In the world of survey research, there's always more to explore, more to learn and more to discover. So, keep your curiosity alive, stay open to new ideas, and remember, your exploration is just beginning!

We hope that our exploration has been as enlightening for you as it was exciting for us. Remember, the journey doesn't end here. With the power of knowledge and the right tools in your hands, there's no limit to what you can achieve. So, let your curiosity be your guide and dive into the fascinating world of survey research with LimeSurvey! Try it out for free now!

Happy surveying!


Best Survey Examples for your research


Whether you are creating a survey for market research, customer satisfaction evaluation, an academic study, or human resource evaluation in your organization, good survey examples go a long way toward ensuring that your survey is up to standard, collects great responses, and yields the best possible insights for your research.

Here are some critical survey examples, organized by category, for your next research project:

  • Market Research

Market research is one of the most common reasons to conduct a survey. It also includes product evaluation and product testing, where you would like to collect feedback on the potential of a product or service in a given market and demographic segment.

Here are a few survey examples for market research that will help you create a great market research survey:

Concept Evaluation and Pricing survey: This survey is used for evaluating a potential product or service concept and how it correlates with pricing. It is a critical market research example because more than half of all market research surveys are product evaluations in relation to pricing models.

This brings us to our next example.

Conjoint Analysis survey: Whenever you want to study a product or service and its correlation to pricing or any other attribute, a conjoint analysis survey is the best template to use. Conjoint analysis is used in studies that seek to understand how one aspect or feature affects the purchasing or choice pattern for another feature and for the product overall.
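
As a rough illustration of the idea behind conjoint analysis (not any particular vendor's implementation), a basic main-effects design can be analyzed by regressing profile ratings on dummy-coded attribute levels to estimate part-worth utilities; the attributes, levels, and ratings below are entirely invented.

```python
import numpy as np

# Hypothetical product profiles rated 1-10 by one respondent.
# Columns: intercept, price_high (1 = higher price), size_large (1 = larger size).
profiles = np.array([
    [1, 0, 0],  # low price,  small
    [1, 0, 1],  # low price,  large
    [1, 1, 0],  # high price, small
    [1, 1, 1],  # high price, large
])
ratings = np.array([8, 9, 4, 6])

# Ordinary least squares yields rough part-worth estimates for each attribute level.
coeffs, *_ = np.linalg.lstsq(profiles, ratings, rcond=None)
print(coeffs)  # [baseline rating, effect of the higher price, effect of the larger size]
```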

Advertising Effectiveness survey: Any marketing or advertising campaign has to answer for its effectiveness, ROI, and consumer/audience impact. This example template covers the important, standard questions to include in your next advertising effectiveness survey so that you can draw insightful conclusions on how well your campaign performed, how aware it made your audience of your brand, and how well it convinced them to purchase your product.

See more examples: Marketing and market research surveys

  • Customer Satisfaction

As businesses become more customer-centric, benchmarking customer satisfaction through surveys has become a defining metric for customer success. Here are a few examples for your next customer satisfaction evaluation survey:

Net Promoter Score Survey: Any customer experience evaluation must include the most critical and most heavily used survey question: the Net Promoter Score question. What makes it so important is that it provides a numeric metric based on the customer’s experience: was it good enough for them to recommend your brand to friends and family, or are they a potential brand detractor? As a business, you need answers to these questions, and the NPS survey gets you this insight with just a single question.

Product Satisfaction Survey: This survey example explores customer feedback based on their experience with the product as well as with the organization at the points of purchase and post-purchase support. For any business or non-profit, understanding customer satisfaction with the product is a critical step toward improving overall customer experience, through product changes as well as service improvements.

This brings us to our next survey example.

Motivation and Buying Experience: While it is great to understand customer experience with your product, you need to dig deeper to the source if you want to make fundamental changes that increase purchases. You need to understand why people buy your product, what their buying experience was like, how to leverage your existing advantages over the competition, and what to improve further.

See more examples: Customer satisfaction surveys

  • Human Resources and Employee Evaluation

The buzz around employee engagement has led most mature industries to recognize how employee engagement and motivation affect productivity and commitment toward product and service improvements and customer success.

Here are a few great survey examples to evaluate whether your human resources are motivated and contributing toward a winning work culture:

Job Satisfaction Survey: A popular saying goes, “If you want satisfied customers, you need satisfied employees.” Your employees are the resources that keep your customers happy; every division of employee work is ultimately geared toward winning more customers and keeping them satisfied. If employees are unsatisfied with their jobs, there is a good chance they won’t be able to deliver the dedicated, satisfactory output that keeps your customers happy either. So first understand how your employees feel at work, using this survey example.

Supervisor Evaluation Survey: Managers and supervisors form the first layer of the employee hierarchy and are responsible for translating company values and team motivation down to the last employee. This makes it critically important to evaluate whether your managers and supervisors are well trained to carry out their daily tasks and whether you need to improve your mid-management process.

Senior Management Survey: After employees and managers/supervisors, the cycle completes with collecting feedback on the senior-most management and leadership teams, including your executive team. Beyond their immediate reporting managers, every employee of an organization looks up to senior management for reference and motivation. Through ideas and actions, senior management must be well equipped to drive the core values of your organization and pass employee motivation and team-building skills down the hierarchy.

See more examples: HR and Employee Surveys

  • Academic Evaluation

Colleges, schools, and academic institutions are becoming increasingly active in collecting insightful feedback through surveys. More and more educational powerhouses now conduct academic surveys that actively collect feedback from students, parents, and professors to improve their quality of education.

Here are some popular survey examples:

Professor Evaluation Survey: Professors are “gurus.” What they know needs to be passed down to every generation that studies under them, with hands-on experience conducting case studies and research projects together. A university’s collective reputation begins with the quality of education imparted by its professors. This survey example collects feedback on professors, giving you insight into what can be improved in your current standards and what your professors are doing well.

Student Stress Evaluation Survey: Student stress is a major reason for drop-outs and even suicides. A university’s reputation depends heavily on how it helps students cope with stress from studies and academic processes. Use this survey example to check your students for signs of stress and fatigue. This is also the period when students learn how to deal with stress, including the stress that comes with making career decisions, so it is the right time to train them in stress management, both in their student lives and later in their jobs and professions.

Graduation / University Completion Survey: Your exiting students are the best source of feedback on your university. Students who have successfully completed their degrees are not just success stories; they are also stories of their experience at the university, of what they felt helped them complete their studies, and of what they found hindered progress. This is a great survey example for your next academic survey!

See more examples: Academic Evaluation and Student Surveys

  • Psychographic and Demographics

Psychographic and demographic surveys are important to researchers in most fields because they focus on understanding the psychology and demographic categorization of a respondent.

Here are some great survey examples:

Lifestyle Survey: This survey example explores the general lifestyle of a respondent and collects feedback on some basic demographic questions. It forms a good foundation for any psychological profile survey of a demographic segment or consumer base, and it can also serve as the basis for a territorial survey of a particular region’s lifestyle and general choices.

Internet / Web Demographic Survey: In today’s increasingly online world, where most decisions and actions happen over the internet, it is important to understand not just the lifestyle of a demographic but also their behavior and preferences while browsing the internet. This survey example will help you formulate the template for your next web survey.


Business / Profession Survey: Another fundamental survey example for any psychographic or demographic researcher is the business and profession survey. It helps you capture details of the respondent’s profession along with basic demographics.

See more: Psychographic / Demographic Surveys


Understanding and Evaluating Survey Research

A variety of methodologic approaches exist for individuals interested in conducting research. Selection of a research approach depends on a number of factors, including the purpose of the research, the type of research questions to be answered, and the availability of resources. The purpose of this article is to describe survey research as one approach to the conduct of research so that the reader can critically evaluate the appropriateness of the conclusions from studies employing survey research.

SURVEY RESEARCH

Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" ( Check & Schutt, 2012, p. 160 ). This type of research allows for a variety of methods to recruit participants, collect data, and utilize various methods of instrumentation. Survey research can use quantitative research strategies (e.g., using questionnaires with numerically rated items), qualitative research strategies (e.g., using open-ended questions), or both strategies (i.e., mixed methods). As it is often used to describe and explore human behavior, surveys are therefore frequently used in social and psychological research ( Singleton & Straits, 2009 ).

Information has been obtained from individuals and groups through the use of survey research for decades. It can range from asking a few targeted questions of individuals on a street corner to obtain information related to behaviors and preferences, to a more rigorous study using multiple valid and reliable instruments. Common examples of less rigorous surveys include marketing or political surveys of consumer patterns and public opinion polls.

Survey research has historically included large population-based data collection. The primary purpose of this type of survey research was to obtain information describing characteristics of a large sample of individuals of interest relatively quickly. Large census surveys obtaining information reflecting demographic and personal characteristics and consumer feedback surveys are prime examples. These surveys were often provided through the mail and were intended to describe demographic characteristics of individuals or obtain opinions on which to base programs or products for a population or group.

More recently, survey research has developed into a rigorous approach to research, with scientifically tested strategies detailing who to include (representative sample), what and how to distribute (survey method), and when to initiate the survey and follow up with nonresponders (reducing nonresponse error), in order to ensure a high-quality research process and outcome. Currently, the term "survey" can reflect a range of research aims, sampling and recruitment strategies, data collection instruments, and methods of survey administration.

Given this range of options in the conduct of survey research, it is imperative for the consumer/reader of survey research to understand the potential for bias in survey research as well as the tested techniques for reducing bias, in order to draw appropriate conclusions about the information reported in this manner. Common types of error in research, along with the sources of error and strategies for reducing error as described throughout this article, are summarized in the Table .

Table: Sources of Error in Survey Research and Strategies to Reduce Error

The goal of sampling strategies in survey research is to obtain a sufficient sample that is representative of the population of interest. It is often not feasible to collect data from an entire population of interest (e.g., all individuals with lung cancer); therefore, a subset of the population or sample is used to estimate the population responses (e.g., individuals with lung cancer currently receiving treatment). A large random sample increases the likelihood that the responses from the sample will accurately reflect the entire population. In order to accurately draw conclusions about the population, the sample must include individuals with characteristics similar to the population.

It is therefore necessary to correctly identify the population of interest (e.g., individuals with lung cancer currently receiving treatment vs. all individuals with lung cancer). The sample will ideally include individuals who reflect the intended population in terms of all characteristics of the population (e.g., sex, socioeconomic characteristics, symptom experience) and contain a similar distribution of individuals with those characteristics. As discussed by Mady Stovall beginning on page 162, Fujimori et al. ( 2014 ), for example, were interested in the population of oncologists. The authors obtained a sample of oncologists from two hospitals in Japan. These participants may or may not have similar characteristics to all oncologists in Japan.

Participant recruitment strategies can affect the adequacy and representativeness of the sample obtained. Using diverse recruitment strategies can help improve the size of the sample and help ensure adequate coverage of the intended population. For example, if a survey researcher intends to obtain a sample of individuals with breast cancer representative of all individuals with breast cancer in the United States, the researcher would want to use recruitment strategies that would recruit both women and men, individuals from rural and urban settings, individuals receiving and not receiving active treatment, and so on. Because of the difficulty in obtaining samples representative of a large population, researchers may focus the population of interest to a subset of individuals (e.g., women with stage III or IV breast cancer). Large census surveys require extremely large samples to adequately represent the characteristics of the population because they are intended to represent the entire population.
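
As a simple illustration of drawing a random sample from a defined sampling frame (the frame and sample sizes here are hypothetical), Python's standard library is sufficient:

```python
import random

# Hypothetical sampling frame: IDs of all patients currently receiving treatment.
population_frame = [f"patient_{i:04d}" for i in range(1, 1201)]

random.seed(42)  # fixed seed so this illustration is reproducible
sample = random.sample(population_frame, k=120)  # simple random sample of 120 patients

print(len(sample), sample[:3])
```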

DATA COLLECTION METHODS

Survey research may use a variety of data collection methods with the most common being questionnaires and interviews. Questionnaires may be self-administered or administered by a professional, may be administered individually or in a group, and typically include a series of items reflecting the research aims. Questionnaires may include demographic questions in addition to valid and reliable research instruments ( Costanzo, Stawski, Ryff, Coe, & Almeida, 2012 ; DuBenske et al., 2014 ; Ponto, Ellington, Mellon, & Beck, 2010 ). It is helpful to the reader when authors describe the contents of the survey questionnaire so that the reader can interpret and evaluate the potential for errors of validity (e.g., items or instruments that do not measure what they are intended to measure) and reliability (e.g., items or instruments that do not measure a construct consistently). Helpful examples of articles that describe the survey instruments exist in the literature ( Buerhaus et al., 2012 ).

Questionnaires may be in paper form and mailed to participants, delivered in an electronic format via email or an Internet-based program such as SurveyMonkey, or a combination of both, giving the participant the option to choose which method is preferred ( Ponto et al., 2010 ). Using a combination of methods of survey administration can help to ensure better sample coverage (i.e., all individuals in the population having a chance of inclusion in the sample) therefore reducing coverage error ( Dillman, Smyth, & Christian, 2014 ; Singleton & Straits, 2009 ). For example, if a researcher were to only use an Internet-delivered questionnaire, individuals without access to a computer would be excluded from participation. Self-administered mailed, group, or Internet-based questionnaires are relatively low cost and practical for a large sample ( Check & Schutt, 2012 ).

Dillman et al. ( 2014 ) have described and tested a tailored design method for survey research. Improving the visual appeal and graphics of surveys by using a font size appropriate for the respondents, ordering items logically without creating unintended response bias, and arranging items clearly on each page can increase the response rate to electronic questionnaires. Attending to these and other issues in electronic questionnaires can help reduce measurement error (i.e., lack of validity or reliability) and help ensure a better response rate.

Conducting interviews is another approach to data collection used in survey research. Interviews may be conducted by phone, computer, or in person and have the benefit of visually identifying the nonverbal response(s) of the interviewee and subsequently being able to clarify the intended question. An interviewer can use probing comments to obtain more information about a question or topic and can request clarification of an unclear response ( Singleton & Straits, 2009 ). Interviews can be costly and time intensive, and therefore are relatively impractical for large samples.

Some authors advocate for using mixed methods for survey research when no one method is adequate to address the planned research aims, to reduce the potential for measurement and non-response error, and to better tailor the study methods to the intended sample ( Dillman et al., 2014 ; Singleton & Straits, 2009 ). For example, a mixed methods survey research approach may begin with distributing a questionnaire and following up with telephone interviews to clarify unclear survey responses ( Singleton & Straits, 2009 ). Mixed methods might also be used when visual or auditory deficits preclude an individual from completing a questionnaire or participating in an interview.

FUJIMORI ET AL.: SURVEY RESEARCH

Fujimori et al. ( 2014 ) described the use of survey research in a study of the effect of communication skills training for oncologists on oncologist and patient outcomes (e.g., oncologist's performance and confidence and patient's distress, satisfaction, and trust). A sample of 30 oncologists from two hospitals was obtained. Although the authors provided a power analysis concluding that the number of oncologist participants was adequate to detect differences between baseline and follow-up scores, the conclusions of the study may not be generalizable to a broader population of oncologists. Oncologists were randomized to either an intervention group (i.e., communication skills training) or a control group (i.e., no training).

Fujimori et al. ( 2014 ) chose a quantitative approach to collect data from oncologist and patient participants regarding the study outcome variables. Self-report numeric ratings were used to measure oncologist confidence and patient distress, satisfaction, and trust. Oncologist confidence was measured using two instruments each using 10-point Likert rating scales. The Hospital Anxiety and Depression Scale (HADS) was used to measure patient distress and has demonstrated validity and reliability in a number of populations including individuals with cancer ( Bjelland, Dahl, Haug, & Neckelmann, 2002 ). Patient satisfaction and trust were measured using 0 to 10 numeric rating scales. Numeric observer ratings were used to measure oncologist performance of communication skills based on a videotaped interaction with a standardized patient. Participants completed the same questionnaires at baseline and follow-up.

The authors clearly describe what data were collected from all participants. Providing additional information about the manner in which questionnaires were distributed (i.e., electronic, mail), the setting in which data were collected (e.g., home, clinic), and the design of the survey instruments (e.g., visual appeal, format, content, arrangement of items) would assist the reader in drawing conclusions about the potential for measurement and nonresponse error. The authors describe conducting a follow-up phone call or mail inquiry for nonresponders; using the Dillman et al. ( 2014 ) tailored design for survey follow-up may have further reduced nonresponse error.

CONCLUSIONS

Survey research is a useful and legitimate approach to research that has clear benefits in helping to describe and explore variables and constructs of interest. Survey research, like all research, has the potential for a variety of sources of error, but several strategies exist to reduce the potential for error. Advanced practitioners aware of the potential sources of error and strategies to improve survey research can better determine how and whether the conclusions from a survey research study apply to practice.

The author has no potential conflicts of interest to disclose.


Conducting Survey Research

Surveys represent one of the most common types of quantitative, social science research. In survey research, the researcher selects a sample of respondents from a population and administers a standardized questionnaire to them. The questionnaire, or survey, can be a written document that is completed by the person being surveyed, an online questionnaire, a face-to-face interview, or a telephone interview. Using surveys, it is possible to collect data from large or small populations (sometimes referred to as the universe of a study).

Different types of surveys are actually composed of several research techniques, developed by a variety of disciplines. For instance, interviewing began as a tool primarily for psychologists and anthropologists, while sampling got its start in the field of agricultural economics (Angus and Katona, 1953, p. 15).

Survey research does not belong to any one field and it can be employed by almost any discipline. According to Angus and Katona, "It is this capacity for wide application and broad coverage which gives the survey technique its great usefulness..." (p. 16).

Types of Surveys

Surveys come in a wide range of forms and can be distributed using a variety of media.

  • Mail Surveys
  • Group-Administered Questionnaires
  • Drop-off Surveys
  • Oral Surveys
  • Electronic Surveys
  • An Example Survey

Example Survey

General Instructions: We are interested in your writing and computing experiences and attitudes. Please take a few minutes to complete this survey. In general, when you are presented with a scale next to a question, please put an X over the number that best corresponds to your answer. For example, if you strongly agreed with the following question, you might put an X through the number 5. If you agreed moderately, you might put an X through number 4; if you neither agreed nor disagreed, you might put an X through number 3.

Example Question:

As is the case with all of the information we are collecting for our study, we will keep all the information you provide to us completely confidential. Your teacher will not be made aware of any of your responses. Thanks for your help.

Your Name: ___________________________________________________________

Your Instructor's Name: __________________________________________________

Written Surveys

Imagine that you are interested in exploring the attitudes college students have about writing. Since it would be impossible to interview every student on campus, choosing the mail-out survey as your method would enable you to reach a large sample of college students. You might choose to limit your research to your own college or university, or you might extend your survey to several different institutions. If your research question demands it, the mail survey allows you to sample a very broad group of subjects at small cost.

Strengths and Weaknesses of Mail Surveys

Cost: Mail surveys are low in cost compared to other methods of surveying. This type of survey can cost up to 50% less than the self-administered survey, and almost 75% less than a face-to-face survey (Bourque and Fielder 9). Mail surveys are also substantially less expensive than drop-off and group-administered surveys.

Convenience: Since many of these types of surveys are conducted through a mail-in process, the participants are able to work on the surveys at their leisure.

Bias: Because the mail survey does not allow for personal contact between the researcher and the respondent, there is little chance for personal bias based on first impressions to alter the responses to the survey. This is an advantage: if an interviewer is not likeable, for example, survey results can be unfavorably affected. However, the lack of personal contact can be a disadvantage as well.

Sampling: It is possible to reach a greater population and have a larger universe (sample of respondents) with this type of survey because it does not require personal contact between the researcher and the respondents.

Low Response Rate: One of the biggest drawbacks to the written survey, especially as it relates to the mail-in, self-administered method, is the low response rate. Compared to a telephone survey or a face-to-face survey, the mail-in written survey has a response rate of just over 20%.

Ability of Respondent to Answer Survey: Another problem with self-administered surveys is three-fold: assumptions about the physical ability, literacy level, and language ability of the respondents. Because most surveys pull participants from a random sampling, it is impossible to control for such variables. Some respondents may have a primary language different from that of the survey. Others may be illiterate or have a low reading level and therefore might not be able to answer the questions accurately. Along the same lines, persons who have trouble reading because of dyslexia, visual impairment, or advanced age may not be able to complete the survey.

Imagine that you are interested in finding out how instructors who teach composition in computer classrooms at your university feel about the advantages of teaching in a computer classroom over a traditional classroom. You have a very specific population in mind, and so a mail-out survey would probably not be your best option. You might try an oral survey, but if you are doing this research alone this might be too time consuming. The group administered questionnaire would allow you to get your survey results in one block of time and would ensure a very high response rate (higher than if you stuck a survey into each instructor's mailbox). Your challenge would be to get everyone together. Perhaps your department holds monthly technology support meetings that most of your chosen sample would attend. Your challenge at this point would be to get permission to use part of the meeting time to administer the survey, or to convince the instructors to stay to fill it out after the meeting. Despite the challenges, this type of survey might be the most efficient for your specific purposes.

Strengths and Weaknesses of Group Administered Questionnaires

Rate of Response: This second type of written survey is generally administered to a sample of respondents in a group setting, guaranteeing a high response rate.

Specificity: This type of written survey can be very versatile, allowing for a spectrum of open- and closed-ended questions, and can serve a variety of specific purposes, particularly if you are trying to survey a very specific group of people.

Weaknesses of Group Administered Questionnaires

Sampling: This method requires a small sample, and as a result is not the best method for surveys that would benefit from a large sample. This method is only useful in cases that call for very specific information from specific groups.

Scheduling: Since this method requires a group of respondents to answer the survey together, this method requires a slot of time that is convenient for all respondents.

Imagine that you would like to find out about how the dorm dwellers at your university feel about the lack of availability of vegetarian cuisine in their dorm dining halls. You have prepared a questionnaire that requires quite a few long answers, and since you suspect that the students in the dorms may not have the motivation to take the time to respond, you might want a chance to tell them about your research, the benefits that might come from their responses, and to answer their questions about your survey. To ensure the highest response rate, you would probably pick a time of the day when you are sure that the majority of the dorm residents are home, and then work your way from door to door. If you don't have time to interview the number of students you need in your sample, but you don't trust the response rate of mail surveys, the drop-off survey might be the best option for you.

Strengths and Weaknesses of Drop-off Surveys

Convenience: Like the mail survey, the drop-off survey allows the respondents to answer the survey at their own convenience.

Response Rates: The response rates for the drop-off survey are better than the mail survey because it allows the interviewer to make personal contact with the respondent, to explain the importance of the survey, and to answer any questions or concerns the respondent might have.

Time: Because of the personal contact this method requires, this method takes considerably more time than the mail survey.

Sampling: Because of the time it takes to make personal contact with the respondents, the universe of this kind of survey will be considerably smaller than the mail survey pool of respondents.

Response: The response rate for this type of survey, although considerably better than the mail survey, is still not as high as the response rate you will achieve with an oral survey.

Oral Surveys

Oral surveys are considered more personal forms of survey than the written or electronic methods. Oral surveys are generally used to get thorough opinions and impressions from the respondents.

Oral surveys can be administered in several different ways. For instance, in a group interview, as opposed to a group administered written survey, each respondent is not given an instrument (an individual questionnaire). Instead, the respondents work in groups to answer the questions together while one person takes notes for the whole group. Another more familiar form of oral survey is the phone survey. Phone surveys can be used to get short one word answers (yes/no), as well as longer answers.

Strengths and Weaknesses of Oral Surveys

Personal Contact: Oral surveys conducted either on the telephone or in person give the interviewer the ability to answer questions from the participant. If the participant, for example, does not understand a question or needs further explanation on a particular issue, it is possible to converse with the participant. According to Glastonbury and MacKean, "interviewing offers the flexibility to react to the respondent's situation, probe for more detail, seek more reflective replies and ask questions which are complex or personally intrusive" (p. 228).

Response Rate: Although obtaining a certain number of respondents who are willing to take the time to do an interview is difficult, the researcher has more control over the response rate in oral survey research than with other types of survey research. As opposed to mail surveys where the researcher must wait to see how many respondents actually answer and send back the survey, a researcher using oral surveys can, if the time and money are available, interview respondents until the required sample has been achieved.

Cost: The most obvious disadvantage of face-to-face and telephone surveys is cost. It takes time to collect enough data for a complete survey, and time translates into payroll costs and sometimes payment for the participants.

Bias: Using face-to-face interviews for your survey may also introduce bias, from either the interviewer or the interviewee.

Types of Questions Possible: Certain types of questions are not convenient for this type of survey, particularly for phone surveys where the respondent does not have a chance to look at the questionnaire. For instance, if you want to offer the respondent a choice of 5 different answers, it will be very difficult for respondents to remember all of the choices, as well as the question, without a visual reminder. This problem requires the researcher to take special care in constructing questions to be read aloud.

Attitude: Anyone who has ever been interrupted during dinner by a phone interviewer is aware of the negative feelings many people have about answering a phone survey. Upon receiving these calls, many potential respondents will simply hang up.

Electronic Surveys

With the growth of the Internet (and in particular the World Wide Web) and the expanded use of electronic mail for business communication, the electronic survey is becoming a more widely used survey method. Electronic surveys can take many forms. They can be distributed as electronic mail messages sent to potential respondents. They can be posted as World Wide Web forms on the Internet. And they can be distributed via publicly available computers in high-traffic areas such as libraries and shopping malls. In many cases, electronic surveys are placed on laptops and respondents fill out a survey on a laptop computer rather than on paper.

Strengths and Weaknesses of Electronic Surveys

Cost-savings: It is less expensive to send questionnaires online than to pay for postage or for interviewers.

Ease of Editing/Analysis: It is easier to make changes to the questionnaire, and to copy and sort data.

Faster Transmission Time: Questionnaires can be delivered to recipients in seconds, rather than in days as with traditional mail.

Easy Use of Preletters: You may send invitations and receive responses in a very short time and thus receive participation level estimates.

Higher Response Rate: Research shows that response rates on private networks are higher with electronic surveys than with paper surveys or interviews.

More Candid Responses: Research shows that respondents may answer more honestly with electronic surveys than with paper surveys or interviews.

Potentially Quicker Response Time with Wider Magnitude of Coverage: Due to the speed of online networks, participants can answer in minutes or hours, and coverage can be global.

Sample Demographic Limitations: Population and sample are limited to those with access to a computer and an online network.

Lower Levels of Confidentiality: Due to the open nature of most online networks, it is difficult to guarantee anonymity and confidentiality.

Layout and Presentation issues: Constructing the format of a computer questionnaire can be more difficult the first few times, due to a researcher's lack of experience.

Additional Orientation/Instructions: More instruction and orientation to the computer online systems may be necessary for respondents to complete the questionnaire.

Potential Technical Problems with Hardware and Software: As most of us (perhaps all of us) know all too well, computers have a much greater likelihood of "glitches" than oral or written forms of communication.

Response Rate: Even though research shows that e-mail response rates are higher, Opermann (1995) warns that most of these studies found response rates higher only during the first few days; thereafter, the rates were not significantly higher.

Designing Surveys

Initial planning of the survey design and survey questions is extremely important in conducting survey research. Once surveying has begun, it is difficult or impossible to adjust the basic research questions under consideration or the tool used to address them since the instrument must remain stable in order to standardize the data set. This section provides information needed to construct an instrument that will satisfy basic validity and reliability issues. It also offers information about the important decisions you need to make concerning the types of questions you are going to use, as well as the content, wording, order and format of your survey questionnaire.

Overall Design Issues

Four key issues should be considered when designing a survey or questionnaire: respondent attitude, the nature of the items (or questions) on the survey, the cost of conducting the survey, and the suitability of the survey to your research questions.

Respondent attitude: When developing your survey instrument, it is important to try to put yourself into your target population's shoes. Think about how you might react when approached by a pollster while out shopping or when receiving a phone call from a pollster while you are sitting down to dinner. Think about how easy it is to throw away a response survey that you've received in the mail. When developing your instrument, it is important to choose the method you think will work for your research, but also one in which you have confidence. Ask yourself what kind of survey you, as a respondent, would be most apt to answer.

Nature of questions: It is important to consider the relationship between the medium that you use and the questions that you ask. For instance, certain types of questions are difficult to answer over the telephone. Think of the problems you would have in attempting to record Likert scale responses, as in closed-ended questions, over the telephone--especially if a scale of more than five points is used. Responses to open-ended questions would also be difficult to record and report in telephone interviews.

Cost: Along with decisions about the nature of the questions you ask, expense issues also enter into your decision making when planning a survey. The population under consideration, the geographic distribution of this sample population, and the type of questionnaire used all affect costs.

Ability of instrument to meet needs of research question: Finally, there needs to be a logical link between your survey instrument and your research questions. If it is important to get a large number of responses from a broad sample of the population, you obviously will not choose to do a drop-off written survey or an in-person oral survey. Because of the size of the needed sample, you will need to choose a survey instrument that meets this need, such as a phone or mail survey. If you are interested in getting thorough information that might require a large amount of interaction between the interviewer and respondent, you will probably pick an in-person oral survey with a smaller sample of respondents. Your questions, then, will need to reflect both your research goals and your choice of medium.

Creating Questionnaire Questions

Developing well-crafted questionnaires is more difficult than it might seem. Researchers should carefully consider the type, content, wording, and order of the questions that they include. In this section, we discuss the steps involved in questionnaire development and the advantages and disadvantages of various techniques.

Open-ended vs. Closed-ended Questions

All researchers must make two basic decisions when designing a survey--they must decide: 1) whether they are going to employ an oral, written, or electronic method, and 2) whether they are going to choose questions that are open-ended or closed-ended.

Closed-Ended Questions: Closed-ended questions limit respondents' answers to the survey. The participants are allowed to choose from either a pre-existing set of dichotomous answers, such as yes/no, true/false, or multiple choice with an option for "other" to be filled in, or ranking scale response options. The most common of the ranking scale questions is called the Likert scale question. This kind of question asks the respondents to look at a statement (such as "The most important education issue facing our nation in the year 2000 is that all third graders should be able to read") and then "rank" this statement according to the degree to which they agree ("I strongly agree, I somewhat agree, I have no opinion, I somewhat disagree, I strongly disagree").

Open-Ended Questions: Open-ended questions do not give respondents answers to choose from, but rather are phrased so that the respondents are encouraged to explain their answers and reactions to the question with a sentence, a paragraph, or even a page or more, depending on the survey. If you wish to find information on the same topic as asked above (the future of elementary education), but would like to find out what respondents would come up with on their own, you might choose an open-ended question like "What do you think is the most important educational issue facing our nation in the year 2000?" rather than the Likert scale question. Or, if you would like to focus on reading as the topic, but would still not like to limit the participants' responses, you might pose the question this way: "Do you think that the most important issue facing education is literacy? Explain your answer below."

Note: Keep in mind that you do not have to use close-ended or open-ended questions exclusively. Many researchers use a combination of closed and open questions; often researchers use close-ended questions in the beginning of their survey, then allow for more expansive answers once the respondent has some background on the issue and is "warmed-up."
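Because closed-ended responses map naturally onto numbers, they can be coded for analysis as soon as they are collected. The sketch below is a minimal, hypothetical illustration (the column name, labels, and data are invented, and pandas is assumed to be available) of how Likert-style answers might be converted to a 1-5 numeric scale.

```python
import pandas as pd

# Hypothetical responses to one Likert item from a closed-ended questionnaire.
responses = pd.DataFrame({
    "q1_reading_priority": [
        "I strongly agree", "I somewhat agree", "I have no opinion",
        "I somewhat disagree", "I strongly agree",
    ]
})

# Code book: map each label to a numeric value (5 = strongly agree ... 1 = strongly disagree).
likert_codes = {
    "I strongly agree": 5,
    "I somewhat agree": 4,
    "I have no opinion": 3,
    "I somewhat disagree": 2,
    "I strongly disagree": 1,
}

responses["q1_coded"] = responses["q1_reading_priority"].map(likert_codes)
print(responses["q1_coded"].mean())  # simple summary statistic for the item
```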

Rating scales: ask respondents to rate something like an idea, concept, individual, program, product, etc. based on a closed ended scale format, usually on a five-point scale. For example, a Likert scale presents respondents with a series of statements rather than questions, and the respondents are asked to which degree they disagree or agree.

Ranking scales: ask respondents to rank a set of ideas or things, etc. For example, a researcher can provide respondents with a list of ice cream flavors, and then ask them to rank these flavors in order of which they like best, with the rank of "one" representing their favorite. These are more difficult to use than rating scales. They will take more time, and they cannot easily be used for phone surveys since they often require visual aids. However, since ranking scales are more difficult, they may actually increase appropriate effort from respondents.

Magnitude estimation scales: ask respondents to provide numeric estimation of answers. For example, respondents might be asked: "Since your least favorite ice cream flavor is vanilla, we'll give it a score of 10. If you like another ice cream 20 times more than vanilla, you'll give it a score of 200, and so on. So, compared to vanilla at a score of ten, how much do you like rocky road?" These scales are obviously very difficult for respondents. However, these scales have been found to help increase variance explanations over ordinal scaling.

Split or unfolding questions: begin by asking respondents a general question, and then follow up with clarifying questions.

Funneling questions: guide respondents through complex issues or concepts by using a series of questions that progressively narrow to a specific question. For example, researchers can start asking general, open-ended questions, and then move to asking specific, closed-ended, forced-choice questions.

Inverted funneling questions: ask respondents a series of questions that move from specific issues to more general issues. For example, researchers can ask respondents specific, closed-ended questions first and then ask more general, open-ended questions. This technique works well when respondents are not expected to be knowledgeable about a content area or when they are not expected to have an articulate opinion regarding an issue.

Factorial questions: use stories or vignettes to study judgment and decision-making processes. For example, a researcher could ask respondents: "You're in a dangerous, rapidly burning building. Do you exit the building immediately or go upstairs to wake up the other inhabitants?" Converse and Presser (1986) warn that little is known about how this survey question technique compares with other techniques.

Wording of Questions

The wording of survey questions is a tricky endeavor. It is difficult to develop shared meanings or definitions between researchers and the respondents, and among respondents.

In The Practice of Social Research, Keith Crew, a professor of Sociology at the University of Kentucky, cites a famous example of a survey gone awry because of wording problems. An interview survey that included Likert-type questions ranging from "very much" to "very little" was given in a small rural town. Although it would seem that these items would accurately record most respondents' opinions, in the colloquial language of the region the word "very" apparently has an idiomatic usage which is closer to what we mean by "fairly" or even "poorly." You can just imagine what this difference in definition did to the survey results (p. 271).

This, however, is an extreme case. Even small changes in wording can shift the answers of many respondents. The best thing researchers can do to avoid problems with wording is to pretest their questions. However, researchers can also follow some suggestions to help them write more effective survey questions.

To write effective questions, researchers need to keep in mind these four important techniques: directness, simplicity, specificity, and discreteness.

  • Questions should be written in a straightforward, direct language that is not caught up in complex rhetoric or syntax, or in a discipline's slang or lingo. Questions should be specifically tailored for a group of respondents.
  • Questions should be kept short and simple. Respondents should not be expected to learn new, complex information in order to answer questions.
  • Specific questions are for the most part better than general ones. Research shows that the more general a question is the wider the range of interpretation among respondents. To keep specific questions brief, researchers can sometimes use longer introductions that make the context, background, and purpose of the survey clear so that this information is not necessary to include in the actual questions.
  • Avoid questions that are overly personal or direct, especially when dealing with sensitive issues.

Content of Questions

When considering the content of your questionnaire, the most important consideration is obviously whether the content of the questions will elicit the kinds of answers necessary to address your initial research question. You can gauge the appropriateness of your questions by pretesting your survey, but you should also consider the following questions as you are creating your initial questionnaire:

  • Does your choice of open or close-ended questions lead to the types of answers you would like to get from your respondents?
  • Is every question in your survey integral to your intent? Superfluous questions that have already been addressed or are not relevant to your study will waste the time of both the respondents and the researcher.
  • Does one topic warrant more than one question?
  • Do you give enough prior information/context for each set of questions? Sometimes lead-in questions are useful to help the respondent become familiar and comfortable with the topic.
  • Are the questions both general enough (standardized and relevant to your entire sample) and specific enough (avoiding vague generalizations and ambiguity)?
  • Is each question as succinct as it can be without leaving out essential information?
  • Finally, and most importantly, try to put yourself in your respondents' shoes. Write a survey that you would be willing to answer yourself, and be polite, courteous, and sensitive. Thank the respondent for participating both at the beginning and the end of the survey.

Order of Questions

Although there are no general rules for ordering survey questions, there are still a few suggestions researchers can follow when setting up a questionnaire.

  • Pretesting can help determine if the ordering of questions is effective.
  • Which topics should start the survey off, and which should wait until the end of the survey?
  • What kind of preparation do my respondents need for each question?
  • Do the questions move logically from one to the next, and do the topics lead up to each other?

The following general guidelines for ordering survey questions can address these questions:

  • Use warm-up questions. Easier questions will ease the respondent into the survey and will set the tone and the topic of the survey.
  • Sensitive questions should not appear at the beginning of the survey. Try to put the respondent at ease before addressing uncomfortable issues. You may also prepare the reader for these sensitive questions with some sort of written preface.
  • Consider transition questions that make logical links.
  • Try not to mix topics. Topics can easily be placed into "sets" of questions.
  • Try not to put the most important questions last. Respondents may become bored or tired before they get to the end of the survey.
  • Be careful with contingency questions ("If you answered yes to the previous question . . . etc.").
  • If you are using a combination of open and close-ended questions, try not to start your survey with open-ended questions. Respondents will be more likely to answer the survey if they are allowed the ease of closed-ended questions first.

Borrowing Questions

Before developing a survey questionnaire, Converse and Presser (1986) recommend that researchers consult published compilations of survey questions, like those published by the National Opinion Research Center and the Gallup Poll. This will not only give you some ideas on how to develop your questionnaire, but you can even borrow questions from surveys that reflect your own research. Since these questions and questionnaires have already been tested and used effectively, you will save both time and effort. However, you will need to take care to only use questions that are relevant to your study, and you will usually have to develop some questions on your own.

Advantages of Closed-Ended Questions

  • Closed-ended questions are more easily analyzed. Every answer can be given a number or value so that a statistical interpretation can be assessed. Closed-ended questions are also better suited for computer analysis. If open-ended questions are analyzed quantitatively, the qualitative information is reduced to coding and answers tend to lose some of their initial meaning. Because of the simplicity of closed-ended questions, this kind of loss is not a problem.
  • Closed-ended questions can be more specific, thus more likely to communicate similar meanings. Because open-ended questions allow respondents to use their own words, it is difficult to compare the meanings of the responses.
  • In large-scale surveys, closed-ended questions take less time from the interviewer, the participant, and the researcher, making them a less expensive survey method. The response rate is also higher with surveys that use closed-ended questions than with those that use open-ended questions.

Advantages of Open-Ended Questions

  • Open-ended questions allow respondents to include more information, including feelings, attitudes and understanding of the subject. This allows researchers to better access the respondents' true feelings on an issue. Closed-ended questions, because of the simplicity and limit of the answers, may not offer the respondents choices that actually reflect their real feelings. Closed-ended questions also do not allow the respondent to explain that they do not understand the question or do not have an opinion on the issue.
  • Open-ended questions cut down on two types of response error; respondents are not likely to forget the answers they have to choose from if they are given the chance to respond freely, and open-ended questions simply do not allow respondents to disregard reading the questions and just "fill in" the survey with all the same answers (such as filling in the "no" box on every question).
  • Because they allow for obtaining extra information from the respondent, such as demographic information (current employment, age, gender, etc.), surveys that use open-ended questions can be used more readily for secondary analysis by other researchers than can surveys that do not provide contextual information about the survey population.

Potential Problems with Survey Questions

While designing questions for a survey, researchers should be aware of a few problems and how to avoid them:

"Everyone has an opinion": It is incorrect to assume that each respondent has an opinion regarding every question. Therefore, you might offer a "no opinion" option to avoid this assumption. Filters can also be created. For example, researchers can ask respondents if they have any thoughts on an issue, to which they have the option to say "no."

Agree and disagree statements: according to Converse and Presser (1986), these statements suffer from "acquiescence" or the tendency of respondents to agree despite question content (p.35). Researchers can avoid this problem by using forced-choice questions with these statements.

Response order bias: this occurs when a respondent loses track of all options and picks one that comes easily to mind rather than the most accurate. Typically, the respondent chooses the last or first response option. This problem might occur if researchers use long lists and/or rating scales.

Response set: this problem can occur when using a close-ended question format with response options like yes/no or agree/disagree. Sometimes respondents do not consider each question and just answer no or disagree to all questions.

Telescoping: occurs when respondents report that an event took place more recently than it actually did. To avoid this problem, Frey and Mertens (1995) say researchers can use "aided recall": using a reference point or landmark, or a list of events or behaviors (p. 101).

Forward telescoping: occurs when respondents include events that have actually happened before the time frame established. This results in overreporting. According to Converse and Presser (1986), researchers can use "bounded recall" to avoid this problem (p.21). Bounded recall is when researchers interview respondents several months or so after the initial interview to inquire about events that have happened since then. This technique, however, requires more resources. Converse and Presser said that researchers can also just try to narrow the reference points used, which has been shown to reduce this problem too.

Fatigue effect: happens when respondents grow bored or tired during the interview. To avoid this problem, Frey and Mertens (1995) say researchers can use transitions, vary questions and response options, and they can put easy to answer questions at the end of the questionnaire.

Types of Questions to Avoid

  • Double-barreled questions: force respondents to make two decisions in one. For example, a question like "Do you think women and children should be given the first available flu shots?" does not allow the respondent to choose whether women or children should be given the first shots.
  • Double negative questions: for example, "Please tell me whether or not you agree or disagree with this statement. Graduate teaching assistants should not be required to help students outside of class." Respondents may confuse the meaning of the disagree option.
  • Hypothetical questions: are typically too difficult for respondents since they require more scrutiny. For example, "If there were a cure for cancer, would you still support euthanasia?"
  • Ambiguous questions: respondents might not understand the question.
  • Biased questions: for example, "Don't you think that suffering terminal cancer patients should be allowed to be released from their pain?" Researchers should never try to make one response option look more suitable than another.
  • Questions with long lists: these questions may tire respondents, or respondents may lose track of the question.

Pretesting the Questionnaire

Ultimately, designing the perfect survey questionnaire is impossible. However, researchers can still create effective surveys. To determine the effectiveness of your survey questionnaire, it is necessary to pretest it before actually using it. Pretesting can help you determine the strengths and weaknesses of your survey concerning question format, wording and order.

There are two types of survey pretests: participating and undeclared.

  • Participating pretests dictate that you tell respondents that the pretest is a practice run; rather than asking the respondents to simply fill out the questionnaire, participating pretests usually involve an interview setting where respondents are asked to explain reactions to question form, wording and order. This kind of pretest will help you determine whether the questionnaire is understandable.
  • When conducting an undeclared pretest, you do not tell respondents that it is a pretest. The survey is given just as you intend to conduct it for real. This type of pretest allows you to check your choice of analysis and the standardization of your survey. According to Converse and Presser (1986), if researchers have the resources to do more than one pretest, it might be best to use a participating pretest first, then an undeclared test.

General Applications of Pretesting:

Whether you use a participating or an undeclared pretest, pretesting should ideally also test specifically for question variation, meaning, task difficulty, and respondent interest and attention. Your pretests should also include any questions you borrowed from other similar surveys, even if they have already been pretested, because meaning can be affected by the particular context of your survey. Researchers can also pretest the following: flow, order, skip patterns, timing, and overall respondent well-being.

Pretesting for reliability and validity:

Researchers might also want to pretest the reliability and validity of the survey questions. To be reliable, a survey question must be answered by respondents the same way each time. According to Weisberg et al. (1989), researchers can assess reliability by comparing the answers respondents give in one pretest with their answers in another pretest. A survey question's validity, in turn, is determined by how well it measures the concept(s) it is intended to measure. Both convergent validity and divergent validity can be assessed by first comparing answers to another question measuring the same concept, and then by comparing this answer with the participant's response to a question that asks for the exact opposite answer.

For instance, you might include questions in your pretest that explicitly test for validity: if a respondent answers "yes" to the question, "Do you think that the next president should be a Republican?" then you might ask "What party do you think you might vote for in the next presidential election?" to check for convergent validity, then "Do you think that you will vote Democrat in the next election?" to check the answer for divergent validity.
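As a rough sketch of how such pretest checks might be run (the data and variable names below are invented, and pandas is assumed to be available), test-retest reliability can be estimated by correlating answers from two pretest administrations, and convergent/divergent validity by correlating an item with a same-concept item and with a reverse-worded item:

```python
import pandas as pd

# Hypothetical pretest data: the same respondents answered twice, two weeks apart.
pretest = pd.DataFrame({
    "republican_pres_t1": [5, 2, 4, 1, 3, 5, 2],  # "next president should be a Republican" (1-5)
    "republican_pres_t2": [5, 1, 4, 2, 3, 4, 2],  # same item, second administration
    "vote_republican":    [5, 2, 5, 1, 3, 5, 1],  # same concept, different wording
    "vote_democrat":      [1, 4, 2, 5, 3, 1, 4],  # reverse-worded (opposite-concept) item
})

# Test-retest reliability: answers should correlate strongly across administrations.
reliability = pretest["republican_pres_t1"].corr(pretest["republican_pres_t2"])

# Convergent validity: strong positive correlation with a same-concept item.
convergent = pretest["republican_pres_t1"].corr(pretest["vote_republican"])

# Divergent validity: strong negative correlation with the opposite-concept item.
divergent = pretest["republican_pres_t1"].corr(pretest["vote_democrat"])

print(f"reliability={reliability:.2f}, convergent={convergent:.2f}, divergent={divergent:.2f}")
```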

Conducting Surveys

Once you have constructed a questionnaire, you'll need to make a plan that outlines how and to whom you will administer it. There are a number of options available in order to find a relevant sample group amongst your survey population. In addition, there are various considerations involved with administering the survey itself.

Administering a Survey

This section attempts to answer the question: "How do I go about getting my questionnaire answered?"

For all types of surveys, some basic practicalities need to be considered before the surveying begins. For instance, you need to find the most convenient time to carry out the data collection (this becomes particularly important in interview surveying and group-administered surveys) and estimate how long the data collection is likely to take. Finally, you need to make practical arrangements for administering the survey. Pretesting your survey will help you determine the time it takes to administer, process, and analyze your survey, and will also help you clear out some of the bugs.

Administering Written Surveys

Written surveys can be handled in several different ways. A research worker can deliver the questionnaires to the homes of the sample respondents, explain the study, and then pick the questionnaires up on a later date (or, alternately, ask the respondent to mail the survey back when completed). Another option is mailing questionnaires directly to homes and having researchers pick up and check the questionnaires for completeness in person. This method has proven to have higher response rates than straightforward mail surveys, although it tends to take more time and money to administer.

It is important to put yourself into the role of respondent when deciding how to administer your survey. Most of us have received and thrown away a mail survey, and so it may be useful to think back to the reasons you had for not filling it out and returning it. Here are some ideas for boosting your response rate:

  • Include in each questionnaire a letter of introduction and explanation, and a self-addressed, stamped envelope for returning the questionnaire.
  • Oftentimes, when it fits the study's budget, the envelope might also include a monetary "reward" (usually a dollar to five dollars) as an incentive to fill out the survey.
  • Another method for saving the respondent time is to create a self-mailing questionnaire that requires no envelope but folds easily so that the return address appears on the outside. The easier you make the process of completing and returning the survey, the better your survey results will be.
  • Follow-up mailings are an important part of administering mail surveys. Nonrespondents can be sent letters of additional encouragement to participate. Even better, a new copy of the survey can be sent to nonresponders. Methodological literature suggests that three follow-up letters are adequate, and that two to three weeks should be allowed between each mailing.

Administering Oral Surveys

Face-To-Face Surveys

Oftentimes conducting oral surveys requires a staff of interviewers; to control this variable as much as possible, the presentation and preparation of the interviewers are important considerations.

  • In any face-to-face interview, the appearance of the interviewer is important. Since the success of any survey relies on the interest of the participants to respond to the survey, the interviewer should take care to dress and act in such a way that would not offend the general sample population.
  • Of equal importance is the preparedness of the interviewer. The interviewer should be well acquainted with the questions, and have ample practice administering the survey with mock interviews. If several interviewers will be used, they should be trained as a group to ensure standardization and control. Interviewers also need to carry a letter of identification/authentication to present at in-person surveys.

When actually administering the survey, you need to make decisions about how much of the participants' responses need to be recorded, how much the interviewer will need to "probe" for responses, and how much the interviewer will need to account for context (the respondent's age, race, gender, reaction to the study, etc.). If you are administering a closed-ended question survey, these may not be considerations. On the other hand, when recording more open-ended responses, the researcher needs to decide beforehand on each of these factors:

  • It depends on the purpose of the study whether the interview should be recorded word for word, or whether the interviewer should record general impressions and opinions. However, for the sake of precision, the former approach is preferred. More information is always better than less when it comes to analyzing the results.
  • Sometimes respondents will respond to a question with an inappropriate answer; this can happen with both open- and closed-question surveys. Even if you give the participant structured choices like "I agree" or "I disagree," they might respond "I think that is true," which might require the interviewer to probe for an appropriate answer. In an open-question survey, this probing becomes more challenging. The interviewer might come prepared with a set of potential questions to use if the respondent does not elaborate enough or strays from the subject. The nature of these probes, however, needs to be determined by the researcher rather than ad-libbed by the interviewers, and the probes should be carefully controlled so that they do not lead the respondent to change answers.

Phone Surveys

Phone surveys certainly involve all of the preparation required for face-to-face surveys, but they encounter new problems because of their reputation. It is much easier to hang up on a phone surveyor than it is to slam the door in someone's face, so the sheer number of calls needed to complete a survey can be staggering. Computer innovation has tempered this problem a bit by allowing quick, random-digit dialing and by letting interviewers type answers directly into programs that automatically set up the data for analysis. Systems like CATI (computer-assisted telephone interviewing) have made phone surveys a more cost- and time-effective method, and therefore a popular one, although respondents are becoming more and more reluctant to answer phone surveys because of the increase in telemarketing.

Before conducting a survey, you must choose a relevant survey population. And, unless a survey population is very small, it is usually impossible to survey the entire relevant population. Therefore, researchers usually survey a sample drawn from an actual list of the relevant population, which is called a sampling frame. With a carefully selected sample, researchers can make estimations or generalizations regarding an entire population's opinions, attitudes, or beliefs on a particular topic.

Sampling Procedures and Methods

There are two different types of sampling procedures--probability and nonprobability. Probability sampling methods ensure that each person in the population has a known, nonzero chance of being selected, whereas nonprobability methods target specific individuals. Nonprobability sampling methods include the following:

  • Purposive samples: to purposely select individuals to survey.
  • Volunteer subjects: to ask for volunteers to survey.
  • Haphazard sampling: to survey individuals who can be easily reached.
  • Quota sampling: to select individuals based on a set quota. For example, if a census indicates that more than half of the population is female, then the sample will be adjusted accordingly.

Clearly, there can be an inherent bias in nonprobability methods. Therefore, according to Weisberg, Krosnick, and Bowen (1989), it is not surprising that most survey researchers prefer probability sampling methods. Some commonly used probability sampling methods for surveys are listed below; a brief code sketch of how several of them can be drawn follows the list:

  • Simple random sample: a sample is drawn randomly from a list of individuals in a population.
  • Systematic selection procedure sample: a variant of the simple random sample in which a random starting point is chosen and then every k-th individual on the list is selected.
  • Stratified sample: dividing up the population into smaller groups, and randomly sampling from each group.
  • Cluster sample: dividing up a population into naturally occurring groups (clusters) and then sampling only from a randomly selected subset of those groups. Cluster sampling, according to Lee, Forthofer, and Lorimer (1989), "is considered a more practical approach to surveys because it samples by groups or clusters of elements rather than by individual elements" (p. 12). It also reduces interview costs. However, Weisberg et al. (1989) said accuracy declines when using this sampling method.
  • Multistage sampling: first sampling a set of geographic areas, then sampling a subset of units within those areas, and so on.
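The following sketch illustrates how three of these probability methods might be drawn from a sampling frame in practice. It is only an illustration under stated assumptions: the frame, column names, and sample sizes are invented, and pandas is assumed to be available.

```python
import random
import pandas as pd

# Hypothetical sampling frame: one row per person in the relevant population.
frame = pd.DataFrame({
    "person_id": range(1, 1001),
    "region": ["urban", "rural"] * 500,
})

# Simple random sample: every individual has an equal chance of selection.
simple = frame.sample(n=100, random_state=42)

# Systematic selection: pick a random starting point, then every k-th person on the list.
k = len(frame) // 100
start = random.randrange(k)
systematic = frame.iloc[start::k]

# Stratified sample: divide the frame into groups (strata) and sample randomly within each.
stratified = (
    frame.groupby("region", group_keys=False)
         .apply(lambda stratum: stratum.sample(n=50, random_state=42))
)
```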

Sampling and Nonsampling Errors

Directly related to sample size are the concepts of sampling and nonsampling errors. According to Fox and Tracy (1986), surveys are subject to both sampling errors and nonsampling errors.

A sampling error arises from the fact that samples inevitably differ from their populations. Therefore, survey sample results should be seen only as estimations. Weisberg et al. (1989) said sampling errors cannot be calculated for nonprobability samples, but they can be determined for probability samples. To gauge sampling error, look first at the sample size and then at the sampling fraction--the percentage of the population that is being surveyed. In general, the more people surveyed, the smaller the error. This error can also be reduced, according to Fox and Tracy (1986), by increasing the representativeness of the sample.

Then, there are two different kinds of nonsampling error--random and nonrandom errors. Fox and Tracy (1986) said random errors decrease the reliability of measurements. These errors can be reduced through repeated measurements. Nonrandom errors result from a bias in survey data, which is connected to response and nonresponse bias.

Confidence Level and Interval

Any statement of sampling error must contain two essential components: the confidence level and the confidence interval. These two components are used together to express the accuracy of the sample's statistics in terms of the level of confidence that the statistics fall within a specified interval from the true population parameter. For example, a researcher may be "95 percent confident" that the sample statistic (that 50 percent favor candidate X) is within plus or minus 5 percentage points of the population parameter. In other words, the researcher is 95 percent confident that between 45 and 55 percent of the total population favor candidate X.
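The arithmetic behind a statement like the one above can be reproduced with the standard margin-of-error formula for a proportion. The sketch below assumes a simple random sample and uses invented numbers: a 50% sample proportion and 384 respondents, which is roughly the sample size a plus-or-minus 5 point interval at 95% confidence requires.

```python
import math

p = 0.50   # sample proportion favoring candidate X
n = 384    # sample size (invented for illustration)
z = 1.96   # z value corresponding to a 95% confidence level

margin = z * math.sqrt(p * (1 - p) / n)   # margin of error, about 0.05
low, high = p - margin, p + margin
print(f"95% confident the population value lies between {low:.2f} and {high:.2f}")
```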

Lauer and Asher (1988) provide a table that gives the confidence interval limits for percentages based upon sample size (p. 58). The table, "Sample Size and Confidence Interval Limits," reports 95% confidence intervals based on a population incidence of 50% and a large population relative to sample size.

Confidence Limits and Sample Size

When selecting a sample size, keep in mind that a higher number of individuals surveyed from a target group yields a tighter measurement, while a lower number yields a looser range of confidence limits. The confidence limits may need to be corrected if, according to Lauer and Asher (1988), "the sample size starts to approach the population size" or if "the variable under scrutiny is known to have a much [original emphasis] smaller or larger occurrence than 50% in the whole population" (p. 59). For smaller populations, Singleton (1988) said the standard error or confidence interval should be multiplied by a correction factor equal to sqrt(1 - f), where "f" is the sampling fraction, or proportion of the population included in the sample.
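As a small worked example of that correction (with invented numbers), suppose 200 people are sampled from a population of only 500; the uncorrected margin of error is multiplied by sqrt(1 - f):

```python
import math

n, N = 200, 500                   # sample size and (small) population size
f = n / N                         # sampling fraction
se = math.sqrt(0.5 * 0.5 / n)     # standard error for a 50% proportion
fpc = math.sqrt(1 - f)            # finite population correction factor

uncorrected = 1.96 * se           # about +/- 6.9 percentage points
corrected = uncorrected * fpc     # about +/- 5.4 percentage points
```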

Lauer and Asher (1988) give a table of correction factors for confidence limits where sample size is an important part of population size (p. 60) and also a table of correction factors for where the percentage incidence of the parameter in the population is not 50% (p. 61).

Tables for Calculating Confidence Limits vs. Sample Size

Lauer and Asher (1988) present these as two tables: "Correction Factors for Confidence Limits When Sample Size (n) Is an Important Part of Population Size (N >= 100)" (for n over 70% of N, simply take all of N; p. 60) and "Correction Factors for Rare and Common Percentage of Variables" (p. 61).

Analyzing Survey Results

After creating and conducting your survey, you must now process and analyze the results. These steps require strict attention to detail and, in some cases, knowledge of statistics and computer software packages. How you conduct these steps will depend on the scope of your study, your own capabilities, and the audience to whom you wish to direct the work.

Processing the Results

It is clearly important to keep careful records of survey data in order to do effective work. Most researchers recommend using a computer to help sort and organize the data. Additionally, Glastonbury and MacKean point out that once the data has been filtered through the computer, it is possible to do an unlimited amount of analysis (p. 243).

Jolliffe (1986) believes that editing should be the first step in processing this data. He writes, "The obvious reason for this is to ensure that the data analyzed are correct and complete. At the same time, editing can reduce the bias, increase the precision and achieve consistency between the tables [regarding those produced by social science computer software]" (p. 100). Of course, editing may not always be necessary, if for example you are doing a qualitative analysis of open-ended questions, or the survey is part of a larger project and gets distributed to other agencies for analysis. However, editing could be as simple as checking the information input into the computer.

All of this information should be used to test for statistical significance. See our guide on Statistics for more on this topic.

Information may be recorded in any number of ways. Charts and graphs are clear, visual ways to record findings in many cases. For instance, in a mail-out survey where response rate is an issue, you might use a response rate graph to make the process easier. The day the surveys are mailed out should be recorded first. Then, every day thereafter, the number of returned questionnaires should be logged on the graph. Be sure to record both the number returned each day and the cumulative number, or percentage. Also, as each completed questionnaire is returned, it should be opened, scanned, and assigned an identification number.
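A response rate log of this kind is easy to keep in a spreadsheet or a short script. The sketch below is hypothetical (the dates, daily counts, and number of surveys mailed are all invented, and pandas is assumed); it tracks daily and cumulative returns and the running response rate, and plotting the last column requires matplotlib.

```python
import pandas as pd

surveys_mailed = 400  # hypothetical size of the mailing

# Hypothetical count of questionnaires returned each day, starting on the mailing date.
returned = pd.Series(
    [0, 12, 35, 48, 30, 22, 15, 9],
    index=pd.date_range("2024-03-01", periods=8, freq="D"),
)

log = pd.DataFrame({"returned_today": returned})
log["cumulative"] = log["returned_today"].cumsum()
log["response_rate_pct"] = 100 * log["cumulative"] / surveys_mailed

print(log)
# log["response_rate_pct"].plot(title="Cumulative response rate (%)")  # optional graph
```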

Analyzing the Results

Before actually beginning the survey, the researcher should know how they want to analyze the data. As stated in the Processing the Results section, if you are collecting quantifiable data, a code book is needed for interpreting your data and should be established prior to collecting the survey data. This is important because there are many different formulas needed in order to properly analyze the survey research and obtain statistical significance. Since computer programs have made the process of analyzing data vastly easier than it was, it would be sensible to choose this route. Be sure to pick your program before you design your survey; some programs require the data to be laid out in different ways.

After the survey is conducted and the data collected, the results must be assembled in some usable format that allows comparison within the survey group, between groups, or both. The results can be analyzed in a number of ways. A t-test may be used to determine whether the scores of two groups differ on a single variable--whether writing ability differs among students in two classrooms, for instance. A matched (paired) t-test could be applied to determine whether scores of the same participants differ under different conditions or over time. An ANOVA could be applied if the study compares multiple groups on one or more variables. Correlation measures could also be computed to examine the relationship between two interacting variables within the data set.
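
The sketch below shows how these comparisons might be run in Python with SciPy, one option alongside the packages listed later in this guide. The scores are simulated purely for illustration; in practice you would load your own coded survey responses.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    class_a = rng.normal(70, 10, 30)   # e.g., writing scores in classroom A
    class_b = rng.normal(74, 10, 30)   # e.g., writing scores in classroom B

    # Independent-samples t-test: do the two classrooms differ?
    t, p = stats.ttest_ind(class_a, class_b)

    # Matched (paired) t-test: the same participants measured at two times
    pre, post = class_a, class_a + rng.normal(2, 3, 30)
    t_paired, p_paired = stats.ttest_rel(pre, post)

    # One-way ANOVA: compare three or more groups on one variable
    class_c = rng.normal(72, 10, 30)
    f, p_anova = stats.f_oneway(class_a, class_b, class_c)

    # Pearson correlation between two interacting variables
    r, p_corr = stats.pearsonr(class_a, post)

    print(f"independent t = {t:.2f} (p = {p:.3f}), paired t = {t_paired:.2f} (p = {p_paired:.3f})")
    print(f"ANOVA F = {f:.2f} (p = {p_anova:.3f}), correlation r = {r:.2f} (p = {p_corr:.3f})")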

Secondary Analysis

Secondary analysis of survey data is an accepted methodology which applies previously collected survey data to new research questions. This methodology is particularly useful to researchers who do not have the time or money to conduct an extensive survey, but may be looking at questions for which some large survey has already collected relevant data. A number of books and chapters have been written about this methodology, some of which are listed in the annotated bibliography under "Secondary Analysis."

Advantages and Disadvantages of Using Secondary Analysis

Advantages

  • Considerably cheaper and faster than doing original studies
  • You can benefit from the work of some of the top scholars in your field, which for the most part ensures quality data.
  • If you have limited funds and time, other surveys may have the advantage of samples drawn from larger populations.
  • How much you use previously collected data is flexible: you might extract only a few figures from a table, use the data in a subsidiary role in your research, or give it a central role.
  • A network of data archives in which survey data files are collected and distributed is readily available, making research for secondary analysis easily accessible.

Disadvantages

  • Since many surveys deal with national populations, if you are interested in studying a well-defined minority subgroup you will have a difficult time finding relevant data.
  • Secondary analysis can be used in irresponsible ways. If variables aren't exactly those you want, data can be manipulated and transformed in a way that might lessen the validity of the original research.
  • Much research, particularly of large samples, can involve large data files and complex statistical packages.

Data-entry Packages Available for Survey Data Analysis

SNAP: Offers simple survey analysis, is able to help with the survey from start to finish, including the designing of questions and questionnaires.

SPSS: Statistical package for social sciences; can cope with most kinds of data.

SAS: A flexible general purpose statistical analysis system.

MINITAB: A very easy-to-use and fairly limited general purpose package for "beginners."

STATGRAPHS: General interactive statistical package with good graphics but not very flexible.

Reporting Survey Results

The final stage of the survey is to report your results. There is no established format for reporting a survey's results. The report may follow a pattern similar to formal experimental write-ups, the analysis may show up in pitches to advertising agencies--as with Arbitron data--or the analysis may be presented in departmental meetings to aid curriculum arguments. A formal report might contain contextual information, a literature review, a presentation of the research question under investigation, information on survey participants, a section explaining how the survey was conducted, the survey instrument itself, a presentation of the quantified results, and a discussion of the results.

You can choose to represent your data graphically for easier interpretation by others outside your research project. You can use, for example, bar graphs, histograms, frequency polygons, pie charts, and contingency tables.
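
As a minimal sketch, assuming Matplotlib is available and using hypothetical response counts, the example below draws a bar graph and a pie chart; histograms and frequency polygons can be produced the same way from raw scores.

    import matplotlib.pyplot as plt

    # Hypothetical quantified results: number of respondents per category
    categories = ["Strongly agree", "Agree", "Neutral", "Disagree", "Strongly disagree"]
    counts = [42, 77, 31, 18, 9]

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))

    ax1.bar(categories, counts)
    ax1.set_ylabel("Number of respondents")
    ax1.set_title("Bar graph of responses")
    ax1.tick_params(axis="x", rotation=30)

    ax2.pie(counts, labels=categories, autopct="%1.0f%%")
    ax2.set_title("Pie chart of responses")

    fig.tight_layout()
    plt.show()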

Commentary on Survey Research

In this section, we present several commentaries on survey research.

Strengths and Weaknesses of Surveys

Strengths:

  • Surveys are relatively inexpensive (especially self-administered surveys).
  • Surveys are useful in describing the characteristics of a large population. No other method of observation can provide this general capability.
  • They can be administered from remote locations using mail, email or telephone.
  • Consequently, very large samples are feasible, which increases statistical power even when multiple variables are analyzed.
  • Many questions can be asked about a given topic giving considerable flexibility to the analysis.
  • There is flexibility at the creation phase in deciding how the questions will be administered: as face-to-face interviews, by telephone, as a group-administered written or oral survey, or by electronic means.
  • Standardized questions make measurement more precise by enforcing uniform definitions upon the participants.
  • Standardization ensures that similar data can be collected from groups then interpreted comparatively (between-group study).
  • Usually, high reliability is easy to obtain--by presenting all subjects with a standardized stimulus, observer subjectivity is largely eliminated.

Weaknesses:

  • A methodology relying on standardization forces the researcher to develop questions general enough to be minimally appropriate for all respondents, possibly missing what is most appropriate to many respondents.
  • Surveys are inflexible in that they require the initial study design (the tool and administration of the tool) to remain unchanged throughout the data collection.
  • The researcher must work to ensure that a large proportion of the selected sample replies.
  • It may be hard for participants to recall information or to tell the truth about a controversial question.
  • As opposed to direct observation, survey research (excluding some interview approaches) can seldom deal with "context."

Reliability and Validity

Surveys tend to be weak on validity and strong on reliability. The artificiality of the survey format puts a strain on validity. Since people's real feelings are hard to capture in terms of such dichotomies as "agree/disagree," "support/oppose," or "like/dislike," these are only approximate indicators of what we have in mind when we create the questions. Reliability, on the other hand, is a clearer matter. Survey research presents all subjects with a standardized stimulus, and so goes a long way toward eliminating unreliability in the researcher's observations. Careful wording, format, and content can also significantly reduce the subjects' own unreliability.

Ethical Considerations of Using Electronic Surveys

Because electronic mail is rapidly becoming such a large part of our communications system, this survey method deserves special attention. In particular, there are four basic ethical issues researchers should consider if they choose to use email surveys.

Sample Representativeness: Researchers who conduct surveys have an ethical obligation to use population samples that are inclusive of race, gender, educational and income levels, etc. If you choose to use e-mail to administer your survey, you face a serious problem: individuals who have access to personal computers, modems, and the Internet are not necessarily representative of the population. It is therefore suggested that researchers not use an e-mail survey when a more inclusive research method is available. However, if you do choose an e-mail survey because of its other advantages, consider including in your survey write-up a reminder of the limitations on sample representativeness that this method imposes.

Data Analysis: Even though e-mail surveys tend to have greater response rates, researchers still do not necessarily know exactly who has responded. For example, some e-mail accounts are screened by an unintended viewer before they reach the intended viewer. This issue challenges the external validity of the study. According to Goree and Marszalek (1995), because of this challenge, "researchers should avoid using inferential analysis for electronic surveys" (p. 78).

Confidentiality versus Anonymity: An electronic response is never truly anonymous, since researchers know the respondents' e-mail addresses. According to Goree and Marszalek (1995), researchers are ethically required to guard the confidentiality of their respondents and to assure respondents that they will do so.

Responsible Quotation: It is considered acceptable for researchers to correct typographical or grammatical errors before quoting respondents since respondents do not have the ability to edit their responses. According to Goree and Marszalek (1995), researchers are also faced with the problem of "casual language" use common to electronic communication (p. 78). Casual language responses may be difficult to report within the formal language used in journal articles.

Response Rate Issues

Nonresponse and response rates are becoming increasingly important issues in survey research. According to Weisberg, Krosnick and Bowen (1989), in the 1950s it was not unusual for survey researchers to obtain response rates of 90 percent. Now, however, people are not as trusting of interviewers, and response rates are much lower--typically 70 percent or less. Today, even when survey researchers obtain high response rates, they still have to deal with many potential respondent problems.

Nonresponse Issues

Nonresponse Errors

Nonresponse is usually considered a source of bias in a survey, aptly called nonresponse bias. Nonresponse bias is a problem for almost every survey, since there are usually differences between the ideal sample pool of respondents and the sample that actually responds. According to Fox and Tracy (1986), "when these differences are related to criterion measures, the results may be misleading or even erroneous" (p. 9). For example, a response rate of only 40 or 50 percent creates problems of bias, since the results may reflect an inordinate percentage of a particular demographic portion of the sample. In addition, variance estimates and confidence intervals become wider as the sample size is reduced, and it becomes more difficult to construct tight confidence limits.

Nonresponse bias usually cannot be avoided, and so it inevitably affects most survey research by introducing error into statistical measurements. Researchers must therefore account for nonresponse either while planning their survey or while analyzing their results. If you plan for a larger sample at the outset, confidence limits can be based on the number of responses actually received.
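
A back-of-the-envelope calculation makes the point concrete. The sketch below uses the standard large-sample margin-of-error formula for a proportion (an assumption of this example, not a formula given by the sources above) to show how the margin widens as the response rate falls; note that it says nothing about the nonresponse bias itself, only about precision.

    import math

    planned_sample = 1000          # questionnaires mailed out
    z, p = 1.96, 0.5               # 95% confidence, most conservative incidence

    for response_rate in (0.9, 0.7, 0.5, 0.4):
        n = int(planned_sample * response_rate)
        moe = z * math.sqrt(p * (1 - p) / n)
        print(f"{response_rate:.0%} response rate -> n = {n}, "
              f"margin of error ~ {moe * 100:.1f} percentage points")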

Household-Level Determinants of Nonresponse

According to Couper and Groves (1996), reductions in nonresponse and its errors should be based on a theory of survey participation. This theory argues that a person's decision to participate in a survey generally occurs during the first moments of interaction with an interviewer or with the survey text. According to Couper and Groves, four types of influences affect a potential respondent's decision about whether or not to cooperate in a survey. First, potential respondents are influenced by two factors the researcher cannot control: their social environments and their immediate households. Second, they are influenced by two factors the researcher can control: the survey design and the interviewer.

To minimize nonresponse, Couper and Groves suggest that researchers manipulate the two factors they can control--the survey design and the interviewer.

Response Issues

Not only do survey researchers have to be concerned about nonresponse rate errors, but they also have to be concerned about the following potential response rate errors:

  • Response bias occurs when respondents deliberately falsify their responses. This error greatly jeopardizes the validity of a survey's measurements.
  • Response order bias occurs when a respondent loses track of all options and picks one that comes easily to mind rather than the most accurate.
  • Response set bias occurs when respondents do not consider each question and just answer all the questions with the same response. For example, they answer "disagree" or "no" to all questions.

These response errors can seriously distort a survey's results. Unfortunately, according to Fox and Tracy (1986), response bias is difficult to eliminate; even if the same respondent is questioned repeatedly, he or she may continue to falsify responses. Response order bias and response set errors, however, can be reduced through careful development of the survey questionnaire.

Satisficing

Related to the issue of response errors, especially response order bias and response bias, is the issue of satisficing. According to Krosnick, Narayan, and Smith (1996) satisficing is the notion that certain survey response patterns occur as respondents "shortcut the cognitive processes necessary for generating optimal answers" (p. 29). This theoretical perspective arises from the belief that most respondents are not highly motivated to answer a survey's questions, as reflected in the declining response rates in recent years. Since many people are reluctant to be interviewed, it is presumptuous to assume that respondents will devote a lot of effort to answering a survey.

The theoretical notion of satisficing can be further understood by considering what respondents must do to provide optimal answers. According to Krosnick et al. (1996), "respondents must carefully interpret the meaning of each question, search their memories extensively for all relevant information, integrate that information carefully into summary judgments, and respond in ways that convey those judgments' meanings as clearly and precisely as possible" (p. 31). Therefore, satisficing occurs when one or more of these cognitive steps is compromised.

Satisficing takes two forms: weak and strong. Weak satisficing occurs when respondents go through all of the cognitive steps necessary to provide optimal answers, but are not as thorough in their cognitive processing. For example, respondents can answer a question with the first response that seems acceptable instead of generating an optimal answer. Strong satisficing, on the other hand, occurs when respondents omit the steps of judgment and retrieval altogether.

Even though they believe that not enough is known yet to offer suggestions on how to increase optimal respondent answers, Krosnick et al. (1996) argue that satisficing can be reduced by maximizing "respondent motivation" and by "minimizing task difficulty" in the survey questionnaire (p. 43).

Annotated Bibliography

General Survey Information:

Allan, Graham, & Skinner, Chris (eds.) (1991). Handbook for Research Students in the Social Sciences. The Falmer Press: London.

This book is an excellent resource for anyone studying in the social sciences. It is not only well-written, but it is clear and concise with pertinent research information.

Alreck, P. L., & Settle, R. B. (1995). The survey research handbook: Guidelines and strategies for conducting a survey (2nd ed.). Burr Ridge, IL: Irwin.

Provides thorough, effective survey research guidelines and strategies for sponsors, information seekers, and researchers. In a very accessible, but comprehensive, format, this handbook includes checklists and guidelists within the text, bringing together all the different techniques and principles, skills and activities to do a "really effective survey."

Babbie, E.R. (1973). Survey research methods . Belmont, CA: Wadsworth.

A comprehensive overview of survey methods. Solid basic textbook on the subject.

Babbie, E.R. (1995). The practice of social research (7th). Belmont, CA: Wadsworth.

The reference of choice for many social science courses. An excellent overview of question construction, sampling, and survey methodology. Includes a fairly detailed critique of an example questionnaire. Also includes a good overview of statistics related to sampling.

Belson, W.A. (1986). Validity in survey research. Brookfield, VT: Gower.

Emphasis on construction of survey instrument to account for validity.

Bourque, Linda B. & Fiedler, Eve P. (1995). How to Conduct Self-Administered and Mail Surveys. Sage Publications: Thousand Oaks.

Contains current information on both self-administered and mail surveys. It is a great resource if you want to design your own survey; there are step-by-step methods for conducting these two types of surveys.

Bradburn, N.M., & Sudman, S. (1979). Improving interview method and questionnaire design . San Francisco: Jossey-Bass Publishers.

A good overview of polling. Includes setting up questionnaires and survey techniques.

Bradburn, N. M., & Sudman, S. (1988). Polls and Surveys: Understanding What They Tell Us. San Francisco: Jossey-Bass Publishers.

These veteran survey researchers answer questions about survey research that are commonly asked by the general public.

Campbell, Angus A., & Katona, George. (1953). The Sample Survey: A Technique for Social Science Research. In Newcomb, Theodore M. (Ed.), Research Methods in the Behavioral Sciences. The Dryden Press: New York. pp. 14-55.

Includes information on all aspects of social science research. Some chapters in this book are outdated.

Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire . Newbury Park, CA: Sage.

A very helpful little publication that addresses the key issues in question construction.

Dillman, D.A. (1978). Mail and telephone surveys: The total design method . New York: John Wiley & Sons.

An overview of conducting telephone surveys.

Frey, James H., & Oishi, Sabine Mertens. (1995). How To Conduct Interviews By Telephone and In Person. Sage Publications: Thousand Oaks.

This book has a step-by-step breakdown of how to conduct and design telephone and in person interview surveys.

Fowler, Floyd J., Jr. (1993). Survey Research Methods (2nd.). Newbury Park, CA: Sage.

An overview of survey research methods.

Fowler, F. J. Jr., & Mangione, T. W. (1990). Standardized survey interviewing: Minimizing interviewer-related error . Newbury Park, CA: Sage.

Another aspect of validity/reliability--interviewer error.

Fox, J. & Tracy, P. (1986). Randomized Response: A Method for Sensitive Surveys . Beverly Hills, CA: Sage.

Authors provide a good discussion of response issues and methods of random response, especially for surveys with sensitive questions.

Frey, J. H. (1989). Survey research by telephone (2nd). Newbury Park, CA: Sage.

General overview to telephone polling.

Glock, Charles (ed.) (1967). Survey Research in the Social Sciences. New York: Russell Sage Foundation.

Although fairly outdated, this collection of essays is useful in illustrating the somewhat different ways in which different disciplines regard and use survey research.

Hoinville, G. & Jowell, R. (1978). Survey research practice . London: Heinemann.

Practical overview of the methods and procedures of survey research, particularly discussing problems which may arise.

Hyman, H. H. (1972). Secondary Analysis of Sample Surveys. New York: John Wiley & Sons.

This source is particularly useful for anyone attempting to do secondary analysis. It offers a comprehensive overview of this research method, and couches it within the broader context of social scientific research.

Hyman, H. H. (1955). Survey design and analysis: Principles, cases, and procedures . Glencoe, IL: Free Press.

According to Babbie, an oldie but goodie--a classic.

Jones, R. (1985). Research methods in the social and behavioral sciences . Sunderland, MA: Sinauer.

General introduction to methodology. Helpful section on survey research, especially the discussion on sampling.

Kalton, G. (1983). Compensating for missing survey data . Ann Arbor, MI: Survey Research Center, Institute for Social Research, the University of Michigan.

Addresses a problem often encountered in survey methodology.

Kish, L. (1965). Survey sampling . New York: John Wiley & Sons.

Classic text on sampling theories and procedures.

Lake, C.C., & Harper, P. C. (1987). Public opinion polling: A handbook for public interest and citizen advocacy groups . Washington, D.C.: Island Press.

Clearly written, easy-to-follow guide for planning, conducting, and analyzing public surveys. Presents material in a step-by-step fashion, including checklists, potential pitfalls, and real-world examples and samples.

Lauer, J.M., & Asher, J. W. (1988). Composition research: Empirical designs . New York: Oxford UP.

Excellent overview of a number of research methodologies applicable to composition studies. Includes a chapter on "Sampling and Surveys" and appendices on basic statistical methods and considerations.

Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Applied Social Research: Tool for the Human Services (2nd). Fort Worth, TX: Holt.

A good basic general research textbook which also includes sections on minority issues when doing research and the analysis of "available" or secondary data.

Rea, L. M., & Parker, R. A. (1992). Designing and conducting survey research: A comprehensive guide . San Francisco: Jossey-Bass.

Written for the social and behavioral sciences, public administration, and management.

Rossi, P.H., Wright, J.D., & Anderson, A.B. (eds.) (1983). Handbook of survey research . New York: Academic Press.

Handbook of quantitative studies in social relations.

Salant, P., & Dillman, D. A. (1994). How to conduct your own survey. New York: Wiley.

A how-to book written for the social sciences.

Sayer, Andrew. (1992). Methods In Social Science: A Realist Approach. Routledge: London and New York.

Gives a different perspective on social science research.

Schuldt, Barbara A., & Totter, Jeff W. (1994, Winter). Electronic Mail vs. Mail Survey Response Rates. Marketing Research, 6. 36-39.

An article with specific information for electronic and mail surveys. Mainly a technical resource.

Schuman, H. & Presser, S. (1981). Questions and answers in attitude surveys . New York: Academic Press.

Detailed analysis of research question wording and question order effects on respondents.

Schwartz, N. & Seymour, S. (1996). Answering Questions: Methodology for Determining Cognitive and Communication Processes in Survey Research. San Francisco: Jossey-Bass.

Authors provide a summary of the latest research methods used for analyzing interpretive cognitive and communication processes in answering survey questions.

Seymour, S., Bradburn, N. & Schwartz, N. (1996). Thinking About Answers: The Application of Cognitive Processes to Survey Methodology. San Francisco: Jossey-Bass.

Explores the survey as a "social conversation" to investigate what answers mean in relation to how people understand the world and communicate.

Simon, J. (1969). Basic research methods in social science: The art of empirical investigation. New York: Random .

An excellent discussion of survey analysis. The definitions and descriptions begin from a fairly understandable (simple) starting point, then the discussion unfolds to cover some fairly complex interpretive strategies.

Singleton, R. Jr., et al. (1988). Approaches to social research. New York: Oxford UP.

Has a very accessible chapter on sampling as well as a chapter on survey research.

Smith, Robert B. (Ed.) (1982). A Handbook of Social Science Methods, Volume 3. Praeger: New York.

There is a series of handbooks, each one with specific topics in social science research. A good technical resource, yet slightly dated.

Sul Lee, E., Forthofer, R.N.,& Lorimor, R.J. (1989). Analyzing complex survey data . Newbury Park, CA: Sage Publications.

Details on the statistical analysis of survey data.

Singer, E., & Presser, S., eds. (1989). Survey research methods: A reader . Chicago: U of Chicago P.

The essays in this volume originally appeared in various issues of Public Opinion Quarterly.

Survey Research Center (1983). Interviewer's manual . Ann Arbor, MI: University of Michigan Press.

Very practical, step-by-step guide to conducting a survey and interview with lots of examples to illustrate the process.

Pearson, R.W., & Boruch, R.F. (Eds.) (1986). Survey Research Design: Towards a Better Understanding of Their Costs and Benefits. Springer-Verlag: Berlin.

Explains, in a technical fashion, the financial aspects of research design. Somewhat of a cost-analysis book.

Weisberg, H.F., Krosnick, J.A., & Bowen, B.D. (1989). An introduction to survey research and data analysis. Glenview, IL: Scott Foresman.

A good discussion of basic analysis and statistics, particularly what statistical applications are appropriate for particular kinds of data.

Anderson, B., Puur, A., Silver, B., Soova, H., & Voormann, R. (1994). Use of a lottery as an incentive for survey participation: a pilot survey in Estonia. International Journal of Public Opinion Research, 6 , 64-71.

Looks at return results in a study that offers incentives, and recommends incentive use to increase response rates.

Bare, J. (1994). Truth about daily fluctuations in 1992 pre-election polls. Newspaper Research Journal, 15, 73-81.

Comparison of variations between daily poll results of the major polls used during the 1992 American Presidential race.

Chi, S. (1993). Computer knowledge, interests, attitudes, and uses among faculty in two teachers' universities in China. DAI-A, 54/12 , 4412-4623.

Survey indicating a strong link between subject area and computer usage.

Cowans, J. (1994). Wielding the people: Opinion polls and the problem of legitimacy in France since 1944. DAI-A, 54/12 , 4556-5027.

Study looks at how the advent of opinion polling has affected the legitimacy of French governments since World War II.

Crewe, I. (1993). A nation of liars? Opinion polls and the 1992 election. Journal of the Market Research Society, 35 , 341-359.

Poses possible reasons the British polls were so wrong in predicting the outcomes of the 1992 national elections.

Daly, J., & Miller, M. (1975). The empirical development of an instrument to measure writing apprehension. Research in the teaching of English , 9 (3), 242-249.

Discussion of basics in question development and data analysis. Also includes some sample questions.

Daniell, S. (1993). Graduate teaching assistants' attitudes toward and responses to academic dishonesty. DAI-A,54/06, 2065- 2257.

Study explores the ethical and academic responses to cheating, using a large survey tool.

Mittal, B. (1994). Public assessment of TV advertising: Faint praise and harsh criticism. Journal of Advertising Research, 34, 35-53.

Results of a survey of Southern U.S. television viewers' perceptions of television advertisements.

Palmquist, M., & Young, R.E. (1992). Is writing a gift? The impact on students who believe it is. Reading empirical research studies: The rhetoric of research . Hayes et al. eds. Hillsdale NJ: Erlbaum.

This chapter presents results of a study of student beliefs about writing. Includes sample questions and data analysis.

Serow, R. C., & Bitting, P. F. (1995). National service as educational reform: A survey of student attitudes. Journal of research and development in education , 28 (2), 87-90.

This study assessed college students' attitude toward a national service program.

Stouffer, Samuel. (1955). Communism, Conformity, and Civil Liberties. New York: John Wiley & Sons.

This is a famous old survey worth examining. It examined the impact of McCarthyism on the attitudes of both the general public and community leaders, asking whether the repression of the early 1950s affected support for civil liberties.

Wanta, W. & Hu, Y. (1993). The agenda-setting effects of international news coverage: An examination of differing news frames. International Journal of Public Opinion Research, 5, 250-264.

Discusses results of Gallup polls on important problems in relation to the news coverage of international news.

Worcester, R. (1992). The performance of the political opinion polls in the 1992 British general election. Marketing and Research Today, 20, 256-263.

A critique of the use of polls in an attempt to predict voter actions.

Yamada, S, & Synodinos, N. (1994). Public opinion surveys in Japan. International Journal of Public Opinion Research, 6 , 118-138.

Explores trends in opinion poll usage, response rates, and refusals in Japanese polls from 1975 to 1990.

Criticism/Critique/Evaluation:

Bangura, A. K. (1992). The limitations of survey research methods in assessing the problem of minority student retention in higher education . San Francisco: Mellen Research UP.

Case study done at a Maryland university addressing an aspect of validity involving intercultural factors.

Bateson, N. (1984). Data construction in social surveys. London: Allen & Unwin.

Tackles the theory of the method (but not the methods of the method) of data construction. Deals with the validity of the data by validating the process of data construction.

Braverman, M. (1996). Sources of Survey Error: Implications for Evaluation Studies. New Directions for Evaluation: Advances in Survey Research ,70, 17-28.

Looks at how evaluations using surveys can benefit from using survey design methods that reduce various survey errors.

Brehm, J. (1994). Stubbing our toes for a foot in the door? Prior contact, incentives and survey response. International Journal of Public Opinion Research, 6 , 45-63.

Considers whether incentives or the original contact letter lead to increased response rates.

Bulmer, M. (1977). Social-survey research. In M. Bulmer (ed.), Sociological research methods: An introduction . London: Macmillan.

The section includes discussions of pros and cons of survey research findings, inferences and interpreting relationships found in social-survey analysis.

Couper, M. & Groves, R. (1996). Household-Level Determinants of Survey Nonresponse. New Directions for Evaluation: Advances in Survey Research, 70, 63-80.

Authors discuss their theory of survey participation. They believe that decisions to participate are based on two occurrences: interactions with the interviewer, and the sociodemographic characteristics of respondents.

Couto, R. (1987). Participatory research: Methodology and critique. Clinical Sociology Review, 5 , 83-90.

Criticism of survey research. Addresses knowledge/power/change issues through the critique.

Dillman, D., Sangster, R., Tarnai, J., & Rockwood, T. (1996) Understanding Differences in People's Answers to Telephone and Mail Surveys. New Directions for Evaluation: Advances in Survey Research , 70, 45-62.

Explores the issue of differences in respondents' answers in telephone and mail surveys, which can affect a survey's results.

Esaiasson, P. & Granberg, D. (1993). Hidden negativism: Evaluation of Swedish parties and their leaders under different survey methods. International Journal of Public Opinion Research, 5, 265-277.

Compares varying results of mailed questionnaires vs. telephone and personal interviews. Findings indicate methodology affected results.

Guastello, S. & Rieke, M. (1991). A review and critique of honesty test research. Behavioral Sciences and the Law, 9, 501-523.

Looks at the use of honesty, or integrity, testing to predict theft by employees, questioning further use of the tests due to extremely low validity. Social and legal implications are also considered.

Hamilton, R. (1991). Work and leisure: On the reporting of poll results. Public Opinion Quarterly, 55 , 347-356.

Looks at methodology changes that affected reports of results in the Harris poll on American Leisure.

Juster, F. & Stanford, F. (1991). Comment on work and leisure: On reporting of poll results. Public Opinion Quarterly, 55 , 357-359.

Rebuttal of the Hamilton essay, cited above. The rebuttal is based upon statistical interpretation methods used in the cited survey.

Krosnick, J., Narayan, S., & Smith, W. (1996). Satisficing in Surveys: Initial Evidence. New Directions in Evaluation: Advances in Survey Research , 70, 29-44.

Authors discuss "satisficing," a cognitive approach to survey response, which they believe helps researchers understand how survey respondents arrive at their answers.

Lindsey, J.K. (1973). Inferences from sociological survey data: A unified approach . San Francisco: Jossey-Bass.

Examines the statistical analysis of survey data.

Morgan, F. (1990). Judicial standards for survey research: An update and guidelines. Journal of Marketing, 54 , 59-70.

Looks at legal use of survey information as defined and limited in recent cases. Excellent definitions.

Pottick, K. (1990). Testing the underclass concept by surveying attitudes and behavior. Journal of Sociology and Social Welfare, 17, 117-125.

Review of definitional tests constructed to define "underclass."

Rohme, N. (1992). The state of the art of public opinion polling worldwide. Marketing and Research Today, 20, 264-271.

A quick review of the use of polling in several countries, concluding that the use of polling is on the rise worldwide.

Sabatelli, R. (1988). Measurement issues in marital research: A review and critique of contemporary survey instruments. Journal of Marriage and the Family, 55 , 891-915.

Examines issues of methodology.

Schriesheim, C. A., & Denisi, A. S. (1980). Item Presentation as an Influence on Questionnaire Validity: A Field Experiment. Educational and Psychological Measurement, 40(1), 175-82.

Two types of questionnaire formats measuring leadership variables were examined: one with items measuring the same dimensions grouped together and the second with items measuring the same dimensions distributed randomly. The random condition showed superior validity.

Smith, T. (1990). "A critique of the Kinsey Institute/Roper organization national sex knowledge survey." Public Opinion Quarterly, Vol. 55 , 449-457.

Questions validity of the survey based upon question selection and response interpretations. A rejoinder follows, defending the poll.

Smith, Tom W. (1990). "The First Straw? A Study of the Origins of Election Polls," Public Opinion Quarterly, Vol. 54 (Spring: 21-36).

This article offers a look at the early history of American political polling, with special attention to media reactions to the polls. This is an interesting source for anyone interested in the ethical issues surrounding polling and survey.

Sniderman, P. (1986). Reflections on American racism. Journal of Social Issues, 42 , 173-187.

Rebuttal of critique of racism research. Addresses issues of bias and motive attribution.

Stanfield, J. H. II, & Dennis, R. M., eds (1993). Race and Ethnicity in Research Methods . Newbury Park, CA: Sage.

The contributions in this volume examine the array of methods used in quantitative, qualitative, and comparative and historical research to show how research sensitive to ethnic issues can best be conducted.

Stapel, J. (1993). Public opinion polling: Some perspectives in response to 'critical perspectives.' International Journal of Public Opinion Research, 5, 193-194.

Discussion of the moral power of polling results.

Wentland, E. J., & Smith, K. W. (1993). Survey responses: An evaluation of their validity . San Diego: Academic Press.

Reviews and analyzes data from studies that have, through the use of external criteria, assessed the validity of individuals' responses to questions concerning personal characteristics and behavior in a wide variety of areas.

Williams, R. M., Jr. (1989). "The American Soldier: An Assessment, Several Wars Later." Public Opinion Quarterly. Vol. 53 (Summer: 155-174).

One of the classic studies in the history of survey research is reviewed by one of its authors.

Secondary Analysis:

Jolliffe, F.R. (1986). Survey Design and Analysis. Ellis Horwood Limited: Chichester.

Information about survey design as well as secondary analysis of surveys.

Kiecolt, K. J., & Nathan, L. E. (1985). Secondary analysis of survey data . Beverly Hills, CA: Sage.

Discussion of how to use previously collected survey data to answer a new research question.

Monette, D. R., Sullivan, T. J, & DeJong, C. R. (1990). Analysis of available data. In Applied Social Research: Tool for the Human Services (2nd ed., pp. 202-230). Fort Worth, TX: Holt.

Gives some existing sources for statistical data as well as discussing ways in which to use it.

Rubin, A. (1988). Secondary analyses. In R. M. Grinnell, Jr. (Ed.), Social work research and evaluation. (3rd ed., pp. 323-341). Itasca, IL: Peacock.

Chapter discusses inductive and deductive processes in relation to research designs using secondary data. It also discusses methodological issues and presents a case example.

Dale, A., Arber, S., & Procter, M. (1988). Doing Secondary Analysis . London: Unwin Hyman.

A whole book about how to do secondary analysis.

Electronic Surveys:

Carr, H. H. (1991). Is using computer-based questionnaires better than using paper? Journal of Systems Management September, 19, 37.

Reference from Thach.

Dunnington, Richard A. (1993). New methods and technologies in the organizational survey process. American Behavioral Scientist , 36 (4), 512-30.

Asserts that three decades of technological advancements in communications and computer technology have transformed, if not revolutionized, organizational survey use and potential.

Goree, C. & Marszalek, J. (1995). Electronic Surveys: Ethical Issues for Researchers. The College Student Affairs Journal , 15 (1), 75-79.

Explores how the use of electronic surveys challenges existing ethical standards of survey research, and why researchers need to be aware of these new ethical issues.

Hsu, J. (1995). The Development of Electronic Surveys: A Computer Language-Based Method. The Electronic Library , 13 (3), 195-201.

Discusses the need for a markup language method to properly support the creation of survey questionnaires.

Kiesler, S. & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50 , 402-13.

Opperman, M. (1995) E-Mail Surveys--Potentials and Pitfalls. Marketing Research, 7 (3), 29-33.

A discussion of the advantages and disadvantages of using E-Mail surveys.

Sproull, L. S. (1986). Using electronic mail for data collection in organizational research. Academy of Management Journal, 29, 159-69.

Synodinos, N. E., & Brennan, J. M. (1988). Computer interactive interviewing in survey research. Psychology & Marketing, 5 (2), 117-137.

Thach, Liz. (1995). Using electronic mail to conduct survey research. Educational Technology, 35, 27-31.

A review of the literature on the topic of survey research via electronic mail concentrating on the key issues in design, implementation, and response using this medium.

Walsh, J. P., Kiesler, S., Sproull, L. S., & Hesse, B. W. (1992). Self-selected and randomly selected respondents in a computer network survey. Public Opinion Quarterly, 56, 241-244.

Further Investigation

Berg, David N., & Smith, Kenwyn K. (eds.) (1988). The Self in Social Inquiry: Researching Methods. Sage Publications: Newbury Park.

Addresses some ethical issues about the role of the researcher in social science research.

Barribeau, Paul, Bonnie Butler, Jeff Corney, Megan Doney, Jennifer Gault, Jane Gordon, Randy Fetzer, Allyson Klein, Cathy Ackerson Rogers, Irene F. Stein, Carroll Steiner, Heather Urschel, Theresa Waggoner, & Mike Palmquist. (2005). Survey Research. Writing@CSU . Colorado State University. https://writing.colostate.edu/guides/guide.cfm?guideid=68

9 Survey research

Survey research is a research method involving the use of standardised questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviours in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, the survey as a formal research method was pioneered in the 1930s–40s by sociologist Paul Lazarsfeld to examine the effects of radio on political opinion formation in the United States. This method has since become a very popular method for quantitative research in the social sciences.

The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organisations or dyads—pairs of organisations, such as buyers and sellers—are also studied using surveys, such studies often use a specific person from each unit as a ‘key informant’ or a ‘proxy’ for that unit. Consequently, such surveys may be subject to respondent bias if the chosen informant does not have adequate knowledge or has a biased opinion about the phenomenon of interest. For instance, Chief Executive Officers may not adequately know employees’ perceptions or teamwork in their own companies, and may therefore be the wrong informant for studies of team dynamics or employee self-esteem.

Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviours (e.g., smoking or drinking habits), or factual information (e.g., income). Second, survey research is also ideally suited for remotely collecting data about a population that is too large to observe directly. A large area—such as an entire country—can be covered by postal, email, or telephone surveys using meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups such as the homeless or illegal immigrants for which there is no sampling frame available. Fifth, large sample surveys may allow detection of small effects even while analysing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort and cost than other methods such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed at the end of this chapter.

Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be postal, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of their costs, coverage of the target population, and researcher’s flexibility in asking questions.

Questionnaire surveys

Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardised manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed in such a way that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.

Most questionnaire surveys tend to be self-administered postal surveys , where the same questionnaire is posted to a large number of people, and willing respondents can complete the survey at their convenience and return it in prepaid envelopes. Postal surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from postal surveys tend to be quite low since most people ignore survey requests. There may also be long delays (several months) in respondents’ completing and returning the survey, or they may even simply lose it. Hence, the researcher must continuously monitor responses as they are being returned, track and send non-respondents repeated reminders (two or three reminders at intervals of one to one and a half months is ideal). Questionnaire surveys are also not well-suited for issues that require clarification on the part of the respondent or those that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.

A second type of survey is a group-administered questionnaire . A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with one another. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organisations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.

A more recent type of questionnaire survey is an online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an email request for participation in the survey with a link to a website where the survey may be completed. Alternatively, the survey may be embedded into an email, and can be completed and returned via email. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, senior, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on LISTSERVs or bulletin boards instead of being emailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., postal survey and online survey), allowing respondents to select their preferred method of response.

Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.

Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats:

Dichotomous response , where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances? (circle one): yes / no.

Nominal response , where respondents are presented with more than two unordered options, such as: What is your industry of employment?: manufacturing / consumer services / retail / education / healthcare / tourism and hospitality / other.

Ordinal response , where respondents have more than two ordered options, such as: What is your highest level of education?: high school / bachelor’s degree / postgraduate degree.

Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.

Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blank type.
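
As noted above, responses to structured items are often aggregated into a composite scale or index. The following minimal sketch shows how 5-point Likert responses might be recoded and averaged; the item names and the reverse-scored item are hypothetical, not drawn from this chapter.

    # One respondent's answers to three 5-point Likert items (1 = strongly disagree)
    responses = {"q1": 4, "q2": 2, "q3": 5}
    reverse_scored = {"q2"}   # items worded in the opposite direction

    def composite_score(answers, reverse, points=5):
        recoded = [
            (points + 1 - value) if item in reverse else value
            for item, value in answers.items()
        ]
        return sum(recoded) / len(recoded)   # mean of the items = scale index

    print(composite_score(responses, reverse_scored))   # 4.33...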

Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) [1] recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinised for the following issues:

Is the question clear and understandable?: Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialised group of respondents, such as doctors, lawyers and researchers, who use such jargon in their everyday environment.

Is the question worded in a negative manner?: Negatively worded questions such as ‘Should your local government not raise taxes?’ tend to confuse many respondents and lead to inaccurate responses. Double-negatives should be avoided when designing survey questions.

Is the question ambiguous ?: Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like ‘any’ or ‘just’). For instance, if you ask a respondent, ‘What is your annual income?’, it is unclear whether you are referring to salary/wages, or also dividend, rental, and other income, whether you are referring to personal income, family income (including spouse’s wages), or personal and business income. Different interpretation by different respondents will lead to incomparable responses that cannot be interpreted correctly.

Does the question have biased or value-laden words?: Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) [2] examined several studies on people’s attitudes toward government spending, and observed that respondents tend to indicate stronger support for ‘assistance to the poor’ and less for ‘welfare’, even though both terms have the same meaning. In this study, more support was also observed for ‘halting rising crime rate’ and less for ‘law enforcement’, more for ‘solving problems of big cities’ and less for ‘assistance to big cities’, and more for ‘dealing with drug addiction’ and less for ‘drug rehabilitation’. Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinised to avoid it.

Is the question double-barrelled ?: Double-barrelled questions are those that can have multiple answers. For example, ‘Are you satisfied with the hardware and software provided for your work?’. In this example, how should a respondent answer if they are satisfied with the hardware, but not with the software, or vice versa? It is always advisable to separate double-barrelled questions into separate questions: ‘Are you satisfied with the hardware provided for your work?’, and ’Are you satisfied with the software provided for your work?’. Another example: ‘Does your family favour public television?’. Some people may favour public TV for themselves, but favour certain cable TV programs such as Sesame Street for their children.

Is the question too general ?: Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you asked someone how they liked a certain book and provided a response scale ranging from ‘not at all’ to ‘extremely well’, if that person selected ‘extremely well’, what do they mean? Instead, ask more specific behavioural questions, such as, ‘Will you recommend this book to others, or do you plan to read other books by the same author?’. Likewise, instead of asking, ‘How big is your firm?’ (which may be interpreted differently by respondents), ask, ‘How many people work for your firm?’, and/or ‘What is the annual revenue of your firm?’, which are both measures of firm size.

Is the question too detailed ?: Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is just the number of children in the household acceptable? However, if unsure, it is better to err on the side of details than generality.

Is the question presumptuous ?: If you ask, ‘What do you see as the benefits of a tax cut?’, you are presuming that the respondent sees the tax cut as beneficial. Many people may not view tax cuts as being beneficial, because tax cuts generally lead to lesser funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire services. Avoid questions with built-in presumptions.

Is the question imaginary ?: A popular question in many television game shows is, ‘If you win a million dollars on this show, how will you spend it?’. Most respondents have never been faced with such an amount of money before and have never thought about it—they may not even know that after taxes, they will get only about $640,000 or so in the United States, and in many cases, that amount is spread over a 20-year period—and so their answers tend to be quite random, such as take a tour around the world, buy a restaurant or bar, spend on education, save for retirement, help parents or children, or have a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.

Do respondents have the information needed to correctly answer the question ?: Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality, they do not. Even if a response is obtained, these responses tend to be inaccurate given the subjects’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that they may not be aware of, or ask teachers about how much their students are learning, or ask high-schoolers, ‘Do you think the US Government acted appropriately in the Bay of Pigs crisis?’.

Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:

Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.

Never start with an open-ended question.

If following a historical sequence of events, follow a chronological order from earliest to latest.

Ask about one topic at a time. When switching topics, use a transition, such as, ‘The next section examines your opinions about…’

Use filter or contingency questions as needed, such as, ‘If you answered “yes” to question 5, please proceed to Section 2. If you answered “no”, go to Section 3’. A minimal routing sketch follows this list.
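
In an online survey tool, a filter or contingency question of this kind amounts to a simple routing rule. The sketch below is illustrative only; the question identifier and section names are hypothetical.

    # Route respondents to the next section based on a filter question's answer
    skip_logic = {
        ("q5", "yes"): "section_2",
        ("q5", "no"): "section_3",
    }

    def next_section(question_id, answer, default="section_2"):
        return skip_logic.get((question_id, answer.strip().lower()), default)

    print(next_section("q5", "Yes"))   # -> section_2
    print(next_section("q5", "no"))    # -> section_3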

Other golden rules . Do unto your respondents what you would have them do unto you. Be attentive and appreciative of respondents’ time, attention, trust, and confidentiality of personal information. Always practice the following strategies for all survey research:

People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.

Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).

For organisational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow up with your promise.

Thank your respondents for their participation in your study.

Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.

Interview survey

Interviews are a more personalised data collection method than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardised set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike postal surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Interviewers need special interviewing skills as they are considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.

The most typical form of interview is a personal or face-to-face interview , where the interviewer works directly with the respondent to ask questions and record their responses. Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favoured by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to co-operate, dramatically improving response rates.

A variation of the personal interview is a group interview, also called a focus group. In this technique, a small group of respondents (usually 6–10 respondents) are interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion, and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they did not think about before. However, focus group discussions may be dominated by a strong personality, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially when dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.

A third type of interview survey is a telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), which is increasingly being used by academic, government, and commercial survey researchers. Here the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked. The system also selects respondents randomly using a random digit dialling technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.
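The core of random digit dialling is simply generating candidate numbers at random within chosen area codes, so unlisted numbers are as likely to be selected as listed ones. A minimal sketch, assuming US-style numbers and made-up area codes (real CATI systems add screening for invalid or non-working exchanges):

```python
import random

def random_digit_dial(area_codes, n=5, seed=None):
    """Generate n random 10-digit phone numbers within the given area codes.
    Purely illustrative: real systems also screen out invalid exchanges."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n):
        area = rng.choice(area_codes)
        subscriber = rng.randint(0, 9_999_999)   # the remaining seven digits
        numbers.append(f"{area}-{subscriber:07d}")
    return numbers

print(random_digit_dial(["212", "415"], n=3, seed=42))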

Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:

Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. They should also rehearse and time the interview prior to the formal study.

Locate and enlist the co-operation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses, and work around respondents’ schedules at sometimes undesirable times such as during weekends. They should also be like a salesperson, selling the idea of participating in the study.

Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents will not be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.

Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.

Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.

Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer’s authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, they should speak in an imperative and confident tone, such as, ‘I’d like to take a few minutes of your time to interview you for a very important study’, instead of, ‘May I come in to do an interview?’. They should introduce themselves, present personal credentials, explain the purpose of the study in one to two sentences, and assure respondents that their participation is voluntary, and their comments are confidential, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to record the interview, they should ask for respondents’ explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.

During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:

The silent probe: Just pausing and waiting without going into the next question may suggest to respondents that the interviewer is waiting for a more detailed response.

Overt encouragement: An occasional ‘uh-huh’ or ‘okay’ may encourage the respondent to go into greater details. However, the interviewer must not express approval or disapproval of what the respondent says.

Ask for elaboration: Such as, ‘Can you elaborate on that?’ or ‘A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?’.

Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, ‘What I’m hearing is that you found that experience very traumatic’ and then pause and wait for the respondent to elaborate.

After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.

Biases in survey research

Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.

Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20 per cent is typical in a postal survey, even after two or three reminders. If the majority of the targeted respondents fail to respond to a survey, this may indicate a systematic reason for the low response rate, which may in turn raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests than satisfied customers. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalisability, but the observed outcomes may also be an artefact of the biased sample. Several strategies may be employed to improve response rates:

Advance notification: Sending a short letter to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their co-operation. A variation of this technique may be to ask the respondent to return a prepaid postcard indicating whether or not they are willing to participate in the study.

Relevance of content: People are more likely to respond to surveys examining issues of relevance or importance to them.

Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond to tend to attract higher response rates.

Endorsement: For organisational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organisation. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.

Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.

Interviewer training: Response rates for interviews can be improved with skilled interviewers trained in how to request interviews, use computerised dialling techniques to identify potential respondents, and schedule call-backs for respondents who could not be reached.

Incentives : Incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth may increase response rates.

Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.

Confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.

Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted—for instance, if they are at work—and will include a disproportionate number of respondents who have landline telephone services with listed phone numbers and people who are home during the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about their students’ (or children’s) academic learning, or asking CEOs about operational details in their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalisability claims about inferences drawn from the biased sample.

Social desirability bias. Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as, ‘Do you think that your project team is dysfunctional?’, ‘Is there a lot of office politics in your workplace?’, or ‘Have you ever illegally downloaded music files from the Internet?’, the researcher may not get truthful responses. This tendency among respondents to ‘spin the truth’ in order to portray themselves in a socially desirable manner is called the ‘social desirability bias’, which hurts the validity of responses obtained from survey research. There is practically no way of overcoming the social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.

Recall bias. Responses to survey questions often depend on subjects’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviours, or perhaps their memory of such events may have evolved with time and no longer be retrievable. For instance, if a respondent is asked to describe his/her utilisation of computer technology one year ago, or even memorable childhood events like birthdays, their response may not be accurate due to difficulties with recall. One possible way of overcoming the recall bias is by anchoring the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.

Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artefacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff, MacKenzie, Lee & Podsakoff, 2003), [3] Lindell and Whitney’s (2001) [4] marker variable technique, and so forth. This bias can potentially be avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerised recording of the dependent variable versus questionnaire-based self-rating of the independent variables.
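As a rough illustration of how Harman’s single-factor test is often approximated in practice, the sketch below runs an unrotated principal component analysis over all survey items and checks how much variance the first component explains; if one factor accounts for the majority of the variance (a commonly used rule of thumb is more than 50 per cent), common method bias may be a concern. The simulated data and the 50 per cent threshold are assumptions for illustration only.

```python
# Approximate Harman's single-factor check using an unrotated PCA.
# Assumes `items` is a respondents x survey-items numeric array of Likert scores.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(200, 8)).astype(float)  # simulated data for the sketch

# Standardise items so each contributes equally, then extract all components unrotated.
z = (items - items.mean(axis=0)) / items.std(axis=0)
pca = PCA().fit(z)

first_factor_share = pca.explained_variance_ratio_[0]
print(f"Variance explained by the first factor: {first_factor_share:.1%}")
if first_factor_share > 0.50:   # illustrative threshold, not a formal test
    print("A single factor dominates: common method bias may be a concern.")
```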

  • Dillman, D. (1978). Mail and telephone surveys: The total design method. New York: Wiley.
  • Rasinski, K. (1989). The effect of question wording on public support for government spending. Public Opinion Quarterly, 53(3), 388–394.
  • Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903. http://dx.doi.org/10.1037/0021-9010.88.5.879
  • Lindell, M. K., & Whitney, D. J. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114–121.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Research survey examples, templates, and types

Research surveys help base your next important decision on data. With our survey research templates and questions, gather valuable data easily and improve your business.

Get started

What are the benefits of survey research?

Research surveys provide data that can be relied on. Whether you're conducting market research or preparing a new product launch, they supply the precise information needed to succeed. Avoid the confusion of conflicting opinions with data analysis that provides a clear picture of what people think.

At SurveyPlanet, we're committed to making survey research easy to conduct. With our templates, you have access to questions that will deliver the data you need.

The wide variety of research survey templates available helps you gather useful data quickly, which makes developing the right solution easier.

What are research questionnaires?

They are a tool that returns insight about any topic. Just asking friends, family, and coworkers about a new product is not the best approach. Why? To put it simply, they're not a representative sample and may have biases.

What you need are the opinions of your target audience. At the end of the day, it is their opinion that matters most. This requires a large enough sample to produce statistically significant data. That's where online surveys can play an important role.

Types of research surveys

Research questionnaires are a great tool for gaining insights about all kinds of things (and not just for business purposes). These surveys play an important role in extracting valuable insights from diverse populations. When thoughtfully designed, they become powerful instruments for informed decision-making and the advancement of knowledge across various domains.

Let's dive deeper into the types of surveys and where to apply them to get the best results.

Market research survey

Most businesses fail because their management believes their products and services are great—while the market thinks otherwise. To sell anything, the opinions of the people doing the buying need to be understood. Market research surveys offer insights about where a business stands with potential customers—and thus its potential market share—long before resources are dedicated to trying to make a product work in the marketplace.

Learn more about market research surveys.

Media consumption research survey

This type of survey explores how different people consume media content. It provides answers about what they view, how often they do so, and what kind of media they prefer. With a media consumption survey, learn everything about people's viewing and reading habits.

Reading preferences research survey

Ever wondered how, why, and what people enjoy reading? With a reading preferences research survey, such information can be discovered. By further analyzing the data, learn what different groups of people read (and the similarities and differences between different groups).

Product research survey

When launching a new product, understanding its target audience is crucial. This type of survey is a great tool that provides valuable feedback and insight that can be incorporated into a successful product launch.

Learn more about product research surveys.

Brand surveys

These help ascertain how customers feel about a brand. People buy from those they connect with; therefore, ask about their experiences and occasionally check in with them to see if they trust your brand.

Learn more about brand surveys.

Path-to-purchase research surveys

A path-to-purchase research survey investigates the steps consumers take from initial product awareness to final purchase. It typically includes questions about the decision-making process, product research, and factors influencing the ultimate purchasing decision. Such surveys can be conducted through various methods, but the best is via online surveys. The results of path-to-purchase surveys help businesses and marketers understand their target audience and develop effective marketing strategies.

Marketing research surveys

These help a company stand out from competitors and tailor marketing messages that better resonate with a target audience. Marketing research surveys are another type of research that is crucial when launching a new product or service.

Learn more about marketing research surveys.

Academic research surveys

These surveys are instrumental in advancing knowledge about a specific subject. Consolidated results can be used to improve the efficiency of decision-making. Reliable results are produced using methodologies and tools such as questionnaires, interviews, and structured online forms.

Learn more about academic surveys.

Types of research methods

The three main types of research methods are exploratory, descriptive, and causal research.

Exploratory research

Exploratory research is conducted when a researcher seeks to explore a new subject or phenomenon with limited or no prior understanding. The primary goal of exploratory research is to gain insights, generate ideas, and form initial hypotheses for more in-depth investigation. This type of research is often the first step in the research process and is particularly useful when the topic is not well-defined or when there is a lack of existing knowledge. Researchers often use open-ended questions and qualitative methods to gather data, allowing them to adapt their approach as they learn more about the topic.

Descriptive research

Descriptive research aims to provide an accurate and detailed portrayal of a specific phenomenon or group. Unlike exploratory research, which seeks to generate insights and hypotheses, descriptive research is focused on describing the characteristics, behaviors, or conditions of a subject without manipulating variables.

Causal research

Causal research, also known as explanatory or experimental research, seeks to establish a cause-and-effect relationship between two or more variables. The primary goal of causal research is to determine whether a change in one variable causes a change in another variable. Unlike descriptive research, which focuses on describing relationships and characteristics, causal research involves manipulating one or more independent variables to observe their impact on dependent variables.

The research survey application

Research methods are designed to produce the best possible information from a group of research subjects. Such methods are used in many types of research and studies, and they guide both the design of the study and the collection of data.

Depending on the kind of research and research methodology being carried out, different types of research survey questions are used, including multiple choice questions, Likert scale questions, open-ended questions, demographic questions, and even image choice questions.
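One way to see how these question types differ is to look at how a survey tool might represent them internally. The sketch below is hypothetical and tool-agnostic; the field names are assumptions for illustration:

```python
# Hypothetical representation of common research survey question types.
survey = [
    {"id": "q1", "type": "multiple_choice",
     "text": "What is your primary use of our product?",
     "options": ["Personal", "Business", "Educational"]},
    {"id": "q2", "type": "likert",
     "text": "The product meets my needs.",
     "scale": ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]},
    {"id": "q3", "type": "open_ended",
     "text": "What improvements would you suggest for our service?"},
    {"id": "q4", "type": "demographic",
     "text": "What is your age group?",
     "options": ["Under 18", "19-25", "26-35", "36-45", "46+"]},
]

for q in survey:
    print(q["id"], q["type"], "-", q["text"])
```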

There are many survey applications that can collect data from many customers quickly and easily—a great way to get information about products, services, customer experiences, and marketing efforts.

Why you should use research questionnaires

The power of research questionnaires lies in their ease of use and cost-effectiveness. They provide answers to the most vital questions. What are the main benefits of these surveys?

  • You don't have to wonder WHO, WHAT, and WHY because this type of analysis provides answers to those—and many other—questions.
  • With a complete understanding of what's important in a research project, the best inquiries can be incorporated into survey questions.
  • Get an unbiased opinion from a target audience and use it to your advantage.
  • Collect data that matters and have it at your fingertips at all times.

Advantages and disadvantages of survey research

People use these surveys because they have many advantages compared to other research tools. What are the main advantages?

  • Cost-effective.
  • Collect data from many respondents.
  • Quantifiable results.
  • Convenient.
  • The most practical solution for gathering data.
  • Fast and reliable.
  • Easily comparable results.
  • Allows for the exploration of any topic.

While such advantages make it a no-brainer to use research questionnaires, it's always good to know their disadvantages:

  • Biased responses.
  • Cultural differences in understanding questions.
  • Analyzing and understanding responses can be difficult.
  • Some people won't read the questions before answering.
  • Survey fatigue.

However, when these issues are understood, mitigation strategies can be activated. Every research method has flaws, but we firmly believe their benefits outweigh their disadvantages.

To execute a research campaign, the creation of a survey is one of the first steps. This includes designing questions or using a premade template. Below are some of the best research survey examples, templates, and tips for designing these surveys.

20 research survey examples and templates

Specific survey questions for research depend on your goals. A research questionnaire can be conducted about any topic or interest. Here are some of the best questions and ranking prompts:

  • How often do you purchase books without actually reading them?
  • What is your favorite foreign language film?
  • During an average day, how many times do you check the news?
  • Who is your favorite football player of all time? Why?
  • Have you ever used any of the following travel websites to plan a vacation?
  • Do you currently use a similar or competing product?
  • On a scale of 1 to 5, how satisfied are you with the product?
  • What is your single favorite feature of our product?
  • When our product becomes available, are you likely to use it instead of a similar or competing product?
  • What improvements would you suggest for our service?
  • Please rank the following features in order of importance.
  • How often do you consume fruits and vegetables in a typical week?
  • How many days per week do you engage in physical activity?
  • Do you prefer traditional classroom learning or online learning?
  • How many hours a week do you spend studying for your courses?
  • What are your career aspirations upon completing your education?
  • Please rate our website's user interface from poor to excellent.
  • In what ways can we better support you as a customer?
  • Please rank the following factors in order of importance when choosing a new car.
  • Order the following smartphone features based on your preference.

Of course, you can also gather demographic information like:

  • Employment status
  • Marital status
  • Household income

No matter the research topic, this demographic information will lead to better data-driven conclusions. Interested in knowing more about demographic survey questions? Check out our blog post explaining the advantages of gathering demographic information and how to do it appropriately.

Sign up for SurveyPlanet for free. Conduct your first survey to explore what people think. And don't worry about questions because we have some amazing templates to get you started.

Sign up now

Free unlimited surveys, questions and responses.


9.1 Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research is a quantitative approach that has two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.
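The core sampling operation behind this preference is simple. A minimal sketch, assuming a hypothetical sampling frame of customer IDs, draws a simple random sample in which every member of the frame has an equal chance of selection:

```python
import random

# Draw a simple random sample from a hypothetical sampling frame of customer IDs.
frame = [f"customer_{i:04d}" for i in range(1, 5001)]   # 5,000 customers

random.seed(7)                         # fixed seed so the sketch is reproducible
sample = random.sample(frame, k=250)   # each customer has an equal chance of selection
print(len(sample), sample[:3])
```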

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987). By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.)

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in Section 9.2 “Constructing Survey Questionnaires” .) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of college students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see http://www.hcp.med.harvard.edu/ncs ). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 9.1 “Some Lifetime Prevalence Results From the National Comorbidity Survey” presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders and also to clinicians and policymakers who need to understand exactly how common these disorders are.

Table 9.1 Some Lifetime Prevalence Results From the National Comorbidity Survey

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on college students. Although this is not a typical use of survey research, it certainly illustrates the flexibility of this approach.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force

Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960 . Berkeley, CA: University of California Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


90 Survey Question Examples + Best Practices Checklist


An effective survey is the best way to collect customer feedback. It will serve as your basis for multiple functions, such as improving your product, supplementing market research, creating new marketing strategies, and much more. But what makes an effective survey?

The answer is simple–you have to ask the right questions. Good survey questions gather concrete information from your audience and give you a solid idea of what you need to do next. However, the process of creating a survey is not that easy–you want to make every question count.

In this article we’ll cover everything you need to know about survey questions, with 90 examples and use cases.

Understanding the anatomy of a good survey question can transform your approach to data collection, ensuring you gather information that’s both actionable and insightful. Let’s dive deeper into the elements that make a survey question effective:

  • Clarity is Key:  Questions should be straightforward and leave no room for interpretation, ensuring uniform understanding across all respondents.
  • Conciseness Matters:  Keep questions short and to the point. Avoid unnecessary wording that could confuse or disengage your audience.
  • Bias-Free Questions:  Ensure questions are neutral and do not lead respondents toward a particular answer. This maintains the integrity of your data.
  • Avoiding Ambiguity:  Specify the context clearly and ask questions in a way that allows for direct and clear answers, eliminating confusion.
  • Ensuring Relevance:  Each question should have a clear purpose and be directly related to your survey’s objectives, avoiding any irrelevant inquiries.
  • Easy to Answer:  Design questions in a format that is straightforward for respondents to understand and respond to, whether open-ended, multiple-choice, or using a rating scale.

Keep these points in mind as you prepare to write your survey questions. It also helps to refer back to these goals after drafting your survey so you can see if you hit each mark.

The primary goal of a survey is to collect information that would help meet a specific goal, whether that be gauging customer satisfaction or getting to know your target audience more. Asking the right survey questions is the best way to achieve that goal. More specifically, a good survey can help you with:

Informed Decision-Making

A solid foundation of data is essential for any business decision, and the right survey questions point you in the direction of the most valuable information.

Survey responses serve as a basis for the strategic decisions that can propel a business forward or redirect its course to avoid potential pitfalls. By understanding what your audience truly wants or needs, you can tailor your products or services to meet those demands more effectively.

Uncovering Customer Preferences

Today’s consumers have more options than ever before, and their preferences can shift with the wind. Asking the right survey questions helps you tap into the current desires of your target market, uncovering trends and preferences that may not be immediately obvious.

This insight allows you to adapt your products, services, and marketing messages to resonate more deeply with the target audience, fostering loyalty and encouraging engagement.

Identifying Areas for Improvement

No product, service, or customer experience is perfect, but the path to improvement lies in understanding where the gaps are. The right survey questions can shine a light on these areas, offering a clear view of what’s working and what’s not.

This feedback is invaluable for continuous improvement, helping you refine your products and enhance the customer experience. In turn, this can lead to increased satisfaction, loyalty, and positive word-of-mouth.

Reducing Churn Rate

Churn rate is the percentage of customers who stop using your service or product over a given period. High churn rates can be a symptom of deeper issues, such as dissatisfaction with the product or service, poor customer experience, or unmet needs. Including good survey questions can help you identify the reasons behind customer departure and take proactive steps to address them.

For example, survey questions that explore customer satisfaction levels, reasons for discontinuation, or the likelihood of recommending the service to others can pinpoint specific factors contributing to churn.
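Churn rate itself is simple arithmetic: customers lost during a period divided by customers at the start of that period. A minimal sketch with made-up figures:

```python
# Churn rate = customers lost during a period / customers at the start of the period.
customers_at_start = 1200      # hypothetical figures
customers_lost = 90

churn_rate = customers_lost / customers_at_start
print(f"Monthly churn rate: {churn_rate:.1%}")   # -> 7.5%
```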

Minimizing Website Bounce Rate

Bounce rate  is the percentage of visitors leaving a website after viewing just one page. High bounce rates may signal issues with a site’s content, layout, or user experience not meeting visitor expectations.

Utilizing surveys to ask about visitors’ web experiences can provide valuable insights into website usability, content relevance, and navigation ease. Effectively, well-crafted survey questions aimed at understanding the user experience can lead to strategic adjustments, improving overall website performance, and fostering a more engaged audience.
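Bounce rate follows the same pattern: single-page sessions divided by total sessions. A minimal sketch with hypothetical analytics figures:

```python
# Bounce rate = single-page sessions / total sessions.
total_sessions = 4000          # hypothetical figures
single_page_sessions = 1700

bounce_rate = single_page_sessions / total_sessions
print(f"Bounce rate: {bounce_rate:.1%}")   # -> 42.5%
```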


A good survey consists of two or more types of survey questions. However, all questions must serve a purpose. In this section, we divide survey questions into nine categories and include the best survey question examples for each type:

1. Open Ended Questions

Open-ended questions  allow respondents to answer in their own words instead of selecting from pre-selected answers.

“What features would you like to see added to our product?”

“How did you hear about our service?”

“What was your reason for choosing our product over competitors?”

“Can you describe your experience with our customer service?”

“What improvements can we make to enhance your user experience?”

“Why did you cancel your subscription?”

“What challenges are you facing with our software?”

“How can we better support your goals?”

“What do you like most about our website?”

“Can you provide feedback on our new product launch?”

When to use open-ended questions: Using these survey questions is a good idea when you don’t have a solid grasp of customer satisfaction yet. Customers have the freedom to express all their thoughts and opinions, which, in turn, gives you an accurate feel for how customers perceive your brand.

2. Multiple Choice Questions

Multiple-choice questions offer a set of predefined answers, usually three to four. Businesses usually use multiple-choice survey questions to gather information on participants’ attitudes, behaviors, and preferences.

“Which of the following age groups do you fall into? (Under 18, 19-25, 26-35, 36-45, 46-55, 56+)”

“What is your primary use of our product? (Personal, Business, Educational)”

“How often do you use our service? (Daily, Weekly, Monthly, Rarely)”

“Which of our products do you use? (Product A, Product B, Product C, All of the above)”

“What type of content do you prefer? (Blogs, Videos, Podcasts, eBooks)”

“Where do you usually shop for our products? (Online, In-store, Both)”

“What is your preferred payment method? (Credit Card, PayPal, Bank Transfer, Cash)”

“Which social media platforms do you use regularly? (Facebook, Twitter, Instagram, LinkedIn)”

“What is your employment status? (Employed, Self-Employed, Unemployed, Student)”

“Which of the following best describes your fitness level? (Beginner, Intermediate, Advanced, Expert)”

When to use multiple-choice questions: Asking multiple-choice questions can help with market research and segmentation. You can easily divide respondents depending on what pre-determined answer they choose. However, if this is the purpose of your survey, each question must be based on behavioral types or customer personas.

3. Yes or No Questions

Yes or no questions are straightforward, offering a binary choice.

“Have you used our product before?”

“Would you recommend our service to a friend?”

“Are you satisfied with your purchase?”

“Do you understand the terms and conditions?”

“Was our website easy to navigate?”

“Did you find what you were looking for?”

“Are you interested in receiving our newsletter?”

“Have you attended one of our events?”

“Do you agree with our privacy policy?”

“Have you experienced any issues with our service?”

When to use yes/no questions: These survey questions are very helpful in market screening and filtering out certain people for targeted surveys. For example, asking “Have you used our product before?” helps you separate the people who have tried out your product, a.k.a. the people who qualify for your survey.

4. Rating Scale Questions

Rating scale questions ask respondents to rate their experience or satisfaction on a numerical scale.

“On a scale of 1-10, how would you rate our customer service?”

“How satisfied are you with the product quality? (1-5)”

“Rate your overall experience with our website. (1-5)”

“How likely are you to purchase again? (1-10)”

“On a scale of 1-10, how easy was it to find what you needed?”

“Rate the value for money of your purchase. (1-5)”

“How would you rate the speed of our service? (1-10)”

“Rate your satisfaction with our return policy. (1-5)”

“How comfortable was the product? (1-10)”

“Rate the accuracy of our product description. (1-5)”

When to use rating scale questions: As you can see from the survey question examples above, rating scale questions give you excellent  quantitative data  on customer satisfaction.
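Because rating scale answers are already numeric, summarising them is straightforward. A minimal sketch with made-up 1-5 satisfaction ratings, computing the average score and a simple CSAT-style share of respondents who answered 4 or 5:

```python
# Summarise 1-5 satisfaction ratings: mean score and share of "satisfied" (4 or 5).
ratings = [5, 4, 3, 5, 2, 4, 4, 5, 1, 4]   # hypothetical responses

mean_score = sum(ratings) / len(ratings)
satisfied_share = sum(r >= 4 for r in ratings) / len(ratings)

print(f"Average rating: {mean_score:.2f} / 5")
print(f"Rated 4 or 5: {satisfied_share:.0%}")
```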

5. Checkbox Questions

Checkbox questions allow respondents to select multiple answers from a list. You can also include an “Others” option, where the respondent can answer in their own words.

“Which of the following features do you value the most? (Select all that apply)”

“What topics are you interested in? (Select all that apply)”

“Which days are you available? (Select all that apply)”

“Select the services you have used. (Select all that apply)”

“What types of notifications would you like to receive? (Select all that apply)”

“Which of the following devices do you own? (Select all that apply)”

“Select any dietary restrictions you have. (Select all that apply)”

“Which of the following brands have you heard of? (Select all that apply)”

“What languages do you speak? (Select all that apply)”

“Select the social media platforms you use regularly. (Select all that apply)”

When to use checkbox questions: Checkbox questions are an excellent tool for collecting  psychographic data , including information about customers’ lifestyles, behaviors, attitudes, beliefs, etc. Moreover, survey responses will help you correlate certain characteristics to specific market segments.

6. Rank Order Questions

Rank order questions ask respondents to prioritize options according to their preference or importance.

“Rank the following features in order of importance to you. (Highest to Lowest)”

“Please rank these product options based on your preference. (1 being the most preferred)”

“Rank these factors by how much they influence your purchase decision. (Most to Least)”

“Order these services by how frequently you use them. (Most frequent to Least frequent)”

“Rank these issues by how urgently you think they need to be addressed. (Most urgent to Least urgent)”

“Please prioritize these company values according to what matters most to you. (Top to Bottom)”

“Rank these potential improvements by how beneficial they would be for you. (Most beneficial to Least beneficial)”

“Order these content types by your interest level. (Most interested to Least interested)”

“Rank these brands by your preference. (Favorite to Least favorite)”

“Prioritize these activities by how enjoyable you find them. (Most enjoyable to Least enjoyable)”

When to use rank order questions: Respondents must already be familiar with your brand or products to answer these questions, which is why we recommend using these for customers in the middle or bottom of your  conversion funnel .


7. Likert Scale Questions

Likert scale questions measure the intensity of feelings towards a statement on a scale of agreement or satisfaction. Usually, these survey questions use a 5 to 7-point scale, ranging from “Strongly Agree” to “Strongly Disagree” or something similar.

  • “I am satisfied with the quality of customer service. (Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree)”
  • “The product meets my needs. (Strongly Agree to Strongly Disagree)”
  • “I find the website easy to navigate. (Strongly Agree to Strongly Disagree)”
  • “I feel that the pricing is fair for the value I receive. (Strongly Agree to Strongly Disagree)”
  • “I would recommend this product/service to others. (Strongly Agree to Strongly Disagree)”
  • “I am likely to purchase from this company again. (Strongly Agree to Strongly Disagree)”
  • “The company values customer feedback. (Strongly Agree to Strongly Disagree)”
  • “I am confident in the security of my personal information. (Strongly Agree to Strongly Disagree)”
  • “The product features meet my expectations. (Strongly Agree to Strongly Disagree)”
  • “Customer service resolved my issue promptly. (Strongly Agree to Strongly Disagree)”

When to use Likert scale questions: You can use these survey question examples in different types of surveys, such as customer satisfaction (CSAT) surveys. Likert scale questions give you precise measurements of how satisfied respondents are with a specific aspect of your product or service.
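To turn Likert responses into those precise measurements, each label is usually mapped to a numeric code before averaging or computing an agreement share. A minimal sketch with hypothetical responses and a standard 1-5 coding:

```python
# Code Likert labels 1-5 and compute the mean and the "top-two-box" agreement share.
codes = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly Agree": 5}

responses = ["Agree", "Strongly Agree", "Neutral", "Agree", "Disagree"]  # hypothetical
scores = [codes[r] for r in responses]

mean_score = sum(scores) / len(scores)
agree_share = sum(s >= 4 for s in scores) / len(scores)
print(f"Mean: {mean_score:.2f}, agreeing (top two boxes): {agree_share:.0%}")
```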

8. Matrix Survey Questions

Matrix survey questions allow respondents to evaluate multiple items using the same set of response options. Many companies combine matrix survey questions with Likert scales to make the survey easier to complete.

  • “Please rate the following aspects of our service. (Customer support, Product quality, Delivery speed)”
  • “Evaluate your level of satisfaction with these website features. (Search functionality, Content relevance, User interface)”
  • “Rate the importance of the following factors in your purchasing decision. (Price, Brand, Reviews)”
  • “Assess your agreement with these statements about our company. (Innovative, Ethical, Customer-focused)”
  • “Rate your satisfaction with these aspects of our product. (Ease of use, Durability, Design)”
  • “Evaluate these aspects of our mobile app. (Performance, Security, Features)”
  • “Rate how well each of the following describes our brand. (Trustworthy, Innovative, Responsive)”
  • “Assess your satisfaction with these elements of our service. (Responsiveness, Accuracy, Friendliness)”
  • “Rate the effectiveness of these marketing channels for you. (Email, Social Media, Print Ads)”
  • “Evaluate your agreement with these workplace policies. (Flexibility, Diversity, Wellness initiatives)”

When to use matrix survey questions: Ask matrix survey questions when you want to make your survey more convenient to answer, as they allow multiple questions on various topics without repeating options. This is particularly helpful when you want to cover many points of interest in one survey.

9. Demographic Questions

Lastly, demographic questions collect basic information about respondents, aiding in data segmentation and analysis.

  • “What is your age?”
  • “What is your gender? (Male, Female, Prefer not to say, Other)”
  • “What is your highest level of education completed?”
  • “What is your employment status? (Employed, Self-employed, Unemployed, Student)”
  • “What is your household income range?”
  • “What is your marital status? (Single, Married, Divorced, Widowed)”
  • “How many people live in your household?”
  • “What is your ethnicity?”
  • “In which city and country do you currently reside?”
  • “What is your occupation?”

When to use demographic questions: From the survey question examples, you can easily tell that these questions aim to collect information on your respondents’ backgrounds, which will be helpful in creating buyer personas and improving market segmentation.


Surveys can help you accomplish many things for your business, but only if you do it right. Creating the perfect survey isn’t just about crafting the best survey questions; you also have to:

1. Define Your Objectives

Before crafting your survey, be clear about what you want to achieve. Whether it’s understanding customer satisfaction, gauging interest in a new product, or collecting feedback on services, having specific objectives will guide your survey design and ensure you ask the right questions.

2. Know Your Audience

Understanding who your respondents are will help tailor the survey to their interests and needs, increasing the likelihood of participation. Consider demographics, behaviors, and preferences to make your survey relevant and engaging to your target audience.

3. Choose the Right Type of Survey Questions

Utilize a mix of the nine types of survey questions to gather a wide range of data. Balance open-ended questions for qualitative insights with closed-ended questions for easy-to-analyze quantitative data. Ensure each question aligns with your objectives and is clear and concise.

4. Keep It Short and Simple (KISS)

Respondents are more likely to complete shorter surveys. Aim for a survey that takes 5-10 minutes to complete, focusing on essential questions only. A straightforward and intuitive survey design encourages higher response rates.

5. Use Simple Language

Avoid technical jargon, complex words, or ambiguous terms. The language should be accessible to all respondents, ensuring that questions are understood as intended.

6. Ensure Anonymity and Confidentiality

Assure respondents that their answers are anonymous and their data will be kept confidential. This assurance can increase the honesty and accuracy of the responses you receive.

7. Test Your Survey

Pilot your survey with a small group before full deployment. This testing phase can help identify confusing questions, technical issues, or any other aspects of the survey that might hinder response quality or quantity.

8. Choose the Right Distribution Channels

Select the most effective channels to reach your target audience. This could be via email, social media, your website, or in-app notifications, depending on where your audience is most active and engaged.

9. Offer Incentives

Consider offering incentives to increase participation rates. Incentives can range from discounts, entry into a prize draw, or access to exclusive content. Ensure the incentive is relevant and appealing to your target audience.

10. Analyze and Act on the Data

After collecting the responses, analyze the data to extract meaningful insights. Use these insights to make informed decisions, implement changes, or develop strategies that align with your objectives. Sharing key findings and subsequent actions with respondents can also demonstrate the value of their feedback and encourage future participation.

11. Follow Up

Consider following up with respondents after the survey, especially if you promised to share results or if you’re conducting longitudinal studies. A follow-up can reinforce their importance to your research and maintain engagement over time.

12. Iterate and Improve

Surveys are not a one-time activity. Regularly conducting surveys and iterating based on previous feedback and results can help you stay aligned with your audience’s changing needs and preferences.


These survey question examples are a great place to start in creating efficient and effective surveys. Why not take it a step further by integrating a  customer feedback tool  on your website?

FullSession  lets you collect instant visual feedback with an intuitive in-app survey. With this tool, you can:

  • Build unique surveys
  • Target feedback based on users’ devices or specific pages
  • Measure survey responses

Aside from FullSession’s customer feedback tool, you also gain access to:

  • Interactive heat maps: A  website heat map  shows you which items are gaining the most attention and which ones are not, helping you optimize UI and UX.
  • Session recordings: Watch  replays  or live sessions to see how users are navigating your website and pinpoint areas for improvement.
  • Funnels and conversions: Analyze funnel data to figure out what’s causing  funnel drops  and what contributes to successful conversions.

fullsession pricing image

The FullSession platform offers a 14-day free trial. It provides two paid plans, Basic and Business, plus a custom-priced Enterprise option. Here are more details on each plan.

  • The Basic plan costs $39/month and allows you to monitor up to 5,000 monthly sessions.
  • The Business plan costs $149/month and helps you to track and analyze up to 25,000 monthly sessions.
  • The Enterprise plan starts from 100,000 monthly sessions and has custom pricing.

If you need more information, you can  get a demo.

It takes less than 5 minutes to set up your first website or app survey form with FullSession, and it’s completely free!

How many questions should I include in my survey?

Aim for 10-15 questions to keep surveys short and engaging, ideally taking 5-10 minutes to complete. Focus on questions that directly support your objectives.

How can I ensure my survey questions are not biased?

Use neutral language, avoid assumptions, balance answer choices, and pre-test your survey with a diverse group to identify and correct biases.

How do I increase my survey response rate?

To boost response rates, ensure your survey is concise and relevant to the audience. Use engaging questions, offer incentives where appropriate, and communicate the value of respondents’ feedback. Choose the right distribution channels to reach your target audience effectively.


Enhance Your Insights With Richer User Behavior Data

Discover FullSession's Digital Experience Intelligence solution firsthand. Explore FullSession for free


19 Consumer experience survey examples and questions to inspire you

In this article:

  • Why it is important to understand the consumer experience
  • 19 example surveys and questions to ask your consumers
  • How to analyze your customer experience survey results
  • When you should send a customer experience survey
  • Consumer insight templates at your fingertips

A customer experience survey asks customers about their interactions with your brand across various touch points—from initial discovery through purchase and beyond.

And they matter a lot. Because you might think you know what customers are facing when browsing your website or using your product, but when you learn about how that all happens in their true context and circumstances, you might learn some surprising stuff.

Consumers don’t hold back when it comes to sharing their brand experiences. Often, bad reviews travel fastest, sometimes fueled by the well-meant advice of others: “Just tweet about it; that’ll get their attention.”

While this method may offer consumers a temporary fix to their beef with companies, it would be a lot better if brands were preventing these issues in the first place—a great way to boost customer loyalty. How? By investing more thought and effort into shaping stellar customer experiences, and proactively collecting customer feedback.

Understanding consumer experience goes beyond just product opinions or customer service interactions. It’s about the entire journey your customers take with your brand.

In this guide, we’ll dissect consumer experience research, touchpoint after touchpoint, showing you how to draw insights from the very people who know it best. We’ll walk you through example questions and give you some best practices to incorporate in your customer experience surveys. Here’s a quick summary:

  • Analyze trends over time : like people, surveys don’t do well in isolation. Send out customer experience surveys regularly and compare results over time to spot trends and measure the impact of changes you’ve made.
  • Ask the why and how : it’s not just about what your customers experience, but what they think of it. If they indicate that support was quick, it might not mean that it was kind. Follow up on ratings with open-ended questions to get to the heart of their experience.
  • Segment your audience : tailor your surveys to different customer groups for more relevant insights. What a long-time customer thinks might be worlds apart from a first-timer’s views.
  • Use a conversational tone : make your surveys feel like a chat, not a chore. A friendly tone can encourage more responses and richer insights.

Want to know more? Keep reading.

Customer experience surveys can be a revelation for brands stuck wondering why their marketing isn’t resonating or why satisfied customers just aren’t returning. Because sometimes your product or marketing isn’t actually the issue, and simple customer satisfaction surveys (CSATs) aren’t giving you the insights you need.

Broadening your view and diving into the customer experience might reveal a small hiccup in the purchasing process, a gap in customer support, or a disconnect in the messaging. Understanding these details allows brands to make precise adjustments.

Here’s the thing: creating experiences that perfectly match what your customers expect is really tough.

Making them smile when they buy or when they need help isn’t enough. Your responsibility to delight them is for the whole journey—before they’ve even decided to choose you and after they’ve paid.

6 simple ways to test consumer preferences

Check out the top ways to test how and why your target customers make their choices

The risk of not being in tune with your customers

Understanding the consumer experience helps you be intentional in every interaction. It’s the little things, often. But there are big things at play here, too. When you’re not in tune with this experience, things can start to unravel quickly:

  • Wasted resources : without a clear understanding, you might be pouring effort and money into areas that don’t actually matter to your customers.
  • Off-target marketing : your messages might as well be arrows shot in the dark. They don’t hit home, they don’t bring in new customers, and they definitely don’t bring in more revenue.
  • Attracting the wrong crowd : ever thrown a party and the wrong people show up? That’s what it’s like. You end up attracting customers who don’t really gel with what you’re offering. They leave, often loudly, on their way out.
  • Scattered product development : it’s like trying to cook without knowing what ingredients you have. Your product development can lack direction and focus.

All in all, knowing the ins and outs of the customer experience will allow your brand to create tailored experiences that feel personalized, which will not just boost customer satisfaction scores, but also retain customers, and increase sales.

Examining the customer journey through surveys

You could try to put yourself in the shoes of your customers and walk through every touchpoint. And while that is worthwhile for some things, it isn’t the same as getting to know the true customer experience.

For starters: you know your product, service or website better than customers who haven’t interacted with it before.

So you’ll use it differently. You’ll miss the context that they are operating in. Moreover, you’re not necessarily your own target audience, and different segments can have different experiences.

That’s why customer experience surveys are the way to go. You can use them to get insights into every touchpoint, every stage of the journey:

  • Pre-purchase : what are customers looking for? What hesitations do they have? Surveys can uncover what potential customers are looking for in products like yours, helping you fine-tune your offerings and marketing strategies to meet or even exceed these expectations.
  • Post-purchase : now they’ve bought in. How do you keep the excitement alive, address any concerns, and encourage them to become brand ambassadors?
  • Following customer support : are you solving their problems and making them feel valued? Understand the effectiveness of your support team and where improvements can be made to turn negative experiences into positive ones.
  • Post-cancellation/downgrade : why did they leave? Is there a chance to win them back or learn for the future and boost customer loyalty?

Note: these types of surveys aren’t customer satisfaction surveys. You’re not just gauging whether or not people are happy, you’re zooming in on what they’re experiencing and what that means for the way they perceive your brand.

Before you dive into these customer experience example questions, remember they’re starting points, not a one-size-fits-all checklist for your customer satisfaction survey template. Your brand is unique, and so are your research needs.

Use these customer feedback questions as inspiration to put together a survey that digs into the specifics of your customer experience.

Tailor your questions to explore the unique aspects of your brand’s interaction with its customers.

Mix, match, and modify these prompts to zero in on what really matters for your business and the insights you’re seeking to uncover.


1. What’s something you wish you had known before purchasing this product/service?

This question sheds light on critical information that could have sped up the customer’s decision-making process or boosted satisfaction.

By identifying these gaps, you can refine your communication strategy to better inform future customers.

2. What was missing from your experience that you’ve found elsewhere?

If you understand what competitors offer that you don’t, you can pinpoint specific areas for improvement or innovation.

This customer feedback helps you to understand your competitive landscape better and adjust your value proposition accordingly.

3. What’s the biggest challenge you face when using this product/service?

This question reveals specific obstacles or frustrations customers encounter, giving you direct targets for product improvement or service enhancement that make your offering more user-friendly.

4. How did the customer service team make you feel during your last interaction?

Customer feedback on service interactions can highlight strengths and areas needing improvement.

Positive responses can be modeled, while negative experiences offer a roadmap for training and making improvements to customer support quality.

5. What’s one thing this brand could do to make your next experience better?

Customer suggestions provide tangible actions you can take to immediately enhance the customer experience. Quick wins!

This customer feedback is great for making quick adjustments that can significantly impact customer satisfaction and loyalty.

6. What aspect of your experience do you think the brand is not aware of but should be?

This question helps gather data on hidden aspects of the customer experience that might have been overlooked, allowing you to address and refine these areas to improve overall satisfaction.

7. Was there a moment during your experience where you felt hesitant or unsure?

Identifying moments of uncertainty helps evaluate how effectively your brand supports and reassures customers throughout their journey, pinpointing opportunities to enhance guidance and trust.

8. Can you recall a detail in your experience that stood out as a positive/negative experience?

Focusing on specific positive or negative experiences provides clarity on what works well and what doesn’t, guiding strategic decisions to replicate success and mitigate issues.

9. Describe a moment where you felt frustrated with your experience. What could the brand have done to prevent or solve this?

Direct insights into customer frustrations and proposed solutions will show you a clear path to preventing future dissatisfaction, improving the overall customer experience.

10. Have you ever decided against purchasing from this brand at the last minute? What stopped you?

Understanding the reasons behind last-minute purchase hesitations can reveal barriers in the purchase process. These insights can directly help you to improve conversion rates.

11. How well do our communication channels (email, social media, website) meet your needs?

Collecting feedback on how customers interact with your content, and on how effective each communication channel is from the customer’s perspective, can help you optimize these touchpoints for better engagement and information sharing.

12. If you could change one thing about this product/service, what would it be?

Fishing for direct suggestions for changes allows customers to voice their most desired improvements, guiding your product development priorities.

13. How likely are you to return to this brand for your future needs?

Questions like this help you gauge the likelihood of repeat business, and the reasons respondents give can help identify what keeps loyal customers coming back or what might drive them away.

14. Describe how you felt the first time you used this product/service.

Capturing your customers’ initial emotional responses and impressions provides valuable feedback on the effectiveness of your onboarding experience and initial product/service impact.

15. What does this brand represent to you?

This question lets you collect feedback on customers’ perceptions of your brand values and image. It gives you an indication of how well your brand messaging aligns with customer expectations and values.

16. How does this product/service compare to your ideal version of such a product/service?

Identifying gaps between customer expectations and the reality of your offering can spotlight areas for improvement and innovation.

17. In what way has this product/service impacted your daily routine or life?

Do you know how your customers feel thanks to your business? This question helps you gain insight into the practical and emotional impact of your product/service on customers’ lives and can highlight its value and areas where further enhancements could enrich customer experiences even more.

18. Could you share a suggestion for how this brand might enhance your overall experience?

Encouraging customers to share ideas for holistic improvements can reveal innovative ways to create a more positive experience across all touchpoints.

19. What motivated your last purchase with this brand, and did the product/service meet your expectations?

Linking the motivations behind purchases to satisfaction levels helps assess how well your product/service is meeting customer expectations, informing strategies to maintain or improve customer satisfaction scores.

  • Segment customers and data: break down your survey results by different customer demographics, purchase history, or any specific segmentation relevant to your business. This helps you identify patterns or needs unique to certain groups, like existing customers and those who have left.
  • Look for trends over time: if your customer feedback survey is part of an ongoing effort, compare the results with past data. Are there improvements in areas you’ve been working on? Have new issues emerged that weren’t apparent before? This is valuable information for identifying market trends and shifts in customers’ opinions.
  • Look beyond customer satisfaction: measuring customer satisfaction isn’t useless, but it should be done in the right context and as part of a larger research question.
  • Identify common themes: use thematic analysis to sift through open-ended responses. This could reveal unexpected insights about your product or service that quantitative questions might not capture.
  • Prioritize actions based on frequency and impact: focus on the issues or suggestions mentioned most frequently by your customers, but also consider the potential impact of less common feedback that could significantly enhance the customer experience.
  • Cross-reference with other data sources: compare survey findings with data from other customer interaction points like social media, customer support logs, or sales data to validate insights and identify broader trends.
  • Contrast sentiments between segments: look beyond surface-level data to compare sentiment scores between different customer segments. This can highlight if specific groups have substantially different experiences or perceptions that need tailored responses.
  • Analyze verbatim for emotional tone: use sentiment analysis tools to assess the emotional tone of open-ended responses. This can reveal not just what customers are saying, but how they feel about their experiences, offering clues on emotional drivers or deterrents.
  • Map feedback to customer journey stages: assign each piece of feedback to a specific stage in the customer journey. This approach helps identify which parts of the journey are delighting customers and which are creating friction.
  • Benchmark against industry standards: if industry benchmarks are available, compare your survey results against them to see how your customer experience stacks up against competitors and industry leaders.
  • Identify advocates vs. detractors: using Net Promoter Score (NPS) segments or similar metrics, specifically analyze comments from promoters and detractors separately.
  • Engage in follow-up conversations: don’t make this a one-time-thing. Reach out for follow-up conversations with respondents who provided particularly insightful feedback. This can uncover the nuances behind their responses and generate qualitative insights that surveys alone might not capture.
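As a starting point for the segmentation, NPS, and sentiment ideas above, here is a minimal Python sketch. It assumes a response export with a 0-10 “recommend” score and a free-text “comment” column (hypothetical names), and it uses a deliberately crude keyword heuristic where a real analysis would use a proper sentiment model or thematic coding.

```python
# A minimal sketch, assuming a DataFrame with a 0-10 "recommend" score and a
# free-text "comment" column (hypothetical names). It splits respondents into
# NPS segments and applies a rough keyword-based negative-sentiment flag.
import pandas as pd

df = pd.read_csv("cx_survey.csv")  # hypothetical export

def nps_segment(score: float) -> str:
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

df["segment"] = df["recommend"].apply(nps_segment)

# Net Promoter Score = % promoters - % detractors
shares = df["segment"].value_counts(normalize=True)
nps = (shares.get("promoter", 0) - shares.get("detractor", 0)) * 100
print(f"NPS: {nps:.0f}")

# Crude sentiment flag for open-ended comments (keyword heuristic)
negative_words = ("slow", "confusing", "frustrating", "broken")
df["negative_flag"] = df["comment"].fillna("").str.lower().apply(
    lambda text: any(word in text for word in negative_words)
)
print(df.groupby("segment")["negative_flag"].mean())
```

Comparing the share of negative comments across promoters and detractors is one quick way to contrast sentiment between segments, as suggested above.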

It’s not just a question of when, it’s also a question of ‘how often’?

Ideally, you do it regularly, across different touchpoints and survey both paying customers and those who are still shopping around.

Because sending a customer experience survey shouldn’t be a reactive measure; it’s a proactive step to understanding your customers better.

Whether you’ve noticed a spike in complaints or just want to stay ahead, it’s important to time your surveys wisely. Regular surveys can help you keep a close eye on customer sentiment, allowing you to address any issues before they escalate.

Ideally, integrating these surveys into your customer journey at key milestones—like after a purchase or interaction with customer service—ensures you’re always in tune with your customers’ needs and expectations.

Ready to get a clearer picture of your consumers? Attest customer experience survey templates are here to make it easier. Dive into our Brand Perception , Customer Profiling , or JTBD (Jobs to Be Done) templates to kick-start your consumer insights journey.

They’re straightforward, easy to use, and packed with the kind of questions that reveal what your consumers really think and need.

Perfect for fine-tuning your marketing, product development, or overall strategy. Check them out and see how you can start understanding your consumers on a whole new level today.


What are the top consumer experience providers?

Get reliable insights on what your customers experience – check our list of the top consumer experience providers

By conducting customer surveys, you can pinpoint exactly where your service excels or needs improvement. This direct feedback is crucial for training your team and refining your approach, ensuring that every customer call or interaction adds positively to your customer’s experience.

Asking the right customer feedback questions helps you understand the effort your customers put into their interactions with your brand, from navigating your ordering process to getting their issues resolved. This knowledge allows you to streamline processes and improve customer satisfaction.

Measuring how much effort customers need to exert to use your product or service, or resolve issues, is vital. A lower customer effort score (CES) usually correlates with higher customer satisfaction, loyalty, and, ultimately, a decrease in customer churn.
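For illustration, a customer effort score can be computed in a few lines. The sketch below assumes responses to “How much effort did you have to put forth to resolve your issue?” on a 1-7 scale where 1 means very low effort; the column name, scale, and sample numbers are all assumptions.

```python
# A minimal CES sketch: effort rated 1-7 where 1 = very low effort, so a
# lower mean indicates an easier experience. Values are made-up sample data.
import pandas as pd

effort = pd.Series([2, 1, 5, 3, 1, 6, 2, 4], name="effort")

ces_mean = effort.mean()                 # lower is better
low_effort_share = (effort <= 3).mean()  # share reporting low effort

print(f"CES (mean, 1-7 scale): {ces_mean:.1f}")
print(f"Share reporting low effort (1-3): {low_effort_share:.0%}")
```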

Analyzing survey data gives you a comprehensive view of your customers’ opinions and experiences. This insight helps you make informed decisions to address issues, improve customer experiences, and keep your customers satisfied and loyal.

Understanding your target audience is key to designing effective customer surveys. Tailoring questions to fit the specific needs and preferences of your existing customers or a particular loyal customer segment ensures that the feedback you collect is relevant and actionable.

A representative sample ensures that the survey data accurately reflects the broader customer base’s opinions and experiences. This accuracy is crucial for making informed decisions that positively impact a wide range of customers, from new prospects to long-term loyalists.


Stephanie Rand

Senior Customer Research Manager 


5 tips for employee surveys that actually make a difference

Best practices for collecting and analyzing human data in the workplace.

Pega Davoudzadeh

Senior Researcher, People Insights

Surveys are a tool for getting to know people. That makes them a great place to start if you want to build a culture where people feel valued, connected to one another, and inspired by their work. 

But effective employee surveys don’t happen by accident. Constructing surveys that return useful results – surveys that actually make peoples’ lives better – is both an art and a rigorous science. 

Through my academic research in psychometrics and quantitative psychology, I’ve spent my career studying how we can use analytical methods to understand people’s experiences and behaviors. Now, as a senior researcher on Atlassian’s People Insights team, I help design surveys that meaningfully improve work and life for our employees.

You might not have the resources to work with a professional survey scientist. But by following these best practices for collecting and analyzing human data, you can start gathering the knowledge you need to create a better employee experience. 

What are employee surveys?

An employee survey is any assessment designed to capture data about how employees experience their work. 

Employee surveys might ask open-ended questions, requiring written responses. Or, they might capture more quantitative data with numeric scales or multiple-choice questions. 

Typically, organizations conduct employee surveys because they want to create a better employee experience. They need data to understand both the current baseline they’re starting from, and specific actions they can take to improve it.

Of course, “employee experience” is a broad and all-encompassing term! Good employee surveys often target and investigate specific parts of this experience, like onboarding, job satisfaction, or manager-employee relationships. 

Common types of employee surveys

  • Company culture surveys: How are employees experiencing their company’s culture?
  • Pulse surveys: How is the employee experience changing from month to month?
  • Onboarding or training surveys: How did employees experience training and onboarding? Did it help them settle in and become productive faster? 
  • Employee engagement or satisfaction surveys: Are employees feeling happy, challenged, and inspired by their work? 
  • Manager effectiveness surveys: How do employees experience relationships with their managers?

The power of effective employee surveys

Obviously, improving peoples’ working lives is a worthwhile goal on its own. But happier, more inspired employees elevate their organization in very tangible ways, too. 

Throughout my career, I’ve seen that ultimately, more engaged and happy employees perform better, are more productive, and stay with the organization longer. All these qualities can help a company achieve its goals – and surveys are the most direct way to uncover how you can achieve them. 

Another excellent survey goal is to foster belonging – a sense of personal connection to the organization, its values, and the people within it. Belonging is an often-overlooked precursor to engagement – according to some research , 91% of employees who feel they belong are engaged, compared to 20% of employees who don’t. 

5 ingredients for a survey that’s worth your time

Want your survey to help you create positive change? These five principles are a great place to start. 

Know your goal

Instead of trying to capture every aspect of your employees’ experience, think critically about your goal. What actual outcomes are you trying to achieve? What are you trying to move the needle on? 

Then, work backwards from your desired outcome. What aspects of the employee experience impact the outcome you want, and how can you investigate them? What questions will help you take an informed action?

When your survey doesn’t have a clear goal, you may end up asking questions you can’t take action on. That’s a surefire route to survey fatigue, damaged employee trust, and diminished morale. 

For example, maybe you want to reduce turnover. So, you could ask questions about employees’ relationships with their managers, desired future prospects with the organization, and work-life balance.

Survey fatigue

Imagine if a friend asked how you felt about your relationship, but never acted on your concerns. You wouldn’t feel heard or respected – and you might even feel betrayed or angry that they wasted your time.

Employee surveys are no different. People quickly get sick of opening up and sharing their experience if doing so doesn’t actually improve their life at work.

“Survey fatigue” sounds like it’s caused by surveys that are too long, or conducted too often. While that’s certainly possible, it’s not the whole picture. 

When employers send out surveys that don’t lead to action, it damages trust and engagement, especially if it happens repeatedly. Often, folks just stop participating. But even worse, they feel let down and disrespected.

Focus on actionability 

Once your goal is clear, it’s easier to ask questions that will help you achieve it. While you can include some general, “sense check” questions, your survey should be geared towards informing concrete action.

Actionability has three components: 

  • Ask questions that can inform action. For example, “what does an ideal work-life balance look like for you?” is more actionable than “how satisfied are you with your work-life balance?” 
  • Don’t ask questions you can’t take action on. A classic example is “How satisfied are you with your compensation?” Nearly everyone would like a raise, but it’s unlikely an employer can grant one to their entire workforce at the same time. 
  • Show people you’re taking action. For example, you could hold an all-hands meeting after your survey, sharing the data and outlining your action plan. Then, check back in periodically to follow up and show them you meant what you said. 

Keep context in mind

Every organization is different, and the employee experience changes over time. If you don’t keep organizational climate in mind when designing questions, your survey can come across as insensitive, and return unhelpful answers.

Here are some examples of how context can inform survey questions: 

  • Asking a question like “How optimistic are you about the future with this company?” after there’s just been a round of layoffs and morale is low will most certainly return unfavorable responses
  • If your workforce includes hourly, salaried, and contractor employees, don’t ask all of them the same questions about compensation.
  • If you ask people’s opinions of managers or leadership right before their annual performance review, you might get biased responses.

Test out (pilot) your survey items

“Survey piloting” might sound technical, and it definitely can be! Survey scientists use sophisticated statistical modeling and cognitive testing techniques to make sure survey items will actually drive the changes they’re looking for. 

But piloting is a great idea for anyone who’s running an employee survey. Piloting survey items essentially means testing them, with a small random sample, before releasing them to your entire organization. Piloting helps improve your questions, so you’ll get better data when you run the survey at full scale. 

It’s very important that your sample be randomly selected – don’t just ask your teammates to pilot the survey with you. The idea is to understand how the survey will land with people in different demographic groups, areas of the company, and levels of seniority.  If you work at a small organization, you can still gain meaningful insights from a pilot survey – even a focus group as small as five people is worthwhile. You can also try piloting more questions than you’ll want in the finished survey, so you can identify the ones that perform best.
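If your employee roster lives in a spreadsheet, a random, stratified pilot sample can be drawn in a few lines of Python. The file and column names (employee_roster.csv, department, employee_id) and the three-people-per-department size are illustrative assumptions, not a prescribed method.

```python
# A minimal sketch: draw a random pilot sample stratified by department so
# the pilot isn't dominated by one team. All names and sizes are assumptions.
import pandas as pd

roster = pd.read_csv("employee_roster.csv")  # hypothetical roster export

pilot = (
    roster.groupby("department", group_keys=False)
          .apply(lambda grp: grp.sample(n=min(3, len(grp)), random_state=42))
)
print(pilot[["employee_id", "department"]])
```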

There are two ways of testing survey questions: qualitatively and quantitatively. 

Qualitative testing: Also called “cognitive testing” by survey scientists, qualitative testing is like a focus group for your survey. After running your test, you’ll talk to the respondents about their experience. Did the survey flow well? Were any items confusing or hard to understand? 

Based on their answers, you’ll be able to evaluate how the items resonated with people from different backgrounds, and whether the questions were relevant to your goal. 

Quantitative testing: Quantitative testing happens once your random sample has actually answered your questions, rather than just giving you feedback on the survey design. You’ll analyze this first batch of data, looking for irregularities that could indicate something is amiss. 

Quantitative testing follows the same basic principles as interpreting a larger set of data. We’re covering those next, so keep reading!

Interpret your results critically 

Evaluating your answers is just as important as asking the right questions. When you get your results back, don’t just take them at face value! Even if you carefully tested your survey, there are many ways data can be biased or skewed. 

Here are four best practices to follow when interpreting survey data. 

Consider response rates 

If not many people responded to your survey, you won’t have statistically significant data – and the data you do have might not accurately represent your organization. For example, maybe respondents skewed heavily towards employees in your department, because you’d reminded them multiple times. 

If you don’t have enough data, or it’s not distributed across your company evenly, you may need to re-run your survey. To avoid this problem, send the survey to double or triple the number of participants you’ll actually need to get useful data. 
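A quick back-of-the-envelope calculation can turn that advice into numbers. The sketch below assumes a target number of completed responses and an expected response rate estimated from past surveys; the figures are made up.

```python
# A minimal sketch: how many invitations to send for a target number of
# completes, then the response rate actually achieved per group.
# All numbers are illustrative assumptions.
target_completes = 150
expected_response_rate = 0.35   # e.g. 35% based on past surveys

invitations_needed = round(target_completes / expected_response_rate)
print(f"Send roughly {invitations_needed} invitations")

# After the survey closes: response rate by department (illustrative counts)
sent = {"Engineering": 120, "Sales": 80, "Support": 60}
completed = {"Engineering": 51, "Sales": 18, "Support": 33}
for dept, n_sent in sent.items():
    rate = completed[dept] / n_sent
    print(f"{dept}: {rate:.0%} response rate")
```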

Look for irregularities

Normally, survey responses roughly distribute along a bell curve . Answers may trend in one area, but there will be responses on either side. 

If you see dramatic trends that don’t follow this distribution, it probably indicates an issue with your question. For example, if almost everyone responded to a question with “strongly agree” or “strongly disagree,” that typically means your item is not worded in a way that would elicit varying opinions.

If everyone shares the same super-strong opinion about something, there’s very little additional information you can glean, or actions you can take from those results.  
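One simple way to spot this kind of irregularity is to flag items where most answers sit at the extremes. The sketch below assumes 1-5 Likert items in a pilot export; the file name, item names, and the 80 percent threshold are illustrative choices, not a standard.

```python
# A minimal sketch: flag Likert items whose responses pile up at the extreme
# ends of a 1-5 scale, which may signal a poorly worded question.
# File, column names, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

pilot = pd.read_csv("pilot_responses.csv")   # hypothetical pilot data
likert_items = ["q1", "q2", "q3"]            # assumed 1-5 scale items

for item in likert_items:
    counts = pilot[item].value_counts(normalize=True)
    extreme_share = counts.get(1, 0) + counts.get(5, 0)
    if extreme_share > 0.8:
        print(f"{item}: {extreme_share:.0%} of answers at the extremes - review wording")
```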

Use benchmarks whenever possible

Say 70% of your survey respondents love their managers. Is that good or bad? One way to know is to use benchmarks – a company or industry standard to compare your results against. 

Some companies compile and sell benchmark data for this exact purpose. Even better, you can use your own company’s past performance as a starting point. 

It’s not the end of the world if you don’t have benchmark data, and you shouldn’t over-rely on it anyway. You’ll get better results by focusing on the outcomes that matter to you, not pushing to get engagement from 76% to 80% just because that’s the industry standard.

Focus on what will actually move the needle – not on your “worst” results

One super-common pitfall when interpreting survey data is to focus attention where the company performed worst. 

In reality, action should be directed not to what feels like an emergency, but to areas that will have the biggest impact on your goal. In survey science, this is referred to as “predictive power.”

For example, say your desired outcome was improved employee retention, and survey respondents were highly dissatisfied with manager communication. That doesn’t mean you need to push every manager into communication training!

What if a more generous vacation policy would do more to entice your employees to stay? In that case, a question about vacation would have more “predictive power,” and would be a better way to guide action.

Survey scientists use statistical modeling to evaluate predictive power. Even if that’s not within reach for you, it’s a good reminder to stay focused on your goal – not rush into hasty actions because you underperformed in one area. 
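If you do have the data and a little Python, a rough driver analysis is within reach. The sketch below regresses a binary “intends to stay” outcome on standardized survey items and compares coefficients as a loose proxy for predictive power; all file and column names are hypothetical, and a survey scientist would use more careful modeling.

```python
# A minimal driver-analysis sketch: survey items scored 1-5 and a binary
# "intends_to_stay" outcome (all names are hypothetical). Standardizing the
# items lets you compare coefficient sizes as a rough indicator of which
# item has the most leverage on retention.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("engagement_survey.csv")
items = ["manager_communication", "vacation_policy", "work_life_balance"]

X = StandardScaler().fit_transform(df[items])
y = df["intends_to_stay"]                    # 1 = plans to stay, 0 = doesn't

model = LogisticRegression().fit(X, y)
drivers = pd.Series(model.coef_[0], index=items).sort_values(ascending=False)
print(drivers)   # larger coefficients suggest more leverage on retention
```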

Special thanks to Genevieve Michaels for her contributions to this article.


Research Questionnaire



When a researcher creates a research paper using the scientific method, they will need to use a data-gathering method suited to the research topic. This means the researcher will use a quantitative method for a quantitative topic and a qualitative method for a qualitative one. The research questionnaire is one of the quantitative data-gathering methods a researcher can use in their research paper.

1. Market Research Questionnaire Template Example (Google Docs, Apple Pages)

2. Market Research Questionnaire Example

3. Research Questionnaire Example

4. Sample Market Research Questionnaire

5. Research Survey Questionnaire

6. Research Survey Questionnaire Construction

7. Research Questionnaire Survey of Consumers

8. Guide to the Design of Research Questionnaires

9. Planning Survey Research Questionnaires

10. Climate Change Survey Questionnaires

11. Survey Questionnaire Design

12. Developing Questionnaires for Educational Research

13. Graduate Research Student Questionnaires

14. Sample Research Survey Questionnaires

15. Market Research Questionnaire Example

16. Research Survey Questionnaire Example

17. Product X Research Study Questionnaire Example

What Is a Research Questionnaire?

A research questionnaire is a physical or digital questionnaire that researchers use to obtain quantitative data. It is a more in-depth version of a survey, as its questions often delve deeper than typical survey questions.

How to Write a Research Questionnaire

A well-made research questionnaire can effectively and efficiently gather data from the population. Creating a good research questionnaire does not require many writing skills, soft skills, or hard skills; it just requires the person to properly understand the data set they are looking for.

Step 1: Select a Topic or Theme for the Research Questionnaire

Begin by choosing a topic or theme for the research questionnaire, as this will provide much-needed context. The topic will also dictate the tone of the questions in the questionnaire.

Step 2: Obtain or Use a Research Questionnaire Outline

You may opt to use a research questionnaire outline or outline format for your research questionnaire. This outline will provide you with a structure you can use to easily make your research questionnaire.

Step 3: Create your Research Questionnaire

Start by creating questions that will help provide you with the necessary data to prove or disprove your research question. You may conduct brainstorming sessions to formulate the questions for your research questionnaire.

Step 4: Edit and Have Someone Proofread the Questionnaire

After you have created and completed the research questionnaire, edit its contents. It is also wise to have someone proofread the questionnaire before deploying it.

How does a research questionnaire help businesses?

A successful business or company utilizes research questionnaires to not only obtain data from their customers but also to gather data about the performance and quality of the employees in the business. The research questionnaire provides the business or company with actionable data, which they can use to improve the product, service, or commodity to obtain more customers.

Do I need to provide a consent form when I ask someone to answer the research questionnaire?

Yes. Consent is very important; without it, the data you have gathered from your questionnaires or surveys cannot be used. Therefore, it is important to provide a consent form with your research questionnaire when asking a participant to answer the document.

What type of answers are allowed in the research questionnaire?

Research questionnaires can host many types of questions, each with its own way of answering. A questionnaire can use multiple-choice questions, open-ended questions, and closed-ended questions. Just be sure to pace the questions properly, as having too many different answering styles can demotivate or distract the target audience, which might lead to errors.

A research questionnaire is a data-gathering document people can use to obtain information and data from a specific group of people. Well-made and crafted research questionnaires will provide much-needed information one can use to answer a specific research question.



Social Media Fact Sheet

Many Americans use social media to connect with one another, engage with news content, share information and entertain themselves. Explore the patterns and trends shaping the social media landscape.

To better understand Americans’ social media use, Pew Research Center surveyed 5,733 U.S. adults from May 19 to Sept. 5, 2023. Ipsos conducted this National Public Opinion Reference Survey (NPORS) for the Center using address-based sampling and a multimode protocol that included both web and mail. This way nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race and ethnicity, education and other categories.
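To illustrate the idea behind weighting (this is not Pew’s actual procedure, which adjusts across many variables at once), here is a minimal sketch: each respondent’s weight is their group’s population share divided by its sample share, so under-represented groups count for more. The age brackets and shares below are made up.

```python
# A minimal post-stratification sketch (illustrative only, not Pew's method):
# weight = population share / sample share for each group, so groups that are
# under-represented in the sample are weighted up. All figures are made up.
population_share = {"18-29": 0.20, "30-49": 0.34, "50-64": 0.25, "65+": 0.21}
sample_share     = {"18-29": 0.14, "30-49": 0.30, "50-64": 0.28, "65+": 0.28}

weights = {grp: population_share[grp] / sample_share[grp] for grp in population_share}
for grp, w in weights.items():
    print(f"{grp}: weight {w:.2f}")   # >1 boosts under-represented groups
```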

Polls from 2000 to 2021 were conducted via phone. For more on this mode shift, read our Q&A.

Here are the questions used for this analysis, along with responses, and its methodology.

A note on terminology: Our May-September 2023 survey was already in the field when Twitter changed its name to “X.” The terms  Twitter  and  X  are both used in this report to refer to the same platform.


YouTube and Facebook are the most-widely used online platforms. About half of U.S. adults say they use Instagram, and smaller shares use sites or apps such as TikTok, LinkedIn, Twitter (X) and BeReal.

Note: The vertical line indicates a change in mode. Polls from 2012-2021 were conducted via phone. In 2023, the poll was conducted via web and mail. For more details on this shift, please read our Q&A . Refer to the topline for more information on how question wording varied over the years. Pre-2018 data is not available for YouTube, Snapchat or WhatsApp; pre-2019 data is not available for Reddit; pre-2021 data is not available for TikTok; pre-2023 data is not available for BeReal. Respondents who did not give an answer are not shown.

Source: Surveys of U.S. adults conducted 2012-2023.


Usage of the major online platforms varies by factors such as age, gender and level of formal education.

[Chart: % of U.S. adults who say they ever use each platform, broken out by age, gender, education, race and ethnicity, and political affiliation]

This fact sheet was compiled by Research Assistant  Olivia Sidoti , with help from Research Analyst  Risa Gelles-Watnick , Research Analyst  Michelle Faverio , Digital Producer  Sara Atske , Associate Information Graphics Designer Kaitlyn Radde and Temporary Researcher  Eugenie Park .

Follow these links for more in-depth analysis of the impact of social media on American life.

  • Americans’ Social Media Use  Jan. 31, 2024
  • Americans’ Use of Mobile Technology and Home Broadband  Jan. 31 2024
  • Q&A: How and why we’re changing the way we study tech adoption  Jan. 31, 2024

Find more reports and blog posts related to  internet and technology .


The way we travel now

What sorts of journeys do today’s travelers dream about? Where would they like to go? What do they hope to do when they get there? How much are they willing to spend on it all? And what should industry stakeholders do to adapt to the traveler psychology of the moment?


To gauge what’s on the minds of current-day travelers, we surveyed more than 5,000 of them in February and March of this year. (Unless otherwise noted, the source for all data and projections is the McKinsey State of Travel Survey, 5,061 participants, February 27 to March 11, 2024.) Our universe of respondents included travelers from five major, representative source markets: China, Germany, the United Arab Emirates, the United Kingdom, and the United States. All respondents took at least one leisure trip in the past two years. We asked them more than 50 questions about their motivations, behavior, and expectations.

Results from this survey, supplemented with findings from focus groups and other additional research, suggest six vital trends that are shaping traveler sentiment now.

Travel has become a top priority, especially for younger generations

Sixty-six percent of the travelers we surveyed say they’re more interested in travel now than they were before the COVID-19 pandemic. This pattern holds across all surveyed age groups and nationalities. Respondents also indicate that they’re planning more trips in 2024 than they did in 2023.

Travel isn’t merely an interest these days. It’s become a priority—even amid uncertain economic conditions that can make budgeting a challenge. Travel continues to be one of the fastest-growing consumer spending areas, rising 6 percent over a recent 12-month period in the United States, even when adjusted for inflation. Only 15 percent of our survey respondents say they’re trying to save money by reducing the number of trips they go on. And in the February 2024 McKinsey ConsumerWise Global Sentiment Survey of more than 4,000 participants, 33 percent of consumers said they planned to splurge on travel, ranking it the third-most-popular splurge category—trailing only eating at home and eating out at restaurants (Christina Adams, Kari Alldredge, Lily Highman, and Sajal Kohli, “An update on US consumer sentiment: Consumers see a brighter future ahead,” McKinsey, February 29, 2024).

Younger generations appear to propel much of the rising interest in travel (Exhibit 1). In 2023, millennials and Gen Zers took, on average, nearly five trips, versus less than four for Gen Xers and baby boomers. Millennials and Gen Zers also say they devote, on average, 29 percent of their incomes to travel, compared with 26 percent for Gen Xers and 25 percent for baby boomers.

Younger travelers are the most keen to venture abroad

Younger travelers are particularly excited about international travel. Gen Zers and millennials who responded to our survey are planning a nearly equal number of international and domestic trips in 2024, no matter their country of origin, whereas older generations are planning to take roughly twice as many domestic trips (Exhibit 2).

Younger travelers’ thirst for novelty might be motivating their urge to cross borders. Gen Zers say their number-one consideration when selecting a destination is their desire to experience someplace new. For Gen Xers, visiting a new place comes in at number eight, behind factors such as cost, ease of getting around, and quality of accommodation.

There might be a mindset shift under way, with international travel feeling more within reach for younger travelers—in terms of both cost and convenience. Younger travelers have become adept at spotting international destinations that feature more affordable prices or comparatively weak currencies. Low-cost airlines have proliferated, carrying 35 percent of the world’s booked seats over a recent 12-month period (“Low-cost carriers in the aviation industry: What are they?,” OAG Aviation Worldwide, September 13, 2023). Meanwhile, translation software is lowering language barriers, mobile connectivity overseas is becoming cheaper and more hassle free, and recent visa initiatives in various regions have made passport-related obstacles easier to overcome.

It remains to be seen whether this mindset shift will endure as younger generations get older. But early evidence from millennials suggests that they’ve retained their interest in international travel even as they’ve begun to age and form families. It could be that this is a lasting attitude adjustment, influenced as much by the changing dynamics of travel as it is by youth.

Baby boomers are willing to spend if they see value

Baby boomers are selective about their travel choices and travel spending. Enjoying time with family and friends is their number-one motivation for taking a trip. Experiencing a new destination is less important to them—by as much as 15 percentage points—than to any other demographic.

Although older travelers appreciate the convenience that technology can offer, they prefer human contact in many contexts (Exhibit 3). For example, 44 percent of baby boomers—versus only 30 percent of other respondents—say they value having a travel agent book an entire travel experience for them. And only 42 percent of baby boomers have used a mobile app to book transportation, versus 71 percent of other respondents.

While this generation typically has more accumulated savings than other generations, they remain thoughtful about how they choose to spend. Their top two cited reasons for not traveling more are “travel is becoming too expensive” and “not having enough money to travel.” They make up the demographic most willing to visit a destination out of season, with 62 percent saying they’re open to off-peak travel to bring costs down.

Baby boomers might be willing to spend strategically, in ways that make travel more convenient and less burdensome. For example, whereas 37 percent of Gen Zers are willing to take a cheaper flight to lower their travel costs—even if it means flying at inconvenient times or with a stopover—only 22 percent of baby boomers say they’ll do the same. But these older travelers don’t splurge indiscriminately: only 7 percent describe their attitude toward spending as “I go out all the way when I travel.” They’re much more willing to forgo experiences to save money, identifying this as the first area where they cut spending. Gen Zers, on the other hand, will cut all other expense categories before they trim experiences.

Whatever baby boomers’ stated feelings and preferences, they still account for a substantial share of travel spending. And they still spend more than younger generations—three times more per traveler than Gen Zers in 2023, for example.

The adventure starts before the trip begins

Travelers are delighting in crafting their own trips. Only 17 percent of survey respondents say they used a travel agent to book a trip in the past year. When asked why, respondents’ top-cited reason is that they want full control over their itineraries. Their second-most-cited reason? They simply enjoy the planning process. In fact, studies have shown that the anticipation of a journey can lead to higher levels of happiness than the journey itself (Jeroen Nawijn et al., “Vacationers happier, but most not happier after a holiday,” Applied Research in Quality of Life, March 2010, Volume 5, Number 1).

When seeking inspiration during the planning process, respondents are most likely to turn to friends and family—either directly or on social media (Exhibit 4). Advice from other travelers is also sought after. Fewer and fewer travelers rely on travel guidebooks for inspiration.

Today’s travelers tend to view the planning process, in part, as a treasure hunt. Seventy-seven percent of respondents describe the research phase as an effort to ensure that they’re finding good deals or saving money. And all demographics describe “value for money” as the most important factor when choosing a booking channel.

Unexpected traveler archetypes are emerging

When we analyzed our survey results, we identified seven clusters of travelers who express shared attitudes and motivations toward travel. While the distribution of these archetypes varies across source markets, respondents within each archetype exhibit strong similarities:

  • Sun and beach travelers (23 percent of respondents). These vacationers travel rarely and spend frugally, preferring sun and beach destinations that are easy to get to. They like to relax and visit with family. They’re relatively more likely to place significant value on nonstop flights (72 percent, versus 54 percent overall) and are less interested in authentic and immersive experiences (only 13 percent say these are main reasons why they travel).
  • Culture and authenticity seekers (18 percent). These are active and high-budget travelers who typically spend more than $150 per day on holiday, love to sightsee, are willing to spend on experiences, and don’t want to settle for typical bucket-list destinations. Only 6 percent prioritize familiarity when choosing where to go—the lowest percentage of any traveler segment. This segment is also least likely (at 17 percent) to say they would shorten a holiday to save money.
  • Strategic spenders (14 percent). These travelers are open to selectively splurging on authentic, carefully curated experiences. But they keep a watchful eye on total spending. They’re willing to sacrifice some conveniences, such as nonstop flights, in the interest of cost savings.
  • Trend-conscious jet-setters (14 percent). Travelers in this high-budget group (they spend more than $150 per day when traveling) turn first to friends and family (79 percent) and then to social media (62 percent) when scouting destinations. Seventy-six percent say the popularity of a destination is an important factor, compared with 63 percent overall. And 75 percent say they focus on hotel brands when selecting accommodations.
  • Cost-conscious travelers (11 percent). This travel segment is made up of predominantly older travelers who travel rarely and frequently return to the same destinations and activities. They’re relatively more likely to care about the familiarity of a destination (54 percent, versus 35 percent overall) and the cost of the trip (76 percent, versus 65 percent overall).
  • Premium travelers (12 percent). This segment expects high-quality trappings when they travel, and only 20 percent say that cost is an important factor. These frequent travelers are especially selective about accommodation—they, on average, are more likely than travelers overall to care about brand, prestige, exclusivity, design, decor, amenities, and sustainability. Similarly to trend-conscious jet-setters, this traveler segment is, on average, more likely than travelers overall (at 27 percent, versus 18 percent) to be swayed by celebrities and influencers when choosing travel destinations.
  • Adventure seekers (8 percent). This younger segment enjoys active holidays that present opportunities to encounter like-minded travelers. Nineteen percent say they’re motivated by adventure and physical activities, and 15 percent say meeting new people is a major reason why they travel. They aren’t after large-group events; instead, they prefer small-group adventures. This segment prizes remoteness, privacy, and sustainability.
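Archetypes like these are typically derived by clustering survey responses. As a purely illustrative sketch (not McKinsey’s methodology), the code below standardizes a few hypothetical survey features and fits k-means with seven clusters, then profiles each cluster by its average feature values.

```python
# A minimal, illustrative clustering sketch. The CSV file, feature names, and
# the choice of seven clusters are assumptions, not McKinsey's actual method.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

survey = pd.read_csv("travel_survey.csv")      # hypothetical response export
features = ["daily_budget", "trips_per_year", "values_nonstop_flights",
            "seeks_new_destinations", "influenced_by_social_media"]

X = StandardScaler().fit_transform(survey[features])
survey["archetype"] = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(X)

# Profile each cluster by its average feature values
print(survey.groupby("archetype")[features].mean().round(2))
```

In practice, analysts would test different cluster counts and validate that the resulting segments are stable and interpretable before naming them.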

What travelers want depends on where they’re from

When asked what trips survey respondents are planning next, 69 percent of Chinese respondents say they plan to visit a famous site—a marked difference from the 20 percent of North American and European travelers who say the same. Chinese travelers are particularly motivated by sightseeing: 50 percent cite visiting attractions as their main reason for traveling, versus an average of 33 percent for those from other countries.

Emirati travelers, like their Chinese counterparts, favor iconic destinations, with 43 percent saying they plan to visit a famous site. They also have a penchant for shopping and outdoor activities. Fifty-six percent of respondents from the United Arab Emirates describe the range of available shopping options as an important factor when selecting a destination—a far higher proportion than the 35 percent of other respondents. And respondents from the United Arab Emirates report going on a greater number of active vacations (involving, for instance, hiking or biking) than any other nationality.

Travelers from Europe and North America are especially keen to escape their daily routines. Respondents from Germany (45 percent), the United States (40 percent), and the United Kingdom (38 percent) place importance on “getting away from it all.” Only 17 percent of respondents from China and the Middle East feel the same way. European travelers are particularly fond of beach getaways: respondents from the United Kingdom and Germany cite “soaking in the sun” at twice the rate of American respondents as a main reason they travel.

Travel is a collective story, with destinations as the backdrop

Younger generations are prioritizing experiences over possessions. Fifty-two percent of Gen Zers in our survey say they splurge on experiences, compared with only 29 percent of baby boomers (Exhibit 5). Gen Z travelers will try to save money on flights, local transportation, shopping, and food before they’ll look to trim their spending on experiences. Even terminology used by younger generations to describe travel is experience oriented: “Never stop exploring” is tagged to nearly 30 million posts on Instagram.

The value of experiences is often realized in the stories people tell about them. Books and films have spurred tourists to flock to specific destinations (for instance, when droves of Eat, Pray, Love: One Woman's Search for Everything across Italy, India and Indonesia [Viking Penguin, 2006] readers visited Bali). And travel has always been a word-of-mouth business, in which travelers’ stories—crafted from their experiences—can inspire other travelers to follow in their footsteps.

Social media is the latest link in this chain: a technology-driven, collective storytelling platform. Ninety-two percent of younger travelers in our survey say their last trip was motivated in some way by social media. Their major sources of social inspiration, however, aren’t necessarily influencers or celebrities (30 percent) but rather friends and family (42 percent). Consumers’ real-life social networks are filled with extremely effective microinfluencers.

Posting vacation selfies is a popular way to share the story of a journey. But a growing number of social media users are searching for ways to present their travel narratives in a more detailed and more enduring fashion, and new apps and platforms are emerging to help them do so. The microblogging app Polarsteps, which more than nine million people have downloaded, helps travelers plan, track, and then share their travels—allowing journeys to be captured in hardcover books that document routes, travel statistics, and musings.

Giving today’s travelers what they need and want

From our survey findings, important takeaways emerge that can help tourism industry players engage with today’s travelers.

Know customer segments inside and out

Serving up a one-size-fits-all experience is no longer sufficient. Using data to segment customers by behavior can help tourism players identify opportunities to tailor their approaches more narrowly.

Cutting-edge data strategies aren’t always necessary to get started. Look-alike analysis and hypothesis-driven testing can go a long way. Even without having data about a specific family’s previous travel patterns, for example, an airline might be able to hypothesize that a family of four traveling from New York to Denver on a long weekend in February is going skiing—and therefore might be interested in a discounted offer that lets them check an additional piece of luggage.
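
To make this concrete, here is a minimal sketch of such a hypothesis-driven rule in Python. It is purely illustrative: the Booking data model, the airport codes, the thresholds, and the offer text are assumptions made for the sake of the example, not any airline's actual system or business logic.

from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a hypothesis-driven targeting rule.
# Field names, airport codes, and thresholds are illustrative assumptions.

SKI_GATEWAYS = {"DEN", "SLC", "GVA"}  # assumed airports near ski resorts

@dataclass
class Booking:
    origin: str
    destination: str
    party_size: int
    depart: date
    return_date: date

def looks_like_family_ski_trip(b: Booking) -> bool:
    # Hypothesis: a family-sized party flying to a ski gateway in winter
    # for a long weekend is probably going skiing.
    is_winter = b.depart.month in (12, 1, 2, 3)
    trip_length = (b.return_date - b.depart).days
    is_long_weekend = 2 <= trip_length <= 4
    return (b.destination in SKI_GATEWAYS
            and b.party_size >= 3
            and is_winter
            and is_long_weekend)

def suggest_offer(b: Booking):
    if looks_like_family_ski_trip(b):
        return "Discounted extra checked bag for ski equipment"
    return None

# A family of four flying New York (JFK) to Denver over a February long weekend.
booking = Booking("JFK", "DEN", 4, date(2025, 2, 14), date(2025, 2, 17))
print(suggest_offer(booking))  # prints the discounted-bag offer

Even a handful of rules like this, refined over time through testing, can move a provider from generic offers toward more targeted ones.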

The same philosophy applies to personalization, which doesn’t necessarily need to be focused on a single individual. Merely having a clearer sense of the specific segments that a provider is targeting can help it craft a more compelling offer. Instead of simply creating an offer geared toward families, for instance, providers might build an offer tailored to families who are likely to visit in the spring and will be primarily interested in outdoor activities. And instead of relying on standard tourist activities, providers might find ways to cater to more specific traveler interests—for example, facilitating a home-cooked meal with locals instead of serving up a fine-dining experience.

Help travelers share their journeys

Today’s travelers want to share their travel stories. And friends and family back home are more likely to be influenced by these stories than by anything else they see or hear. Providers should consider ways to tap into this underexploited marketing channel.

Hotels can install a photo booth that enables guests to share pictures from their journeys. Guests can be given small souvenirs to take home to their friends and family. Hotels might also send guests photos on the anniversary of a trip to help jog happy memories and prompt a future booking.

Given the right incentives, customers can act as a distributed team of marketers. Reposting guests’ social media photos and videos, for example, or spurring engagement with contests and shareable promo codes can encourage travelers to become evangelists across an array of different channels.

Recognize younger generations’ unquenchable thirst for travel

Younger travelers’ remarkable desire for experiences isn’t always in line with their budgets—or with providers’ standard offerings. A new generation of customers is ripe to be cultivated if providers can effectively meet their needs:

  • Travel companies can better match lower-budget accommodations with younger travelers’ preferences by incorporating modern design into rooms and facilities, curating on-site social events, and locating properties in trendy neighborhoods.
  • More affordable alternatives to classic tourist activities (for example, outdoor fitness classes instead of spas or street food crawls instead of fine dining) can be integrated into targeted packages.
  • Familiar destinations can be reinvented for younger travelers by focusing on experiences (for instance, a street art tour of Paris) instead of more traditional attractions (such as the Eiffel Tower).

Cater to older travelers by using a human touch and featuring family-oriented activities

Older generations remain a major source of travel spending. Providers can look for ways to keep these travelers coming back by meeting their unique needs:

  • While older travelers are growing more comfortable with technology, they continue to favor human interaction. Stakeholders can cater to this preference by maintaining in-person visitor centers and other touchpoints that emphasize a human touch.
  • Older travelers are generally fond of returning to familiar destinations. Providers can look to maximize repeat business by keeping track of guest information that aids personalization (such as favorite meals or wedding anniversary dates). Identifying historical behavior patterns (for example, parents repeatedly visiting children in the same city) can help providers make targeted offers that could maximize spending (for example, a museum subscription in that city).
  • The off-season travel patterns that older travelers often exhibit might open opportunities for providers to create appealing experiences scheduled for lower-occupancy periods—for example, an autumn wellness retreat at a popular summer destination.
  • Older travelers’ propensity to visit family and friends opens the door to offerings that appeal to a range of generations, such as small-group trips pairing activities for grandparents and grandchildren.

Travelers are more interested in travel—and more willing to spend on it—than ever before. But the familiar, one-size-fits-all tourism offerings of the past have grown outdated. Today’s travelers want to indulge in creative experiences that are tailored to their priorities and personal narratives. The good news for providers: new technology and new approaches, coupled with tried-and-true strengths such as managerial stamina and careful attention to service, are making it easier than ever to shape personalized offerings that can satisfy a traveler’s unique needs.

Caroline Tufft is a senior partner in McKinsey’s London office, Margaux Constantin is a partner in the Dubai office, Matteo Pacca is a senior partner in the Paris office, Ryan Mann is a partner in the Chicago office, Ivan Gladstone is an associate partner in the Riyadh office, and Jasperina de Vries is an associate partner in the Amsterdam office.

The authors wish to thank Abdulhadi Alghamdi, Alessandra Powell, Alex Dichter, Cedric Tsai, Diane Vu, Elisa Wallwitz, Lily Miller, Maggie Coffey, Nadya Snezhkova, Nick Meronyk, Paulina Baum, Peimin Suo, Rebecca Stone, Sarah Fellay, Sarah Sahel, Sophia Wang, Steffen Fuchs, Steffen Köpke, Steve Saxon, and Urs Binggeli for their contributions to this article.

This article was edited by Seth Stevenson, a senior editor in the New York office.


Teen and Young Adult Perspectives on Generative AI: Patterns of Use, Excitements, and Concerns

June 3, 2024

Generative artificial intelligence (AI) has quickly become an integral part of the digital landscape, surfacing new ways for people to learn, create, and innovate. At the same time, it brings both proven and unknown risks to everything from privacy to equity and accuracy.

Young people are central to the future of generative AI: they’re not only early adopters and influencers, but they will also be among the first to grapple with its consequences. Understanding their perspectives is paramount, especially given the impact of digital technologies on youth well-being.

This study, conducted in partnership with Hopelab and the Center for Digital Thriving at Harvard Graduate School of Education, examines how young people perceive and interact with generative AI technologies, with special attention to race and ethnicity, age, gender, and LGBTQ+ identity.

These nuanced views of teens and young adults from diverse demographic groups offer valuable insights into the potential benefits of generative AI, such as broader access to information, streamlining of tasks, and enhanced creativity. However, young people also expressed concerns about potential negative impacts, including job loss, privacy issues, intellectual property theft, misinformation and disinformation, and even AI taking over the world.

It’s essential to understand young people’s perspectives on generative AI, especially when considering programs, policies, and design features that affect the mental health of marginalized and minority populations such as LGBTQ+, Black, and Latinx youth. The data in this report can help ensure that the well-being of these earliest adopters is prioritized.

More resources:

See the press release.

Learn more about Common Sense's AI literacy lessons for grades 6-12.

Learn more about our AI Initiative, including previous research and our ratings for popular products.

Explore resources from our partners at Hopelab and the Center for Digital Thriving.

