Where Can I Get Help Writing My Thesis Online?
You’ve spent years preparing for your master’s degree or PhD. You’ve read, studied and spent hours of time and energy writing papers. Now you’ve arrived at the culmination of all this effort: writing your thesis. There are plenty of compelling stories about the time and energy that students have spent drafting their dissertations and theses.
The good news is that you’re not alone. While you certainly don’t want to hire someone to write your thesis for you, which goes against most institution policies and puts your academic integrity at risk, you can get plenty of help with certain aspects of your thesis online. Whether you’re looking for a little guidance or extensive assistance, various services can make writing or editing your thesis go smoothly.
One of the greatest challenges of writing your thesis can be juggling your family or job responsibilities with your studies. The time that writing takes can add another layer of obligation to your already-packed schedule. Dissertation Editor is a company whose founder is a PhD-educated writer and professor, and it promises to help you complete your thesis or dissertation on time and in compliance with your university’s rules and regulations.
Dissertation Editor’s primary function is to guide you along in the writing process and provide a helping hand in understanding everything you need to take care of. It places you with a writer who specializes in your area of study, and this individual can help you organize and analyze your research while making sure that your thesis fits your writing style and personality. This company also specializes in helping with any statistical analysis that you use in your thesis.
If you’re concerned about using a service to help you write your thesis because you think it’ll be obvious that you hired help, don’t worry. Thesis Helpers puts its team of experienced writers to work for you to help you craft a thesis that finishes your degree on a high note. No matter what level of help you need, from narrowing down a topic to advanced editing and proofreading, they’re available to help.
The writers have advanced degrees in their areas of expertise, and one of the best things about Thesis Helpers is that it gives you ultimate say in the final product of your thesis. This company can help you with revisions and additional research, and you can rest assured that your thesis will meet anti-plagiarism standards.
Sometimes when you’re writing a thesis or dissertation, you can get stuck on one section or chapter. You may not need assistance writing the whole thing, but getting some help with the exact portion you’re struggling with can come in handy. That’s one of the strengths of using Best Dissertation. You don’t have to rely on it for help with your entire thesis if it’s not what you need.
Like most of the top thesis-assistance services, Best Dissertation employs writers with advanced degrees who specialize in various fields of study. What truly sets this company apart is the live support that it offers any time of the day or night. It claims to take the stress and strain out of writing your dissertation or thesis.
While some companies place a premium on helping you get your thesis written, others emphasize the editing and proofreading process. If you don’t need help with writing but need a hand with proofreading and editing, Scribbr is a good option for you. Its editors can help you get a grasp on the grammar and tone that are appropriate for academic writing.
Scribbr doesn’t just provide boilerplate feedback that you can find anywhere. It offers personalized feedback aimed at helping you become a better writer in the long run. You can even see examples of how its editors work by looking at the company’s website.
My Assignment Help
Writing a thesis has its own challenges that other academic writing simply doesn’t, which is why the team at My Assignment Help offers its particular brand of expertise. If you need assistance with a dissertation or thesis at the PhD or master’s level, its writers have the level of education and experience to help you write an expertly crafted and edited thesis.
My Assignment Help prides itself on hiring subject matter experts, meaning you can pair up with a helper who already has an advanced degree in your field. They understand the nuances of academic writing that are specific to your area of study, and they can provide advice on everything from making your abstract more unique to crafting a thought-provoking conclusion.
As part of my MBA research thesis at the Iqra University Gulshan Campus Karachi, I am conducting a survey that investigates the attitudes of young customers in Pakistan toward luxury brands, taking mobile phones as the case. I would appreciate it if you could complete the following table. Any information obtained in connection with this study that can be identified with you will remain confidential.

Respondent’s Details:
Name: _____________________ Age: ________ Gender: Male / Female
Mobile number: ____________ Education: _________________

Rate each statement on the scale: Strongly Agree (1), Agree (2), Neutral (3), Disagree (4), Strongly Disagree (5).

Social need (Kyung Hoon Kim, 2010)
- From the brand I learn about fashions and about what to buy to impress others.
- The brand tells me what people with lifestyles similar to mine are buying and using.
- The brand helps me know which products will or will not reflect the sort of person I am.
- Quite often, my phone is amusing and entertaining.
- The brand shows social status. (Soo Jiuan Tan, 2007)

Materialism (Hye-Jung Park, 2008)
- Brands promote undesirable values in our society.
- Brands make people live in a world of fantasy.
- I admire people who own expensive homes, cars, and clothes.
- I don’t place much emphasis on the amount of material objects people own as a sign of success.
- It sometimes bothers me quite a bit that I can’t afford to buy all the things I’d like.

Consumer attitude (Soo Jiuan Tan, 2007)
- Overall, I consider the brand a good thing.
- My general opinion of the brand is unfavorable.
- I like brands.
- Overall, I consider mobile phones a bad thing.
- Overall, I consider mobile phones an essential part of life.

Experiential need (Hye-Jung Park, 2008)
- I am satisfied with my brand.
- When I use the brand, I am pleased with its result.
- I feel happy when I purchase my brand.
- I know about new retail websites before most other people in my circle do.
- I will visit a new company’s website even if I have not heard of it before.

Fashion trends (Hye-Jung Park, 2008)
- Before buying the latest brand, I never judge whether it will suit me or not.
- I am influenced by celebrity fashion.
- Style is the most important thing for me.
- In general, I am the last in my circle of friends to know the names of the latest new fashions.
- If I heard that a new fashion item was available in the store, I would be interested enough to buy it.

Need for uniqueness (Hem, 2001)
- Overall, I enjoy buying the latest products.
- I like to purchase new products before others do.
- Overall, it is exciting to buy the latest products.
- Sometimes I take pleasure in thinking about what I saw or heard or read about my mobile phone. (Soo Jiuan Tan, 2007)
- Quite often, mobile phones are amusing and entertaining.

Conformity (Soo Jiuan Tan, 2007)
- Brands can never be replaced.
- For me, appearance is most important.
- I say positive things about this brand to other people. (Hong Youl Ha, 2011)
- I will not buy other brands if my brand is available.
- I would continue to use my mobile phone even if its price increased somewhat.
28 Questionnaire Examples, Questions, & Templates to Survey Your Clients
Published: May 15, 2023
The adage "the customer is always right" has received some pushback in recent years, but when it comes to conducting surveys, the phrase is worth a deeper look. In the past, representatives were tasked with solving client problems as they happened. Now, they have to be proactive by solving problems before they come up.
Salesforce found that 63% of customers expect companies to anticipate their needs before they ask for help. But how can a customer service team recognize these customer needs in advance and effectively solve them on a day-to-day basis?
A customer questionnaire is a tried-and-true method for collecting survey data to inform your customer service strategy. By hearing directly from the customer, you'll capture first-hand data about how well your service team meets their needs. In this article, you'll get free questionnaire templates and best practices on how to administer them for the most honest responses.
Table of Contents:
- Survey vs. Questionnaire
- Questionnaire Templates
- Questionnaire Examples
- Survey Question Examples
- Examples of Good Survey Questions
- How to Make a Questionnaire
A questionnaire is a research tool used to conduct surveys. It includes specific questions with the goal to understand a topic from the respondents' point of view. Questionnaires typically have closed-ended, open-ended, short-form, and long-form questions.
The questions should always stay as unbiased as possible. For instance, it's unwise to ask for feedback on a specific product or service that’s still in the ideation phase. To complete the questionnaire, the customer would have to imagine how they might experience the product or service rather than sharing their opinion about their actual experience with it.
Ask broad questions about the kinds of qualities and features your customers enjoy in your products or services and incorporate that feedback into new offerings your team is developing.
What makes a good questionnaire?
- Define the goal
- Make it short and simple
- Use a mix of question types
- Proofread carefully
- Keep it consistent
A good questionnaire should find what you need versus what you want. It should be valuable and give you a chance to understand the respondent’s point of view.
Make the purpose of your questionnaire clear. While it's tempting to ask a range of questions simultaneously, you'll get more valuable results if you stay specific to a set topic.
According to HubSpot research, 47% of those surveyed say their top reason for abandoning a survey is the time it takes to complete.
So, questionnaires should be concise and easy to finish. If you're looking for a respondent’s experience with your business, focus on the most important questions.
Your questionnaire should include a combination of question types, like open-ended, long-form, or short-ended questions.
Open-ended questions give users a chance to share their own answers. But closed-ended questions are more efficient and easy to quantify, with specific answer choices.
If you're not sure which question types are best, read here for more survey question examples.
While it's important to check spelling and grammar, there are two other things you'll want to check for a great questionnaire.
First, edit for clarity. Jargon, technical terms, and brand-specific language can be confusing for respondents. Next, check for leading questions. These questions can produce biased results that will be less useful to your team.
Consistency makes it easier for respondents to quickly complete your questionnaire. This is because it makes the questions less confusing. It can also reduce bias.
Being consistent is also helpful for analyzing questionnaire data because it makes it easier to compare results. With this in mind, keep response scales, question types, and formatting consistent.
In-Depth Interviews vs. Questionnaires
Questionnaires can be a more feasible and efficient research method than in-depth interviews. They are a lot cheaper to conduct. That’s because in-depth interviews can require you to compensate the interviewees for their time and give accommodations and travel reimbursement.
Questionnaires also save time for both parties. Customers can quickly complete them on their own time, and employees of your company don't have to spend time conducting the interviews. They can capture a larger audience than in-depth interviews, making them much more cost-effective.
It would be impossible for a large company to interview tens of thousands of customers in person. The same company could potentially get feedback from its entire customer base using an online questionnaire.
When considering your current products and services (as well as ideas for new products and services), it's essential to get the feedback of existing and potential customers. They are the ones who have a say in purchasing decisions.
A questionnaire is a tool that’s used to conduct a survey. A survey is the process of gathering, sampling, analyzing, and interpreting data from a group of people.
The confusion between these terms most likely stems from the fact that questionnaires and data analysis were treated as very separate processes before the Internet became popular. Questionnaires used to be completed on paper, and data analysis occurred later as a separate process. Nowadays, these processes are typically combined since online survey tools allow questionnaire responses to be analyzed and aggregated all in one step.
But questionnaires can still be used for reasons other than data analysis. Job applications and medical history forms are examples of questionnaires that have no intention of being statistically analyzed. The key difference, then, is that a questionnaire is the set of questions itself, while a survey is the broader process built around it; the two can exist together or separately.
Below are some of the best free questionnaire templates you can download to gather data that informs your next product or service offering.
What makes a good survey question?
- Have a goal in mind
- Draft clear and distinct answers and questions
- Ask one question at a time
- Check for bias and sensitivity
- Include follow-up questions
To make a good survey question, you have to choose the right type of questions to use. Include concise, clear, and appropriate questions with answer choices that won’t confuse the respondent and will clearly offer data on their experience.
Good survey questions can give a business good data to examine. Here are some more tips to follow as you draft your survey questions.
To make a good survey, consider what you are trying to learn from it. Understanding why you need to do a survey will help you create clear and concise questions that you need to ask to meet your goal. The more your questions focus on one or two objectives, the better your data will be.
You have a goal in mind for your survey. Now you have to write the questions and answers depending on the form you’re using.
For instance, if you’re using ranking or multiple-choice questions in your survey, be clear. Here are examples of good and poor multiple-choice answers:
Poor Survey Question and Answer Example
- Contains the tallest mountain in the United States.
- Has an eagle on its state flag.
- Is the second-largest state in terms of area.
- Was the location of the Gold Rush of 1849.
Good Survey Question and Answer Example
What is the main reason so many people moved to California in 1849?
- California's land was fertile, plentiful, and inexpensive.
- The discovery of gold in central California.
- The East was preparing for a civil war.
- They wanted to establish religious settlements.
In the poor example, the question may confuse the respondent because it's not clear what is being asked or how the answers relate to the question. The survey didn’t fully explain the question, and the options are also confusing.
In the good example above, the question and answer choices are clear and easy to understand.
Always make sure answers and questions are clear and distinct to create a good experience for the respondent. This will offer your team the best outcomes from your survey.
It's surprisingly easy to combine multiple questions into one; these are known as "double-barreled" questions. But a good survey asks one question at a time.
For example, a survey question could read, "What is your favorite sneaker and clothing apparel brand?" This is bad because you’re asking two questions at once.
By asking two questions simultaneously, you may confuse your respondents and get unclear answers. Instead, each question should focus on getting specific pieces of information.
For example, ask, "What is your favorite sneaker brand?" then, "What is your favorite clothing apparel brand?" By separating the questions, you allow your respondents to give separate and precise answers.
Biased questions can lead a respondent toward a specific response, and they can also be vague or unclear. Sensitive questions, such as those about age, religion, or marital status, can be useful for demographics, but they can also be uncomfortable for people to answer.
There are a few ways to create a positive experience with your survey questions.
First, think about question placement. Sensitive questions that appear in context with other survey questions can help people understand why you are asking. This can make them feel more comfortable responding.
Next, check your survey for leading questions, assumptions, and double-barreled questions. You want to make sure that your survey is neutral and free of bias.
Asking more than one survey question about an area of interest can make a survey easier to understand and complete. It also helps you collect more in-depth insights from your respondents.
1. Free HubSpot Questionnaire Template
HubSpot offers a variety of free customer surveys and questionnaire templates to analyze and measure customer experience. Choose from five templates: net promoter score, customer satisfaction, customer effort, open-ended questions, and long-form customer surveys.
2. Client Questionnaire Template
It's a good idea to gauge your clients' experiences with your business to uncover opportunities to improve your offerings. That will, in turn, better suit their lifestyles. You don't have to wait for an entire year to pass before polling your customer base about their experience either. A simple client questionnaire, like the one below, can be administered as a micro survey several times throughout the year. These types of quick survey questions work well to retarget your existing customers through social media polls and paid interactive ads.
1. How much time do you spend using [product or service]?
- Less than a minute
- About 1 - 2 minutes
- Between 2 and 5 minutes
- More than 5 minutes
2. In the last month, what has been your biggest pain point?
- Finding enough time for important tasks
- Delegating work
- Having enough to do
3. What's your biggest priority right now?
- Finding a faster way to work
- Staff development
3. Website Questionnaire Template
Whether you just launched a brand new website or you're gathering data points to inform a redesign, you'll find customer feedback to be essential in both processes. A website questionnaire template will come in handy to collect this information using an unbiased method.
1. How many times have you visited [website] in the past month?
- More than once
2. What is the primary reason for your visit to [website]?
- To make a purchase
- To find more information before making a purchase in-store
- To contact customer service
3. Are you able to find what you're looking for on the website homepage?
4. Customer Satisfaction Questionnaire Template
If you've never surveyed your customers and are looking for a template to get started, this one includes some basic customer satisfaction questions. These will apply to just about any customer your business serves.
1. How likely are you to recommend us to family, friends, or colleagues?
- Extremely unlikely
- Somewhat unlikely
- Somewhat likely
- Extremely likely
2. How satisfied were you with your experience?
1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
3. Rank the following items in terms of their priority to your purchasing process.
- Helpful staff
- Quality of product
- Price of product
- Ease of purchase
- Proximity of store
- Online accessibility
- Current need
- Appearance of product
4. Who did you purchase these products for?
- Family member
- On behalf of a business
5. Please rate our staff on the following terms:
- Friendly __ __ __ __ __ Hostile
- Helpful __ __ __ __ __ Useless
- Knowledgeable __ __ __ __ __ Inexperienced
- Professional __ __ __ __ __ Inappropriate
6. Would you purchase from our company again?
7. How can we improve your experience for the future?
5. Customer Effort Score Questionnaire Template
The following template gives an example of a brief customer effort score (CES) questionnaire. This free template works well for new customers to measure their initial reaction to your business.
1. What was the ease of your experience with our company?
- Extremely difficult
- Somewhat difficult
- Somewhat easy
- Extremely easy
2. The company did everything it could to make my process as easy as possible.
- Strongly disagree
- Somewhat disagree
- Somewhat agree
- Strongly agree
3. On a scale of 1 to 10 (1 being "extremely quickly" and 10 being "extremely slowly"), how fast were you able to solve your problem?
4. How much effort did you have to put forth while working with our company?
- Much more than expected
- Somewhat more than expected
- As much as expected
- Somewhat less than expected
- Much less than expected
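As a side note on scoring: an agree/disagree CES item like statement 2 above is usually coded numerically and averaged, with higher scores meaning a lower-effort experience. A minimal Python sketch, assuming a five-point coding convention with a "neutral" midpoint (an assumption; the template itself doesn't prescribe a coding):

```python
# Code the agree/disagree CES item numerically and average the results.
# The 1-5 coding (including a "neutral" midpoint) is an assumed
# convention, not something the template above specifies.
CES_POINTS = {
    "strongly disagree": 1,
    "somewhat disagree": 2,
    "neutral": 3,
    "somewhat agree": 4,
    "strongly agree": 5,
}

def customer_effort_score(responses):
    """Average the coded responses to the 'made my process easy' statement."""
    return sum(CES_POINTS[r.lower()] for r in responses) / len(responses)

print(customer_effort_score(["somewhat agree", "strongly agree"]))  # → 4.5
```

A higher average suggests customers found the experience easy; tracking the score over time matters more than any single reading.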
6. Demographic Questionnaire Template
Here's a template for surveying customers to learn more about their demographic background. You could substantiate the analysis of this questionnaire by corroborating the data with other information from your web analytics, internal customer data, and industry data.
1. How would you describe your employment status?
- Employed full-time
- Employed part-time
- Freelance/contract employee
2. How many employees work at your company?
3. How would you classify your role?
- Individual Contributor
4. How would you classify your industry?
Below, we have curated a list of questionnaire examples that do a great job of gathering valuable qualitative and quantitative data.
4 Questionnaire Examples
1. Customer Satisfaction Questions
Rating scale questions offer a scale of numbers and ask respondents to rate topics based on the sentiments assigned to that scale. This is effective when assessing customer satisfaction.
Rating scale survey question example: "Rate your level of satisfaction with the customer service you received today on a scale of 1-10."
Yes or No Questions

Yes or no survey questions are a type of dichotomous question. These are questions that only offer two possible responses. They’re useful because they’re quick to answer and can help with customer segmentation.
Yes or no survey question example: "Have you ever used HubSpot before?"
Likert Scale Questions

Likert scale questions assess whether a respondent agrees with the statement, as well as the extent to which they agree or disagree.
These questions typically offer five or seven responses, with sentiments ranging from "strongly disagree" to "strongly agree." Check out this post to learn more about the Likert scale.
Likert scale survey question example: “How satisfied are you with the service from [brand]?”
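Analytically, Likert responses are usually coded as numbers and summarized with an average or a distribution. A minimal Python sketch, assuming the common 1-5 coding (the labels and numbers here are a convention, not taken from this article):

```python
# Map Likert labels to numeric scores. The 1-5 coding is an assumption,
# matching the common five-point convention.
LIKERT_SCALE = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def likert_mean(responses):
    """Convert label responses to scores and return their average."""
    scores = [LIKERT_SCALE[r.lower()] for r in responses]
    return sum(scores) / len(scores)

# Example: four responses to a single Likert item.
print(likert_mean(["agree", "strongly agree", "neutral", "agree"]))  # → 4.0
```

Reporting the full distribution alongside the mean is often more informative, since a 3.0 average can hide a polarized split between 1s and 5s.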
Open-Ended Questions

Open-ended questions ask a broader question or offer a chance to elaborate on a response to a close-ended question. They're accompanied by a text box that leaves room for respondents to write freely. This is particularly important when asking customers to expand on an experience or recommendation.
Open-ended survey question example: "What are your personal goals for using HubSpot? Please describe."
Matrix Table Questions

A matrix table is usually a group of multiple-choice questions grouped in a table. Choices for these survey questions are usually organized in a scale. This makes it easier to understand the relationships between different survey responses.
Matrix table survey question example: "Rate your level of agreement with the following statements about HubSpot on a scale of 1-5."
Rank Order Scaling
These questions ask respondents to rank a set of terms by order of preference or importance. This is useful for understanding customer priorities.
Rank order scaling example: "Rank the following factors in order of importance when choosing a new job."
Semantic Differential Scale
This scale features pairs of opposite adjectives that respondents use for rating, usually for a feature or experience. This type of question makes it easier to understand customer attitudes and beliefs.
Semantic differential scale question example: "Rate your overall impression of this brand as friendly vs. unfriendly, innovative vs. traditional, and boring vs. exciting."
Side-by-Side Matrix

This matrix table format includes two sets of questions horizontally for easy comparison. This format can help with customer gap analysis.
Side-by-side matrix question example: "Rate your level of satisfaction with HubSpot's customer support compared to its ease of use."
Stapel Scale

The Stapel rating scale offers a single adjective or idea for rating. It uses a numerical scale, typically running from -5 to +5 with no neutral zero point. This survey question type helps with in-depth analysis.
Stapel scale survey question example: "Rate your overall experience with this product from +5 (excellent) to -5 (terrible)."
Constant Sum Survey Questions
In this question format, people distribute points to different choices based on the perceived importance of each point. This kind of question is often used in market research and can help your team better understand customer choices.
Constant sum survey question example: "What is your budget for the following marketing expenses: Paid campaigns, Events, Freelancers, Agencies, Research."
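Constant-sum answers are normally validated before analysis: the points a respondent distributes must be non-negative and add up to a fixed total. A minimal sketch of that check, assuming a 100-point total (the total and category names are illustrative, not stated in the article):

```python
# A constant-sum response is only valid if the allocated points are
# non-negative and add up to the fixed total (100 is assumed here).
def validate_constant_sum(allocation, total=100):
    values = allocation.values()
    return all(v >= 0 for v in values) and sum(values) == total

# Hypothetical response to the marketing-budget question above.
budget = {"Paid campaigns": 40, "Events": 20, "Freelancers": 15,
          "Agencies": 15, "Research": 10}
print(validate_constant_sum(budget))  # → True
```

Most survey tools enforce this rule in the form itself, but re-checking it during analysis catches responses imported from other sources.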
Image Choice Questions

This survey question type shows several images. Then, it asks the respondent to choose the image that best matches their response to the question. These questions are useful for understanding your customers’ design preferences.
Image choice survey question example: "Which of these three images best represents your brand voice?"
Choice Model Questions

This survey question offers a hypothetical scenario and then asks the respondent to choose from the presented options. It's a useful type of question when you are refining a product or strategy.
Choice model survey question example: "Which of these three deals would be most appealing to you?"
Click Map Questions
Click map questions present an image and ask respondents to click on specific areas of it in response to a question. This question type uses data visualization to learn about customer preferences for design and user experience.
Click map question example: "Click on the section of the website where you would expect to find pricing information."
Data Upload Questions

This survey question example asks the respondent to upload a file or document in response to a question. This type of survey question can help your team collect data and context that might be tough to collect otherwise.
Data upload question example: "Please upload a screenshot of the error you encountered during your purchase."
Benchmarkable Survey Questions

This question type asks a respondent to compare their answers to a group or benchmark. These questions can be useful if you're trying to compare buyer personas or other customer groups.
Benchmarkable survey question example: "Compare your company's marketing budget to other companies in your industry."
Good Survey Questions
- What is your favorite product?
- Why did you purchase this product?
- How satisfied are you with [product]?
- Would you recommend [product] to a friend?
- Would you recommend [company name] to a friend?
- If you could change one thing about [product], what would it be?
- Which other options were you considering before [product or company name]?
- Did [product] help you accomplish your goal?
- How would you feel if we did not offer this product, feature, or service?
- What would you miss the most if you couldn't use your favorite product from us?
- What is one word that best describes your experience using our product?
- What's the primary reason for canceling your account?
- How satisfied are you with our customer support?
- Did we answer all of your questions and concerns?
- How can we be more helpful?
- What additional features would you like to see in this product?
- Are we meeting your expectations?
- How satisfied are you with your experience?
1. "What is your favorite product?"
This question is a great starter for your survey. Most companies want to know what their most popular products are, and this question cuts right to the point.
It's important to note that this question gives you the customer's perspective, not empirical evidence. You should compare the results to your inventory to see if your customers' answers match your actual sales. You may be surprised to find your customers' "favorite" product isn't the highest-selling one.
2. "Why did you purchase this product?"
Once you know their favorite product, you need to understand why they like it so much. The qualitative data will help your marketing and sales teams attract and engage customers. They'll know which features to advertise most and can seek out new leads similar to your existing customers.
3. "How satisfied are you with [product]?"
When you have a product that isn't selling, you can ask this question to see why customers are unhappy with it. If the reviews are poor, you'll know that the product needs reworking, and you can send it back to product management for improvement. Or, if these results are positive, they may have something to do with your marketing or sales techniques. You can then gather more info during the questionnaire and restrategize your campaigns based on your findings.
4. "Would you recommend [product] to a friend?"
This is a classic survey question used with most NPS® surveys. It asks the customer if they would recommend your product to one of their peers. This is extremely important because most people trust customer referrals more than traditional advertising. So, if your customers are willing to recommend your products, you'll have an easier time acquiring new leads.
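The NPS calculation itself is simple: on the standard 0-10 recommend scale, subtract the percentage of detractors (ratings 0-6) from the percentage of promoters (ratings 9-10). A rough Python sketch of that conventional formula:

```python
# Standard NPS formula: % promoters (ratings 9-10) minus % detractors
# (ratings 0-6) on a 0-10 "how likely are you to recommend us" scale.
def nps(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Example: 4 promoters, 2 passives, and 2 detractors out of 8 responses.
print(nps([10, 9, 8, 7, 6, 10, 3, 9]))  # → 25
```

Note that passives (7-8) pull the score toward zero by inflating the denominator without counting either way, so the result ranges from -100 to +100.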
5. "Would you recommend [company name] to a friend?"
Similar to the question above, this one asks the customer to consider your business as a whole and not just your product. This gives you insight into your brand's reputation and shows how customers feel about your company's actions. Even if you have an excellent product, your brand's reputation may be the cause of customer churn. Your marketing team should pay close attention to this question to see how they can improve the customer experience.
6. "If you could change one thing about [product], what would it be?"
This is a good question to ask your most loyal customers or ones that have recently churned. For loyal customers, you want to keep adding value to their experience. Asking how your product can improve helps your development team find flaws and increases your chances of retaining a valuable customer segment.
For customers that have recently churned, this question gives insight into how you can retain future users that are unhappy with your product or service. By giving these customers a space to voice their criticisms, you can either reach out and offer solutions or relay feedback for consideration.
7. "Which other options were you considering before [product or company name]?"
If you're operating in a competitive industry, customers will have more than one choice when considering your brand. And if you sell variations of your product or produce new models periodically, customers may prefer one version over another.
For this question, you should offer answers to choose from in a multiple-selection format. This will limit the types of responses you'll receive and help you get the exact information you need.
8. "Did [product] help you accomplish your goal?"
The purpose of any product or service is to help customers reach a goal. So, you should be direct and ask them if your company steered them toward success. After all, customer success is an excellent retention tool. If customers are succeeding with your product, they're more likely to stay loyal to your brand.
9. "How would you feel if we did not offer this product, feature, or service?"
Thinking about discontinuing a product? This question can help you decide whether a specific product, service, or feature would be missed if you removed it.
Even if you know that a product or service isn't worth offering, it's important to ask this question anyway because there may be a certain aspect of the product that your customers like. They'll be delighted if you can integrate that feature into a new product or service.
10. "If you couldn't use your favorite product from us, what would you miss the most about it?"
This question pairs well with the one above because it frames the customer's favorite product from a different point of view. Instead of describing why they love a particular product, the customer can explain what they'd be missing if they didn't have it at all. This type of question uncovers "fear of loss," which can be a very different motivating factor than "hope for gain."
11. "What word best describes your experience using our product?"
Your marketing team will love this question. A single word or a short phrase can easily sum up your customers’ emotions when they experience your company, product, or brand. Those emotions can be translated into relatable marketing campaigns that use your customers’ exact language.
If the responses reveal negative emotions, it's likely that your entire customer service team can relate to that pain point. Rather than calling it "a bug in the system," you can describe the problem as a "frustrating roadblock" to keep their experience at the forefront of the solution.
12. "What's the primary reason for canceling your account?"
Finding out why customers are unhappy with your product or service is key to decreasing your churn rate. If you don't understand why people leave your brand, it's hard to make effective changes to prevent future turnover. Or worse, you might alter your product or service in a way that increases your churn rate, causing you to lose customers who were once loyal supporters.
13. "How satisfied are you with our customer support?"
It's worth asking customers how happy they are with your support or service team. After all, an excellent product doesn't always guarantee that customers will stay loyal to your brand. Research shows that one in six customers will leave a brand they love after just one poor service experience.
14. "Did we answer all of your questions and concerns?"
This is a good question to ask after a service experience. It shows how thorough your support team is and whether they're prioritizing speed too much over quality. If customers still have questions and concerns after a service interaction, your support team is focusing too much on closing tickets and not enough on meeting customer needs.
15. "How can we be more helpful?"
Sometimes it's easier to be direct and simply ask customers what else you can do to help them. This shows a genuine interest in your buyers' goals, which helps your brand foster meaningful relationships with its customer base. The more you can show that you sincerely care about your customers' problems, the more they'll open up to you and be honest about how you can help them.
16. "What additional features would you like to see in this product?"
With this question, your team can get inspiration for the company's next product launch. Think of the responses as a wish list from your customers. You can discover what features are most valuable to them and whether they already exist within a competitor's product.
Incorporating every feature suggestion is nearly impossible, but it's a convenient way to build a backlog of ideas that can inspire future product releases.
17. "Are we meeting your expectations?"
This is a really important question to ask because customers won't always tell you when they're unhappy with your service. Not every customer will ask to speak with a manager when they're unhappy with your business. In fact, most will quietly move on to a competitor rather than broadcast their unhappiness to your company. To prevent this type of customer churn, you need to be proactive and ask customers if your brand is meeting their expectations.
18. "How satisfied are you with your experience?"
This question asks the customer to summarize their experience with your business. It gives you a snapshot of how the customer is feeling in that moment and their perception of your brand. Asking this question at the right stage in the customer's journey can tell you a lot about what your company is doing well and where you can stand to improve.
Next, let's dig into some tips for creating your own questionnaire.
- Start with templates as a foundation.
- Know your question types.
- Keep it brief when possible.
- Choose a simple visual design.
- Use a clear research process.
- Create questions with straightforward, unbiased language.
- Make sure every question is important.
- Ask one question at a time.
- Order your questions logically.
- Consider your target audience.
- Test your questionnaire.
1. Use questionnaire templates.
Rather than build a questionnaire from scratch, consider using questionnaire templates to get started. HubSpot's collection of customer-facing questionnaire templates can help you quickly build and send a questionnaire to your clients and analyze the results right on Google Drive.
2. Know your question types.
Vrnda LeValley, customer training manager at HubSpot, recommends starting with an alignment question like, "Does this class meet your expectations?" because it gives more context to any positive or negative scores that follow. She continues, "If it didn't meet expectations, then there will potentially be negative responses across the board (as well as the reverse)."
3. Keep it brief when possible.
Most questionnaires don't need to be longer than a page. For routine customer satisfaction surveys, it's unnecessary to ask 50 slightly varied questions about a customer's experience when those questions could be combined into 10 solid questions.
The shorter your questionnaire is, the more likely a customer is to complete it. Plus, a shorter questionnaire means less data for your team to collect and analyze, which makes it much easier to extract the information you need and make the necessary changes to your organization and products.
4. Choose a simple visual design.
There's no need to make your questionnaire a stunning work of art. As long as it's clear and concise, it will be attractive to customers. When asking questions that are important to furthering your company, it's best to keep things simple. Select a font that’s common and easy to read, like Helvetica or Arial. Use a text size that customers of all abilities can navigate.
A questionnaire is most effective when all the questions are visible on a single screen. The layout is important. If a questionnaire is even remotely difficult to navigate, your response rate could suffer. Make sure that buttons and checkboxes are easy to click and that questions are visible on both computer and mobile screens.
5. Use a clear research process.
Before planning questions for your questionnaire, you'll need to have a definite direction for it. A questionnaire is only effective if the results answer an overarching research question. After all, the research process is an important part of the survey, and a questionnaire is a tool that's used within the process.
In your research process, you should first come up with a research question. What are you trying to find out? What's the point of this questionnaire? Keep this in mind throughout the process.
After coming up with a research question, it's a good idea to have a hypothesis. What do you predict the results will be for your questionnaire? This can be structured in a simple "If … then …" format. A structured experiment — yes, your questionnaire is a type of experiment — will confirm that you're only collecting and analyzing data necessary to answer your research question. Then, you can move forward with your survey .
6. Create questions with straightforward, unbiased language.
When crafting your questions, it's important to structure them so they get your point across clearly. You don't want any confusion for your customers, because that may influence their answers. Instead, use clear language: avoid unnecessary jargon, and favor simple terms over longer-winded ones.
You may risk the reliability of your data if you try to combine two questions into one. Rather than asking, "How was your experience shopping with us, and would you recommend us to others?" split it into two separate questions. Customers will then be clear on what you're asking and can choose the most appropriate response to each.
You should always keep the language in your questions unbiased. You never want to sway customers one way or another because this will cause your data to be skewed. Instead of asking, "Some might say that we create the best software products in the world. Would you agree or disagree?" it may be better to ask, "How would you rate our software products on a scale of 1 to 10?" This removes any bias and confirms that all the responses are valid.
7. Ask only the most important questions.
When creating your questionnaire, keep in mind that time is one of the most valuable commodities for customers. Most aren't going to sit through a 50-question survey, especially when they're being asked about products or services they didn't use. Even if they do complete it, most of these will be half-hearted responses from fatigued customers who simply want to be finished with it.
Whether your questionnaire has five questions or 55, make sure each one has a specific purpose. Individually, they should be aimed at collecting certain pieces of information that reveal new insights into different aspects of your business. If your questions are irrelevant or seem out of place, your customers will be easily derailed by the survey. And, once the customer has lost interest, it'll be difficult to regain their focus.
8. Ask one question at a time.
Since every question has a purpose, ask them one at a time. This lets the customer focus and encourages them to share a thoughtful response. This is particularly important for open-ended questions where customers need to describe an experience or opinion.
By grouping questions together, you risk overwhelming busy customers who don't have time for a long survey. They may think you're asking them too much, or they might see your questionnaire as a daunting task. You want your survey to appear as painless as possible. Keeping your questions separated will make it more user-friendly.
9. Order your questions logically.
A good questionnaire is like a good book. The beginning questions should lay the framework, the middle ones should cut to the core issues, and the final questions should tie up all loose ends. This flow keeps customers engaged throughout the entire survey.
When creating your questionnaire, start with the most basic questions about demographics. You can use this information to segment your customer base and create different buyer personas.
Next, add in your product and services questions. These are the ones that offer insights into common customer roadblocks and where you can improve your business's offerings. Questions like these guide your product development and marketing teams as they look for new ways to enhance the customer experience.
Finally, you should conclude your questionnaire with open-ended questions to understand the customer journey. These questions let customers voice their opinions and point out specific experiences they've had with your brand.
10. Consider your target audience.
Whenever you collect customer feedback, you need to keep in mind the goals and needs of your target audience. After all, the participants in this questionnaire are your active customers. Your questions should be geared toward the interests and experiences they've already had with your company.
You can even create multiple surveys that target different buyer personas. For example, if you have a subscription-based pricing model, you can personalize your questionnaire for each type of subscription your company offers.
11. Test your questionnaire.
Once your questionnaire is complete, it's important to test it. If you don't, you may end up asking the wrong questions and collecting irrelevant or inaccurate information. Start by giving your employees the questionnaire to test, then send it to small groups of customers and analyze the results. If you're gathering the data you're looking for, then you should release the questionnaire to all of your customers.
How Questionnaires Can Benefit Your Customer Service Strategy
Whether you have one customer or 1,000 customers, their opinions matter when it comes to the success of your business. Their satisfaction with your offerings can reveal how well or how poorly your customer service strategy and business are meeting their needs. A questionnaire is one of the most powerful, cost-effective tools to uncover what your customers think about your business. When analyzed properly, it can inform your product and service launches.
Use the free questionnaire templates, examples, and best practices in this guide to conduct your next customer feedback survey.
Now that you know the slight difference between a survey and a questionnaire, it's time to put that knowledge into practice with your products or services. Remember, a good survey and questionnaire always start with a purpose. A great one, though, yields data you can act on to improve how customers respond to your products or services.
Net Promoter, Net Promoter System, Net Promoter Score, NPS, and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., Fred Reichheld, and Satmetrix Systems, Inc.
Editor's note: This post was originally published in July 2018 and has been updated for comprehensiveness.
Survey Questionnaire Development
Survey questionnaires are generally used for research purposes, to obtain data from the public or to determine the distribution of certain characteristics in a population. Formulating the questionnaire is one of the most vital stages in survey development. Most of the components of a survey are based on common sense, but there are certain aspects that authors should be aware of. The samples, questions and responses discussed here will help you construct your own survey questionnaires.
Some of the ground rules to be kept in mind before writing the first word:
- Each question should relate directly to the objectives of the survey questionnaire.
- Every respondent should be able to answer all questions with ease.
- Each question should be worded so that all respondents interpret it in the same way.
- Each question should yield answers to what you need to know, not merely what would be nice to know.
There are four main parts to a survey questionnaire. Though each part is different, all of them are necessary for drafting a good questionnaire.
The four main parts of a survey questionnaire are the invitation, the introduction, the question types and the close. The invitation can be delivered in various ways, such as emails, website links or online advertising, and it should cover five things: a brief introduction, why these particular respondents were selected, how long the questionnaire will take, what benefit respondents get for answering, and how their responses will be used by the researchers.
The introduction of the survey questionnaire should be engaging and should clearly state the aim of the research.
There are five steps in writing a survey questionnaire: determining the aim, finalizing the attributes to measure, identifying the target audience, selecting the scales of measurement, and checking the reliability and validity of the survey.
After formulating a survey questionnaire, it is essential to check for errors, since errors can keep the survey from meeting its goals. Random errors reduce a survey's reliability and arise when questions are poorly structured or presented inaccurately.
Writing survey questions
Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.
Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.
Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.
For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.
There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time so we often ensure that we update these trends on a regular basis to better understand whether people’s opinions are changing.
At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups , cognitive interviews, pretesting (often using an online, opt-in sample ), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.
Measuring change over time
Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.
When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.
The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Opinion trends that ask about sensitive topics (e.g., personal finances or attending religious services ) or that elicited volunteered answers (e.g., “neither” or “don’t know”) over the phone tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.
Open- and closed-ended questions
One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.
For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.
When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)
Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based off that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking, how they view a particular issue, or bring certain issues to light that the researchers may not have been aware of.
When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.
In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.
In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).
Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center's surveys are programmed to be randomized, so that the options are not presented in the same order to each respondent. Answers to questions can also be affected by the questions that precede them. Presenting questions in a different order to each respondent ensures that each question appears in each position (first, last or anywhere in between) the same number of times. This does not eliminate the potential impact of previous questions on the current question, or order effects generally, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people's vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
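Per-respondent randomization is straightforward to implement: each respondent gets an independently shuffled copy of the answer choices. A minimal sketch; the function name and issue list are hypothetical stand-ins, not the Center's actual survey software:

```python
import random

def randomized_options(options, rng=None):
    """Return an independently shuffled copy of the answer choices for
    one respondent, leaving the original list untouched."""
    rng = rng or random.Random()
    order = list(options)
    rng.shuffle(order)
    return order

# Hypothetical issue list; each respondent sees a different ordering,
# so no single option sits first or last for everyone.
issues = ["the economy", "health care", "national security",
          "energy", "education"]
for respondent_id in range(3):
    print(respondent_id, randomized_options(issues, random.Random(respondent_id)))
```

Because each shuffle is independent, any primacy or recency effect is distributed evenly across the options rather than concentrated on the same ones every time.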
Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
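The reversal approach for ordinal scales can be sketched the same way: keep the scale in order, but flip it for a random half of respondents. The function name is assumed; the scale wording comes from the abortion example above:

```python
import random

def ordinal_scale(categories, rng=None):
    """Keep an ordinal scale in order, but reverse it for a random half
    of respondents. Reversing (rather than shuffling) preserves the
    continuum while spreading any recency effect across both ends."""
    rng = rng or random.Random()
    cats = list(categories)
    return cats[::-1] if rng.random() < 0.5 else cats

scale = ["legal in all cases", "legal in most cases",
         "illegal in most cases", "illegal in all cases"]
print(ordinal_scale(scale))
```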
The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.
An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties, ” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.
There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:
First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive) and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.
It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.
In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., do you favor or oppose not allowing gays and lesbians to legally marry) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.
Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”
We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two forms of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
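The split-form logic described above can be sketched in code. This is a minimal illustration, not Pew’s actual procedure: respondents are randomly assigned to form A or B, and a two-proportion z-test checks whether the difference in answers between forms is larger than chance would explain. The respondent counts in the example are hypothetical.

```python
import math
import random

def assign_forms(respondent_ids, seed=0):
    """Randomly assign each respondent to questionnaire form A or B."""
    rng = random.Random(seed)  # seeded for reproducibility
    return {rid: rng.choice(["A", "B"]) for rid in respondent_ids}

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """Two-proportion z-test for a wording difference between the forms."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    p_pool = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 51% vs. 44% favoring, 500 respondents per form
z = two_proportion_z(255, 500, 220, 500)
print(round(z, 2))  # |z| > 1.96 means the wording difference is significant at the 5% level
```

Because assignment is random, the two groups can be treated as essentially identical, so a significant z statistic is attributable to the wording rather than to who answered each form.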
One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when there’s an interviewer present, rather than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.
One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).
Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.
Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).
One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.
For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).
An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.
Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.
Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).
Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.
Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).
The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).
A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.
About Pew Research Center: Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of The Pew Charitable Trusts.
Why Seek Professional Help for Questionnaire Development
A questionnaire is a standardised data collection tool used to put a consistent set of questions to research samples across subject domains. At PhD Thesis, we aim to design questionnaires that minimise bias and error, increase response rates, and support sound analysis. A carefully curated questionnaire leads to better decision making and well-grounded conclusions in extensive research work. We offer dedicated PhD thesis questionnaire design services in Bangalore for PhD research, covering various data collection tools, including questionnaires, surveys and interview guides. Whether you are conducting quantitative or qualitative research, our team follows the prescribed formats and caters to your specific research aims and objectives. We ensure that the questionnaires we design are reliable, valid and widely acceptable.
Multiple Aspects of Our Questionnaire Design
- 1. Target audience oriented
- 2. Caters to all research objectives
- 3. Comprehensible language for easy understanding
- 4. Sequential ordering/layout of questions for higher response rate
- 5. Format compatible with data analysis software to be used
What is included in PhD questionnaire designing service?
Extracting relevant information.
We make sure that the questionnaire is developed to elicit from the sample only the information relevant to your study objectives. This lets you obtain usable data without having to sift through large amounts of extraneous information. We ensure that all research questions are addressed through the different sections of the questionnaire.
Precision to achieve study goals
Our experts offering PhD thesis questionnaire design services in Bangalore carefully study the goals and design of your study in order to build the data collection tool best suited to gathering accurate data.
Open- and closed-ended questions
We help you design both open-ended and closed-ended questions, depending on how you want to capture the data for your specific research. The choice may also depend on your study type: qualitative, quantitative or mixed design.
Questions for easy interpretation
Further, we ensure that your questionnaire is designed with questions that make it easy for you to later analyse and interpret the responses received. For this purpose, we word them carefully and offer clear instructions to the respondents.
Ethical questionnaire design
Questions included in a study questionnaire take into account the target respondents’ sentiments and socio-economic backgrounds, and avoid trigger points or double-meaning questions that could result in biased or unreliable data.
Making the questionnaire flawless
After your questionnaire is designed, we check it for accuracy of language and format and clear it of every error, whether of grammar or layout. We also apply an appropriate rating scale, checklist or similar response options so that participants can convey their responses clearly.
Reliability and validity testing
We pilot the developed questionnaire on a small sample to test its reliability and validity. It is essential that your study tool produces consistent results when used repeatedly under the same conditions; it has to be a standardised instrument.
Gain an Edge with an Effective Data Collection Tool
Our statistical experts have in-depth knowledge of rating scales, question types, ethical issues, reliability and validity factors, and more, so they are able to provide an objective set of enquiries in the form of a questionnaire. Designing a questionnaire is not simply a matter of converting raw data into usable information; it demands detailed knowledge of a wide range of statistical methods and techniques, as well as research methods, to produce an effective survey tool.
As our statistical experts have experience developing a variety of data collection tools for PhD candidates across academic domains, they are skilled in handling every aspect of developing a study questionnaire for you. To know more about our questionnaire design services for PhD research, send us an enquiry email at [email protected] .
Chapter 9: Survey Research
Constructing Survey Questionnaires
- Describe the cognitive processes involved in responding to a survey item.
- Explain what a context effect is and give some examples.
- Create a simple survey questionnaire based on principles of effective item writing and organization.
The heart of any survey research project is the survey questionnaire itself. Although it is easy to think of interesting questions to ask people, constructing a good survey questionnaire is not easy at all. The problem is that the answers people give can be influenced in unintended ways by the wording of the items, the order of the items, the response options provided, and many other factors. At best, these influences add noise to the data. At worst, they result in systematic biases and misleading results. In this section, therefore, we consider some principles for constructing survey questionnaires to minimize these unintended effects and thereby maximize the reliability and validity of respondents’ answers.
Survey Responding as a Psychological Process
Before looking at specific principles of survey questionnaire construction, it will help to consider survey responding as a psychological process.
A Cognitive Model
Figure 9.1 presents a model of the cognitive processes that people engage in when responding to a survey item (Sudman, Bradburn, & Schwarz, 1996). Respondents must interpret the question, retrieve relevant information from memory, form a tentative judgment, convert the tentative judgment into one of the response options provided (e.g., a rating on a 1-to-7 scale), and finally edit their response as necessary.
Consider, for example, the following questionnaire item:
How many alcoholic drinks do you consume in a typical day?
- _____ a lot more than average
- _____ somewhat more than average
- _____ average
- _____ somewhat fewer than average
- _____ a lot fewer than average
Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether “alcoholic drinks” include beer and wine (as opposed to just hard liquor) and whether a “typical day” is a typical weekday, typical weekend day, or both. Chang and Krosnick (2003) found that asking about “typical” behaviour is more valid than asking about “past” behaviour, although their study compared a “typical week” with the “past week,” and the result may differ for typical weekdays versus weekend days. Once they have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., “I am not much of a drinker”). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this mental calculation might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does “average” mean, and what would count as “somewhat more” than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink much more than average, they might not want to report the higher number for fear of looking bad in the eyes of the researcher.
From this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.
Context Effects on Questionnaire Responses
Again, this complexity can lead to unintended influences on respondents’ answers. These are often referred to as context effects because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990). For example, there is an item-order effect when the order in which the items are presented affects people’s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988). When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting the dating frequency first made that information more accessible in memory so that they were more likely to base their life satisfaction rating on it.
The response options provided can also have unintended effects on people’s responses (Schwarz, 1999). For example, when people are asked how often they are “really irritated” and given response options ranging from “less than once a year” to “more than once a month,” they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from “less than once a day” to “several times a month,” they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. For example, people are likely to report watching more television when the response options are centred on a middle option of 4 hours than when centred on a middle option of 2 hours. To mitigate order effects, rotate questions and response options when there is no natural order. Counterbalancing is good practice for survey questions and can reduce response-order effects; research on these effects has found that, among undecided voters, the first candidate listed on a ballot receives about a 2.5% boost simply by virtue of being listed first!
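Per-respondent rotation of response options can be sketched as follows. The function and the idea of seeding the shuffle with a respondent identifier (so each person’s presented order is reproducible) are illustrative assumptions, not a prescribed survey-platform API; note that ordered rating scales are deliberately left in their natural order.

```python
import random

def presented_options(options, respondent_id, ordered=False):
    """Return the response options in the order shown to one respondent.

    Unordered (nominal) options are randomized per respondent to
    counterbalance response-order effects; ordered rating scales keep
    their natural order, which respondents expect.
    """
    if ordered:
        return list(options)
    rng = random.Random(respondent_id)  # reproducible per-respondent order
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

candidates = ["Candidate A", "Candidate B", "Candidate C"]
print(presented_options(candidates, respondent_id=42))          # some permutation
print(presented_options(["Never", "Sometimes", "Always"], 42,
                        ordered=True))                          # natural order kept
```

Averaged over many respondents, each candidate appears in each position about equally often, so no single option gets the first-position boost.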
Writing Survey Questionnaire Items
Types of items.
Questionnaire items can be either open-ended or closed-ended. Open-ended items simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.
- “What is the most important thing to teach children to prepare them for life?”
- “Please describe a time when you were discriminated against because of your age.”
- “Is there anything else you would like to tell us about?”
Open-ended items are useful when researchers do not know how participants might respond or want to avoid influencing their responses. They tend to be used when researchers have more vaguely defined research questions—often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis. The advantage of open-ended items is that they are unbiased and do not provide respondents with expectations of what the researcher might be looking for. Open-ended items are also more valid and more reliable. The disadvantage is that respondents are more likely to skip open-ended items because they take longer to answer. It is best to use open-ended questions when the likely answers are unknown and for quantities that can easily be converted to categories later in the analysis.
Closed-ended items ask a question and provide a set of response options for participants to choose from. The alcohol item just mentioned is an example, as are the following:
How old are you?
- _____ Under 18
- _____ 18 to 34
- _____ 35 to 49
- _____ 50 to 70
- _____ Over 70
On a scale of 0 (no pain at all) to 10 (worst pain ever experienced), how much pain are you in right now?
Have you ever in your adult life been depressed for a period of 2 weeks or more?
Closed-ended items are used when researchers have a good idea of the different responses that participants might make. They are also used when researchers are interested in a well-defined variable or construct such as participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behaviour. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.
All closed-ended items include a set of response options from which a participant must choose. For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) that they belong to. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from. Figure 9.2 shows several examples. The number of response options on a typical rating scale ranges from three to 11—although five and seven are probably most common. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into an area of the scale; if asking about liking ice cream, first ask “Do you generally like or dislike ice cream?” Once the respondent chooses like or dislike, refine it by offering them one of the choices from the seven-point scale. Branching improves both reliability and validity (Krosnick & Berent, 1993). Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial labels and lengthy or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics. The last rating scale shown in Figure 9.2 is a visual-analog scale, on which participants make a mark somewhere along the horizontal line to indicate the magnitude of their response.
What is a Likert Scale?
In reading about psychological research, you are likely to encounter the term Likert scale. Although this term is sometimes used to refer to almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much more precise meaning.
In the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a new approach for measuring people’s attitudes (Likert, 1932). It involves presenting people with several statements—including both favourable and unfavourable statements—about some person, group, or idea. Respondents then express their agreement or disagreement with each statement on a 5-point scale: Strongly Agree , Agree , Neither Agree nor Disagree , Disagree , Strongly Disagree . Numbers are assigned to each response (with reverse coding as necessary) and then summed across all items to produce a score representing the attitude toward the person, group, or idea. The entire set of items came to be called a Likert scale.
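The scoring procedure just described—numeric coding, reverse coding of unfavourable items, then summing—can be sketched as follows. The item names and the set of reverse-coded items are hypothetical; only the 5-point coding scheme comes from the description above.

```python
# Verbal labels coded 1-5, as in a standard 5-point Likert scale
LABELS = {
    "Strongly Disagree": 1, "Disagree": 2, "Neither Agree nor Disagree": 3,
    "Agree": 4, "Strongly Agree": 5,
}

def likert_score(responses, reverse_coded):
    """Sum verbal responses into an attitude score, reverse-coding
    unfavourable items so a higher total always means a more
    favourable attitude."""
    total = 0
    for item, label in responses.items():
        value = LABELS[label]
        if item in reverse_coded:
            value = 6 - value  # flip a 5-point scale: 1<->5, 2<->4
        total += value
    return total

responses = {"item1": "Agree",
             "item2": "Strongly Disagree",   # an unfavourable statement
             "item3": "Neither Agree nor Disagree"}
print(likert_score(responses, reverse_coded={"item2"}))  # 4 + (6-1) + 3 = 12
```

Strongly disagreeing with an unfavourable statement counts the same as strongly agreeing with a favourable one, which is exactly what reverse coding is for.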
Thus unless you are measuring people’s attitude toward something by assessing their level of agreement with several statements about it, it is best to avoid calling it a Likert scale. You are probably just using a “rating scale.”
Writing Effective Items
We can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants’ responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000). The acronym BRUSO stands for “brief,” “relevant,” “unambiguous,” “specific,” and “objective.” Effective questionnaire items are brief and to the point. They avoid long, overly technical, or unnecessary words. This brevity makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also relevant to the research question. If a respondent’s sexual orientation, marital status, or income is not relevant, then items on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even “nosy” questions. Effective questionnaire items are also unambiguous ; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes “an alcoholic drink” or “a typical day.” Effective questionnaire items are also specific , so that it is clear to respondents what their response should be about and clear to researchers what it is about. A common problem here is closed-ended items that are “double barrelled.” They ask about two conceptually separate issues but allow only one response. For example, “Please rate the extent to which you have been feeling anxious and depressed.” This item should probably be split into two separate items—one about anxiety and one about depression. Finally, effective questionnaire items are objective in the sense that they do not reveal the researcher’s own opinions or lead participants to answer in a particular way.
Table 9.2 shows some examples of poor and effective questionnaire items based on the BRUSO criteria. The best way to know how people interpret the wording of the question is to conduct pre-tests and ask a few people to explain how they interpreted the question.
For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of Christian and Catholic are not mutually exclusive but Protestant and Catholic are. Exhaustive categories cover all possible responses.
Although Protestant and Catholic are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select: Jewish, Hindu, Buddhist, and so on. In many cases, it is not feasible to include every possible category, in which case an Other category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.
For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:
Unlikely | Somewhat Likely | Likely | Very Likely | Extremely Likely
A balanced version might look like this:
Extremely Unlikely | Somewhat Unlikely | As Likely as Not | Somewhat Likely | Extremely Likely
Note, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Including middle alternatives on bipolar dimensions is useful to allow people to genuinely choose an option that is neither.
Formatting the Questionnaire
Writing effective items is only one part of constructing a survey questionnaire. For one thing, every survey questionnaire should have a written or spoken introduction that serves two basic functions (Peterson, 2000). One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary, either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research, however, usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail, and the researcher must make a good case for why they should agree to participate. Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.
The second function of the introduction is to establish informed consent. Remember that this means describing to respondents everything that might affect their decision to participate: the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not typically used in survey research, so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.
The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.
- Responding to a survey item is itself a complex cognitive process that involves interpreting the question, retrieving information, making a tentative judgment, putting that judgment into the required response format, and editing the response.
- Survey questionnaire responses are subject to numerous context effects due to question wording, item order, response options, and other factors. Researchers should be sensitive to such effects when constructing surveys and interpreting survey results.
- Survey questionnaire items are either open-ended or closed-ended. Open-ended items simply ask a question and allow respondents to answer in whatever way they want. Closed-ended items ask a question and provide several response options that respondents must choose from.
- Use verbal rather than numerical labels for response options, although the responses can be converted to numerical data during analysis.
- According to the BRUSO model, questionnaire items should be brief, relevant, unambiguous, specific, and objective.
- Discussion: Write a survey item and then write a short description of how someone might respond to that item based on the cognitive model of survey responding (or choose any item on the Rosenberg Self-Esteem Scale).
- Practice: Write a survey item for each of the following general questions:
- How much does the respondent use Facebook?
- How much exercise does the respondent get?
- How likely does the respondent think it is that the incumbent will be re-elected in the next presidential election?
- To what extent does the respondent experience “road rage”?
Figure 9.1 long description: Flowchart modelling the cognitive processes involved in responding to a survey item. In order, these processes are:
- Question Interpretation
- Information Retrieval
- Judgment Formation
- Response Formatting
- Response Editing
Figure 9.2 long description: Three different rating scales for survey questions. The first scale provides a choice between “strongly agree,” “agree,” “neither agree nor disagree,” “disagree,” and “strongly disagree.” The second is a scale from 1 to 7, with 1 being “extremely unlikely” and 7 being “extremely likely.” The third is a sliding scale, with one end marked “extremely unfriendly” and the other “extremely friendly.”
Figure 9.3 long description: A note reads, “Dear Isaac. Do you like me?” with two check boxes reading “yes” or “no.” Someone has added a third check box, which they’ve checked, that reads, “There is as yet insufficient data for a meaningful answer.”
- “Study” by XKCD, CC BY-NC (Attribution NonCommercial)
- Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
- Chang, L., & Krosnick, J. A. (2003). Measuring the frequency of regular behaviors: Comparing the “typical week” to the “past week.” Sociological Methodology, 33, 55–80.
- Schwarz, N., & Strack, F. (1990). Context effects in attitude surveys: Applying cognitive theory to social research. In W. Stroebe & M. Hewstone (Eds.), European review of social psychology (Vol. 2, pp. 31–50). Chichester, UK: Wiley.
- Strack, F., Martin, L. L., & Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life satisfaction. European Journal of Social Psychology, 18, 429–442.
- Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54, 93–105.
- Miller, J. M., & Krosnick, J. A. (1998). The impact of candidate name order on election outcomes. Public Opinion Quarterly, 62(3), 291–330.
- Krosnick, J. A., & Berent, M. K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 27(3), 941–964.
- Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1–55.
- Peterson, R. A. (2000). Constructing effective questionnaires. Thousand Oaks, CA: Sage.
Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions.
The order in which the items are presented affects people’s responses.
A questionnaire item that allows participants to answer in whatever way they choose.
A questionnaire item that asks a question and provides a set of response options for participants to choose from.
An ordered set of responses that participants must choose from.
A guideline for questionnaire items that suggests they should be brief, relevant, unambiguous, specific, and objective.
Research Methods in Psychology - 2nd Canadian Edition by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.