Reading time: 12 minutes

This article contains an example UX survey, with sample questions and answers, the metrics you can get from each, and justifications for including them.

For more information on the overall UX survey structure, please view our article on crafting effective UX surveys.

Project and organisation summary for our example

Our example UX survey is based on a project where an existing search centre is being replaced, involving a complete rebuild and migration. We have 250 active users and around 100,000 research documents.

For our scenario, we are addressing 3 primary pain points in the old system, with some high-level business requirements to help illustrate why we chose the example questions and answers:

  1. Documents are difficult to find, and without a preview or accurate metadata, our users download frequently used documents to their desktops for future reference, causing concurrency issues (where their local version is out of date with the version on the server).
    • Improve search so users can find documents easily.
    • Implement metadata filters to reduce large result sets.
    • A document preview should be provided for visual distinction.
  2. Documents cannot be edited on the server, and require users to download, edit and re-upload documents when making changes.
    • Allow viewing and editing documents on the server.
    • Allow sharing a link to a document.
    • Discourage users from saving documents locally.
  3. New starters have difficulties navigating the system and finding documents. This causes them to lean on other employees to help them complete their tasks.
    • Provide more intuitive UX.
    • Search should interpret user intent and not just perform plain-text search.

Our goals

The new search centre has been completed, and employees have been using it for a month now.

We need a survey that answers the following high-level business questions:

  1. Can users find documents more easily?
    • How has it improved?
    • How much time are we saving?
  2. Is it an improvement on the old system?
    • What features provide the most value?
    • How much of an improvement?
  3. Does it support good working habits?
    • Are users still finding workarounds to our preferred way of working?
  4. Does it work equally well for new starters and established users?
    • Can new starters find what they’re looking for without assistance?

For good quantitative metrics, we need a baseline. We survey users on their experience in the old system while it is still actively in use, then ask the same set of questions about a month after users transition to the new system, so the two sets of results can be compared.

Closed-ended questions (quantitative)

Closed-ended questions offer predefined answer options, making it easy to analyse data quantitatively. The answers we crafted provide clear signals and metrics (using Google’s Goals, Signals, Metrics process). They are specific, with no overlap or room for interpretation.

1. How well do you need to know the document you are searching for?

  1. Search is very specific – you need to get the wording exactly right.
  2. Search is interpretive but requires some knowledge of the content.
  3. Search is intuitive and understands what I’m after, even though I’m not using the exact text from the documents.

Metrics

  • Can users with little knowledge find what they are after?
    • 1 = no
    • 2 = further improvements are required
    • 3 = yes
  • Provides insights on adoption and retention rates for new and existing users, and assumptions about habits.
    • 1 = bad adoption / retention – users will most likely revert back to saving documents locally.
    • 2 = partial adoption with some users still saving documents locally.
    • 3 = good adoption and few users saving documents locally.

2. On average, how many times do you need to search before you find what you’re looking for?

  1. Once.
  2. Twice.
  3. Three times.
  4. Four or more times.

Metrics

  • Assign time values to each search, e.g.: 2 mins. to search and review results:
    • 1 search = 2 minutes
    • 2 searches = 4 minutes
    • 3 searches = 6 minutes
    • 4+ searches = 10 minutes
  • How much time are we saving?
    • Compare results from the old system to the new system with some assumptions:
      • On average, each user looks for a document 3 times per day.
      • We have 250 active users.
  • Example metrics:
    • Finding documents in the old system = 110 hours per day
      • 70% said 4+ searches.
        • Searches (3) x Minutes (10) x Employees (175) = 87.5 hours across users.
      • 30% said 3 searches.
        • Searches (3) x Minutes (6) x Employees (75) = 22.5 hours across users.
    • Finding documents in the new system = 42.5 hours per day
      • 50% said 1 search.
        • Searches (3) x Minutes (2) x Employees (125) = 12.5 hours across users.
      • 30% said 2 searches.
        • Searches (3) x Minutes (4) x Employees (75) = 15 hours across users.
      • 20% said 3 searches.
        • Searches (3) x Minutes (6) x Employees (50) = 15 hours across users.
    • Tangible improvements:
      • 61% improvement in efficiency
      • 67.5 hours saved per day (16.2 minutes per user)
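If you re-run this survey regularly, it helps to script the calculation so the figures update as new responses arrive. The sketch below (in Python) reproduces the worked example above; the 2/4/6/10-minute time values, the 3 lookups per day and the 250 active users are the same assumptions listed in the bullets, and the response percentages are the example figures, not real data.

```python
# Minimal sketch: convert question 2 responses into "hours spent finding documents per day".
# Assumptions (from the example above): 250 active users, 3 document lookups per day,
# and 2 / 4 / 6 / 10 minutes for 1 / 2 / 3 / 4+ searches per lookup.

ACTIVE_USERS = 250
LOOKUPS_PER_DAY = 3
MINUTES_PER_ANSWER = {"1 search": 2, "2 searches": 4, "3 searches": 6, "4+ searches": 10}

def hours_per_day(response_share):
    """response_share maps an answer option to the fraction of respondents who chose it."""
    total_minutes = 0.0
    for answer, share in response_share.items():
        employees = ACTIVE_USERS * share
        total_minutes += LOOKUPS_PER_DAY * MINUTES_PER_ANSWER[answer] * employees
    return total_minutes / 60

old_system = hours_per_day({"3 searches": 0.30, "4+ searches": 0.70})                   # 110.0 hours
new_system = hours_per_day({"1 search": 0.50, "2 searches": 0.30, "3 searches": 0.20})  # 42.5 hours

saved = old_system - new_system               # 67.5 hours per day
improvement = saved / old_system * 100        # ~61%
minutes_per_user = saved * 60 / ACTIVE_USERS  # 16.2 minutes per user

print(f"Old: {old_system} h/day, New: {new_system} h/day, Saved: {saved} h/day "
      f"({improvement:.0f}% improvement, {minutes_per_user:.1f} min per user)")
```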

3. Did you apply any filters before seeing the desired results?

  • No, I did not use the filters.
  • One filter.
  • Two or more filters.

Metrics

We can assume that metadata is used for search indexing, search filtering and categorising documents.

  • Is document metadata useful for search? This may indicate whether maintaining metadata for search is worth the effort.
    • No engagement with filters indicates that the filters are not useful, unnecessary, or not prominent enough.
    • Depending on the answers to questions 2 and 4, this may require a follow-up with specific users to determine whether further action is required.

4. How far down in the results was the item you searched for?

  • In the top 3 (top of page 1)
  • In the top 10 (page 1)
  • 11 to 20 (page 2)
  • 21+ (page 3 and beyond)

Metrics

This question measures the effectiveness of search functionality (search box and filters).

  • The time users spend scanning through search results can be calculated in a similar way to question 2, where each set of results is assigned a length in minutes.
  • Depending on the answers to questions 2 and 3, we may want to apply a weight to individual responses:
    • The business may consider additional user training if only search was performed, no filters were used, and the desired result was on page 2 or 3.
    • Consider improving the search experience if users performed multiple searches, applied filters, and still could not find their result in the first 10 results.
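If you want to act on these combinations consistently, the weighting can be expressed as a simple rule. The sketch below is only an illustration of the two follow-up scenarios above; the thresholds and return labels are assumptions, not survey requirements.

```python
# Illustrative sketch: suggest a follow-up action for a single response, based on the answers
# to question 2 (searches), question 3 (filters) and question 4 (result position).

def follow_up_action(searches, filters_used, result_position):
    """searches: 1-4 (4 = four or more); filters_used: 0, 1 or 2 (2 = two or more);
    result_position: 'top 3', 'top 10', '11-20' or '21+'."""
    deep_in_results = result_position in ("11-20", "21+")
    if filters_used == 0 and deep_in_results:
        # Search only, no filters, result on page 2 or beyond: training may help.
        return "consider additional user training"
    if searches >= 2 and filters_used >= 1 and deep_in_results:
        # Multiple searches plus filters still didn't surface the result: review search itself.
        return "consider improving the search experience"
    return "no action"

print(follow_up_action(searches=1, filters_used=0, result_position="21+"))    # training
print(follow_up_action(searches=3, filters_used=2, result_position="11-20"))  # improve search
```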

More quantitative examples

You may be tempted to add a None of the above option to the questions below, but instead, we recommend making these questions optional. That way, users who don’t check an option aren’t blocked from submitting their answers, which could otherwise lead to abandonment.

On a scale of 1 to 5, how satisfied are you with the customer support portal?

  1. Very dissatisfied
  2. Dissatisfied
  3. Neutral
  4. Satisfied
  5. Very satisfied

To improve this question, you may want to consider removing the Neutral option and being more specific and playful with the answers, e.g.:

  1. It’s impossible to use.
  2. It works but could be better.
  3. I like it and have ideas for further improvements.
  4. I love it and wouldn’t change a thing.

Which device do you primarily use to access the search centre?

  1. Desktop computer
  2. Laptop
  3. Smartphone
  4. Tablet
  5. Other (please specify)

How often do you use the search centre?

  1. Never
  2. Rarely
  3. Occasionally
  4. Frequently
  5. Always

To improve this question, you may want to consider options that are less open to interpretation, e.g.:

  1. Never
  2. Once a month
  3. Once a week
  4. Every few days
  5. Every day

Are you interested in receiving our monthly newsletter for product updates?

  1. Yes, please!
  2. Maybe, send me more details
  3. No, not interested

Which of the following features would you like to see in our mobile app? (Select all that apply)

  1. Push notifications
  2. In-app chat
  3. Enhanced search functionality
  4. Exclusive offers

Example open-ended questions (qualitative)

These questions encourage respondents to provide detailed, written responses. They are valuable for uncovering nuances and insights that quantitative data might miss, e.g.:

  • What do you like most about the new search centre?
  • What do you find most challenging when navigating the search centre? Please provide specific examples.
  • Tell us about a specific feature or aspect of the search centre that you find helpful or enjoyable.
  • If you could change one thing about the search centre to make it better, what would it be, and why?
  • Please share any additional comments or suggestions you have for improving the search experience.

Example demographic questions

Collecting basic demographic data alongside user feedback allows you to create user personas and segment feedback based on these characteristics. This segmentation can lead to more personalised and effective user experiences, as you can address the unique needs and preferences of different user groups.
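As a small illustration of that segmentation, the snippet below groups a satisfaction score by one demographic column using pandas; the column names and the data are hypothetical, purely to show the shape of the analysis.

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, a demographic column
# ("tech_proficiency") and a 1-5 satisfaction score from a closed-ended question.
responses = pd.DataFrame({
    "tech_proficiency": ["Beginner", "Advanced", "Beginner", "Intermediate", "Advanced"],
    "satisfaction": [2, 5, 3, 4, 4],
})

# Segment feedback: average satisfaction and respondent count per proficiency level.
by_segment = (responses
              .groupby("tech_proficiency")["satisfaction"]
              .agg(["mean", "count"])
              .sort_values("mean"))
print(by_segment)
```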

Be careful not to ask questions that are in violation of your organisation’s data privacy policies, or questions that may make your users uncomfortable. Always handle and store demographic data with care, ensuring privacy and compliance with data protection regulations.

Consider the following categories:

Age

  • Under 18 years old
  • 18-24 years old
  • 25-34 years old
  • 35-44 years old
  • 45-54 years old
  • 55-64 years old
  • 65 years old or older

Age can provide insights into generational preferences and expectations. Younger users may have different digital habits and preferences than older users.

Gender

  • Male
  • Female
  • Non-binary
  • Prefer not to say
  • Other (please specify)

Gender can help in identifying potential gender-related usability issues or preferences. It also aids in ensuring inclusivity in your designs.

Location

  • City
  • Suburb
  • Rural area
  • State/region
  • Country

Location data can be useful for understanding regional variations in user behaviour and preferences. It can also help in localizing content and services.

Education level

  • High school or lower
  • Some college or vocational training
  • Bachelor’s degree
  • Master’s degree
  • Doctorate or higher

Education level can indicate the user’s familiarity with technology and their ability to navigate complex interfaces or content.

Occupation

  • Student
  • Professional (e.g., doctor, lawyer, engineer)
  • Manager or supervisor
  • Clerical or administrative
  • Skilled tradesperson
  • Retired
  • Other (please specify)

Occupation can offer insights into users’ professional needs and how they might interact with your product or service in a work-related context.

Income level

  • Under $25,000
  • $25,000 – $49,999
  • $50,000 – $74,999
  • $75,000 – $99,999
  • $100,000 – $149,999
  • $150,000 or more

Income level can influence purchasing behaviour and affordability. It’s essential for businesses offering products or services with different price points.

Household size

  • 1 person
  • 2 people
  • 3 people
  • 4 people
  • 5 or more people

Household size can impact purchasing decisions and content consumption. It’s especially relevant for businesses targeting families.

Language spoken at home

  • English
  • French
  • Spanish
  • Other (please specify)

Language spoken at home is crucial for multilingual or international platforms, ensuring content is presented in the user’s preferred language.

Tech proficiency

  • Beginner
  • Intermediate
  • Advanced

Tech proficiency helps in tailoring user interfaces. Beginners may require simpler interfaces, while advanced users may benefit from advanced features.

Frequency of product/service use

  • Daily
  • Weekly
  • Monthly
  • Rarely
  • First-time user

Frequency of product/service use can help in segmenting users based on their engagement level, and helps identify power users and occasional users.

Don’t use the Likert scale

You might be tempted to construct your questions like this common Likert scale question with 5 options:

I can find documents easily in the new search centre

  • Strongly disagree
  • Disagree
  • Neither agree nor disagree
  • Agree
  • Strongly agree

Problems with this question

The answers offer room for interpretation (e.g., Agree and Strongly agree) depending on the mood of the user, and they provide no clear direction on what to do with negative feedback: if everyone selects Strongly disagree, we don’t know why unless we pair this with another question. Provide more specific options for users to select from.

Agree and Strongly agree are too similar, open to interpretation, and will vary based on how users feel about the organisation, not the product. Improve your results by being more specific.

Neither agree nor disagree is the safe choice to breeze through answers quickly, and provides no value to the user experience – consider removing it.

How to improve it

Ask the question more succinctly, and provide 4 clear scenarios so users can easily choose the option that best describes their experience. You can also add an “Other, please provide details” option for ad-hoc responses.

How easily can you find documents in the new search centre?

  • Impossible – I had to ask a co-worker for help.
  • Not easy – It took a bit of time, but after a few searches and some scrolling, I found what I was looking for.
  • Easy – My search returned a manageable set of results and I was able to find what I was looking for without too many refinements.
  • Very easy – My first search returned what I was looking for, towards the top of the results list.

This doesn’t give us clear metrics on the nuances of the search experience, but provides improved metrics on the overall UX.

Avoid bias

To ensure accurate data, the questions and answers, even negative ones, need to adopt a similar tone to avoid bias or leading the respondent. Evaluate each carefully, ensuring that the question is not leading and that the answers do not negatively reflect on the user.

The example below illustrates a question with a strong bias, followed by a version with neutral language.

Biased: How much do you agree that our website’s cutting-edge design greatly improves your overall experience compared to outdated and dull websites?

  1. Strongly agree – we’re all awesome!
  2. Agree – our website is better than other websites.
  3. Neutral – I don’t care.
  4. Disagree – it’s just as dull as the rest.
  5. Strongly disagree – I hate it.

This question contains bias because it uses subjective and leading language such as “cutting-edge design,” “greatly improves,” and “outdated and dull websites.” It assumes that the website’s design is cutting-edge and that it is better than outdated and dull websites.

The answers are exaggerated in this example so you can clearly see the bias in answers 1 and 2. The other answers negatively reflect on the respondent, and even if they don’t like the website, it’s unlikely that they’ll choose 3, 4 or 5.

This bias can influence respondents to provide positive feedback.

Unbiased: Please share your thoughts on the website’s design.

  1. I find the design visually appealing.
  2. The design is user-friendly and easy to navigate.
  3. I have no strong opinion about the design.
  4. The design could be improved for better user experience.
  5. I find the design unappealing.

The revised question eliminates bias by using neutral language and allowing respondents to express their opinions without leading them toward a particular response. This question gathers more objective feedback about the website’s design.

Summary and further development

UX surveys are not just about improving the overall user experience and eliminating pain points, but are also valuable in measuring improvement and justifying the associated costs. They can also be invaluable in justifying future projects to potential investors and stakeholders.

We hope you found our example UX survey questions helpful. To learn more about the overall survey structure, view our article on crafting effective UX surveys.

Last updated 14 May 2024

About the Author: Stephan

With 20 years of industry experience as a UX specialist, designer and developer, I enjoy teaching and sharing insights about UX, accessibility and best practices for e-commerce and the web.
