Nov 24, 2020
Consider this: you’re developing a brand identity, and you’ve sketched important design elements, including the logo, the website, and so on. With the sketches ready, you're sure these elements show your brand’s cheeky personality.
But hang on a second: isn’t this brand perception an idea in your mind? Who’s to say your target audience will feel the same?
Enter: preference tests that can help you pick your users’ brains early on in the design phase. Not only do these tests help you understand your users' perceptions of your brand, but they also help you learn which visual design appeals to them.
As Mitchelle Chibundu, Lead Product Designer at Flutterwave puts it, preference tests give you “insight into which designs resonate more with your users.”
Not sure how to conduct a preference test? We’ve put together this guide to cover that and much more, including what a preference test is, its limitations, and how it differs from A/B testing.
Let’s get going.
Preference testing is a research method that involves sharing 2-3 design variants with participants and asking which one they prefer and why. It can help you learn which design direction resonates with users and how they perceive each variant.
But here’s the key to crushing preference tests: don’t just show your designs to test participants and ask them which they like or find trustworthy. Instead, also ask follow-up questions to understand why. This gives you qualitative feedback, which helps you understand how to improve the prototype based on user preference.
Not sure whether to gather qualitative or quantitative data? Keep in mind that you can tap into either of the two types of preference tests that Jolene Tan-Davidovic, Senior User Researcher at N26, describes:
Qualitative preference tests: “[These] are usually in the form of an interview where a user is shown one or more versions of a design and then asked which one they prefer. The users are also probed on what their impressions are of each design, their attitudes about it, and why they prefer one over the other.”
Quantitative preference tests: “[These] can take the form of a survey with users selecting which design they prefer and what their attitudes are about each design. This allows us to get feedback from many more users than qualitative tests, enabling us to be more confident that the findings are generalizable to all users.”
That said, Tan-Davidovic warns, “quantitative tests are suitable only when the design is relatively straightforward and doesn’t cut across multiple screens. It only makes sense to conduct a quantitative preference test when we're sure that we understand the reasons why users would prefer one design over the other and the context that the user is facing.”
Preference tests come in the early design phase “before [you] invest to refine the design,” advises Tan-Davidovic. At this point, you’d “want to understand which is a more viable direction and why.” A preference test can answer that for you.
If you’re planning a redesign, say a website redesign, you can conduct a preference test there as well, testing your design against a competitor’s.
That said, be aware: preference tests aren’t A/B tests. Want to learn more about how the two differ? Read on.
The short answer is: A/B tests come later on in the process when your design is close to ready and users can interact with it in a live setting to give their feedback. Preference tests, on the flip side, help in the early stages when you have a rough sketch or wireframe to share with test participants.
To go into detail, Chibundu explains, “Preference testing is more about understanding what designs the user prefers and why they prefer them before the product is completed.”
Example: You have three different homepage layout sketches for a new product ready. Conduct a preference test here to learn which layout potential users will prefer instead of making assumptions yourself.
“On the other hand, A/B Testing is KPI-based,” continues Chibundu. “It's about finding out if behaviors are being influenced by different variants and how people use a product to achieve a goal.”
Example: Let’s say there’s been a recent drop in your eStore’s newsletter signups. You have a few options to encourage more signups, such as a different CTA box color in each design variant. Show these options to users in an A/B test to learn which CTA color gets more signups.
So you’ve decided to do some preference testing. How do you go about conducting one? Follow these steps:
Do you plan to understand which design variant users prefer? Or, do you want to learn how they perceive each design? Whatever your research objective is, lay it out at the top of your research board so that you can share it with your test participants.
At the same time, settle on which type of feedback you want to gather – is it going to be qualitative or quantitative? Don’t forget to ensure all design variants are handy.
Tip: “Try to keep the test variants between two or three because when it gets more than these, it becomes difficult for people to make a comparison.”
Mitchelle Chibundu, Lead Product Designer @ Flutterwave
Based on whether you’re conducting qualitative or quantitative preference testing, plan how you’d want test participants to share their preferences.
👉 Ask open-ended questions where participants explain their choice. For such an interview, spend 15-30 minutes per participant, Chibundu recommends.
As for the questions to ask, Chibundu shares a list:
👉 Give them a closed list of adjectives. These could be 3-5 words that describe a design variant, for example, clean, minimal, classic, elegant, and so on.
👉 Give test participants an open word choice where you ask them to share 3-5 adjectives/words that they think describe the design variant.
👉 Gather numerical ratings: ask participants to rate how well each design conveys particular brand qualities.
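To illustrate how closed word-choice responses can be turned into something countable, here is a minimal Python sketch; the participant responses and adjectives are hypothetical examples, not real data:

```python
from collections import Counter

# Hypothetical closed word-choice responses for one design variant:
# each participant picked adjectives from a fixed list
responses = [
    ["clean", "minimal"],
    ["clean", "elegant"],
    ["classic", "clean"],
]

# Tally how often each adjective was chosen across participants
tally = Counter(word for picks in responses for word in picks)
# e.g. tally["clean"] == 3: all three participants chose "clean"
```

The same tallying works for numerical ratings, swapping adjectives for scores per brand quality.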
Tip: "To assess preference, you could ask 'In your opinion, which one would be a better solution?' or 'If you had to decide which one we would build, which would it be?'"
Jolene Tan-Davidovic, Senior User Researcher @ N26
“Participants tend to be polite and usually refrain from criticizing the design directly even if explicitly encouraged to be ‘brutally honest’. Because of this, it is better to use indirect methods to ask about their reasons for preferring one over the other.
For example, you could ask them how they would describe or recommend the feature to a friend and then you can take note of the positive and negative things that they mentioned. Or you could ask them where they think other people would have problems with this design,” adds Tan-Davidovic.
As is the user testing rule of thumb, you need to find test participants that “reflect your target customers as closely as possible as well as the frame of mind or context needed to understand the design,” notes Tan-Davidovic.
“Sometimes this means that you will recruit existing customers (if it is important that participants understand the usage context), or sometimes this means you want non-customers who nevertheless reflect your target market (because you want ‘fresh eyes’).”
Hence, focus on recruiting the right test participants, including settling on whether you’ll pay or otherwise incentivize them and where you’ll source target users from.
Wondering what sample size you need for a preference test? Chibundu recommends aiming for 20-30 participants, or a larger sample, so your test results can reach statistical significance.
Now that your test participants, design variants, and research questions are ready, go ahead and explain the process to the participants before you start the test.
If there’s something specific you want their feedback on, tell them about it.
Tip: “Ensure that you alternate which design is shown first. This reduces bias as it takes care of the recency effect where the last shown design is more likely to be favored.”
Jolene Tan-Davidovic, Senior User Researcher @ N26
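One way to put this tip into practice is to rotate which variant each participant sees first, so every design leads equally often. A minimal Python sketch, with a hypothetical helper name and made-up variant labels:

```python
from itertools import cycle

def counterbalanced_orders(variants, n_participants):
    """Rotate the presentation order per participant so that
    each variant appears first equally often, reducing order bias."""
    rotations = cycle(range(len(variants)))
    orders = []
    for _ in range(n_participants):
        start = next(rotations)
        # Rotate the list so a different variant leads each time
        orders.append(variants[start:] + variants[:start])
    return orders

# Example: three hypothetical design variants, six participants.
# Each of "A", "B", "C" is shown first exactly twice.
orders = counterbalanced_orders(["A", "B", "C"], 6)
```

For more variants or participants, a full Latin square would balance every position, not just the first, but simple rotation is usually enough for 2-3 variants.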
This is an essential step so you can take the insights you’ve learned from the preference tests and improve your design accordingly.
For qualitative data, this means grouping similar responses and finding patterns in them. As for quantitative data, analyze the questionnaire responses to identify the most preferred design.
If there's a significant difference in the results, it will be easy to determine which design is a winner. If not, you can repeat the process by doing a second test on a new iteration of the design.
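For a quick check on whether a two-variant split is meaningfully different from chance, an exact binomial test against a 50/50 null works. A stdlib-only Python sketch; the 22-of-30 split is a made-up example, and the function name is ours:

```python
from math import comb

def preference_p_value(wins, n):
    """Two-sided exact binomial test against a 50/50 null:
    how likely is a split at least this lopsided by chance alone?"""
    k = max(wins, n - wins)
    # Probability of k or more "wins" out of n under p = 0.5,
    # doubled for the two-sided test (capped at 1.0)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Example: 22 of 30 participants prefer design A.
# p is about 0.016, below 0.05, so the split is unlikely to be chance.
p = preference_p_value(22, 30)
```

A narrower split, say 17 of 30, yields a p-value well above 0.05, which is exactly the "no significant difference" case where a second test on a new iteration makes sense.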
Before we wrap up, let’s also consider the shortcomings of preference testing. Essentially, these limitations stem from the lack of user interaction with your design: participants react to static variants rather than using a working product.
Here’s hoping you are now confident about what a preference test is and how to conduct one.
Not sure if it’s right for you?
Keep the following in mind: despite their limitations, preference tests have their place in the early design process as they can help you design based on user preference instead of personal guesswork. Preference tests are also easy to conduct and less costly than A/B tests.
Therefore, as Tan-Davidovic summarizes, “conducting qualitative or survey preference tests before an A/B test is launched helps us be more confident that the final design actually solves the user problem and, hence, is more likely to perform better in the A/B test.”
Looking for more reliable behavioral data? It’s best to follow up with usability testing.