The complete guide to usability testing
3 real-life usability testing examples based on actual products
Get a feel for what an actual test looks like with three real-life usability testing examples from Typeform, ElectricFeel, and Movista. You'll learn about these companies' test scenarios, the types of questions and tasks these designers asked, and the key things they learned.
If you've been working through this guide in order, you should now know pretty much everything you need to run your own usability test. All that’s left is to get your designs in front of some users.
While it’s essential to learn about each aspect of usability testing, it can be helpful to get a feel for what an actual test looks like before creating your first test plan. Creating user testing scenarios to get the feedback you need comes naturally after you’ve run a few tests, but it’s normal to feel less confident at first. Remember: running usability tests isn’t just useful for identifying usability problems and improving your product’s user experience—it’s also the best way to fine-tune your usability testing process.
For inspiration, this chapter contains real-world examples of usability tests, with some advice from designers on writing usability tasks and scenarios for testing products.
Just arrived here? Here’s a quick recap to make sure you have the context you need:
Start testing your product
Maze is a usability testing tool that allows you to run quick and easy usability tests with your prototype from Figma, InVision, Marvel, and Sketch. Get started for free.
If you’re not sure whether you are at the right stage of the design process to conduct usability studies, the answer is almost certainly: yes!
It’s important to test your design as early and often as possible. As long as you have some kind of prototype, running a usability test will help you avoid relying on assumptions by involving real users from the beginning. So start testing early.
The scenarios, questions, and tasks you should create, as well as the overall testing process, will vary depending on the stage you’re at. Let’s look at three examples of usability tests at different stages in the design process: early-, mid-, and late-stage usability testing.
ElectricFeel's product is a software platform for entrepreneurs and public transport companies to launch, grow, and scale fleets of shared electric bicycles and mopeds. It includes a mobile app for riders to rent vehicles and a system for mobility companies to run day-to-day fleet operations.
When a new rider signs up to the ElectricFeel app, a fleet management team member from the mobility company has to verify their personal info and driver’s license before they can rent a vehicle.
The ElectricFeel team hypothesized that if they could make this process smoother for fleet management teams, they could reduce the time between someone registering and taking their first ride. This would make the overall experience for new riders more frictionless.
The idea to improve the rider activation process came from a wider user testing initiative, which the team saw as a vital first step before working on new designs. Product designer Gerard Marti explains:
To address the gap between how you want your product to be received and how it is received, it’s key to understand your users’ day-to-day experience.
After comparing the results of user persona workshops conducted both within the company and with real customers, the team used the insights to sketch wireframes of the new rider activation user interface.
Then Gerard ran some usability tests with fleet managers to validate whether the new designs actually made it easier to verify new riders, tweaking the design based on people’s feedback.
The next step in their process is running quantitative tests on alternative designs, then continuing to test and iterate on the winning option. Gerard sees quantitative testing as a vital step toward validating designs with real human behavior:
What people say and what they actually end up doing is not always the same. While opinions are important and you should listen to them, behavior is what matters in the end.
“You have four riders in the pipeline waiting to be accepted.”
Gerard would often leave the scenario at just this, as he wanted to observe the order in which users perceived each element of the design without steering them toward a specific goal.
When testing early versions of designs, leaving the usability test scenario open lets you find out whether users naturally understand the purpose of the screen without prompting.
To generate a conversational and open atmosphere with participants, Gerard starts with open questions that don’t invite criticism or praise from the user:
He then moves on to asking about specific elements of the design:
By asking users to evaluate individual elements of the design, Gerard invites participants to give deeper consideration to their thought process when activating riders. This yields crucial insights on how the fundamentals of the interface should be designed.
After testing, we realized that people scan the page, look for the name, then check the image to see if it matches. So while we assumed the picture ID should be on the right, this insight revealed that it should be on the left.
Typeform is a people-friendly online form and survey maker. Its unique selling point is its focus on design, which aims to make the experience for respondents as smooth and interactive as possible. As a result, typeforms have a high completion rate.
Completion rates are a big deal for Typeform users, and because Typeform’s interface asks respondents one question at a time, being able to see the exact questions where people leave a form was a highly requested feature for a long time. The feature is now called ‘Drop-off Analysis’.
Product tip ✨
Before you even start designing a prototype for a usability test, do research to discover the kind of products, features, or solutions that your target audience needs. Maze Discovery can help you validate ideas before you start designing.
Yuri Martins, Product Designer at Typeform, explains the point when his team felt like it was time to test their designs for the new Drop-off Analysis feature:
We had a lot of different ideas and drawings for how the feature could work. But we felt like we couldn’t commit to any of them without input from users to see things from their perspective.
Fortunately, they had already contacted users and arranged some moderated tests one or two weeks before this point, anticipating that they’d need user feedback after the first design sprints. By the time the tests rolled around, Yuri had designed “a few alternative ways that users could achieve their objectives” in Figma.
Since the team wanted to iterate the design fast, they tested each prototype, then created a new updated version based on user feedback for the next testing session a day or two later. Yuri says they “kept running tests until we saw that feedback was repeating itself in a positive way.”
Finding participants is often the biggest obstacle to conducting usability tests. So schedule them in advance, then spend the following weeks refining what you’d like to test.
“One of your typeforms has already collected a number of responses. The info you see appears in the ‘Results’ page.”
This scenario was designed to be relatable for Typeform users who had already:
Choosing a scenario that appeals to this group ensured the feedback was as relevant as possible, since these participants were the most likely to use the Drop-off Analysis feature to dig further into their typeform’s results.
Typeform’s Drop-off Analysis prototypes only existed in Figma at this point, which meant that users couldn’t interact with the design to complete usability tasks.
Instead, Yuri and the team came up with broader, more open-ended tasks and questions that aimed to test their assumptions about the design:
After the general questions, they asked questions about specific elements of the design to get feedback where they needed it most:
This example shows that you don’t need a fully functional prototype to start testing your assumptions. For useful qualitative feedback midway through the design process, tweak your questions to be more open-ended.
Product tip ✨
Maze is fully integrated with Figma, so you can easily upload your designs and create an unmoderated usability test with your Figma prototype. Learn more.
“We’d assumed that people would want to know how many respondents dropped off at each question. But by usability testing, we discovered that people were much more concerned with the percentage of respondents who dropped off—not the total number.”
Movista is workforce management software used by retail and manufacturing suppliers. It helps users coordinate and execute tasks both in-store and in the field via a mobile app.
As part of a wider design update across their entire product, Movista is about to launch a new product for communications, messaging, chats, and sending announcements. This will help people in-store communicate better with people out in the field.
Movista’s new comms feature is at a late stage of the design process, so the team tested a high-fidelity prototype. Product designer Matt Elbert explains:
For the final round of usability testing before sending our designs to be developed, we wanted to test an MVP that’s as close as possible to the final product.
By this point, the team was confident about the fundamental aspects of the design. These tests were meant to iron out any final usability issues, which can be harder to identify earlier in the process. By testing with a larger number of people, they hoped to get more statistically significant results to validate their designs before launch.
The team used Maze to conduct remote testing with their prototype, which included an overall goal broken down into tasks, and questions to find out how easy or difficult the previous step was.
“You have received new messages. Navigate to your messages.”
The usability tests would often begin in different parts of the product, with participants given a clear navigational goal. This prompts people to act straight away—without getting sidetracked by other areas of the app.
Matt advises people to be specific when using testing tools for unmoderated tests, as you won’t be there to make sure the user understands what you’re asking them to do.
The general format of the usability test was to give people a very specific task, then follow up with an open question asking participants how it went.
Matt and the team would also sometimes ask questions before a task to see if their designs matched users’ expectations:
“Questions like this are super useful because this is such a new feature that we don’t know for sure what people’s priorities are,” said Matt. The team would rank people’s responses, then consider including different options if there was consistent demand for them.
Finally, Matt says it’s important to always include an invitation for participants to share any last thoughts at the end:
Some people might take a long time to complete a task because they’re checking out other areas of the product—not because they found it difficult. Letting people express their overall opinion stops these instances from skewing your test results.
Based on the insights we got from final results and feedback, we ended up shifting the step of selecting a recipient to much earlier in the process.