Chapter 2

Understanding the top usability testing methods

There are various usability testing methods available, but in this chapter, we look at the top techniques you need to know when running a usability test. We explore the difference between quantitative and qualitative data and how to choose between moderated and unmoderated usability testing.

Start usability testing now

Maze is a usability testing tool that allows you to run quick and easy usability tests with your prototype from Figma, InVision, Marvel, and Sketch. Sign up for free.

Quantitative vs. qualitative usability testing

Any time you collect data during usability testing, it comes from one of two types of studies: qualitative or quantitative. Neither is inherently better, though there are specific use cases where one may be more beneficial than the other. Most research benefits from having both types of data, so it's important to understand the differences between the two methods and how best to employ each. Think of them as different players on the same team: their goal is the same, gaining valuable insights, but their approach varies.

In the end, qualitative and quantitative usability testing are both valuable tools in the user research toolkit, but which one works for you will depend on your research goal.

The difference between qualitative and quantitative data

All usability testing involves participants attempting to complete assigned tasks with a product. Though the format of qualitative and quantitative tests doesn't change that much for the participant, how and what kind of data you collect will differ significantly.

Qualitative data consists of observational findings. That means there isn't a hard number or statistic assigned to the data. This type of data may come in the form of notes from observation or comments from participants. Qualitative data requires interpretation, and different observers could come to different conclusions during a test.

The main distinction between quantitative and qualitative testing lies in how the data is collected. In qualitative testing, you collect data about behaviors and attitudes directly, by observing what users do and how they react to your product.

In contrast, quantitative testing accumulates data about users' behaviors and attitudes indirectly: with testing tools, quantitative data is usually recorded automatically while participants complete the tasks.

Examples of qualitative usability data: product reviews, user comments during usability testing, descriptions of the issues encountered, facial expressions, preferences, etc.  

Quantitative data consists of statistical data that can be quantified and expressed in numerical terms. This data comes in the form of metrics, such as how long it took someone to complete a task or what percentage of a group clicked on a certain part of the design.

With quantitative data, you need the context of the test for it to make sense. For example, hearing that "50% of participants failed to complete the task" on its own gives little insight into why they had trouble.

Examples of quantitative usability data: completion rates, mis-click rates, time spent, etc.  
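
To make these metrics concrete, here's a minimal sketch of how they could be calculated from raw task results. The data, field names, and values below are purely hypothetical, and testing tools compute these numbers for you automatically, so treat this as an illustration of the arithmetic rather than something you'd need to write yourself:

```python
# Hypothetical per-session results for a single task; every value here is invented.
sessions = [
    {"completed": True,  "seconds": 48,  "clicks": 6,  "misclicks": 1},
    {"completed": True,  "seconds": 95,  "clicks": 9,  "misclicks": 3},
    {"completed": False, "seconds": 140, "clicks": 12, "misclicks": 7},
]

# Completion rate: share of sessions where the task was finished.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Time spent: average task duration in seconds.
avg_time = sum(s["seconds"] for s in sessions) / len(sessions)

# Mis-click rate: clicks outside the expected areas, as a share of all clicks.
misclick_rate = sum(s["misclicks"] for s in sessions) / sum(s["clicks"] for s in sessions)

print(f"Completion rate: {completion_rate:.0%}")   # 67%
print(f"Average time on task: {avg_time:.0f}s")    # 94s
print(f"Mis-click rate: {misclick_rate:.0%}")      # 41%
```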

When to do qualitative or quantitative testing

You can cut your grass with a pair of scissors, but a lawnmower is far more efficient. The same is true with qualitative and quantitative usability testing. In most scenarios, you could use either to collect data, but one will be better depending on the task at hand. 

Below we provide some example scenarios for both. The list isn't meant to be exhaustive, but representative of when you might employ one type of testing over the other.

Quantitative usability testing: Measuring user experience with data

With quantitative testing, the goal is to uncover what is happening in a product. Quantitative testing works well when you want to find out how your design performs and whether users encounter major usability problems while using your product.

For example, let's say you just released a reminder function in your app. You can run a test where you ask participants to set a reminder for a specific day of the week, and you want to know whether they can complete the task within two minutes. Quantitative usability testing is great for this scenario because you can measure the time it takes each participant to complete the task.

Let's say you find that only 30% of participants complete the task within two minutes. With that data in hand, you can study heatmaps of the journeys participants took to understand which usability issues got in their way. Or you can follow up the quantitative study with a few user interviews to dive deeper into the experience of the users who struggled to complete the task.
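
If you ever want to sanity-check a figure like that 30% yourself, the arithmetic is straightforward. Here's a hedged sketch assuming you've exported per-participant completion times from your testing tool; the participant IDs and timings are invented for illustration:

```python
TIME_LIMIT = 120  # the two-minute target from the example above, in seconds

# Hypothetical completion times per participant (None = gave up); all values are made up.
times = {"p1": 70, "p2": 180, "p3": None, "p4": 95, "p5": 210,
         "p6": 300, "p7": 115, "p8": None, "p9": 250, "p10": 160}

within_limit = [p for p, t in times.items() if t is not None and t <= TIME_LIMIT]
needs_follow_up = [p for p in times if p not in within_limit]

print(f"Completed within {TIME_LIMIT}s: {len(within_limit) / len(times):.0%}")  # 30%
print("Candidates for follow-up interviews:", needs_follow_up)
```

The second list is a handy starting point if you decide to recruit the participants who struggled for those follow-up interviews.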

Product tip ✨

Maze automatically collects quantitative data such as time spent, success rates, and mis-click rates, and gives you heatmaps for each session so you can dive deeper into the test results and improve the UX of your product. Try it out for free.

Qualitative usability testing: Understanding the why behind actions

Qualitative user testing enables you to understand why someone does something in a product and to research your target audience's pain points, opinions, and mental models. Qualitative usability testing usually employs the think-out-loud method during testing sessions: this research technique asks participants to verbalize their thoughts as they complete the tasks.

This way, you get access to users' opinions and comments, which can be very useful in trying to understand why an experience or design doesn't work for them, or what needs to be changed. As you collect more qualitative data, you may start to discover trends among users, which you can use to make changes in the next design iteration. 

Qualitative and quantitative user testing perform best when used in conjunction with one another. So, while they are separate methods, thinking about them as two pieces of a whole may be the best approach.

Moderated vs. unmoderated usability testing

When running a usability study, you have to decide on one of two approaches: moderated or unmoderated. Both are viable options and have their advantages and disadvantages depending on your research goals. In this section, we talk about what moderated and unmoderated usability testing is, the pros and cons of each, and when to use each type of usability testing. 

Moderated usability testing 

As the name suggests, in a moderated usability test a moderator is present with the participant to guide them through the test. The moderator's role is to facilitate the session, instruct the user on the tasks to complete, ask follow-up questions, and provide relevant guidance.

Moderated usability tests can happen either in-person or remotely. When it's done remotely, the moderator usually joins through a video call, and the participant uses screen sharing to display their screen, so the moderator can see and hear exactly what the participant is doing during the test.

When running a moderated usability test, it's important to be aware of a few best practices. The first is not to lead the participant towards an answer or action. That means you have to phrase questions carefully, in a way that prompts users to complete the tasks but leaves enough room for them to figure out how to do it on their own. Make sure you're not giving instructions like "Click here" or "Go to that page." Even when well intentioned, these types of instructions bias the results.

Another best practice to keep in mind is to encourage participants to explore the product or prototype as it naturally comes to their mind. There is no wrong or right answer—the goal of usability testing is to understand how they experience your product and improve accordingly. As a moderator, you should clearly explain that the thing being tested is the product, not the user.

Examples of moderated usability tests include lab tests, guerrilla testing, card sorting, user interviews, screen-sharing sessions, etc.

The benefits of moderated usability testing

One of the biggest advantages of moderated usability testing is control. Since these tests are guided, you're able to keep participants focused on completing the tasks and answering your questions. If you're conducting the experiment in a lab, you're able to control for environmental factors and make sure those don't skew your results. 

The biggest benefit of moderated tests, though, is that you can ask the participant follow-up questions about why they did something. In an unmoderated test you're sometimes left guessing, so having the chance to dive deeper into an issue or question with participants helps you uncover insights about user behavior and pain points.

For example, if you're running a moderated session to test a new user interface for a check-out process—and notice a participant struggling with a part of the process—you can ask them what they thought about the process, how they would improve it, and why they struggled to use the product. Such opportunities rarely arise in an unmoderated session, making moderated usability testing essential if you're looking to get rich, qualitative insights from your users. 

The disadvantages of moderated usability testing

Moderated tests require investment, both in resources, such as a tool or a lab to organize the sessions, and in time. Moderated usability testing sessions take time to plan, organize, and run, as each individual session needs to be facilitated by a researcher or someone with experience in the field.

With these constraints, your pool of possible participants may also shrink. Finding participants who are willing to come to your lab or join a user interview call can be a hassle, so you'll typically work with smaller samples and collect mostly qualitative user feedback. For these reasons, moderated user tests tend to work best at the start of the UX design process, usually during formative research.

When to run moderated usability testing

Moderated tests work best at the initial stages of the design process, as they allow you to dig deeper into the experience of the participants, and get early feedback to inform the overall direction of the design. 

You can run moderated usability tests with low- to mid-fidelity prototypes or wireframes to collect users' opinions, comments, and reactions to a first iteration of the design. At this stage of the process, you'll usually be testing the information architecture or the layout of the page, or running focus groups to find out whether your solution works for real users.

By asking usability testing questions before, during, and after the test, you and your team can uncover insights that help you make better design decisions.

As you move through the design process, you can continue doing moderated usability tests with users after each iteration, and based on the results, design the final iteration.

Unmoderated usability testing

An unmoderated usability test happens without the presence of a moderator. The participant is given instructions and tasks to complete beforehand, but there's no one present as they're completing the assigned tasks. 

Unmoderated user tests happen mostly at the place and time of the participant's choosing. Similar to moderated testing, you can run an unmoderated test either in-person or remotely. Depending on your resources, one might be better than the other. We look at remote vs. in-person usability testing in more detail in the next chapter.

Examples of unmoderated usability tests are first-click tests, session recordings, eye-tracking, 5-second tests, etc.

Product tip ✨

Maze allows you to run unmoderated tests with unlimited users. Get started by importing your prototype into Maze.

The benefits of unmoderated usability testing

One of the advantages of unmoderated usability testing is its lower cost and quicker turnaround: you don't have to hire a moderator, find a dedicated lab space, or recruit test participants who are willing to come to your lab.

Along the same lines, unmoderated remote tests have the added advantage that participants can complete the assigned tasks at a time and place of their choosing. Completing tasks in an uncontrolled environment more closely resembles how someone would use your product in real life, which tends to yield more realistic results.

Last but not least, one of the biggest benefits of unmoderated usability testing is the ability to collect results from a larger sample of test participants. Because you don't have to moderate each session, measuring usability metrics or running A/B tests is much easier in an unmoderated environment.

With unmoderated remote usability testing, you can collect results in hours or even minutes. Plus, unmoderated testing makes it possible to test with a global user base across different time zones.

The disadvantages of unmoderated usability testing

On the other hand, unmoderated usability testing has a couple of drawbacks to keep in mind when choosing this method. Since unmoderated tests happen without your presence, they can limit the types of insights and data you gather, as you won't be there to delve into users' actions in real time.

Product tip ✨

When you run unmoderated usability tests with Maze, you can create questions, surveys, or opinion scales before, during, or after the test to get user feedback and data. Try it out for free.

When to run unmoderated usability testing

When you need to collect quantitative data, consider running an unmoderated usability test. In those scenarios, you're looking for statistically meaningful data, and testing a large sample is faster and easier with unmoderated tests.
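
How large is "large"? As a rough, hedged illustration (the confidence level, margin of error, and assumed completion rate below are our own assumptions, not a universal rule), here's the standard back-of-the-envelope sample-size estimate for measuring a proportion such as a completion rate:

```python
import math

# Rough sample-size estimate for a proportion (e.g. a task completion rate).
# Assumptions for illustration only: 95% confidence, +/-10% margin of error,
# and p = 0.5 as the most conservative guess for the true rate.
z = 1.96        # z-score for 95% confidence
p = 0.5         # assumed proportion (0.5 maximizes the required sample)
margin = 0.10   # acceptable margin of error

n = math.ceil(z**2 * p * (1 - p) / margin**2)
print(f"Participants needed: {n}")  # 97
```

Recruiting around a hundred participants for a single task is usually impractical in a moderated setting, which is exactly why this kind of study tends to be run unmoderated.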

Additionally, unmoderated usability testing works best towards the end of the product development process. When you finish designing a final solution, you can run an unmoderated test with a high-fidelity prototype that resembles the final product. This will ensure the solution works before you move on to the development process. 

Another use case for unmoderated usability testing is measuring the performance of tasks within the product, for example: signing up, subscribing to the newsletter, or creating a new project. For each of these tasks, you can run quick unmoderated usability tests to make sure the user flow is intuitive and easy to use.

Unmoderated tests are usually run remotely, using a combination of prototyping and testing tools. These tools allow you to create a test based on a prototype, share links to the test with participants, and even hire testers from a specialized testers' panel.