Usability testing: Everything you need to know to get started

February 04, 2019

Welcome to our beginner’s guide to usability testing. This guide is divided into five sections—read them in order or jump to a part that interests you.

  1. The basics of usability testing  
  2. Usability testing methods
  3. How to kick-start your usability testing journey
  4. How to make usability testing part of your design process
  5. Parting words

So you’re thinking about conducting usability testing? Good—usability testing is one of the best ways to evaluate your product’s design.

You’ve probably heard about the importance of usability testing before. Maybe you read a couple of articles here and there or bought a few books on the subject to go deeper.

Perhaps you even spent a little time trying to convince your stakeholders to do usability testing. Or you’ve done some testing before but want a refresher on the topic.

This article will equip you with the most important information you need to know about usability testing.

And if you have a prototype at the ready, you can get started right away.

The basics of usability testing

In the first section of this guide, we’ll look at the definition of usability, consider when the best time to test with users is, and go through some of the proven benefits of testing your product for usability. Let’s get started.

What is usability?

Usability — /ˌjuːzəˈbɪləti/ — noun

The main idea is this: usability is the degree to which a product is easy to use. The ISO 9241 definition measures usability along three variables: effectiveness, efficiency, and satisfaction.

And according to the Interaction Design Foundation, a product should also be aesthetically pleasing and engaging to be considered usable.

Pinpointing a single definition is beside the point of this guide. The essential thing to note here is that usability is a central pillar of good user experience.

If a product isn’t usable, it creates obstacles for users trying to complete their goals. That doesn’t mean they will abandon the product straight away—people might still use it for lack of a better alternative, but that doesn’t make it a good one.

Usability testing is the process of measuring how a product performs with real users by asking test participants to complete a series of tasks with a prototype.

Based on that feedback, you adjust your product in subsequent iterations.

There are different things you can learn from a testing session depending on the method you choose to use. But before we take a look at usability testing methods, let’s see why it’s necessary to conduct usability testing in the first place.

Why it’s important to conduct usability testing

“A good user experience doesn’t guarantee success, but a bad one nearly always leads quickly to failure.” Goodman, Elizabeth, and Mike Kuniavsky. Observing the User Experience.

The overarching goal of usability testing is to make it easy for people to use your product. However, there are numerous benefits of testing with users and many of them directly tie in with the business side of things.

“Good design is good business,” declared Thomas Watson Jr. in 1973. After taking over from his father, he reshaped IBM into the corporate success we know today.

Apple, Starbucks, Google, Disney, Spotify: what these companies have in common—besides a majority market share in their industries—is a focus on a great user experience built in and around the product.

More recently, a McKinsey study tracked 300 publicly listed companies to determine the value design adds to business. The results are impressive.

Companies that performed well in design-related areas grew their revenues and total returns to shareholders significantly faster than their industry counterparts over a five-year period.

According to the report, two of the design actions that led to higher revenue growth were:

  • measuring design performance with the same exactness as revenue and costs; and
  • de-risking development by continually testing and iterating with end users.

We’re no longer questioning the importance of user-centric design and the methods for delivering it—we know they work.

When to do usability testing

A common question that comes up with usability testing is when you should do it. You’ll want to choose the best time for usability testing based on the needs of your project.

Let’s now look at three instances when you and your team can run usability tests during a project.

Pre-design testing

If you’re developing a new solution to an existing customer problem, put competitor products to the test and observe what users do with them. Take note of existing patterns of behavior and how the laws of UX apply in that context.

Planning a redesign of your website or app? Start the process by testing how the present design performs with users. Analyze the difficulties users experience when navigating your product and determine the issues you want to solve with your new design.

Pre-design testing helps you determine what success looks like when your project is finished.

During design: from low to high-fidelity prototype testing

During the design process, usability testing helps you validate or invalidate ideas by creating prototypes based on design hypotheses and testing them with users. You can test anything from a single interaction to an entire app.

With low-fidelity prototypes, the goal is to translate your ideas into testable artifacts that allow for rapid experimentation in an iterative way. When you’re at this stage of the process, you can test functionality, layout, micro-interactions, and more.

Once you have a ready-to-release version of your design, create a high-fidelity prototype and run usability tests to assess its viability. In this part of the process, you’re looking to simulate real use scenarios before going into development.

You can test workflows from start to finish or dive deep into how different design elements such as visuals, content, or navigation work with users.

Usability testing during design is an integral step of the iterative design process and ensures that your final product meets user needs and serves business goals.

Post-launch testing

It’s easy to think something is done once you release it to customers. The reality, as we know all too well, is that a product is never truly finished.

Once you have the first version of a feature or a product out in the wild, it’s time to start measuring how it performs and improve on that.

Usability tests after launch can help you to understand how current users perform an action in your product and surface improvement areas that you can work on next.

This feedback mechanism can complement other customer data sources such as website analytics or customer support insights. Future design iterations should integrate all accumulated evidence of how your customers use and perceive your product.

Usability testing should happen throughout the design and product development process, as required, until you’ve collected enough evidence to inform the next iteration.

The most important thing to remember is this: usability testing isn’t a one-off occurrence. There’s a reason the answer to the question of when to test is almost always “early and often.” It’s true.

Usability testing methods

Usability testing is part of the broader practice of user research. To better understand its place in the user research field, let’s take a quick step back and look at two distinctions that matter for our purposes: behavioral vs. attitudinal methods and qualitative vs. quantitative methods.

Behavioral vs. Attitudinal Testing

Behavioral studies refer to those methods that involve watching and measuring a participant’s actions when they interact with a product. Examples of behavioral testing include A/B testing, eye tracking, lab studies, and more.

On the other hand, attitudinal methods imply studying people’s beliefs, needs, attitudes, etc. Examples of attitudinal testing are card sorting, surveys, focus groups, and others.

The difference lies in what people do (behavior) versus what people say (attitude), which often differs significantly.

Quantitative vs. Qualitative Testing

The distinction between quantitative and qualitative testing is in the way data are collected.

With qualitative testing, data about behaviors and attitudes are directly collected by observing what users do and how they react to your product.

In contrast, quantitative testing accumulates data about users’ behaviors and attitudes in an indirect way. The goal here is to gather statistically significant data about a particular behavior.

For instance, you can get quantitative data about how many people perform a particular action within a product or how long it takes to complete that action.

Now that we know the difference between these types of user research, it’s vital to note that usability testing can be qualitative (in-person sessions) and quantitative (analytics, Maze tests), and you can be tracking behaviors, attitudes, or both.

Most common techniques for usability testing

Usability testing is a task-based activity. No matter how you test—remote or in-person—you have to create tasks based on your product functionality.

You can complement these tasks with various other research methods such as surveys or interviews for additional information.

In-person vs. remote usability testing

You can choose to conduct your usability tests in person or remotely. For in-person tests, it’s important to have a dedicated space (a conference room will do) where you can meet participants and have them complete the tasks without distractions.

Remote usability testing is often done with the help of user testing tools such as Maze. This type of testing is great if your team is remote or your customers are out of reach.

Moderated vs. unmoderated usability testing

“The mantra of usability testing is, ‘We are testing the product, not you.’” Barnum, Carol. Usability Testing Essentials: Ready, Set...Test!

A usability test can be moderated or unmoderated based on the presence or absence of a facilitator. A facilitator's role is to instruct the participants on their tasks, take notes, and generally oversee the session.

Both in-person and remote usability tests can be moderated or unmoderated based on how you want to set them up.

Screen sharing and video conferencing tools are often used in remote moderated testing. Here are two we recommend:

  • Lookback — real-time user testing with screen sharing and video calls
  • QuickTime Player (macOS) — screen, voice, and video recordings

If you choose to moderate your testing sessions, there are various techniques you can employ to direct the user in giving you feedback. The most common moderation techniques are:

  • Concurrent Think Aloud

For this method, you ask the user to “think aloud” while they go through a set of tasks. This is the most common type of usability testing, usually conducted in a quiet room with a moderator and/or note-taker. Depending on preference, you can also ask permission to record the audio and video for future reference.

  • Retrospective Think Aloud

This technique involves asking the user to recall their thoughts and give you their feedback after they’ve completed the tasks. Aim to find out what the user found frustrating about the product or why they completed a task in a particular way.

How to kick-start your usability testing journey

If you’re a usability testing novice, start small to get a feel of what usability testing is and the value it could offer you and your product.

We’ll now look at one simple way to easily test your product and then learn how you can scale this process.

Start with a nano-usability test

In Observing the User Experience (2nd Edition), authors Mike Kuniavsky and Elizabeth Goodman advise you to run a nano-usability test to get started with the practice.

The five steps to nano-usability testing are:

Step 1: Find one person who cares about your product.

Step 2: Watch them use that product.

Step 3: Ask them to use the product to do something they care about.

Step 4: Watch them do that thing without interrupting or asking questions.

Step 5: Ask yourself: what did you learn?

While this is a scaled-down version of usability testing, it’s the perfect way to get you started. As usability guru Steve Krug writes:

“If you want a great site, you’ve got to test. After you’ve worked on a site for even a few weeks, you can’t see it freshly anymore. You know too much. The only way to find out if it really works is to test it.”

Design, Prototype, Test, Analyze  

Usability testing is at the heart of the iterative design process. Instead of expecting the perfect design to emerge from a first attempt, follow each design sprint with a testing session, and then evaluate and refine the product based on the feedback received.

Each design cycle should be supported by insights from users and the solutions improved upon until user needs are met.

When you have something ready to test, here’s what you need to prepare to carry out a usability testing session:

Create tasks based on your prototype

Start by writing down the functions of your product. Then decide which of these functions you want to test and create tasks based on them.

For instance, say your product is a mobile app that helps doctors monitor the status of their patients. One of the functions of this app would be to allow a doctor to pull up a patient's medical records on the screen.

Here’s a task example for a usability test:

"You want to find out how the patient in room A is doing. Pull up all available information about the patient on the screen."

Follow the natural flow of the website or application when creating the tasks. Don't give away hints or offer guidance. Avoid using your product lingo, unless you’re specifically testing copy.

A product should be self-explanatory enough to enable users to accomplish their tasks easily.

Find participants for your usability tests

Your customers are the most suitable participants for testing new features, redesigns, or other updates you want to introduce to your product. So if you can, test with your customers.

Alternatively, recruit test participants from external panels and specify demographic criteria as necessary. User testing tools such as Maze offer a tester panel for recruiting test participants.

No user base or budget for hiring? Turn to social media and online communities to ask for feedback on your product.

Get a moderator and write a script (if necessary)

If you’re planning to hold in-person usability sessions, then you should consider choosing a moderator and writing a script.

The moderator can be anyone familiar enough with the product. Their role will be to facilitate and guide participants through the session.

A script is a planning document that has all the information you need for usability testing. Include time, dates, tasks, information about participants, goals, etc. The purpose of the script is to guide you and everyone involved through the process.

How to make usability testing part of your design process

Making usability testing a constant in your design process will take time. Most importantly, you have to do the work, and you have to do it well. Set goals, analyze results, and improve on those metrics continually.

If all you do at the end of a session is write a report no one’s going to read, then what’s the point?

Usability testing sessions can be something that you and your team look forward to because of the value and insight they provide into how your customers experience your product.

Here are three tips for incorporating usability testing into your process.

Establish a usability testing day and stick to it

Nothing gets done if it’s not written down. That’s true for goals, that’s true for usability testing. If you’re only talking about testing but not setting a date or planning it in your work, then it won’t happen.

If your team works in sprints, write down usability testing as a standalone task that should get done. For instance, the well-known five-day design sprint method clearly establishes the fifth day of the sprint as the day of testing.

Define goals and track results

There are a number of metrics you can track when testing the usability of your product. Here are some examples:

  • Success Rate: percentage of participants who complete a task
  • Time Spent: average time spent on a screen or total time to completion
  • Error Rate: percentage of misclicks or exits from the funnel

Define the metrics you want to track and set internal goals to achieve. In this way, usability isn’t an abstract concept with no practical application but a defined metric you can work to improve.
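To make these metrics concrete, here’s a minimal sketch of how you could compute them from raw session data. The records below are made up for illustration; in practice, a user testing tool will collect this data for you.

```python
# Hypothetical session results: each record is one participant's
# attempt at a single task.
sessions = [
    {"completed": True,  "time_s": 42, "clicks": 9,  "misclicks": 1},
    {"completed": True,  "time_s": 65, "clicks": 12, "misclicks": 3},
    {"completed": False, "time_s": 90, "clicks": 15, "misclicks": 7},
    {"completed": True,  "time_s": 38, "clicks": 8,  "misclicks": 0},
]

# Success rate: percentage of participants who completed the task.
success_rate = 100 * sum(s["completed"] for s in sessions) / len(sessions)

# Time spent: average time to completion, counting successful attempts only.
completed = [s for s in sessions if s["completed"]]
avg_time_s = sum(s["time_s"] for s in completed) / len(completed)

# Error rate: misclicks as a percentage of all clicks.
total_clicks = sum(s["clicks"] for s in sessions)
error_rate = 100 * sum(s["misclicks"] for s in sessions) / total_clicks

print(f"Success rate: {success_rate:.0f}%")     # Success rate: 75%
print(f"Avg. time on task: {avg_time_s:.1f}s")  # Avg. time on task: 48.3s
print(f"Error rate: {error_rate:.1f}%")         # Error rate: 25.0%
```

Tracking these numbers across iterations is what turns usability into a measurable goal rather than a vague aspiration.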

Get the team all in

Invite the whole team to be part of the process: whether engineers, designers, or marketers—anyone can discover insights they can use in their day-to-day work.

Plus, watching someone try to use a thing you’ve worked on is a humbling experience. Not least because it reveals how many assumptions we make about our users, and how many of those are incorrect.

Parting words

“Asking users to adopt new behaviors or even modify their existing behaviors is very, very hard.” Khoi Vinh, Principal Designer at Adobe

Just look at what happened with Instagram’s recent horizontal scrolling change. Lots of people revolted against it, the change was reversed within hours, and the update was attributed to a bug.

Here’s the most important idea: if you don’t test with real users, you’ll overlook issues that will surface in a live product.

But if you do?

You remove design as one of the risk variables in building a product.

There are nuances and things you should and shouldn’t do, but this guide offered you the gist of what you need to know to start testing right now. So what are you waiting for?