Mar 31, 2020
For UX designers and product managers, usability testing is a vital—and exciting—part of the product creation process. Vital because finally you can get some solid data to support your design hunches, and to use as a foundation for future iterations. Exciting because it’s a chance to release the designs you’ve worked on for months out into the wild, to be tried and tested by an unbiased audience of real people.
There’s just one problem: finding test participants. It might seem simple, but anyone who’s organized a bunch of in-person usability tests knows it can be a real pain.
First, you have to find enough people willing to sacrifice their time, come to your office, and jump through virtual hoops for an hour. Then you have to get buy-in to conduct face-to-face interviews, which can be hard as they take so much time and have to be scheduled weeks in advance. What if your design team needs feedback now?
To avoid the headaches that come with in-person usability testing, a lot of design teams are turning to remote usability tests. These are normally done with a usability testing platform that records people completing the test, collects data, and generates insights that your design team can put into action right away.
Taking the remote approach to usability testing comes with some big pros:
But to really see the benefits of remote usability testing, you have to set up your test in a way that'll get actionable results. Here are our top tips for making a test that gets the data you need—fast.
There are two kinds of remote usability testing: moderated and unmoderated.
Moderated tests work much the same as traditional in-person usability tests, except the moderator and the user aren’t in the same room. Instead, the moderator observes the user on a video call using Zoom or similar software.
The advantage here is that you can still ask the user as many follow-up questions as you want, which means you can potentially get more varied answers from each session. Also, the user doesn’t need to trek to your office.
The downside is that it still requires a level of commitment on the user’s end—they have to do the test at a certain time with a specific setup, and the extra questions make it a more time-consuming experience all round. So you’ll probably get fewer takers, and they all need to be in your timezone. Unless conducting user tests at 3 am is your thing.
Unmoderated tests don’t have any back-and-forth between you and the user during the test. You create the tasks and write the questions in a usability testing platform like Maze, send them to users, then they complete the test alone. The platform feeds the results back to you when they finish.
The good thing about unmoderated tests is that they only take a few minutes for users to finish, so it’s way less hassle for them. This potentially means a lot more people completing the test. You also don’t need to schedule or attend the test yourself, so more tests can be completed in less time. All this adds up to faster results for your team with less effort.
But since no one watches over unmoderated usability tests, you can’t be there to make sure that users reach the end—or that they don’t wander off for a Twix halfway through. So choosing the right scope for your test is key.
For a more traditional usability testing process, go with moderated remote testing. For a quantitative, time-saving approach, try unmoderated.
The scope of your test can make or break its chances of success. Obviously you’ll want to test every inch of your product at some point. But with remote testing, it’s better to test specific flows than to throw everything into one mega test. There are a couple of reasons for this.
First, focusing your test on a few hypotheses will make your results much clearer one way or the other. The more designs you try to test at once, the longer your test—and the more blurry your results. Giving people fewer options lets you pinpoint design decisions and test them more rigorously.
Second, the shorter the test, the more likely the user is to finish it. This is especially important for unmoderated tests, as you can’t be there to guide their progress. And if they don’t make it to the end, you get distorted results. We recommend seven or eight tasks for unmoderated remote usability tests. The good thing is that distributing them is as easy as sending a link, so you can run small tests more frequently.
Moderated tests can be a little more complex, as you can have full conversations about what people are finding difficult and make notes on how they get stuck. And since the tests take more work to schedule, you might want to dig a little deeper to make the most of each one. Still, you should always keep tests on the short side to respect people’s time.
To get clearer patterns of data—and to make sure people finish—unmoderated remote tests shouldn’t be longer than seven or eight tasks. Moderated remote tests can be less focused, but the same principle applies.
Start looking for test users as early as possible—or even better, start now. While it's definitely easier to find people willing to take remote usability tests than in-person ones, it still takes time, especially if you need users with a particular background or job title. Remember that one of the main benefits of remote usability testing is being able to test with a large sample size, so the more users you can find, the better.
Also, the earlier you find the right people, the earlier you can start testing in the design process. This could save a lot of pain undoing your hard work later down the line, as your product will be user-centric right down to its foundations.
Here are a few places to start your search:
Whatever method you try, start as soon as possible so you can build up a pool of users who are ready for testing whenever you need them. If you leave it to the last minute, finding people can become a painful blocker in the usability testing process.
Finding users is the biggest potential bottleneck for remote usability testing. So start looking as early as possible.
Clarity is super important when you create tasks for a remote unmoderated test. You won’t be there to clear up any ambiguity for the user, so your tasks need to be simple and self-explanatory. Well-written and structured tasks will get you more accurate results.
Here are our top tips for task creation. To dive deeper into this topic, check out our full article on writing great usability tasks.
How you write and structure the tasks will largely determine the success of your remote usability test. Use simple language based on your user’s goals, and avoid making your tasks too complex—or too obvious.
Questions are a way to get more data out of your remote usability test. Even if you've gone for unmoderated testing, usability testing software like Maze lets you ask questions before and after each task, and at the end of the test. Follow up to get people's opinions on specific design elements, or ask more general questions afterward for some qualitative feedback.
Since you'll need to write the questions in advance for a remote test, they need to be word-perfect. Here are a few pointers:
And here are a few examples of well-written usability test questions:
If you want some more inspiration, take a look at the questions on the System Usability Scale (SUS). It's a tried-and-tested ten-item survey that's frequently used to measure product usability, and it even gives your product a usability score from 0 to 100 at the end.
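The SUS score itself is simple arithmetic: odd-numbered items are positively worded and contribute (response − 1), even-numbered items are negatively worded and contribute (5 − response), and the summed contributions are multiplied by 2.5 to land on a 0–100 scale. Here's a minimal sketch in Python (the function name and sample responses are just illustrative):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and contribute
    (5 - response). The summed contributions are scaled by 2.5 to give a
    score from 0 to 100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Example: a fairly positive set of responses
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```

A score above roughly 68 is generally read as above-average usability, which makes SUS handy for tracking a product across test rounds.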
You can also ask demographic questions to segment your users by age, occupation, education, etc. This is useful for spotting usability trends for different groups of people, but keep in mind that asking people personal questions can make them feel awkward. So ask carefully, and make demographic questions optional—or you could risk people bouncing before they start.
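If you do collect demographics, segmenting results by group is a quick scripting job. The sketch below tallies task-completion rates per age group, with users who skipped the optional question bucketed separately; the data shape and field names are hypothetical, not any particular platform's export format:

```python
from collections import defaultdict

# Hypothetical per-user results with an optional demographic field
results = [
    {"age_group": "18-24", "completed_task": True},
    {"age_group": "18-24", "completed_task": False},
    {"age_group": "35-44", "completed_task": True},
    {"age_group": None,    "completed_task": True},  # declined to answer
]

# Tally completions per group: group -> [completed, total]
tally = defaultdict(lambda: [0, 0])
for r in results:
    group = r["age_group"] or "undisclosed"
    tally[group][1] += 1
    if r["completed_task"]:
        tally[group][0] += 1

for group, (done, total) in sorted(tally.items()):
    print(f"{group}: {done}/{total} completed ({done / total:.0%})")
```

Keeping the "undisclosed" bucket visible, rather than dropping those users, is what lets you make the demographic question genuinely optional without skewing your completion numbers.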
Check out our article for more on using questions in usability testing.
The wording of your questions has to be very precise for remote testing because you only get one shot. Triple check them before you send out the test.
Great designers know that you have to test everything. And that includes testing your usability test. The last thing you want is to send your remote test to 100 people, then realize there’s a task-destroying typo in the first sentence.
So share a pilot test with colleagues and get their feedback on what could be improved. Get people from different teams to try it out, as copywriters will see it with different eyes than the customer success team will. Colleagues outside your own team will also come to it completely fresh, so their perspective will be closer to your users'.
Finally, send it in batches—not all at once. There’s a good chance you’ll realize something is off after the first batch or two. This way you’ll be able to fix it and avoid any major embarrassment.
Remote usability testing is a quick way to get actionable insights from users—while avoiding the hassle that comes with arranging tests in-person. With the right approach and a modern remote usability testing tool like Maze, unmoderated tests can give you deep, data-driven results that’ll guide your product design process.
Keep the scope specific, find users early, and write your tasks with absolute clarity to get the insights your design team needs.