The Ultimate Guide to UX Research
In the previous chapter, we talked about how you can conduct a card sorting session to solve issues related to your information architecture (IA). Card sorting is typically complemented by tree testing to validate the identified categories. In this chapter, we delve into this research method to learn how it can help you create a better user experience.
Run a tree test with Maze
Maze lets you run rapid tree tests and collect results in minutes. Get actionable insights to improve your product's navigation. Try it for free.
A tree test allows you to evaluate the effectiveness of a website or app’s navigation hierarchy and better organize the content on your site. The goal of tree testing boils down to answering the question, “Can users find what they are looking for?”
“Tree testing is a highly valuable exercise to get a clear view of what real users expect as topics in the navigation of a website and how these topics are clustered from primary to secondary. Tree testing should be the starting point for designing better digital applications.”
Tree testing, sometimes described as "reverse card sorting," evaluates a hierarchical category structure, called the tree. By asking participants where in the tree they would click to find specific items, you gain insights into how well topics are ordered and how much effort it takes to accomplish a goal.
Pioneered by design leader Donna Spencer, tree testing was initially performed on paper using index cards. In that respect, tree testing is very similar to paper prototyping. Both methods are easy to do and give you actionable insights to improve the UX of your product.
“Tree testing helps you gather insight into people’s mental models of a product and how they would naturally think about exploring it.”
Sinead Davis Cochrane, UX Manager at Workday, says that tree testing helps you decipher the complexity of a navigation and significantly improve it:
“I think many designers underestimate the complexity of navigation design. Even if you already have a standard pattern that you’re leveraging, the information design can make or break your navigation.”
In the next sections, we run you through a step-by-step process of how to prepare and conduct a tree testing session.
As with any UX research method, the first step to running a tree test is to create a research plan and align with stakeholders on the objectives of the research. Defining the research questions and communicating the timeline to the team are also key.
“Make sure that everyone is on board and understands what the test implies. For example, if the results come back and show that the current IA is not working, you should be able to allot enough time to make the appropriate changes and, maybe even test again.”
To conduct a tree test, start by getting your site structure ready, create tasks for your participants, and define the key metrics you’ll record to analyze the data gathered.
Keep in mind that during the tree testing session, only the text version of the site is given to your participants, who are asked to complete tasks to locate particular items on the site. It’s recommended that you keep these sessions short, between 15 and 20 minutes, and ask participants to complete no more than 10 tasks.
Begin by outlining the tree structure with the categories, subcategories, and pages in your site or app. Being specific about your subcategories is important because it will prompt realistic user behavior. For instance, a category in the navigation could be called Resources, with subcategories such as Blog, Help Center, and Guides.
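In digital tools, the tree is usually just this text hierarchy. As a rough sketch, you could represent it as nested dictionaries and enumerate every clickable path; the Resources branch follows the example above, while the other category names are hypothetical placeholders:

```python
# A minimal sketch of a navigation tree as nested dicts.
# "Resources" and its children mirror the example in the text;
# the other categories are hypothetical.
tree = {
    "Home": {},
    "Products": {
        "Features": {},
        "Pricing": {},
    },
    "Resources": {
        "Blog": {},
        "Help Center": {},
        "Guides": {},
    },
}

def list_paths(node, prefix=()):
    """Enumerate every path from the root down through the tree."""
    paths = []
    for name, children in node.items():
        path = prefix + (name,)
        paths.append(path)
        paths.extend(list_paths(children, path))
    return paths

for path in list_paths(tree):
    print(" > ".join(path))
```

Listing the paths this way makes it easy to spot categories that are too shallow or too deep before you put the tree in front of participants.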
Even if you want to test a particular area in your product, make sure your target audience understands how that area relates to the product as a whole. This will enable you to get actionable information you can act on when reviewing your results.
Next, create tasks for participants to find a page or location in a tree using a top-down approach. Just like in usability testing, writing good tasks is key when doing a tree test study.
For example, if you want to test the discoverability of the upgrade page in your product, you can create a task that asks participants to find the best way to upgrade the product. Here’s an example of a tree test task:
“You’ve signed up to a 7-day free trial for a budgeting app. You’ve enjoyed using the app and want to upgrade your account. Find how to do that.”
NNgroup recommends that for each task you write, you should also define the right answers that correspond to where the information is located within the tree, so you can automatically calculate success rates for each task.
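Following that recommendation, each task can be stored alongside its correct locations so success rates fall out automatically. This is a hypothetical sketch, not any particular tool's data model; the task wording and paths are invented examples:

```python
# Sketch: scoring tree-test answers against predefined correct paths.
# The task prompt and tree paths below are hypothetical.
tasks = {
    "upgrade": {
        "prompt": "You want to upgrade your account. Find how to do that.",
        # A task can have more than one correct location in the tree.
        "correct_paths": [
            ("Account", "Billing", "Upgrade plan"),
        ],
    },
}

def success_rate(task, answers):
    """Share of participants whose chosen path matches a correct one."""
    correct_set = set(task["correct_paths"])
    correct = sum(1 for path in answers if tuple(path) in correct_set)
    return correct / len(answers)

# Hypothetical participant answers: 2 of 4 found the right location
answers = [
    ("Account", "Billing", "Upgrade plan"),
    ("Account", "Settings"),
    ("Account", "Billing", "Upgrade plan"),
    ("Help Center",),
]
print(f"Success rate: {success_rate(tasks['upgrade'], answers):.0%}")  # prints "Success rate: 50%"
```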
Other best practices for writing great tasks include making tasks actionable, setting a scenario, and avoiding giving precise instructions to avoid bias.
When phrasing a task, it's best to avoid matching keywords in the tree.
Typically, a tree test shouldn’t take longer than 20 minutes or include more than 10 tasks. For example, Melanie from Shopify shares that her team needed to test the taxonomy structure of a list of services, so they wanted to understand which path people would take when navigating different categories of services.
Here’s the task they created:
“Imagine you wanted to hire someone to help set up your business on Shopify; please find which service you’d have to hire for.”
When you’re preparing a tree test, an important element to take into consideration is the participants you will work with. The number of participants depends on a variety of factors like the type of testing you are conducting, your product’s target group, the confidence level you need, and the goal of your project.
Melanie Buset, Senior UX Researcher at Shopify, recommends using at least 50 users when running a tree test so you can identify user behavior trends and clear patterns. She says:
“Usually, a good rule of thumb is, once you start seeing pretty clear patterns emerge, then you’ve got enough participants. In my experience, having about 50 participants for tree testing is when you start to see these patterns clearly. It really doesn’t hurt to have more than 50, but I’d say aim for 50 minimum if possible. This also depends on the complexity of the problem you’re dealing with and what needs to be tested.”
The key thing for selecting the right participants is to spend time understanding your target audience and identifying who will be the most impacted if you were to make changes to your design.
For example, Mario Tedde, Senior UX Researcher at FedEx Express, explains: “Let’s assume that you want to design a website that is going to be used by different types of personas such as psychiatrists, tutors, parents, and young adults that need mental care. If you decide to make one website that will fit all their needs, you need to consider performing tree testing with participants that represent all these personas.”
You can use tree testing with a high volume of participants since only a short amount of time is required from each user to complete the test. This is especially easy when running remote, unmoderated testing.
Tree testing can be run in person or remotely using online tools such as Maze.
With in-person, moderated testing, the advantage is that you can ask participants why they made certain choices. Mario says:
“Moderated tree tests give you the opportunity to figure out the why behind a participant’s actions and identify the rationale behind their decisions. To avoid biased answers, I let the participant do the exercise in silence and only start asking questions when they have completed the task.”
However, remote testing is advantageous because of its ease of use and speed. You will only need a web browser for testers to participate and they can do so anywhere and any time, without you being present.
An important step in the process is organizing a pilot test before your official tree test session to see if the test makes sense and works as expected.
“Do a pilot test and practice with your team. Once you set up the actual tree test, ask someone to go through the tasks as a real participant would to ensure things will run smoothly.”
Atlassian recommends doing this by opening up the study to a small portion of participants from your panel. This approach will help you mitigate the risk of missing important details, adjust your instructions, and get more valuable insights for future sessions.
Pilot runs are useful as they bring new perspectives to the study. You will be able to find out what’s missing or what’s confusing and be prepared for the actual session.
This step is fairly straightforward. If you choose to conduct tree testing as a remote, unmoderated study, the tool will give you a link to the test that you can send to participants.
You can also follow up the tree test with survey questions that participants answer before or after they complete the tasks. These questions can supplement your research data with information about the participants, such as demographics or their familiarity with the product.
Once all the participants have completed the test, you can start analyzing the results and making informed design decisions. If you want to compare different versions of a tree, say a new version against the existing one, you can run a split test and compare the results of the new tree to the old version.
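To judge whether a new tree really outperforms the old one, you can compare the two success rates statistically. A common approach (an assumption here, not something the source prescribes) is a two-proportion z-test; the participant counts below are invented for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test: is tree B's success rate
    significantly different from tree A's?"""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split test: 28/50 succeeded on the old tree, 41/50 on the new one
z, p = two_proportion_z(28, 50, 41, 50)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With samples of around 50 per variant, as suggested earlier in the chapter, differences of this size are typically detectable.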
After participants complete a tree test, the results will be recorded in the tool you are using, allowing you to start analyzing them. Typically, the metrics you can analyze for a tree test include success rate, directness, average time to complete a task, and the path taken by users.
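These metrics are straightforward to compute from raw session logs. The sketch below assumes a simple hypothetical session format (clicks made, final answer, time taken) and treats "direct" success as reaching the correct answer without ever clicking outside the correct path; real tools define directness in roughly this spirit, though details vary:

```python
# Sketch of common tree-test metrics from hypothetical session logs.
sessions = [
    {"clicks": ["Resources", "Guides"], "answer": ("Resources", "Guides"), "seconds": 12},
    {"clicks": ["Products", "Resources", "Guides"], "answer": ("Resources", "Guides"), "seconds": 31},
    {"clicks": ["Products", "Pricing"], "answer": ("Products", "Pricing"), "seconds": 18},
]

CORRECT = ("Resources", "Guides")

def metrics(sessions, correct):
    successes = [s for s in sessions if s["answer"] == correct]
    # Direct success: every click stayed on the correct path
    # (no backtracking into other branches).
    direct = [s for s in successes if all(c in correct for c in s["clicks"])]
    return {
        "success_rate": len(successes) / len(sessions),
        "directness": len(direct) / len(successes) if successes else 0.0,
        "avg_time": sum(s["seconds"] for s in sessions) / len(sessions),
    }

print(metrics(sessions, CORRECT))
```

Here the second participant found the right answer but detoured through Products first, so they count toward the success rate but not toward directness.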
“Once you ask your participants to go through the test, the tool will highlight which entries went down the correct path(s) and which didn’t. Seeing where people went off the ‘ideal path’ will help you identify where the navigation issues are within your product.”
These results often tie back to your original research questions, such as whether users can find what they are looking for. By analyzing the accumulated data, you can validate or invalidate your hypotheses and design a navigation that makes sense to users.
In the next chapter, we go over the five-second testing research method, which you can use to measure how well a design communicates a message.