How to test content like a pro: A step-by-step guide

Ryan Williams

Nov 5, 2020

Content is everywhere. It’s very often the first way your users will engage with your product. So it’s important that your content speaks to them. Content testing is how you make sure that your content is understood and well received by your audience.

In this article, we’ll cover what content testing is, the methods of testing content available to you, and a step-by-step guide on how to plan and conduct your research. For an expert take on content testing, we spoke to Vaida Pakulyte, UX Research & Design at Electrolux, Steve Howe, Content Designer at Shopify, and Nicole Michaelis, UX Writer at Spotify. 

What is content testing?

In short, content testing is a research method used to understand if your content works for your users and readers. With content testing, you can check if the content you create resonates with users, speaks directly to their pain points, and provides enough context to help users complete tasks successfully. 

“I believe that we should invest in content testing—just like we do when testing prototypes, measuring analytics or interviewing our users.”

Vaida Pakulyte, UX Research & Design at Electrolux

Content testing is also used widely by marketers and content strategists as part of the content strategy. Here, it’s employed to see if your website’s content, such as landing page copy, is engaging enough to move your readers down the marketing funnel. 

“You need to make time for testing content as it touches all other aspects of your product. And who knows, you may discover something you never considered in the first place.”

Nicole Michaelis, UX Writer at Spotify

Bad content can completely disrupt a user’s experience of a product and leave them confused or unengaged, which may drive them away from using your product altogether. 

On the other hand, good content can make the user experience (UX) intuitive and prevent users from becoming confused or disrupted. 

“We all carry a lot of bias. Testing is a great way to move away from assumptions and become aware of things that may not be obvious to you.”

Nicole Michaelis, UX Writer at Spotify

What content to test, and what to test for

To put it plainly, wherever there is content in your product, it’s worth doing content testing. And we’re yet to see a product that uses no content at all (as interesting as that would be).

Think about every bit of content on your product:

  • The homepage
  • Error messages
  • Menu buttons
  • Filters
  • Product pages, etc. 

It all needs to be clear to your users, so it’s all worth testing. 

To decide which content to test, prioritize based on the urgency and importance the task holds for users. If it’s a critical message, such as an error message in payment flows, then it’s vital that the content is easily understood and helps the user achieve their goals. 

When planning to test content, provide context such as a design mockup or a prototype as a reference for users to evaluate the content. Include visual elements, buttons, and everything else that will be on the final design. Your users will be seeing your content in context. So, in most cases, that’s how you should test.

“Place the content in design, don’t just show a button with copy when there is no context. Users might not have the ability to evaluate content without a reference.”

Vaida Pakulyte, UX Research & Design at Electrolux

So, what are you testing for? Well, the criteria you’re looking to assess with content can be broken down into five main factors: 

  • Usability: How easy your product’s content is to engage with. Unlike writing an article, you don’t get a thousand or more words to explain something when creating product content. Brevity is everything, so if your users have to read a manual to use your product, something is clearly going wrong.
  • Readability: This one’s simple. You want your content to be readable by your users. If they get to the end of a sentence and have to read it again to understand it, the content needs to be improved. Remember that your users will expect to scan the text rather than read every word.
  • Accessibility: Whether your product can be used by as many people as possible. An accessible product ensures that people with disabilities are able to use it. For example, a function that reads the content out loud is more inclusive of blind users, and subtitles for video content enable people with hearing loss to follow along. Not only does this widen your audience, it creates an inclusive product that everyone can use.
  • Searchability: This relates to search engine optimization (SEO) and dictates whether people can find your website or product easily online. The content on your web pages is a big part of what guides Google’s rankings, so getting it right can hugely increase the number of people who find and read your content.
  • Tone and voice: Finally, consider the tone and voice of your content. Your product, like your brand, has a voice it uses to speak to users, so it’s important that each piece of content resonates with your audience.

Remember that your voice should remain consistent, but the tone will vary depending on the context. An online banking platform will have a different voice from a dating app, and the same bank will use a different tone on an FAQ page than in an update celebrating the company’s 100th year.

“Testing content helps me understand the emotion my designs need to convey: more formal, informal, chill, educational, and so on.”

Vaida Pakulyte, UX Research & Design at Electrolux

It’s important that you know how you want to talk to your users, that you’re matching it consistently across your product, and that your tone is right within its context. Testing your content with users will help you get that balance right.

Content testing methods

There are a number of methods for testing content, and each is particularly useful depending on your goals and use case.

Task-based usability testing

You can run a usability test to test content, with slight modifications to the usual way the method works. In a typical usability test, users complete pre-scripted tasks that are closed-ended (have a ‘correct’ answer) and shorter in form.

When testing content, keep the tasks open-ended and allow users to explore the product in their own time. The goal is to notice how users engage with a website or app, how they find the information they need, and whether they run into any confusion. When using this method for content testing, there’s not a definitive answer that you want the user to provide. Instead, you want to assess if the content is clear and enables people to use the product successfully.

Testing this way will provide you with qualitative information about what works and what doesn’t and give you insight into which content may need to be changed or improved. Apart from paying attention to what users say during the test, also notice their behavior: sometimes, the two may contradict each other. 

“Always keep the behavior in mind: what users do during the test might contradict or confirm what the user says.”

Vaida Pakulyte, UX Research & Design at Electrolux

Usability testing provides the perfect opportunity to make sure you and your users are on the same page in regards to the language used. There’s a real possibility that your design uses words your users don’t. It could be as simple as the difference between “settings” and “preferences.”

Cloze test 

A Cloze test measures how well your readers can understand a piece of text through context and prior knowledge. It works by taking a 125- to 250-word piece of text and removing every fifth or sixth word.

You present this text to a test participant and see how many of the missing words they can guess correctly. An overall score of 60% or above is deemed comprehensible enough to meet user needs.

Here’s a short example of what a Cloze test looks like:

When Mr. Bilbo Baggins __________ Bag End announced that __________ would shortly be celebrating __________ eleventy-first birthday with a __________ of special magnificence, there __________ much talk and excitement __________ Hobbiton.

Highlight test

A highlight test is particularly useful for seeing how people feel about your content, rather than testing for comprehension. As we pointed out before, your product’s voice is important, and this test is perfect for seeing how your copy resonates emotionally with users.

To do a highlight test, you’ll need a sample of your text and a red and green highlighter. You give participants the text and ask them to highlight in green what made them feel confident about the product, and in red what made them feel less confident.

If you’re testing for reactions and emotions other than confidence, you can just switch out “confident” for any other feeling.

Once this test is finished, you’ll have results to analyze. You can easily detect which parts of the content are most commonly marked in red and should be improved first.
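Tallying the results can be as simple as counting marks per sentence across sessions. The sketch below assumes a hypothetical data shape in which each session maps a sentence index to the highlight color:

```python
from collections import Counter

def rank_red_sentences(sessions):
    """Count red highlights per sentence across all sessions and
    return (sentence index, count) pairs, most-flagged first."""
    reds = Counter()
    for marks in sessions:
        for sentence_idx, color in marks.items():
            if color == "red":
                reds[sentence_idx] += 1
    return reds.most_common()

# Three participants; sentence 1 was flagged red by all of them
sessions = [
    {0: "green", 1: "red"},
    {1: "red", 2: "red"},
    {1: "red", 2: "green"},
]
print(rank_red_sentences(sessions))  # → [(1, 3), (2, 1)]
```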

Five-second testing

In a five-second test, you present your users with a design and give them five seconds to look at it, then ask them questions. This method can be used to test landing page copy, UI content, and more. 

The questions you ask can range from broad, like “What do you think of the page?” or “What do you remember seeing?”, to more specific ones, like “Which of the items are on sale?”

It’s best to start the test with broad questions to get general feedback and thoughts from your participants, then ask more specific questions to see what information stuck with them.

Here, you’re testing to see if the information on your page can be quickly absorbed, perceived, or understood. Five-second tests are useful for gaining insights into how your users perceive a message at first glance, and what value they take away.

A/B testing

An A/B test involves creating two variations of content and testing them with users on live websites. A/B tests can give you quantitative data on which content performs better. 

“One of the best ways to include more quantitative data in your UX research is to use A/B Testing and try testing copy. It is a much more real-life scenario when you put something in front of the users in a real product and get their feedback in numbers as well: allowing you to choose which word/phrase is a better one.”

Vaida Pakulyte, UX Research & Design at Electrolux

This type of testing can provide useful metrics, such as click-through rate (CTR), that let you determine the winning variation easily. However, for the results to be reliable, you need a sample size large enough to reach statistical significance.

For example, Netflix is well-known for A/B testing artwork for titles to identify which images perform best at getting users to watch videos.
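To illustrate the statistical-significance point, here is a sketch of a standard two-proportion z-test on click-through rates, using only the Python standard library (the function name is ours, not from any particular A/B testing tool):

```python
from math import erf, sqrt

def ab_ctr_significance(clicks_a, views_a, clicks_b, views_b):
    """Two-proportion z-test on click-through rates.
    Returns (z, p): the z statistic and the two-sided p-value.
    A p-value below 0.05 is the conventional significance cutoff."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # pooled CTR under the null hypothesis that both variations perform equally
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Variation B (15% CTR) vs. variation A (10% CTR), 1,000 views each
z, p = ab_ctr_significance(100, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p is well below 0.05, so B wins
```

With small samples the same 5-point CTR gap would not reach significance, which is why sample size matters as much as the difference itself.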

Readability test

Readability formulas are automated tools that analyze your content. They’re a simple way of testing how easy your copy is to read, scoring it on measures like word and sentence length.

However, if you’re truly aiming to test your content with real users, then readability formulas might not be the best option. While they may be an attractive solution, especially if you’re on a budget, they can often produce inconsistent results, and they don’t give any actionable feedback. For better results, use readability tests in combination with the other methods described above.
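As an example of what these formulas actually measure, here is a rough sketch of the widely used Flesch Reading Ease score. The syllable counter is a crude heuristic, which is one reason such tools can produce inconsistent results:

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, dropping one for a
    trailing silent 'e'. Real tools use dictionaries for accuracy."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: roughly 0-100, higher is easier.
    Scores of 60+ are generally considered plain English."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short words in short sentences score high; long, polysyllabic sentences score low. Note that the formula says nothing about whether the content is accurate, on-brand, or helpful, which is exactly why it shouldn’t be your only test.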

How to test content in 6 steps

So now that you know why you should test content and what you’re testing for, let’s cover how to go about it.

We’ll break this process into six steps:

  1. Identify goals
  2. Choose your method and create the test
  3. Test the test
  4. Gather test participants 
  5. Conduct the test
  6. Analyze results

1. Identify goals 

Before you even start planning your test, you need to consider what your objective is. In short, who and what are you testing, and what are you testing to see?

You should know by this point who your intended users are and what they need and expect from your product. If your product is used by people from a range of demographics, test with users from each group. If your test underrepresents a demographic, you risk creating content that doesn’t speak to all your personas.

So choose the content that you want to test, create a concrete hypothesis, and you’re ready to start outlining the test. 

2. Choose your method and create the test

With the who and why decided, you can then choose the most suitable method for testing and create the test. 

“How you design a test heavily impacts its result, so constantly questioning why and how you’re testing something is crucial.”

Nicole Michaelis, UX Writer at Spotify

For example: 

  • We want to test if our users can understand how to make in-app purchases. We will use a task-based usability test to see if they can successfully complete tasks with the app.
  • We want to test if our homepage speaks to our target demographic. We will use a highlight test to gauge their feelings about it.
  • We want to see how users perceive our product homepage at first glance. We will use a five-second test to measure this initial impression.

Tip: In your test, don’t ask the users if they “like” the content. Ask if they could navigate easily, if they understand a paragraph, or if the tone makes them feel comfortable. Remember that you’re not writing a fictional novel here. Within a product context, whether the reader “likes” your content isn’t relevant. What matters is whether it’s relevant and comprehensible.

3. Test the test

Once you have created your initial test, you want to see if it makes sense. Before you start finding test participants, grab some colleagues or other stakeholders, and run the test with them first. 

“Test the test! A mistake I’ve made in the past is making the test and then launching it straight to users. It was then that I discovered issues in the test as I was running it.”

Steve Howe, Content Designer at Shopify

Testing the test will ensure the instructions make sense and that your test participants can successfully understand what is asked of them. This allows them to focus on your content, rather than trying to decipher the test itself. 

4. Gather test participants

With your test created, it’s time to find some test users. 

If you’re running qualitative sessions, you need to find at least five participants. This number is usually enough to surface recurring issues and inform your content decisions. If you want to collect quantitative data, it’s best to run the test with at least twenty people, and more if possible. Ultimately, how many people you need to recruit depends on the type of test you’re doing, the method, and the criticality of the project.

The big focus here is to find representative users for the real test. Proxy users, such as the people you work with, may not be the people who use your product. You’d get “positive feedback” since they already understand all the concepts you’re trying to teach, but this data isn’t reliable.

Here’s our guide to finding and recruiting research participants.

5. Conduct the test

You now need to decide whether or not to moderate the test. There are advantages to each method. 

With a moderated test, you have greater control during the test. Having a facilitator present can provide an extra layer of feedback from the study. Moderating these studies allows you to ask follow-up questions specific to the user. You can understand their doubts while they’re having them, while also reassuring the participants during the test. 

If you’re moderating the test, allow enough time for participants to read the content and give you feedback. Unlike traditional usability testing, where users complete tasks while thinking aloud, content testing may require significant stretches of time when both you and the participant are silent. Make sure to explain to participants from the start that this is okay.

With unmoderated testing, you don’t need to find someone to act as a moderator for the test. This type of testing is usually done remotely, and it allows you to run more tests at a faster rate, even simultaneously, giving you more quantitative data to analyze. What’s more, it lets you reach users in different parts of the world, which can unlock insights about how people from different countries interact with your content.

6. Analyze results

Once this round of testing is done, you’re now able to analyze the test results. You should now have an overall idea of how your piece of content works, as well as specifics to improve on—be it readability, voice, or any other important factor. 

Note how we said “this round of testing.” That’s because these tests aren’t a one-time thing. As your product and user base grow and evolve, you need to keep testing to see if your content works and speaks to them. 

“Testing is never completed. I believe everything in the product world is an iteration and should be seen as such.”

Nicole Michaelis, UX Writer at Spotify