For the past three years, I’ve been running usability tests on software applications. Most of those tests have measured and iterated on designs and interactions. Lately, however, many of the projects my team has taken on have been related to *testing content*. I realized I needed a refresher on how to test content, and I wanted to share what I’ve been reviewing. Highlighted below are some of the methods the UX community has identified to help evaluate whether the content on your Web sites is useful for your customers and prospects. Not surprisingly, testing content is very similar to testing designs and interactions.
Content Testing Methods:
Iterative Testing. Kevin O’Connor and Colleen Jones’s 2010 IA Summit talk, “Testing Content: Early, Often, & Well,” is a great resource on testing content (here is the audio). Not surprisingly, they talked about how important it is to test content iteratively, just like we test designs and interactions. They will often start with a baseline test of the content on the live site, follow that up with concept testing, and then run a validation test. What was interesting was how they focused the *protocol* on three key content questions: Can users find and read the content they need? Do they understand the content? Can and will they *act* on the content?
5 Second Tests. Christine Perfetti wrote an excellent article: “5-Second Tests: Measuring Your Site’s Content Pages.” Using this method, you show participants a web page for 5 seconds, take it away, and then ask for their initial impressions. Christine suggests asking participants to describe back to you what they saw. Users make important judgements in the first moments they visit a page, and this method helps uncover those judgements. It can also measure whether the calls to action on the page are apparent enough. She describes a case study with the American Red Cross, where 5 Second Tests helped them iterate on the design of their donation page to make sure users knew what all the donation options were (e.g., donating airline miles, stock, or clothes).
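To make the mechanics concrete, here is a minimal sketch of the timing and question flow behind a 5 Second Test. The `show` and `ask` callbacks are hypothetical stand-ins for however your session tooling displays the page and records answers; only the fixed exposure window comes from Christine’s method.

```python
import time

def five_second_test(stimulus, prompts, show, ask, exposure=5.0):
    """Show a stimulus for a fixed exposure, then collect impressions.

    `show` and `ask` are hypothetical callbacks: `show` displays (or
    hides) the page, `ask` poses one question and returns the
    participant's answer.
    """
    show(stimulus)        # put the page in front of the participant
    time.sleep(exposure)  # the fixed 5-second exposure window
    show(None)            # take the page away
    # Ask each follow-up question only after the page is gone
    return {prompt: ask(prompt) for prompt in prompts}
```

In a real session you’d swap an image viewer and the facilitator’s note-taking in for `show` and `ask`; the point is simply that every question comes after the page has been removed.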
Inherent Value Tests. Christine Perfetti and Jared Spool discuss this method in a couple of articles and in a podcast. You’d run an Inherent Value Test when your team needs to know how well a Web site communicates the inherent value the designers have built into it, and whether new customers understand the true value of the service. How it works: you recruit two user groups. In the first phase, you recruit existing users and interview them about what they like about the product (what they find valuable). This helps identify the value that loyal customers see, which new customers might otherwise miss.
Eye Tracking, Gaze Plotting, and Web Analytics. I talked with our Technical Writer, Maureen Lau, who had recently attended the Nielsen Norman Group’s course on “Writing for the Web.” I learned from her that the Nielsen Norman Group tests a lot of content with Eye Tracking, Gaze Plotting, and Google Website Optimizer. Eye Tracking measures how long users look at a particular area (in the resulting heat maps, the redder the area, the longer they looked at it). Gaze Plotting tracks where their eyes jump to. She said you go through a similar process with content as you do with interactions (paper prototyping, iterating, trying to understand users’ needs, etc.). Eye Tracking and Gaze Plotting are great for seeing where people look and what catches their eye; however, you need to interview people at the same time to see whether they are actually comprehending what they have looked at. The Nielsen Norman Group uses Google Website Optimizer to roll out A/B tests and see which content “performs” better. She told me that readers typically scan in an F-shaped pattern. Finally, she said we should create tasks around the content. Since users often search for keywords, they suggested getting a better sense of which SEO keywords we use, and then testing whether users look for those keywords and where they expect to find them.
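On the A/B side, the core idea is that each visitor is deterministically assigned to one content variant, so the same person always sees the same copy on every visit. This is a minimal sketch of that bucketing under my own assumptions, not Google Website Optimizer’s actual implementation; the variant names are made up.

```python
import hashlib

def content_variant(visitor_id, variants=("original_copy", "rewritten_copy")):
    """Deterministically bucket a visitor into one content variant.

    Hashing the visitor id keeps the split stable across visits
    without storing any assignment state.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the assignment is a pure function of the visitor id, you can later join each visitor’s variant to your analytics data and compare which copy “performed” better.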
Testing Content Concepts. Colleen Jones wrote an excellent article called “Testing Content Concepts” that walks through the specific protocol, probes, and questions to ask participants when testing content. Colleen says a “Content Concept” is a mockup or draft of your content. She says if you run a usability test on your content and the content is not working well, you should fix the problems and test again until it does. Colleen recommends testing in three levels of fidelity: content only, content in a wireframe, and content in a polished design. What I found most interesting in her article was her recommendation to focus observations on how usability participants work with the content. I also liked her recommendation to measure how successful people are at reading, understanding, and remembering key messages in the content. Colleen discusses the biggest challenge in testing content: measuring how the content has influenced the participant to take action. She says “don’t press too hard, and thus, end up with misleading rationalizations.” One idea of Colleen’s that could work well for us is to conduct a closing questionnaire that asks whether participants would now make a decision that differs from their typical decision, and to have them rate how well the content informed that decision.
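That closing questionnaire lends itself to two simple summary numbers: the share of participants whose decision changed, and the average rating of how well the content informed it. A small sketch, assuming each response is recorded as a `(decision_changed, informed_rating)` pair; the field names are mine, not Colleen’s.

```python
from statistics import mean

def summarize_closing_questionnaire(responses):
    """Summarize (decision_changed: bool, informed_rating: 1-5) pairs.

    Returns the fraction of participants whose decision differed from
    their typical one, and the mean rating of how well the content
    informed the decision.
    """
    changed = [c for c, _ in responses]
    ratings = [r for _, r in responses]
    return {
        "decision_change_rate": sum(changed) / len(responses),
        "avg_informed_rating": mean(ratings),
    }
```

For example, `summarize_closing_questionnaire([(True, 5), (False, 3)])` reports a 0.5 change rate and an average informed rating of 4.0.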
I’ve never been more convinced of how critical it is to test not only how users navigate to and find content, but the content itself. It is the content that drives influence and calls readers to action.
If you have any comments, questions, or alternative methods to test content, please leave a comment below!