Buffer

(Disclosure: This is stolen from the IAI listserv and not written by me. I don’t recall who the original author was, as it was pulled over from an old blog, but I found the thinking interesting.)

————————————

Let’s say you work for a bank and you’re creating information that
answers common questions — how to open a checking account; rules
around your IRA (Individual Retirement Account) like how much money
you can put in, who is eligible, etc.

You have tested the site’s search, navigation, and design — but
what about the content? Do you have the right information that people
need to get the answers to their questions? Is it written in the best
way to address their questions? How do you test the content itself,
irrespective of the design/interface/brand?

Take several different versions of the same type of content — so in
these examples, maybe 5 different ways of explaining how to open a
checking account, and 5 different ways of describing IRA rules. Take
out all the branding, graphics, etc. to “anonymize” the information.
Make them all look the same visually — black text, white background,
nothing fancy. Basically, make it look like the web circa 1993.

The 5 different versions might be different versions created
internally — Joe in Marketing thinks he has the best copy, but Sue
the technical writer likes her version better — or even versions from
competitors. Maybe you’ve just merged with another bank and you’re
trying to figure out which copy to use on the new site. Set it up in a
grid like this:

Opening account 1 2 3 4 5
IRA rules 1 2 3 4 5

Each number is a link to a different version of the content. Randomize
the order so Joe’s versions aren’t always #1 and Bank A isn’t always
#2. Have a “key” so you know which number corresponds with each
source, but obviously don’t show that to participants.
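
If you want to automate that bookkeeping, here is a minimal sketch in Python of generating a fresh random order per participant and writing out the private key that maps each number back to its source. The topic and source names are made up for illustration; adapt them to however you actually store the test pages.

    import csv
    import random

    # Hypothetical sources for each topic; the names are made up for illustration.
    versions = {
        "opening_account": ["joe_marketing", "sue_techwriter", "bank_a", "bank_b", "merged_bank"],
        "ira_rules": ["joe_marketing", "sue_techwriter", "bank_a", "bank_b", "merged_bank"],
    }

    def build_key(participant_id):
        """Shuffle the versions for each topic and record which number maps to which source."""
        rows = []
        for topic, sources in versions.items():
            order = random.sample(sources, len(sources))  # fresh random order per participant
            for number, source in enumerate(order, start=1):
                rows.append({"participant": participant_id, "topic": topic,
                             "number": number, "source": source})
        return rows

    # The facilitator keeps key.csv; participants only ever see the numbers.
    with open("key.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["participant", "topic", "number", "source"])
        writer.writeheader()
        for pid in ["P1", "P2", "P3"]:
            writer.writerows(build_key(pid))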

When you do the testing, explain to people how the test is set up: you
are testing the content, it may look ugly, but that doesn’t matter. Then
ask them to comment on the information based on the way they would use
it. If they’d just skim it at home, then do that here; if they’d go
through it word by word at home, then do that here.

You’re looking for feedback on things like:
– the length of content — too long, too short
– the level of information — too basic, too advanced
– the writing style — too formal, too casual
– the amount of detail — too detailed, too high-level
– the structure of content — headings, paragraphs, lists, links

Let people pick a topic that interests them (it may help to have more
than 2 options) and have them start reading the first version. (You
may want to vary the order so not everyone looks at #1 first.) Get
them to talk aloud as they’re reading and after they’ve finished
reviewing each one get them to list their major likes and dislikes and
rate it on a number scale. Repeat for each of the versions and get
them to pick their favorite at the end. Ask them if they were to
create an “all star” version, what it would include. (“I liked the
information in #2, the writing style of #4, and the structure of #5.”)
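
If you also record the number-scale ratings — say in a ratings.csv with participant, topic, number, and rating columns, which is an assumption on my part — a few lines of Python can join them back to the key, so Joe’s copy and Bank A’s copy can be compared directly, whatever numbers they appeared under.

    import csv
    from collections import defaultdict

    # Load the private key written during setup: (participant, topic, number) -> source.
    key = {}
    with open("key.csv") as f:
        for row in csv.DictReader(f):
            key[(row["participant"], row["topic"], row["number"])] = row["source"]

    # Group ratings by (topic, source) rather than by the anonymous number.
    ratings = defaultdict(list)
    with open("ratings.csv") as f:
        for row in csv.DictReader(f):
            source = key[(row["participant"], row["topic"], row["number"])]
            ratings[(row["topic"], source)].append(int(row["rating"]))

    for (topic, source), scores in sorted(ratings.items()):
        print(f"{topic:20s} {source:15s} mean rating {sum(scores) / len(scores):.1f} (n={len(scores)})")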

As I mentioned before, I’ve done this several times and it’s been a
very enlightening experience. In our case, the content *is* the
product, so it’s hugely important that we get it right, but I’d
imagine this would be extremely useful for any site where content
plays a significant role.

It helps settle internal arguments about which content is “best,” gets
you objective feedback against your competitors, and lets you create
guides/rules for writers to follow when creating content. By reducing
the number of variables involved (interface, visual design, search,
speed, etc. are all the same for each piece of content) you end up
getting honest feedback on the content itself that isn’t clouded by
other factors.

——————————————

1) Randomize the task sequence.
If all of the test sessions start with the same task, and proceed in the
same sequence, then you may be skewing your results. (In my view, the
participant’s mental model can be strongly influenced by the tasks you set.)
To avoid that, I randomize the task sequence between participants.

2) Include ‘non-specific’ goals.
I don’t always have a specific ‘piece of information’ that is a ‘target’.
Instead (or in addition) I give them general areas, topics, or ideas to
pursue and see how they go (and what they say about the journey). In fact,
if it works with the content, a scenario is best.

3) Let the participant set the goal.
In a preliminary chat, you can draw out what participants are interested in,
or what they want to know. Then ask them to find information on the site
that is relevant to their own stated interests/needs/wants. It feels less
like a ‘test’ of finding the ‘right answer’, which makes it less formal and
gives the participants more of a sense of control.

4) Pre-interview.
I find I can learn a lot in talking with the participant before they start
“the testing”. That’s where I’ll get an idea for a participant-set goal.
It’s also where I’ll learn a bit about their way of thinking and language,
which helps when I’m observing them.