Discount Usability Testing

This entry is part 4 of 12 in the series Medical Computing

In the first of this series, I tried to persuade you that your computer was human-illiterate, and we defined and discussed usability, memorability, and learnability. In the second, we discussed Tognazzini’s Paradox: how the hardest part of designing an effective program is often what seems the most trivial—sometimes simply a matter of changing a single word. In the third, we talked about design integrity, simplicity and abstraction. Now, let’s address “discount usability testing.”

When we talk about “usability testing” most of us think about expensive consultants, fancy labs with one-way mirrors and video recorders, and the like. Yes, usability testing can be done in such labs. Yes, companies like Microsoft have permanent million-dollar user testing labs.

But if you design, program, or provide feedback on any portion of an ED information system, learn how to do some discount usability engineering. Usability guru Jakob Nielsen says: “I advise clients to avoid design agencies that are too arrogant to include user testing in their project plans.” For that matter, if you are a user: do some quick-and-dirty usability testing to document how bad (or how good) your system is – either to demand a better system, or to demand that the vendor provide usability updates!

There are many ways to do usability testing. The big, dedicated lab with one-way mirrors and video cameras works, and it’s useful – but it’s expensive, and it’s not cost-effective. It’s also beyond the means of most ED-IT or hospital-IT companies, not to mention users who want to document their usability complaints about a vendor.

Another darling of the marketing divisions of large software companies is “focus groups.” And if you are an action-oriented emergency physician or nurse (or computer geek), and the term “focus group” makes you want to run in the other direction, your instincts are right. If you get a bunch of users together with an experienced facilitator, use standard icebreaking techniques to get started and then spend a day using standard brainstorming techniques, you end up with a fairly solid set of recommendations. Which turn out to be garbage. Yes, garbage. Good engineering studies show that the output of such processes is not useful in making software products more usable. Really. Jakob Nielsen says “Listening to what people say is misleading: you have to watch what they actually do.”

And user satisfaction surveys are just as bad. As Nielsen says: “what customers say and what customers do rarely line up; listening to customers uses the wrong method to collect the wrong data.”

In 1989, Jakob Nielsen first promoted discount usability engineering. He’d looked at big, expensive usability labs and found their output wanting. He suggested that all you really need is five users. Later, he and others did mathematical analyses showing that there is little to gain from testing more than five users. In estimating how much time and effort it really takes to do a good usability test of a system, he finally concluded that two work days is all it takes. And a college usability class, with just 15 hours of lectures, was able to do a full usability engineering assessment of a large commercial website in an average of 39 hours per team. So ask vendors about the results of their usability testing before you buy. If a vendor whines “we’re not big enough to afford usability testing,” just walk away and find another vendor.
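The mathematics behind the “five users” claim is simple enough to sketch. Nielsen and Landauer modeled the fraction of usability problems found by n testers as 1 − (1 − L)^n, where L is the probability that a single user exposes a given problem (about 31% in the studies they examined). A quick illustration of the diminishing returns, assuming that 31% figure:

```python
# Nielsen & Landauer's problem-discovery model:
# fraction of problems found by n users = 1 - (1 - L)**n,
# where L is the chance one user exposes a given problem (~0.31).

def fraction_found(n_users, l=0.31):
    """Fraction of usability problems uncovered by n_users testers."""
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {fraction_found(n):.0%} of problems found")
```

With these numbers, five users uncover roughly 85% of the problems, and each additional user adds less and less – which is why Nielsen recommends many small tests of five users over one big test of fifteen.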

Alan Cooper, the man who developed Visual Basic and sold it to Microsoft, has coined the term “User Interaction Design” to replace the old term “User Interface Design.” He emphasizes design and has issues with Jakob Nielsen’s approach. Cooper says (and rightly so) that usability testing doesn’t help if what you’re testing is a jet-assisted 1964 Volkswagen Beetle (or an equivalent software product). Cooper notably compares usability testing to sanding: it smooths off the rough edges but won’t turn a chair into a table. But once we get past this, we find that Cooper and Nielsen agree in important ways.

What’s the best way to test a piece of software? Take the software (or a mockup of it built with so-called demo-ware), and put naïve users in front of it. Sit behind the users with a notebook and pen and listen to what they say as they try to use it. Don’t ask questions until the user is all done with the task at hand, but answer questions when the user asks. Take notes. Lots of notes. Look for the “mistakes” the user makes, and make notes about these. Then figure out how to change the software so the user doesn’t make “mistakes.” (There is even some talk about using pieces of paper with a design drawn in pencil to develop prototype systems – but this is quite controversial and we won’t get into it here.) Nielsen says the following:

1. Get representative users

2. Ask them to perform representative tasks with the design

3. Shut up and let the users do the talking

The third rule is surprisingly difficult, while rule #2 requires some experience to execute well.

Yes, there are classes and textbooks so you can learn how to be better at doing this testing, and yes, experience helps. Knowing usability principles so you can identify classic usability problems as soon as you see them helps. But really, the process is quite simple and easy.

Let’s give a few examples of classic usability errors that you can look out for when you do “discount usability testing” yourself. I’ll give you just a few quotes from Jakob Nielsen’s AlertBox online column. Although some relate specifically to Web usability, they apply to other software too.

Say, for example, that a user clicks the wrong button. It’s obvious to any observer that such behavior represents a design error. Listening to users’ comments prior to clicking usually tells you why they misunderstood the design, thus guiding you to make it better in the redesign.

The damage that unchanging link colors cause is one of the most tricky usability problems to identify in user testing. On any given page, users seem to understand the links just fine. Users almost never complain about link colors, as long as they’re distinct from the rest of the text and reasonably legible. Life is good, or so it seems.

Observe carefully, though, and you’ll notice that users frequently move in circles. They’ll visit the same page multiple times — not because they want to, but because they don’t realize that they’ve already been there. Users will give up when they’ve tried most links in a list, even though there’s one link that they haven’t tried; if the links don’t change colors, users don’t realize that there’s only one unvisited link remaining.

Use graphics to show real content, not just to decorate your screen.

Don’t include an active link to the homepage on the homepage.

Study a wide range of people: the young and old, utter novices, experts, Unix geeks, sales staff, physicians, repair technicians, administrative assistants, executives, users of different nationalities.

Watch those people perform a wide range of tasks: shopping, searching, planning vacations, researching school projects, managing an erupting oil well.

Observe them using a wide range of interface designs and styles. Ideally, the interfaces should feature different ways of solving the same design problem so that you can compare and contrast how different design details affect usability.

Experiment with a wide range of interaction platforms, from wall-sized “virtual windows” to pocket-sized PDAs. It can also help to watch people use text-only designs like a mainframe or classic Unix, or futuristic technologies like VR that might be currently useless, but can serve as a source of ideas.

Observe the user’s body language for indications of satisfaction or displeasure (smiles or frowns), as well as for laughs, grunts, or explicit statements such as “cool” or “boring.”

The best usability tests involve frequent small tests, rather than a few big ones. You gain maximum insight by working with 4-5 users and asking them to think out loud during the test. As soon as users identify a problem, you fix it immediately (rather than continue testing to see how bad it is). You then test again to see if the “fix” solved the problem.

To collect metrics, I recommend using a very simple usability measure: the user success rate. I define this rate as the percentage of tasks that users complete correctly. This is an admittedly coarse metric; it says nothing about why users fail or how well they perform the tasks they did complete. … However, I often grant partial credit for a partially successful task.
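Nielsen’s success-rate metric is easy to compute yourself. A minimal sketch, in which the task outcomes and the half-credit weight for partial successes are illustrative assumptions (Nielsen leaves the partial-credit scheme up to you):

```python
# Sketch of Nielsen's "user success rate" metric with partial credit.
# The 0.5 weight for a partial success is an assumption, not Nielsen's rule.

def success_rate(outcomes, partial_credit=0.5):
    """outcomes: one of 'success', 'partial', or 'failure' per task attempt."""
    score = {"success": 1.0, "partial": partial_credit, "failure": 0.0}
    return sum(score[o] for o in outcomes) / len(outcomes)

# Hypothetical test session: 4 users x 2 tasks = 8 attempts.
attempts = ["success", "partial", "failure", "success",
            "success", "failure", "partial", "success"]
print(f"Success rate: {success_rate(attempts):.1%}")
```

Coarse as it is, a single percentage like this is often enough to document for a vendor how badly (or how well) a system performs.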

“Real Users Don’t Mind Complex Design”

Enthusiasts sometimes defend bleeding-edge technology and complex designs with the claim that users actually like sophisticated websites. Users, they assert, are smart enough to handle complicated design.

These enthusiasts labor under a miscomprehension about the Web’s fundamental nature. It is not a question of whether users are capable of overcoming complexity and learning an advanced user interface. It is a question of whether they are willing to do so.

In testing multiple groups of disparate users, you don’t need to include as many members of each group as you would in a single test of a single group of users. The overlap between observations will ensure a better outcome from testing a smaller number of people in each group. I recommend:

  • 3-4 users from each category if testing two groups of users
  • 3 users from each category if testing three or more groups of users (you always want at least 3 users to ensure that you have covered the diversity of behavior within the group)

In fact, Nielsen and others suggest that good usability testing alternates between “heuristic analysis” (simply going through the screens of a program with a list of usability principles like those above) and observational studies of actual users.

Next Time:

The next in this series will address Personas. You probably don’t know what that means, but we’ll tell you that it’s one of the key procedures to creating good software. Although we’ll keep you in suspense until the next article about exactly what “Personas” means, we won’t be displeased if you decide to research the topic yourself in the interim.

To Learn More

Jakob Nielsen is without question the Big Guru of usability, and his website is the first place to go. In particular, read some of the established usability guidelines. Scan through his Alertbox: Current Issues in Web Usability columns, including many classic essays on discount usability testing. Developers should check out the 3-day “camps” on discount usability testing.

Nielsen’s textbook Usability Engineering is a dry read, but the classic in the field. His Designing Web Usability is more readable, as is Homepage Usability: 50 Websites Deconstructed; both contain much that applies to non-web usability as well.

Alan Cooper’s books contain essential usability information as well, and one of them, The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity, is an easy read and a succinct source of his ideas on usability testing.


This entry was posted by kconover on Tuesday, December 29th, 2009 at 3:59 pm and is filed under Tutorials.

One Comment

1. kconover says:

    There is an interesting article at uxmatters about how many subjects you need for reasonable usability testing.

    http://www.uxmatters.com/mt/archives/2016/01/how-to-determine-the-right-number-of-participants-for-usability-studies.php

    Their answer is actually fairly definite: “about ten.”