Icon

In Icons, Pedagogic Vectors, Forms Design, and Posture we briefly discussed icon design. (Icons, in this context, meaning the sketch-pictures on buttons that you can click.) The bottom line was that it’s hard to learn and remember what icons stand for.

In Performance, Data Pixels, Location, and Preattentive Attributes we discussed how icons should be recognizable by preattentive attributes, so there is no Cognitive Friction to overcome when selecting the right icon on which to click.

In Color, we discussed the role of color in icons, coming down to the idea that icons shouldn’t use color, and should be grayscale.
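
For the curious, “make it grayscale” is a one-liner computationally. A minimal sketch, assuming the icon is a NumPy array of RGB pixels (the function name is ours; the weights are the standard Rec. 601 luma coefficients):

```python
import numpy as np

def to_grayscale(icon_rgb: np.ndarray) -> np.ndarray:
    """Collapse an (H, W, 3) RGB icon to a single luminance channel."""
    # Rec. 601 luma weights: green dominates perceived brightness.
    weights = np.array([0.299, 0.587, 0.114])
    return icon_rgb @ weights
```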

Signal-to-Noise Ratio

finding the numbers can be hard

I work at the University of Pittsburgh Medical Center. UPMC has prioritized IT, and compared with many other academic medical centers, its IT department is fairly well-funded and well-staffed. The central IT umbrella spreads wide, covering 16 major hospitals and numerous other facilities. UPMC uses Cerner for its inpatient electronic medical record (EMR), and for outpatient settings as well. For clinical charting in the ED, we use Cerner’s PowerChart 2G, dictating into it with Dragon speech recognition. PowerChart is pretty klunky, as are its templates, so in our ED we use our own very simple PowerChart templates, basically a blank page into which to dictate.

We in the ED built some standard templates and macros in Dragon, and docs, including residents, can customize them or add new templates and speech macros as they wish, which speeds up dictation quite a bit.

However, we actively discourage the use of the PowerChart templates. Why? Because PowerChart templates have a seductive feature that the vendor and our IT people used to tout. But as it turns out, that feature trashes the signal-to-noise ratio of the chart.
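
To make the metaphor concrete, here is a crude sketch of one way to measure a note’s signal-to-noise ratio: count text injected verbatim by a template as noise, and text the clinician actually entered as signal. This is our illustration only, not a Cerner feature or anything from the full post.

```python
def chart_snr(note: str, template: str) -> float:
    """Crude signal-to-noise: clinician-entered text vs. template boilerplate."""
    boilerplate = {line.strip() for line in template.splitlines() if line.strip()}
    signal = noise = 0
    for line in note.splitlines():
        if not line.strip():
            continue
        if line.strip() in boilerplate:
            noise += len(line)   # verbatim template text counts as noise
        else:
            signal += len(line)  # text the clinician actually added counts as signal
    return signal / noise if noise else float("inf")
```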

Wireframes

A wireframe document for a person profile view

A common technique for prototyping computer screens is to use wireframes. A recent article in UXmatters discusses wireframes, and asks whether wireframe prototypes are used by program designers as a substitute for real collaboration.

That’s a good question. But I think this is a better one: is showing wireframes to people a poor substitute for figuring out what users need to do, and then designing and refining a workflow process that works for them?

Wireframing, also known as paper prototyping (because it can all be done on paper, without wasting a single electron), can be an effective tool during design. However, it is not a substitute for sitting down with some of each class of users, using anthropological techniques to document the tasks they accomplish as they work, using personas to guide the design of the computer-based work process for those classes of users, and then going back and using discount usability testing to refine the process.

Wireframes are good but not a substitute for either collaboration or task analysis.

Suicide

Text Mining

Data mining has been a topic of interest to businesses and researchers for many decades. For physicians and other clinicians, and for those designing systems for clinicians, data mining has been of less interest. Yes, you can use data mining to predict the volume of patients in your ED by day and hour. Yes, you can use data mining to order supplies more intelligently. But to improve patient care? Not so much.
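
The first of those uses is simple enough to sketch. Assuming a pandas DataFrame of historical visits with an arrival_time column (the column and function names are ours), you can average arrivals over day-of-week and hour slots:

```python
import pandas as pd

def expected_ed_volume(arrivals: pd.DataFrame) -> pd.Series:
    """Mean patient arrivals for each (day-of-week, hour) slot."""
    ts = pd.to_datetime(arrivals["arrival_time"])
    # Count arrivals in each concrete (date, day-of-week, hour) bucket...
    counts = ts.groupby(
        [ts.dt.date.rename("date"),
         ts.dt.dayofweek.rename("dow"),
         ts.dt.hour.rename("hour")]
    ).size()
    # ...then average across dates to get a typical weekly volume curve.
    return counts.groupby(level=["dow", "hour"]).mean()
```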

Yes, research using data mining can provide us with some clinical information, but such retrospective studies, especially subgroup analyses, can lead to egregious error, as summarized in a recent article in JAMA. It’s not a replacement for prospective, blinded, and randomized studies.

Anthropology

Lawrence of Arabia

When doing usability testing (see Discount Usability Testing) we tend to act like anthropologists, observing people using computers as if they were savages performing quaint native rituals.

In a post in UXmatters, Jim Ross argues that we should also use the anthropological technique of participant observation: basically, going native. Or, in other words, trying to accomplish the user’s tasks on the computer ourselves.

There are arguments against this approach.

One is: who’s doing the “going native” to test the program? If it’s the coder who wrote the program in the first place, then it’s hard to argue that this is a legitimate test, since the coder already knows the program inside and out.

Or is that true? A guy I know coded a program, then six months later, as a user, couldn’t get it to work right without a few tries. While you might think this is a rare occurrence, my personal usability analysis of many leading medical programs suggests it happens fairly often.

In fact, you can argue that you should take the coders or tech-support people, give them tasks users often perform, and make them use the program over and over until they can get it right every time. Time them.
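
If you wanted to make that timing concrete, a hypothetical harness might look like this (run_task stands in for whatever task you assign; all names here are ours, not from any testing tool):

```python
import time

def time_attempts(run_task, attempts: int = 10) -> list[tuple[float, bool]]:
    """Time repeated attempts at a task; run_task returns True on success."""
    results = []
    for _ in range(attempts):
        start = time.perf_counter()
        succeeded = run_task()
        results.append((time.perf_counter() - start, succeeded))
    return results
```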

Carrying the argument further, the ideal person to “go native” is someone who is naïve to the program, yet has a background in usability: an independent usability analyst. I expect that usability consultants will recommend this highly (think: job security). That doesn’t mean it’s a bad idea.
