The Federal government has warped the fabric of healthcare. By giving away money. They've done this to both doctors' offices and hospitals, paying for "meaningful use" of healthcare information technology. You get the money only if you use software that the Feds certified to meet their criteria. This is supposed to rapidly get us to interoperable, highly functional, easy-to-use electronic health records. But… yes, there's always a but.
One of the Federal electronic health record criteria is that the software has been tested for usability. And there’s the rub.
On February 21, 2012, the US National Institute of Standards and Technology (NIST) published NISTIR 7804, Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records. It establishes a process for usability testing, the EHR Usability Protocol (EUP): "The EUP is a three-step process … (1) EHR Application Analysis, (2) EHR User Interface Expert Review, and (3) EHR User Interface Validation Testing."
Federally certified vendors of electronic health records have to use a User-Centered Design (UCD) process for testing the usability of their software, and must test with a minimum of fifteen end users under the certification rules. Specifically, they must (a toy version of these checks is sketched in code after this list):
- Tell which user-centered design process they used, and if it is not a standard process such as the NIST EHR usability protocol above, provide a detailed explanation of how it works.
- Provide user-centered design test results for eight specific EHR capabilities.
- Once the product is certified, make the usability testing reports public, including how many and what kinds of users tested the software.
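To make the criteria concrete, here is the toy compliance check promised above. Everything in it — the class, field names, and sample vendor — is my own illustrative invention, not the actual certification schema; only the thresholds (fifteen users, eight capabilities, public reports) come from the rules just listed.

```python
from dataclasses import dataclass

MIN_PARTICIPANTS = 15  # minimum end users required by the certification rules

@dataclass
class UsabilityReport:
    vendor: str
    process_named: bool         # named a UCD process (e.g., the NIST EUP)?
    capabilities_tested: int    # how many of the 8 required EHR capabilities
    participants: int           # total test subjects
    clinical_participants: int  # doctors, nurses, and the like
    report_public: bool         # results published after certification?

def certification_problems(r: UsabilityReport) -> list[str]:
    """List the ways a report falls short of the criteria above."""
    problems = []
    if not r.process_named:
        problems.append("no user-centered design process disclosed")
    if r.capabilities_tested < 8:
        problems.append(f"only {r.capabilities_tested} of 8 capabilities tested")
    if r.participants < MIN_PARTICIPANTS:
        problems.append(f"only {r.participants} test subjects (need {MIN_PARTICIPANTS})")
    if r.clinical_participants == 0:
        problems.append("no clinical end users tested the software")
    if not r.report_public:
        problems.append("usability report not made public")
    return problems

# A hypothetical vendor that tested with ten of its own employees:
for problem in certification_problems(
        UsabilityReport("Acme EHR", True, 8, 10, 0, False)):
    print("FAIL:", problem)
```

Keep that little checklist in mind; as we'll see shortly, a lot of real vendors would fail it.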
(Throughout this blog post and any discussion of the issues it raises, understand that the term "electronic health record" includes other functions of a Hospital Information System, such as computer-based practitioner order entry. Not my definition, but the Feds, and in particular NIST, seem to think that ordering medications is a function of a medical record system.)
A team from the National Center for Human Factors in Healthcare, at MedStar Health in Washington, DC, recently looked at "certified" EHR software to see if the vendors really did usability testing as required for certification and for getting money from the government. They reported their results in the September 8, 2015, issue of JAMA.
Surprise! Lots of vendors cheated. Not just a touch of cheating, but “pants on fire” cheating.
They looked at 50 top EHR vendors, and specifically at their computer-based practitioner order entry (CPOE) components. Nine of the vendors didn't make their reports public as required, so we don't know if they followed the usability rules; you can draw your own conclusions about vendors who hide their test protocols and results.
Fourteen of the remaining forty-one didn't provide information on their usability testing as required. Twenty-six used fewer than the required fifteen test subjects. Only nine used fifteen clinical people (doctors, nurses and the like) in testing, as required. Two used only their own employees for usability testing, and seven didn't use a single physician as a test subject!
Hmph.
There is a simple solution: decertify all the cheating vendors, and tell hospitals and doctors that they have to pay back all the money the government paid them to install these EHRs. The doctors' practices and hospitals could of course sue the vendors for contract violations and bad-faith contracting to try to get back the millions of dollars they just lost.
The simple solution isn’t always the best. I suspect vendors, doctors and hospitals would all agree on this.
Perhaps just shaming the vendors? That might help. Here is a “don’t buy this product” list of those vendors who cheated and should not have been certified by the Federal government:
…
On second thought, maybe I shouldn’t list them.
After all, nobody has ever shown an association between the amount of user testing and overall usability; indeed, I suspect any such association is weak at best. One of the main themes of this website is how to objectively analyze the usability of medical software.
As discussed in Discount Usability Testing, testing and tweaking may improve usability. It’s like sanding to remove rough edges. But no amount of sanding will make a table into a chair, and no matter how smooth the table might be, it won’t work very well as a chair. You might not get splinters in your butt, but it’s still uncomfortable. (If you publicly equate usability testing with preventing splinters in the butt, please credit me, thanks.)
Usability testing can’t make poor user interaction design into good user interaction design. To use another analogy, usability needs to be in the recipe and baked in before you start decorating the cake. A brilliant, flexible design will do more for overall usability than lots of after-the-fact usability testing. After-the-fact usability testing, if the underlying design is seriously flawed, simply results in kludges that hurt overall usability.
The NIST report rests on a stated "starting assumption: application designs have been optimized for general usability." In the case of EHR vendors, this is a completely unwarranted assumption. All major EHRs violate many of the usability principles laid out on this website. Any user of any electronic health record software will probably tell you that, compared to their cellphone or Google or Amazon, EHR usability sucks. The only reason even the best EHRs succeed is that doctors, nurses and other medical workers are very good at working around their fallibilities.
Even the vendors who have done "proper" usability testing suck, although maybe a bit less than the competition. I say this with assurance, as I have used most of the industry-leading EHRs, and all of them are painful to use. The best of the big EHR vendors are Cerner and Epic; in absolute terms – I don't grade on a curve – they both get a C minus. I have never smashed a computer just to get back at the EHR vendor, but I have thought about it many times. Once, I saw one of my partners punch a computer monitor. I agree that the computer, or rather the head of the EHR's design team, deserved it.
The NIST report mentioned above tells us:
2.2.2 Definition of Usability and Associated Measures
…our working definition of Usability, “The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.”
These terms are further defined as:
• Effectiveness: The accuracy and completeness with which users achieve specified goals.
• Efficiency: The resources expended in relation to the accuracy and completeness with which users achieve goals.
• Satisfaction: Freedom from discomfort and positive attitude toward use of the product.
• Context of use: Characteristics of the users, their tasks, and their organizational and physical environments.
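To make those abstract measures concrete, here is a minimal sketch of how the first two are typically computed from a task-based test session. The numbers and function names are mine, chosen purely for illustration; NIST supplies the definitions, not this code.

```python
def effectiveness(completed_ok: int, attempted: int) -> float:
    """Accuracy and completeness: fraction of tasks achieved correctly."""
    return completed_ok / attempted

def efficiency(completed_ok: int, total_seconds: float) -> float:
    """Resources expended vs. goals achieved: correct tasks per minute."""
    return completed_ok / (total_seconds / 60)

# One clinician's session: 20 CPOE tasks, 17 done correctly, 25 minutes.
print(f"effectiveness: {effectiveness(17, 20):.0%}")              # 85%
print(f"efficiency:    {efficiency(17, 25 * 60):.2f} tasks/min")  # 0.68
satisfaction = 2.1  # hypothetical post-test Likert average out of 5
print(f"satisfaction:  {satisfaction}/5")
```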
In the International Organization for Standardization’s (ISO) most recent standards (i.e., ISO 25010), usability is included as one of eight attributes of software quality.
The sub-characteristics of usability are:
• Appropriateness recognizability: The degree to which the software product enables users to recognize whether the software is appropriate for their needs.
• Learnability: The degree to which the software product enables users to learn its application.
• Operability: The degree to which users find the product easy to operate and control.
• User error protection: The degree to which the system protects users against making errors.
• User interface aesthetics: The degree to which the user interface enables pleasing and satisfying interaction for the user.
• Accessibility: Usability and safety for users with specified disabilities.
Show me a doctor who, in 2015, thinks his or her EHR provides "satisfaction: freedom from discomfort and positive attitude toward use of the product," "operability: the degree to which users find the product easy to operate and control," and "user interface aesthetics: the degree to which the user interface enables pleasing and satisfying interaction for the user," and I'll show you a shill for the company.
But things are looking up. NIST has a new (September 2015) technical paper that moves in the right direction: Technical Basis for User Interface Design of Health IT. Overall it provides a good review of the design process for a good electronic health record, and lays out some specific usability principles as currently known. It's a bit dry; you'll probably have more fun if you browse this website for medical software specifics, and read Alan Cooper's About Face for general user interaction design principles. However, it gives no nod to the specific needs of those in the Emergency Department, which is the best testing ground for any EHR; people in the ED are so busy that they are less tolerant of BS than those in the rest of the hospital. The ED also needs the "EHR" to provide a tracking system.
Unlike the NIST document linked at the beginning of this article, it provides a schema for evaluating not vendor usability-testing procedures but the usability of an electronic health record itself. However, it's still based on numerical ratings (number of stars) from people – either naïve potential users trying to use the system, or usability experts – reporting on their experience with it. It does provide specific categories in which these people should report, and for the "experts" it lists concepts such as affordance. But unlike a proposal based on Fitts's Law or on measuring the data-pixel ratio of a screen, there is no attempt to objectify these subjective expert ratings.
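For contrast, here is what an objective measure looks like: a minimal sketch of Fitts's Law (in its common Shannon formulation) and a crude data-pixel ratio. The a and b constants are placeholders, not values calibrated for any real device, and the screen numbers are invented.

```python
import math

def fitts_id(distance_px: float, width_px: float) -> float:
    """Fitts's index of difficulty, Shannon formulation: log2(D/W + 1)."""
    return math.log2(distance_px / width_px + 1)

def fitts_movement_time(distance_px: float, width_px: float,
                        a: float = 0.2, b: float = 0.1) -> float:
    """Predicted seconds to acquire a target: MT = a + b * ID.
    a and b are placeholder constants, not calibrated values."""
    return a + b * fitts_id(distance_px, width_px)

def data_pixel_ratio(data_pixels: int, total_pixels: int) -> float:
    """Tufte-style data-ink ratio applied to a screen."""
    return data_pixels / total_pixels

# A big, close button versus a tiny, far-away one:
print(f"{fitts_movement_time(100, 80):.2f} s")  # large nearby target
print(f"{fitts_movement_time(900, 12):.2f} s")  # small distant target
# A screen where only a third of the pixels carry patient data:
print(f"{data_pixel_ratio(640_000, 1_920_000):.0%}")  # 33%
```

No stars, no questionnaires: you can compute these from a screenshot and a tape measure, and two reviewers will get the same answer.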
If you’ve kept up with this website, you won’t find much new in this NIST report, though EHR vendors may.
In my requisite curmudgeon role, I will note that this report specifically states: "Limit alerts to those that are clinically essential to ensure safe and effective patient care." I just worked a 12-hour shift with a leading urgent-care EHR, and I counted 72 "alerts." By "alert" I mean a dialog box that pops up in front of the EHR screen and won't go away and let me get back to work until I deal with it. Twelve were "clinically essential to ensure safe and effective patient care" and related to medication interactions. The rest were either (a) reminders to complete an item needed for billing (which, my CPA wife opines, may be necessary to stay in business; but there are better ways than pop-ups) or (b) (about 20%) totally spurious dialog boxes with no relevance to billing or patient care for that particular visit. Example: when I closed the chart for a patient who was just there for a urine drug screen, a pop-up dialog box asked me if I had ordered a rapid strep test for the patient, and if so, to enter the order for it.
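For the record, a quick tally of that shift. The twelve medication-interaction alerts come straight from my count above; the split of the remaining sixty between billing reminders and spurious boxes is my rough reconstruction, so treat the exact numbers as approximate.

```python
from collections import Counter

# Tallying the 72 pop-ups from one 12-hour urgent-care shift.
alerts = (
    ["clinically essential"] * 12   # medication interactions
    + ["billing reminder"] * 45     # needed to stay in business, maybe
    + ["spurious"] * 15             # e.g., strep-test prompt on a drug screen
)
for category, n in Counter(alerts).most_common():
    print(f"{category:22s} {n:3d}  ({n / len(alerts):.0%})")
```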
It is nice to have the principles in a single place, and to have official recognition of the need to implement these usability principles. It’s also nice for critical bloggers like me, as I can now pick on medical software and gleefully point out in great detail how they are not compliant with NIST best practices.
One quote from this NIST document: "an effective EHR is likely to garner compliments from users about its usability." I eagerly await the day when I hear a single such compliment.
Tags: Alan Cooper, Meaningful Use, Charting, Information Technology, NIST, IT, Healthcare IT, Usability, User Interaction Design, User Interface, Discount Usability Testing, CPOE