Redundancy with a stock of identical parts to replace failed components is a standard way to make industrial processes more reliable.
For a mission-critical ED computer system, which needs to be up and running 24/7/365, one way to implement this is to keep a backup server always ready to go, loaded with recent copies of the databases. The backup could also be used during scheduled “downtime,” when changes must be made to the primary system. However, this approach is expensive in terms of hardware and software licensing (though licensing really shouldn’t be an issue if the software vendors knew what was good for them), and most of all in terms of system maintenance.
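The warm-standby idea can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor’s actual failover mechanism: a standby server periodically copies the primary’s data, and requests are routed to whichever server is alive.

```python
class Server:
    """Hypothetical stand-in for a database server."""

    def __init__(self, name):
        self.name = name
        self.alive = True
        self.records = {}

    def replicate_from(self, other):
        # Scheduled replication: copy the primary's current data
        # onto the warm standby so it is never far out of date.
        self.records = dict(other.records)


def active_server(primary, standby):
    """Route requests to the primary unless it is down."""
    return primary if primary.alive else standby


# Usage: the standby takes over with its last replicated copy.
primary = Server("primary")
standby = Server("standby")
primary.records["pt-001"] = "discharge instructions"
standby.replicate_from(primary)   # periodic replication job
primary.alive = False             # simulated crash
assert active_server(primary, standby).records["pt-001"] == "discharge instructions"
```

Note that the standby only knows what was replicated to it; anything written to the primary after the last replication cycle is lost on failover, which is one reason maintaining such a pair is real ongoing work.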
Years ago, a hospital where I worked used Logicare to generate discharge instructions — originally a DOS version, later a Windows rewrite. (BTW, Logicare produced the finest discharge instructions I’ve seen, and I’ve used four or five other programs that generate them.) When the network went down, we could fall back on a non-networked copy on a single PC in the ED. (I wrote some DOS batch files to let us switch back and forth between the networked and standalone systems.) It kept us going when the network went down, which it did on a regular basis back in those days.
Redundancy alone, however, may not cover every failure mode. For example, keeping a mirrored server, constantly ready to take over from the primary at a moment’s notice, might not be enough: whatever killed the primary server (a virus?) might do precisely the same thing to the backup.
Instead, one might maintain a backup mesh network among the individual PCs, with the PCs running a different operating system from the server’s and with bits of the mission-critical information spread redundantly across all of them. While limitations in our current operating systems may prevent this today, the idea still has merit.
A parallel to this idea is found in agriculture. Monocultures (large plantings of precisely the same crop) are a mainstay of “modern” agriculture, and the Green Revolution, with its new high-yielding strains, has staved off global starvation. But monocultures carry risk: a single predator or disease against which the monoculture has no defense can wreak havoc.
The classic example is the potato – a great, high-producing food crop, likely developed through generations of selective breeding by inhabitants of the altiplano region of Peru before contact with Europeans (see the excellent books 1491: New Revelations of the Americas Before Columbus and Why We Eat What We Eat: How Columbus Changed the Way the World Eats for more). The potato quickly became a monoculture throughout much of Europe, but a new strain of potato blight caused the Great Potato Famine of the 1840s – classically discussed in the context of Ireland, but also a major agricultural disaster across the Scottish Highlands and even in continental Europe.
Starting in 2008, a new wheat rust has been spreading through western Asia and heading toward central Europe, threatening a similar famine.
Ecologists now discuss the possibility, especially in rain-forest areas, of diverse farming with many different species, lessening the risk of runaway plant disease. IT shops, by contrast, promote uniformity in operating systems and hardware because it makes support much easier. But uniformity means that systems share vulnerabilities: a single flaw in the operating system, or in identical hardware across many computers, could mean sudden, catastrophic failure.
Should we embrace the idea of polyculture not only in agriculture but in hospital computer systems?