Recent reports bookend the promise and peril of computerization and the electronic medical record in health care. On the truly positive side, the Mayo Clinic and UnitedHealth Group have teamed up to form Optum Labs, a research group aimed at mining (initially) claims records for over 100 million people and 5 million clinical records. The lab underscores the hope and promise that focused data analytics can help improve day-to-day care, better estimate true costs, and contribute to the next generation of research. To illustrate, a study found a previously unsuspected interaction between Paxil and pravastatin, which elevates blood sugar levels and carries the attendant risks of hyperglycemia, particularly for diabetics. Researchers were able to combine Stanford, Vanderbilt, and Harvard databases to find 130 patients taking both drugs, enough to confirm the interaction. Part of the promise of computerization involves thousands of such discoveries.
But the other bookend is the peril, particularly as it relates to the electronic medical record. So far, the effort has yielded mixed results. A recent study shows little evidence that any of the projected $81B annual savings has resulted. Indeed, it appears that electronic records have made it easier to bill for more services, increasing costs for insurers and patients alike.
While we wholeheartedly believe in the promise, results to date do not surprise us. Automating poorly performing processes, in any industry, never yields good results. To paraphrase the great Dr. W. Edwards Deming: automating a process that produces junk just allows you to produce more junk, faster.
None of the promised benefits of computerization and electronic medical records can be realized without high-quality data, including comprehensive data standards. While the health care community has taken important first steps, it needs to move in a more comprehensive, powerful, and urgent fashion.
As a first step, many organizations have recognized the need for common data terminologies (e.g., LOINC, SNOMED, ICD-9, HL7) and developed their own. Further, the Office of the National Coordinator (ONC) has set benchmarks that ensure at least minimal connectivity. Within the domains where they are used consistently, such terminologies and benchmarks are fine.
But across the industry, they contribute to what we and others call "towers of medical Babel" (PDF). The "Babel" is exacerbated by the proliferation of vendors, each with their own proprietary approaches, leading to each tower becoming thicker, better fortified, and more isolated from others.
Since "the patient" is fundamental, the effort must start with the patient's identity. It is far too easy for Don Nielsen at one provider to be Donald E. Nielsen at a second and Dr. Donald Nielsen at a third. From a Big Data perspective, if Don Nielsen is taking Paxil and Donald E. is taking pravastatin, he would have been excluded from the study cited above. To underscore the importance of this point, an estimated 715,000 people in the U.S. alone take both medications, yet there is no reliable way to identify more than a few of them.
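The identity problem is easy to see in code. The following is a minimal sketch, not a real master-patient-index algorithm: the records, names, and date of birth are invented for illustration, and production systems use far richer matching rules and many more fields. It shows how exact string comparison splits one patient into three "people," while normalizing names and matching fuzzily (here with Python's standard-library `difflib`) links the records back together.

```python
from difflib import SequenceMatcher

# Hypothetical records: the same person as he might appear at three providers.
records = [
    {"provider": "A", "name": "Don Nielsen",        "dob": "1954-03-02"},
    {"provider": "B", "name": "Donald E. Nielsen",  "dob": "1954-03-02"},
    {"provider": "C", "name": "Dr. Donald Nielsen", "dob": "1954-03-02"},
]

def normalize(name: str) -> str:
    """Crudely strip titles and middle initials, and lowercase the rest.
    Real identity-matching systems use far richer rules and more fields."""
    titles = {"dr", "mr", "mrs", "ms"}
    parts = [p.rstrip(".") for p in name.lower().split()]
    parts = [p for p in parts if p not in titles and len(p) > 1]
    return " ".join(parts)

def likely_same_person(a: dict, b: dict, threshold: float = 0.8) -> bool:
    """Require an exact date-of-birth match plus high fuzzy name similarity."""
    if a["dob"] != b["dob"]:
        return False
    ratio = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    return ratio >= threshold

# Exact comparison links none of the three records: the patient silently
# splits into three "people," and any study joining on name misses him.
exact_links = [(a["provider"], b["provider"])
               for i, a in enumerate(records) for b in records[i + 1:]
               if a["name"] == b["name"]]

# Fuzzy matching on normalized names plus date of birth links all three.
fuzzy_links = [(a["provider"], b["provider"])
               for i, a in enumerate(records) for b in records[i + 1:]
               if likely_same_person(a, b)]
```

Even this toy version hints at why a shared identity standard matters: without one, every institution must invent its own matching heuristics, and no two will split or merge patients the same way.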
Readers over 40 will surely recall assembling their own medical records, in pre-computer days, by driving from provider to provider. It was a frustrating job, but you knew your providers, making it possible to assemble the record correctly. Computers don't have this knowledge, and without it the assembled record cannot be trusted.
It is easy to dismiss this example as technical arcana. It is anything but. Recall the parable of the blind men, each touching only one part of an elephant. Without an identity standard, the internist is like the man touching the trunk, the cardiologist the ear, the pharmacist the tail, the claims specialist the leg, and the nurse the back. That is no way to practice medicine, underwrite insurance, or dispense medication!
There is also a need for more mundane standards, beyond patient identifiers. To compare treatments, one must have common definitions of illnesses, symptoms, the treatments provided, indicators of "getting better," and so forth. Without them it is simply too easy to translate "mild systolic flow murmur" into "underlying cardiac disease," "wheeze" into "asthma," and mild reactions to specific drugs into allergies. These sorts of misinterpretations were all too common without computers, and electronic medical records have done nothing to reduce their severity or number.
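One design principle behind such definitional standards can be sketched in a few lines. This is illustrative only: the vocabularies and codes below are invented for the example (real systems would map to SNOMED CT or ICD identifiers). The point is that a finding is coded as exactly what was observed, is kept distinct from any diagnosis, and unrecognized text is flagged for human review rather than guessed at.

```python
# Toy controlled vocabularies; codes are invented, not real SNOMED/ICD values.
FINDING_CODES = {
    "mild systolic flow murmur": "FINDING-001",
    "wheeze": "FINDING-002",
}
DIAGNOSIS_CODES = {
    "underlying cardiac disease": "DX-101",
    "asthma": "DX-102",
}

def code_finding(free_text: str):
    """Record exactly what was observed. A finding never silently becomes
    a diagnosis, and unknown text is routed to a human coder (None)."""
    key = free_text.strip().lower()
    return FINDING_CODES.get(key)
```

With this discipline, "wheeze" can never quietly become "asthma" somewhere downstream: the two live in separate vocabularies, and the promotion from finding to diagnosis remains an explicit clinical judgment rather than a side effect of data entry.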
This list can go on and on, and the problem is getting worse. New treatments, new diagnostic tools, and new technologies such as smart pumps add new types of data to the medical lexicon almost daily. Even if all the needed standards were in place today, gaps would start to appear almost immediately.
In practically every area of health care, the benefits of computerization stem from the abilities to exchange data quickly and easily and to assemble large amounts of data for analysis. But this is not possible without first putting in place the proper messaging, structure, format, content, and definitional standards.
We are well aware that developing, promulgating, and enforcing needed data standards will be tough. At the same time, the lack of data standards is and must be recognized as a crisis. Failure to address it will compromise electronic medical records, hurt care, make everyone's job more difficult, and contribute to the unsustainable growth in cost. Fortunately, health care is loaded with smart, motivated, forward-thinking people who have a history of coming together in response to a crisis. Indeed, this one demands the same urgency as a patient with an acute abdomen, for whom any delay in surgery may prove fatal.