In the data world today, "big" dominates. But sometimes you don't need big. You need a small dose of exactly the right data: data that bear precisely on the question at hand, that you understand deeply, and that you can trust. If such data are already at hand, great. But frequently they are not. And then nothing beats a well-conceived, -designed, -controlled, -executed, and -analyzed experiment. Companies need to make sure experimentation is included in their "data toolkits," learn when to use it, and develop the skills to conduct effective experiments.
Let's consider a recent example: Boeing's lithium ion batteries for the Dreamliner 787. As you probably know, the issue has been all over the news for a couple of months. In two instances, batteries nearly caught fire, grounding the aircraft for over three months. The planes are now back in the air, but they still won't be carrying passengers for another month or so.
What Boeing needs right now is not big data but a sequence of experiments that do what previous tests did not: isolate the root causes of the problems that have occurred so far; verify that fixes being made really work; identify other problems that are yet to rear their ugly heads; predict how the batteries will perform under "worst-case scenarios"; and convince regulators and the flying public that the Dreamliner is safe for passengers. Over time, Boeing and its suppliers will almost certainly require still more experiments to prevent future problems, to better predict battery life, and to test new designs, new manufacturing techniques, and new maintenance strategies. Some of these tests have been and will be conducted under controlled conditions in laboratories, and some must be conducted under increasingly less controlled circumstances in the air.
In the unfolding data revolution, companies must develop the capabilities to experiment. But too many eschew it. This was the case in a couple of recent client engagements. In both cases, senior managers had posed a seemingly simple question. But the effort to assemble all the relevant information, across their disparate data warehouses, was daunting. Months went by, and the question remained unanswered. In both cases, a simple experiment, taking just a few weeks, would have filled the bill quickly, cheaply, and better than any alternative. In the vast majority of similar cases, we are not talking about a series of complex experiments under the extreme conditions facing Boeing — just small-scale, narrowly focused real-world trials.
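To make the idea of a small-scale trial concrete: a minimal sketch of how such an experiment might be analyzed, assuming a simple two-group design (current process vs. proposed change) with made-up outcome numbers. It uses a permutation test, one standard way to ask whether an observed difference between the groups could plausibly be due to chance.

```python
import random
from statistics import mean

# Hypothetical trial data: group A keeps the current process,
# group B tries the proposed change. Numbers are illustrative only.
random.seed(42)
group_a = [random.gauss(10.0, 2.0) for _ in range(30)]
group_b = [random.gauss(11.5, 2.0) for _ in range(30)]

def permutation_test(a, b, n_resamples=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Repeatedly reshuffles the pooled outcomes into two pseudo-groups and
    counts how often a difference at least as large as the observed one
    appears by chance alone.
    """
    rng = random.Random(seed)
    observed = abs(mean(b) - mean(a))
    pooled = a + b  # new list; shuffling it does not touch a or b
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        resampled_a = pooled[:len(a)]
        resampled_b = pooled[len(a):]
        if abs(mean(resampled_b) - mean(resampled_a)) >= observed:
            count += 1
    return count / n_resamples

p = permutation_test(group_a, group_b)
print(f"difference in means: {mean(group_b) - mean(group_a):.2f}, p = {p:.4f}")
```

The point is not the particular test but the scale: with 30 observations per arm and a few lines of analysis, a focused trial can answer a question that months of warehouse archaeology could not.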
Some may view experimentation as "old school," not up to the rigors of the unfolding data revolution. Quite the opposite — its fabled past is the best reason to employ it today! Experimentation has a rich and storied history in product development and market research. It has contributed to hundreds of thousands of improved products in nearly all sectors, from agriculture to electronics to medicine. And not just design — industrial experimentation has contributed to improvements in the technologies and processes needed to grow corn, assemble cars, find oil, and so forth. Industrial experimentation has a rich history in the service sector as well. Many Information Age companies, such as Google, already get this message. And over the years, I've helped many others conduct simple and effective experiments in areas ranging from customer onboarding to policy deployment.
It is critical that companies understand why experimentation works, so they will know where to apply it. In short, when used properly, experimentation brings the power of the scientific method to the problems companies face today. This means the attendant focus, sharp definition of the question, careful design, data you can trust, and in-depth analyses — just what is called for in many situations.
Companies also must learn how to conduct experiments. They are hard work. It's all too easy to define the problem poorly, choose a bad sample, skimp on design, fail to calibrate instruments, or misinterpret the data. Boeing and its suppliers had, of course, conducted extensive battery tests. Still, as already noted, they missed the mark.
Even when the experiment itself is flawless, things can go wrong in the end. Most experiments involve sampling, a topic many managers find opaque, and so they don't trust the results. I'll never understand why so many otherwise smart managers will trust a slightly off-target population of data that's known to be loaded with errors over a small, spot-on, high-quality sample, but they do! The only way I've found to combat this issue is to clearly explain the many benefits of experimentation and present them in a powerful, but balanced, manner.
To be clear, I am not advocating experimentation over big data. If you have data you can trust, by all means use them. And there are many instances where conducting an experiment is simply infeasible. You can't run an experiment to predict the advance of the flu or isolate potentially exploding manhole covers. One hopes that big data and experimentation will work hand in hand. It's not hard to imagine the day when chips are built into Boeing's battery cells to continually monitor the health of each cell and take it out of service when needed, obviating the need to experiment. Conversely, one expects there will be times when big data suggests a direction that demands further experimentation.
Companies that aim to score with data must not adopt a one-size-fits-all approach and blindly follow the crowd into big data. They need many approaches and tools in their data toolkits. For almost all, experimentation deserves a prominent spot in that toolkit. For many problems, it is the best approach. Companies must develop a deep appreciation for why and how it works. And give it a fair chance.