Supposedly set to launch this July — and then September — New York City's innovative bike sharing program has been delayed until at least next spring, the city announced. "The software doesn't work, duh," declared Mayor Michael Bloomberg, an ardent champion of urban cycling. "Until it works, we're not going to put it in... We did think there would be a possibility of a partial launch...." But that didn't happen.
The city, in fact, had the chance to run a preliminary test in April for its $50 million, 10,000-bike initiative, which would be America's biggest. The Department of Transportation, however, had insisted such a pilot was unnecessary. Whoops. As of this writing, no pilots are planned before next year's launch.
On a far larger scale and more caustically, former JCPenney and Macy's CEO Allen Questrom ripped into current JCPenney CEO (and Apple Store innovator) Ron Johnson's turnaround strategy for the struggling retailer. The particular object of his ire? "I'm shocked that they're going forward with this without even testing one or two stores to see how customers like it," Questrom told CNBC. "To do all of them at one time without testing the first one — you have to question what kind of strategy that is."
The strategic merits of bike sharing and department store "boutiquification" may be debatable. The organizational and operational benefits of targeted testing are not. When I've seen individuals, project teams and organizations humiliatingly — and expensively — fail at innovation, the odds are that they overinvested in sophisticated analyses and underinvested in simple tests. Even worse are the "innovators" who insist they performed extensive testing before launch, until a review reveals that the tests were designed to "prove" and "validate" that the innovation "worked." Any meaningful learning or insight was incidental.
My favorite excuses are the ones where team leaders piously declare there's simply not enough time or money for testing. "This was a crash project and comprehensive tests would add weeks to delivery and the field wants it now," the story goes. Of course, the field won't be thrilled with the time, effort and cost of debugging what's been crashed.
These pathologies are nothing new. But the importance and pace of innovation rollouts demand a different design sensibility. Just as the "quality" and "lean production" movements of the '80s and '90s required quality to be designed in — rather than inspected in — innovators have to demonstrate greater ingenuity and integrity around how they integrate real-world testing into their projects and processes. Just as with quality and lean, this turns out to be a cultural, organizational and technical challenge.
At one telecoms company with a disappointing history of troubled upgrades and delayed rollouts, I saw a key innovation team present its testing program and reviews to senior management. Jaws dropped. The sequencing ensured that serious issues could not be addressed until after enormous sums of money had already been spent, and that collaboration would overwhelmingly center on solving problems rather than anticipating them. The entire testing culture treated testing as a debugging process rather than as a discovery opportunity.
In other words, by the time problems were confronted in testing, it was too late. The economics had turned bad. This anti-test bias is particularly prevalent in organizations that pride themselves on being "innovative." Testing is simply the "necessary evil" and "hurdle" their innovations must pass through on the way to the real test of the marketplace. I saw this attitude in a lot of Web 2.0 start-ups that no longer exist.
Yes, the benefits of testing should clearly outweigh its costs. Yes, too many organizations default to time-consuming and costly comprehensive testing protocols that, on balance, add little value or insight. But far too few organizations use targeted tests to simultaneously learn and accelerate the development process. Few things are more cost-effective than creatively cheap tests.
A predisposition to test is a predisposition to learn. A refusal to test is a refusal to learn. Innovators who avoid real-world tests aren't visionaries, they're cowards. The best innovation team leaders I've worked with are constantly challenging their people to come up with fast, cheap and unexpected ways to stress test their deliverables before they go live. Suppliers are brought in. Customers are brought in. Team members spend a day or three at customer sites not just ethnographically observing but running a test or two. Those tests are designed to elicit new knowledge, not merely confirm the brilliance of the original design.
Elite teams are constantly testing themselves and their innovations. You see this in the military's Special Forces and you see it in world-class medical institutions. A culture of learning is intimately entwined with a culture of tests. Those tests are less about validation than about giving people greater insight into the strengths and limitations of their innovation offerings. It's easy to tell the two approaches apart. Testing for validation is all about leaders looking for compliance and adherence to plans; testing for learning is about leadership that expects people to be attentive, agile and adaptive. What kind of leadership do you think leads to a sustainable innovation culture?
Yes, look at innovators, their schedules and their budgets. But if you want to understand how smart and serious they are, look at the testing regimes they submit their innovations to. Are you impressed? Or do they flunk?