Trust, but Verify

After hearing me talk about how much I enjoyed Gary Klein’s Sources of Power, a friend of mine forwarded me this Harvard Business Review article, titled Don’t Trust Your Gut, by Eric Bonabeau. Bonabeau takes on the recent books promoting the use of intuition in business, calling out Gary Klein specifically, and attempts to make the case that trusting intuition is dangerous in situations involving complex interdependencies, because “the required computations outstrip the mind’s processing capabilities”. He recommends using “a computational decision-support tool”.

If I were more cynical, I would speculate that Bonabeau works for a company making such tools, but I’ll leave the ad hominem attacks aside for now, instead attacking his ideas. For one thing, the very idea that complex interdependencies are more likely to be correctly tracked in software than in people’s brains is laughable to me. Social software, the best attempt of many smart people to capture the complex interplay of social relationships among people, is so inadequate that it can be described as autistic, despite trying to do a job that each of us can handle unconsciously. To expect that decision-support tools can capture even more complex situations in a realistic and useful manner is idealistic at best.

I think that the author significantly misunderstands the point of research such as Klein’s. Of course you should use algorithms in cases where the inputs and outputs of a process can be well described by numbers. But in the real world, the numbers are often so distorted as to be unrecognizable, if not completely made up (classification schemes are a good example). A good executive will know how to drill down and get to what’s really going on, where a program will take garbage in, and produce garbage out.

I would also cynically observe that most of the time, the quantitative decision making that happens is there for one, and only one, reason: to justify the decision that was already made by the gut reaction of the person in charge. The person in charge doesn’t want to be left responsible, so they force all of their subordinates to generate numbers to support the decision that’s already been made, and the subordinates are told when the numbers are “wrong” and need to be “tweaked”.

Instead of the model that Bonabeau proposes (spend lots of time up front with decision-support tools before making a decision), I would propose an alternative vision for management, courtesy of Ronald Reagan: Trust, but Verify. In other words, trust your gut and spend your time figuring out how you can verify or invalidate your gut decision as quickly as possible. This is somewhat influenced by the fact that I’m currently reading the book Experimentation Matters, which recommends front-loading any project with experiments so that you can change course sooner if you’re going down the wrong path.

I believe that this approach makes more sense for a variety of reasons. From my own personal experience, when I was Rush Chair at TEP, working in a complex time-crunched environment running the lives of 22 brothers and a multitude of freshmen, I quickly found out that it was far better to quickly make the wrong decision than to dither and eventually make the right one. If I made the wrong decision, people would go off, try it, figure out it didn’t work, and switch to the other choice. That would often happen in less time than I would have dithered in making the original decision. And even after all that dithering, I would often have made the wrong choice anyway, so the dithering just delayed the inevitable. Making decisions quickly, even the wrong ones, often leads to getting on the right path sooner.

Admittedly, this only works if you can get immediate feedback on a decision. But the tools to do that are growing more powerful every day, as the book describes. I think simulations are at least as likely to be accurate as most of the software that Bonabeau describes in his article, and simulations let you see the results of your decisions in a very swift and controlled fashion. Go with your gut, see what happens, and revise. Create a tight feedback loop, and run through several cycles, evaluating the results each time (as a side note, this is a process that Gary Klein describes expert decision-makers going through in the field). I’m a big fan of rapid prototyping for engineering development (see the book Serious Play for a description) and really don’t see why the same principles couldn’t and shouldn’t be applied to project management. Just as the classic “waterfall methodology” has been outmoded by strategies such as “extreme programming”, I expect that the typical “Gate-Phase Process” will eventually be outmoded by an “Extreme Management” strategy. “Extreme Management”, like extreme programming, would be a test-based methodology; you could make decisions from the gut quickly, but would immediately be looking for ways to verify those decisions as quickly and cheaply as possible.

I think that if as much time and effort were spent on getting the iteration and feedback time for simulations down as is spent on the “decision-support tools” that Bonabeau recommends, the world of “Extreme Management” would not be far off. Maybe I should write a book!

P.S. Speaking of books, I wanted to get in one more angle on this subject that’s related to my other book idea, stories. Imagine that there are two managers pitching a new product to an executive committee. One has a great story, explaining exactly what niche her new product will fill, with lots of specific details of how it will change the lives of people who buy it. She has some numbers to support her idea, but her emphasis is on the testimonials she’s gotten from customers who adore the idea of her product. The other manager puts up chart after chart of numbers demonstrating that there is an underserved market niche of some sort, but has talked to no customers and generates no excitement. Which one would be more convincing? I would say the one with the good story. A good story can be verified, and the numbers can be run later. A product has to excite people; it can’t just be a soulless attempt to describe how Rational Evaluative Maximizing Models will benefit from it. But I’m totally biased, of course.