An interesting read with tenuous small business applications. It makes the case effectively that crowds can be “wise,” that a diverse group of independent individuals can arrive at collective decisions better than those any of its individual members could reach, and that those decisions can reliably be used to predict the future. Look at the stock market, for example, or the point spreads on Sunday football games. When they’re working correctly, they predict the future value of a company, or the margin of victory. And they only get out of whack when the crowd that’s driving them stops being diverse and/or starts talking to one another. The business application is for companies to set up internal “decision markets,” in which a diverse group of employees “buy” stock in certain projects as a way of indicating which they think will be successful. The collective wisdom of the group will be reliably right. The methodology is currently being used by some of the largest companies to decide which new products to develop and by a new wave of polling firms to help predict the outcomes of political elections.
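The core statistical claim here is easy to see in miniature. A rough sketch (the true value, noise level, and crowd size are all made up for illustration): if a thousand independent people each estimate a quantity with their own random error, averaging the guesses cancels the errors out, and the crowd lands far closer to the truth than the typical individual.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0  # the quantity the "crowd" is estimating

# A diverse, independent crowd: each guess is the true value plus
# individual noise (some people are way off in either direction).
guesses = [TRUE_VALUE + random.gauss(0, 25) for _ in range(1000)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# Error of each individual guesser, for comparison.
individual_errors = [abs(g - TRUE_VALUE) for g in guesses]

print(f"crowd error:             {crowd_error:.2f}")
print(f"median individual error: {statistics.median(individual_errors):.2f}")
```

The averaged estimate beats most individuals only as long as the errors are independent, which is exactly why the book stresses diversity and why crowds that start talking to one another (and sharing the same bias) stop being wise.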
I enjoyed the book, but more for its interesting facts and ideas than for its applicability to my business environment. For example, there’s a short section on the history of the United States’ intelligence gathering agencies. Those that existed prior to World War II were part of the different branches of the military, and they were completely surprised by the attack on Pearl Harbor, even though it was later revealed that they had information in their possession at the time that, if it had been shared and analyzed effectively, would have clearly revealed what the Japanese were planning. In the wake of this embarrassment, the CIA was created, with the express purpose of serving as the “centralized” agency, responsible for coordinating the activities of the other intelligence services and overseeing the collective gathering and analyzing of information. In this mission the CIA utterly failed, spawning instead a myriad of new intelligence agencies, each run as its own little fiefdom and doing everything but sharing information with its neighbors. The result, sixty-some years later, was the surprise of September 11, 2001: in more ways than I had originally thought, Pearl Harbor all over again.
Another interesting tidbit is something called the “ultimatum game,” which is perhaps the best-known experiment in behavioral economics. To quote Surowiecki:
The rules of the game are simple. The experimenter pairs two people. (They can communicate with each other, but otherwise they’re anonymous to each other.) They’re given $10 to divide between them, according to this rule: One person (the proposer) decides, on his own, what the split should be (fifty-fifty, seventy-thirty, or whatever). He then makes a take-it-or-leave-it offer to the other person (the responder). The responder can either accept the offer, in which case both players pocket their respective shares of the cash, or reject it, in which case both players walk away empty-handed.
If both players are rational, the proposer will keep $9 for himself and offer the responder $1, and the responder will take it. After all, whatever the offer, the responder should accept it, since if he accepts he gets some money and if he rejects, he gets none. A rational proposer will realize this and therefore make a lowball offer.
In practice, though, this rarely happens. Instead, lowball offers—anything below $2—are routinely rejected. Think for a moment about what this means. People would rather have nothing than let their “partners” walk away with too much of the loot. They will give up free money to punish what they perceive as greedy or selfish behavior. And the interesting thing is that the proposers anticipate this—presumably because they know they would act the same way if they were in the responder’s shoes. As a result, the proposers don’t make low offers in the first place. The most common offer in the ultimatum game, in fact, is $5.
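The proposer’s dilemma in the passage above can be worked through numerically. A minimal sketch, using made-up rejection probabilities (the book reports only that offers below $2 are routinely rejected, so the exact numbers here are assumptions shaped to match that pattern): once responders punish greed, a lowball offer has a terrible expected payoff, and something close to an even split wins.

```python
# Hypothetical rejection probabilities by offer size, invented for
# illustration: lowball offers are almost always rejected, generous
# offers rarely are.
rejection_prob = {1: 0.90, 2: 0.70, 3: 0.50, 4: 0.30, 5: 0.05}

def expected_payoff(offer):
    """Proposer's expected take from the $10 pot for a given offer."""
    keep = 10 - offer
    return keep * (1 - rejection_prob[offer])

payoffs = {offer: expected_payoff(offer) for offer in rejection_prob}
best_offer = max(payoffs, key=payoffs.get)

for offer, ev in payoffs.items():
    print(f"offer ${offer}: expected payoff ${ev:.2f}")
print(f"best offer: ${best_offer}")
```

Under these assumed numbers the $1 “rational” offer yields an expected $0.90, while the $5 offer yields $4.75, which is one way to see why proposers who anticipate punishment converge on the even split.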
Seems like we’re wired both to be greedy and to punish greed in others. And here’s the final tidbit, this one with a real business application.
Decentralized markets work exceptionally well because the people and companies in those markets are getting constant feedback from customers. Companies that aren’t doing a good job or that are spending too much learn to adjust or else they go out of business. In a corporation, however, the feedback from the market is indirect. Different divisions can see how they’re doing, but individual workers are not directly rewarded (or punished) for their performance. And although corporate budgets should theoretically echo the market’s verdict on corporate divisions, in practice the process is often politicized. Given that, divisions have an incentive to look for more resources from the corporation than they deserve, even if the company as a whole is hurt. The classic example of this was Enron, in which each division was run as a separate island, and each had its own separate cadre of top executives. Even more strangely, each division was allowed to build or buy its own information-technology system, which meant that many of the divisions could not communicate with each other, and that even when they could, Enron was stuck paying millions of dollars for redundant technology.
The important thing for employees to keep in mind, then, is that they are working for the company, not for their division. Again, Enron took exactly the opposite tack, emphasizing competition between divisions and encouraging people to steal talent, resources, and even equipment from their supposed corporate comrades. This was reminiscent of the bad old days at companies like GM, where the rivalries between different departments were often stronger than those between the companies and their outside competitors. The chairman of GM once described the way his company designed and built new cars this way: “Guys in [design] would draw up a body and send the blueprint over and tell the guy, ‘Okay, you build it if you can, you SOB.’ And the guy at [assembly] would say, ‘Well, Jesus, there’s no damn way you can stamp metal like that and there’s no way we can weld this stuff together.’”
The beneficial effects of competition are undeniable, but serious internal rivalries defeat the purpose of having a company with a formal organization in the first place, by diminishing economies of scale and actually increasing the costs of monitoring people’s behavior. You should be able to trust your fellow workers more than you trust workers at other firms. But at a company like Enron, you couldn’t. And because the competition is, in any case, artificial—since people are competing for internal resources, not in a real market—the supposed gains in efficiency are usually an illusion. As is the case with today’s American intelligence community, decentralization only works if everyone is playing on the same team.
This sounds a lot like a place I used to work. The lesson: give people the tools to make their own decisions, but only if everyone is clear about who the real competitors are.