Sunday, May 3, 2009

Review of The Black Swan by Nassim Taleb

Introduction
The Black Swan by Nassim Taleb takes a skeptical view of the Rationalism employed in the modern behavioral sciences. It is written with eloquence, so that complex ideas and mathematics are made intuitively clear. It is particularly relevant for consumer and marketing research, which has repeatedly produced embarrassing and costly failures such as New Coke, Life Savers Soda, Colgate Kitchen Entrees, Pond’s toothpaste, Clairol’s ‘Touch of Yogurt’ shampoo, Frito-Lay Lemonade, Pepsi AM, and Heinz’s All Natural Cleaning Vinegar.

These projects were carried out by experienced industry professionals at leading companies. Nor is the problem confined to consumer research: American financial models have recently gone bust in a highly visible manner, with the rest of the planet watching in horror. Why the mixed results from research based on Rationalist models?

Taleb gives a roadmap that explains not only the misuse of mathematics in such predictive attempts but also the fallacies of reasoning that Rationalism invites, fallacies that lead to false confidence in our undertakings and an understatement of the risk from random but material future events. The Black Swan is his metaphor for the risk posed by unknown events with consequential effects.

My review starts with Taleb’s recounting of the numerous points of failure in Rationalist reasoning, such as domain specificity, post hoc rationalization, the narrative fallacy, and silent evidence. It then explains the abuse of mathematics cited by Taleb, starting with the circularity of statistics and the pervasive but often invalid assumption that a distribution of attributes is non-scalable.

Examination
Nassim Taleb gives us a practitioner’s guide to the pitfalls in Rationalist reasoning. He starts with the all too human tendency to wrongly translate an absence of proof into proof of absence concerning risk. His delightful example is a thought experiment with a turkey that is well fed and cared for by his human host. Using the inductive methods of Rationalism, with day after day of supporting proof, the turkey concludes that his human benefactors have his best interests at heart. There is a sudden “revision of belief” (p. 40) on the Wednesday just before Thanksgiving. Regarding human malice, the turkey had confused absence of proof with proof of absence about the risk he faced.

In the first section of his book, Taleb explains the common fallacies of Rationalism. These fallacies include (p. 50) the confirmation error, the narrative fallacy, and the distortion of silent evidence.

Confirmation Error
Taleb observes that the context in which information is presented to us influences our thinking about that information (p. 53). The information does not stand on its own merit; it is judged partly by its presentation context. Taleb calls this Domain Specificity; Hawkins et al. (pp. 299-300) call it contextual cues and explain its impact on consumer behavior.

Another confirmation error is naïve empiricism (p. 55). This is the human inclination to look for support of our vision and to orient research with this positive frame of mind. It admits only those past instances that confirm current proposals.

Narrative Fallacy
This is a predilection for simple explanations in place of complex truths. Taleb’s exposition uses a cognition model similar to the Elaboration Likelihood Model used in consumer behavior (see Hawkins, 2007, pp. 409-10). Taleb describes (p. 81) the cognitive model devised by the eminent psychologist Kahneman. This model organizes cognition into System 1 thinking and System 2 thinking. System 1 is intuitive and quick, relying on heuristic shortcuts. It gives easy and obvious narratives but overemphasizes the emotional and the sensational.

System 2 is what we would characterize as central route processing: a deliberate sequence of thought. Because its reasoning can be retraced, it is easy to rethink a strategy based on feedback. System 1 thinking, on the other hand, is prone to narrative fallacies.

Narrative fallacies take several forms. One is Post Hoc Rationalization. This fallacy provides an artificial explanation of an event after the fact rather than establishing causal relationships during the event. Taleb gives the classic example (p. 65) of a group of consumers who each selected a pair of nylons from a set of twelve. A while later, they were asked why they made their particular choice. The answers ranged from better color to better texture. The twelve pairs were in fact identical. Hawkins et al. (2007, p. 326) report on a similar happening with Disney and Bugs Bunny.

Finally, the more randomness there is in information, the harder it is to remember (p. 69). We therefore seek to summarize random information and impose our own order on it. We fold meanings into convenient dimensions of existing knowledge. This reduces the dimensionality, making the information less complex and so easier to store and retrieve. It also makes the world look less random, and therefore less risky. This is why we tend to underestimate risk, especially risk that does not fit into our existing knowledge dimensions.

Distortion of Silent Evidence
History is a graveyard of Silent Evidence, as Taleb calls it. The simplification biases discussed above reduce complex evidence into convenient summaries. The omissions add to the silent evidence we ignore, which distorts our view of reality. The manifestation of silent evidence is a false sense of stability (p. 117).

The Scandal of Prediction
In the later sections of the book (pp. 136-211), Taleb makes an intuitive case for why our predictive models fail. One critical aspect of a system being modeled is its scalability. In the behavioral sciences most ranges are assumed to be non-scalable: as you move away from the mean, not only does the count fall, it falls at an accelerating rate. This is a convenient assumption because it permits the use of statistical mathematics based on the Bell curve (a.k.a. the Gaussian distribution) or a variant of it.

This assumption of a Bell curve for our populations is what Taleb (2007, pp. 229-247) calls “that great intellectual fraud.” It is not a true attribute of all the populations to which behavioral scientists apply statistical surveys in a wooden and perfunctory manner. He notes that while it holds for physical characteristics such as height and weight, it usually does not hold for social measures. Their ranges are scalable, so the non-scalable assumption behind the Bell Curve is invalid. Scalable system behavior leads to non-uniform concentrations rather than smooth distributions; such systems are thus Fractal. The rich get richer.


Fractal worlds follow a scalable power rule. As a simple illustration, Pareto found that 20% of the Italian population owned 80% of the land (p. 235), and the top 20% of that 20% owned 80% of that 80%. In such a world concentration compounds: the top 4% (20% of 20%) ends up owning 64% of the land in Pareto’s case. Taleb also uses book sales as an example (p. 264). Sales do not follow a Bell Curve; they are fractal and follow a power rule. Because it is a fractal world, it produces winner-take-all, lopsided distributions.
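To make the arithmetic concrete, here is a small sketch of my own (not from the book): if the top fraction p of the population owns a share of roughly p^k, then “20% owns 80%” fixes the exponent k, and iterating the rule reproduces the concentrations above.

```python
import math

# Illustrative sketch (my own, not Taleb's): the 80/20 rule as a power rule.
# If the top fraction p of the population owns a share p**k of the wealth,
# then "20% owns 80%" pins down the exponent k.
k = math.log(0.8) / math.log(0.2)   # ~0.139

for p in (0.20, 0.04, 0.01):        # top 20%, top 4% (20% of 20%), top 1%
    print(f"top {p:>5.0%} owns about {p ** k:.0%}")

# top   20% owns about 80%
# top    4% owns about 64%
# top    1% owns about 53%
```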


He uses height and wealth as examples of applying Bell Curve models in each world (non-scalable and scalable). For height, if you pick 100 people randomly, you will derive a meaningful understanding of the population’s height. Adding another person, the 101st, will not measurably change the average or the deviation, even if it is a tall person, say seven feet. This is the standard Gaussian system.

The social measure of wealth is different. If the 101st person you add is Bill Gates, the average and the deviation change appreciably. This is an extreme example to make a point, but Taleb also discusses his days on Wall Street, where non-scalable assumptions were made for scalable systems, leading to misunderstandings of risk and to incorrect investment strategies. He shows how scalable systems are described by fractal mathematics.
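A quick simulation of my own (the population figures are invented for illustration) makes the contrast visible: one extra seven-footer barely nudges the average height, while one Gates-scale fortune overwhelms the average wealth.

```python
import random
random.seed(1)

# Sketch with invented figures: how one extra observation moves the mean
# in a non-scalable domain (height) versus a scalable one (wealth).
heights = [random.gauss(69, 3) for _ in range(100)]           # inches, Gaussian
wealth = [random.lognormvariate(11, 1) for _ in range(100)]   # dollars, skewed

def mean(xs):
    return sum(xs) / len(xs)

print(f"height mean before: {mean(heights):10.1f} inches")
print(f"height mean after : {mean(heights + [84]):10.1f} inches")      # add a 7-footer
print(f"wealth mean before: {mean(wealth):15,.0f} dollars")
print(f"wealth mean after : {mean(wealth + [50e9]):15,.0f} dollars")   # add a Gates-scale fortune

# The 101st height shifts the average by a fraction of an inch;
# the 101st fortune multiplies the average wealth thousands of times over.
```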

Traditional mathematical modeling in the social sciences, including the behavioral sciences, is flawed. The process of employing mathematics starts with the Circularity of Statistics flaw (p. 269): we need data to know whether the population is Gaussian or Fractal, but we need to know whether the population is Gaussian or Fractal to know how much data to collect to decide.

Let’s say we get past this problem. Then we encounter another problem for Gaussian distributions (p. 251). Gaussian models in pure mathematics rest on the assumption that events are independent of one another. This is true of flipping a coin but not of most social actions, where there is usually some cumulative advantage effect such as learning. In other words, the probability of a certain outcome should improve over time because of the cumulative advantage of learning.
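Here is a sketch of my own of that difference (a toy Pólya-urn style model, not anything Taleb specifies): independent coin flips keep the success rate near one half, while a cumulative-advantage process lets early successes raise the odds of later ones, so its outcome swings wildly from run to run.

```python
import random

# Sketch (my own illustration): independent trials vs. cumulative advantage.

def independent_trials(n=10_000):
    """Fair coin flips: each trial ignores every previous outcome."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

def cumulative_advantage(n=10_000):
    """Polya-urn style: every success raises the chance of the next one."""
    successes, failures = 1, 1              # start the urn with one of each
    for _ in range(n):
        p = successes / (successes + failures)
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return successes / (successes + failures)

print("independent success rate :", round(independent_trials(), 3))
print("cumulative-advantage rate:", round(cumulative_advantage(), 3))
# The independent rate hovers near 0.5 on every run; the urn's final share
# varies wildly between runs, because early luck compounds.
```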

In the other case, if the system turns out to be Fractal rather than Gaussian, we still have problems (p. 272). Fractal mathematics for randomness does not yield precise answers. The Gaussian does, and that is why scientists like to make Gaussian assumptions.

The next post will apply Taleb to consumer research.

References
Cacioppo, John and Richard Petty (1986). The Elaboration Likelihood Model of Persuasion. Retrieved on April 13, 2009 from the EBSCOHost database.

EO (April 25, 2007). Many important ideas, many flaws that detract from the message. Retrieved on April 14, 2009 from http://www.amazon.com/Black-Swan-Impact-Highly-Improbable/dp/1400063515/ref=sr_1_1?ie=UTF8&s=books&qid=1239724317&sr=8-1

Hawkins, Del, David Mothersbaugh and Roger Best (2007). Consumer Behavior. McGraw-Hill/Irwin.

Johnson, Celia (March 6, 2009). 10 of the Best. BANDT-COM.AU. Retrieved on April 18, 2009 from EBSCOHost.

Ortega y Gasset, Jose (1994). The Revolt of the Masses. W. W. Norton & Company.

Simon, H.A. (1960). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization. Macmillan.

Taleb, Nassim Nicholas (2007). The Black Swan. Random House.

Wallace, A. F. C. (1963). Culture and Personality. Random House.

Wilson, L. and Ogden, J. (2004). Strategic Communications Planning For Effective Public Relations and Marketing, 4th Ed. Kendall/Hunt Publishing.
