Title: The Black Swan
Author: Nassim Nicholas Taleb
Published: 2007, revised in 2010
There’s not a lot to say about this book that hasn’t already been said. First published in 2007, its influence has continued to be felt, and its advice ignored, as people continue to be surprised by events.
I read the 2010 edition, which adds 70 or so pages of essays, ‘On Robustness and Fragility‘, responding to some of the feedback the first edition received and further developing a few of the ideas, some of which eventually became the subject of the 2012 follow-up Antifragile, which I had already read before this book.
I found Antifragile to be profound and even life-changing, and will review it here sometime soon, but in the meantime it inspired me to read Fooled by Randomness and The Black Swan. While the books can be read in any order, each develops the ideas of the previous one, so reading out of sequence meant some of the more detailed ideas in Antifragile appeared here in simpler form. That is not a shortcoming of the work, obviously, just a consequence of my own reading order.
Taleb’s sometimes conversational, sometimes pugilistic and often meandering tone has been commented on, criticised and praised by many readers. While I find his style can at times present a small barrier to understanding the ideas, overall it gives the reader a genuine and honest feel for the personality of the author, and the power of the ideas generally shines through.
Scrolling through the reviews on Goodreads, it’s certainly a polarising work; however, most of the negative reviews I’ve come across fail to understand the central points of the book rather than refute or disagree with them. The book seems to particularly offend those with a vested interest in statistics and models, and the author certainly employs an acerbic tone towards the bankers and academics who rely on them.
I feel that the thesis of the work is not always communicated simply enough, so I will attempt to do so here.
- There are plenty of situations where a Gaussian bell curve is an applicable model (what the author calls ‘Mediocristan’), for example analysing the average weight or height of a population.
- But there are plenty of situations where it does not apply (what the author calls ‘Extremistan’), where a single measurement can throw all models out; individual wealth and book sales are the examples most commonly given (see the short simulation after this list).
- Applying Mediocristan models to Extremistan is naive at best and dangerous at worst, and very often leads to one being surprised by an unexpected event, a ‘Black Swan’, that defies the models. Using a model that is wrong or not applicable is worse and more dangerous than having no model at all.
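To make the distinction concrete, here is a minimal simulation sketch of my own, not from the book: the choice of a normal distribution for height and a Pareto distribution for wealth, and all the parameters, are assumptions for illustration only.

```python
# A toy illustration, not Taleb's: in 'Mediocristan' no single observation
# moves the total much; in 'Extremistan' one observation can dominate.
# The distributions and parameters below are my own assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Mediocristan: adult heights in cm, roughly normal.
heights = rng.normal(loc=170, scale=10, size=n)

# Extremistan: wealth, modelled here with a heavy-tailed Pareto distribution.
wealth = rng.pareto(a=1.1, size=n) + 1

for name, sample in [("heights", heights), ("wealth", wealth)]:
    share_of_max = sample.max() / sample.sum()
    print(f"{name}: largest single observation accounts for "
          f"{share_of_max:.2%} of the total")
```

Run it a few times and the wealth figure swings wildly from run to run while the height figure barely moves; that instability of sample-based estimates is exactly the trap of treating Extremistan as if it were Mediocristan.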
There may be other interpretations, and of course there’s a wealth of detail not covered in the above summary, but a great many criticisms I have read do not seem fully aware of these points, and instead attempt a partial rebuttal based on, for example, finding cases where a bell curve does apply.
The author shows a righteous indignation at those who have wrongly relied on such models, caused enormous economic harm, and often escaped the consequences of their actions, such as former US Treasury Secretary and Citigroup Chairman Bob Rubin. He reserves a particular degree of scorn for Myron Scholes and his colleagues at Long-Term Capital Management (LTCM), a group of Nobel prize winners and other highly decorated theorists who developed a supposedly guaranteed model for beating the market, which duly did so for a few years before losing $4.6 billion in the wake of the Asian and Russian financial crises of 1997–98, nearly taking the whole banking system with it.
Scholes seems particularly unrepentant, as this exchange from a 2009 interview in the New York Times shows:
Interviewer: Some economists believe that mathematical models like yours lulled banks into a false sense of security, and I am wondering if you have revised your ideas as a consequence.
Scholes: I haven’t changed my ideas. A bank needs models to measure risk. The problem, however, is that any one bank can measure its risk, but it also has to know what the risk taken by other banks in the system happens to be at any particular moment.
Interviewer: What good is a theory of risk management if it applies to one tree instead of the forest?
Scholes: Most of the time, your risk management works. With a systemic event such as the recent shocks following the collapse of Lehman Brothers, obviously the risk-management system of any one bank appears, after the fact, to be incomplete. We ended up where banks couldn’t liquidate their risk, and the system tended to freeze up.
In light of the points raised by the book, this seems shockingly naive for someone of such reputed intelligence. We need models, even if they don’t work? And risk models work right up until something unexpected happens, at which point they stop working, but that’s ok? The Black Swan makes a pretty clear case for why this is not ok.
Taleb develops a couple of other related points very well. The first is what he calls the ‘ludic fallacy’: essentially, applying the rules of games or gambling to the calculation of probability in the real world. Academics rely on artificial models that assume certain parameters and rules, and that work very well for a period of time within those parameters, until an event not conceived of by those original parameters comes along and blows the whole thing out of the water.
He is similarly critical of Platonicism, which he defines as the creation of, or reliance on, abstract models in which all the messiness of the real world is stripped away and perfection is assumed. It creates the illusion of certainty, yet despite such beliefs being periodically interrupted and overthrown by reality, that does not seem to have stopped a great many people from merely tweaking their models and coasting blindly into the next disaster. The idea of the Platonic versus the practical is illustrated through the contrast between characters like Fat Tony, who prefers to be broadly right, and Dr John, who ends up being precisely wrong.
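The book illustrates this contrast with a coin that has come up heads many times in a row: Dr John, reasoning inside the game’s stated rules, insists the next flip is still 50/50, while Fat Tony concludes the coin must be loaded. The sketch below is my own hedged rendering of that idea; the prior on a loaded coin and its bias are arbitrary assumptions, not anything specified by Taleb.

```python
# A hedged sketch of the ludic fallacy, prompted by the book's coin question.
# Dr John reasons inside the stated rules ("a fair coin is always 50/50");
# Fat Tony asks whether the game is what it claims to be.
# The prior on a loaded coin and its bias are my assumptions.

def prob_next_head(heads_in_a_row: int,
                   prior_loaded: float = 0.01,
                   loaded_bias: float = 0.99) -> float:
    """Probability the next flip is heads, allowing that the coin may be loaded."""
    # Likelihood of the observed streak under each hypothesis.
    fair_likelihood = 0.5 ** heads_in_a_row
    loaded_likelihood = loaded_bias ** heads_in_a_row

    # Posterior probability that the coin is loaded (Bayes' rule).
    posterior_loaded = (prior_loaded * loaded_likelihood) / (
        prior_loaded * loaded_likelihood
        + (1 - prior_loaded) * fair_likelihood
    )

    # Predictive probability of another head.
    return posterior_loaded * loaded_bias + (1 - posterior_loaded) * 0.5

for streak in (5, 20, 99):
    print(f"after {streak} heads in a row: Dr John still says 0.50, "
          f"this model says {prob_next_head(streak):.2f}")
```

The point is not the Bayesian machinery itself but the willingness to question the game’s stated rules once the evidence stops fitting them.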
It is now 13 years since the book was first published, and I know many institutions have ostensibly made changes; some banks even have desks in their risk departments responsible for preparing for Black Swans, but at the same time nothing really changes. These new departments and new models can no more predict the future than the previous ones could, and may even create an increased feeling of over-confidence. The point is that in life and in finance the truly unexpected can occur, i.e. something you could not have predicted, and in the fragile world of banking the consequences are enormous.
The 2010 edition closes with a short essay quoting the Stoic phrase ‘Amor Fati‘, i.e. love (and accept the consequences of) fate. Should we sit back then and simply not prepare for these risks? Of course not, and of course Taleb expands on this enormously in his subsequent work Antifragile. But what is required, as always, is a bit more awareness of the limits of our knowledge, a bit less certainty and a bit less over-confidence.