I read Nassim Taleb’s The Black Swan this week and wow. It was quite a ride. I enjoyed it in large part because of the ideas, but also because Nassim makes fun of a lot of groups which I am not particularly friendly towards—CEOs, bankers, quants, and philosophers of the annoying variety (especially Hegel[1]). Combine that with a story with its heroes[2] and its villains[3] in an epic struggle over human thought, and it’s hard not to find yourself on the edge of your seat. I like to see it like a fantasy[4] epic, in which the side of evil has been encroaching on all of human life, spreading the use of the great intellectual fraud (GIF[5]) anywhere and everywhere its tentacles can slither into. With the vanguard of soulless economists and probability/statistics “scholars” sucking the intellectual vivacity out of entire generations of students, we are left to wait for the promised heroes—Popper, Poincaré, Mandelbrot, Taleb, and (even) fate herself to bring an end to the menace of platonicity and rescue humanity’s future from the clutches of unknowledge. Just like Drogo in The Tartar Steppe we await this (ironically) black swan type event to bring meaning and glory to existence, though unlike him, the book delivers on its promises[6].
The Black Swan covers various ideas with a couple of main threads—the limits of knowledge, uncertainty (and how it’s not studied by modern “probability” theory), and the curse of platonicity. Individually, most of his ideas are not particularly new or exciting, though we tend to forget them (the obvious can be the most potent, especially when it’s too obvious to actually implement). Combined, they put into words what is easy to feel (but hard to explain) when observing much of the social sciences and their use of “statistics” and “scientific” or “mathematical” methods. The doubt that one can easily feel regarding the “rigorous” economics and finance of today[7] is well characterized by its platonicity and disregard for the true nature of uncertainty in claiming to describe “the real world” (in what Taleb refers to as a form of “tunneling”). Just as know-how is not well described by complicated formal systems, know-what regarding the real world also cannot always be well described by formal systems. What works in physics usually does not work in the social sciences. I do not want to reduce Taleb’s work to a few sentences, since it tackles many little issues that together (broadly speaking) make up these three pillars in a way which is best left fundamentally abstract and/or non-conceptual, so I will discuss some examples which I found compelling and not dwell too much on what they mean. What they mean is best understood by reading his work, or observing them in real life.
The Black Swan is broken up into three sections, the first of which Nassim opens by talking about various problems with our ability to handle uncertainty. Many of the problems tend to be character-based, but common if not ubiquitous. Our drive for platonicity and reductionism combined with our desire to know why (to have a narrative) and various “irrational” ways of thinking (different flavors of confirmation bias, confidence in ideas which we really shouldn’t be so certain of, tunnel vision on the aspects of situations that conveniently ignore the unknown unknowns, an inability to consider silent evidence, and the ludic fallacy) make it hard to approach the uncertainty which we encounter in the world reasonably. I personally remember the confirmation bias chapter[8] as being very interesting because he gives examples that highlight the depth of our delusion. In one he presents us with a series[9]: 2, 4, 6, and gives us a couple chances to guess the next number before providing a definitive prediction of the nature of the series. Most of us would probably guess something like 8 or 10 (I know I did, which is why I was shooketh), but the better approach would be to guess something like 5 and later 7. The rule is simply that the numbers are in increasing order, and it makes more sense to not immediately try to confirm the pattern we are seeing and instead narrow down our search from broad (any number) to narrower (increasing) to narrowest (perhaps only even numbers, though it turned out not to be the case).
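That narrowing logic can be played with directly. Here is a tiny sketch (my own, not from the book or the original experiment) where the experimenter’s hidden rule is simply “strictly increasing,” and a disconfirming probe tells us more than a confirming one:

```python
# A toy version of the 2, 4, 6 task. The hidden rule and the hypothesis
# below are my own illustration of the point, not taken from the book.

def hidden_rule(seq):
    """The experimenter's secret rule: the numbers strictly increase."""
    return all(a < b for a, b in zip(seq, seq[1:]))

def my_hypothesis(seq):
    """The pattern most of us jump to: even numbers increasing by 2."""
    return all(a % 2 == 0 and b - a == 2 for a, b in zip(seq, seq[1:]))

confirming_probe = [2, 4, 6, 8]   # fits both rules, so it teaches us nothing
disconfirming_probe = [2, 4, 5]   # fits only the real rule: informative!

print(hidden_rule(confirming_probe), my_hypothesis(confirming_probe))        # True True
print(hidden_rule(disconfirming_probe), my_hypothesis(disconfirming_probe))  # True False
```

The confirming probe is accepted by both rules, so it cannot separate them; only the probe that would *break* my hypothesis reveals which rule is actually in play.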
I also found his discussion on our seeming need for platonicity and narratives to be interesting. It reminded me of the Buddhist view of how we are deluded (since we often confuse the platonistic model for the real thing), and seemed to be ingrained in our way of thinking. It’s hard not to ascribe reasons to things immediately (reasons which we somehow believe are not off the mark when they often are). He suggests that narratives are a method we use for remembering things (if you are a database guy, think of it like a method of indexing), which though obvious has deeper implications. From the point of view of an intelligent being, is it necessarily true that some form of “delusion” in the form of indexing will exist regarding the world? Even computers need to remember and/or process some data to get to something they are trying to “remember.” Consider a binary search tree: it has to compare various values with the key it’s checking to find the value of that given key. Would a computer see this as a “narrative” of sorts? It seems baked into the very essence of thought. I haven’t had time to ponder this deeply, but it seems like it could be fruitful since almost all our attempts at understanding intelligence are third person. As for platonicity, there are plenty of good examples, but rigorous economics is an especially good one. Consider the whole “assume actors are rational” semi-nonsense (yeah, the economists know it’s not true, but the theoreticians don’t know in their guts how false it is). Platonicity seems to be somewhat necessary for us to understand the world, since otherwise there is just too much information, but it is crucial that we avoid reifying it: the mistake those who fall into the ludic fallacy make. The ludic fallacy is, roughly speaking, about thinking excessively in the box and believing in idealisms that don’t describe the world.
It ties in with various modern expert problems (which are also influenced by wild, as opposed to mild, randomness and other factors).
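To make the binary search tree point concrete, here is a toy sketch (mine, not Taleb’s) where a lookup records the chain of comparisons it passes through: a little “narrative” the machine needs in order to retrieve the fact.

```python
# A lookup in a binary search tree as a "story" of comparisons.
# Purely illustrative; the tree and keys are arbitrary.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def search(node, key, path=None):
    """Return the comparison 'narrative' taken while looking up `key`."""
    path = [] if path is None else path
    if node is None:
        return path + ["not found"]
    path = path + [f"compare with {node.key}"]
    if key == node.key:
        return path + ["found"]
    child = node.left if key < node.key else node.right
    return search(child, key, path)

tree = Node(8, Node(3, Node(1), Node(6)), Node(10))
print(search(tree, 6))
# ['compare with 8', 'compare with 3', 'compare with 6', 'found']
```

The machine never retrieves the key “directly”; it only ever reaches it through this ordered chain of intermediate comparisons, which is what makes the indexing-as-narrative analogy tempting.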
In the second section of the book Nassim explores our failures in prediction and how we seem to keep thinking that we can predict. This ties into our urge to narrate what’s going on. We tell stories and think that they are us “understanding” the truth, when in reality we are often making up interpretations. In this light, Nassim strongly dislikes “thinkers” such as Marx, Hegel, and Fukuyama. It’s actually funny when you look back at what these people thought. In the same vein, we don’t realize on a gut level that people in the future will be laughing at how we think. There seems to be a fundamental asymmetry between how we think about the future and how we think about the past. We don’t think of yesterday in the way we thought about yesterday two days ago. Thus when asked to predict tomorrow, we treat it just like a narrated version of yesterday, in which we have the answers (and their logic) and can predict well. In reality, you can always find some axioms that can be used to derive what we see, so we are acting more like madmen than reasonable people in inventing the axioms that explain reality and believing that that is “why” things are.
Add to this constant narration the fact that we are so confident in our tales and find it hard to simply avoid predicting/narrating, and we have a bad prediction delusion. We are not getting better at predicting. Our reality is getting more and more complicated, and layering on complex prediction systems that assume the previous, simpler configuration does not make us better at prediction; it can even make us worse. As a counterpoint, Nassim highlights his dream nation, an epistemocracy: a state led by leaders who know that they don’t know, and know that they don’t know what they don’t know (and that they don’t know that either), who avoid this charlatan-like prediction bullshit and try to be more honest about what is really known[10]. He is not optimistic about the prospects of such leaders actually getting into leadership positions, however, since everyone wants someone who can give them a story (“educated” people, or “intellectuals,” tend to just have a different kind of story they like to hear). The fact that we cannot predict, however, can be good for our planning and/or approach to the world. I go into this in a tiny bit more detail later, but the main idea is that most great inventions and ideas came by accident. Thus, it makes sense that we should adopt strategies that can take advantage of opportunities that pop up. We should be prepared to jump on what we can and increase our surface area to beneficial black swans[11]. Some good examples of this he highlights include scientific discoveries and various forms of “media” (in the broadest sense).
In the last section of the book Nassim explores some “grey” swans of extremistan. Namely, he talks about some more “technical” aspects of the bell curve and probability distributions more natural to the real world (such as the Pareto distribution). If you haven’t read the book, the main idea here is that there are two main forms of randomness: those where the whole is not influenced by single events and those where it is. They are known, respectively, as mediocristan and extremistan. In mediocristan the bell curve works: it includes fields such as (most of) physics, (usually) sports, engineering, and human physical attributes. You can think of it as a world in which winners usually win by a slight edge, by consistency, and by many small events. In physics your cup doesn’t randomly levitate because despite quantum mechanical fluctuations, no one fluctuation can change the state of the cup in a large way: you’d need such an unlikely coincidence for all the fluctuations to not cancel each other out that it will probably never happen. In extremistan one event can change everything: consider modern warfare (nukes), social media virality, financial speculation, the stock market, wealth distributions, and book sales. A single outlier can change the whole. The Pareto distribution lives in extremistan (with a heavy enough tail its variance is infinite, and the central limit theorem no longer applies).
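A quick simulation makes the split concrete (my own illustration, with an arbitrarily chosen tail exponent): in a Gaussian sample no single draw matters to the total, while in a heavy-tailed Pareto sample one draw can account for a visible chunk of everything.

```python
import random

random.seed(0)
N = 100_000

# Mediocristan: Gaussian draws. No single observation dominates the sum.
gauss = [abs(random.gauss(0, 1)) for _ in range(N)]

# Extremistan: Pareto draws with tail exponent alpha = 1.1 (chosen for
# illustration; with alpha <= 2 the variance is infinite).
pareto = [random.paretovariate(1.1) for _ in range(N)]

def max_share(xs):
    """Share of the total accounted for by the single largest value."""
    return max(xs) / sum(xs)

print(f"Gaussian: largest draw is {max_share(gauss):.4%} of the total")
print(f"Pareto:   largest draw is {max_share(pareto):.4%} of the total")
```

The Gaussian maximum is a vanishing fraction of the sum of 100,000 draws, while the Pareto maximum remains a material fraction; that is “a single outlier can change the whole” in one number.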
I’m not too interested in talking about the specific examples he goes into, but he does give some good advice regarding living in extremistan. For example, when investing in an extremistan-style environment (as most of us are) it makes sense to have most of our portfolio be extremely conservative, almost too conservative. Then we take a small fraction and put it into the most absurdly aggressive options possible (with the most leverage possible). This works because you will be earning most of your winnings from the unexpected single winners (think about venture capital firms, for example). You basically limit the amount you can bleed by keeping most of your money somewhere super safe, and then let randomness do her work on the rest of your portfolio. In extremistan it does not make sense to predict, since it’s hard to estimate the source of randomness. Instead it’s better to set yourself up to benefit from randomness by being prepared and engaging in extremistan situations where the black swan has a big upside, not a big downside (if you can). Nassim encourages us to maximize the amount of serendipity in our lives, since our society is becoming far more extremistan-like than our ancient past. Going to parties is a good play in this regard.
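Here is a rough sketch of that barbell idea; all the numbers (the 90/10 split, the 2% safe rate, the 5% chance of a 30x payoff) are invented for illustration and are not from the book:

```python
import random

random.seed(42)

def barbell_year(wealth, safe_frac=0.9, safe_rate=0.02):
    """One year of a barbell portfolio (illustrative parameters only):
    ~90% sits somewhere very safe, ~10% goes into wild bets that usually
    go to zero but occasionally pay off enormously."""
    safe = wealth * safe_frac * (1 + safe_rate)
    aggressive = wealth * (1 - safe_frac)
    # Hypothetical wild bet: 95% chance of total loss, 5% chance of 30x.
    payoff = aggressive * (30 if random.random() < 0.05 else 0)
    return safe + payoff

wealth = 100.0
for year in range(30):
    wealth = barbell_year(wealth)
print(f"Wealth after 30 years: {wealth:.1f}")
```

The key property is the bounded downside: even if every wild bet fails, each year you keep at least 90% of your wealth (plus the safe return), while a single lucky year roughly quadruples it. You cap the bleed and leave the upside open.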
Extremistan often has snowball effects (that’s why one event can change everything): the rich get richer and the poor get poorer. Losers tend to stay losers, and winners tend to stay winners. By opening various doors we improve our chances at that starting lucky break. That said, because things are so uncertain, you can spend a lot of time waiting for a black swan and just get unlucky. This can be very damaging for mental health, which is why if you are going to embark on a search for a black swan, it’s important to go as part of a group. In the same vein, since pain and joy are usually experienced at capped levels[12], it’s good to try and spread out joy and isolate pain (and if you don’t have a plan for this while waiting for the black swan, since you are consistently getting nothing for a long time, it’s likely you’ll be spreading out pain, all else equal). It’s important to have some happiness daily, since daily sourness does not make you tougher: it makes you atrophy (another case of the snowball effect: fate is a cruel mistress, I guess).
Overall, this is a good book with some solid lessons. It’s worth looking over a second time. It’s straightforward and simple, but it’s these simple things which we tend to forget. I recommend reading this book as well because it’s fun. Check out footnote 13 for something cool[13].
1. He often insults people as well, which I personally found distasteful, though it does not detract in large part from the book. (return)
2. Such as Poincaré and Benoît Mandelbrot, oh Benoît. Mandelbrot cannot possibly go wrong in Taleb’s eyes. Reading The Black Swan you’d think he was, intellectually, the second coming of Christ, and Sextus Empiricus, the first. (return)
3. Plato, Robert Merton, and… Hegel… always, Hegel. (return)
4. Except it’s economists, statisticians, traders and intellectuals—not elves, dwarves and Sauron’s minions waging war. (return)
5. The Gaussian distribution, which does a bad job of characterizing most of real life, including much of where it’s actually applied. (return)
6. With help from history a couple of years later. The Black Swan came out in 2007, a year before the worst financial crisis since the Great Depression in the 1930s (though it is not unlikely that the current COVID crisis will rank higher in our historical narratives once it is over). (return)
7. ... though possibly a tiny bit more yesterday—remember, this book is almost one and a half decades old! (return)
8. And to all of you who’ve read about confirmation bias and are now thinking, yeah, I know all about this: you are an idiot. You are probably confirmation biasing right now, for Christ’s sake. (return)
9. This is actually from a behavioral experiment. (return)
10. He talks about a figure called Montaigne who supposedly is a good model of how such a person would think. (return)
11. While decreasing our surface area to the negative black swans, of course. (return)
12. If there is good news you are happy; it does not matter quite so much how good the news is. The same applies for bad news. (return)
13. Bonus footnote regarding narratives and remembering: sometimes when you add a narrative it’s easier to remember things. For example, if I say “she killed him”, it might be harder to remember than “he cheated and then she killed him out of jealousy”, even though naively the second one has more information. Now, we would usually say this makes sense: we need more information to index well and be able to find that fact which we are remembering. However, what if we actually thought of it differently: what if we saw the second sentence as having less information and the first as having more. In this latter view, language is tricking us; the way our brains work, information is best measured differently. This sort of relativistic aspect of the “quantity” of information based on the context is not something I’ve thought deeply about, but it seems fascinating. (return)