To make more of the books I've read and remember them better, I've started to keep notes while reading. I mostly follow the procedure outlined on Farnam Street, and have realized that publicly posting my notes forces me to put a bit more thought into them. So here we go!
The main point of the book is that we humans are incredibly bad at dealing with probability; our only hope is to acknowledge our weakness and work around it. According to Taleb, the core generator of these ideas is:
We favor the visible, the embedded, the personal, the narrated, and the tangible; we scorn the abstract.
The book is mostly a collection of thoughts that all relate to randomness; in my summary I mostly follow the original ordering.
Taleb starts off with (fictional?) accounts to show that repeated luck can lead to a (temporary) appearance of success. It is therefore better to judge by the average across all possible outcomes, not just the one that was actually realized. (Playing Russian roulette to win 10 million seems really good in 5 worlds and extremely bad in 1; this has ties to the many-worlds interpretation of quantum physics.)
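The Russian roulette point can be made concrete with a tiny simulation (the 1-in-6 odds and the 10 million payoff follow the thought experiment; everything else here is my own illustrative sketch). The realized path of any one survivor looks like pure success; the ensemble of worlds shows the true quality of the bet.

```python
import random

random.seed(1)

def play_once():
    """One pull of the trigger: returns the payoff, or None for the fatal chamber."""
    return 10_000_000 if random.randrange(6) != 0 else None

# Simulate many parallel "worlds" instead of looking at one lucky history.
worlds = [play_once() for _ in range(6_000)]
survivors = [w for w in worlds if w is not None]
print(f"survival rate: {len(survivors) / len(worlds):.2%}")  # close to 5/6
```

Every survivor banked 10 million, but judging the gamble by survivors alone ignores the worlds in which it ended fatally.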
Next he proposes using Monte Carlo simulations to evaluate many possible histories and to build an intuition of how randomness works. He points out that we are bad at learning from history (hindsight bias) and that frequent news is mostly noise: the older a piece of information is, the likelier it is to be useful (because noise will have been forgotten). In the short term you mostly observe the variation; only in the long term do the returns become visible. (This is a problem if it's infeasible to wait long enough.)
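The short-term-noise point lends itself to a quick Monte Carlo sketch (the 15% return and 10% volatility are illustrative parameters of my choosing, not figures from the book): the more often you check on a good asset, the more often you see a loss.

```python
import math
import random

random.seed(42)
mu, sigma = 0.15, 0.10  # assumed annual return and volatility
paths = 10_000

def prob_positive(dt):
    """Fraction of simulated periods of length dt (in years) showing a gain,
    modeling the period return as Gaussian with scaled mean and volatility."""
    wins = sum(random.gauss(mu * dt, sigma * math.sqrt(dt)) > 0
               for _ in range(paths))
    return wins / paths

for label, dt in [("one day", 1 / 252), ("one month", 1 / 12), ("one year", 1.0)]:
    print(f"P(gain over {label}): {prob_positive(dt):.2f}")
```

Checked daily, the asset is up barely more often than a coin flip; checked yearly, the positive return dominates the noise.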
He continues by arguing that if something could have been generated randomly, it's likely just noise (think of the randomly generated papers accepted in certain fields). An aside into information theory and the definition of entropy would be in order here; I recommend the excellent Information Theory, Inference, and Learning Algorithms.
Curiously, the successful people at any moment are likely to be the ones best adapted to the current situation, not to the next one. Thus they are very vulnerable to unlikely events (the famous black swans). This is similar to overfitting in machine learning. As a corollary: We consider people good because they had success, but the success might have been due to luck, not skill.
Not just the frequency (probability) of events is important, but also their payoff: the most probable event might not be the most important. If a stock has a 70% chance of going up by 1% and a 30% chance of going down by 10%, you should bet on it going down.
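The arithmetic behind that example is worth spelling out: the expected return weighs each outcome by its payoff, not just its probability.

```python
# The stock example: frequency favors "up", payoff favors "down".
p_up, gain = 0.70, 0.01     # 70% chance of a 1% rise
p_down, loss = 0.30, -0.10  # 30% chance of a 10% drop

expected_return = p_up * gain + p_down * loss
print(f"expected return: {expected_return:+.1%}")  # -2.3%: bet on the drop
```

The stock goes up most of the time, yet holding it loses 2.3% in expectation.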
He ties back to his point about history from above: don't learn only from shallow history ("this has never happened before") but from history in general (it has happened in a similar field). Still, statistical knowledge garnered from the past is of dubious use if the underlying situation (the probability distribution) keeps changing.
This first part concludes with a treatment of the problem of induction: data can only be used to disprove something, never to prove it. In general, there are two kinds of theories:
- known to be wrong, as rejected by tests
- not yet known to be wrong, but exposed to being proven wrong
Anything outside these (anything that can't be disproved) is not a theory. Taleb references Karl Popper quite often. I haven't read him yet, so I can't comment on that, but I will now definitely look at some of his writing.
While discussing Popper, he raises the question of whether some types of knowledge do not increase with information, while we can't know which types (e.g. a crash in a domain where it has never happened before). I'm curious how this squares with Bayesian statistics, especially updating. Is it just about the special case where the time scale is too short to have seen the unlikely bad event yet? Are there always such bad events, just increasingly unlikely ones? I think this requires further thought.
He concludes by saying that he uses statistics to make aggressive bets, but not for managing risks or exposure.
Part 2 - Human Biases
Whether past performance predicts future performance depends on the total number of performers. It's important to count the number of investors in order to calculate the conditional probability of successful runs (= what's the chance of someone having invested successfully for 15 years in a row?).
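A small calculation makes the population effect vivid (the 50% chance of a good year is my illustrative assumption of pure luck, not a figure from the book): with enough coin-flipping managers, a 15-year winning streak becomes almost certain to appear somewhere.

```python
# Probability that at least one of n purely lucky managers has a good year
# 15 times in a row, assuming independent 50/50 years.
p_streak = 0.5 ** 15  # one manager: about 1 in 32,768

for n in (1, 1_000, 10_000, 100_000):
    p_any = 1 - (1 - p_streak) ** n
    print(f"{n:>7} managers: P(at least one 15-year streak) = {p_any:.3f}")
```

A single manager with such a run would be remarkable; in a population of 100,000, one is expected by luck alone, which is why the streak tells us little about skill.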
When backtesting (running a trading strategy against historical data) with many strategies, one will turn out to be good just by chance: always take the population size into account!
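The same survivorship effect shows up in a simulated backtest (all numbers here are illustrative assumptions of mine): test enough strategies that are pure coin flips and the best one still looks skilled.

```python
import random

random.seed(7)
n_strategies, n_days = 1_000, 252

def random_strategy_return():
    """Cumulative return of a 'strategy' that gains or loses 1% at random each day."""
    wealth = 1.0
    for _ in range(n_days):
        wealth *= 1.01 if random.random() < 0.5 else 0.99
    return wealth - 1.0

# The best backtest among many zero-skill strategies looks like genuine alpha.
best = max(random_strategy_return() for _ in range(n_strategies))
print(f"best of {n_strategies} coin-flip strategies: {best:+.1%}")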
The book continues with a small detour into non-linearity: he explains how small advantages can accumulate and snowball into big differences (network effects, etc.).
Part 3 - How to deal with Randomness
Humans naturally adopt irrational behavior in response to randomness - Taleb mentions Skinner's experiment of randomly dispensing food to animals: they start to adopt strange rituals. We have to use tricks to prevent ourselves from doing the same - restrict access to information until it crosses predetermined thresholds (when trading), don't keep chocolate under the desk, etc.
It's also important to not get attached to your opinions. Don't keep a stance just because you've previously done so: If you were choosing for the first time now, what would you do?
Ultimately, randomness will always have the last word - don't judge solely based on results! Take actions and choices into account. He recommends Stoicism to deal with randomness, and I can only agree. Seneca's Letters from a Stoic are a great introduction.
Taleb mentions in asides that his trading strategy consists of betting on the occurrence of rare events (he calls them black swans): steadily losing a little money and every now and then gaining a lot. He does a good job of explaining how bad we are at dealing with randomness and how most traders are just plain lucky, but he does not actually give much advice on how to trade. Still, it's an entertaining yet educational book and I recommend it.