Taleb is known for his book The Black Swan, which I own and have read but apparently never wrote a review for. While his writing style is a little annoying, the central concepts of nonlinearity and power-law distributions are worth remembering (described in a Long Now talk here if you want the short version).
His latest book, Antifragile, is about how to live in a world with black swans and nonlinear outcomes. His writing style was even more annoying in this one: he insults everybody who dares to disagree with him, and he wanders off on irrelevant tangents, having apparently fired his editor. So I’m writing this post to summarize the few interesting ideas I got out of the book so you don’t have to actually read it.
The core concept is that in a world of volatility, where black swans can explode out of nowhere, there are three ways to deal with change – Taleb uses examples from financial investing to illustrate each:
- Fragile – unstable equilibria where volatility is bad because it is likely to tip you into a worse state – in other words, you have limited upside and unlimited downside. A fragile investment strategy would be one where you borrow money to invest in something with a slightly higher return – if you are overleveraged and something goes wrong (and something always goes wrong), you will be bankrupt many times over, and even if you are successful, you didn’t make that much extra money.
- Robust – stable states which don’t change under volatility. A robust investment strategy would be one where you don’t invest in the market at all – just stick it in your bank account and do nothing – you can’t get hurt by change but you can’t benefit from it either.
- “Antifragile” – states that are at the bottom of the state space, so they benefit from volatility (or as the subtitle says “Things that gain from disorder”). He uses a lot of fancy concepts like convexity and concavity and references to abstruse options theory, but the basic idea is that you have limited downside and unlimited upside. An antifragile investment strategy would be what Taleb describes as a “barbell” strategy, with 90% of your wealth in robust investments that are rock-solid, and the last 10% spread around a number of highly speculative investments that have big upsides (e.g. angel investing, big options bets, etc) so that if a black swan does take off, you have a piece of it. The reason the strategy is antifragile is that your downside is limited (only 10% of your wealth), but the upside is large – even a small piece of a positive black swan like a Google or Facebook is hugely valuable.
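To make the asymmetry concrete, here is a toy simulation of the two payoff shapes. All of the numbers (crash probability, leverage, payoff multiples) are made up for illustration – none of them come from Taleb:

```python
import random

random.seed(0)

def barbell_outcome(wealth=100.0, n_bets=10):
    """Toy barbell: 90% in a safe asset, 10% spread over speculative
    bets that usually go to zero but occasionally pay off 50x
    (all numbers are invented for illustration)."""
    safe = 0.9 * wealth                  # untouched by volatility
    stake = 0.1 * wealth / n_bets        # small amount per trial
    speculative = sum(stake * 50 if random.random() < 0.05 else 0.0
                      for _ in range(n_bets))
    return safe + speculative

def fragile_outcome(wealth=100.0, leverage=5):
    """Toy leveraged strategy: a slightly higher return most of the
    time, but a rare crash costs more than you have."""
    if random.random() < 0.05:                     # the black swan
        return wealth - leverage * wealth * 0.5    # bankrupt many times over
    return wealth * (1 + leverage * 0.02)          # modest extra return

trials = 100_000
barbell = [barbell_outcome() for _ in range(trials)]
fragile = [fragile_outcome() for _ in range(trials)]

# Barbell: floor at 90, large upside. Fragile: capped upside, ruinous floor.
print("barbell: worst", min(barbell), "best", max(barbell))
print("fragile: worst", min(fragile), "best", max(fragile))
```

The point of the sketch is the shape, not the numbers: the barbell’s worst case is losing the speculative 10%, while the leveraged strategy’s best case is a few extra percent against a worst case of ruin.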
One important aspect of Taleb’s mindset is his belief that prediction is impossible in a nonlinear world. If things can blow up unexpectedly (as he described in The Black Swan), and if very small differences in initial conditions can lead to wildly disparate results (as he described in Fooled by Randomness), then any strategy that depends on accurately predicting the future is doomed to failure. Hence the focus on creating antifragile states which have limited downside and let you keep the upside – in such states, you don’t have to predict the future, as most volatility will benefit you. Forecasting the future only matters if you are in a fragile state, and leads to your downfall when you get the forecast wrong, as Wall Street did in 2006-2007 – Taleb points out that Wall Street was taking the fragile bet of limited upside (ongoing subprime mortgage payments at a higher interest rate) and unlimited downside (absorbing the loss if the mortgage defaulted).
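The “small differences in, wildly disparate results out” point is the standard sensitivity-to-initial-conditions story from chaos theory. A minimal sketch using the logistic map (my example, not one from the book):

```python
def logistic_trajectory(x0, r=3.9, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), a textbook example
    of a nonlinear system that is sensitive to initial conditions."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3000001)   # differs by one part in three million

# The gap between the two trajectories starts microscopic and grows
# until the two are effectively unrelated.
gap = [abs(x - y) for x, y in zip(a, b)]
print("initial gap:", gap[0], "max gap:", max(gap))
```

If you can’t measure the starting point to infinite precision, a forecast of where a system like this ends up is worthless – which is the intuition behind Taleb’s refusal to rely on prediction.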
That’s basically the entire message of the book: don’t think you can predict the future, and put yourself in an antifragile state where you limit your downside and keep the upside (in case you get lucky with a black swan).
Taleb also goes off on all sorts of other rants that are sometimes amusing but only marginally relevant to his central argument:
- When investing in anything, don’t assume you can forecast the future and predict what will pay off – better to spread your investments around than make big bets. As he says, “Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it.”
- Taleb has a long rant against iatrogenics, which is preventable harm due to medical intervention. His point is that medical intervention in cases where the patient is not imminently dying is a fragile intervention, with limited upside (in most cases the patient will get better if the doctor does nothing) and unlimited downside (unexpected side effects leading to death). Obviously, the opposite is true in cases where the patient is dying, which makes that an antifragile intervention – there is limited downside (they are dying anyway) and unlimited upside (intervention might save them). That all makes sense (and I already had a policy of never going to the doctor for anything routine), but I think he goes too far with his contention that the best way to kill somebody is to get them a personal physician who will feel compelled to intervene.
- I agree with his larger point that when we are uncertain of the consequences (and we should generally be uncertain), we should bias towards inaction. Many people bias towards action to make themselves look like they are contributing and doing their job, but it often leads to a worse result, e.g. active fund managers are generally outperformed by index funds.
- I don’t agree with his bias that natural selection is always correct. He believes that nature has done a perfect job of crafting our bodies to fix themselves, but I think he is missing the point that natural selection only works on traits that affect reproduction. Cancer is a problem because it mostly strikes after reproductive age, and generally beyond the life expectancy of anybody before the 20th century – there is no way for natural selection to have dealt with cancer, so expecting our bodies to deal with it naturally doesn’t make sense.
- He has a nice discussion of comparative advantage – the idea in economics that it is best to specialize and do only one thing, since in that theoretical world everybody gets better goods at cheaper prices. His point is that the world changes, and if you put all of your effort into one thing, you are in a fragile state with unlimited downside if that thing is made worthless for some reason (e.g. being an expert in COBOL was great for years, until it was worthless).
- Another nice observation is that the best way to predict which technologies will last into the future is to look at how old the technology is (Taleb calls this the Lindy effect). A technology that has been around for a hundred years (e.g. the car) is more likely to be around for another hundred years than a technology that has only existed for a year (e.g. Snapchat). This is consistent with my long-standing bias against technological determinism: technology does not drive social change; technology is adopted when it serves a social need. Technology that has been around for decades has been shown to serve a social need, and is thus likely to stick around.
All in all, I can’t recommend the book as it is really terribly written, but I think the core ideas are worth considering. If you’ve read it, I’d love to get your take, and if you haven’t, let me know if this summary is useful.
P.S. I’m hosting a salon in SF this Wednesday evening with this as a potential discussion topic – contact me for details if you’re interested in attending.