I hope this note finds you well in these strange and uncertain times.
We often need to make decisions. Some decisions we can make in our heads — "should I buy 6 eggs or a dozen?". But many decisions we can't — "should I buy a house or rent?" — so we need tools to help us think: models.
A model can be anything from two numbers scribbled on the back of a napkin, to hundreds of numbers tied together in a spreadsheet, to an entire software program performing millions of calculations.
These models are simplified versions of reality that can help us to think better by using numbers. With calculators and spreadsheets, we can build models to help us decide where to go on holiday, how much money to save, and how many employees to hire.
But there's one ingredient that our models, and therefore our decisions, tend to neglect — uncertainty.
“Don't cross a river if it is four feet deep on average.”
Nassim Taleb likes to say this, and I'm sure we'd all agree with him. Yet in most of our decisions, we wade into precisely these rivers by making estimates based on averages: we don't account for uncertainty.
Now more than ever, we need to account for uncertainty when making decisions. Unfortunately, there are many wrong ways to do it.
Let's figure out the right way.
Suppose you want to do some budgeting. The economy is unstable, and you want to know how long your savings could last. You start off with a basic calculation in your head:
I have $40,000 saved up. On average, my expenses are $2,500/month. So my savings would last 16 months.
A reasonable first approximation. But not a plan you should rely on — if there are a few months where your expenses are higher, you'll end up running out of money sooner than expected.
Here's how you might try to account for this uncertainty.
Your expenses fluctuate from month to month. So instead of just looking at an average, you consider the "best-case" and "worst-case" scenarios. In the best case, you think your expenses would be $2,000, and in the worst case, $3,000. So overall, your Expenses are probably $2,000–$3,000 per month.
To make your model more accurate, you add some extra ingredients:
- Income: You have a side hustle that you think will probably make $500 – $1,000 per month
- Rate of Return: Your savings are invested across various asset classes, from which you'll probably earn 2–7% per year.
"Probably" is an important word here, because these aren't really best- and worst-case scenarios — in the real best-case scenario, you'd win the lottery (every single month), but it's not worth planning for that.
So let's suppose that "probably" means "90% likely". You think that 90% of the time, your Expenses, Income, and Rate of Return fall within those ranges. And overall, you'd like to be 90% sure that you won't run out of money prematurely.
Here's a summary of your model:
- Starting cash: $40,000
- Monthly expenses: $2,000 – $3,000
- Monthly income: $500 – $1,000
- Annual returns: 2% – 7%
These are a lot of numbers, so you move the model out of your head and into a computer.
This is what the model tells you about how long your savings will last:
- Best-case: 48 months
- Average-case: 24 months
- Worst-case: 16 months
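These three scenario calculations are easy to sketch in code. The version below assumes monthly compounding, with income and expenses applied at the end of each month; the exact month counts depend on those compounding details, so they may differ slightly from the figures above.

```python
def months_until_broke(expenses, income, annual_return,
                       start=40_000, horizon=600):
    """Run one deterministic scenario: how many months until savings hit zero."""
    monthly_r = (1 + annual_return) ** (1 / 12) - 1
    balance = start
    for month in range(1, horizon + 1):
        balance = balance * (1 + monthly_r) + income - expenses
        if balance <= 0:
            return month
    return horizon  # savings outlasted the horizon

worst = months_until_broke(expenses=3_000, income=500, annual_return=0.02)
average = months_until_broke(expenses=2_500, income=750, annual_return=0.045)
best = months_until_broke(expenses=2_000, income=1_000, annual_return=0.07)
```

Note how each scenario fixes every assumption at one end of its range for the entire run: that choice is exactly what the rest of this article picks apart.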
This is helpful — you should try to budget for the worst-case, not the average (and certainly not the best).
It's worth remembering that 16 months isn't really the worst-case; rather, it's some kind of "probably". But not the kind of "probably" you were looking for.
You wanted a 90% chance of not running out of money, but the worst-case in our model is actually a lot more conservative than that: if you follow it, there's actually a 99.5% chance you won't run out. (There's a similar effect for the best-case.)
What's more, the average-case scenario is wrong too — it's actually slightly more pessimistic than the true average outcome.
Where did it all go wrong?
Read on for an explanation of what happened. But before that, here's an interactive model letting you explore the dynamics yourself.
Change the inputs at the top to see how they affect the results underneath. The chart shows the true "Probably" in light orange, with the true average in dark orange. The misleading "best-" and "worst-" cases are shown in blue and red respectively. The misleading "average" has been omitted for clarity.
Humans are bad at reasoning about probabilities —
The infamous Monty Hall Problem shows that most of us wouldn't win the car on a game show. The Birthday Problem shows that you only need 23 people in a room for two of them to probably share a birthday — far fewer than we'd expect. And by Littlewood's law, we actually experience "one in a million" events roughly once a month.
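The birthday figure is easy to verify directly: with 365 equally likely birthdays, multiply out the chance that everyone's birthday is distinct, and the probability of a shared birthday crosses 50% at exactly 23 people.

```python
from math import prod

def p_shared_birthday(n):
    # Probability that at least two of n people share a birthday,
    # assuming 365 equally likely days.
    return 1 - prod((365 - k) / 365 for k in range(n))

# p_shared_birthday(22) is just under 50%; p_shared_birthday(23) is just over.
```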
Because probability is so unintuitive, we reduce complex things down to single numbers — averages. But like Taleb's four-feet-deep river, averages can be very misleading.
1. Average assumptions don't lead to average outcomes
In general, we can't expect average assumptions to lead to the average outcome.
Our model's average scenario turned out to be more pessimistic than the true average because of the mathematics of compounding returns. Instead of getting into that, here's a more obvious example to illustrate the point:
Consider a buffet that consists of 10 dishes. Each dish takes an average of 1 hour to prepare, and they can be prepared simultaneously. What's the average time it takes for the buffet to be ready?
Some dishes will take less than 1 hour to prepare, and others will take more. But every dish is needed for the buffet to be ready, so all it takes is 1 late dish to delay the entire meal. It's highly likely that at least 1 dish will take longer than an hour, so the average time for the buffet is longer than 1 hour.
(This reasoning is why projects are almost never completed on time.)
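A quick simulation makes the buffet effect concrete. Purely for illustration, assume each dish's preparation time is uniform between 30 and 90 minutes, so the average dish takes exactly 1 hour; the average buffet (the slowest of 10 dishes) still takes noticeably longer.

```python
import random

random.seed(0)

def dish_time():
    # One dish: uniformly 0.5-1.5 hours, i.e. 1 hour on average
    # (an illustrative distribution, not anything from the article).
    return random.uniform(0.5, 1.5)

TRIALS = 100_000
avg_dish = sum(dish_time() for _ in range(TRIALS)) / TRIALS
avg_buffet = sum(max(dish_time() for _ in range(10))
                 for _ in range(TRIALS)) / TRIALS
# avg_dish comes out near 1.0 hours; avg_buffet comes out near 1.4 hours.
```

The buffet's average is driven by the maximum of the dish times, not their average, which is why it exceeds 1 hour.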
Whenever a model does anything more complicated than basic addition and subtraction, the average assumptions will likely not lead to the average outcome.
2. Extreme assumptions can lead to more extreme outcomes
In general, we can't expect extreme assumptions (like best- and worst-cases) to lead to equally extreme outcomes.
Our model's worst-case scenario was actually significantly less likely than we expected. Here's why:
Our worst-case scenario assumes that all of our worst-case assumptions (high Expenses, low Rate of Return, low Income) occur simultaneously, every single month. In practice, this almost never happens — it's expected that some things will go wrong some of the time, but it's very rare for everything to go wrong all of the time. (The same applies to the best-case scenario)
In practice, though, there are situations in which it's more likely for many things to go wrong simultaneously: during a global recession, many parts of a business are often impacted at once, so correlations like this should be built into the model.
Our model was fairly simple, but the more uncertain variables there are, the more misleading a simple scenario-based approach becomes:
Consider a CEO planning a company's annual budget. Each of her 10 VPs thinks that their department will need $800k–$1M across the year (they're 90% sure they won't need more than $1M). Each department operates independently — higher budget requirements for one department have no connection to the others. The CEO wants to be 90% sure that the company has enough cash in the bank for every department. How much should she budget?
If she simply set aside $1M for each of the departments — $10M total — then the CEO would actually be more than 99.9% sure of having enough cash. It turns out that she can be 90% sure with just $9.2M, giving her an extra $800k to invest back into the company.
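A simulation along these lines reproduces both numbers. It treats each VP's "$800k–$1M" as the 90% interval of a normal distribution — an assumption of this sketch, not something the VPs specified.

```python
import random

random.seed(1)

# Assumed model: each department's need is normally distributed with its
# 5th/95th percentiles at $800k/$1M, i.e. mean $900k, sigma = $100k / 1.645.
SIGMA = 100_000 / 1.645

def company_need():
    # Total need across ten independent departments.
    return sum(random.gauss(900_000, SIGMA) for _ in range(10))

needs = sorted(company_need() for _ in range(100_000))
p90 = needs[int(0.9 * len(needs))]          # budget that suffices 90% of the time
coverage_10m = sum(n <= 10_000_000 for n in needs) / len(needs)
```

With these assumptions, `p90` lands around $9.25M, while a flat $10M budget covers well over 99.9% of simulated years — the independence of the departments is what lets their fluctuations partially cancel out.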
Being too optimistic has obvious costs, but being too pessimistic means you'll end up making unnecessary sacrifices to try to mitigate risks that don't actually exist.
Humans are bad at reasoning about probabilities... so we should leave it to computers.
"The last thing you do before climbing on a ladder to paint the side of your house is to give it a good shake. By bombarding it with random physical forces, you simulate how stable the ladder will be when you climb on it. You can then adjust it accordingly so as to minimize the risk that it falls down with you on it." – The Flaw of Averages
What if we could do the equivalent of shaking the ladder, but for our model?
Recall the following dialogue from the 2019 blockbuster Avengers: Endgame:
- Dr. Strange: I went forward in time... to view alternate futures. To see all the possible outcomes of the coming conflict.
- Peter Quill: How many did you see?
- Dr. Strange: Fourteen million six hundred and five.
- Tony Stark: How many did we win?
- Dr. Strange: ...One.
Like Dr. Strange, we too can "view alternate futures" via simulation.
By using our uncertain assumptions, we can simulate thousands of possible scenarios to see what our probable range of outcomes would be.
If we run a lot of simulations and find that our savings last at least 18 months in most of them, then we can safely conclude that (according to our model), we'll probably be fine for at least 18 months.
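Here's a sketch of such a simulation for the savings model, under a couple of assumptions the text leaves open: each month's Expenses and Income are drawn uniformly from their ranges, and one Rate of Return is drawn per simulated future.

```python
import random

random.seed(42)

def simulate_months(start=40_000, horizon=120):
    # One possible future: how many months do the savings last?
    annual_return = random.uniform(0.02, 0.07)   # drawn once per future
    r = (1 + annual_return) ** (1 / 12) - 1
    balance = start
    for month in range(1, horizon + 1):
        expenses = random.uniform(2_000, 3_000)  # fresh draw each month
        income = random.uniform(500, 1_000)
        balance = balance * (1 + r) + income - expenses
        if balance <= 0:
            return month
    return horizon

runs = sorted(simulate_months() for _ in range(10_000))
p10 = runs[len(runs) // 10]  # savings last at least this long in 90% of futures
```

With these assumptions, the 10th-percentile outcome comes out well above the naive 16-month worst-case — exactly the over-conservatism described earlier.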
Great. How can we actually run simulations like these?
1. Spreadsheet plugins
There's no reasonable way to run simulations in a spreadsheet without using plugins. The most popular Excel plugins for this are @RISK (Palisade) and Crystal Ball (Oracle). The most popular Sheets plugin is Risk Solver.
These are powerful tools, but are very technical — they have a steep learning curve and require a decent amount of probability/statistics knowledge.
2. Causal
Causal is a number-crunching tool that runs simulations out of the box. Like a spreadsheet, you can create models from scratch using your own assumptions and formulas, but unlike a spreadsheet, your assumptions can take on uncertain values.
To define an uncertain value for your Monthly Expenses, you can write an expression like 1500 to 2500. Causal will automatically simulate thousands of possible scenarios with different Monthly Expenses in each month, to show you what the likely range of outcomes would be.
It's great that we're increasingly working with numbers to guide our decisions, enabled by tools like spreadsheets.
To get to the next level of numeracy, we need to start working with uncertainty, and we need new kinds of tools to enable this. (Probably)