I love behavioral economics. There are so many key concepts that fundamentally change the way you see human behavior. The mix of one part economics with two parts psychology is a powerful one that might just enable us to make the world a better place for people to live. These ideas are not thoroughly sourced, but most come from books I’ve read or classes I’ve taken. This isn’t an academic paper, so get off my back, okay? Also, enjoy! I hope to open you up to a different perspective.
Good Books on Behavioral Economics:
See this site for a list of good books on the subject.
Classical economics is the study of the production, distribution, and consumption of goods. Economic theory has traditionally focused on the interplay between supply, demand, and cost. This framework tends to describe people as rational decision makers who maximize their utility function according to the information available to them. It has produced a lot of really great systems that have generated tons of wealth and advancement for civilization. One small hitch: people aren’t really rational utility maximizers, and people don’t have equal access to information. This has led to all kinds of problems, like ineffective health insurance, environmental devastation, predatory practices, and generally bad decision making. Enter behavioral economics.
Behavioral economics is the study of economics that takes into account how humans really think and behave. There are a ton of cool ideas that can be used to understand economic behavior better and to manipulate people into making better decisions. That sounds terrible and terrifying, but it’s great! I’ll outline a few of the ideas from behavioral economics that I like the most.
Teach the Garden to Weed Itself
King Solomon once settled a maternity dispute by ordering that the child be divided equally between the two women. One of the women quickly gave up her claim to the child, preferring that the child live with someone else rather than die. The other woman thought it fair that neither keep the child. Solomon changed his mind and gave the child to the first woman.
Solomon made a decision that forced the parties to reveal their true intentions. In Superfreakonomics, authors Dubner and Levitt describe how they helped a British bank develop an algorithm to catch terrorists. The effort started in response to learning of the unusual banking behavior of the 9/11 terrorists. In the book, the authors describe that behavior in detail: terrorists tend to make one large deposit, with small withdrawals that don’t correspond to normal expenses, and despite having families, terrorists never buy life insurance. They suggested that clever terrorists should actually buy the cheap life insurance plans offered by the banks. When the authors visited the UK, they were blasted in the news and in interviews for letting terrorists in on the secrets the banks were using to catch them. For years, the authors feigned shame while secretly smiling. They revealed a few years later that they had worked with the banks to create cheap life insurance plans that were secretly referred to as “terrorist life insurance.” People don’t tend to buy life insurance from banks, especially cheap, bad plans. So when a small group of people all of a sudden started buying these plans in a certain time frame, the authorities knew whom to investigate. It was an epic set-up.
Another, more practical example comes from basic hiring and college admissions practices. The trick is to make the process rigorous enough that only the people you most want to hire are left standing. The online clothing store Zappos is well known for its fun and rewarding work environment. Despite the low pay, positions at Zappos call centers are highly coveted. Zappos’ hiring process is designed to be a garden that weeds itself: after a long hiring process, Zappos offers new employees $2000 to quit and never come back. That means someone could get through the entire hiring process and quit on the first day with $2000 in their pocket without having worked at all. This sounds like a waste of money, but the company understands that a bad employee costs a lot more than that: their productivity is low, they require more resources, and so on. By offering the $2000, Zappos allows the garden to weed itself of workers who do not truly wish to contribute to the company.
The Gambler’s Fallacy
You are down to your last $100 bill. You can bet on whether a coin flip comes up heads or tails. You know that the coin has come up tails 14 times in a row. Should you put your money on tails, because it’s hot right now? Or should you put your money on heads, because the luck has run dry on tails?
The answer is neither. You should invest that $100 in some food and clothes so you can go interview for a job, and enter a Gamblers Anonymous program, because you clearly have a problem. Seriously, though: the probability of the next coin flip has nothing to do with the previous flips. That information tells you nothing about what the next flip will be. The Gambler’s Fallacy is the mistaken idea that the past history of independent random events informs future outcomes.
This is important in economics because you can apply it to all kinds of decisions, like whether to buy or sell a stock, or what kind of insurance to get, etc. Behavioral economics confirms time and time again how poor human intuition is at dealing with probabilities and making logical decisions based on probabilities. See also the Monty Hall problem for fun, counterintuitive probabilities.
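The independence claim above is easy to check by simulation. Here is a minimal sketch (the function name, streak length, and trial count are my own choices, not anything from a textbook): flip streaks of fair coins, keep only the runs that came up all tails, and look at what the next flip does.

```python
import random

def next_flip_after_streak(num_trials=200_000, streak_len=5, seed=42):
    """Estimate P(heads | the previous `streak_len` flips were all tails).

    If flips are independent, the streak carries no information and this
    estimate should hover around 0.5, no matter how long the streak is.
    """
    rng = random.Random(seed)
    heads_after, streaks_seen = 0, 0
    for _ in range(num_trials):
        # A flip of rng.random() < 0.5 counts as tails here.
        if all(rng.random() < 0.5 for _ in range(streak_len)):
            streaks_seen += 1
            # The "next" flip after the all-tails streak.
            if rng.random() >= 0.5:
                heads_after += 1
    return heads_after / streaks_seen

print(next_flip_after_streak())  # close to 0.5: the streak tells you nothing
```

Raising `streak_len` to 14 to match the story works the same way, it just makes qualifying streaks much rarer, so you need far more trials for a stable estimate.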
The Inertia Bias
Behavioral economics studies the effect of human bias on decision making. Consider the following thought experiment: suppose there is a room full of 100 people. You have the choice whether or not to push a button. If you push the button, half of the people will die instantly. If you don’t push the button, a coin will be flipped. If the coin is heads, everyone in the room dies instantly; if the coin is tails, everyone is saved. Pause here–would you push the button? Expected-value reasoning tells us that the decision is irrelevant–both choices have an expected value of 50 human lives. When surveyed, however, an overwhelming majority of people choose not to push the button. Why is this?
The inertia bias is the idea that we are predisposed to do nothing, to keep the status quo. If we don’t push the button, the decision is out of our hands. If we push the button, then it feels like we killed the 50 people. We’d rather not carry that burden, even if it means risking all 100. We can push this thought experiment further to really test the inertia bias. What if pushing the button only kills 49 people? Then expected value tells us to push it (51 expected survivors, versus 50 if we leave it to the coin flip). I’m sure there is research with precise numbers on this, but the main point is that people will refuse to push the button even when it kills fewer than 50 people.
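The expected-value comparison in the thought experiment is simple enough to spell out as arithmetic. This is just a sketch of the calculation from the paragraphs above (the function name is my own):

```python
def expected_survivors(room_size=100, killed_by_button=50):
    """Expected survivors for each choice in the button thought experiment."""
    push = room_size - killed_by_button   # pushing gives a certain outcome
    coin = 0.5 * room_size + 0.5 * 0      # heads: everyone dies, tails: everyone lives
    return push, coin

print(expected_survivors(100, 50))  # (50, 50.0): the two choices are equivalent
print(expected_survivors(100, 49))  # (51, 50.0): pushing saves one more life on average
```

The second line is the twist: with 49 deaths the button is strictly better in expectation, yet people still refuse to push it.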
The inertia bias can be exploited for good, as well. For instance, a policy that defaults all drivers to be organ donors drastically increases the number of organ donors and ultimately saves lives. If someone doesn’t want to be an organ donor, they can opt out–but most people won’t because of the inertia bias. Setting defaults is an incredibly powerful way to take advantage of the inertia bias for good. Other examples include setting employee defaults for health insurance and retirement plans. People can do the research and choose their own plans (or opt out altogether), but the default encourages them to make the decision to save.
Loss Aversion
This one is a fun irrationality, and we share it with other primates. The idea is that we hate losing things more than we like gaining them. Let’s say I give you 5 M&M’s, and right before you are about to eat them, I snatch them back. You’re pissed! Why, though? You have exactly what you started with, so rationally, you should feel neutral. This means that losing 5 M&M’s sucks worse than gaining 5 M&M’s is awesome. There’s a lot of evolutionary baggage behind this one, and it can lead to some irrational economic behavior.
One way loss aversion isn’t great is that it causes people to save less. Let’s say you get a raise of 5%. Awesome–you’re going to put all that extra cash into savings, right? Well, if you do that, your loss-averse brain will register it as a pay cut, even though you are making exactly what you were making 5 seconds ago. One way to counter this is to enroll in a savings plan that scales automatically with pay increases. Such a plan might automatically divert 2 points of that 5% raise into a savings/retirement account, so it still feels like you are getting a 3% raise. You never see the whole raise in your paycheck, so you never feel like you’re giving any of it back.
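The raise-splitting arithmetic above can be made concrete. A minimal sketch, using the percentages from the paragraph and an illustrative salary (the function name and numbers are mine, not from any real plan):

```python
def split_raise(salary, raise_pct=0.05, saved_pct=0.02):
    """Split a raise between savings and take-home pay.

    saved_pct of the old salary goes straight to savings; the rest of the
    raise shows up in the paycheck, so it never feels like a pay cut.
    """
    new_salary = salary * (1 + raise_pct)
    to_savings = salary * saved_pct
    take_home = new_salary - to_savings
    return take_home, to_savings

take_home, saved = split_raise(50_000)            # 5% raise, 2 points auto-saved
print(round(take_home), round(saved))             # 51500 1000
```

Take-home pay still rises (from $50,000 to $51,500, an effective 3% raise), while $1,000 lands in savings without the brain ever registering a loss.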
Paralysis by Analysis
In classical economics, more choices and more information mean better decisions and happier customers. Unfortunately, there is a hidden cost to the human brain processing more information and more choices. Imagine two stores that sell jam. Store A sells 3 kinds of jam: strawberry, grape, and raspberry. Store B sells 37 kinds of jam. Store B is the jam king. They have boysenberry, they have maple blackberry, they have 6 kinds of strawberry, and so on. An economist might conclude that Store B is making a killing, and that all of its customers are way happier because they get to buy the exact jam of their preference. It turns out this isn’t true at all. In Store A, you go to the jam aisle, pick one, and move on. In Store B, you go to the jam aisle, spend minutes or hours reading labels, choose one, change your mind, and often just walk away because it’s too overwhelming. If you do manage to pick a jam, you’re less happy with it, because you can’t help but wonder whether there was another jam you would have liked better (that’s loss aversion at work).
Feeling overwhelmed by choice has been referred to as “paralysis by analysis.” Behavioral economics teaches us to limit choices when possible. That sounds counterintuitive, but a few quality choices usually beat a multitude of choices. Research suggests somewhere around 5–10 choices is optimal, which lines up with the classic estimate that short-term working memory holds about 7 ± 2 items at a time.
The Cost of Bad Predictions
If you have ever watched sports or listened to political talking heads, then this idea might give you a new perspective. Have you ever noticed, before a big football game, that the commentators make all kinds of predictions? And did you keep a detailed list of those predictions and analyze how accurate they were after careful post-game consideration? No one does. When a sports commentator makes a prediction that doesn’t come true, no one notices or cares. There is no cost to making bad predictions. When a prediction is true, however, the commentator is likely to lord it over their colleagues with the time-honored “I told you so” smugness that we all love. Perhaps if sports commentators or psychics had to pay a fee every time they were wrong, there’d be a different story.
Philip Tetlock is a political scientist who studies the accuracy of predictions. His studies focus on small, testable predictions like, “What will be the effect of upcoming political situation X in the Middle East on oil prices in the next 6 months?” or, “What will the stock market be like in one year?” He has organized prediction challenges like these, open to the public, through the Good Judgment Project, and has found a class of people he refers to as “superforecasters.” It turns out that experts often don’t fare much better in their accuracy than a computer program written to always make the baseline prediction of “no change.” Superforecasters are more accurate, partly because they have more self-doubt than their bombastic expert counterparts. How cool would it be to be a superforecaster?
Stay tuned for part 2!