When Stephen Dubner interviewed Steven Levitt in 2003, he didn’t expect to get anything more than material for a New York Times Magazine article. Much to his surprise, he found a brilliant economic mind who thought a lot like he did, with keen interest in unexpected questions and the practical application of economic theory. They decided to do a project together, and Freakonomics was born. The popular book takes an expert’s economic understanding and applies it to confounding real-life situations, explaining all along why a study some consider dry is actually the answer to life’s most interesting riddles. Check out the 11 key insights from Freakonomics below.
Economics is about more than numbers; when applied to the right situation, it can make a complicated world easier to understand.
The study of economics uses tools that can be applied to much more than just investment and real estate. At its core, it’s a science of measurement, one built on understanding how any given factor influences a multitude of others.
Understanding incentive is key to understanding economics. There are three primary types of incentive: economic, moral, and social.
An economic incentive is simple: will this action cost me more than it’s worth? Moral incentives are less cut-and-dried, and depend on individual perceptions of right and wrong. Social incentives are slightly simpler, driven by a desire to act within the social norm of acceptable behavior. A combination of all three is the surest way to encourage or deter behavior.
There’s an art to incentivization, and it’s easy to get it wrong.
Influencing behavior with incentives is complicated, because introducing a new incentive (say, a fine for parents who pick up their children late from daycare) can displace a preexisting “invisible” incentive (the guilt parents feel when picking up their children late).
Incentives affect people differently, and influence the same person differently depending on the day.
The same disincentives that keep you from robbing a bank exist for a bank robber too; he simply weighs them differently. And you’re subject to differing degrees of influence yourself. For instance, when a bagel salesman used an “honor system” to collect payment for bagels he left in workplace breakrooms, he found that good weather meant more money in the box, while bad weather or stressful periods of the year (like Christmas and Thanksgiving) meant people paid less — even though the incentives stayed the same.
Experts can cheat their clients thanks to the information gap that got them hired in the first place.
When an expert’s clients don’t have all the details, they also don’t understand all the incentives. You hire a real estate agent because they know more than you do about the housing market, pricing, and sales. But they might not have your best interests in mind. While your goal is selling the house for as much as possible, the real estate agent is likely more interested in selling it quickly, meaning their cut is a bit smaller but more immediate.
Experts cheat people with more than just an information gap: they take advantage of vulnerabilities like fear and anxiety.
Be wary of your emotional state when making an economic transaction that’s unfamiliar to you. It’s hard to negotiate a fair price when you’re uninformed, and much more difficult when you’re also emotionally compromised.
The internet makes it harder for people to take advantage of the information asymmetry.
With free price-comparison websites, it’s easy to avoid paying too much for insurance and other services that were once a mystery to anyone outside of the field.
When you’re the seller, avoid information gaps as much as possible.
When there’s an obvious information gap — say, no photo in an online dating profile — the uninformed party tends to assume the worst. Thus, as a seller, you have to anticipate the information the buyer is going to expect, and provide satisfactory detail.
Our perception of risk is flawed thanks to our vivid imaginations. You can’t always trust your gut.
When we can readily envision the risk in question, we give that risk more weight than it statistically (and rationally) deserves. For instance, when we think of our children playing around swimming pools, we rarely think about the risk of death. However, when we think about kids playing around guns, we’re much more concerned — despite the fact that the pool is a more likely source of danger.
The correlation/causation fallacy leads us to incorrect conclusions.
When two factors increase simultaneously — say, money and success in a political campaign — we’re quick to relate them as a cause and effect. With closer study, however, it’s often the case that these factors were both results of another influencing factor, or are entirely unrelated.
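The confounder pattern can be sketched with a toy simulation. Here a hypothetical “appeal” variable (my invention, not the book’s) independently drives both a candidate’s fundraising and vote share; money never causes votes in this model, yet the two still move together:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)

# Hypothetical model: underlying candidate "appeal" independently drives
# both fundraising and vote share. Money has no causal effect on votes here.
appeal = [random.gauss(0, 1) for _ in range(10_000)]
money = [a + random.gauss(0, 1) for a in appeal]
votes = [a + random.gauss(0, 1) for a in appeal]

# Money and votes correlate strongly anyway, purely via the shared cause.
print(round(pearson(money, votes), 2))
```

The printed correlation comes out strongly positive even though, by construction, changing a candidate’s money would change nothing about their votes — exactly the trap the insight describes.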
The most dramatic effects often have remote, subtle causes.
When trying to explain the drop in crime in the United States in the early 1990s, experts looked to policy changes (new gun control laws, etc.) and an improving economy. The true source of change, though, was the legalization of abortion, which diminished the number of children born into disadvantaged families.