Summary & Analysis
There are many scenarios in which people make predictions: economists forecast unemployment, the military predicts casualties, producers predict audiences, and so on. Some predictions rely on precise calculations, but others rely on intuition and System 1. Intuitive predictions can stem from genuine skill and expertise (as with chess masters), but they can also stem from the operation of heuristics, which can lead to mistakes.
Chapter 18 focuses on taming our intuitive predictions. While some people (like chess masters) can rely on intuition because of their learned expertise, almost everyone else needs to engage System 2 and correct the biases built into intuitive judgment.
Kahneman reintroduces Julie, currently a senior at a state university, who read fluently when she was four years old. He asks readers to estimate her GPA. System 1 makes several quick calculations, creating a causal link between Julie's early reading and her current GPA: it evaluates how precocious a child who reads at age four is, and what GPA percentile might correspond to that achievement.
This is another case in which System 1 creates cause and effect where there may be none and substitutes an easier question for a more complicated one. Here, System 1 treats reading ability at age four as equivalent to GPA, even though GPA depends on many additional factors.
Kahneman then describes another question he and Tversky once posed. After describing a freshman as “intelligent, self-confident, well-read, hardworking, inquisitive,” they asked people what percentage of descriptions of freshmen would impress them more. The answers generally placed the student in the top 15% but not the top 3%. When they asked other participants what percentage of freshmen obtain a higher GPA than this student, the answers were essentially the same, even though predicting someone’s GPA from five adjectives is bound to be inaccurate.
Like the Julie example above, this is another case in which System 1 replaces a hard question with an easier one. Kahneman makes the substitution visible by asking one group the simple question (what percentage of descriptions would impress you more) and the other group the difficult question (what percentage of freshmen would obtain a higher GPA), and showing that the two yield the same answers.
In the case of Julie, Kahneman writes, an accurate answer requires several steps: 1) start with an estimate of the average GPA (the base rate); 2) determine the GPA that matches your impression of the evidence (your intuitive estimate); 3) estimate the correlation between reading precocity and GPA; 4) move from the average toward the matching GPA in proportion to that correlation (if the correlation is .3, move 30% of the distance). This approach builds on intuition but regresses it toward the mean.
Kahneman’s explanation gives people the tools to avoid their inherent laziness. In an example like Julie’s, he argues that it is important not simply to rely on System 1 (intuition), but to factor it into calculations that System 2 should perform in order to reach a more accurate answer.
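To see how the arithmetic of the four steps works, consider a purely hypothetical set of numbers (not Kahneman’s own): suppose the average GPA at Julie’s university is 3.0, the GPA that intuitively “matches” reading fluently at age four is 3.7, and the correlation between reading precocity and GPA is .3. The corrected prediction would then be 3.0 + .3 × (3.7 − 3.0) = 3.21, a figure much closer to the base rate than the intuitive 3.7.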
Kahneman writes that the biases found in predictions expressed on a scale (like Julie’s GPA) are similar to the biases observed in judging the probabilities of outcomes (as in the example in which people are asked what kind of graduate student Tom W is). In each case, the corrective is to start with the baseline, consider the intuitive estimate, and settle on a number somewhere in between.
In the case of both Julie and Tom W, we either ignore the basic statistical information we do have (like the base rates for a given field) or fail to recognize that we do not know the base rates at all. Instead, we let the narrative presented to us shape our intuitions.
Correcting intuitive predictions is a task for System 2. It requires significant effort, and because corrected predictions are pulled toward the average, they rarely call for extreme outcomes. There are important objections to this last principle, however: to a venture capitalist, missing the next Google matters more than the risk of making an investment that ultimately fails, and the goal of some jobs is precisely to call extreme cases correctly.
Kahneman uses this example to acknowledge that in some roles it is crucial to be able to predict something extreme. For people who are not venture capitalists (or in similar positions), however, the lesson stands: it is better to do the proper calculations when making predictions.
Kahneman writes that the most valuable contribution of these corrective measures is that they require people to think about how much they actually know. He presents an example in which a department must choose between two candidates for a position: Kim and Jane. Kim leaves a strong impression and has great recommendations but no substantial track record of scientific productivity. Jane’s research record is excellent, but her interviews are less sparkling. Because the impression Kim leaves is more extreme and rests on thinner evidence, her future performance is more likely to regress toward the mean; Jane may therefore be the more solid candidate.
The example of Kim and Jane is particularly salient because it describes a situation many people encounter in their own fields, since interviewing and hiring are common across many jobs. Kahneman thus offers an example that readers are likely to remember, one that may help them avoid making such decisions purely on intuition.