Part I: Thinking About Thought
Chp 1. Everything’s an Inference
It’s possible to make fewer errors in judgment by following a few simple suggestions implicit in this chapter.
Remember that all perceptions, judgments, and beliefs are inferences and not direct readouts of reality. This recognition should prompt an appropriate humility about just how certain we should be about our judgments, as well as a recognition that the views of other people that differ from our own may have more validity than our intuitions tell us they do.
Be aware that our schemas affect our construals. Schemas and stereotypes guide our understanding of the world, but they can lead to pitfalls that can be avoided by recognizing the possibility that we may be relying too heavily on them. We can try to recognize our own stereotype-driven judgments as well as recognize those of others.
Remember that incidental, irrelevant perceptions and cognitions can affect our judgment and behavior. Even when we don’t know what those factors might be, we need to be aware that much more is influencing our thinking and behavior than we can be aware of. An important implication is that it will increase accuracy to try to encounter objects and people in as many different circumstances as possible if a judgment about them is important.
Be alert to the possible role of heuristics in producing judgments. Remember that the similarity of objects and events to one another can be a misleading basis for judgments. Remember that causes need not resemble effects in any way. And remember that assessments of the likelihood or frequency of events can be influenced simply by the readiness with which they come to mind.
Many of the concepts and principles you’re going to read about in this book are helpful in avoiding the kinds of inferential errors discussed in this chapter. These new concepts and principles will supplement, and sometimes actually replace, those you normally use.
Chp 2. The Power of the Situation
One of the main lessons of these first two chapters is that there is vastly more going on in our heads than we realize. The implications of this research for everyday life are profound.
Pay more attention to context. This will improve the odds that you’ll correctly identify situational factors that are influencing your behavior and that of others. In particular, attention to context increases the likelihood that you’ll recognize social influences that may be operating. Reflection may not show you much about the social influences on your own thinking or behavior. But if you can see what social influences might be doing to others, it’s a safe bet that you’re susceptible as well.
Realize that situational factors usually influence your behavior and that of others more than they seem to, whereas dispositional factors are usually less influential than they seem. Don’t assume that a given person’s behavior in one or two situations is necessarily predictive of future behavior. And don’t assume that the person has a trait or belief or preference that has produced the behavior.
Realize that other people think their behavior is more responsive to situational factors than you’re inclined to think—and they’re more likely to be right than you are. They almost certainly know their current situation—and their relevant personal history—better than you do.
Recognize that people can change. Since the time of the ancient Greeks, Westerners have believed that the world is largely static and that objects, including people, behave as they do because of their unalterable dispositions. East Asians have always thought that change is the only constant. Change the environment and you change the person. Later chapters argue that a belief in mutability is generally both more correct and more useful than a belief in stasis.
These injunctions can become part of the mental equipment you use to understand the world. Each application of the principles makes further applications more likely because you’ll be able to see their utility and because the range of situations in which they can be applied will consequently increase.
Chp 3. The Rational Unconscious
This chapter has many implications for how we should function in daily life. Here are a few of the most important.
Don’t assume that you know why you think what you think or do what you do. We don’t know what role may have been played by little-noticed and promptly forgotten incidental factors. Moreover, we often can’t even be sure of the role played by factors that are highly salient. Why should you give up belief in self-knowledge, and do so at the cost of self-confidence? Because you’re less likely to do something that’s not in your best interest if you have a healthy skepticism about whether you know what you really think or why you really do the things you do.
Don’t assume that other people’s accounts of their reasons or motives are any more likely to be right than are your accounts of your own reasons or motives. I frequently find myself telling other people why I did something. When I do that I’m often acutely aware that I’m making this up as I go along and that anything I say should be taken with more than a grain of salt. But my hearers usually nod and seem to believe everything I say. (With psychologists I usually have the grace to remind them there is no particular reason to believe me. Don’t try that with nonpsychologists.)
But despite my recognition that my explanations are somewhere between “probably true” and “God only knows,” I tend to swallow other people’s explanations hook, line, and sinker. Sometimes I do realize that the person is fabricating plausible explanations rather than reporting accurately, but more typically I’m as much taken in as other people are taken in by my explanations. I really can’t tell you why I remain so gullible, but that doesn’t prevent me from telling you to carry a saltshaker around with you.
The injunction to doubt what people say about the causes of their judgments and behavior, incidentally, is spreading to the field of law. Increasingly it’s recognized that what witnesses, defendants, and jurors say about why they did what they did, or how they reached the conclusions they came to, is not to be trusted—even when they are doing their level best to be perfectly honest.15
You have to help the unconscious help you. Mozart seems to have secreted music unbidden. (And if you saw the movie Amadeus, you know that he frequently wrote down the output without ever blotting a note.) But for ordinary mortals, creative problem solving seems to require consciousness at two junctures.
1. Consciousness seems to be essential for identifying the elements of a problem, and for producing rough sketches of what a solution would look like. The New Yorker writer John McPhee has said that he has to begin a draft, no matter how crummy it is, before the real work on the paper can begin. “Without the drafted version—if it did not exist—you obviously would not be thinking of things that would improve it. In short, you may be actually writing only two or three hours a day, but your mind, in one way or another, is working on it twenty-four hours a day—yes, while you sleep—but only if some sort of draft or earlier version already exists. Until it exists, writing has not really begun” (McPhee, 2013). Another good way to kick the process off, McPhee says, is to write a letter to your mother telling her what you’re going to write about.
2. Consciousness is necessary for checking and elaborating on conclusions reached by the unconscious mind. The same mathematicians who say that a given solution hit them out of the blue will tell you that making sure the solution was correct took hundreds of hours of conscious work.
The most important thing I have to tell you—in this whole book—is that you should never fail to take advantage of the free labor of the unconscious mind.
I teach seminars by posting a list of thought questions to serve as the basis for discussion for the next class. If I wait until the last minute to come up with those questions, it’s going to take me a long time and the questions won’t be very good. It’s tremendously helpful for me to sit down two or three days before my deadline—just for a few minutes—and think about what the important questions might be. When I later start to work on the questions in earnest, I typically feel as if I’m taking the questions by dictation rather than creating them. If you’re a student, a question for you: When is the right time to begin working on a term paper due the last day of class? Answer: The first day of class.
If you’re not making progress on a problem, drop it and turn to something else. Hand the problem off to the unconscious to take a shot at it. When I used to do calculus homework, there would always come a point when I hit a problem that I absolutely could make no progress on. I would stew over the problem for a long time, then move on in a demoralized state to the next problem, which was typically harder than the previous one. There would follow more agonized conscious thought until I shut the book in despair. Contrast this with how a friend tells me that he used to deal with the situation of being stumped on a calculus problem. He would simply go to bed and return to the problem the next morning. As often as not the right direction to go popped into his head. If only I had known this person when I was in college.
I hope that having a clearer understanding of how your mind works will make it easier to understand how useful the concepts in this book can be. The fact that it may seem to you that it’s unlikely that a given concept would be helpful doesn’t mean you wouldn’t use it—and use it properly—if you knew it. And the more you use a given concept, the less aware of using it you will become.
Part II: The Formerly Dismal Science
Chp 4. Should You Think Like an Economist?
Microeconomists don’t agree on just how people make decisions or how they should make them. They do agree, however, that cost-benefit analysis of some kind is what people normally do, and should do.
The more important and complicated the decision, the more important it is to do such an analysis. And the more important and complicated the decision is, the more sensible it is to throw the analysis away once it’s done.
Even an obviously flawed cost-benefit analysis can sometimes show in high relief what the decision must be. A sensitivity analysis might show that the range of possible values for particular costs or benefits is enormous, but a particular decision could still be clearly indicated as the wisest. Nevertheless, have a salt cellar handy when an economist offers you the results of a cost-benefit analysis.
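A minimal Python sketch of a sensitivity analysis, with entirely invented figures, shows how a decision can be clearly indicated even when one input is highly uncertain:

```python
# Sensitivity analysis sketch: even with wide uncertainty in one input,
# the decision can be unambiguous. All figures are hypothetical.

def net_benefit(annual_benefit, annual_cost, years):
    """Undiscounted net benefit of a project over its lifetime."""
    return (annual_benefit - annual_cost) * years

# Suppose the annual benefit is anywhere from 5 to 20 (a huge range),
# while the annual cost is pinned down at 3.
outcomes = [net_benefit(b, annual_cost=3, years=10) for b in (5, 10, 20)]

# The project comes out ahead across the entire plausible range,
# so the decision is clear despite the uncertainty.
assert all(o > 0 for o in outcomes)
print(outcomes)  # net benefit at the low, middle, and high estimates
```

When the sign of the answer flips somewhere inside the plausible range, that’s exactly when the saltshaker comes out.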
There is no fully adequate metric for costs and benefits, but it’s usually necessary to compare them anyway. Unsatisfactory as it is, money is frequently the only practical metric available.
Calculations of the value of a human life are repellent and sometimes grossly misused, but they are often necessary nonetheless in order to make sensible policy decisions. Otherwise we risk spending great resources to save a few lives or fail to spend modest resources to save many lives.
Tragedies of the commons, where my gain creates negative externalities for you, typically require binding and enforceable intervention. This may be by common agreement among the affected parties or by local, national, or international agencies.
Chp 5. Spilt Milk and Free Lunch
Expended resources that can’t be retrieved should not be allowed to influence a decision about whether to consume something that those resources were used to obtain. Such costs are sunk, no matter what you do, so carrying out the action for which the costs were incurred makes sense only if there is still a net benefit from it. No point in eating sour grapes just because they were expensive. Corporations and politicians get the public to pay for goods and projects in order to justify past expenditures because most people don’t understand the sunk cost concept.
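A toy Python calculation makes the sunk cost logic concrete; the ticket price and “utility points” below are invented for illustration:

```python
# Sunk costs drop out of a rational comparison. Hypothetical numbers:
# you paid 50 for a nonrefundable concert ticket, but tonight you'd
# enjoy the concert at 20 "utility points" versus staying home at 30.

ticket_price = 50          # sunk: gone whether you go or not
value_of_concert = 20
value_of_evening_home = 30

# Wrong framing: "I'll waste 50 if I stay home."
# Right framing: compare only what each choice yields from here on.
go = value_of_concert
stay = value_of_evening_home
best = "stay home" if stay > go else "go"
assert best == "stay home"   # the 50 is lost either way
```

Note that `ticket_price` appears nowhere in the comparison: that is the whole point.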
You should avoid engaging in an activity that has lower net benefit than some other action you could take now or in the future. You shouldn’t buy a thing, or attend an event, or hire a person if such an action likely blocks a more beneficial action. At least that’s the case when immediate action is not strictly necessary. You should scan a decision of any consequence to see whether opportunity costs may be incurred by it. On the other hand, obsessive calculation of opportunity costs for small matters is a cost in itself. True, you can’t have chocolate if you choose vanilla, but get over it.
Falling into the sunk cost trap always entails paying unnecessary opportunity costs. If you do something you don’t want to do and don’t have to do, you are automatically wasting an opportunity to do something better.
Attention to costs and benefits, including sunk cost and opportunity cost traps, pays. The thinkers over the centuries who have urged some form of cost-benefit analysis are probably right. There’s evidence that people who make explicit cost-benefit decisions and avoid sunk cost and opportunity cost traps are more successful.
Chp 6. Foiling Foibles
Loss considerations tend to loom too large relative to gain considerations. Loss aversion causes us to miss out on a lot of good deals. If you can afford a modest loss in order to have an equal chance for a larger gain, that’s the way you should normally bet.
We’re overly susceptible to the endowment effect—valuing a thing more than we should simply because it’s ours. If you have an opportunity to divest something at a profit but feel reluctant to do so, ask yourself whether that’s simply because of your ownership of the thing rather than some other factor such as expected net value for keeping the thing. Sell your white elephants no matter how much room you have in your attic for them. The people who tell you to give away every article of clothing you haven’t used for a year are right. (Do what I say, not what I do. I periodically shuffle shirts around in my closet that I haven’t worn in a decade because there is after all a chance I might buy a jacket that one of them would look good with.)
We’re a lazy species: we hang on to the status quo for no other reason than that it’s the way things are. Put laziness to work by organizing your life and that of others so that the easy way out is actually the most desirable option. If option A is better than option B, give people option A as the default and make them check a box to get option B.
Choice is way overrated. Too many choices can confuse and make decisions worse—or prevent needed decisions from being made. Offer your customers A or B or C. Not A through Z. They’ll be happier and you’ll be richer. Offering people a choice implies that any of the alternatives would be rational to pick; spare people the freedom of making a wrong choice in ignorance of your opinion of what would be the best alternative. Tell them why you think option A is best and what the considerations are that might make it rational to choose something different.
When we try to influence the behavior of others, we’re too ready to think in terms of conventional incentives—carrots and sticks. Monetary gain and loss are the big favorites among incentives. But there are often alternative ways of getting people to do what we want. These can be simultaneously more effective and cheaper. (And attempts at bribery or coercion are remarkably likely to be counterproductive.) Just letting people know what other people do can be remarkably effective. Want people to use less electricity? Tell them they’re using more than their neighbors. Want students to drink less alcohol? Tell them their fellow students are not the lushes they think they are. Rather than pushing people or pulling people, try removing barriers and creating channels that make the most sensible behavior the easiest option.
Part III: Coding, Counting, Correlation, and Causality
Chp 7. Odds and Ns
Observations of objects or events should often be thought of as samples of a population. Meal quality at a given restaurant on a given occasion, performances of a given athlete in a particular game, how rainy it was during the week we spent in London, how nice the person we met at the party seems to be—these all have to be regarded as samples from a population. And all assessments that are pertinent to such variables are subject to error of some degree or other. The larger the sample, other things being equal, the more the errors will cancel one another out and bring us closer to the true score of the population. The law of large numbers applies to events that are hard to attach a number to just as much as to events that can readily be coded.
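A short Python simulation (with an invented “true quality” and noise level) shows the law of large numbers doing its work: the average error of an estimate shrinks as the sample grows.

```python
import random
import statistics

random.seed(0)

# Each "meal at the restaurant" is a draw from a noisy distribution
# around a hypothetical true quality of 7.0.
true_quality = 7.0

def sample_mean(n):
    """Estimate quality from n noisy observations."""
    return statistics.mean(random.gauss(true_quality, 2.0) for _ in range(n))

def avg_error(n, trials=2000):
    """Average absolute error of the estimate across many repetitions."""
    return statistics.mean(abs(sample_mean(n) - true_quality)
                           for _ in range(trials))

small_n_error = avg_error(2)    # two meals
large_n_error = avg_error(50)   # fifty meals

# More observations: errors cancel, and we land closer to the truth.
assert large_n_error < small_n_error
```

Nothing here depends on the events being meals; any noisy observation of a stable underlying quantity behaves the same way.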
The fundamental attribution error is primarily due to our tendency to ignore situational factors, but this is compounded by our failure to recognize that a brief exposure to a person constitutes a small sample of a person’s behavior. The two errors lie behind the interview illusion—our drastic overconfidence that we know what a person is like given what the person said or did in a thirty-minute encounter.
Increasing sample size reduces error only if the sample is unbiased. The best way to ensure this is to give every object, event, or person in the population an equal chance of appearing in the sample. At the very least we have to be attentive to the possibility of sample bias: Was I relaxed and in pleasant company when I was with Jane at Chez Pierre or was I uptight because my judgmental sister-in-law was also there? Larger samples just make us more confident about our erroneous population estimates if there is bias.
The standard deviation is a handy measure of the dispersion of a continuous variable around the mean. The larger the standard deviation for a given type of observation, the less confident we can be that a particular observation will be close to the mean of the population of observations. A big standard deviation for a type of investment means greater uncertainty about its value in the future.
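A quick illustration with Python’s statistics module, using two invented return series with the same mean but very different dispersion:

```python
import statistics

# Two hypothetical investments with the same average annual return (5%)
# but very different spread around that average.
steady   = [4.8, 5.1, 5.0, 4.9, 5.2]
volatile = [-10.0, 22.0, 5.0, -8.0, 16.0]

assert round(statistics.mean(steady), 1) == 5.0
assert round(statistics.mean(volatile), 1) == 5.0

# The larger standard deviation means far less confidence about
# where next year's return will land.
assert statistics.stdev(volatile) > statistics.stdev(steady)
```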
If we know that an observation of a particular kind of variable comes from the extreme end of the distribution of that variable, then it’s likely that additional observations are going to be less extreme. The student who gets the highest grade on the last exam is probably going to do very well indeed on the next exam, but isn’t likely to be the one who gets the highest grade. The ten stocks with the highest performance in a given industry last year are not likely to constitute the top ten this year. Extreme scores on any dimension are extreme because the stars aligned themselves just right (or just wrong). Those stars are probably not going to be in the same position next time around.
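Regression to the mean can be simulated in a few lines of Python; the abilities and luck distributions below are invented, but the pattern they produce is general.

```python
import random

random.seed(1)

def exam(ability):
    # Observed score = stable ability + luck on the day.
    return ability + random.gauss(0, 10)

# 200 hypothetical students, abilities centered on 70.
abilities = [random.gauss(70, 5) for _ in range(200)]
first  = [exam(a) for a in abilities]
second = [exam(a) for a in abilities]

# The ten stars of the first exam: partly able, partly lucky.
top10 = sorted(range(200), key=lambda i: first[i])[-10:]
mean_first  = sum(first[i]  for i in top10) / 10
mean_second = sum(second[i] for i in top10) / 10

# On the second exam they still do well, but less spectacularly:
# their ability repeats, their luck doesn't.
assert mean_second < mean_first
```

The stars that aligned for the top scorers were mostly luck, and luck does not carry over.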
Chp 8. Linked Up
Accurate assessment of relationships can be remarkably difficult. Even when the data are collected for us and summarized, we’re likely to guess wrongly about the degree of covariation. Confirmation bias is a particularly likely failing: if some As are Bs, that may be enough for us to say that A is associated with B. But an assessment of whether A is associated with B requires comparing two ratios from a fourfold table.
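The fourfold comparison can be made concrete with a few lines of Python; the counts below are hypothetical:

```python
# Confirmation bias: "some As are Bs" isn't evidence of association.
# A fourfold table of invented counts, say symptom (A) versus disease (B):

a_and_b      = 80    # A present, B present
a_not_b      = 20    # A present, B absent
not_a_b      = 400   # A absent, B present
not_a_not_b  = 100   # A absent, B absent

rate_with_a    = a_and_b / (a_and_b + a_not_b)         # 80 / 100 = 0.8
rate_without_a = not_a_b / (not_a_b + not_a_not_b)     # 400 / 500 = 0.8

# Many As are Bs (80 cases!), yet B is exactly as common without A:
assert rate_with_a == rate_without_a   # no association at all
```

Looking only at the upper-left cell (the 80 confirming cases) makes the association look strong; comparing the two ratios shows there is none.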
When we try to assess correlations for which we have no anticipations, as when we try to estimate the correlation between meaningless or arbitrarily paired events, the correlation must be very high for us to be sure of detecting it. Our covariation detection abilities are very poor for events separated in time by more than just a few minutes.
We’re susceptible to illusory correlations. When we try to assess the correlation between two events that are plausibly related to each other—for which we’re prepared to find a positive correlation—we’re likely to believe there is such a correlation even when there isn’t. When the events aren’t plausibly related, we’re likely to fail to see a positive correlation even when a relatively strong one exists. Worse—we’re capable of concluding there is a positive relationship when the real relationship is negative and capable of concluding there is a negative relationship when the real relationship is positive.
The representativeness heuristic underlies many of our prior assumptions about correlation. If A is similar to B in some respect, we’re likely to see a relationship between them. The availability heuristic can also play a role. If the occasions when A is associated with B are more memorable than occasions when it isn’t, we’re particularly likely to overestimate the strength of the relationship.
Correlation doesn’t establish causation, but if there’s a plausible reason why A might cause B, we readily assume that correlation does indeed establish causation. A correlation between A and B could be due to A causing B, B causing A, or something else causing both. We too often fail to consider these possibilities. Part of the problem here is that we don’t recognize how easy it is to “explain” correlations in causal terms.
Reliability refers to the degree to which a case gets the same score on two occasions or when measured by different means. Validity refers to the degree to which a measure predicts what it’s supposed to predict. There can be perfect reliability for a given measuring instrument but no validity for the instrument. Two astrologers can agree perfectly on the degree to which Pisces people are more extroverted than Geminis—and there most assuredly is no validity to such claims.
The more codable events are, the more likely it is that our assessments of correlation will be correct. For readily codable events such as those determined by ability, our assessment of correlations across two occasions can be quite accurate. And we recognize that the average of many events is a better predictor of the average of many other events of the same kind than measurement of a single event is for another single event—when the events in question are influenced by some ability. Even for abilities, though, the gain in predictability from a single observation to the average of many occasions tends to be substantially greater than we realize. Our assessments of the strength of relationships based on difficult-to-code events such as those related to personality can be wildly off the mark, and we show little or no recognition of the extent to which observations of many such events are a far better guide to future behavior than are observations of a few such events.
Caution and humility are called for when we try to predict future trait-related behavior from past trait-related behavior unless our sample of behavior is large and obtained in a variety of situations. Recognizing how difficult it is to code behavior of a particular kind may alert us to the possibility that our predictions about that kind of behavior are particularly susceptible to error. Reminding ourselves of the concept of the fundamental attribution error may help us to realize that we may be overgeneralizing.
Part IV: Experiments
Chp 9. Ignore the HiPPO
Assumptions tend to be wrong. And even when they aren’t, it’s silly to rely on them whenever it’s easy to test them. A/B testing is child-simple in principle: create a procedure you want to examine, generate a control condition, flip a coin to see who (or what) gets which treatment, and see what happens. A difference found using a randomized design establishes that something about the manipulation of the independent variable has a causal influence on the dependent variable. A difference found by using correlational methods can’t guarantee that the independent variable actually exerts an effect on the dependent variable.
Correlational designs are weak because the researcher hasn’t assigned the cases to their conditions: lots of homework versus little, radio ads versus circulars, high income versus low income. If you don’t randomly assign cases—people, or animals, or agricultural plots—to a condition, you invite on board all kinds of uncertainties. Cases at one level of the independent variable may differ from those at another level in any number of ways, some of which can be identified and some of which can’t. Any of the measured variables, or variables not measured or even conceived of, could be producing the effect rather than the independent variable of interest. And it might even be that the variable presumed to be dependent is actually producing differences in the variable presumed to be the independent one.
The greater the number of cases—people, agricultural plots, and so on—the greater the likelihood that you’ll find a real effect and the lower the likelihood that you will “find” an effect that isn’t there. If a difference is shown by a statistical test of some sort to be of such a magnitude that it would occur less than one time in twenty by chance, we say it’s significant at the .05 level. Without such a test, we often can’t know whether an effect should be considered real.
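One way to run such a test, sketched in Python with invented A/B conversion counts, is a permutation test: if the treatment did nothing, the group labels are arbitrary, so we shuffle them and ask how often chance alone produces a difference as large as the one observed.

```python
import random

random.seed(2)

# Hypothetical A/B test: 1,000 visitors randomly assigned to each page.
a = [1] * 120 + [0] * 880   # 12.0% conversion under page A
b = [1] * 90  + [0] * 910   #  9.0% conversion under page B
observed = sum(a) / len(a) - sum(b) / len(b)

# Shuffle the labels many times; count how often chance alone matches
# or beats the observed difference.
pool = a + b
extreme = 0
trials = 2000
for _ in range(trials):
    random.shuffle(pool)
    diff = sum(pool[:1000]) / 1000 - sum(pool[1000:]) / 1000
    if diff >= observed:
        extreme += 1

p_value = extreme / trials
assert p_value < 0.05   # "significant at the .05 level"
```

With these particular counts the shuffled labels rarely reproduce a 3-point gap, so the difference passes the conventional .05 bar; with smaller samples the same 3-point gap might not.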
When you assign each case to all of the possible treatments, your design is more sensitive. That is to say, a difference of a given magnitude found by a “within” design is more likely to be statistically significant than the same difference found by a “between” design. That’s because all the possible differences between any two cases have been controlled away, leaving only the treatment difference as the possible cause of the relationship.
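A small Python simulation, with invented baselines and effect sizes, shows why: each case carries a large idiosyncratic baseline, and comparing each case with itself subtracts that baseline away.

```python
import random
import statistics

random.seed(3)

# Each case has a large idiosyncratic baseline; the treatment adds 1.0.
baselines = [random.gauss(50, 10) for _ in range(30)]
control   = [b + random.gauss(0, 1) for b in baselines]
treated   = [b + 1.0 + random.gauss(0, 1) for b in baselines]

# Between-case comparison: the baseline noise swamps the small effect.
between_spread = statistics.stdev(control + treated)

# Within-case comparison: subtracting each case from itself removes
# the baseline, leaving the treatment effect plus a little noise.
diffs = [t - c for t, c in zip(treated, control)]
within_spread = statistics.stdev(diffs)

assert within_spread < between_spread
```

The same 1.0-point effect that is invisible against a 10-point spread between cases stands out clearly against the far smaller spread of within-case differences.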
It’s crucial to consider whether the cases you’re examining (people in the case of research on humans) could influence one another. Whenever one case might have influenced another, there’s a lack of statistical independence. N is the number of cases that can’t influence one another. Classroom A has an N not of the number of children in it but of just 1. (An exception would exist if influence could safely be considered to be minimal or nonexistent, such as when students take an exam in a room with cubicles where there is no talking.)
Chp 10. Experiments Natural and Experiments Proper
Sometimes we can observe relationships that come close to being as convincing as a genuine experiment. People whose childhoods were spent in circumstances that would have resulted in relatively great exposure to bacteria are less prone to some autoimmune diseases. When this is found across a large number of quite different circumstances—hygienic versus less hygienic countries, farms versus cities, pets versus no pets, vaginal versus Caesarian birth, and so on—the observations begin to be very suggestive. Such observations led scientists to conduct actual experiments that established that early exposure to bacteria does in fact reduce the likelihood of autoimmune diseases.
The randomized control experiment is frequently called the gold standard in scientific and medical research—with good reason. Results from such studies trump results from any and all other kinds of studies. Randomized assignment ensures that there are no systematic differences in any variable between experimental and control cases prior to the manipulation of the independent variable. Any difference found between them can usually be assumed to be due only to the scientist’s intervention. Double-blind randomized control experiments are those where neither the researcher nor the patient knows what condition the patient is in. This type of experiment establishes that only the intervention, and not something about the patients’ or doctors’ knowledge of the intervention, could have produced the results.
Society pays a high cost for experiments not carried out. Because of failure to carry out randomized experiments, we don’t know whether the $200 billion paid for Head Start was effective in improving cognitive abilities or not. Because of randomized control experiments, we do know that some high-quality pre-K programs are enormously effective, resulting in adults who function in much healthier and more effective ways. Proper experiments on pre-K techniques stand a chance of resulting in huge cost savings and great benefits to individuals and society. D.A.R.E. programs don’t produce less teen drug or alcohol use, Scared Straight programs result in more crime, not less, and grief counselors may be in the business of increasing grief rather than reducing it. Unfortunately, in many domains, society has no means of ensuring that interventions are always tested by experiment and no way of guaranteeing that public policy must take into account the results of experiments that are carried out.
Chp 11. Eekonomics
Multiple regression analysis (MRA) examines the association between an independent variable and a dependent variable, controlling for the association between the independent variable and other variables, as well as the association of those other variables with the dependent variable. The method can tell us about causality only if all possible causal influences have been identified and measured reliably and validly. In practice, these conditions are rarely met.
The fundamental problem with MRA, as with all correlational methods, is self-selection. The investigator doesn’t choose the value for the independent variable for each subject (or case). This means that any number of variables correlated with the independent variable of interest have been dragged along with it. In most cases, we will fail to identify all these variables. In the case of behavioral research, it’s normally certain that we can’t be confident that we’ve identified all the plausibly relevant variables.
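A Python sketch (all distributions invented) shows the problem in miniature: a hidden variable drives both the “independent” and dependent variables, and a strong correlation appears even though neither causes the other.

```python
import random

random.seed(4)

# A hidden common cause drives both x and y; x itself does nothing to y.
hidden = [random.gauss(0, 1) for _ in range(5000)]
x = [h + random.gauss(0, 1) for h in hidden]   # e.g., a health habit
y = [h + random.gauss(0, 1) for h in hidden]   # e.g., a health outcome

def corr(u, v):
    """Pearson correlation, computed from scratch."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    su = (sum((a - mu) ** 2 for a in u) / n) ** 0.5
    sv = (sum((b - mv) ** 2 for b in v) / n) ** 0.5
    return cov / (su * sv)

# x and y correlate strongly despite having no causal link at all.
# If "hidden" went unmeasured, a regression of y on x would credit
# x with an effect it doesn't have.
assert corr(x, y) > 0.3
```

If the analyst happens to measure the hidden variable, controlling for it removes the spurious association; the trouble is that in behavioral research we can never be sure we have measured all such variables.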
Despite the above facts, MRA has many uses. Sometimes it’s impossible to manipulate the independent variable. You can’t change someone’s age. Even when we have an experiment, it adds to our confidence to know that the experimentally demonstrated relationship holds in a natural ecology. And MRA is in general vastly cheaper than experiments, and it can identify relationships that would be important to examine experimentally.
When a competently conducted experiment tells you one thing about a given relationship and MRA tells you another, you normally must believe the experiment. Of course, a badly conducted experiment tells you no more than MRA, sometimes less.
A basic problem with MRA is that it typically assumes that the independent variables can be regarded as building blocks, with each variable taken by itself being logically independent of all the others. This is usually not the case, at least for behavioral data. Self-esteem and depression are intrinsically bound up with each other. It’s entirely artificial to ask whether one of those variables has an effect on a dependent variable independent of the effects of the other variable.
Just as correlation doesn’t prove causation, absence of correlation fails to prove absence of causation. False-negative findings can occur using MRA just as false-positive findings do—because of the hidden web of causation that we’ve failed to identify.
Chp 12. Don’t Ask, Can’t Tell
Verbal reports are susceptible to a huge range of distortions and errors. We have no file drawer in our heads out of which to pull attitudes. Attitude reports are influenced by question wording, by previously asked questions, by “priming” with incidental situational stimuli present at the time the question is asked. Attitudes, in other words, are often constructed on the fly and subject to any number of extraneous influences.
Answers to questions about attitudes are frequently based on tacit comparison with some reference group. If you ask me how conscientious I am, I will tell you how conscientious I am compared to other (absent-minded) professors, my wife, or members of some group who happen to be salient because they were around when you asked me the question.
Reports about the causes of our behavior, as you learned in Chapter 3 and were reminded of in this chapter, are susceptible to a host of errors and incidental influences. They’re frequently best regarded as readouts of theory, innocent of any “facts” uncovered by introspection.
Actions speak louder than words. Behavior is a better guide to understanding people’s attitudes and personalities than are verbal responses.
Conduct experiments on yourself. The same methodologies that psychologists use to study people can be used to study yourself. Casual observation can mislead us about what kinds of things influence a given outcome. Deliberately manipulating something, with the condition assigned at random, and systematically recording the results can tell you things about yourself with an accuracy unobtainable by simply living your life and casually observing its circumstances.
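The recipe above can be sketched in a few lines. This is a hypothetical self-experiment with simulated diary entries (the "walk improves mood" effect is built into the simulation, not a real finding): flip a coin each day to pick the condition, record the outcome, then check the difference against chance with a simple permutation test:

```python
import random

random.seed(2)

# Hypothetical question: does an afternoon walk improve a daily
# mood rating? Randomize the condition so mood can't drive the choice.
days = 60
conditions = [random.choice(["walk", "no_walk"]) for _ in range(days)]

def observed_mood(cond):
    # Simulated diary entry; in this toy world walks add about one point.
    base = random.gauss(6, 1)
    return base + (1.0 if cond == "walk" else 0.0)

moods = [observed_mood(c) for c in conditions]

a = [m for c, m in zip(conditions, moods) if c == "walk"]
b = [m for c, m in zip(conditions, moods) if c == "no_walk"]
diff = sum(a) / len(a) - sum(b) / len(b)

# Permutation test: how often does a random relabeling of the same
# diary produce a difference at least as large as the real one?
more_extreme = 0
for _ in range(2000):
    shuffled = moods[:]
    random.shuffle(shuffled)
    pa, pb = shuffled[:len(a)], shuffled[len(a):]
    if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= abs(diff):
        more_extreme += 1

print(f"mean difference {diff:.2f}, permutation p ~ {more_extreme / 2000:.3f}")
```

The coin flip is what separates this from "casually observing": it breaks any link between how you already feel and which condition you end up in that day.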
Part V: Thinking, Straight and Curved
Chp 13. Logic
Logic divests arguments of any references to the real world so that the formal structure of an argument can be laid bare without any interference from prior beliefs. Formal logic, contrary to the opinions of educators for twenty-six hundred years, doesn’t constitute the basis of everyday thought. It’s primarily a way of thinking that can catch some kinds of errors in reasoning.
The truth of a conclusion and the validity of the argument behind it are entirely separate things. A conclusion is valid only if it follows logically from its premises, but it may be true regardless of whether the premises are true or whether it follows from them. An inference need not be logically derivable from other premises, but it gains a stronger claim on our credence if it can be shown to have logical as well as empirical support.
Venn diagrams embody syllogistic reasoning and can be helpful or even necessary for solving some categorization problems.
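What a Venn diagram shows pictorially, set operations show computationally. A minimal sketch (toy membership lists, names purely illustrative) checks the classic syllogism "all humans are mortal; Socrates is a human; therefore Socrates is mortal":

```python
# The circles of a Venn diagram become sets; "all A are B" becomes A <= B.
mortals = {"Socrates", "Plato", "Fido"}
humans = {"Socrates", "Plato"}

assert humans <= mortals        # premise 1: all humans are mortal
assert "Socrates" in humans     # premise 2: Socrates is a human
print("Socrates" in mortals)    # conclusion is forced: prints True

# The invalid converse: "all A are B; x is B" does not put x in A.
# Fido is mortal, yet the premises say nothing about Fido being human.
print("Fido" in humans)         # prints False
```

The subset relation does the same work as nesting one circle inside another: once the "humans" circle sits inside the "mortals" circle, anything placed in the inner circle is automatically in the outer one.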
Errors in deductive reasoning are sometimes made because invalid argument forms map onto forms that are inductively plausible. That's part of the reason we're susceptible to making deduction errors.
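A truth-table check makes the contrast concrete. The sketch below (a standard propositional-logic exercise, not from the book) enumerates all truth assignments to show that modus ponens has no counterexample while the seductive fallacy of affirming the consequent does:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q false.
    return (not p) or q

# Valid form (modus ponens): from (p -> q) and p, q must follow.
modus_ponens_ok = all(
    q for p, q in product([False, True], repeat=2)
    if implies(p, q) and p
)

# Invalid form (affirming the consequent): from (p -> q) and q,
# p need not follow; find the assignments that break it.
counterexamples = [
    (p, q) for p, q in product([False, True], repeat=2)
    if implies(p, q) and q and not p
]

print(modus_ponens_ok)   # True: no counterexample exists
print(counterexamples)   # [(False, True)]: premises true, conclusion false
```

The fallacy feels compelling precisely because "q happened, and p would produce q" is often decent inductive evidence for p, even though it is not a deductive proof.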
Pragmatic reasoning schemas are abstract rules of reasoning that underlie much of thought. These include deontic rules such as the permission schema and the obligation schema. They also include many inductive schemas discussed in this book such as those for statistics, cost-benefit analysis, and reasoning in accord with sound methodological procedures. Pragmatic reasoning schemas are not as general as the rules of logic because they apply only in specific situations, but some of them rest on logical foundations. Others, such as Occam’s razor and the concept of emergence, are widely applicable but don’t rest on formal logic. Still others are merely empirical generalizations of great practical utility, such as the fundamental attribution error.
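The permission schema is usually illustrated with the drinking-age version of the Wason selection task. The sketch below (card labels invented for illustration) encodes the underlying logic: a card needs to be turned over only if some hidden value could make it violate the rule:

```python
# Permission rule: "If a person is drinking beer, then the person
# must be over 21." Each card shows one side; the other is hidden.
cards = ["beer", "soda", "age 25", "age 16"]

def could_violate(visible):
    # A violation is: drinking beer AND not over 21.
    if visible == "beer":
        return True     # hidden side might read "age 16"
    if visible == "age 16":
        return True     # hidden side might read "beer"
    return False        # "soda" and "age 25" can never violate the rule

print([c for c in cards if could_violate(c)])  # ['beer', 'age 16']
```

Framed abstractly ("if a card has a vowel, it has an even number"), most people fail this problem; framed as a permission rule, most people get it right, which is the evidence that we reason with pragmatic schemas rather than bare logical form.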
Chp 14. Dialectical Reasoning
Some of the fundamental principles underlying Western and Eastern thought are different. Western thought is analytic and emphasizes logical concepts of identity and insistence on noncontradiction; Eastern thought is holistic and encourages recognition of change and acceptance of contradiction.
Western thought encourages separation of form from content in order to assess validity of arguments. A consequence is that Westerners are spared some logical errors that Easterners make.
Eastern thought produces more accurate beliefs about some aspects of the world and the causes of human behavior than Western thought. Eastern thought prompts attention to the contextual factors influencing the behavior of objects and humans. It also prompts recognition of the likelihood of change in all kinds of processes and in individuals.
Westerners and Easterners respond in quite different ways to contradictions between two propositions. Westerners sometimes actually believe a strong proposition more when it is contradicted by a weak proposition than when encountering it by itself. Easterners may actually believe a weak proposition more when it is contradicted by a strong proposition than when encountering it by itself.
Eastern and Western approaches to history are very different. Eastern approaches emphasize context, preserve the order of events and emphasize the relations between them, and encourage empathy with historical figures. Western approaches tend to slight contextual factors, are less concerned about preservation of the sequence of events, and emphasize causal modeling of historical processes.
Western thought has been influenced substantially by Eastern thought in recent decades. Traditional Western propositional logic has been supplemented by dialectical principles. The two traditions of thought provide good platforms for critiquing each other. The virtues of logical thought seem more obvious in light of dialectical failings, and the virtues of dialectical thought appear more obvious in light of the limitations of logical thought.
Younger Japanese reason more wisely about social conflict than younger Americans do. But Americans gain in wisdom over their life span and Japanese do not. Japanese, and undoubtedly other East Asians, are taught how to avoid and resolve social conflict. Americans are taught less about it and so have more to gain as they grow older.
Part VI: Knowing the World
Chp 15. KISS and Tell
Explanations should be kept simple. They should call on as few concepts as possible, defined as simply as possible. Effects that are the same should be explained by the same cause.
Reductionism in the service of simplicity is a virtue; reductionism for its own sake can be a vice. Events should be explained at the most basic level possible. Unfortunately, there are probably no good rules that can tell us whether an effect is an epiphenomenon lacking causal significance or a phenomenon emerging from interactions among simpler events and having properties not explainable by those events.
We don’t realize how easy it is for us to generate plausible theories. The representativeness heuristic is a particularly fertile source of explanations: we are too inclined to assume that we have a causal explanation for an event if we can point to an event that resembles it. Once generated, hypotheses are given more credence than they deserve because we don’t realize we could have generated many different hypotheses with as little effort and knowledge.
Our approach to hypothesis testing is flawed in that we’re inclined to search only for evidence that would tend to confirm a theory while failing to search for evidence that would tend to disconfirm it. Moreover, when confronted with apparently disconfirming evidence we’re all too skillful at explaining it away.
A theorist who can’t specify what kind of evidence would be disconfirmatory should be distrusted. Theories that can’t be falsified can be believed, but with the recognition that they’re being taken on faith.
Falsifiability of a theory is only one virtue; confirmability is even more important. Contra Karl Popper, science—and the theories that guide our daily lives—change mostly by generating supporting evidence, not by discovering falsifying evidence.
We should be suspicious of theoretical contrivances that are proposed merely to handle apparently disconfirmatory evidence but are not intrinsic to the theory. Ad hoc, post hoc fixes to theories have to be suspect because they are too easy to generate and too transparently opportunistic.
Chp 16. Keeping It Real
Science is based not only on evidence and well-justified theories; faith and hunches may lead scientists to set aside established scientific hypotheses and agreed-upon facts. Several years ago, the literary agent John Brockman asked scores of scientists and public figures to tell him about something they believed but couldn't prove, and he published their responses in a book.4 In many instances, an individual's most important work was guided by hypotheses that could never be proved. As laypeople we have no choice but to do the same.
The paradigms that underlie a given body of scientific work, as well as those that form the basis for technologies, industries, and commercial enterprises, are subject to change without notice. These changes are often initially “underdetermined” by the evidence. Sometimes the new paradigm exists in uneasy partnership with the old, and sometimes it utterly replaces the old.
Different cultural practices and beliefs can produce different scientific theories, paradigms, and even forms of reasoning. The same is true for different business practices.
Quasi-rational practices by scientists, and cultural influences on belief systems and reasoning patterns, may have encouraged postmodernists and deconstructionists to press the view that there are no facts, only socially agreed-upon interpretations of reality. They clearly don’t live their lives as if they believed this, but they nevertheless expended a colossal amount of university teaching and “research” effort promulgating these nihilistic views. Did these teachings contribute to the rejection of scientific findings in favor of personal prejudices so common today?