Book Report: Mindware

Society pays dearly for all the experiments it could have conducted but didn’t. Hundreds of thousands of people have died, millions of crimes have been committed, and billions of dollars have been wasted because people have bulled ahead on their assumptions and created interventions without testing them before they were put into place. (Mindware)

Title: Mindware – Tools For Smart Thinking

Author: Richard E. Nisbett

Publisher: Doubleday

Publication Date: 2015

Origin/Intention: I want to be a good thinker, a good evaluator; someone who doesn’t succumb to bias, someone who carefully considers options and makes good decisions. That’s why I read Superforecasting, that’s why I read The HEAD Game, that’s why I read Thinking, Fast and Slow, and that’s why I read Mindware – for a crash course in how to think better. My intention is to come away with a larger and more developed set of tools that’ll help me improve my cognition.

Summary: Mindware is a crash course in becoming a better thinker.

Nisbett breaks the book down into six sections, each of which gives the reader a set of practical tools and important lessons to aid with cognition.

  • Part I: Thinking About Thought focuses on “thinking about the world and ourselves – how we do it, how we flub it, how to fix it, and how we can make far better use than we do of the dark matter of the mind, namely the unconscious”
  • Part II: The Formerly Dismal Science is “about choices – how classical economists think choices are made and how they think they ought to be made, and why modern behavioral economics provides descriptions of actual choice behavior and prescriptions for it that are better and more useful in some ways than those of classical economics”
  • Part III: Coding, Counting, Correlation, and Causality “is about how to make categorizations of the world more accurately, how to detect relationships among events, and just as important, how to avoid seeing relationships when they aren’t there”
  • Part IV: Experiments is “about causality: how to distinguish between cases where one event causes another and cases where events occur close together in time or place but aren’t causally related; how to identify the circumstances in which experiments – and only experiments – can make us confident that events are related causally”
  • Part V: Thinking, Straight and Curved examines “two very different types of reasoning. One of these, logic, is abstract and formal and has always been central to Western thought. The other, dialectical reasoning, consists of principles for deciding about the truth and practical utility of propositions about the world.”
  • Part VI: Knowing the World is “about what constitutes a good theory about some aspect of the world. How can we be sure that what we believe is actually true? Why is it that simpler explanations are normally more useful than more complicated ones? How can we avoid coming up with slipshod and overly facile theories? How can theories be verified, and why should we be skeptical of any assertion that can’t, at least in principle, be falsified?”

My Take: I found Mindware to be a valuable book: it reinforced quite a few things that I already knew (I’ve had a fair amount of statistical and scientific training, plus I’ve read about many of the topics covered) and introduced me to some new concepts, ideas, and techniques. Plus, it caused me to come to the conclusion that I put too much stock in multiple regression analysis, so I’ll have to look out for that in the future.

I’ve recommended Mindware to a few people who I think would appreciate its contents and benefit from or make use of its crash-course nature.

I also appreciated the little thought experiments and illustrative examples used throughout, as they really drove home some points. For instance, I’m reminded that I need to pay attention to larger contexts, rather than diving into details. Annoyingly, this point was reinforced on a recent vacation, when I visited a friend and noted some interesting artwork on his wall but failed to put the pieces together into a whole. ARGH!

Mindware isn’t a short read, and might overwhelm folks who don’t hold genuine interest in the subject, but it’s definitely a great way to decrease credulity, to increase skepticism, and to provide the tools we need to critically evaluate the overwhelming amount of information (and opinion masquerading as fact) that we encounter every day.

Read This Book If: …You want to be a better thinker.

Notes and Quotes

Introduction

“The key is learning how to frame events in such a way that the relevance of the principles to the solutions of particular problems is made clear, and learning how to code events in such a way that the principles can actually be applied to the events.”

  • p11: “The key is learning how to frame events in such a way that the relevance of the principles to the solutions of particular problems is made clear, and learning how to code events in such a way that the principles can actually be applied to the events.”

Part I: Thinking About Thought

“A full appreciation of the degree to which our understanding of the world is based on inferences makes it clear how important it is to improve the tools we use to make those inferences.”

  • p15: “Our understanding of the world is always a matter of construal – of inference and interpretation…A full appreciation of the degree to which our understanding of the world is based on inferences makes it clear how important it is to improve the tools we use to make those inferences.”
  • p23 talks about lots of ways to influence people via spreading activation, and includes many of the same examples as Drunk Tank Pink
  • p23: note to self
  • Here’s a handy manipulative tip from p24: “Want someone you’re just about to meet to find you to be warm and cuddly? Hand them a cup of coffee to hold. And don’t by any means make that an iced coffee.” Actually, on this note, perhaps in the future when I interview people I’ll make sure to hold an icy beverage, so as to avoid (or at least limit) unconscious impressions.
  • Oh hey, p25 explicitly references Drunk Tank Pink, so there ya go
  • p25, quite reminiscent of Words That Work: “Framing can also be a matter of choosing between warring labels. And those labels matter not just for how we think about things and how we behave towards them, but also for the performance of products in the marketplace and the outcome of public policy debates. Your ‘undocumented worker’ is my ‘illegal alien.’ Your ‘freedom fighter’ is my ‘terrorist.’ Your ‘inheritance tax’ is my ‘death tax.’ You are in favor of abortion because you regard it as a matter of exercising ‘choice.’ I am opposed because I am ‘pro-life.'”
  • p26, with potential implications for my fledgling consulting practice! “The effort heuristic encourages us to assume that projects that took a long time or cost a lot of money are more valuable than projects that didn’t require so much effort or time.”

“The best predictor of future behavior is past behavior. You’re rarely going to do better than that. Honesty in the future is best predicted by honesty in the past, not by whether a person looks you steadily in the eye or claims a recent religious conversion.”

  • p31: “The lesson here is one of the most powerful in all psychology. The best predictor of future behavior is past behavior. You’re rarely going to do better than that. Honesty in the future is best predicted by honesty in the past, not by whether a person looks you steadily in the eye or claims a recent religious conversion. Competence by an editor is best predicted by prior performance as an editor, or at least by competence as a writer, and not by how verbally clever a person seems or how large the person’s vocabulary is.”
  • p32, from the handy Summing Up section of Chapter 1:
    • Remember that all perceptions, judgments, and beliefs are inferences and not direct readouts of reality
    • Be aware that our schemas affect our construals
    • Remember that incidental, irrelevant perceptions and cognitions can affect our judgment and behavior – it will increase accuracy to try to encounter objects and people in as many different circumstances as possible if a judgment about them is important
    • Be alert to the possible role of heuristics in producing judgments
  • p34: “The failure to recognize the importance of contexts and situations and the consequent overestimation of the role of personal dispositions is, I believe, the most pervasive and consequential inferential mistake we make. The social psychologist Lee Ross has labeled this the fundamental attribution error.”
  • p35, in the same vein as Success and Luck, and also reminiscent of Matthew Syed’s story of how he became a table tennis champion, with a rare amount of access to facilities and top-level coaching: “There was not likely another teenager in the world who had the kind of access to computers that [Bill] Gates had. Behind many a successful person lies a string of lucky breaks that we have no inkling about.”
  • p39: “People can find it hard to penetrate beyond appearances and recognize the extent to which social roles affect behavior, even when the random basis of role assignment and the prerogatives of particular roles are made abundantly clear. And, of course, in everyday life it’s often less clear why people occupy the roles they do, so it can be very difficult to separate role demands and advantage from the intrinsic attributes of the occupant of the role.”
  • p39: “The fundamental attribution error gets us in trouble constantly. We trust people we ought not to, we avoid people who really are perfectly nice, we hire people who are not all that competent – all because we fail to recognize situational forces that may be operating on the person’s behavior. We consequently assume that future behavior will reflect the dispositions we infer from present behavior.”

“The fundamental attribution error gets us in trouble constantly. We trust people we ought not to, we avoid people who really are perfectly nice, we hire people who are not all that competent – all because we fail to recognize situational forces that may be operating on the person’s behavior. We consequently assume that future behavior will reflect the dispositions we infer from present behavior.”

  • p41: “We should choose our acquaintances carefully because we’re going to be highly influenced by them.”
  • This bit from p43 reminds me of a story a colleague relayed to me – he had conducted an interview and was at first bemused, but then infuriated, because the candidate kept mirroring his body language to a comical degree: “It’s not just attitudes and ideology that are influenced by other people. Engage in a conversation with someone in which you deliberately change your bodily position from time to time. Fold your arms for a couple of minutes. Shift most of your weight to one side. Put one hand in a pocket. Watch what your conversation partner does after each change and try not to giggle. ‘Ideomotor mimicry’ is something we engage in quite unconsciously. When people don’t do it, the encounter can become awkward and unsatisfying. But the participants won’t know what it is that went wrong. Instead: ‘She’s kind of a cold fish’; ‘we don’t share much in common.'”
  • p45: “It’s important to know that people generally think that their own behavior is largely a matter of responding sensibly to the situation they happen to be in – whether that behavior is admirable or abominable. We’re much less likely to recognize the situational factors other people are responding to, and we’re consequently much more likely to commit the fundamental attribution error when judging them – seeing dispositional factors as the main or sole explanation for their behavior.”
  • p47 has a couple of examples that showed me that I really need to put conscious effort into recognizing and paying attention to larger contexts, rather than immediately zooming in on specific attributes
  • p48 sums up Chapter 2:
    • Pay more attention to context. This will improve the odds that you’ll correctly identify situational factors that are influencing your behavior and that of others.
    • Realize that situational factors usually influence your behavior and that of others more than they seem to, whereas dispositional factors are usually less influential than they seem
    • Realize that other people think their behavior is more responsive to situational factors than you’re inclined to think – and they’re more likely to be right than you are
    • Recognize that people can change
  • p50: “As should be clear from the two chapters you’ve just read, a huge amount of what influences our judgments and our behavior operates under cover of darkness… Although it feels as if we have access to the inner workings of our minds, for the most part we don’t. But we’re quite agile in coming up with explanations for our judgments and behavior that bear little or no resemblance to the correct explanations.”
  • p55 with a neat point that reminded me of part of this surprisingly neat Nautilus article, Why the Emoji Was Inevitable : “Part of the reason conscious consideration of choices can lead us astray is that it tends to focus exclusively on features that can be verbally described. And typically those are only some of the most important features of objects. The unconscious considers what can’t be verbalized as well as what can, and as a result makes better choices.”

“Part of the reason conscious consideration of choices can lead us astray is that it tends to focus exclusively on features that can be verbally described. And typically those are only some of the most important features of objects. The unconscious considers what can’t be verbalized as well as what can, and as a result makes better choices.”

  • p61…I’m a big fan/advocate of mulling: “The most important thing to know about the unconscious is that it’s terrific at solving certain kinds of problems that the conscious mind handles poorly if at all. But although the unconscious mind can compose a symphony and solve a mathematical problem that’s been around for centuries, it can’t multiply 173 by 19. Ask yourself to figure that out as you drift off to sleep and see if the product pops into your mind while you’re brushing your teeth the next morning. It won’t.”
  • p64 sums up Chapter 3:
    • Don’t assume that you know why you think what you think or do what you do
    • Don’t assume that other people’s accounts of their reasons or motives are any more likely to be right than are your accounts of your own reasons or motives
    • You have to help the unconscious help you
  • p66: “The most important thing I have to tell you – in this whole book – is that you should never fail to take advantage of the free labor of the unconscious mind.”

“The most important thing I have to tell you – in this whole book – is that you should never fail to take advantage of the free labor of the unconscious mind.”

Part II: The Formerly Dismal Science

Calculations of the value of a human life are repellent and sometimes grossly misused, but they are often necessary nonetheless in order to make sensible policy decisions.

  • p72 starts a section that’s quite reminiscent of the analytic lessons in The HEAD Game
  • p75: “As I pointed out in the previous chapter, the unconscious needs all possible relevant information, and some of this information will be generated only by conscious processes. Consciously acquired information can then be added to unconscious information, and the unconscious will then calculate an answer that it delivers to the conscious mind. Do by all means perform your cost-benefit analysis for the decisions that really matter to you. And then throw it away.”

“Do by all means perform your cost-benefit analysis for the decisions that really matter to you. And then throw it away.”

  • p82 sums up Chapter 4:
    • Microeconomists are not agreed on just how it is that people make decisions or how they should make them
    • The more important and complicated the decision, the more important it is to do [cost-benefit] analysis. And the more important and complicated the decision is, the more sensible it is to throw the analysis away once it’s done.
    • Even an obviously flawed cost-benefit analysis can sometimes show in high relief what the decision must be
    • There is no fully adequate metric for costs and benefits, but it’s usually necessary to compare them anyway
    • Calculations of the value of a human life are repellent and sometimes grossly misused, but they are often necessary nonetheless in order to make sensible policy decisions
    • Tragedies of the commons, where my gain creates negative externalities for you, typically require binding and enforceable intervention
  • p86, on sunk costs: “Policy makers who are not economists often spend your money for no better reason than to rescue money they’ve already spent… That bad money is sunk. Even more sinister is the politician who urges continuing a war, putting more lives at risk, ‘so that the fallen shall not have died in vain.'”

“Policy makers who are not economists often spend your money for no better reason than to rescue money they’ve already spent… That bad money is sunk. Even more sinister is the politician who urges continuing a war, putting more lives at risk, ‘so that the fallen shall not have died in vain.'”

  • p93 sums up Chapter 5:
    • Expended resources that can’t be retrieved should not be allowed to influence a decision about whether to consume something that those resources were used to obtain
    • You should avoid engaging in an activity that has lower net benefit than some other action you could take now or in the future
    • Falling into the sunk cost trap always entails paying unnecessary opportunity costs
    • Attention to costs and benefits, including sunk cost and opportunity traps, pays
  • p96: “The performing arts presenters at my university made good use of the endowment effect in their promotional campaigns. Sending people a twenty-dollar voucher they can use for ticket purchase nets 70 percent more ticket sales than mailing them a letter with a promo code for a twenty-dollar discount. People don’t want to lose money by failing to cash in on the voucher they possess; but they’re willing to forgo the possible gain of using the promo code when they buy their tickets.”
  • p96: “Loss aversion produces inertia. Changing our behavior usually involves a cost of some kind.”
  • p97: “The biggest problem with loss aversion is that it prompts a status quo bias.”
  • p103, with a wide range of possible beneficial behavior modification applications (e.g., energy usage, voting turn-out, healthy habits, savings rates, etc.): “Knowledge that others are behaving better than one would be inclined to think they are is often much more effective than preaching – which can backfire by suggesting that bad practices are more widespread than they actually are.”
  • p105 sums up Chapter 6:
    • Loss considerations tend to loom too large relative to gain considerations
    • We’re overly susceptible to the endowment effect – valuing a thing more than we should simply because it’s ours
    • We’re a lazy species: we hang on to the status quo for no other reason than that it’s the way things are
    • Choice is way overrated. Too many choices can confuse and make decisions worse – or prevent needed decisions from being made. Offer your customers A or B or C. Not A through Z.

Part III: Coding, Counting, Correlation, and Causality

“I can’t stress enough how important it is to actually collect data in a systematic way and then carry out calculations in order to determine how strong the association is between two variables. Just living in the world and noticing things can leave you with a hopelessly wrong view about the association between two events. Illusory correlation is a real risk.”

  • p116-118 contains a section on The Interview Illusion, which really interests me. I’ve long considered (obsessed over?) how best to interview and evaluate candidates for job positions
  • p116, on people putting substantial weight on a short interview versus a larger body of work: “The problem here is that judgments about a person based on small samples of behavior are being allowed to weigh significantly against the balance of a much larger amount of evidence.”
  • p117, adding some numbers, and reminiscent of Daniel Kahneman’s experiences interviewing candidates for officer training in the Israeli military: “But predictions based on the half-hour interview have been shown to correlate less than .10 with performance ratings of undergraduate and graduate students, as well as with performance ratings for army officers, businesspeople, medical students, Peace Corps volunteers, and every other category of people that has ever been examined. That’s a pretty pathetic degree of prediction – not much better than a coin toss. It wouldn’t be so bad if people gave the interview as much weight as it deserves, which is little more than to let it be a tiebreaker, but people characteristically undermine the accuracy of their predictions by overweighting the value of the interview relative to the value of other, more substantial information.”

“To bring home the lesson of the interview data: Given a case where there is significant, presumably valuable, information about candidates for school or a job that can be obtained by looking at the folder, you are better off not interviewing candidates.”

  • p117: “To bring home the lesson of the interview data: Given a case where there is significant, presumably valuable, information about candidates for school or a job that can be obtained by looking at the folder, you are better off not interviewing candidates. If you could weight the interview as little as it deserves, that wouldn’t be true, but it’s almost impossible not to overweight it because we tend to be so unjustifiably confident that our observations give us very good information about a person’s abilities and traits.”
  • p117: “We ought to be thinking about the interview as a very small, fragmentary, and quite possibly biased sample of all the information that exists about the person. Think of the blind men and the elephant, and try to force yourself to believe you’re one of those blind men.”
  • p126 sums up Chapter 7:
    • Observations of objects or events should often be thought of as samples of a population
    • The fundamental attribution error is primarily due to our tendency to ignore situational factors, but this is compounded by our failure to recognize that a brief exposure to a person constitutes a small sample of a person’s behavior
    • Increasing sample size reduces error only if the sample is unbiased
    • The standard deviation is a handy measure of the dispersion of a continuous variable around the mean
    • If we know that an observation of a particular kind of variable comes from the extreme end of the distribution of that variable, then it’s likely that additional observations are going to be less extreme
  • p135: “I can’t stress enough how important it is to actually collect data in a systematic way and then carry out calculations in order to determine how strong the association is between two variables. Just living in the world and noticing things can leave you with a hopelessly wrong view about the association between two events. Illusory correlation is a real risk.”
  • Man, people are stubborn and dumb, p136: “In fact, virtually no response to any Rorschach card tells you anything at all about a person. Hundreds of thousands of hours and scores of millions of dollars were spent using the test before anyone bothered to see whether there was any actual association between responses and symptoms. And then for decades after the lack of association was established, the illusion of correlation kept the test in circulation, and more time and money were wasted.”
  • p144: “The most effective way to avoid making unjustifiably strong inferences about someone’s personality is to remind yourself that a person’s behavior can only be expected to be consistent from one occasion to another if the context is the same. And even then, many observations are necessary for you to have much confidence in your prediction.”
  • p145 sums up Chapter 8:
    • Accurate assessment of relationships can be remarkably difficult
    • When we try to assess correlations for which we have no anticipations, as when we try to estimate the correlation between meaningless or arbitrarily paired events, the correlation must be very high for us to be sure of detecting it
    • We’re susceptible to illusory correlations
    • The representativeness heuristic underlies many of our prior assumptions about correlation
    • Correlation doesn’t establish causation, but if there’s a plausible reason why A might cause B, we readily assume that correlation does indeed establish causation
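Nisbett’s point on p135 – that you have to actually collect data and calculate, because casual observation breeds illusory correlations – can be sketched in a few lines. The data here are entirely hypothetical, just to show the mechanics of turning systematic observations into a number:

```python
# Minimal sketch: measure the association between two variables from
# systematically collected paired data, instead of trusting impressions.
# The numbers below are invented purely for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical paired observations, e.g. study hours vs. exam score
hours = [1, 3, 2, 5, 4, 6, 2, 7]
scores = [2, 3, 2, 4, 5, 5, 3, 6]
r = pearson_r(hours, scores)
```

Even a toy calculation like this beats “just living in the world and noticing things”: the number can surprise you in a way a vague impression never will.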

Part IV: Experiments

“Society pays dearly for all the experiments it could have conducted but didn’t. Hundreds of thousands of people have died, millions of crimes have been committed, and billions of dollars have been wasted because people have bulled ahead on their assumptions and created interventions without testing them before they were put into place.”

  • I love this Will Durant quote from p147: “Inquiry is fatal to certainty.”
  • p148 touches on what’s at stake: “Society pays dearly for all the experiments it could have conducted but didn’t. Hundreds of thousands of people have died, millions of crimes have been committed, and billions of dollars have been wasted because people have bulled ahead on their assumptions and created interventions without testing them before they were put into place.”
  • p150, on data-driven decisions in web design and the 2008 Obama campaign: “Instead of basing decisions about web design on HiPPOs – the derisive term for the ‘highest-paid person’s opinion’ – they were acting on incontrovertible facts about what worked best.”
  • p150, in a section on A/B testing, has this brief introduction before a number of neat examples that pertain to election campaigns: “Website designers have learned what social psychologists discovered decades ago about their intuitions concerning human behavior in novel situations. As Siroker puts it, ‘Assumptions tend to be wrong.'”
  • Here’s one such example that seems worth remembering: “There is now a large body of research on what works for getting out the vote. Which is more effective at getting people to the polls: telling people that turnout is expected to be light or that turnout is expected to be heavy? You might think that telling people that voting is going to be light would make them more likely to vote. A quick cost-benefit analysis shows that your vote would count for more than if turnout was heavy. But remember how susceptible people are to social influence. They want to do what other people like them are doing. If most are drinking a lot, they’ll go along; if they’re not drinking a lot, they’ll cut back. If most people are reusing their towels in their hotel room, so will they. And so telling voters there will be heavy turnout in their precinct turns out to be much more effective than saying there will be light turnout.”
  • p156 sums up Chapter 9:
    • Assumptions tend to be wrong. And even if they weren’t, it’s silly to rely on them whenever it’s easy to test them.
    • Correlational designs are weak because the researcher hasn’t assigned the cases to their condition
    • The greater the number of cases – people, agricultural plots, and so on – the greater the likelihood that you’ll find a real effect and the lower the likelihood that you will “find” an effect that isn’t there
    • When you assign each case to all of the possible treatments, your design is more sensitive
    • It’s crucial to consider whether the cases you’re examining (people in the case of research on humans) could influence one another. Whenever one case might have influenced another, there’s a lack of statistical independence. N is the number of cases that can’t influence one another.
  • p159: Chapter 10 promises to illustrate “…how disastrous it can be when societies decide to rely on assumptions about the effects of interventions rather than conducting experiments about their effects.”…and it does, covering everything from the side effects of our ultra-hygienic society to well-intentioned but ineffective government programs like Scared Straight, D.A.R.E., Head Start, grief counseling, and others.
  • Here’s one infuriating example from p168 that’s familiar to anyone who’s read Black Box Thinking: “A study commissioned by the Washington State Institute for Public Policy estimated that every dollar spent on Scared Straight incurs crime and incarceration costs of more than two hundred dollars.” Note that it says “incurs”, not “prevents”. The explanation continues, “Why doesn’t Scared Straight work? It certainly seems to me that it should. We don’t know why it doesn’t work, and we certainly don’t know why it should be counterproductive, but that doesn’t matter. It’s a tragedy that it was invented and a crime that it hasn’t been stopped. Why hasn’t it been stopped? I’ll venture the guess that it just seems so obvious that it should work. Many people, including many politicians, prefer to trust their intuitively compelling causal hypotheses over scientific data.”
  • p169: note to self
  • p169 touches on the social and economic cost of ignorance, stubbornness, and assumptions: “Meanwhile, programs that damage young people are being conducted, and programs that help are underused or used not at all. Society is paying a high price in dollars and human suffering for wrong assumptions.”

“Programs that damage young people are being conducted, and programs that help are underused or used not at all. Society is paying a high price in dollars and human suffering for wrong assumptions.”
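The get-out-the-vote example above is an A/B test at heart: randomly assign two messages, count outcomes, compare. Here’s a minimal simulation of that setup – the “true” turnout rates are invented for illustration only, not figures from the book:

```python
import random

# Minimal A/B-test sketch of the get-out-the-vote example.
# The underlying turnout rates are hypothetical, chosen to mirror the
# finding that the "heavy turnout" message works better.
random.seed(0)

def run_experiment(n_per_group, rate_heavy=0.45, rate_light=0.38):
    """Randomly assign n subjects to each message and simulate who votes."""
    heavy = sum(random.random() < rate_heavy for _ in range(n_per_group))
    light = sum(random.random() < rate_light for _ in range(n_per_group))
    return heavy / n_per_group, light / n_per_group

p_heavy, p_light = run_experiment(10_000)
```

The point isn’t the arithmetic; it’s that random assignment is what licenses the causal reading of the gap between the two proportions – no HiPPO required.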

  • p169 sums up Chapter 10:
    • Sometimes we can observe relationships that come close to being as convincing as a genuine experiment
    • The randomized control experiment is frequently called the gold standard in scientific and medical research – with good reason
    • Society pays a high cost for experiments not carried out
  • p170: note to self
  • p171: “Assumptions are so often wrong when it comes to human behavior that it’s essential to conduct experiments if at all possible to test any hypothesis about behavior that matters.”
  • p175: note to self
  • p175: “In many instances, Multiple Regression Analysis gives one impression about causality, and actual randomized control experiments give another. In such cases we should believe in the results of the experiments.”
  • p177 (I’ve probably put too much stock in MRA studies, myself, without truly understanding/appreciating their weaknesses): “The problem with correlational studies such as those based on MRA is that they are by definition susceptible to errors based on self-selection.”
  • p179…like, how is this shit legal? Oh, there’s the answer. “There is almost no evidence one way or another about the effects of any of the other fifty thousand or so food supplements that are on the market. Most of the evidence we do have about any given supplement indicates that it’s useless; some indicates it’s actually harmful. Unfortunately, lobbying by the supplement industry resulted in Congress exempting supplements from federal regulation, including any requirement that manufacturers do experimental studies of actual effectiveness. As a consequence, billions of dollars are spent every year on nostrums that are useless or worse.”
  • p179: note to self
  • p182 introduced to me a neat French term: “Why might Fryer and Levitt have been willing to assume that an MRA study could be powerful and accurate enough to cast doubt on the implications of experimental studies? I suspect it’s because of what the French call déformation professionnelle – the tendency to adopt the tools and point of view of people who share one’s profession. For many of the types of research that economists do, MRA is the only available option.”
  • This chunk from p184 is just a great example…especially the presumption behind the ethical objection to experimental scrutiny: “Some eminent economists don’t seem to recognize the value of experiments at all. The economist Jeffrey Sachs started an extremely ambitious program of health, agricultural, and educational interventions in a small number of African villages, with the intent of improving quality of life. The program’s cost is very high relative to alternatives, and it has been severely criticized by other development experts. Though some of Sachs’s villages improved their residents’ conditions, similar villages in Africa improved more without his intervention. Sachs could have ended the criticism by randomly assigning similar villages to his treatment condition versus no treatment and showing that his villages made better progress than those control villages. Sachs has refused to conduct this experiment on what he described as ‘ethical grounds.’ What’s unethical is not to conduct experiments when they’re feasible. Sachs spent a great deal of other people’s money, but we have no idea whether that money improved people’s lives more than alternative, possibly less expensive programs would have.”

“What’s unethical is not to conduct experiments when they’re feasible.”

  • p187: “Many of my fellow psychologists are going to be distressed by my bottom line here: such questions as whether academic success is affected by self-esteem, controlling for depression, or whether the popularity of fraternity brothers is affected by extroversion, controlling for neuroticism, or whether the number of hugs a person receives per day confers resistance to infection, controlling for age, educational attainment, frequency of social interaction, and a dozen other variables, are not answerable by MRA. What nature hath joined together, multiple regression analysis cannot put asunder.”
  • p187: “Correlation doesn’t prove causation. But the problem with correlational studies is worse than that. Lack of correlation doesn’t prove lack of causation – and this mistake is made possibly as often as the converse error.”

“Correlation doesn’t prove causation. But the problem with correlational studies is worse than that. Lack of correlation doesn’t prove lack of causation – and this mistake is made possibly as often as the converse error.”

  • p189 sums up Chapter 11:
    • Multiple regression analysis (MRA) examines the association between an independent variable and a dependent variable… The method can tell us about causality only if all possible causal influences have been identified and measured reliably and validly. In practice, these conditions are rarely met.
    • The fundamental problem with MRA, as with all correlational methods, is self-selection
    • Despite the above facts, MRA has many uses
    • When a competently conducted experiment tells you one thing about a given relationship and MRA tells you another, you normally must believe the experiment
    • A basic problem with MRA is that it typically assumes that all the independent variables can be regarded as building blocks, with each variable taken by itself being logically independent of all the others. This is usually not the case, at least for behavioral data.
    • Just as correlation doesn’t prove causation, absence of correlation fails to prove absence of causation
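Nisbett’s point about self-selection is easy to see in a quick simulation. This sketch is mine, not the book’s, and the variable names (like “motivation”) are purely illustrative: the treatment has zero true effect, but a hidden trait drives both who opts into the treatment and the outcome, so the naive observational estimate shows a big effect while random assignment correctly finds nothing.

```python
import random

random.seed(0)

def slope(x, y):
    """OLS slope of y on x: cov(x, y) / var(x). For a binary x this is
    just the difference between the treated and untreated group means."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

N = 20000
# Hidden confounder; the treatment itself does nothing in this simulation.
motivation = [random.gauss(0, 1) for _ in range(N)]

# Observational study: highly motivated people tend to opt in.
chose = [1 if m + random.gauss(0, 1) > 0 else 0 for m in motivation]
outcome_obs = [m + random.gauss(0, 1) for m in motivation]

# Randomized experiment: a coin flip decides who gets the treatment.
assigned = [random.randint(0, 1) for _ in range(N)]
outcome_rct = [m + random.gauss(0, 1) for m in motivation]

print(f"observational estimate: {slope(chose, outcome_obs):+.3f}")    # large and spurious
print(f"randomized estimate:    {slope(assigned, outcome_rct):+.3f}")  # near zero, correct
```

Regressing on more measured covariates can’t rescue the observational estimate unless the confounder itself is measured reliably and validly, which is exactly the condition Nisbett says is rarely met in practice.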
  • p191, introducing Chapter 12: “This chapter will show you how a variety of behavioral measures can provide much more trustworthy answers to questions about people’s attributes and states than their verbal reports can.”
  • p194, but I’m sure there’s no way unscrupulous surveyors would take advantage… “The truth is, the answer to just about every question concerning attitudes and behavior can be pushed around – often by things that seem utterly fortuitous or silly.”

“The truth is, the answer to just about every question concerning attitudes and behavior can be pushed around – often by things that seem utterly fortuitous or silly.”

  • p195: “Many attitudes are extremely context-dependent and constructed on the fly. Change the context and you change the expressed attitude. Sadly, even trivial-seeming circumstances such as question wording, the type and number of answer categories used, and the nature of the preceding questions are among the contextual factors that can profoundly affect people’s reports of their opinions. Even reports about attitudes of high personal or social importance can be quite mutable.”
  • p201 (applies very well to politicians): “The take-home lesson of this section: whenever possible don’t listen too much to people talk the talk, watch them walk the walk.”
  • p201: “In the great chain of investigation strategies, true experiments beat natural experiments, which beat correlational studies (including multiple regression analyses), which, any day, beat assumptions and Man Who statistics. Failure to use the best scientific methodology can have big costs – for individuals, for institutions, and for nations.”
  • p203 sums up Chapter 12:
    • Verbal reports are susceptible to a huge range of distortions and errors
    • Answers to questions about attitudes are frequently based on tacit comparison with some reference group
    • Reports about the causes of our behavior…are susceptible to a host of errors and incidental influences
    • Actions speak louder than words
    • Conduct experiments on yourself

Part V: Thinking, Straight and Curved

The truth of a conclusion and the validity of a conclusion are entirely separate things.

  • p221 sums up Chapter 13 (that’s right, I didn’t have any other notes!)
    • Logic divests arguments of any references to the real world so that the formal structure of an argument can be laid bare without any interference from prior beliefs
    • The truth of a conclusion and the validity of a conclusion are entirely separate things
    • Venn diagrams embody syllogistic reasoning and can be helpful or even necessary for solving some categorization problems
    • Errors in deductive reasoning are sometimes made because they map onto argument forms that are inductively valid
    • Pragmatic reasoning schemas are abstract rules of reasoning that underlie much of thought
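The validity-versus-truth distinction can even be checked mechanically. Since categorical statements like “All A are B” are just subset claims (which is why Venn diagrams work), a brute-force search over small set assignments tells you whether an argument form is valid: valid means no assignment makes all the premises true and the conclusion false. This sketch is mine, not the book’s:

```python
from itertools import product

def subsets(universe):
    """Yield every subset of a small universe as a frozenset."""
    items = list(universe)
    for bits in product([0, 1], repeat=len(items)):
        yield frozenset(x for x, b in zip(items, bits) if b)

def is_valid(premises, conclusion, universe=(1, 2, 3)):
    """A form is valid iff every assignment of sets A, B, C that makes
    all premises true also makes the conclusion true."""
    for A, B, C in product(subsets(universe), repeat=3):
        if all(p(A, B, C) for p in premises) and not conclusion(A, B, C):
            return False  # counterexample: true premises, false conclusion
    return True

# Barbara: All A are B; All B are C; therefore All A are C. Valid.
barbara = is_valid(
    [lambda A, B, C: A <= B, lambda A, B, C: B <= C],
    lambda A, B, C: A <= C,
)

# Undistributed middle: All A are B; All C are B; therefore All A are C.
# Invalid (e.g. A = {1}, B = {1, 2}, C = {2} makes the premises true
# and the conclusion false).
undistributed = is_valid(
    [lambda A, B, C: A <= B, lambda A, B, C: C <= B],
    lambda A, B, C: A <= C,
)

print(barbara, undistributed)  # True False
```

Note that the check never asks whether any particular premise is true of the real world; validity is a property of the argument’s form alone, which is exactly the point of divesting arguments of real-world content.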
  • p224, with something to keep in mind to avoid feeling all superior about logic: “So how did the Chinese manage to make great progress in mathematics, as well as invent hundreds of important things that the West invented much later or not at all, if they lacked a tradition of logic? We’re forced to acknowledge that a civilization can make enormous strides without ever paying much attention to formal logic. This is true not only of China but of all cultures in East Asia with roots in the Confucian tradition, including Japan and Korea.”
  • p228: “So Western thought can get things wrong in its rush to stamp out a seeming contradiction rather than entertaining the possibility that both propositions might have some truth. Eastern thought can get things wrong by finding weak propositions more plausible when contradicted because of an attempt to bolster a weak proposition in order to split the difference with a contradictory but stronger argument. The logical and dialectical systems of thought have a lot to learn from each other: each gets some things right that the other gets wrong.”
  • p241 sums up Chapter 14
    • Some of the fundamental principles underlying Western and Eastern thought are different
    • Western thought encourages separation of form from content in order to assess validity of arguments
    • Eastern thought produces more accurate beliefs about some aspects of the world and the causes of human behavior than Western thought
    • Westerners and Easterners respond in quite different ways to contradictions between two propositions
    • Eastern and Western approaches to history are different
    • Western thought has been influenced substantially by Eastern thought in recent decades
    • Reasoning about social conflict by younger Japanese is wiser than that of younger Americans. But Americans gain in wisdom over their life span and Japanese do not

Part VI: Knowing the World

We aren’t obligated to pay attention to theories that are untestable. Which isn’t to say we’re not allowed to believe untestable theories – just that we need to recognize their weakness compared to theories that are. I can believe anything I want about the world, but you have to reckon with it only if I provide evidence for it or an air-tight logical derivation.

  • p248 has a saying attributed to Google: “Done is better than perfect.”
  • p251: “We don’t realize how easy it is for us to explain away evidence that would seem on the surface to contradict our hypotheses. And we fail to generate tests of a hypothesis that could falsify the hypothesis if in fact the hypothesis is wrong. This is one type of confirmation bias.”
  • p255: “We aren’t obligated to pay attention to theories that are untestable. Which isn’t to say we’re not allowed to believe untestable theories – just that we need to recognize their weakness compared to theories that are. I can believe anything I want about the world, but you have to reckon with it only if I provide evidence for it or an air-tight logical derivation.”
  • p256 (see the example in my confirmation bias link, above): “Many of the theories we come up with in everyday life are utterly lacking in constraints. They’re cheap and lazy, tested if at all by searching only for confirmatory evidence, and too readily salvaged in the face of contradictory evidence.”
  • p260: “There is an asymmetry: empirical generalizations can be refuted but can’t be proved true because they always rest on inductive evidence that could be refuted at any moment by an exception.”
  • p263 sums up Chapter 15
    • Explanations should be kept simple
    • Reductionism in the service of simplicity is a virtue; reductionism for its own sake can be a vice
    • We don’t realize how easy it is for us to generate plausible theories
    • Our approach to hypothesis testing is flawed in that we’re inclined to search only for evidence that would tend to confirm a theory while failing to search for evidence that would tend to disconfirm it
    • A theorist who can’t specify what kind of evidence would be disconfirmatory should be distrusted
    • Falsifiability of a theory is only one virtue; confirmability is even more important
    • We should be suspicious of theoretical contrivances that are proposed merely to handle apparently disconfirmatory evidence but are not intrinsic to the theory
  • p273 sums up Chapter 16
    • Science is based not only on evidence and well-justified theories – faith and hunches may cause scientists to ignore established scientific hypotheses and agreed-upon facts
    • The paradigms that underlie a given body of scientific work, as well as those that form the basis for technologies, industries, and commercial enterprises, are subject to change without notice
    • Different cultural practices and beliefs can produce different scientific theories, paradigms, and even forms of reasoning
    • Quasi-rational practices by scientists, and cultural influences on belief systems and reasoning patterns, may have encouraged postmodernists and deconstructionists to press the view that there are no facts, only socially agreed-upon interpretations of reality. They clearly don’t live their lives as if they believed this, but they nevertheless expended a colossal amount of university teaching and “research” effort promulgating these nihilistic views. Did these teachings contribute to the rejection of scientific findings in favor of personal prejudices so common today?
  • p279 shares this interesting anecdote from the author’s past: “Many years ago I attended a psychology department talk by someone who billed himself as a computer scientist. Not many people used that job title in those days. The speaker began by announcing, ‘I am going to deal with the question of what it might mean to humans’ conceptions of themselves if one day computers could beat any international chess master, write a better novel or symphony than any human, and solve fundamental questions about the nature of the world that have stumped the greatest intellects in history.’ His next utterance produced an audible gasp from the audience. ‘I want to make two things clear at the outset. First, I don’t know whether computers will ever be able to do those things. Second, I’m the only person in the room with a right to an opinion on the question.’ The second sentence has rung in my ears ever since that day. The speaker shocked me into the habit of subjecting other people’s claims – and my own – to the expertise test.”
  • p282 closes the book off, and includes these recommendations “for how to approach the question of expert opinion about matters that are important to you or society as a whole. 1. Try to find out whether there is such a thing as expertise about the question. There is no expertise about astrology. 2. If there is such a thing as expertise, try to find out whether there is a consensus among the experts. 3. If there is a consensus, then the stronger that consensus seems to be, the less choice you have about whether to accept it.”

“I want to make two things clear at the outset. First, I don’t know whether computers will ever be able to do those things. Second, I’m the only person in the room with a right to an opinion on the question.”

Lee Brooks is the founder of Cromulent Marketing, a boutique marketing agency specializing in crafting messaging, creating content, and managing public relations for B2B technology companies.
