
Some actions seem illogical and beyond our comprehension. We often find ourselves in heated debates about what makes sense to us. But amid this perplexity, have we ever stopped to ask whether people from vastly different social and cultural backgrounds share our perspectives? Have we ever considered that our own future selves might look back at today’s reasoning and conclude that it makes no sense?

Dr. Duncan J. Watts takes us on a captivating journey through the fascinating world of common sense in his book, “Everything Is Obvious: Once You Know The Answer.” Common sense is an integral part of our daily lives and a double-edged sword: while it guides our decisions, it can also cloud our judgment when crucial choices loom before us.

Dr. Watts skillfully unveils common sense as a stealthy element in the vast landscape of our minds. Despite its profound influence over our decision-making, we often remain ignorant of its intricacies. This book urges us to challenge the status quo, encouraging us to step beyond the boundaries of what we think we know.

Through thought-provoking insights, Dr. Watts reveals how our perceptions are shaped by our unique cultural lens. The book serves as a reminder that true understanding lies in embracing these differences rather than dismissing them.

In a world where common sense often rules, Dr. Watts argues that sometimes what we need is the extraordinary, and the extraordinary lies in uncommon sense.

This book challenges readers to reevaluate their faith in common sense. So, if you’re ready to embark on a mind-bending exploration of the complexities of human thought and behavior, this book is a must-read. Prepare to be enlightened, inspired, and perhaps a little unsettled as you discover a world beyond your own perspective.

Summary

The myth of common sense

Common sense is not so much a worldview as a grab bag of logically inconsistent, often contradictory beliefs, each of which seems right at the time but carries no guarantee of being right any other time.

  • Common sense exhibits some mysterious quirks, one of the most striking of which is how much it varies over time, and across cultures.
  • Common sense is “common” only to the extent that two people share sufficiently similar social and cultural experiences.
  • Disagreements over matters of common sense are hard to resolve because it’s unclear to either side on what grounds one can even conduct a reasonable argument.
  • Once we start to examine our own beliefs, it becomes increasingly unclear even how the various beliefs we espouse at any given time fit together.
  • We have the impression that our particular beliefs are all derived from some overarching philosophy, but the reality is that we arrive at them quite independently, often haphazardly.

Plans fail, in other words, not because planners ignore common sense, but rather because they rely on their own common sense to reason about the behavior of people who are different from them.

Duncan J. Watts, Everything Is Obvious: Once You Know The Answer.

Two features of common sense

Carl Taylor, addressing the American Sociological Society in Chicago in 1946, identified the defining features of common sense:

  1. Common sense is overwhelmingly practical, meaning that it is more concerned with providing answers to questions than with worrying about how it came by those answers.
  2. The power of common sense lies in its ability to deal with every concrete situation. Common sense just “knows” what the appropriate thing to do is in any particular situation, without knowing how it knows it.

The misuse of common sense

  • We are using it to reason about how other people behaved—or will behave—in circumstances about which we have at best an incomplete understanding. At some level, we understand that the world is complicated, and that everything is somehow connected to everything else. But when we read some story, we don’t try to understand how all these different problems fit together. We just focus on the one little piece of the huge underlying tapestry of the world that’s being presented to us at that moment, and form our opinion accordingly.
  • Everyone will have his or her own views, and these views will be logically inconsistent or even contradictory. Some may believe that people are poor because they lack certain necessary values of hard work and thrift, while others may think they are genetically inferior, and others still may attribute their lack of wealth to lack of opportunities, inferior systems of social support, or other environmental factors. All these beliefs lead to different proposed solutions, not all of which can be right. Yet policy makers empowered to enact sweeping plans that will affect thousands or millions of people are no less tempted to trust their intuition about the causes of poverty than ordinary citizens reading the newspaper.

Too much intuition

We accept that if we want to learn how the world works, we need to test our theories with careful observations and experiments, and then trust the data no matter what our intuition says. And as laborious as it can be, the scientific method is responsible for essentially all the gains in understanding the natural world that humanity has made over the past few centuries. But when it comes to the human world, where our unaided intuition is so much better than it is in physics, we rarely feel the need to use the scientific method.

How common sense fails us

The combination of intuition, experience, and received wisdom on which we rely to generate common sense explanations of the social world also disguises certain errors of reasoning that are every bit as systematic and pervasive as the errors of common sense physics.

  • First type of error: our mental model of individual behavior is systematically flawed.

When we think about why people do what they do, we invariably focus on factors like incentives, motivations, and beliefs, of which we are consciously aware. As sensible as this sounds, decades of research in psychology and cognitive science have shown that this view of human behavior encompasses just the tip of the proverbial iceberg. Trivial or seemingly irrelevant factors also turn out to matter. The result is that no matter how carefully we try to put ourselves in someone else’s shoes, we are likely to make serious mistakes when predicting how they’ll behave anywhere outside of the immediate here and now.

  • Second type of error: Our mental model of collective behavior is even worse.

The basic problem is that whenever people get together in groups, they interact with one another, sharing information, spreading rumors, passing along recommendations, comparing themselves to one another, and rewarding and punishing each other’s behaviors → these influences pile up in unexpected ways → generating collective behavior that is “emergent” in the sense that it cannot be understood solely in terms of its component parts → sometimes we invoke fictitious “representative individuals” like “the crowd” or “the workers”, whose actions stand in for the actions and interactions of the many. And sometimes we single out “special people”, like “leaders”, “visionaries”, or “influencers” → the result: our explanations of collective behavior paper over most of what is actually happening.

  • Third type of error: we learn less from history than we think we do, and this misperception in turn skews our perception of the future.

Because we seek explanations of events only after the fact, our explanations place far too much emphasis on what actually happened relative to what might have happened but didn’t. Moreover, because we only try to explain events that strike us as sufficiently interesting, our explanations account for only a tiny fraction even of the things that do happen. The result is that what appear to us to be causal explanations are in fact just stories: descriptions of what happened that tell us little, if anything, about the mechanisms at work. Nevertheless, because these stories have the form of causal explanations, we treat them as if they have predictive power. In this way, we deceive ourselves into believing that we can make predictions that are impossible, even in principle.

Common sense often works just like mythology.

Common sense explanations give us the confidence to navigate from day to day and relieve us of the burden of worrying about whether what we think we know is really true, or is just something we happen to believe → the cost is that we think we have understood things that in fact we have simply papered over with a plausible-sounding story. And because this illusion of understanding in turn undercuts our motivation to treat social problems the way we treat problems in medicine, engineering, and science, the unfortunate result is that common sense actually inhibits our understanding of the world.

Common sense and rationality

  • Rational choice theory has dramatically expanded the scope of what is considered rational behavior to include not just self-interested economic behavior but also more realistic social and political behavior.
    • All such theories tend to include variations on two fundamental insights:
      • First, people have preferences for some outcomes over others.
      • Second, given these preferences, they select among the means available to them as best they can to realize the outcomes that they prefer.
    • The implication: all human behavior can be understood in terms of individuals’ attempts to satisfy their preferences.

In Freakonomics, Steven Levitt and Stephen Dubner illustrate the explanatory power of rational choice theory:

  • If we want to understand why people do what they do, we must understand the incentives that they face and hence their preference for one outcome vs another. When someone does something that seems strange or puzzling to us, rather than writing them off as crazy or irrational, we should instead seek to analyze their situation in hopes of finding a rational incentive.

Thinking is about more than thought

  • The implicit assumption that people are rational until proven otherwise is a hopeful, even enlightened, one that in general ought to be encouraged.
  • Rationalizing human behavior is precisely an exercise in simulating, in our mind’s eye, what it would be like to be the person whose behavior we are trying to understand. Only when we can imagine this simulated version of ourselves responding in the manner of the individual in question do we really feel that we have understood the behavior in question.
  • Our mental simulations have a tendency to ignore certain types of factors that turn out to be important. The reason is that when we think about how we think, we instinctively emphasize consciously accessible costs and benefits, such as those associated with motivations, preferences, and beliefs.
  • An individual’s choices and behavior can be influenced by “priming” them with particular words, sounds, or other stimuli.
  • Our responses can also be skewed by the presence of irrelevant numerical information. This is called the anchoring effect → it affects all sorts of estimates that we make, from the number of countries in the African Union to how much money we consider to be a fair tip or donation.
    • Whenever you receive a solicitation from a charity with a “suggested” donation amount, or a bill with precomputed tip percentages, you should suspect that your anchoring bias is being exploited: by suggesting amounts on the high side, the requestor is anchoring your initial estimate of what is fair.
  • Individual preferences can also be influenced dramatically simply by changing the way a situation is presented. Emphasizing one’s potential to lose money on a bet makes people more risk averse, while emphasizing one’s potential to win has the opposite effect, even when the bet itself is identical. Even more puzzling, an individual’s preference between two items can be effectively reversed by introducing a third alternative. Depending on which third option is introduced, in other words, the decision maker’s preference between A and B can effectively be reversed, even though nothing about either has changed. What’s even stranger is that the third option, the one that causes the switch in preferences, is never itself chosen.

The influencers and The Law of The Few

  • Our perceptions of who influences us may say more about social and hierarchical relations than influence per se.
  • The Law of The Few is two hypotheses that have been mashed together:
    • Some people are more influential than others
    • The influence of these people is greatly magnified by some contagion process that generates social epidemics
  • The Law of The Few claims that the effect would be much greater, that the disproportionality should be “extreme”, but what we found was the opposite. Typically the multiplier effect for an influencer like this was less than three, and in many cases influencers were not any more effective at all. Influencers may exist, in other words, but not the kind of influencers posited by The Law of The Few.
    • The reason is simply that when influence is spread via some contagious process, the outcome depends far more on the overall structure of the network than on the properties of the individuals who trigger it.
    • Social epidemics require just the right conditions to be satisfied by the network of influence. And as it turned out, the most important condition had nothing to do with a few highly influential individuals at all. Rather, it depended on the existence of a critical mass of easily influenced people who influence other easy-to-influence people. When this critical mass existed, even an average individual was capable of triggering a large cascade, just as any spark will suffice to trigger a large forest fire when the conditions are primed for it. Conversely, when the critical mass did not exist, not even the most influential individual could trigger any more than a small cascade. The result is that unless one can see where particular individuals fit into the entire network, one cannot say much about how influential they will be, no matter what one can measure about them. (A toy simulation of this dynamic follows this list.)
  • There really was nothing special about these individuals, because we had created them that way. The majority of the work was being done not by a tiny percentage of people who acted as the triggers, but rather by the much larger mass of easily influenced people. What we concluded, therefore, is that the kind of influential person whose energy and connections can turn your book into a bestseller or your product into a hit is most likely an accident of timing and circumstance: an “accidental influential”, as it were.
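
The cascade dynamics described above are easy to explore in simulation. The sketch below is a minimal illustration of the general idea, not the actual model from Watts’s research: nodes on a random network adopt a behavior once the fraction of their adopting neighbors reaches a personal threshold. The same seed individuals produce huge cascades when the population is easily influenced (low thresholds) and almost none when it is not; all parameter values here are arbitrary choices for illustration.

```python
import random

def make_network(n, k):
    """Build a simple random graph: each node draws k random partners."""
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        for j in random.sample(range(n), k):
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    return neighbors

def cascade_size(neighbors, seed, thresholds):
    """Spread adoption from a single seed: a node adopts once the
    fraction of its neighbors who have adopted reaches its threshold."""
    adopted = {seed}
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node not in adopted and nbrs:
                if len(nbrs & adopted) / len(nbrs) >= thresholds[node]:
                    adopted.add(node)
                    changed = True
    return len(adopted)

if __name__ == "__main__":
    random.seed(0)
    n, k = 1000, 6
    net = make_network(n, k)
    easy = [random.uniform(0.05, 0.20) for _ in range(n)]  # critical mass of easily influenced people
    hard = [random.uniform(0.40, 0.80) for _ in range(n)]  # no critical mass
    seeds = random.sample(range(n), 20)
    for label, th in (("easily influenced", easy), ("hard to influence", hard)):
        sizes = [cascade_size(net, s, th) for s in seeds]
        print(f"{label}: average cascade size {sum(sizes) / len(sizes):.0f} of {n}")
```

With the same network and the same seed individuals, only the thresholds change between the two runs, so any difference in cascade size comes from the composition of the population rather than from anything special about the seeds.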

History, the fickle teacher

  • We know that what happened, happened, and not something else. But because explanations can only be constructed after we know the outcome itself, we can never be sure how much these explanations really explain, versus simply describe.
  • In the book’s example of the Iraq “surge”, any one of several other contemporaneous factors might have been at least as responsible for the drop in violence as the surge itself. Or perhaps it was some combination, or perhaps it was something else entirely. How are we to know?
    • One way to be sure would be to “rerun” history many times (a toy simulation of this idea follows this list).
    • In reality, of course, this experiment got run only once, and so we never got to see all the other versions of it that may or may not have turned out differently. As a result, we can’t ever really be sure what caused the drop in violence. But rather than producing doubt, the absence of “counterfactual” versions of history tends to have the opposite effect: we tend to perceive what actually happened as having been inevitable. This tendency, called creeping determinism, is related to the better-known phenomenon of hindsight bias, the after-the-fact tendency to think that we “knew it all along.”
    • Creeping determinism is subtly different from hindsight bias and even more deceptive. Hindsight bias can be counteracted by reminding people of what they said before they knew the answer, or by forcing them to keep records of their predictions. But even when we recall perfectly accurately how uncertain we were about the way events would transpire, even when we concede to having been caught completely by surprise, we still have a tendency to treat the realized outcome as inevitable.
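
Since actual history runs only once, the closest we can come to the “rerun history” hypothetical is a toy Monte Carlo experiment. The sketch below is my own construction, not anything from the book, and its probabilities are invented for illustration: violence is assumed to drop with probability 0.5 on its own and 0.6 with a “surge.”

```python
import random

def one_history(surge, p_base=0.5, p_boost=0.1):
    """One 'run' of history: returns True if violence dropped this run."""
    return random.random() < p_base + (p_boost if surge else 0.0)

random.seed(1)
runs = 100_000  # counterfactual reruns we never get in reality
drop_with_surge = sum(one_history(True) for _ in range(runs)) / runs
drop_without = sum(one_history(False) for _ in range(runs)) / runs
print(f"drop rate with surge:    {drop_with_surge:.3f}")
print(f"drop rate without surge: {drop_without:.3f}")
```

The modest causal effect is only visible because we averaged over 100,000 counterfactual histories; observing a single drop after a single surge, as we do in reality, cannot distinguish cause from coincidence.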

It’s not over till it’s over

  • Within the narrow confines of a movie narrative, it seems obvious that the right time to evaluate everything should be at the end. But in real life, the situation is far more ambiguous. Just as the characters in a story don’t know when the ending is, we can’t know when the movie of our own life will reach its final scene. And even if we did, we could hardly go around evaluating all choices, however trivial, in light of our final state on our deathbed.
  • Choices that seem insignificant at the time we make them may one day turn out to be of immense import. And choices that seem incredibly important to us now may later seem to have been of little consequence. We just won’t know until we know. And even then we still may not know, because it may not be entirely up to us to decide.
  • In reality, the events that we label as outcomes are never really endpoints. Instead, they are artificially imposed milestones.
  • Something always happens afterward, and what happens afterward is liable to change our perception of the current outcome, as well as our perception of the outcomes that we have already explained.

Whoever tells the best story wins

  • Psychologists have found that simpler explanations are judged more likely to be true than complex explanations, not because simpler explanations actually explain more, but rather just because they are simpler.
  • Somewhat paradoxically, explanations are also judged to be more likely to be true when they have informative details added, even when the extra details are irrelevant or actually make the explanations less likely.
  • In addition to their content, explanations that are skillfully delivered are judged more plausible than poorly delivered ones, even when the explanations themselves are identical. And explanations that are intuitively plausible are judged more likely than those that are counterintuitive.
  • Finally, people are observed to be more confident about their judgements when they have an explanation at hand, even when they have no idea how likely the explanation is to be correct.
  • The key difference between science and storytelling, however, is that in science we perform experiments that explicitly test our “stories.” And when they don’t work, we modify them until they do.
    • Because history is only run once, however, our inability to do experiments effectively excludes precisely the kind of evidence that would be necessary to infer a genuine cause-and-effect relation. In the absence of experiments, therefore, our storytelling abilities are allowed to run unchecked, in the process burying most of the evidence that is left, either because it’s not interesting or doesn’t fit with the story we want to tell. Expecting history to obey the standards of scientific explanation is therefore not just unrealistic, but fundamentally confused.

The dream of prediction

  • In Philip Tetlock’s long-running study of expert predictions, although the experts performed slightly better than random guessing, they did not perform as well as even a minimally sophisticated statistical model. Even more surprisingly, the experts did slightly better when operating outside their area of expertise than within it.

Author: Duncan J. Watts

Publication date: 1 July 2011

Publisher: Atlantic Books

Number of pages: 352

