Politics, Science

Tribal cognition: a few additional notes

In response to my recent post on tribal cognition as a barrier to reason-based political deliberation, a reader draws my attention to a 2012 New York Times Op-Ed in which Cass Sunstein proposes a theory very close to what I called “tribal cognition”:

In the face of entrenched social divisions, there’s a risk that presentations that carefully explore both sides will be counterproductive. And when a group, responding to false information, becomes more strident, efforts to correct the record may make things worse.

Can anything be done? There is no simple term for the answer, so let’s make one up: surprising validators.

People tend to dismiss information that would falsify their convictions. But they may reconsider if the information comes from a source they cannot dismiss. People are most likely to find a source credible if they closely identify with it or begin in essential agreement with it. In such cases, their reaction is not, “how predictable and uninformative that someone like that would think something so evil and foolish,” but instead, “if someone like that disagrees with me, maybe I had better rethink.”

It follows that turncoats, real or apparent, can be immensely persuasive. If civil rights leaders oppose affirmative action, or if well-known climate change skeptics say that they were wrong, people are more likely to change their views.

In fact, a recent interview at Vox with Stephan Lewandowsky, author of The Debunking Handbook, suggests that many psychologists have already embraced Sunstein’s proposal. That is, they recognize that the perceived political identity of both messenger and message can influence whether someone is receptive to an evidence-based argument. In other words, it appears that psychologists studying political communication already view “cultural cognition” and (what I called) “tribal cognition” as distinct, and recognize that both can play important roles in thwarting reason-based deliberation. (Indeed, the idea that people will be more open to persuasion by experts they perceive as sharing their values already appears in the Kahan et al. “HPV Vaccine” article from 2008 — before the Sunstein Op-Ed!)

Politics, Science

Sources of political disagreement: “tribal cognition” versus “cultural cognition”

Why does the presentation of persuasive evidence — even evidence of a scientific consensus — so often fail to resolve political debates? How is it, for example, that so much of the American public on the right refuses to accept the scientific consensus regarding the causes and risks of climate change?

For a while now, I’ve thought that Dan Kahan’s theory of “cultural cognition” offered the most persuasive answer to these questions. Kahan rejects the idea that the problem lies in Republicans’ lack of information about climate science. Offering more evidence isn’t going to resolve the issue at this point. It might even aggravate the problem.

Rather, Kahan offers empirical evidence that the Republican resistance to climate science is an example of a more general phenomenon: the human tendency to arrive at conclusions that are congenial to our cultural values, and to resist, dismiss, or attack conclusions that threaten our values and identities.

But the more I’ve learned about the specifics of the cultural cognition theory, the more I’ve felt like it leaves something out.

In this post, I’d like to propose a hypothesis that complements cultural cognition’s explanation for the frequent failures of evidence-based discussion to lead to increased agreement on politically charged issues. When I first heard about Kahan’s work, I thought that the theory I’m about to present was what he meant by “cultural cognition.” But as I’ve read more about his work, it’s become clear to me that the idea I have in mind is a distinct one.

I’ll call the hypothesis “tribal cognition.”

Literature, Science

Hubert Dreyfus, Artificial Intelligence, and the Needing Machine

“You helped me discover my ability to want.” — Samantha, the operating system in Spike Jonze’s Her

As I understand it, research into computers and thinking has basically proceeded along two tracks. In the spirit of a thought experiment, I’d like to suggest a third track — the creation of what I’ll call a “needing machine.”

But first, let me sketch the two main tracks. My sketch is based largely on a narrative offered by Edward Feigenbaum in a recent interview. The first main track of research into computers and thinking belongs to the field of cognitive science, which is closely aligned with psychology. Cognitive science focuses on the attempt to formalize the ways that human beings think. The idea is that once human thought has been formalized, it could conceivably be programmed into a computer that would then be able to mimic human thought. The dream would be the creation of an artificial brain embodied in a computer, capable of understanding in ways that are similar to the ways a human being understands.

The second main track of research is what gets called “artificial intelligence.” Unlike cognitive science, artificial intelligence is less concerned with how humans think and more concerned with using computers to accomplish particular, concrete tasks. It’s more aligned with computer science than with psychology. Initially, some computer scientists assumed that the path toward useful computer cognition would rely on insights into human cognition. But this isn’t how things turned out. The achievements of artificial intelligence have not resulted from building computers that think like human beings, any more than the achievements of mechanized flight have resulted from building airplanes whose wings flap like the wings of birds. Deep Blue didn’t think like a human chess player, and Google’s search engine doesn’t think like a human librarian. Both were designed by human beings to solve very particular problems using methods suited to computers, with little or no concern for whether those methods resembled the ones human beings use to solve similar problems. The fact that a computer was able to outperform an exceptionally qualified human being in chess says more about the limits of chess as a test of cognition than it does about computers’ thinking abilities.
