Literature, Philosophy, Science

Structuralism, Poststructuralism, and the Decline of the Literary Humanities

It seems hard to believe, from our current vantage point in which the academic study of literature appears to be in a state of perpetual crisis, that there was a time, not so long ago, when the literary humanities reigned over an expanding scholarly empire — one that was not unlike the empire of the quantitative social sciences, and especially economics, today. Instead of literary academics feeling tempted or obligated to apply quantitative methods to the study of literature — as, for example, Franco Moretti has done, with results of (predictably, it seems to me) real but limited value — non-literary scholars felt tempted or obligated to become conversant in literary theory.

I was reminded of this while reading some essays by Jerome Bruner, an academic psychologist who died in 2016. In works like “Life as Narrative” (1987), Bruner found it useful to draw on literary theory about the structure of narratives as a source of ideas for understanding his own field, and even for designing empirical experiments. He cites Vladimir Propp, Frank Kermode, and Paul de Man, among many others.

Who outside of literary academia reads the works of literary academics today? What happened?

I would like to propose, a little controversially, that the literary humanities finds itself in its current state of isolation in part because of its rejection of structuralism. By “structuralism,” I do not mean only what Lévi-Strauss meant when he introduced the term. I mean something broader: arguments that attempt to reduce complex, unwieldy human phenomena to relatively simple structures that can then be used to make predictions. The kind of models that the structuralist anthropologist Mary Douglas developed, for example. In its turn to poststructuralism, American literary academia developed a profound antipathy toward this kind of thought — an antipathy, I would argue, that has discouraged literary scholars from developing insights and models that might be of use outside of academic literary studies.

When literary scholarship turned against structuralism, it also implicitly turned against modeling. But models are a large part of what we use to make sense of our worlds, and they are one of the primary ways that ideas move between academic disciplines. To reject the search for predictively useful models is to invite the kind of intellectual isolation in which literary academia currently finds itself.

Continue reading

Literature, Science

Hubert Dreyfus, Artificial Intelligence, and the Needing Machine

“You helped me discover my ability to want.” — Samantha, the operating system in Spike Jonze’s Her

As I understand it, research into computers and thinking has basically proceeded along two tracks. In the spirit of a thought experiment, I’d like to suggest a third track — the creation of what I’ll call a “needing machine.”

But first, let me sketch the two main tracks. My sketch is based largely on a narrative offered by Edward Feigenbaum in a recent interview. The first main track of research into computers and thinking belongs to the field of cognitive science, which is closely aligned with psychology. Cognitive science focuses on the attempt to formalize the ways that human beings think. The idea is that once human thought has been formalized, it could conceivably be programmed into a computer that would then be able to mimic human thought. The dream would be the creation of an artificial brain embodied in a computer, capable of understanding in ways that are similar to the ways a human being understands.

The second main track of research is what gets called “artificial intelligence.” Unlike cognitive science, artificial intelligence is less concerned with how humans think, and more concerned with using computers to accomplish particular, concrete tasks. It’s more aligned with computer science than with psychology. Initially, some computer scientists assumed that the path toward useful computer cognition would rely on insights into human cognition. But this isn’t how things turned out. The achievements of artificial intelligence have not resulted from building computers that think like human beings any more than the achievements of mechanized flight have resulted from building airplanes whose wings flap like the wings of birds. Deep Blue didn’t think like a human chess player, and Google’s search engine doesn’t think like a human librarian. Both were designed by human beings to solve very particular problems using methods suited to computers, with little or no concern for whether those methods resembled the ones human beings use to solve similar problems. The fact that a computer was able to outperform an exceptionally qualified human being in chess says more about the limits of chess as a test of cognition than it does about computers’ thinking abilities.

Continue reading