Hubert Dreyfus, Artificial Intelligence, and the Needing Machine

“You helped me discover my ability to want.” — Samantha, the operating system in Spike Jonze’s Her

As I understand it, research into computers and thinking has basically proceeded along two tracks. In the spirit of a thought experiment, I’d like to suggest a third track — the creation of what I’ll call a “needing machine.”

But first, let me sketch the two main tracks. My sketch is based largely on a narrative offered by Edward Feigenbaum in a recent interview. The first main track of research into computers and thinking belongs to the field of cognitive science, which is closely aligned with psychology. Cognitive science focuses on the attempt to formalize the ways that human beings think. The idea is that once human thought has been formalized, it could conceivably be programmed into a computer that would then be able to mimic human thought. The dream would be the creation of an artificial brain embodied in a computer, capable of understanding in ways that are similar to the ways a human being understands.

The second main track of research is what gets called “artificial intelligence.” Unlike cognitive science, artificial intelligence is less concerned with how humans think, and more concerned with using computers to accomplish particular, concrete tasks. It’s more aligned with computer science than with psychology. Initially, some computer scientists assumed that the path toward useful computer cognition would rely on insights into human cognition. But this isn’t how things turned out. The achievements of artificial intelligence have not resulted from building computers that think like human beings any more than the achievements of mechanized flight have resulted from building airplanes whose wings flap like the wings of birds. Deep Blue didn’t think like a human chess player, and Google’s search engine doesn’t think like a human librarian. Both were designed by human beings to solve very particular problems using methods suited to computers, with little or no concern for whether those methods resembled the methods used by human beings to solve similar problems. The fact that a computer was able to outperform an exceptionally qualified human being in chess says more about the limits of chess as a test of cognition than it does about computers’ thinking abilities.

It turns out that computers can perform some tasks very well, especially those that can be accomplished by searching among a set of well-defined possibilities very quickly and comparing them according to well-defined formal criteria. But computers are nowhere close to performing even the simplest versions of other basic cognitive tasks, like answering questions that require common sense or understanding the logic of many simple sentences. In fact, so little computer science research today is dedicated to developing computers that would be able to think and understand like human beings that some computer scientists have begun to lament the abandonment of this line of research. No computer in the world today has anything remotely approaching the cognitive capacities — the ability to understand, to learn — that even a two-year-old child possesses.

One person who, famously, has never been optimistic regarding the capacity of computers to think like human beings is Hubert Dreyfus, the Berkeley philosopher and Heidegger scholar. His ideas — which derive from his reading of existentialist philosophers including Heidegger — are, it seems to me, the starting point for a possible third track of research into computers and thinking.

I’ve only been exposed to Dreyfus’ critiques of the early optimism of AI through secondary sources, but I’ve always assumed — based on Dreyfus’ background in the study of Heidegger — that one of his critiques must be that something like need, or caring, or a lack, must conceptually precede practical meaning, which in turn conceptually precedes rational, linguistic thought. In other words, to put things in a crude and probably misleading version of Heideggerian terms: if no one in the world had ever needed to pound something with a hard object, and somebody had stumbled across a hammer one day, that hammer wouldn’t have been a hammer — it wouldn’t have shown up as something-to-pound-objects-with. If it were possible for something to have absolutely no relevance to us, it would have no meaning — no being — in our world at all. Every meaning that anything has in the world is made possible, on a Heideggerian view, by some underlying practical need, which in turn is made possible by our having been thrown into the world as human beings who are finite, incomplete, needing beings — neither a perfectly self-present God nor a perfectly un-self-present rock. To illustrate with a contrast: if you were God, and there were no difference between your thoughts and the world, then nothing would have any meaning for you. In fact, there wouldn’t even be a world in any sense we could recognize, nor would there be any you. There would be no difference between thought and reality, subject and object, and so there would be no thought or reality or subject or object.

Or, to translate this rough sketch into even rougher Heideggerese: “Being-as-such” (something like finitude or a lack of self-present completeness) is the condition of possibility for “being” (the meaning that anything has), and practical, “ready-to-hand” meanings (like the meaning of a hammer as useful-for-hammering-nails) are conceptually prior to and make possible “present-at-hand” meanings (like the meaning of a hammer as one might describe it analytically, “objectively,” from a detached and theoretical point of view — in the root sense of “theoretical”: viewing things as a spectator rather than a participant). According to this simplified (but probably still borderline incomprehensible) reading of Heidegger, and I assume according to Dreyfus as well, the Cartesian rationalists get it all wrong when they begin their understanding of human beings and human thought with disembodied, uncommitted, rational calculation and inference.

So, since human need makes practical meaning possible, and practical meaning makes rational or theoretical thinking possible, any attempt to replicate human thought simply by codifying some of the formal methods involved in one narrow, hyperrational domain of human thought — high-level, abstract symbolic manipulation — is very unlikely to succeed.

I assume this must have been one of Dreyfus’ critiques of the overoptimistic early claims made on behalf of AI, or something close to one of them. In any case, it’s a position that one can imagine someone taking: to the extent that AI focuses on imitating a narrow, disembodied, disinterested, hyperrational, relevance-blind, formalistic subset of human “thinking,” it is unlikely to arrive at a machine that understands in anything resembling the way that human beings understand.

One direction in which this line of thinking might lead is toward the conclusion that the attempt to make computers think like human beings is a hopeless project, and has been hopeless from the start. I’m not sure whether Dreyfus believes this, but in any case, it’s not my focus here. What has always interested me is another direction in which something like Dreyfus’ line of thought might be taken. The direction might best be viewed as a thought experiment, or a possible subject for a work of science fiction. Assuming that a computer can never be made to think like a human being so long as a computer can never need and experience relevance like a human being, might it nevertheless be possible to construct a machine that is driven in its cognitive activities by something like need — by some programmable, computerized analogue of need that makes possible something like a computerized analogue of relevance?

Could something like need ever be programmed? If so, what would it look like?

Of course, I don’t think there is a single, determinate answer to this question. It’s more of a provocation, like a metaphor.

To restate the idea: Up to now, efforts to make truly thinking machines, rather than machines that perform extensively designed, extremely narrow calculating and data-processing tasks, have aimed to imitate human thinking, and perhaps they have failed in part because they ignored the roots of human cognition in embodied human need. It is difficult to imagine a computer being programmed to experience embodied human need any time soon. Yet it is less difficult to imagine a computer being programmed to act on the basis of something like need — to search, cognitively, for a completeness that will never arrive. The “need” motivating the computer would be a distinctly computer-like need, the kind of need that a computer might have, and the interest in the exercise would lie in seeing what kind of “thought” resulted from the computer’s acting on the basis of this “need.”
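
To make the provocation slightly more concrete, here is a minimal toy sketch, in Python, of what “acting on the basis of a need that can never be satisfied” might look like in code. Everything in it is my own invention for illustration (the class name, the numbers, the way “need” is modeled as a scalar deficit); it is a sketch of the idea, not an implementation of anything.

```python
import random

# A toy "need"-driven loop (purely hypothetical, not any real system).
# The conceit: the machine's "need" is a deficit it can act to reduce
# but, by construction, can never eliminate -- every partial
# satisfaction regenerates the need somewhere else.

class NeedingLoop:
    def __init__(self):
        self.position = 0.0                     # the machine's current "state"
        self.target = random.uniform(-10, 10)   # what it currently "needs"

    def deficit(self):
        # How far the machine is from satisfaction right now.
        return abs(self.target - self.position)

    def step(self):
        # Act to reduce the deficit...
        self.position += 0.5 * (self.target - self.position)
        # ...but satisfaction never arrives: once the machine gets
        # close, the "need" reconstitutes itself elsewhere.
        if self.deficit() < 0.1:
            self.target = random.uniform(-10, 10)

loop = NeedingLoop()
for _ in range(20):
    loop.step()
    print(f"deficit: {loop.deficit():.3f}")
```

Run for long enough, the loop never halts in a satisfied state; the interesting question is what happens when the “acting” is not a one-line update but open-ended cognitive activity.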

I note that the “thought” might be utterly incomprehensible to us. As Wittgenstein said, if a lion could speak, we would not be able to understand it. I actually don’t think Wittgenstein is entirely right about that. I imagine if a lion could speak, there are a few basic things we could understand in what it was saying — some expressions we could understand and translate as expressions of hunger, pleasure, fear, and so on — precisely because the form of life of a lion is not so entirely alien to our own. But I’m not sure that I’d be able to understand anything said by a highly developed needing machine.

When I was thinking of turning the thought experiment of a needing machine into a little work of science fiction, I concluded that the needing machine’s software would have to rely heavily on machine learning. That is, it wouldn’t be the case that the machine would just express its “need” by performing a series of repetitive, unchanging operations on some inexhaustible mass of raw materials, such as text and images from the Internet. Computers are already doing that today. Instead, the machine would express its need by continually reprogramming its own operations in order to better approach its unreachable cognitive goal. I imagined it would reprogram itself partly by inserting some kind of random mutations into its code, or into the code of semi-autonomous offspring, in a Darwinian process of mutation and selection. The needing machine’s first project might be some relatively simple task that we already know computers can do, but the needing machine would not be given the method, and the researchers would see whether it could arrive at a way of doing the thing through its own machine learning. The result, I imagined, would be strange and new, suited to a computer rather than a human being. Or the needing machine might be given a project similar to the projects of a virus or a bug, which also seem to engage in very simple cognition-like tasks. Later in the story, the needing machine would be given the project of developing something like curiosity, then the ability to engage in something like communication, and so on.
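
Since that paragraph gestures at a mechanism, here is a toy sketch of the mutation-and-selection idea. It is hypothetical through and through: the “programs” are just lists of polynomial coefficients (a stand-in for arbitrary self-modifiable code), the task and the noise level are made up, and the point is only that the machine is given a task without a method, and that the noise guarantees its residual “need” can shrink but never reach zero.

```python
import random

# Toy sketch of a self-reprogramming "needing machine": a population of
# tiny "programs" is evaluated on a task the machine is never told how
# to solve; random mutation plus selection does the "reprogramming".
# The target data contains noise the programs can never fit exactly,
# so the drive toward completeness never terminates.

random.seed(0)

# The task: approximate an unknown mapping from noisy samples alone.
SAMPLES = [(x / 10.0, 3.0 * (x / 10.0) ** 2 - 1.0 + random.gauss(0, 0.05))
           for x in range(-20, 21)]

def run_program(coeffs, x):
    # A "program" here is just a polynomial over its coefficients.
    return sum(c * x ** i for i, c in enumerate(coeffs))

def need(coeffs):
    # The machine's "need": residual error it can reduce but, because
    # of the noise in SAMPLES, never drive to zero.
    return sum((run_program(coeffs, x) - y) ** 2 for x, y in SAMPLES)

def mutate(coeffs):
    # A random mutation inserted into the "code" of an offspring.
    child = coeffs[:]
    child[random.randrange(len(child))] += random.gauss(0, 0.1)
    return child

# Start from random "code" and let mutation plus selection run.
population = [[random.gauss(0, 1) for _ in range(3)] for _ in range(30)]
for generation in range(200):
    population.sort(key=need)                  # least "needy" first
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]

print("best residual need:", round(need(population[0]), 4))
print("evolved program:", [round(c, 2) for c in population[0]])
```

Nothing about this little loop is novel (it is ordinary evolutionary search), but it illustrates the structure of the story: the method is not given, the “reprogramming” is blind mutation filtered by need, and the need itself is never extinguished.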

All of these science fiction speculations seem like a possible extension of Dreyfus’ line of thought concerning AI, but I wonder whether anyone has actually tried to create something like a simple version of a needing machine. I doubt that anyone has, both because of the apparent hostility of AI practitioners to Dreyfus’ point of view, and because the experiment would have so little immediate practical value.
