(__Attention conservation notice__: Seriously, I don’t intend to sit down and write 1,400 words (plus about 600 quoted words) about artificial intelligence, but … it just happens that way.)
Daniel Dennett is the philosopher whom geeks love. In this, and in a couple of other respects, he is the heir to Bertrand Russell’s throne. One of Russell’s many claims to fame was a philosophical program built around applying scientific and mathematical methods to philosophical questions, in the hopes of giving them definite answers. That didn’t really work out so well, but, as I’ve mentioned before, mistakes are valuable; we tend to sneer at them more than we should.
Dennett’s approach is based around natural selection, whereas Russell’s was based around mathematical logic. For my money, natural selection is more likely to tell us enduring truths about human knowledge than the predicate calculus. Dennett ran as far as he could with the implications of natural selection in [book: Darwin’s Dangerous Idea], where he contended that Darwin’s discovery is “universal acid”: if you accept it at the level of speciation, then you’re forced to accept it at every other layer — all the way up from the structure of atoms to the large-scale structure of the universe.
In [book: Darwin’s Dangerous Idea], and again in [book: Brainchildren], Dennett repeats his refrain that philosophers are deeply uncomfortable with the idea of bringing Darwin to the last holdout, namely the human mind. Dennett says philosophers hate the idea that the mind might be seated in the brain, hate that they might lose their hold over another corner of the intellectual universe as another part of the world loses its mystery, and hate that what separates the human mind from the minds of other species might just be a difference of degree rather than of kind.
When judging whether other humans, or machines, or animals are intelligent, Dennett advocates taking what he calls “the intentional stance”. I can’t do better than Dennett at explaining what this is:
> Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.
Note the empirical content of this stance. If you want to judge whether a robot is rational, you make some predictions about how it would behave if it were acting rationally, then you compare those predictions to its actual behavior. There’s no [foreign: a priori] assumption here about how a robot “in principle” could never act rationally. There is only the comparison of predicted rational behavior to actual behavior.
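To make that empirical flavor concrete, here is a rough sketch, in Python, of the predict-then-compare procedure. Everything in it (the toy agent, its beliefs and desires, the scoring rule) is my own illustration, not anything Dennett specifies:

```python
# A minimal sketch of the intentional stance as an empirical procedure.
# The agent, its beliefs, and the scoring rule are invented for illustration.

def predict_action(beliefs, desires, actions, expected_outcome):
    """Practical reasoning: pick the action whose believed outcome
    best satisfies the agent's desires."""
    def score(action):
        outcome = expected_outcome(action, beliefs)  # what the agent believes will happen
        return sum(1 for goal in desires if goal in outcome)
    return max(actions, key=score)

def intentional_stance_test(observed_action, beliefs, desires, actions, expected_outcome):
    """Compare the rational prediction against what the agent actually did."""
    return predict_action(beliefs, desires, actions, expected_outcome) == observed_action

# Toy example: an agent we credit with the belief "the room is cold"
# and the desire to have a warm room.
beliefs = {"room_temperature": "cold"}
desires = {"room_warm"}
actions = ["turn_on_heat", "open_window"]

def expected_outcome(action, beliefs):
    return {"room_warm"} if action == "turn_on_heat" else {"room_cold"}

# The agent turned on the heat, which is what a rational agent with those
# beliefs and desires ought to do -- so the prediction checks out.
print(intentional_stance_test("turn_on_heat", beliefs, desires, actions, expected_outcome))  # True
```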
Dennett’s claim throughout [book: Brainchildren] is that artificial intelligence has brought much of philosophy to the “put up or shut up” stage: if you want to argue about cognition, you will soon enough have to compare your arguments to the results of programming a mind on a computer.
Of course, the potential from computer experimentation doesn’t just extend to making philosophers look bad. When we speculate that various problems in human cognition are “easy,” we get to try to solve those problems on a computer and see (as it turns out) how wrong we were. The most interesting difficulty of this sort in [book: Brainchildren] is the “frame problem”. It’s best to illustrate this with a delightful example from the book:
> Once upon a time there was a robot, named R1 by its creators. Its only task was to fend for itself. One day its designers arranged for it to learn that its spare battery, its precious energy supply, was locked in a room with a time bomb set to go off soon. R1 located the room and the key to the door, and formulated a plan to rescue its battery. There was a wagon in the room, and the battery was on the wagon, and R1 hypothesized that a certain action which it called PULLOUT(WAGON, ROOM) would result in the battery being removed from the room. Straightaway it acted, and did succeed in getting the battery out of the room before the bomb went off. Unfortunately, however, the bomb was also on the wagon. R1 *knew* that the bomb was on the wagon in the room, but didn’t realize that pulling the wagon would bring the bomb out along with the battery. Poor R1 had missed that obvious implication of its planned act.
>
> Back to the drawing board. “The solution is obvious,” said the designers. “Our next robot must be made to recognize not just the intended implications of its acts, but also the implications about their side effects, by deducing these implications from the descriptions it uses in formulating its plans.” They called their next model, the robot-deducer, R1D1. They placed R1D1 in much the same predicament that R1 had succumbed to, and as it too hit upon the idea of PULLOUT(WAGON, ROOM) it began, as designed, to consider the implications of such a course of action. It had just finished deducing that pulling the wagon out of the room would not change the color of the room’s walls, and was embarking on a proof of the further implication that pulling the wagon out would cause its wheels to turn more revolutions than there were wheels on the wagon — when the bomb exploded.
Lots of problems turn out to be much harder than they look at first sight, precisely because we humans solve them without any apparent difficulty. Build a mind from scratch, though, and you realize the awesome complexity of what we do “without thinking.” (My friend Dan Milstein captured and expanded upon this idea in a captivating talk a few months back; I only wish it had been recorded.)
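For the programmers in the audience, here is a toy rendering of the two failure modes in the R1 story. The state variables, the PULLOUT action, and the list of “implications” are all invented for illustration; no real planner is being quoted:

```python
# R1 vs. R1D1, in miniature. Everything here is a caricature for illustration.

state = {
    "wagon_in_room": True,
    "battery_in_room": True,
    "bomb_in_room": True,
    "bomb_on_wagon": True,
}

def pullout_wagon_r1(state):
    """R1's effect model: update only the intended effects."""
    new_state = dict(state)
    new_state["wagon_in_room"] = False
    new_state["battery_in_room"] = False
    # Missing frame knowledge: anything else sitting on the wagon leaves too.
    return new_state

def pullout_wagon_r1d1(state, deadline_ticks=2):
    """R1D1's strategy: deduce *every* implication, relevant or not, before acting."""
    implications = [
        "the wall color will not change",
        "the wheels will turn more revolutions than there are wheels",
        "the ceiling will still be above the floor",
        # ... and on and on, while the bomb's timer runs down.
    ]
    if len(implications) > deadline_ticks:
        return "BOOM: still deducing when the bomb went off"
    return pullout_wagon_r1(state)

after = pullout_wagon_r1(state)
print(after["bomb_in_room"])      # True: R1's model says the bomb stayed behind.
                                  # In reality it was on the wagon and came right along.
print(pullout_wagon_r1d1(state))  # "BOOM: still deducing when the bomb went off"
```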
There is one final, ultimate test for robot intelligence that no one has really succeeded in dislodging, namely the Turing test. This famous test places a human in one room with a teletype machine, a robot in another, and a judge in a third. The judge is asked to have a conversation with both the robot and the human, and to guess which one is the robot. If the judge can’t do so with better-than-chance odds, the robot is deemed intelligent. The basic insight here is that carrying on an intelligent conversation brings in so many other elements of human intelligence that it would be impossible to sound intelligent without actually *being* intelligent. You’d need lots of experience from living in the world, some humor, the ability to understand the context behind what someone asks you, and millions of other things besides — things that we’re not even aware of because we do them so easily (the frame problem again).
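If you want to cash out “better-than-chance odds” concretely, one natural reading is a one-sided binomial test on the judge’s guesses. The numbers and the 0.05 threshold below are purely illustrative assumptions on my part, not anything the test itself prescribes:

```python
# Did the judge do better than coin-flipping? A one-sided binomial test.
# The sample numbers and the 0.05 threshold are illustrative assumptions.

from math import comb

def p_value_better_than_chance(correct, trials, p=0.5):
    """Probability of getting at least `correct` guesses right by pure chance."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Suppose the judge identifies the machine correctly in 14 of 20 conversations.
p = p_value_better_than_chance(14, 20)
print(f"p = {p:.3f}")  # ~0.058: at a 0.05 threshold, not convincing evidence that
                       # the judge can tell them apart, so the machine squeaks by.
```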
Needless to say, we’re nowhere near writing a computer program that passes the Turing test — a caveat that Dennett lays out up front. Indeed, the least satisfying part of [book: Brainchildren] is that, despite its pretensions to replacing a lot of vague philosophy with scientifically grounded results about human minds, Dennett lays out surprisingly few such results. Those he does lay out are:
* a bunch of robots failing the Turing test
* the schematic — prior to any building — of a robot he and colleagues are working on called “Cog”
* a trip he took into the wilds of Kenya with a couple of animal researchers, wherein Dennett claims that he taught them the intentional stance. Now, this struck me as unbelievable:
> But what does the [Moving-Into-The-Open] grunt [from vervet monkeys] mean? I suggested to Robert and Dorothy that we sit down and make a list of possible translations and see which we could eliminate or support on the basis of evidence already at hand.
Robert and Dorothy hadn’t already thought of this on their own? I’d be curious to interview Robert and Dorothy, to see whether they find the intentional stance quite as novel as Dennett makes out.
* Douglas Hofstadter’s Fluid Analogies Research Group, described in Hofstadter’s [book: Fluid Concepts and Creative Analogies] and in Melanie Mitchell’s [book: Analogy-Making as Perception]. These sound like real steps forward, and I’m excited to read them. I’ve violated my earlier promise, and have gone ahead and reserved these from the library. (This is how it always goes.)
If [book: Brainchildren] needs anything, it is more science and less philosophy. The desire for more empirical results is inevitable, and it leaves [book: Brainchildren] feeling somewhat self-undermining, given how much time Dennett spends castigating his own colleagues in philosophy for their lack of empirical rigor. (He really has it in for Jerry Fodor, about whom I know nothing.) I have to assume that this was deliberate on Dennett’s part, and I expect that he views his role as more of a ground-clearer: dispense with a lot of silly ideas (e.g., the “Chinese Room”) about why intelligent robots are impossible, so that others can then move in and get actual work done.
[book: Brainchildren] suggests that a lot of this work has divided between “top-down” and “bottom-up” approaches. Top-down approaches would begin, in essence, from the intentional stance: we want to solve a particular problem (vision, or boundary detection, or understanding context-filled sentences), so we write a program that can do this. The bottom-up approach would instead start at the level of neural hardware: build a device that looks like a brain — maybe a large collection of McCulloch-Pitts neurons — and don’t work on the top-level problem until the lower levels have been established. If we want any model of the mind to be complete, and we want it to say something about actual human minds (rather than about, say, artificial intelligence considered as an abstract problem), the top level will need to be consistent with a bottom layer that looks like a human brain; that is, if the top-level program could only be implemented on a supercomputer the size of the Pentagon, it probably doesn’t have much to say about human minds. So the top-down and bottom-up approaches both have their merits.
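For concreteness, a McCulloch-Pitts unit — the bottom-up building block just mentioned — really is this simple: binary inputs, fixed weights, a hard threshold. The particular weights and threshold below are my own illustrative choices:

```python
# A single McCulloch-Pitts neuron: fire iff the weighted sum of binary
# inputs reaches the threshold. Weights and threshold here are illustrative.

def mcculloch_pitts(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Wired as an AND gate: the unit fires only when both inputs are on.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcculloch_pitts((a, b), weights=(1, 1), threshold=2))
```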
It seems likely that any progress toward an artificially intelligent machine will involve some intermediate steps where the machine doesn’t act like a full-scale human, but acts like what you might call a toy human. It can’t carry on an intelligent conversation about any topic that might reasonably come up, say, but maybe it can talk about wombats. (Imagine writing a computer program that simulates a conversation with a kid who has behavioral problems.) We’ll learn some things from this, which we’ll fix in the next iteration.
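To give a flavor of what such a toy might look like, here is a crude, wombats-only conversationalist in the pattern-matching mold of ELIZA. The patterns and canned replies are invented; a program that actually talked sensibly about wombats would need vastly more than this:

```python
# A toy restricted-domain "conversationalist": regex patterns and canned replies.
# Purely illustrative; this is nowhere near passing any test of anything.

import re

RULES = [
    (r"\bwhat do wombats eat\b", "Mostly grasses and roots. Why do you ask?"),
    (r"\bwombat", "Wombats are sturdy burrowing marsupials. Tell me more."),
    (r".*", "I'm afraid I only know about wombats."),
]

def reply(utterance):
    for pattern, response in RULES:
        if re.search(pattern, utterance.lower()):
            return response

print(reply("What do wombats eat?"))          # domain question: plausible answer
print(reply("What do you think of Proust?"))  # anything else: the seams show at once
```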
Indeed, one of the great lessons I learned from [book: Brainchildren] was the further wisdom of “less thinking, more testing”: a lot of rather silly arguments could be short-circuited by developing rough, ugly prototypes that solve a small corner of a problem; instead of talking about “machine intelligence in principle”, we could then talk about performance *relative to an existing benchmark*. Let’s not talk about a phantom; let’s talk about this, here, now, and how we could improve upon it.
While others push toward that goal, Dennett clears some room for them to work. He deserves our thanks.
__P.S.__: Cosma Shalizi has, of course, a brilliant review of [book: Brainchildren] as well.
The problem is that Dennett likes to pretend that the real problem of consciousness is not a problem at all. Most clearly stated, the real problem of consciousness is reconciling our experience of the world from a first-person perspective with our experience of the world from a third-person perspective. We can easily explain the cognitive functions of the brain from a third-person perspective, but we can’t explain why it feels the way it does. To say, as Dennett does, that the world as we experience it from the first-person perspective is epiphenomenal is to badly contradict ordinary experience. Just because I recognize that my perception of red is the product of a complicated neuro-physical interaction with photons and their reflection off of that apple in front of me does not mean that my experience of the redness of the apple is nothing other than that physical-mechanical process. Dennett likes to say that the whole business of first-person experience is an illusion or some left-over mythical shadow. This seems phenomenally wrong-headed. I am much more inclined to doubt the existence of atoms than the existence of the pressure on my fingertips that I feel as I am typing.
Jerry Fodor is much maligned by scientists, most recently for his book “What Darwin Got Wrong.” He has also taken Steven Pinker to task in his book “The Mind Doesn’t Work That Way.” So, I can’t imagine you’ll be a huge fan of Fodor. I, on the other hand, am much more in Fodor’s camp than Pinker’s or Dennett’s. He’s a fabulous analyst and an insightful critic of the way scientists like to wave their hands at difficult problems that don’t fit into their current methodological paradigm. Incidentally, Fodor essentially established the functionalist program in Philosophy of Mind (in the 1970s) that is the current paradigm of cognitive neuroscience, so he has some science cred, too. (I have looked at his argument against Darwin pretty carefully, and I think he has a very legitimate point: namely, that evolutionary theory presumes to talk about a teleological arrow toward adaptivity, but cannot provide a mechanism for the prevalence of adaptive characteristics rather than characteristics that just happen to come along with adaptive characteristics in the organism, what he calls “free riders”.)
On the issue you examine above, the “intentional stance”: very unsophisticated animals, like many insects, display intentionality in this sense. But they would also fail a Turing test, presumably. Only the most sophisticated of animals can, in any meaningful sense, communicate. And this is only the most rudimentary element that everyone agrees is necessary for an experience of conscious rationality like ours. There are lots of other things that go into it, like creative imagination, sympathetic understanding, the prevalence of normative judgments, etc., that are not easily built out of linguistic communication. In other words, Dennett is demonstrating the capacity of contemporary neuroscience to explain the most basic forms of consciousness and then papering over a bunch of distinctions and pretending that this explains all of consciousness as we experience it.
Sorry for the length of that comment.