David Gelernter is a professor of computer science at Yale. In his essay "Dream-Logic, the Internet, and Artificial Thought," he raises the wonderful question: Can computers think? Regarding this question, he writes, "here is an unfortunate truth: today's mainstream ideas about human and artificial thought lead nowhere." Gelernter identifies three mistaken assumptions that lead people astray when trying to answer this question.
1) Many people believe that "thinking" is basically the same as "reasoning." I feel certain Gelernter is correct that this assumption is false. Reasoning is only a subset of thinking. As Gelernter puts it: "But when you stop work for a moment, look out the window and let your mind wander, you are still thinking. Your mind is still at work. This sort of free-association is an important part of human thought. No computer will be able to think like a man unless it can free-associate."
2) Many people believe that reality is one thing and your thoughts are something else. But thinking also includes, for example, dreaming and hallucinating, in which your thoughts become your reality.
3) Many people believe that the thinker and the thought are separate. "When a person is dreaming, hallucinating — when he is inside a mind-made fantasy landscape — the thinker and his thought-stream are not separate. They are blended together. The thinker inhabits his thoughts. No computer will be able to think like a man unless it, too, can inhabit its thoughts; can disappear into its own mind."
Gelernter then asks: "What does this mean for the internet: will the internet ever think? Will an individual computer ever think?" For anyone who has wondered about such questions, the rest of Gelernter's essay is worth reading. His answer to both is no, for the following reasons:
- "Intelligence" can only mean human or human-like intelligence. The idea that the internet will develop some new kind of intelligence is "nonsense." "If your new form of intelligence is human-like, it's not new. If it isn't human-like, it's not intelligence." Just like if some substance tastes like chocolate it isn't chocolate, and if it tastes like chocolate it's not something new, it is chocolate.
- Human-like intelligence cannot emerge, in principle, on the internet, because "the raw materials are wrong." A computer will never have first-person subjective experience; in philosophy, this is known as the "absent qualia" problem.
"Furthermore, human consciousness and thought emerged from a mechanism (genetic mutation) that allowed endless, nuanced variations to be tested — under the uncompromising pressure of survival or death. Neither condition holds on the internet as we know it. Expecting intelligence to emerge on the internet is like expecting a car to move when you floor the accelerator, even though it has no motor."- We could have a computer that seems to have consciousness, but "as far as we know, there is no way to achieve consciousness on a computer or any collection of computers." A software model of consciousness is not itself consciousness. "The result would be a computer that seems to think. It would be a zombie (a word philosophers have borrowed from science fiction and movies): the computer would have no inner mental world; would in fact be unconscious. But in practical terms, that would make no difference. The computer would ponder, converse and solve problems just as a man would. And we would have achieved artificial or simulated thought, "artificial intelligence.""
- There can be "no cognitive spectrum without emotion." "But emotions are produced by brain and body working together," and a computer has no body with which to produce them.
- Gelernter concludes: "One day all these problems will be solved; artificial thought will be achieved. Even then, an artificially intelligent computer will experience nothing and be aware of nothing. It will say 'that makes me happy,' but it won't feel happy. Still: it will act as if it did. It will act like an intelligent human being. And then what?"