Alan Turing believed that one day a machine would be able to pass the Turing test (repeatedly, to rule out luck or flaws of the interrogator or witness).
It must be noted that AI is useful even if it fails this test. Paradoxically, it is currently also beneficial that some `hard AI problems' remain unsolved: it allows automated abuse to be alleviated (see appendix A.2).
Along with his proposal, Turing addressed in advance several objections that people might raise:
- Theological objection: ``God has endowed only humans with a soul and the ability to think.'' Turing replies that God could create such a machine if He wished to.
- `Heads in the Sand' Objection: ``The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.''
- Mathematical objection: See below and the view of Penrose (section 4.1.2). Still a topic of debate.
- Lady Lovelace's Objection: ``[this machine] can do whatever we know how to order it to perform''. Turing argues that we may one day know how to make it perform well enough to pass his test.
- Argument from Continuity in the Nervous System. See [Tur50].
- Informality of Behavior argument. [Ibid.]
- The Argument from Extra-Sensory Perception. [Ibid.]
- The Argument from Consciousness. Still a topic of debate; also raised by Penrose (section 4.1.2). [Ibid.]
- Arguments from Various Disabilities. [Ibid.]
The objection of Penrose [Pen90] is largely based on Searle's `Chinese room' argument and on Gödel's mathematical argument. Turing defended his position, but inconclusively: ``Those who hold to the mathematical argument would, I think, mostly be willing to accept the imitation game as a basis for discussion.''
The test assesses AI in the category of ``human intelligence'' (see table 1.1 in section 1.3.2). The `thinks like' / `acts like' classification is less clear-cut: when a machine acts like a human and passes the test, there is no way to tell to what degree it also thinks like a human.
So far, the test has not been passed when all requirements were applied. This illustrates the difficulty of solving `hard AI problems'. It appears that Alan Turing realized this; he concluded his proposal as follows:
``We can only see a short distance ahead, but we can see plenty there that needs to be done.''
Perhaps one day, when plenty of that work is done, a machine will succeed. And when it does, what will the next challenge be? (see figure A.1)
Figure A.1: Turing Test 2.0. Courtesy of xkcd.com (CC ShareAlike license).