Communications of the ACM vol. 35, no. 12 (Dec. 1992), pp. 13-14.
In response to: Wilkes, M. V., ``Artificial Intelligence as the Year 2000 Approaches'', Communications of the ACM vol. 35, no. 8 (Aug. 1992), pp. 17-20.
See also: Letter by H. J. R. Grosch, Communications of the ACM vol. 36, no. 5 (May 1993), p. 16.
The essence of Wilkes' argument is that digital computers cannot simulate the operation of the analog brain. But the brain's operation is in fact partly digital -- neurons fire in discrete spikes -- and Wilkes wrongly assumes that the brain can make use of unlimited analog accuracy.
In biology, nature is not a mathematician but an engineer: organisms are built of imperfect components to live in a world where things can and do go wrong. A brain design that relied on perfectly accurate analog computation would not work: no analog computer, biological or electronic, is perfectly accurate and stable. The human body experiences variations in temperature during the day and in blood nutrient levels between meals; both affect the operation of neurons. We lose neurons with age and experience variations in mental function when we are tired, sleepy, hungry, or emotionally upset, when we haven't had our morning coffee, or when we have had a few drinks -- yet we remain intelligent throughout. It is ludicrous to suggest that digital computers, which even in single precision carry six digits of accuracy, are incapable of simulating an analog machine whose components seldom achieve two digits of stability in their operation.
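The precision comparison above can be made concrete. The sketch below computes the relative accuracy of an IEEE single-precision float and compares it with an analog component subject to drift; the 1% drift figure is an illustrative assumption, not a measured property of neurons.

```python
import numpy as np

# Machine epsilon of a 32-bit float: the smallest relative step it
# can represent, about 1.19e-07, i.e. roughly 7 decimal digits.
eps32 = np.finfo(np.float32).eps
digital_digits = -np.log10(eps32)

# An analog component with ~1% drift (assumed, for illustration)
# offers about 2 decimal digits of effective stability.
analog_drift = 0.01
analog_digits = -np.log10(analog_drift)

print(f"single precision: ~{digital_digits:.1f} decimal digits")
print(f"1%-drift analog component: ~{analog_digits:.1f} decimal digits")
```

Even this conservative comparison gives the digital representation several orders of magnitude more relative accuracy than the analog component it would simulate.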
A.I. is a large and genuinely hard problem, in a class with other great challenges such as understanding the human genome or conquering cancer. No one should suppose that it will be achieved easily, or that failure to achieve it by some estimated date implies impossibility. There has indeed been excessive hype from some A.I. researchers, but there has also been real progress. Anyone who suggests that no progress has been made since Turing's day simply does not know the field.