Dean's Scholars Seminar
Do Scientists Have Crystal Balls?
What happens when accomplished scientists attempt to predict the future of their science? Are they more likely to be right than a typical science fiction author? If they make mistakes, is there a systematic bias:
- Are scientists more likely to predict faster or slower progress than actually occurs?
- When scientists have been overly optimistic, why did their predictions fail to come true?
  - Is it because the technical challenges turned out to be harder than they thought?
  - Is it because not enough money was spent on developing the technology?
  - Is it because people didn't want the technology?

We can divide the scientists' predictions into two categories:
1. Where will the technology be in 10, 50, 100, or 500 years?
2. What will society look like as a result of the technological change?

Are scientists any good at answering question 1? Are they any good at answering question 2? Do they even try on question 2?
We can look at these issues in the context of almost any scientific discipline. I propose that we start with artificial intelligence (AI), although we should look at other areas as well.
AI is an interesting domain to consider for several reasons. People have been fantasizing about building machines with human-like capabilities for hundreds of years. Now we're close enough to being able to do it that it's tempting to make serious and very optimistic predictions about what we'll be able to do and when. In the seminar, we'll look at many different predictions spread out over the last 100 or more years. Most of the ones we'll look at were written by scientists, i.e., by the people who ought to know. We'll look at what artificial intelligence actually can do and see how good the scientists' crystal balls are. We'll also consider some other less scientific forecasts and see how they compare. Remember HAL?
We'll find predictions about the future of AI all over the map. The most interesting (although not necessarily the most accurate) ones of course are at the fringes -- both fringes. So we'll see people saying:
- All the hard problems will soon be solved. There will be machines that function as well or better than people and the world will be a completely different place.
- It is not possible for machines to think (because they're not human, or they don't "feel," or they're made of silicon, or whatever).
We'll have to look at these two very different kinds of predictions ("it will happen" vs. "it can never happen on principle") somewhat differently. The "it will happen" predictions may fail either because some insurmountable hurdle is reached or because things just take longer than was originally thought. The "it can never happen on principle" predictions may fail if the principles were wrong and the impossible turns out to be possible after all.
On reserve in the PCL is a collection of books that focus on attempts to predict the evolution of various technologies, with an emphasis on AI. To see a list of the books that are on reserve, visit the reserve list site and select our class.
If you're going to read just two books for this class, I recommend:
- Corn, Joseph & Brian Horrigan, Yesterday's Tomorrows: Past Visions of the American Future, 1996.
- Kurzweil, Ray, The Age of Spiritual Machines, 1999.
The Big Picture: Predicting the Future of Science and Technology
Focus: on Artificial Intelligence
From the Perspective of Science
Where Do We Stand Today?
Speech and Language
Where Will We Be Tomorrow?
"One of the most difficult challenges in trying to predict the future of technology is to distinguish between what will be commonplace from what may be marely feasible. That whichi sfeasible may not be economical for any but the most special of circumstances. Most infrastructure evolves and emerges in an incremental fashion. Each increment is, in effect, economically self-supporting on some basis." -- Vinton G. Cerf, "When They're Everywhere", in Beyond Calculation: the Next Fifty Years of Computing, 1997
"There are many rules for prediciting the future. One well-known rule is that most short-term estimates are optimistic, while long term predictions are pessimistic." -- R. W. Hamming, "How to Think About Trends", in Beyond Calculation: the Next Fifty Years of Computing, 1997
See the books on reserve in the PCL.
- An AI researcher, Brian Scassellati of MIT, responds to the question: Is the plot of the movie AI plausible? As quoted in the Sept. 4, 2001 issue of PC Magazine:
Question: Will robots ever be able to feel love? Reply: It depends on what you mean by feel. I think that we're clearly going to be able to build something that shows the outward characteristics -- something that displays the correct emotions. And then I think it's really a philosophical question of "Does this thing really feel emotions?" If we can show that outward appearance, that's basically the same thing. It's the only way we understand people anyway, so in that way, the robot has actually become a complete surrogate for another person.
Question: What place will robots have in our lives? Reply:
In the next five to ten years, more robots will become commercial entities. You can already buy a robot that will mow your lawn. In the 20-year range, we'll see robotic systems that are social. In 100 years, robotic systems are going to become less identifiable as robots. Robotic systems will become part of our daily life and part of ourselves.
Voices from the Past (in chronological order within each category)
McCorduck, Pamela, Machines Who Think, Freeman, 1979. A history of AI, starting way before the first computer.
"As We My Think"(1945) Vanevar Bush's proposal for the Memex system, "which is a sort of mechanized private file and library. ... A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory." The information is stored on microfilm and everything is indexed.
"Computing Machinery and Intelligence" (1950) In this paper, Turing proposes the Turing Test (an imitation game, in which a machine attempts to behave like a person and fool a human interrogator) for machine intelligence. He also said, "I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 109, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."
In 1990 Hugh Loebner agreed to underwrite an annual contest designed to implement the Turing Test. Dr. Loebner pledged a Grand Prize of $100,000 (the Loebner Prize) and a Gold Medal for the first computer whose responses were indistinguishable from a human's. Each year an annual prize of $2,000 and a bronze medal is awarded to the most human-like computer. No computer has yet won the Grand Prize.
From the Perspective of Literature and the Media
Science Fiction Literature
Verne, Jules, Paris in the Twentieth Century, written in 1863 and published in 1994.