The term “singularity” has been used to describe “the moment when a civilization changes so much that its rules and technologies are incomprehensible to previous generations. Think of it as a point-of-no-return in history.” The term is now often used to describe the point at which computers will overtake people in “intelligence”. This point is also called the “intelligence explosion”. Ray Kurzweil, for example, has written extensively on this.
There’s an interesting organization that used to be called The Singularity Institute. It has changed its name to The Machine Intelligence Research Institute (MIRI), apparently in an attempt to distance itself from Ray Kurzweil (see this short article). Its focus hasn’t changed, however: MIRI is concerned with what will happen to us once machines surpass us in general intelligence. On the MIRI website, there is an interesting essay on the idea of “Friendly AI”. Read it. You can also wander around other parts of the Institute’s website for more ideas on this issue. Their FAQ page is very interesting.
Then write short answers to the following questions (which use the term “singularity”, but substitute “intelligence explosion” if you prefer):
1. If we manage our research right, what is one likely, very high-impact positive result of reaching the singularity? Do not say, “a cure for cancer”.
2. If we manage our research wrong, what is one likely, very high-impact negative result of reaching the singularity?
3. Do you hope you’re around to see the singularity? Why or why not?
Bring your answers to class and come prepared to discuss them.
“What is the Singularity and Will You Live to See It?” (http://io9.com/5534848/what-is-the-singularity-and-will-you-live-to-see-it)