Forum for Artificial Intelligence

 

About FAI

The Forum for Artificial Intelligence meets every other Friday at 3pm to discuss topics in artificial intelligence. After the formal talk, we continue our conversation at the Crown and Anchor. All are welcome to attend.

Please send questions or comments to Kenneth Stanley or Tal Tversky.

Full Schedule: 2/2 | 2/16 | 3/9 | 3/23 | 4/6 | 4/20 | 4/27 | 5/4

February 2nd
3:00pm

Prof. Bruce Porter [email][web]
Director, Artificial Intelligence Laboratory
Director, Knowledge-Based Systems Research Group [web]
UT Department of Computer Sciences

Dr. Ken Barker
Knowledge-Based Systems Research Group
UT Department of Computer Sciences

Building Large Knowledge Bases from Components

Our experience building the Botany Knowledge Base confirms that knowledge engineering (i.e., encoding domain knowledge in a computational form) is the bottleneck in building large knowledge-based systems. The goal of our research is to develop a new, simpler method for knowledge engineering, one that constructs large knowledge bases by instantiating and assembling small, reusable components. This has led us to confront basic issues of semantic composition, such as the nature of the "building blocks" and composition operations that enable common-sense reasoning.

February 16th
3:00pm

Dr. Leslie Pack Kaelbling [email][web]
MIT Artificial Intelligence Laboratory [web]

Why Robbie Can't Learn:
The Difficulty of Learning in Autonomous Agents

In recent years, machine learning methods have enjoyed great success in a variety of applications. Unfortunately, on-line learning in autonomous agents has not generally been one of them. Reinforcement-learning methods that were developed to address problems of learning agents have been most successful in off-line applications. In this talk, I will briefly review the basic methods of reinforcement learning, point out some of their shortcomings, argue that we are expecting too much from such methods, and speculate about how to build complex, adaptive autonomous agents. I will back up the speculations with recent results demonstrating that a small amount of human-provided input can dramatically speed learning in a real mobile robot.
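For readers unfamiliar with the basic methods the talk reviews, the following is a minimal tabular Q-learning sketch. It is purely illustrative and assumes a hypothetical environment object with reset(), step(), and an actions list; it is not code from the speaker's robot work.

import random
from collections import defaultdict

# Minimal tabular Q-learning sketch (illustrative only; `env`, its states,
# actions, and rewards are hypothetical placeholders).
def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)  # maps (state, action) -> estimated long-run return
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Epsilon-greedy exploration: usually act greedily, sometimes at random.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # One-step temporal-difference (Q-learning) update.
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q

How a "small amount of human-provided input" is used to speed learning on the robot is the subject of the talk itself and is not reflected in this sketch.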

March 9th
3:00pm

Paper Discussion: "Theory of Mind for a Humanoid Robot" [pdf]
by Brian Scassellati
MIT Artificial Intelligence Laboratory

If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a theory of mind. This paper presents the theories of Leslie and Baron-Cohen on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.

Note: Brian Scassellati will be giving a job talk on Thursday, March 29 at 11am in ACES 2.302.

March 23rd
3:00pm

James Bednar [email][web]
Neural Networks Research Group [web]
UT Department of Computer Sciences

Cara Cashon [email]
Children's Research Laboratory [web]
UT Department of Psychology

About Face: Measuring and Modeling Infants' Face Perception

This will be a two-part presentation on the development of face perception in infancy. We will focus on one key question: To what extent is face processing domain-specific, i.e., relying on algorithms and circuitry specific to faces, and to what extent is it driven by general learning mechanisms? J. Bednar will review empirical studies from newborns through 2 to 3 months of age, and will present computational modeling results that show how domain-specific preferences can be integrated seamlessly into a learning model. C. Cashon will discuss these issues from a behavioral information-processing perspective and will present data from studies of face processing in infants from 4 to 7 months of age. These results will be contrasted with previous findings on infants' processing of objects.

April 6th
3:00pm
ACES 3.408

Paper Discussion:
"Cognitive Multi-character Systems for Interactive Entertainment" [pdf]

by John Funge, Sony Computer Entertainment America &
Steven Shapiro, University of Toronto

Researchers in the field of artificial intelligence (AI) are becoming increasingly interested in computer games as a vehicle for their research. From the researcher's point of view this makes sense, as many interesting and challenging AI problems arise quite naturally in the context of computer games. Of course, the hope is that the relationship is a symbiotic one, so that the incorporation of AI techniques will lead to more interesting and enjoyable computer games. One question that arises, however, is how far this process can continue. In particular, what, if any, are the technical roadblocks to applying new AI research to interactive entertainment, and what would be the expected benefits? In this paper, we will therefore take a critical look at some AI techniques on the horizon of our own current research in developing the software infrastructure required to view interactive entertainment applications as cognitive multi-character systems.

For more papers on gaming and AI, visit the AAAI Symposium on Artificial Intelligence and Interactive Entertainment.

April 20th
3:00pm
ACES 2.402

Prof. Peter Macneilage [web] [email]
UT Department of Psychology [web]

A Lowly Origins View of Speech

The Chomskyan view of the evolution of speech is that it results from a genetic mutation that gave us, totally from scratch, an abstract innate knowledge of sound categories and sound sequencing patterns. Neodarwinian theory requires descent with modification, not a biological "big bang," for speech. What was available to be modified, and what modifications occurred? The "Frame/Content" theory of speech evolution says that speech began when we superimposed an already existing capacity for mandibular oscillation (jaw opening and closing) on vocal fold vibration (voice) to form syllable frames: open for vowels, closed for consonants. Beyond this, I will argue that particular sound patterns found in the simple output of modern infants, which are also present in languages, might have been in the earliest language because they involve basic biomechanical properties of the production system and self-organizational pressures, both of which were present in the earliest speakers.

April 27th
3:00pm
ACES 2.402

Prof. Thomas R. Shultz [web] [email]
Director, Laboratory for Natural and Simulated Cognition [web]
Department of Psychology [web]
McGill University [web]

Recruitment Algorithms in Neural Network Modeling

Artificial neural networks (ANNs) that grow their own internal topology as well as learn their connection weights show a number of advantages over static ANNs in terms of learning speed, ability to learn difficult problems, and cognitive modeling. One of these so-called generative learning algorithms, cascade-correlation (CC), builds a network topology by recruiting new hidden units into the network when error can no longer be reduced. CC has been applied to a variety of problems in cognitive development, including Piaget's conservation task. Such simulations have clarified a number of longstanding developmental issues about knowledge representation, representation change, stages, transitions, perceptual effects, and constructivism. A principal limitation of ANN simulations is that they fail to use existing knowledge in new learning by always beginning with random connection weights. A new extension of CC, called Knowledge-based Cascade-correlation (KBCC), overcomes this limitation by being able to recruit its own previously learned networks, in competition with single hidden units. Recruitment of relevant knowledge can significantly speed learning, and it has potential for building more accurate cognitive models.
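For a concrete picture of the recruitment idea, here is a minimal cascade-correlation sketch in Python on a toy XOR task. It is an illustrative simplification, not Shultz's or Fahlman's implementation, and it omits the KBCC extension: when output training can no longer reduce error, candidate units are trained to maximize their covariance with the residual error, and the best candidate is frozen and recruited as a new input feature.

import numpy as np

# Toy cascade-correlation sketch (illustrative only): single sigmoid output,
# plain gradient steps, XOR training data.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def with_bias(a):
    return np.hstack([a, np.ones((a.shape[0], 1))])

def train_output(features, y, epochs=2000, lr=0.5):
    # Train the output unit (logistic regression) on the current feature set.
    w = rng.normal(scale=0.1, size=features.shape[1])
    for _ in range(epochs):
        pred = sigmoid(features @ w)
        grad = features.T @ (pred - y) / len(y)
        w -= lr * grad
    return w

def train_candidate(features, residual, epochs=2000, lr=0.2):
    # Train one candidate hidden unit to maximize |covariance| with the residual error.
    w = rng.normal(scale=0.5, size=features.shape[1])
    for _ in range(epochs):
        act = sigmoid(features @ w)
        cov = np.sum((act - act.mean()) * (residual - residual.mean()))
        sign = np.sign(cov) if cov != 0 else 1.0
        dact = act * (1 - act)
        # Gradient ascent on |covariance| with respect to the candidate's input weights.
        w += lr * (features.T @ (sign * (residual - residual.mean()) * dact))
    return w, abs(cov)

features = with_bias(X)   # inputs (+ bias) feed the output unit and all candidates
hidden_weights = []       # frozen input weights of recruited hidden units

for _ in range(5):        # recruit at most 5 hidden units
    w_out = train_output(features, y)
    residual = y - sigmoid(features @ w_out)
    if np.mean(residual ** 2) < 1e-3:
        break
    # Error has effectively plateaued: train a small pool of candidates, keep the best.
    candidates = [train_candidate(features, residual) for _ in range(4)]
    best_w, _ = max(candidates, key=lambda c: c[1])
    hidden_weights.append(best_w)                    # freeze the recruited unit
    new_unit = sigmoid(features @ best_w)[:, None]   # its output becomes a new feature
    features = np.hstack([features, new_unit])

print("final predictions:", np.round(sigmoid(features @ train_output(features, y)), 2))

KBCC, as described in the abstract, would additionally place previously trained networks in the candidate pool alongside single hidden units and recruit whichever competitor correlates best with the remaining error.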

Recommended Reading:

Buckingham, D., & Shultz, T. R. (2000). The developmental course of distance, time, and velocity concepts: A generative connectionist model. Journal of Cognition and Development, 1, 305-345. [PDF]

Shultz, T. R., & Rivest, F. (2000). Using knowledge to speed learning: A comparison of knowledge-based cascade-correlation and multi-task learning. Proceedings of the Seventeenth International Conference on Machine Learning (pp. 871-878). San Francisco: Morgan Kaufmann. [PDF]

May 4th
3:00pm
ACES Auditorium

Harold Henry Chaput [web][email]
UT Department of Computer Sciences [web]

Built to Serve: The Ethics of Engineering a Slave Race

The field of Artificial Intelligence states as its goal the creation of an intelligent being through means other than reproduction. The purpose of this endeavor is to get machines to do tasks that can currently only be performed by humans. It is not science fiction to say, then, that AI is working towards building a class of intelligent beings to perform labor. And yet, outside of science fiction, very little time is spent considering the moral implications of this work. Are the scientists and engineers building intelligent agents prepared to deal with the consequences of their success? There are many reasons offered for dismissing these concerns, most of which have been used throughout history to justify human slavery.

But perhaps the most prevalent dismissal is the belief that artificially intelligent beings are impossible, or at least very far in the future. However, the accelerating rate of innovation and discovery in computer science, neuroscience, and psychology makes the goal of AI seem more possible and proximal than ever. Moreover, I contend that, given the goal of AI to create intelligent beings, arguments about its possibility or timeframe do not absolve scientists of their responsibility. The activation of the first artificially intelligent being will mark the dawn of a new age in humanity, and how that being is treated will set a precedent from that day forward. However remote or unlikely that day may be, we should start preparing for it now.

Past Schedules

Fall 2000

Spring 2000

fai (fA) n. Archaic. [Middle English]: Faith.