Friday December 5, 2003 — 11:30 a.m.

Taylor 3.128

Multilingual Natural Language Processing: Integrating Insights from Linguistic Theory

Dr. Jonas Kuhn [web]

University of Texas at Austin, Linguistics Department

For English, an abundance of linguistic resources is available, including text corpora annotated with full syntactic and some limited semantic information. Many of the Natural Language Processing (NLP) techniques developed in the past decade rely heavily on such resources; by exploiting richly annotated corpora, the linguistic development effort for the derived NLP systems can be kept fairly low.

This talk reviews some options for moving from NLP systems for English to a multilingual scenario. Manual corpus annotation on a scale comparable to English is clearly not an option for more than a few other languages. Thus, NLP techniques that directly incorporate some of the cross-linguistic generalizations from linguistic theory may have an advantage in the multilingual context (although the development effort for the first few languages is comparatively large). The Parallel Grammar Development Project (ParGram) is a long-term collaborative effort (involving the Palo Alto Research Center, Fuji Xerox, the Universities of Stuttgart and Bergen, and other institutions) for developing linguistically rich grammars of English, French, German, Norwegian, Japanese, and other languages. The grammars are written in accordance with syntactic generalizations holding across these languages, while at the same time there is an emphasis on broad coverage of real-life corpora, using the state-of-the-art parsing and generation system XLE. For disambiguation, a log-linear probability model is trained from corpus data. As recent experiments show, the ParGram approach enables rapid development of new grammars for typologically similar languages (a Korean grammar was bootstrapped off the Japanese grammar).
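The log-linear disambiguation step can be sketched in miniature: each candidate parse is reduced to a feature-count vector, and trained weights turn the weighted feature sums into a probability distribution over the competing analyses. The feature names and weights below are invented for illustration only; they are not the actual ParGram/XLE feature set.

```python
import math

def parse_probabilities(candidates, weights):
    """Log-linear (maximum-entropy) disambiguation: each candidate parse is
    a dict of feature counts; its probability is the normalized exponential
    of the weighted feature sum."""
    scores = [sum(weights.get(f, 0.0) * v for f, v in feats.items())
              for feats in candidates]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Toy example: two candidate parses of one sentence, with hypothetical
# attachment/coordination features.
weights = {"vp_attach": 1.2, "np_attach": -0.3, "coord": 0.5}
parses = [{"vp_attach": 1, "coord": 1},   # parse A
          {"np_attach": 2}]               # parse B
probs = parse_probabilities(parses, weights)
best = max(range(len(parses)), key=lambda i: probs[i])
```

At parsing time, the highest-probability candidate is selected; at training time, the weights are fit to annotated corpus data by maximizing the likelihood of the correct analyses.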

Another route for multilingual NLP is the exploitation of parallel corpora (of aligned translated text) for "projecting" linguistic information from English to the target language(s). This promising approach may also benefit significantly from linguistic insights into the inventory of morphological and syntactic means of expression across languages, especially when the information to be projected is of a deeper, semantic nature.
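The projection idea can be illustrated with a toy sketch: given a word alignment between an annotated English sentence and its translation, token-level labels are copied across the alignment links. The sentences, tags, and alignment below are hypothetical; real projection must also cope with unaligned tokens, many-to-one links, and noisy automatic alignments.

```python
def project_annotations(source_tags, alignment, target_len):
    """Project token-level labels (e.g. POS tags) from a source sentence to
    its translation. `alignment` is a list of (source_index, target_index)
    pairs; target tokens with no alignment link are left as None."""
    projected = [None] * target_len
    for s, t in alignment:
        projected[t] = source_tags[s]
    return projected

# Hypothetical English-German sentence pair:
en_tags = ["DET", "NOUN", "VERB"]         # "the dog sleeps"
alignment = [(0, 0), (1, 1), (2, 2)]      # "der Hund schlaeft"
de_tags = project_annotations(en_tags, alignment, 3)
```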

About FAI...

The Forum for Artificial Intelligence meets every other week to discuss topics in artificial intelligence. FAI is a brown-bag lunch series this semester, so feel free to bring lunch with you. All are welcome to attend.

Please send questions or comments to
Matt MacMahon or Jeff Provost.

Full Schedule:

Friday October 31st, 2003
11:30am

Taylor Hall 3.128

POMDPs: Who Needs Them?

Dr. Anthony Cassandra [web]

St. Edward's University, Department of Computer Sciences

Uncertainty permeates many decision-making tasks. The partially observable Markov decision process (POMDP) is a computational model that can account for uncertainties in both action effects and perceptual stimuli. Most of my research has focused on algorithms and techniques for finding optimal and near-optimal solutions to problems that could be formulated as POMDPs.
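At the core of the POMDP model is the Bayesian belief update, which folds the uncertainty about action effects and observations into a probability distribution over states. A minimal sketch, assuming tabular transition probabilities T[a][s][s'] and observation probabilities O[a][s'][o] with states, actions, and observations as indices (the numbers below are illustrative, not from any problem discussed in the talk):

```python
def belief_update(belief, action, observation, T, O):
    """Bayesian belief update for a POMDP:
    b'(s') is proportional to O[a][s'][o] * sum_s T[a][s][s'] * b(s)."""
    n = len(belief)
    new_b = [O[action][s2][observation] *
             sum(T[action][s][s2] * belief[s] for s in range(n))
             for s2 in range(n)]
    z = sum(new_b)                 # normalizing constant P(o | b, a)
    return [b / z for b in new_b]

# Tiny two-state, one-action, two-observation example:
T = [[[0.9, 0.1], [0.1, 0.9]]]    # T[a][s][s']: state mostly persists
O = [[[0.8, 0.2], [0.3, 0.7]]]    # O[a][s'][o]: noisy observation of state
b = belief_update([0.5, 0.5], 0, 0, T, O)
```

A POMDP policy then maps belief states, rather than raw observations, to actions; computing good such policies is what makes the model expensive.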

After a detour through the world of corporate R&D and fledgling startups, I have more recently become involved in two separate projects, both of which have me revisiting the POMDP model and considering its usefulness. One project involves research in cognitive psychology in the domain of human spatial navigation. Here, POMDP models serve as a benchmark for tracking differences in human performance across numerous environmental changes. The second project involves the use of POMDP models to help improve decision making in a distributed agent environment. The purpose of the model in this system is to improve the overall robustness of the agent community in the face of significant failures across the network.

Friday November 7, 2003
10:00 am

ACES 2.402

Manifold Representations for Reinforcement Learning

Dr. William Smart [web]

Washington University in St. Louis

Reinforcement learning (RL) is a powerful machine learning paradigm for learning control policies for autonomous agents in worlds with discrete state spaces. However, attempts to generalize many of the key algorithms to deal with continuous multi-dimensional state spaces have not been completely successful. In this talk, we consider one particular set of techniques, known as value-function approximation (VFA). VFA replaces the tabular value-function representation of traditional algorithms with a general-purpose function approximator. We describe a particular failure mode of VFA, and argue that the cause is a flawed assumption about the topology of the space over which VFA operates. We propose a solution to this problem, based on the theory of differential manifolds, in which we create a new space over which to approximate the value function. By explicitly modeling the topology of this new space we can more accurately model the value function, and (we believe) establish convergence proofs, currently missing from VFA. We will present our initial experimental results, and discuss the current focus of our work.
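The simplest form of VFA replaces the value table with a linear approximator over hand-chosen features, updated by semi-gradient TD(0). The sketch below shows that update for a one-dimensional continuous state; the feature map and the trivial toy task are assumptions for illustration, not the failure case discussed in the talk.

```python
import random

def features(x):
    """Hypothetical feature map for a 1-D continuous state x in [0, 1]."""
    return [1.0, x, x * x]

def td0_linear(episodes, alpha=0.1, gamma=0.9):
    """Semi-gradient TD(0) with a linear value-function approximator
    V(x) = w . phi(x), replacing the tabular value function of the
    classical discrete-state algorithms."""
    w = [0.0] * 3
    for ep in episodes:                   # ep: list of (x, reward, x_next, done)
        for x, r, x2, done in ep:
            v = sum(wi * fi for wi, fi in zip(w, features(x)))
            v2 = 0.0 if done else sum(wi * fi for wi, fi in zip(w, features(x2)))
            delta = r + gamma * v2 - v    # TD error
            w = [wi + alpha * delta * fi for wi, fi in zip(w, features(x))]
    return w

# Toy task: every state yields reward 1 and terminates immediately,
# so the true value function is V(x) = 1 everywhere.
random.seed(0)
eps = [[(random.random(), 1.0, 0.0, True)] for _ in range(5000)]
w = td0_linear(eps)
v_mid = sum(wi * fi for wi, fi in zip(w, features(0.5)))
```

Because the approximator generalizes across nearby states, an update at one state perturbs the value estimates of others; when the feature space's notion of "nearby" disagrees with the task's actual topology, this generalization is exactly where VFA can go wrong, which is the flawed assumption the talk addresses.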

Friday November 21st, 2003
11:30am

Taylor Hall 3.128

The RoboCup Challenge: Progress and Research Results in Robot Soccer

Dr. Peter Stone [web]

UT Austin Artificial Intelligence Lab

RoboCup (The Robot Soccer World Cup) is an attempt to promote AI and robotics research by providing a common task, soccer, for evaluation of various theories, algorithms, and agent architectures. An ongoing international initiative currently involving more than 3000 participants from over 35 different countries, RoboCup has conducted 7 international competitions and workshops and is a growing challenge domain for AI and robotics. Past competitions have been staged using a complex software simulation, two different sizes of wheeled robots, and Sony Aibo (4-legged) robots, each of which has encouraged different challenging and innovative research directions. The long-term goal is to foster the creation of humanoid robots that can compete against the best human soccer teams by the year 2050.

In 2003, we created UT Austin Villa, a new RoboCup team. We met with some success in the 4-legged league and were the champions of the on-line coach competition in the simulation league. This talk gives an overview of the RoboCup initiative; recaps our experiences leading up to the 2003 competition; and gives some insight into our past and on-going research related to this domain. A recent positive result is the use of machine learning to learn the fastest known walk on the Sony Aibo robot.
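The walk-learning result can be caricatured as policy search over gait parameters: propose a perturbed parameter vector, time the resulting walk, and keep the perturbation if the robot moved faster. The sketch below uses simple stochastic hill climbing on a hypothetical smooth objective standing in for physical speed trials; the parameter vector, objective, and search method are illustrative assumptions, not the actual method or gait parameterization used on the Aibos.

```python
import random

def hill_climb_walk(speed_fn, init, step=0.05, iters=200, seed=0):
    """Stochastic hill climbing over a parameterized gait: perturb the
    parameter vector and keep the perturbation whenever the measured
    walking speed improves. `speed_fn` abstracts away the physical
    speed trial on the robot."""
    rng = random.Random(seed)
    params = list(init)
    best = speed_fn(params)
    for _ in range(iters):
        cand = [p + rng.uniform(-step, step) for p in params]
        s = speed_fn(cand)
        if s > best:
            params, best = cand, s
    return params, best

# Hypothetical two-parameter objective, maximized at (0.3, 0.7):
speed = lambda p: 1.0 - (p[0] - 0.3) ** 2 - (p[1] - 0.7) ** 2
params, best = hill_climb_walk(speed, [0.0, 0.0])
```

On a physical robot each evaluation of `speed_fn` is an expensive, noisy trial, which is why sample-efficient policy-search methods matter in this setting.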

Friday December 5th, 2003
11:30am

Taylor Hall 3.128

Multilingual Natural Language Processing: Integrating Insights from Linguistic Theory

Dr. Jonas Kuhn [web]

University of Texas at Austin, Linguistics Department


Past Schedules

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000

fai (fA) n. Archaic. [Middle English]: Faith.