UTCS Colloquium/AI-Louis-Philippe Morency/USC: "Computational Study Of Nonverbal Social Communication," ACES 2.402, Monday, May 24, 2010, 11:00 a.m.

Contact Name: Jenna Whitney
Date: May 24, 2010 11:00am - 12:00pm

There is a sign-up schedule for this event that can be found at
http://www.cs.utexas.edu/department/webevent/utcs/events/cgi/list_events.cgi

Type of Talk: UTCS Colloquium/AI

Speaker/Affiliation: Louis-Philippe Morency/USC

Date/Time: Monday, May 24, 2010, 11:00 a.m.

Location: ACES 2.402

Host: Kristen Grauman

Talk Title: Computational Study Of Nonverbal Social Communication

Talk Abstract:

The goal of this emerging research field is to recognize, model and predict
human nonverbal behavior in the context of interaction with virtual humans,
robots and other human participants. At the core of this research field is
the need for new computational models of human interaction emphasizing the
multi-modal, multi-participant and multi-behavior aspects of human behavior.
This multi-disciplinary research topic overlaps the fields of multi-modal
interaction, social psychology, computer vision, machine learning and
artificial intelligence, and has many applications in areas as diverse as
medicine, robotics and education.

During my talk, I will focus on three novel approaches to achieve efficient
and robust nonverbal behavior modeling and recognition: (1) a new visual
tracking framework (GAVAM) with automatic initialization and bounded drift
which acquires online the view-based appearance of the object, (2) the use
of latent-state models in discriminative sequence classification
(Latent-Dynamic CRF) to capture the influence of unobservable factors on
nonverbal behavior and (3) the integration of contextual information
(specifically dialogue context) to improve nonverbal prediction and
recognition.
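As background for point (2), a brief sketch of the published Latent-Dynamic
CRF formulation (general background on the model, not a detail specific to
this talk): each label y_j is assigned its own disjoint set of hidden states
H_{y_j}, and a label sequence is scored by summing a linear-chain CRF over
all hidden-state sequences compatible with it,

\[
P(y \mid x; \theta) \;=\; \sum_{h \,:\, h_j \in H_{y_j}} P(h \mid x; \theta),
\qquad
P(h \mid x; \theta) \;=\; \frac{\exp\!\bigl(\sum_k \theta_k\, F_k(h, x)\bigr)}{Z(x; \theta)}.
\]

The hidden states h capture sub-structure within and between labels that a
fully observed CRF cannot represent, which is how the model accounts for the
"unobservable factors" mentioned in the abstract.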

Speaker Bio:

Dr. Louis-Philippe Morency is currently a research assistant professor at the
University of Southern California (USC) and a research scientist at the USC
Institute for Creative Technologies, where he leads the Multimodal
Communication and Computation Laboratory (MultiComp Lab). He received his
Ph.D. from the MIT Computer Science and Artificial Intelligence Laboratory in
2006. His main research interest is the computational study of nonverbal
social communication, a multi-disciplinary research topic that overlaps the
fields of multi-modal interaction, machine learning, computer vision, social
psychology and artificial intelligence. He developed "Watson", a real-time
library for nonverbal behavior recognition which became the de-facto standard
for adding perception to embodied agent interfaces. He received many awards
for his work on nonverbal behavior computation, including three best-paper
awards in 2008 (at various IEEE and ACM conferences). He was recently
selected by IEEE Intelligent Systems as one of the "Ten to Watch" for the
future of AI research.