Forum for Artificial Intelligence

Archive



[ Home   |   About FAI   |   Upcoming talks   |   Past talks ]



This website is the archive for past Forum for Artificial Intelligence talks. Please click this link to navigate to the list of current talks.

FAI meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Catherine Andersson.






[ Past talks ]





Mon, August 13, 2012, 2:00PM | Michael Bowling, University of Alberta | Abstraction with an Adversary
Fri, September 7, 2012, 11:30AM | Silvio Savarese, Univ of Michigan | Understanding the 3D World from Images
Fri, September 14, 2012, 11:00AM | Jeff Clune, Cornell University | Automatically generating regular, modular neural networks with computational abstractions of evolution and developmental biology
Fri, September 21, 2012, 11:00AM | Warren Powell, Princeton | Unifying the Jungle of Stochastic Optimization
Thu, October 18, 2012, 11:00AM | Alex Smola, UC Berkeley | Learning at Scale
Fri, October 19, 2012, 11:00AM | Wolf Ketter, Erasmus University | The Power Trading Agent Competition
Fri, November 9, 2012, 11:00AM | Richard Voyles, University of Denver | Distributed Sensing, Computation and Actuation: From Heterogeneous Wireless Control Networks to Structured Computational Polymers
Fri, November 9, 2012, 3:30PM | Geert-Jan Kruijff, German Research Center for Artificial Intelligence | Asymmetric agency and social sentience in modeling situated communication for human-robot teaming
Fri, November 16, 2012, 11:00AM | Dr. Karthik Dantu, Harvard University | Challenges in Building a Swarm of Robotic Bees
Wed, November 28, 2012, 11:00AM | Jude Shavlik, University of Wisconsin-Madison | Improving Learning and Inference in Statistical Relational Learning
Fri, November 30, 2012, 11:00AM | Tamara Berg, Stony Brook University | Learning from Descriptive Text
Fri, December 7, 2012, 11:00AM | Chris Callison-Burch, Johns Hopkins University | Large-scale paraphrasing for natural language understanding and generation
Thu, January 24, 2013, 1:00PM | Dr. Xiaofeng Ren, University of Washington and Intel Science and Technology Center for Pervasive Computing | RGB-D Perception: Solving Real-World Computer Vision with Consumer Depth Cameras
Fri, February 1, 2013, 11:00AM | Mohan Sridharan [talk 1], Texas Tech University | Towards Autonomy in Human-Robot Collaboration
Fri, February 1, 2013, 3:00PM | Mohan Sridharan [talk 2], Texas Tech University | Integrating Answer Set Programming and Probabilistic Planning on Robots
Fri, February 8, 2013, 11:00AM | Joshua Bongard, University of Vermont | How Embodied Cognition Could Shape the Way Roboticists Think
Wed, February 13, 2013, 11:00AM | Qiang Yang, Hong Kong University of Science and Technology | Transfer learning: Algorithms and Applications
Thu, February 14, 2013, 11:00AM | Gregory Dudek, McGill School of Computer Science | Robot Teams to Assist Humans in Scientific Discovery
Wed, February 20, 2013, 11:00AM | Trevor Darrell, Univ of Cal Berkeley | Visual Learning for Real-World Interaction
Fri, March 1, 2013, 11:00AM | Mirella Lapata, School of Informatics, University of Edinburgh | Grounded Models of Semantic Representation
Wed, March 27, 2013, 11:00AM | Gregory Kuhlmann and Jefferson Provost, Apple | Fraud Prevention for eCommerce: A Data Driven Approach
Fri, April 5, 2013, 11:00AM | Mark Riedl, Georgia Institute of Technology | Intelligent Narrative Generation: Creativity, Engagement, and Cognition
Fri, April 12, 2013, 11:00AM | Keith Sullivan, George Mason University | Hierarchical Multiagent Learning from Demonstration
Fri, April 19, 2013, 11:00AM | Justin Hart, Yale University | Robot Self Modeling
Mon, July 22, 2013, 2:00PM | Charless Fowlkes, UC Irvine | How can object detectors exploit growing quantities of training data?
Mon, July 29, 2013, 2:00PM | Fei Sha, USC | Probabilistic Models of Learning Latent Similarity
Fri, August 2, 2013, 11:00AM | Juergen Schmidhuber, Swiss AI Lab IDSIA | Neural Network ReNNaissance
Tue, August 6, 2013, 11:00AM | Jan Peters, Technische Universität Darmstadt | Machine Learning of Motor Skills for Robotics
Mon, August 12, 2013, 11:00AM | Shivaram Kalyanakrishnan, Yahoo! Labs Bangalore | PAC Subset Selection in Stochastic Multi-armed Bandits
Fri, August 23, 2013, 11:00AM | Percy Liang, Stanford University | Learning Latent-Variable Models of Language

Monday, August 13, 2012, 2:00PM



Abstraction with an Adversary

Michael Bowling   [homepage]

University of Alberta

The Computer Poker Research Group at the University of Alberta has for well over a decade developed the strongest poker playing programs in the world. We have tested them in competition against other programs, winning 20 of 33 events since the inauguration of the AAAI Computer Poker Competition in 2006. We have also tested them against top professional players, becoming, in 2008, the first to beat professional poker players in a meaningful competition. All of this success originates from our key approach: when facing an intractably large game, abstract the game to a smaller one and reason in that game. Recently, this approach has been shown to be on shaky ground, or rather on no ground at all. In this talk, I'll be looking down to see what, if anything, the poker success is standing on, and what this line of research means for real-world applications that don't involve an apparent adversary.

About the speaker:

I am an associate professor at the University of Alberta. My research focuses on machine learning, games, and robotics, and I'm particularly fascinated by the problem of how computers can learn to play games through experience. I am the leader of the Computer Poker Research Group, which has built some of the best poker playing programs on the planet. The programs have won international AI competitions as well as being the first to beat top professional players in a meaningful competition. I am also a principal investigator in the Reinforcement Learning and Artificial Intelligence (RLAI) group and the Alberta Ingenuity Centre for Machine Learning (AICML). I completed my Ph.D. at Carnegie Mellon University, where my dissertation focused on multiagent learning and I was extensively involved in the RoboCup initiative. My research has been featured on the television programs Scientific American Frontiers, National Geographic Today, and Discovery Channel Canada, as well as appearing in the New York Times, Wired, on CBC and BBC radio, and twice in exhibits at the Smithsonian Museums in Washington, DC.

Friday, September 7, 2012, 11:30AM



Understanding the 3D World from Images

Silvio Savarese   [homepage]

Univ of Michigan

In this talk I will introduce a novel paradigm for jointly addressing two fundamental problems in computer vision: 3D reconstruction and object recognition. Most of the state-of-the-art methods deal with these two tasks separately. Methods for object recognition typically describe the scene as a list of object class labels, but are unable to account for their 3D spatial organization. Most of the approaches for 3D scene modeling produce accurate metric reconstructions but are unable to infer the semantic content of their components. A major line of work from my lab in recent years is to explore methodologies that seek to fill this gap and to coherently describe objects and object components while simultaneously integrating their 3D spatial arrangement in the scene's physical space. This research is relevant to many application areas such as autonomous navigation, robotics, automatic 3D modeling of urban environments and surveillance.

About the speaker:

Silvio Savarese is an Assistant Professor of Electrical and Computer Engineering at the University of Michigan, Ann Arbor. After earning his Ph.D. in Electrical Engineering from the California Institute of Technology in 2005, he was a Beckman Institute Fellow at the University of Illinois at Urbana-Champaign from 2005 to 2008. He is the recipient of a TRW Automotive Endowed Research Award (2012), an NSF CAREER Award (2011), and a Google Research Award (2010). In 2002 he was awarded the Walker von Brimer Award for outstanding research initiative. He served as workshops chair and area chair for CVPR 2010 and as area chair for ICCV 2011, and will be an area chair for CVPR 2013. His research interests include computer vision, object recognition and scene understanding, activity recognition, shape representation and reconstruction, human visual perception, and visual psychophysics.

Friday, September 14, 2012, 11:00AM



Automatically generating regular, modular neural networks with computational abstractions of evolution and developmental biology

Jeff Clune   [homepage]

Cornell University

I will describe how to combine computational abstractions of evolution and developmental biology to automatically produce modular, regular neural networks (digital models of brains). The properties generated, such as functional modules, symmetries, and repeated motifs, are desirable properties found in biological brains. These properties are key innovations in our quest to generate artificially intelligent robots that rival their natural counterparts. Such structurally organized neural networks can exploit the regularity of problems and increasingly outcompete previous methods as problem regularity increases. Moreover, the functional modularity of such networks enables building block modules to be quickly rewired, facilitating learning and adaptation to new challenges. I will also briefly describe how the same algorithm can generate complex, recognizable three-dimensional objects, enabling us to simultaneously design the bodies of robots along with their neural controllers.

About the speaker:

Jeff Clune is a Postdoctoral Fellow in Hod Lipson's lab at Cornell University, funded by a Postdoctoral Research Fellowship from the US National Science Foundation, and will soon be an assistant professor in the Computer Science Department at the University of Wyoming. Jeff has a bachelor's degree in philosophy from the University of Michigan and a Ph.D. in computer science and a master's degree in philosophy from Michigan State University.

Friday, September 21, 2012, 11:00AM



Unifying the Jungle of Stochastic Optimization

Warren Powell   [homepage]

Princeton

The variety of applications and computational challenges in stochastic optimization has created a diverse set of communities that use names such as Markov decision processes, stochastic programming, approximate dynamic programming, reinforcement learning, stochastic search, simulation-optimization, and stochastic control. Dividing these communities are differences in terminology and notation, although the more subtle differences lie in the motivating applications, which include issues such as scalar vs. vector-valued decisions and model-based vs. model-free settings. In this talk, I will provide a modeling framework for sequential decision problems, and use this to unify several major fields by identifying four fundamental classes of policies. I will then create bridges between several communities, identifying similarities hidden by differences in notation and terminology, as well as important differences.
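For orientation, sequential decision problems of the kind such a framework addresses are often written in a canonical form like the following (the notation here is a common convention, assumed rather than taken from the talk):

```latex
\max_{\pi} \; \mathbb{E}\left\{ \sum_{t=0}^{T} C\bigl(S_t, X^{\pi}(S_t)\bigr) \right\}
\quad \text{subject to} \quad
S_{t+1} = S^{M}\bigl(S_t, X^{\pi}(S_t), W_{t+1}\bigr)
```

Here S_t is the state, X^pi the decision function (policy), W_{t+1} the exogenous information arriving after the decision, and C the contribution function; the four classes of policies mentioned above can be read as different ways of constructing X^pi.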

About the speaker:

Warren B. Powell is a professor in the Department of Operations Research and Financial Engineering at Princeton University, where he has taught since 1981. He is the director of CASTLE Laboratory (http://www.castlelab.princeton.edu), which specializes in the development of stochastic optimization models and algorithms with applications in transportation and logistics, energy, health, and finance. His work in transportation produced a network planning model that was used by the entire LTL trucking industry, and a real-time dispatch model for truckload trucking that is being used to dispatch over 65,000 trucks. He pioneered the development of approximate dynamic programming for high-dimensional resource allocation problems, which is being used in rail, truckload trucking, the Air Force, and the management of spare parts for aircraft. He recently established the Princeton Laboratory for Energy Systems Analysis (http://energysystems.princeton.edu) to take this work into the area of energy systems. The author or coauthor of over 180 refereed publications, he is an INFORMS Fellow, the author of Approximate Dynamic Programming: Solving the Curses of Dimensionality, and coauthor of Optimal Learning (both published by Wiley). He is a recipient of the Wagner Prize and has twice been a finalist in the prestigious Edelman competition. He has also served in a variety of editorial and administrative positions for INFORMS, including the INFORMS Board of Directors, Area Editor for Operations Research, President of the Transportation Science Section, and numerous prize and administrative committees.

Thursday, October 18, 2012, 11:00AM



Learning at Scale

Alex Smola   [homepage]

UC Berkeley

In this talk Alexander Smola will survey a number of problems arising when learning at scale. After an overview of problems and systems used in large-scale inference, he will discuss strategies for parameter distribution and how they can be used to perform inference in massive graphical models. Subsequently he will discuss methods for accelerating function evaluation. This addresses the issues of scalability both in terms of efficiency and problem size.

About the speaker:

Alexander Smola studied physics in Munich at the University of Technology, Munich, at the Università degli Studi di Pavia, and at AT&T Research in Holmdel. During this time he was at the Maximilianeum München and the Collegio Ghislieri in Pavia. In 1996 he received his Master's degree from the University of Technology, Munich, and in 1998 his doctoral degree in computer science from the Technical University of Berlin. Until 1999 he was a researcher at the IDA Group of the GMD Institute for Software Engineering and Computer Architecture in Berlin (now part of the Fraunhofer Gesellschaft). After that, he worked as a Researcher and Group Leader at the Research School for Information Sciences and Engineering of the Australian National University. From 2004 onwards he worked as a Senior Principal Researcher and Program Leader of the Statistical Machine Learning Program at NICTA.

Friday, October 19, 2012, 11:00AM



The Power Trading Agent Competition

Wolf Ketter   [homepage]

Erasmus University

The Power Trading Agent Competition (Power TAC) is a competitive simulation that models a "liberalized" retail electrical energy market, where competing business entities or "brokers" offer energy services to customers through tariff contracts, and must then serve those customers by trading in a wholesale market. Brokers are challenged to maximize their profits by buying and selling energy in the wholesale and retail markets, subject to fixed costs and constraints. Costs include fees for publication and withdrawal of tariffs, and distribution fees for transporting energy to their contracted customers. Costs are also incurred whenever there is an imbalance between a broker's total contracted energy supply and demand within a given timeslot.

The simulation environment models a wholesale market, a regulated distribution utility, and a population of energy customers, situated in a real location on Earth during a specific period for which weather data is available. The wholesale market is a relatively simple call market, similar to many existing wholesale electric power markets, such as Nord Pool in Scandinavia or the FERC markets in North America; unlike the FERC markets, however, we model a single region and therefore do not model locational marginal pricing. Customer models include households and a variety of commercial and industrial entities, many of which have production capacity (such as solar panels or wind turbines) as well as electric vehicles. All have "real-time" metering to support allocation of their hourly supply and demand to their subscribed brokers, and all are approximate utility maximizers with respect to tariff selection, although the factors making up their utility functions may include aversion to change and complexity, which can retard uptake of marginally better tariff offers. The distribution utility models the regulated natural monopoly that owns the regional distribution network, and is responsible for maintaining its infrastructure and for real-time balancing of supply and demand. The balancing process is a market-based mechanism that uses economic incentives to encourage brokers to achieve balance within their portfolios of tariff subscribers and wholesale market positions, in the face of stochastic customer behaviors and weather-dependent renewable energy sources.

The broker with the highest bank balance at the end of the simulation wins. I'll report results and insights from the first competition, held in September 2012.
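As a concrete illustration of the per-timeslot imbalance cost described above, here is a toy sketch. The function name, signature, and flat per-kWh penalty are assumptions made for illustration; the actual Power TAC balancing mechanism is market-based and more involved.

```python
# Toy sketch of a broker's per-timeslot balancing charge (hypothetical pricing rule).

def imbalance_cost(supply_kwh: float, demand_kwh: float,
                   balancing_price_per_kwh: float) -> float:
    """Charge a broker incurs when contracted supply and demand diverge."""
    imbalance = supply_kwh - demand_kwh
    # Any deviation, long or short, is penalized at the balancing price.
    return abs(imbalance) * balancing_price_per_kwh
```

For example, imbalance_cost(120.0, 100.0, 0.05) charges the broker 1.0 for a 20 kWh surplus in that timeslot.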

About the speaker:

Wolf Ketter is Associate Professor of Information Systems at the Department of Decision and Information Sciences at the Rotterdam School of Management of Erasmus University. He received his Ph.D. in Computer Science from the University of Minnesota in 2007. He founded and runs the Learning Agents Research Group at Erasmus (LARGE) and the Erasmus Center for Future Energy Business. The primary objective of LARGE is to research, develop, and apply autonomous and mixed-initiative intelligent agent systems to support human decision making in the areas of business networks, electronic markets, energy grids, and supply-chain management. This energy research aims to enable the robust, intelligent, efficient, and sustainable energy networks of the future. He is the founder and chair of the Erasmus Forum for Future Energy Business, an annual forum held in Rotterdam. He co-chaired the TADA workshop at AAAI 2008 and was general chair of the Trading Agent Competition (TAC) in 2009; he has been a member of the board of directors of the Association for Trading Agent Research (ATAR) since 2009 and its chair since 2010. He leads Power TAC, a new TAC competition on energy retail markets; the pilot competition was held at the International Joint Conference on Artificial Intelligence (IJCAI) 2011 in Barcelona, and the inaugural competition will be held at AAMAS 2012 in Valencia. Since 2011 he has also served as chair of the IEEE Task Force on Energy Markets. He was program co-chair of the International Conference on Electronic Commerce (ICEC) 2011. His research has been published in various information systems and computer science journals, such as AI Magazine, Decision Support Systems, Electronic Commerce Research and Applications, Energy Policy, European Journal of Information Systems, INFORMS OR/MS Today, INFORMS Information Systems Research, and International Journal of Electronic Commerce. He serves on the editorial board of Electronic Commerce Research and Applications. He will be a keynote speaker at the International KES Conference on Agents and Multi-agent Systems Technologies and Applications (Dubrovnik, Croatia, 2012) and the Agents and Data Mining Interaction Workshop (Valencia, Spain, 2012); he was a keynote speaker at the Scandinavian Energy Conference in Oslo in 2011, and has given distinguished lectures at international conferences and renowned universities such as Harvard, the University of Minnesota, the University of Liverpool, RWTH Aachen, the University of Connecticut, TU Delft, KIT, the University of Mannheim, and the University of St. Thomas.

Friday, November 9, 2012, 11:00AM



Distributed Sensing, Computation and Actuation: From Heterogeneous Wireless Control Networks to Structured Computational Polymers

Richard Voyles   [homepage]

University of Denver

Robotics and Cyber-Physical Systems are ushering in a new age of engineering design with new techniques and new materials. The old way of design in which we assume decoupled, low-order, block-diagonal models is breaking down at all levels and all scales. This presents numerous problems as our ad hoc design methods are not able to properly account for, test and validate systems of greatly increasing complexity. But it also presents numerous opportunities for new capabilities, such as soft robotics, in which the behavior of a designed artifact is tightly coupled to its environment.

In this talk, I will describe steps we are taking at opposite ends of the spectrum of distributed control infrastructure to realize advanced, intelligent systems. At the coarse end of the spectrum, we are developing a framework for heterogeneous wireless control networks that incorporates reconfigurable hardware, as well as reconfigurable software, into design and implementation tools for dynamic, self-adaptive systems. Based on our Port-Based Object/Real-Time OS (PBO/RT), we are developing tools for software code migration and hardware partial dynamic reconfiguration to realize an embedded virtual machine that simplifies hardware-independent distributed control design.

At the fine end of the spectrum, we are using shape deposition manufacturing techniques to produce 1-D, 2-D and 3-D polymer building blocks that incorporate sensing, actuation, cognition, and structure into convenient, specifiable smart materials. Our cognitive architecture is based on fully-interconnected Synthetic Neural Networks, which implement parallel artificial neurons from polymer electronics. We have produced memristive bistable devices to create artificial synapses and have a simple design for a single-transistor artificial soma to achieve a sigmoidal activation function, yielding the possibility of producing synthetic, trainable, massively parallel cognitive circuits. Our actuation mechanisms, which traditionally have been difficult to achieve in all-polymer materials with usable power levels, are based on active and passive fluids. We are using "active" fluid-based actuation schemes, such as water hammer based impulsive actuation, to channel meaningful forces for actuation as well as "passive" fluid-based actuation from electrorheological and magnetorheological fluids which can be used to dampen forces.

About the speaker:

Dr. Voyles is an NSF Program Director in the newly formed I-Corps Program, which grants up to $50,000 per project. Dr. Voyles received his B.S. in Electrical Engineering from Purdue University, his M.S. from the Department of Mechanical Engineering at Stanford University in 1989, and his Ph.D. in Robotics from the School of Computer Science at Carnegie Mellon University in 1997. In addition to I-Corps, he is a Program Director in the National Robotics Initiative and Robust Intelligence. His research interests are in the areas of cyber-physical systems, robotics, and artificial intelligence; development of small, resource-constrained robots and robot teams for urban search and rescue and surveillance; sensors and sensor calibration, particularly haptic and force sensors; real-time control; and robotic manipulation. Dr. Voyles' industrial experience includes Dart Controls, IBM Corp., Integrated Systems, Inc., and Avanti Optics. He has also served on the boards of various start-ups and non-profit groups, including The Works, a hands-on, minds-on engineering discovery center.

Friday, November 9, 2012, 3:30PM



Asymmetric agency and social sentience in modeling situated communication for human-robot teaming

Geert-Jan Kruijff   [homepage]

German Research Center for Artificial Intelligence

This talk looks at situated collaboration between humans and robots, for example in complex situations like disaster response. This collaborative context gives rise to several issues. One, we need to start from the assumption of *asymmetric agency*: humans and robots experience and understand the world differently. The asymmetry implies that symbols cannot be considered abstract types, embodying an objective truth. Instead, different agents employ different types in building up understanding; more precisely, they construct subjective judgments as proofs for why a particular type can be applied to some experience. This gives rise to another issue, namely how these different actors can arrive at some mutually shared understanding or "common ground." We see this as the need to align judgments, and formally construct it as an alignment between (abductive) proofs over multi-agent beliefs and intentions. This places grounding meaning in context in a new light: grounding meaning, between actors, is the alignment of judgments against subjective experience, within a social, situated context. The intentional aspects of social dynamics thereby play just as much of a role as beliefs do. In a collaborative activity, meaning becomes socially and situationally construed referential content, with a semi-persistent nature that is subject both to social dynamics (why the meaning is construed and used) and environment dynamics (what the meaning is construed for). This requires an actor, particularly a robot, to be explicitly aware of these social dynamics and its own role(s) in them. The talk captures this through the notion of social sentience.

About the speaker:

Geert-Jan Kruijff is a Senior Researcher at the Language Technology Lab of the German Research Center for Artificial Intelligence (DFKI GmbH) in Saarbrücken, Germany. There, he leads the Talking Robots group, which runs several international projects on human-robot interaction. He obtained his engineering title ("ir") from the University of Twente in the Netherlands, and his PhD in informatics/mathematical linguistics from Charles University in Prague, Czech Republic. During his studies he spent time at Texas Tech University (1993/94) and the University of Edinburgh (1999-2001). He is interested in developing theories and systems for exploring what makes a robot understand and produce spoken dialogue. His research ranges from fundamental work to applied, out-in-the-field projects such as urban search & rescue and child-robot interaction in hospital settings.

Friday, November 16, 2012, 11:00AM



Challenges in Building a Swarm of Robotic Bees

Dr. Karthik Dantu   [homepage]

Harvard University

The RoboBees project is a 5-year, $10M NSF Expeditions in Computing effort to build a swarm of flapping-wing micro-aerial vehicles (MAVs). Each MAV is projected to weigh 1 g, run on about 500 mW of power, and be about 3 cm long. A swarm of RoboBees is expected to contain a few hundred individuals, similar to bee colonies in nature. There are numerous challenges in designing flapping-wing vehicles at this size, broadly divided into brain, body, and colony areas. The brain area aims to design custom low-power onboard computing and sensing, along with the power electronics to drive the entire system. The body area focuses on novel actuation mechanisms, bio-mimetic wing design, and novel control mechanisms for a RoboBee. The colony effort deals with programming and coordinating a swarm of such MAVs for specific applications such as crop pollination and urban search-and-rescue. In this talk, I will describe some of the advances made along these lines, with an emphasis on coordination of a swarm of RoboBees.

About the speaker:

Karthik Dantu is a postdoctoral fellow in the School of Engineering and Applied Sciences at Harvard University. His interests are broadly in designing large-scale systems that combine computing, communication, sensing, and actuation, such as multi-robot systems, networked embedded systems, and cyber-physical systems. As part of the RoboBees project, his work has focused on programming and coordination of swarms of MAVs. Prior to Harvard, he obtained his Ph.D. under the guidance of Prof. Gaurav Sukhatme in the Computer Science Department at the University of Southern California, working on various aspects of connectivity and coordination in both static and mobile sensor networks.

Wednesday, November 28, 2012, 11:00AM



Improving Learning and Inference in Statistical Relational Learning

Jude Shavlik   [homepage]

University of Wisconsin-Madison

The two primary mathematical underpinnings of artificial intelligence have been first-order predicate logic and probability. Over the past 15 or so years there has been substantial research activity on approaches that combine the two, producing various forms of probabilistic logic. Within machine learning, this work is commonly called Statistical Relational Learning (SRL).

At Wisconsin we have been investigating an approach to SRL where we learn probabilistic concepts expressed as a sequence of first-order regression trees. In such trees, nodes are expressions in first-order logic and the leaves are numbers (hence the phrase 'regression trees,' rather than the more common 'decision trees'). I will present our learning algorithms for two SRL knowledge representations, Relational Dependency Networks (RDNs) and Markov Logic Networks (MLNs), and describe their performance on a variety of 'real world' testbeds, including comparison to alternate approaches.
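To make the representation concrete, here is a minimal, illustrative sketch of such a tree; the types and the encoding of facts are assumptions made for illustration, not the authors' implementation.

```python
# Toy first-order regression tree: internal nodes test a logical condition
# against a set of ground facts, and leaves hold numbers.
from dataclasses import dataclass
from typing import Callable, FrozenSet, Tuple, Union

Facts = FrozenSet[Tuple[str, ...]]   # e.g. frozenset({("advises", "anna", "bob")})

@dataclass
class Leaf:
    value: float                     # regression value at this leaf

@dataclass
class Node:
    test: Callable[[Facts], bool]    # first-order condition, e.g. "exists y: advises(x, y)"
    if_true: Union["Node", Leaf]
    if_false: Union["Node", Leaf]

def evaluate(tree: Union[Node, Leaf], facts: Facts) -> float:
    """Walk the tree, applying logical tests until a numeric leaf is reached."""
    while isinstance(tree, Node):
        tree = tree.if_true if tree.test(facts) else tree.if_false
    return tree.value
```

A learned concept is then represented as a sequence of such trees whose leaf values are summed, in the style of gradient boosting.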

Time permitting, I will also present our work on using a relational database management system (RDBMS) and an optimization method called 'dual decomposition' to substantially speed up inference ('question answering') in MLNs. Our approach allowed us to handle inference in an MLN testbed with 240 million facts (which led to 64 billion 'factors' in the grounded Markov network).

Joint work with Bernd Gutmann, Kristian Kersting, Tushar Khot, Sriraam Natarajan, Feng Niu, Chris Re, and Ce Zhang. Papers available at http://pages.cs.wisc.edu/~shavlik/mlrg/publications.html

About the speaker:

Jude Shavlik is a Professor of Computer Sciences and of Biostatistics and Medical Informatics at the University of Wisconsin - Madison, and is a Fellow of the American Association for Artificial Intelligence. He has been at Wisconsin since 1988, following the receipt of his PhD from the University of Illinois for his work on Explanation-Based Learning. His current research interests include machine learning and computational biology, with an emphasis on using rich sources of training information, such as human-provided advice. He served for three years as editor-in-chief of the AI Magazine and serves on the editorial board of about a dozen journals. He chaired the 1998 International Conference on Machine Learning, co-chaired the First International Conference on Intelligent Systems for Molecular Biology in 1993, co-chaired the First International Conference on Knowledge Capture in 2001, was conference chair of the 2003 IEEE Conference on Data Mining, and co-chaired the 2007 International Conference on Inductive Logic Programming. He was a founding member of both the board of the International Machine Learning Society and the board of the International Society for Computational Biology. He co-edited, with Tom Dietterich, "Readings in Machine Learning." His research has been supported by DARPA, NSF, NIH (NLM and NCI), ONR, DOE, AT&T, IBM, and NYNEX.

Friday, November 30, 2012, 11:00AM



Learning from Descriptive Text

Tamara Berg   [homepage]

Stony Brook University

People communicate using language, whether spoken, written, or typed. A significant amount of this language describes the world around us, especially the visual world of an environment or the world depicted in images and video. In addition, there exist billions of photographs with associated text available on the web; examples include web pages, captioned or tagged photographs, and video with speech or closed captioning. Such visually descriptive language is potentially a rich source of 1) information about the world, especially the visual world, 2) training data for how people construct natural language to describe imagery, and 3) guidance for where computational visual recognition algorithms should focus their efforts. In this talk I will describe several projects related to images and descriptive text, including our recent approaches to automatically generating natural language that describes images, our newly released collection of 1 million captioned images, and explorations of how visual content relates to what people find important in images. All papers, created datasets, and demos are available on my webpage at: http://tamaraberg.com/

About the speaker:

Tamara Berg received her B.S. in Mathematics and Computer Science from the University of Wisconsin, Madison in 2001. She then completed a PhD from the University of California, Berkeley in 2007 and spent 1 year as a research scientist at Yahoo! Research. She is currently an Assistant Professor in the computer science department at Stony Brook University and a core member of the consortium for Digital Art, Culture, and Technology (cDACT). Her research straddles the boundary between Computer Vision and Natural Language Processing with applications to large scale recognition and multimedia retrieval.

Friday, December 7, 2012, 11:00AM



Large-scale paraphrasing for natural language understanding and generation

Chris Callison-Burch   [homepage]

Johns Hopkins University

I will present my method for learning paraphrases - pairs of English expressions with equivalent meaning - from bilingual parallel corpora, which are more commonly used to train statistical machine translation systems. My method pairs English phrases like (thrown into jail, imprisoned) when they share an aligned foreign phrase like festgenommen. Because bitexts are large and because a phrase can be aligned to many different foreign phrases (including phrases in multiple foreign languages), the method extracts a diverse set of paraphrases. For thrown into jail, we not only learn imprisoned, but also arrested, detained, incarcerated, jailed, locked up, taken into custody, and thrown into prison, along with a set of incorrect/noisy paraphrases. I'll show a number of methods for filtering out the poor paraphrases, by defining a paraphrase probability calculated from translation model probabilities, and by re-ranking the candidate paraphrases using monolingual distributional similarity measures.
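For reference, the paraphrase probability in bilingual pivoting is standardly defined by marginalizing over the shared foreign phrases f (a sketch of the idea; the talk's exact model may add components such as the distributional re-ranking mentioned above):

```latex
p(e_2 \mid e_1) \;\approx\; \sum_{f} p(e_2 \mid f)\, p(f \mid e_1)
```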

In addition to lexical and phrasal paraphrases, I'll show how the bilingual pivoting method can be extended to learn meaning-preserving syntactic transformations like the English possessive rule or dative shift. I'll describe a way of using synchronous context free grammars (SCFGs) to represent these rules. This formalism allows us to re-use much of the machinery from statistical machine translation to perform sentential paraphrasing. We can adapt our "paraphrase grammars" to do monolingual text-to-text generation tasks like sentence compression or simplification.

I'll also briefly sketch future directions for adding a semantics to the paraphrases, which my lab will be doing for the upcoming DARPA DEFT program.

About the speaker:

Chris Callison-Burch is an Associate Research Professor in the Computer Science Department at Johns Hopkins University, where he has built a research group within the Center for Language and Speech Processing (CLSP). He has accepted a tenure-track faculty position at the University of Pennsylvania starting in September 2013. He received his PhD from the University of Edinburgh's School of Informatics and his bachelor's degree from Stanford University's Symbolic Systems Program. His research focuses on statistical machine translation, crowdsourcing, and broad-coverage semantics via paraphrasing. He has contributed to the research community by releasing open source software like Moses and Joshua, and by organizing the shared tasks for the annual Workshop on Statistical Machine Translation (WMT). He is the Chair of the North American chapter of the Association for Computational Linguistics (NAACL) and serves on the editorial boards of Computational Linguistics and the Transactions of the ACL.

Thursday, January 24, 2013, 1:00PM



RGB-D Perception: Solving Real-World Computer Vision with Consumer Depth Cameras

Dr. Xiaofeng Ren   [homepage]

University of Washington and Intel Science and Technology Center for Pervasive Computing

Kinect-style depth cameras offer real-time synchronized color and depth data in a convenient package at a consumer price. Such RGB-D cameras are dramatically changing the research and application landscapes of vision, robotics, and HCI. I will take you through our journey of investigating and promoting the joint uses of color and depth toward rich sensing solutions under real-world conditions, from 3D modeling of indoor environments to fine-grained recognition of objects, scenes, and activities. Our main approach is feature learning, designing and learning rich features in hierarchical structures that seamlessly apply to both color and depth. Our work on hierarchical matching pursuit uses efficient sparse coding algorithms, namely Orthogonal Matching Pursuit and K-SVD, as building blocks to extract rich features at varying scales and deformations, outperforming hand-designed features by large margins on both color and RGB-D object recognition. Such learned features also help to improve the state of the art on a variety of tasks such as scene classification, labeling, and segmentation. RGB-D perception shines in both robustness and efficiency, and is on the fast track to becoming the general sensing solution for future pervasive and context-aware systems.
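As background on the sparse-coding building block named above, here is a minimal sketch of Orthogonal Matching Pursuit; it assumes unit-norm dictionary columns and is an illustration, not the paper's implementation.

```python
# Greedy sparse coding: pick the atom most correlated with the residual,
# re-fit coefficients by least squares over the chosen atoms, repeat.
import numpy as np

def omp(D: np.ndarray, x: np.ndarray, k: int) -> np.ndarray:
    """Encode signal x (shape d) with at most k atoms of dictionary D (shape d x n)."""
    support = []                 # indices of selected atoms
    residual = x.copy()
    coeffs = np.zeros(0)
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares re-fit restricted to the selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code = np.zeros(D.shape[1])
    code[support] = coeffs
    return code
```

K-SVD then alternates between this encoding step and updating the dictionary atoms one at a time.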

About the speaker:

Xiaofeng Ren is a research scientist at Intel Labs and an affiliate assistant professor at the University of Washington. His research interests are broadly in the areas of computer vision and its applications, including image features, grouping and segmentation, object recognition, scene understanding, and video analysis. His current focus is on understanding and solving computer vision problems in everyday life settings. He received his Ph.D. from the University of California, Berkeley, and his B.S. from Zhejiang University. Prior to joining Intel in 2008, he was on the research faculty of the Toyota Technological Institute at Chicago.

Friday, February 1, 2013, 11:00AM



Towards Autonomy in Human-Robot Collaboration

Mohan Sridharan [talk 1]   [homepage]

Texas Tech University

Real-world domains characterized by non-deterministic action outcomes and unforeseen dynamic changes frequently make it difficult for robots to process all sensory inputs, model the entire domain or operate without any human feedback. Humans, on the other hand, are unlikely to have the time and expertise to interpret raw sensory inputs or provide elaborate feedback in complex domains. Therefore, a central challenge in robotics research is to enable robots to use sensor inputs to operate autonomously when possible, acquiring and using high-level human feedback based on need and availability.

We seek to enable such autonomy by jointly addressing the associated learning, adaptation and collaboration challenges, exploiting their mutual dependencies to create novel opportunities to address the individual challenges. In this talk, I shall describe the interplay between hierarchical planning and bootstrap learning algorithms that enable one or more robots to: (a) represent, reason with and revise (common sense) domain knowledge; (b) adapt learning, sensing and information processing to the task at hand; (c) learn models of domain objects based on contextual and appearance-based cues; and (d) acquire and use high-level human feedback based on need and availability. I shall present results of evaluating these algorithms in simulation and on robots deployed in indoor and outdoor domains. If time permits, I shall briefly illustrate use of the underlying algorithms in other application domains such as climate science and agricultural irrigation management.

About the speaker:

Mohan Sridharan is an Assistant Professor of Computer Science at Texas Tech University. Prior to his current appointment, he was a Research Fellow in the School of Computer Science at the University of Birmingham (UK), working on the EU Cognitive Systems (CoSy) project between August 2007 and October 2008. He received his Ph.D. (Aug 2007) in Electrical and Computer Engineering at The University of Texas at Austin. Dr. Sridharan's research interests include machine learning, planning, computer vision, and cognitive science, as applied to autonomous mobile robots. Furthermore, he is interested in designing learning and inference algorithms for big data domains characterized by a significant amount of uncertainty.

Friday, February 1, 2013, 3:00PM



Integrating Answer Set Programming and Probabilistic Planning on Robots

Mohan Sridharan [talk 2]   [homepage]

Texas Tech University

To collaborate with humans, robots need the ability to represent, reason with and revise domain knowledge; adapt sensing and processing to the task at hand; and learn from human feedback. In this talk, I describe the integration of non-monotonic logic programming and probabilistic decision making to address these challenges. Specifically, Answer Set Programming (ASP) is used to represent, reason with and revise domain knowledge obtained from sensor inputs and human feedback, while hierarchical partially observable Markov decision processes (POMDPs) are used to adapt visual sensing and information processing to the task at hand. All algorithms are evaluated in simulation and on wheeled robots localizing target objects in indoor domains.

About the speaker:

Mohan Sridharan is an Assistant Professor of Computer Science at Texas Tech University. Prior to his current appointment, he was a Research Fellow in the School of Computer Science at the University of Birmingham (UK), working on the EU Cognitive Systems (CoSy) project between August 2007 and October 2008. He received his Ph.D. (Aug 2007) in Electrical and Computer Engineering at The University of Texas at Austin. Dr. Sridharan's research interests include machine learning, planning, computer vision, and cognitive science, as applied to autonomous mobile robots. Furthermore, he is interested in designing learning and inference algorithms for big data domains characterized by a significant amount of uncertainty.

Friday, February 8, 2013, 11:00AM



How Embodied Cognition Could Shape the Way Roboticists Think

Joshua Bongard   [homepage]

University of Vermont

Embodied cognition dictates that intelligent behavior is something that emerges out of interactions between an animal's (or robot's) body, brain and environment. In this talk I will give three examples of how embodied cognition can change the way we approach robotics. First, I will show how robots that 'grow legs' can master legged locomotion faster than robots with fixed legs. Second, I will show how environmental complexity leads to the evolution of complexly-shaped robots. And finally I will demonstrate some first simulations of a physical soft robot developed by the Whitesides group at Harvard.

About the speaker:

Josh Bongard is an associate professor in Computer Science at the University of Vermont. He was named a Microsoft Research New Faculty Fellow in 2006 and, in the same year, a member of the TR35, MIT Technology Review's top 35 innovators under the age of 35. In 2011 he was awarded a Presidential Early Career Award for Scientists and Engineers (PECASE) by U.S. President Barack Obama. He currently serves as a vice chair of the UVM Complex Systems Spire, and is the co-author of the popular science book "How the Body Shapes the Way We Think: A New View of Intelligence" (MIT Press).

Wednesday, February 13, 2013, 11:00AM



Transfer learning: Algorithms and Applications

Qiang Yang   [homepage]

Hong Kong University of Science and Technology

In machine learning and data mining, we often encounter situations where we have an insufficient amount of high-quality data in a target domain, but we may have plenty of auxiliary data in related domains. Transfer learning aims to exploit these additional data to improve learning performance in the target domain. In this talk, I will give an overview of some recent advances in transfer learning and discuss some innovative applications, such as learning in heterogeneous cross-media domains and in online recommendation systems.

About the speaker:

Qiang Yang is the head of Huawei Noah's Ark Research Lab and a professor in the Department of Computer Science and Engineering, Hong Kong University of Science and Technology. His research interests include data mining and artificial intelligence, including machine learning, planning, and activity recognition. He is a fellow of IEEE, IAPR, and AAAS. He received his PhD from the Computer Science Department of the University of Maryland, College Park in 1989. He was an assistant and then associate professor at the University of Waterloo between 1989 and 1995, and a professor and NSERC Industrial Research Chair at Simon Fraser University in Canada from 1995 to 2001. He was an invited speaker at IJCAI 2009, ACL 2009, SDM 2012, WSDM 2013, etc. He was elected a vice chair of ACM SIGART in July 2010. He is the founding Editor-in-Chief of the ACM Transactions on Intelligent Systems and Technology (ACM TIST), and is on the editorial board of IEEE Intelligent Systems and several other international journals. He has served as a PC co-chair and general co-chair of several international conferences, including ACM KDD 2010 and 2012, ACM RecSys 2013, ACM IUI 2010, etc. He serves as an IJCAI trustee and will be the PC chair for IJCAI 2015.

Thursday, February 14, 2013, 11:00AM



Robot Teams to Assist Humans in Scientific Discovery

Gregory Dudek   [homepage]

McGill School of Computer Science

I have been working with my students on the development of robots that can operate in outdoor environments, and particularly in shallow water (littoral environments), using the Aqua2 hexapod. Most recently, we have examined the use of a team of robots to survey shallow-water coral reefs and then return selected video footage in near real time to observers who may be located arbitrarily far away. This work entails coordination between flying and swimming vehicles, as well as interaction with human scuba divers. Due to the complexity of the environment and inherent communication limitations, the problems of multi-robot rendezvous, human-robot interaction, and dynamic task repartitioning all must be taken into consideration.

One key aspect of this problem is the automated selection of the most salient and notable features of the environment, to make the best use of the limited available bandwidth. We are specifically interested in the real-time summarization and detection of the most interesting events in a video sequence, for use by humans who will analyze the data either in real time, or offline. This selection process is driven by an unsupervised topic learning framework that operates in real time. The results of this effort to date seem to have potential utility not only in environmental assessment, which has been our primary target application, but to a range of potential robotics applications.

About the speaker:

Gregory Dudek is a Professor with the School of Computer Science, James McGill Chair, Associate Member of the Department of Electrical Engineering, and a member of the McGill Research Centre for Intelligent Machines (CIM). In 2010 he was awarded the Canadian Image Processing and Pattern Recognition Award for Research Excellence and Service to the Research Community. He currently serves as Director of the McGill School of Computer Science and recently became Scientific Director of the NSERC Canadian Field Robotics Network, a national research network. He has served previously as Director of the McGill Research Centre for Intelligent Machines. He obtained his PhD in Computer Science (computational vision) from the University of Toronto. He has published over 200 research papers on subjects including visual object description and recognition, marine robotics, robotic navigation and map construction, distributed system design, and biological perception. He has chaired, edited, and been otherwise involved in numerous national and international conferences, journals, and professional activities concerned with robotics, machine sensing, and computer vision.

Wednesday, February 20, 2013, 11:00AM



Visual Learning for Real-World Interaction

Trevor Darrell   [homepage]

Univ of Cal Berkeley

Contemporary vision research focuses on recognition challenges and data derived from media available on the Internet. While the large-scale datasets this has enabled have led to great progress, methods tuned to these challenges can surprisingly underperform on many real-world problems, and miss opportunities afforded by situated sensing strategies. In this talk I'll present recent results which leverage environment and domain constraints for large-scale recognition: I'll review new methods for domain adaptation, schemes for learning for complex fine-grained categories from limited training data using pose-normalized descriptors, and techniques for learning calibrated photometric models from unstructured image sets.

About the speaker:

Prof. Trevor Darrell's group is co-located at the University of California, Berkeley, and the UCB-affiliated International Computer Science Institute (ICSI), also located in Berkeley, CA. Prof. Darrell is on the faculty of the CS Division of the EECS Department at UCB and is the vision group lead at ICSI. Darrell's group develops algorithms to enable multimodal conversation with robots and mobile devices, and methods for object and activity recognition on such platforms. His interests include computer vision, machine learning, computer graphics, and perception-based human computer interfaces. Prof. Darrell was previously on the faculty of the MIT EECS department from 1999 to 2008, where he directed the Vision Interface Group. He was a member of the research staff at Interval Research Corporation from 1996 to 1999, and received his S.M. and Ph.D. degrees from MIT in 1992 and 1996, respectively. He obtained his B.S.E. degree from the University of Pennsylvania in 1988, having started his career in computer vision as an undergraduate researcher in Ruzena Bajcsy's GRASP lab.

Friday, March 1, 2013, 11:00AM



Grounded Models of Semantic Representation

Mirella Lapata   [homepage]

School of Informatics, University of Edinburgh

A popular tradition of studying semantic representation has been driven by the assumption that word meaning can be learned from the linguistic environment, as approximated by naturally occurring corpora. However, ample evidence suggests that language is grounded in perception and action. In this talk we focus on grounded models of word meaning and discuss how these can be formulated based on linguistic and visual data.

A major question in developing such models is the provenance of the visual modality. We investigate whether it can be approximated by feature norms (i.e., attributes native speakers consider important in describing the meaning of a word), image labels (i.e., keywords describing the objects depicted in an image) or automatically computed visual attributes. A second question concerns the mechanisms by which the two modalities can be integrated. We present a comparative study of models that create a bimodal meaning either by abstracting the two modalities into a joint semantic space or simply by concatenating them. Experimental results suggest that textual data can indeed be used for approximating visual information, and that joint models are superior.
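To illustrate the structural difference between the two integration mechanisms just described, here is a toy sketch. Using a truncated SVD as the "joint semantic space" is a simplifying assumption made for illustration; the models compared in the talk are more sophisticated.

```python
# Toy contrast between concatenation and a joint low-dimensional space.
# Both inputs have one row per word: text_vecs (n x d1), visual_vecs (n x d2).
import numpy as np

def concatenation_model(text_vecs: np.ndarray, visual_vecs: np.ndarray) -> np.ndarray:
    """Bimodal meaning as a simple concatenation of the two modalities."""
    return np.hstack([text_vecs, visual_vecs])

def joint_space_model(text_vecs: np.ndarray, visual_vecs: np.ndarray,
                      dim: int) -> np.ndarray:
    """Bimodal meaning abstracted into a shared low-dimensional space."""
    X = np.hstack([text_vecs, visual_vecs])
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim] * S[:dim]   # word representations in the joint space
```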

About the speaker:

Mirella Lapata is a Professor at the School of Informatics at the University of Edinburgh. She holds an MLT from the Language Technologies Institute at Carnegie Mellon University and a PhD in Natural Language Processing from the University of Edinburgh. Her research interests include statistical natural language processing, with an emphasis on unsupervised methods, mathematical programming, and generation applications. She serves as an associate editor of the Journal of Artificial Intelligence Research (JAIR) and is an action editor for Transactions of the Association for Computational Linguistics (TACL). She is the first recipient (2009) of the British Computer Society and Information Retrieval Specialist Group (BCS/IRSG) Karen Sparck Jones award. She has also received best paper awards at leading NLP conferences and financial support from the EPSRC (the UK Engineering and Physical Sciences Research Council).

Wednesday, March 27, 2013, 11:00AM



Fraud Prevention for eCommerce: A Data Driven Approach

Gregory Kuhlmann, Jefferson Provost   [homepage]

Apple

A consistent problem plaguing online merchants today is the growth and evolution of online credit card fraud. Multi-million dollar organized crime rings use stolen credit card numbers to steal and re-sell both physical and digital goods. The recent emergence of transferable digital goods such as gift certificates and in-game currencies means that risk decisions must be made in fractions of a second on enormous transaction volumes. In addition, pervasive use of online credentials linked to payment instruments extends the scope of the problem to online identity theft. The problem is highly adversarial, with fraudsters capable of adapting to and overcoming unsophisticated prevention measures within days or even hours.

In this talk, we will describe the problem of eCommerce fraud and outline various detection measures that online merchants employ. Like many merchants, Apple leverages techniques from data mining, machine learning, and statistics to identify fraud patterns and adapt to new trends. Some topics that we will discuss include evaluating patterns in historical data, building predictive models, inferencing through transaction linkages, active learning and anomaly detection.

About the speaker:

Dr. Jefferson Provost received his Ph.D. from the UT AI Lab in 2007, where he studied machine learning for robot navigation. Since then he has applied machine learning to account protection and fraud detection at Amazon.com and Apple, Inc. Dr. Greg Kuhlmann received his Ph.D. from UT Austin in 2010 under the supervision of Dr. Peter Stone. His graduate work included reinforcement learning, robotics, and general game playing. In industry, he has applied machine learning to communication link analysis for national defense, network anomaly detection, and fraud prevention at 21CT and Apple, Inc.

Friday, April 5, 2013, 11:00AM



Intelligent Narrative Generation: Creativity, Engagement, and Cognition

Mark Riedl   [homepage]

Georgia Institute of Technology

Storytelling is a pervasive part of the human experience--we as humans tell stories to communicate, inform, entertain, and educate. Indeed there is evidence to suggest that narrative is a fundamental means by which we organize, understand, and explain the world. In this talk, I present research on artificial intelligence approaches to the generation of narrative structures using planning and case-based reasoning. I discuss how computational story generation capabilities facilitate the creation of engaging, interactive user experiences in virtual worlds, computer games, and training simulations. I conclude with an ongoing research effort toward generalized computational narrative intelligence in which a system learns from experiences mediated through crowdsourcing platforms.

About the speaker:

Mark Riedl is an Assistant Professor in the Georgia Tech School of Interactive Computing and director of the Entertainment Intelligence Lab. Dr. Riedl's research focuses on the intersection of artificial intelligence, virtual worlds, and storytelling. The principal research question Dr. Riedl addresses through his research is: how can intelligent computational systems reason about and autonomously create engaging experiences for users of virtual worlds and computer games? Dr. Riedl earned a PhD degree in 2004 from North Carolina State University, where he developed intelligent systems for generating stories and managing interactive user experiences in computer games. From 2004 to 2007, Dr. Riedl was a Research Scientist at the University of Southern California Institute for Creative Technologies, where he researched and developed interactive, narrative-based training systems. Dr. Riedl joined the Georgia Tech College of Computing in 2007, and in 2011 he received a DARPA Young Faculty Award for his work on artificial intelligence, narrative, and virtual worlds. His research is supported by the NSF, DARPA, the U.S. Army, and Disney.

Friday, April 12, 2013, 11:00AM



Hierarchical Multiagent Learning from Demonstration

Keith Sullivan   [homepage]

George Mason University

Developing agent behaviors is often a tedious, time-consuming task consisting of repeated code, test, and debug cycles. Despite the difficulties, complex agent behaviors have been developed, but they required significant programming ability. In this talk, I'll present an alternative approach: training via learning from demonstration. The system, Hierarchical Training of Agent Behavior (HiTAB), iteratively learns agent behaviors represented as hierarchical finite state automata. By manually decomposing complex behaviors into simpler sub-behaviors, HiTAB requires a limited number of examples. While this places HiTAB closer to programming by demonstration rather than machine learning, it allows novice users to rapidly train complex agent behaviors.
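As an illustration of the hierarchical-automaton representation just described, here is a minimal sketch; the class names and transition interface are assumptions made for illustration, not the HiTAB code.

```python
# Toy hierarchical finite state automaton: a state is either an atomic
# behavior or a nested automaton, so a trained sub-behavior can be reused
# as a single state higher up in the hierarchy.
from dataclasses import dataclass
from typing import Callable, Dict, List, Union

Observation = Dict[str, float]   # sensor features at one tick

@dataclass
class Behavior:
    """Atomic behavior: something the agent can execute directly."""
    act: Callable[[Observation], None]

@dataclass
class Automaton:
    states: List[Union[Behavior, "Automaton"]]
    transition: Callable[[int, Observation], int]  # learned from demonstrations
    current: int = 0

    def step(self, obs: Observation) -> None:
        state = self.states[self.current]
        if isinstance(state, Behavior):
            state.act(obs)
        else:
            state.step(obs)                        # recurse into the sub-automaton
        self.current = self.transition(self.current, obs)
```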

The multiagent setting presents additional difficulties due to the large, high-dimensional learning space. Furthermore, as a supervised learning method, HiTAB requires that agents be told which micro-level behaviors to perform in various situations. This is sufficient for a single agent, since there is no distinction between micro- and macro-level behaviors. In the multiagent setting, however, an experimenter knows only which macro-level behavior to achieve, not the associated micro-level behaviors, which presents a difficult inverse problem. HiTAB uses an agent hierarchy to narrow the gap between micro- and macro-level behaviors, permitting rapid training of (potentially) large numbers of agents using a small number of samples.

I will present results from simulation and real robots (including RoboCup) demonstrating HiTAB's wide applicability.

About the speaker:

Keith Sullivan is a PhD candidate in computer science at George Mason University, supervised by Professor Sean Luke. His dissertation developed methods to train cooperative multiagent behaviors using learning from demonstration. His research interests include robotics, multiagent learning, and stochastic optimization. He helped develop MASON, an open-source Java multiagent simulation toolkit, and ECJ, a Java-based, open-source evolutionary computation toolkit. During his PhD, he received grants for extended research visits with Minoru Asada in Osaka and Daniele Nardi in Rome. He is the creator and leader of the RoboPatriots, GMU's humanoid robot soccer team, and co-developer of the FlockBots, an open-source differential-drive robot designed for embodied multiagent systems research.

Friday, April 19, 2013, 11:00AM



Robot Self Modeling

Justin Hart   [homepage]

Yale University

Traditionally, models of a robot's kinematics and sensors have been provided by designers through manual processes. These models are used for sensorimotor tasks such as manipulation and stereo vision. However, traditional techniques yield static models based on one-time calibrations or ideal engineering drawings: models that often fail to represent the actual hardware, or in which individual unimodal models, such as those describing kinematics and vision, may disagree with each other. My research instead constructs robots that learn unified models of themselves adaptively and online. My robot, Nico, creates a highly accurate self-representation through experience, and is able to use this self-representation for novel tasks, such as inferring the perspective of a mirror by watching its own motion reflected therein. This represents an important step in the disciplined study of self-awareness in robotic systems.

About the speaker:

Justin Hart is a Ph.D. candidate in the Department of Computer Science at Yale University, where he is advised by Professor Brian Scassellati. His research focuses on robotic self-modeling, in which robots learn models of their bodies and senses from data sampled during operation. He has also performed significant work in human-robot interaction, including studies on creating trust by manipulating social presence, attributions of agency, and the creation of lifelike motion. His work was recently featured in the Society of Manufacturing Engineers Innovation Watch List, and has appeared in media outlets such as New Scientist, BBC News, GE Focus Forward Films, and Google Solve for X.

Monday, July 22, 2013, 2:00PM



How can object detectors exploit growing quantities of training data?

Charless Fowlkes   [homepage]

UC Irvine

A natural question for computer vision is whether the accuracy of existing object detection systems is limited by the amount of available training data or by the features and learning algorithms used to train them. I'll describe a series of experiments we carried out to understand how discriminatively trained template-based detectors perform as we increase both the number of positive training examples and the model complexity. These results suggest some new ways in which template-based detectors can be grown non-parametrically to take better advantage of data. I will also present some recent results on using synthetically generated data to learn appearance models for partially occluded people.

About the speaker:

Charless Fowlkes is an Associate Professor in the Dept. of Computer Science at the University of California, Irvine. His research interests are in computational vision and its applications to biological image analysis. Prior to joining UCI, he received a PhD from UC Berkeley in 2005 and a BS from Caltech in 2000. He is a recipient of an NSF CAREER award and a Marr best-paper prize.

Monday, July 29, 2013, 2:00PM



Probabilistic Models of Learning Latent Similarity

Fei Sha   [homepage]

USC

Inferring similarity among data instances is essential to many learning problems. To date, metric learning has been the dominant paradigm. However, similarity is a richer and broader notion than what metrics entail. In this talk, I will describe Similarity Component Analysis (SCA), a new approach that overcomes the limitations of metric learning algorithms. SCA is a probabilistic graphical model that discovers latent similarity structures. For a pair of data instances, SCA not only determines whether or not they are similar, but also reveals why they are similar (or dissimilar). Empirical studies on the benchmark tasks of multiway classification and link prediction show that SCA outperforms state-of-the-art metric learning algorithms.
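The core intuition can be sketched compactly: several latent components each judge a pair with their own notion of similarity, and the pair counts as similar if any component fires, which is also what lets the model report why a pair is similar. The sketch below is our own simplified rendering of that idea; the bilinear scores, parameter shapes, and noisy-OR combination are illustrative assumptions, not the authors' code.

```python
# Simplified sketch of the intuition behind Similarity Component Analysis:
# K latent components each score a pair, and the pair is "similar" if at
# least one component fires (a noisy-OR). Parameterization is illustrative.
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 5                      # latent components, feature dimension
M = rng.normal(size=(K, D, D))   # one bilinear score matrix per component
b = rng.normal(size=K)           # per-component bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def component_probs(x, y):
    """P(component k judges the pair similar), one entry per component."""
    return sigmoid(np.array([x @ M[k] @ y + b[k] for k in range(K)]))

def similarity_prob(x, y):
    """Noisy-OR: similar if at least one latent component fires."""
    p = component_probs(x, y)
    return 1.0 - np.prod(1.0 - p)

x, y = rng.normal(size=D), rng.normal(size=D)
print(component_probs(x, y))  # *why*: which latent aspects match
print(similarity_prob(x, y))  # *whether*: overall similarity probability
```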

About the speaker:

Fei Sha is the Jack Munushian Early Career Chair and an assistant professor in the Dept. of Computer Science at the University of Southern California. His primary research interests are machine learning and its applications to speech and language processing, computer vision, and robotics. He won outstanding student paper awards at NIPS 2006 and ICML 2004. He was selected as a Sloan Research Fellow in 2013, won an Army Research Office Young Investigator Award in 2012, and was a member of the 2010 DARPA Computer Science Study Panel. He holds a Ph.D. in Computer and Information Science from the University of Pennsylvania, and a B.Sc. and M.Sc. in Biomedical Engineering from Southeast University (Nanjing, China).

Friday, August 2, 2013, 11:00AM



Neural Network ReNNaissance

Juergen Schmidhuber   [homepage]

Swiss AI Lab IDSIA

Our fast, deep / recurrent neural networks have many biologically plausible, non-linear processing stages. They won eight recent international pattern recognition competitions in a row, and are the first machine learning methods to achieve human-competitive or even superhuman performance on well-known vision benchmarks. We can also evolve big NN controllers without any supervision, using "compressed" encodings of NN weight matrices represented indirectly as a set of Fourier-type coefficients. Recently, the largest evolved vision-based NN controller to date, with over 1 million weights, learned to drive a car around a track using raw video images from the driver's perspective in the TORCS driving game.
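The "compressed" encoding can be illustrated with a small sketch: store only a handful of low-frequency transform coefficients as the genome, and reconstruct the full weight matrix with an inverse discrete cosine transform, so evolution searches a tiny coefficient vector instead of every weight. The matrix sizes and the DCT choice below are our own illustrative assumptions, not the exact IDSIA implementation.

```python
# Sketch of an indirect "compressed" weight encoding: a small genome of
# low-frequency DCT coefficients expands into a full weight matrix, so
# an evolutionary search operates on 64 numbers instead of 4096 weights.
import numpy as np
from scipy.fft import idctn

ROWS, COLS = 64, 64   # size of the decoded weight matrix
KEEP = 8              # genome holds only an 8x8 block of coefficients

def decode(genome):
    """Expand KEEP*KEEP coefficients into a ROWS x COLS weight matrix."""
    coeffs = np.zeros((ROWS, COLS))
    coeffs[:KEEP, :KEEP] = genome.reshape(KEEP, KEEP)
    return idctn(coeffs, norm="ortho")   # inverse 2-D DCT

genome = np.random.default_rng(0).normal(size=KEEP * KEEP)
W = decode(genome)
print(W.shape)   # (64, 64): 4096 smooth weights from a 64-number genome
```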

About the speaker:

Prof. Jürgen Schmidhuber is with the Swiss AI Lab IDSIA & USI & SUPSI (ex-TUM CogBotLab & CU). Since age 15 or so his main scientific ambition has been to build an optimal scientist, then retire. This is driving his research on self-improving Artificial Intelligence. His team won many international competitions and awards, and pioneered the field of mathematically rigorous universal AI and optimal universal problem solvers. He also generalized the many-worlds theory of physics to a theory of all constructively computable universes - an algorithmic theory of everything. His formal theory of creativity & curiosity & fun (1990-2010) explains art, science, music, and humor.

Tuesday, August 6, 2013, 11:00AM



Machine Learning of Motor Skills for Robotics

Jan Peters   [homepage]

Technische Universitat Darmstadt

Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework for learning motor skills in robotics that is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills from parameterized motor primitive policies, which act as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning for accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives enables learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning, and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness of the framework and its applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being.
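In this line of work, the parameterized motor primitives are commonly dynamic movement primitives (DMPs): a stable spring-damper system pulled toward a goal, shaped by a learned forcing term whose weights are the policy parameters. Below is a minimal one-dimensional sketch under our own illustrative constants and basis functions; it is not the speaker's implementation.

```python
# Minimal 1-D dynamic movement primitive (DMP) sketch: a critically damped
# spring-damper drives the state to the goal, while a learned forcing term
# (weights w over Gaussian bases) shapes the path. Constants illustrative.
import numpy as np

def dmp_rollout(w, x0=0.0, g=1.0, T=1.0, dt=0.001, K=100.0, D=20.0, alpha=4.0):
    centers = np.exp(-alpha * np.linspace(0, T, len(w)))  # basis centers in phase s
    widths = len(w) ** 1.5 / centers                      # common width heuristic
    x, v, s, traj = x0, 0.0, 1.0, []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)        # Gaussian basis activations
        f = (psi @ w) * s / psi.sum()                     # learned forcing term
        a = K * (g - x) - D * v + (g - x0) * f            # transformation system
        v += a * dt
        x += v * dt
        s += -alpha * s * dt                              # canonical system (phase decay)
        traj.append(x)
    return np.array(traj)

traj = dmp_rollout(w=np.zeros(10))  # zero weights: a plain reach to the goal
print(round(traj[-1], 3))           # converges near g = 1.0
```

Learning then amounts to fitting or improving the weight vector w, e.g., by regression on a demonstrated trajectory (imitation learning) or by policy search (reinforcement learning).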

About the speaker:

Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universitaet Darmstadt, and at the same time a senior research scientist and group leader at the Max Planck Institute for Intelligent Systems, where he heads the interdepartmental Robot Learning Group. Jan Peters has received the Dick Volz Best 2007 US PhD Thesis Runner-Up Award, the Robotics: Science & Systems Early Career Spotlight, the INNS Young Investigator Award, and the IEEE Robotics & Automation Society's Early Career Award.

Watch Online

Monday, August 12, 2013, 11:00AM



PAC Subset Selection in Stochastic Multi-armed Bandits

Shivaram Kalyanakrishnan   [homepage]

Yahoo! Labs Bangalore

We consider the problem of selecting, from among n real-valued random variables, a subset of size m of those with the highest means, based on efficiently sampling the random variables. This problem, which we denote Explore-m, finds application in a variety of areas, such as stochastic optimization, simulation and industrial engineering, and on-line advertising. The theoretical basis of our work is an extension of a previous formulation using multi-armed bandits that is devoted to identifying just the single best of n random variables (Explore-1). Under a PAC setting, we provide algorithms for Explore-m and bound their sample complexity.

Our main contribution is the LUCB algorithm, which, interestingly, bears a close resemblance to the well-known UCB algorithm for regret minimization. We derive an expected sample complexity bound for LUCB that is novel even for single-arm selection (Explore-1). We then improve the problem-dependent constant in this bound through a novel algorithmic variant called KL-LUCB. Experiments affirm the relative efficiency of KL-LUCB over other algorithms for Explore-m. Our contributions also include a lower bound on the worst-case sample complexity of such algorithms.
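The shape of the algorithm is easy to sketch: maintain confidence intervals on every arm's mean; each round, sample the weakest-looking member of the empirical top m and its strongest challenger from the rest; stop when their intervals separate by epsilon. The confidence radius below is a simplified stand-in, not the paper's exact constants.

```python
# Sketch of the LUCB idea for Explore-m (simplified confidence radius,
# not the paper's constants): repeatedly sample the two most ambiguous
# arms straddling the top-m boundary until their intervals separate.
import math, random

def lucb(arms, m, epsilon=0.05, delta=0.05):
    n = len(arms)
    counts, sums, t = [0] * n, [0.0] * n, 1

    def pull(i):
        nonlocal t
        counts[i] += 1
        sums[i] += arms[i]()          # arms[i] samples a stochastic reward
        t += 1

    for i in range(n):                # initialize with one sample per arm
        pull(i)

    while True:
        mean = [sums[i] / counts[i] for i in range(n)]
        rad = [math.sqrt(math.log(4 * n * t * t / delta) / (2 * counts[i]))
               for i in range(n)]
        top = set(sorted(range(n), key=lambda i: -mean[i])[:m])
        h = min(top, key=lambda i: mean[i] - rad[i])                  # weakest of top m
        l = max(set(range(n)) - top, key=lambda i: mean[i] + rad[i])  # best challenger
        if (mean[l] + rad[l]) - (mean[h] - rad[h]) < epsilon:
            return sorted(top)        # PAC guarantee: near-top-m w.h.p.
        pull(h)                       # sample both ambiguous arms
        pull(l)

# Bernoulli arms with unknown means; select the best m = 2.
means = [0.9, 0.8, 0.5, 0.4, 0.3]
arms = [lambda p=p: float(random.random() < p) for p in means]
print(lucb(arms, m=2))                # expect [0, 1] with high probability
```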

About the speaker:

Shivaram Kalyanakrishnan is a scientist at Yahoo! Labs Bangalore. His primary research interests lie in the fields of artificial intelligence and machine learning, spanning areas such as reinforcement learning, agents and multiagent systems, humanoid robotics, multi-armed bandits, and on-line advertising. He obtained his Ph.D. in Computer Science from the University of Texas at Austin (2011), and his B.Tech. in Computer Science and Engineering from the Indian Institute of Technology Madras (2004). He has extensively used robot soccer as a test domain for his research, and has actively contributed to initiatives such as RoboCup and the Reinforcement Learning competitions.

Friday, August 23, 2013, 11:00AM



Learning Latent-Variable Models of Language

Percy Liang   [homepage]

Stanford University

Effective information extraction and question answering require modeling of the deep syntactic and semantic structures of natural language. At the same time, acquiring training data specifying these full structures is prohibitively expensive. Our goal is therefore to learn models of language that can induce latent structures (e.g., parse trees) from raw observations (e.g., sentences) in an unsupervised way. First, I will present spectral methods for learning latent parse tree models. In contrast to existing algorithms for learning latent-variable models such as EM, our method has global convergence guarantees. Second, I will present a semantic model that maps natural language questions to answers via a latent database query, and show results on large-scale question answering.

About the speaker:

Percy Liang is an Assistant Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His research focuses on methods for learning richly structured statistical models from limited supervision, most recently in the context of semantic parsing in natural language processing. He won a best student paper award at the International Conference on Machine Learning in 2008, received the NSF, GAANN, and NDSEG fellowships, and is a 2010 Siebel Scholar.

[ FAI Archives ]

Fall 2022 - Spring 2023

Fall 2021 - Spring 2022

Fall 2020 - Spring 2021

Fall 2019 - Spring 2020

Fall 2018 - Spring 2019

Fall 2017 - Spring 2018

Fall 2016 - Spring 2017

Fall 2015 - Spring 2016

Fall 2014 - Spring 2015

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000