Forum for Artificial Intelligence

Archive





This website is the archive for past Forum for Artificial Intelligence talks. Please click this link to navigate to the list of current talks.

FAI meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader interdisciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Catherine Andersson.






[ Upcoming talks ]





Fri, September 16
11:00AM
Nanyun (Violet) Peng
University of California, Los Angeles
Event-Centric Natural Language Understanding with Less Supervision
Fri, September 30
11:00AM
Sasa Misailovic
University of Illinois at Urbana-Champaign
Static Analysis for Differentiable Programming
Fri, October 7
11:00AM
Malihe Alikhani
University of Pittsburgh
Towards Inclusive and Equitable Human Language Technologies
Fri, October 14
11:00AM
Vicente Ordóñez
Associate Professor, Dept. of Computer Science, Rice University
On the Success of Large Scale Vision and Language Models
Fri, October 28
11:00AM
Mohit Bansal
University of North Carolina (UNC)
Unified and Efficient Multimodal Pretraining Across Vision and Language
Mon, October 31
4:00PM
Noam Brown
Facebook AI Research
ReBeL: Combining Deep Reinforcement Learning and Search for Imperfect-Information Games
Fri, November 4
11:00AM
Shinji Watanabe
Carnegie Mellon University
Explainable End-to-End Neural Networks for Far-Field Conversation Recognition
Fri, November 11
11:00AM
Fabio Ramos
University of Sydney and NVIDIA
Unleashing the Power of Differentiable Simulation with Probabilistic Inference for Sim2Real and Policy Learning
Fri, November 18
11:00AM
Colin Raffel
University of North Carolina, Chapel Hill
Building Machine Learning Models like Open-Source Software
Fri, January 20
11:00AM
Raymond J. Mooney
University of Texas at Austin
The New Era of Big Science AI: How can academics adapt to the new reality
Fri, January 27
11:00AM
Zachary Lipton
Carnegie Mellon University
Responsible Machine Learning’s Causal Turn: Promises and Pitfalls
Fri, February 24
11:00AM
Lerrel Pinto
New York University
Versatile Robot Learning through ‘Easier’ Robot Teaching
Fri, March 10
11:00AM
Pulkit Agrawal
MIT
Robot Learning for the Real World
Fri, April 21
11:00AM
Jonathan Kummerfeld
University of Sydney
Collaborative Human-AI Systems for Databases, Diplomacy, and more

Friday, September 16, 2022, 11:00AM



Event-Centric Natural Language Understanding with Less Supervision

Nanyun (Violet) Peng   [homepage]

University of California, Los Angeles

Events are central to human interactions with the world, yet event-centric natural language understanding (NLU) is challenging because it requires understanding the interactions between event triggers and their arguments, which usually span multiple sentences. The complex event argument structures and abstract event relations also make it difficult to collect abundant human annotations to train models. In this talk, I will discuss how we design deep structured models that compile the problem structures into constraints and combine them with deep neural networks for event and relation extraction. I will also introduce our recent work on leveraging generative language models for zero-shot and low-shot event extraction across multiple languages. Finally, I will briefly introduce some datasets and resources we have contributed for event-centric understanding.

About the speaker:

Nanyun (Violet) Peng is an Assistant Professor of Computer Science at the University of California, Los Angeles. She received her Ph.D. in Computer Science from the Center for Language and Speech Processing at Johns Hopkins University. Her research focuses on the generalizability of NLP models, with applications to creative language generation, low-resource information extraction, and zero-shot cross-lingual transfer. Her work has won an Outstanding Paper Award at NAACL and the Best Paper Award at the AAAI Deep Learning on Graphs workshop, and has been featured in an IJCAI early career spotlight. Her research has been supported by several DARPA, IARPA, and NIH grants, as well as several industrial research awards.

Watch Online

Friday, September 30, 2022, 11:00AM



Static Analysis for Differentiable Programming

Sasa Misailovic   [homepage]

University of Illinois at Urbana-Champaign

While formal reasoning in the machine learning domain has achieved remarkable success in certifying properties specified over complex, highly non-linear functions (e.g., DNNs), lifting this formal reasoning to these functions' derivatives has been severely understudied. Further, because gradient computations and, more broadly, differentiable programming make up the backbone of modern machine learning, the needs of practitioners have rapidly outpaced formal development on the programming languages side. To address these challenges, I present a framework for abstract interpretation of differentiable programming and show how it allows us to cleanly, formally, and fully compositionally reason about both a function (e.g., a DNN or an optimization objective) and its derivatives in a sound manner, even in the face of points of non-differentiability. For instance, this idea can be used for the new problem of computing Lipschitz certificates of compositions of neural networks with contextual perturbations, as well as formally certifying properties over an optimization landscape, even if these functions are only piecewise differentiable. I also show how this framework can be generalized to derivatives of arbitrary order and instantiated with more expressive abstract domains to further improve its generality and precision. With this work, we make the first step toward unlocking the potential to define, analyze, and verify a brand-new set of formal properties expressed over derivatives, for a broad and general class of programs.

About the speaker:

Sasa Misailovic is an Associate Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He received his PhD from MIT in Summer 2015. He works at the intersection of programming languages, compilers, and software engineering. His work rethinks programming systems for modern applications in which noisy data, unreliable communication, approximate computation, and uncertain decisions are fundamental traits. At Illinois, his work redesigns the entire computing stack, from applications to hardware, with the abstractions for approximation and uncertainty as central tenets for developing machine learning, data analytics, multimedia processing, and robotics programs. His research attacks three "grand challenges" for future programming systems: (1) How can the next-generation programming systems and hardware interoperate to give acceptably accurate results with maximum performance and energy savings? (2) How can these systems rigorously quantify the noise and inaccuracy from the inputs and computation? (3) How can these systems help developers write reliable software that operates in uncertain environments? Find more about Sasa's research at http://misailo.cs.illinois.edu/research.html.

Watch Online

Friday, October 7, 2022, 11:00AM



Towards Inclusive and Equitable Human Language Technologies

Malihe Alikhani   [homepage]

University of Pittsburgh

The rapidly changing technological landscape and societal needs of language technology users call for a deeper understanding of the impact of natural language processing models on user behaviors, and for designing culturally responsible and inclusive language technologies that can benefit a diverse population. I present three of our recent works: discourse-aware text generation models for automatic social media moderation and mediation, equitable dialogue generation models based on learning theory, and multimodal machine learning models for sign language processing. Finally, I describe my research vision: to build inclusive and collaborative communicative systems and grounded artificial intelligence by leveraging the cognitive science of language use and formal methods of machine learning.

About the speaker:

Malihe Alikhani is an assistant professor of computer science in the School of Computing and Information at the University of Pittsburgh. She got her Ph.D. in computer science and a graduate certificate in cognitive science from Rutgers University in 2020. Her research interests center on using representations of communicative structure, machine learning, and cognitive science to design practical and inclusive NLP systems for social good. Her work has received multiple best paper awards at ACL, UAI, INLG, and UMAP.

Watch Online

Friday, October 14, 2022, 11:00AM



On the Success of Large Scale Vision and Language Models

Vicente Ordóñez   [homepage]

Associate Professor, Dept. of Computer Science, Rice University

Large-scale models pretrained on paired images with textual descriptions have had an outsized impact on computer vision in the past two years. While previous computer vision models were usually built to recognize a limited set of object categories, current models based on paired image-text pretraining can be used to recognize an arbitrary set of object categories with minimal effort. We demonstrate how some limitations of these models can be overcome to enable their use for visual grounding and in low-resource settings. Moreover, we also demonstrate how vision-language models could be used for purely textual tasks such as machine translation by taking advantage of images as an intermediate signal. Another area that has received increased attention is text-to-image synthesis. We showcase Text2Scene, our compositional scene generator, and discuss how it relates to ongoing efforts in obtaining increasingly general and realistic text-to-image synthesis models. Finally, I will discuss some current limitations, as well as opportunities for improving and applying these models in real-world scenarios.

About the speaker:

Vicente Ordóñez-Román is an Associate Professor in the Department of Computer Science at Rice University and an Amazon Visiting Academic at Amazon Alexa AI. His research focus is on building visual recognition models that can perform tasks that leverage both images and text. He is a recipient of a Best Paper Award at the conference on Empirical Methods in Natural Language Processing (EMNLP) 2017 and the Best Paper Award -- Marr Prize at the International Conference on Computer Vision (ICCV) 2013. He has also been the recipient of an NSF CAREER Award, an IBM Faculty Award, a Google Faculty Research Award, and a Facebook Research Award. From 2016-2021, he was an Assistant Professor in the Department of Computer Science at the University of Virginia. He obtained a Ph.D. in Computer Science at the University of North Carolina at Chapel Hill and has also been a visiting researcher at the Allen Institute for Artificial Intelligence and a visiting professor at Adobe Research.

Watch Online

Friday, October 28, 2022, 11:00AM



Unified and Efficient Multimodal Pretraining Across Vision and Language

Mohit Bansal   [homepage]

University of North Carolina (UNC)

In this talk, I will present work on enhancing the important aspects of unification, generalization, and efficiency in large-scale pretrained models across vision and language modalities, via different methods and directions of visual grounding for improving both multimodal and text-only NLU tasks. We will start by discussing joint vision and language pretraining models such as LXMERT (large-scale cross-modal pretraining). Next, we will present VL-T5 to unify several multimodal tasks (such as visual question answering, referring expression comprehension, visual reasoning/entailment, visual commonsense reasoning, captioning, and multimodal machine translation) by treating all these tasks as text generation. We will then discuss the direction of improving text-only NLU tasks via visually-grounded supervision and distillation from image and video knowledge transfer (Vokenization, VidLanKD). Finally, we will look at parameter/memory efficiency in VL pretraining via adapter/sidetuning, sparse sampling, and audio replacement methods. I will conclude with some big next challenges in this area to think about.

About the speaker:

Dr. Mohit Bansal is the John R. & Louise S. Parker Professor in the Computer Science department at University of North Carolina (UNC) Chapel Hill. He received his PhD from UC Berkeley and his BTech from IIT Kanpur. His research expertise is in natural language processing and multimodal machine learning, with a particular focus on grounded and embodied semantics, human-like language generation, and interpretable and generalizable deep learning. He is a recipient of DARPA Director's Fellowship, NSF CAREER Award, Army Young Investigator Award, Google Focused Research Award, Microsoft Investigator Fellowship, and outstanding paper awards at ACL, CVPR, EACL, COLING, and CoNLL. His service includes ACL Executive Committee, ACM Doctoral Dissertation Award Committee, Program Co-Chair for CoNLL 2019, ACL Americas Sponsorship Co-Chair, and Associate/Action Editor for TACL, CL, IEEE/ACM TASLP, and CSL journals. Webpage: https://www.cs.unc.edu/~mbansal/

Watch Online

Monday, October 31, 2022, 4:00PM



ReBeL: Combining Deep Reinforcement Learning and Search for Imperfect-Information Games

Noam Brown   [homepage]

Facebook AI Research

The combination of deep reinforcement learning and search has led to a number of high-profile successes in perfect-information games like Chess and Go, best exemplified by AlphaZero. However, prior algorithms of this form cannot cope with imperfect-information games like Poker. In contrast, ReBeL is a general framework for self-play reinforcement learning and search that provably solves any two-player zero-sum game. In the simpler setting of perfect-information games, ReBeL reduces to an algorithm similar to AlphaZero. Results in two different imperfect-information games show ReBeL converges to an approximate Nash equilibrium. We also show ReBeL achieves superhuman performance in heads-up no-limit Texas hold'em poker, while using far less domain knowledge than any prior poker AI.

About the speaker:

Noam Brown is a Research Scientist at Meta AI (FAIR) working on multi-agent artificial intelligence, with a particular focus on imperfect-information games. He co-created Libratus and Pluribus, the first AIs to defeat top humans in two-player no-limit poker and multiplayer no-limit poker, respectively. He has received the Marvin Minsky Medal for Outstanding Achievements in AI, was named one of MIT Tech Review’s 35 Innovators Under 35, and his work on Pluribus was named by Science to be one of the top 10 scientific breakthroughs of 2019. Noam received his PhD from Carnegie Mellon University, where he received the School of Computer Science Distinguished Dissertation Award.

Watch Online

Friday, November 4, 2022, 11:00AM



Explainable End-to-End Neural Networks for Far-Field Conversation Recognition

Shinji Watanabe   [homepage]

Carnegie Mellon University

This presentation introduces some of our group's attempts at building an end-to-end network that integrates various speech processing modules into a single neural network while maintaining explainability. We will focus on far-field conversation recognition as an example and show how to unify automatic speech recognition, denoising, dereverberation, separation, and localization. We will also introduce our latest techniques for combining self-supervised learning, careful pre-training/fine-tuning strategies, and multi-task learning within our integrated network. This work achieved the best performance reported in the literature on several noisy, reverberant speech recognition benchmarks, approaching clean-speech recognition performance.

About the speaker:

Shinji Watanabe is an Associate Professor at Carnegie Mellon University, Pittsburgh, PA. He received his B.S., M.S., and Ph.D. (Dr. Eng.) degrees from Waseda University, Tokyo, Japan. He was a research scientist at NTT Communication Science Laboratories, Kyoto, Japan, from 2001 to 2011, a visiting scholar at the Georgia Institute of Technology, Atlanta, GA, in 2009, and a senior principal research scientist at Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA, from 2012 to 2017. Prior to moving to Carnegie Mellon University, he was an associate research professor at Johns Hopkins University, Baltimore, MD, USA, from 2017 to 2020. His research interests include automatic speech recognition, speech enhancement, spoken language understanding, and machine learning for speech and language processing. He has published more than 300 papers in peer-reviewed journals and conferences and received several awards, including the best paper award from IEEE ASRU in 2019. He serves as a Senior Area Editor of the IEEE Transactions on Audio, Speech, and Language Processing. He was/has been a member of several technical committees, including the APSIPA Speech, Language, and Audio Technical Committee (SLA), the IEEE Signal Processing Society Speech and Language Technical Committee (SLTC), and the Machine Learning for Signal Processing Technical Committee (MLSP).

Watch Online

Friday, November 11, 2022, 11:00AM



Unleashing the Power of Differentiable Simulation with Probabilistic Inference for Sim2Real and Policy Learning

Fabio Ramos   [homepage]

University of Sydney and NVIDIA

Differentiable simulation can play a key role in scaling reinforcement learning to higher dimensional state and action spaces, while, at the same time, leveraging recent probabilistic inference methods for Bayesian domain randomization. In this talk, I will discuss advantages and disadvantages of differentiable simulation and connect it with two methods that use differentiability to speed up Bayesian inference, stochastic gradient Langevin dynamics and Stein Variational Gradient Descent. Our resulting Bayesian domain randomization approach can quickly produce posterior distributions over simulation parameters given real state-action trajectories, leading to robust controllers and policies. I will show examples in legged locomotion, robotics manipulation, and robotics cutting.

About the speaker:

Fabio is a Principal Research Scientist at NVIDIA and a Professor of machine learning and robotics at the School of Computer Science, University of Sydney. Previously, he was co-Director of the Centre for Translational Data Science and, before that, an Australian Research Council (ARC) Research Fellow at the Australian Centre for Field Robotics. Fabio's research focuses on modelling and understanding uncertainty for prediction and decision-making tasks, and includes Bayesian statistics, data fusion, anomaly detection, and reinforcement learning. Over the last ten years, Fabio has applied these techniques to robotics, mining and exploration, environmental monitoring, and neuroscience. His research has been recognized with several Best Paper and Best Student Paper Awards at conferences such as ECML, RSS, L4DC, and IROS.

Watch Online

Friday, November 18, 2022, 11:00AM



Building Machine Learning Models like Open-Source Software

Colin Raffel   [homepage]

University of North Carolina, Chapel Hill

Pre-trained models have become a cornerstone of modern ML pipelines thanks to the fact that they can provide improved performance with less labeled data on downstream tasks. However, these models are typically created by a resource-rich research group that unilaterally decides how a given model should be built, trained, and released, after which point it is left as-is until a better pre-trained model comes along to completely supplant it. In contrast, open-source development has proven that it is possible for a distributed community of contributors to work together to iteratively build complex and widely-used software. This kind of large-scale distributed collaboration is made possible through a mature set of tools including version control, continuous integration, merging, and more. In this talk, I will present a vision for building machine learning models in the way that open-source software is developed. I will also discuss our preliminary work on model merging, cheaply-communicable patches, hyper-distributed training on volunteer computing, and a version control system for model parameters.

About the speaker:

Colin Raffel is an Assistant Professor at UNC Chapel Hill and a Faculty Researcher at Hugging Face. His work aims to make it easy to get computers to do new things. Consequently, he works in machine learning (enabling computers to learn from examples) and natural language processing (enabling computers to communicate in natural language).

Watch Online

Friday, January 20, 2023, 11:00AM



The New Era of Big Science AI: How can academics adapt to the new reality

Raymond J. Mooney   [homepage]

University of Texas at Austin

Recently, high-impact progress in AI has involved immense neural models trained on massive datasets developed by large industry teams using huge computational resources. AI has entered a fundamentally new era of "big science" largely controlled by commercial industry, and most academic researchers are struggling to find their role in this new reality. Three options seem to present themselves: 1) Find small niche problems that complement industrial research without directly competing with it; 2) Push for large, open, publicly funded computational and data infrastructure to support academic "big science" AI; 3) Collectively develop open, large models in a distributed manner using open-source methodology. To start the new semester with a broader issue-oriented seminar, I will briefly introduce these ideas and hopefully lead a productive, open discussion on these issues with the larger UT AI community.

Watch Online

About the speaker:

Check Prof. Mooney's website at here: https://www.cs.utexas.edu/~mooney/

Friday, January 27, 2023, 11:00AM



Responsible Machine Learning’s Causal Turn: Promises and Pitfalls

Zachary Lipton   [homepage]

Carnegie Mellon University

With widespread excitement about the capability of machine learning systems, this technology has been instrumented to influence an ever-greater sphere of societal systems, often in contexts where what is expected of the systems goes far beyond the narrow tasks on which their performance was certified. Areas where our requirements of systems exceed their capabilities include (i) robustness and adaptivity to changes in the environment, (ii) compliance with notions of justice and non-discrimination, and (iii) providing actionable insights to decision-makers and decision subjects. In all cases, research has been stymied by confusion over how to conceptualize the critical problems in technical terms. And in each area, causality has emerged as a language for expressing our concerns, offering a philosophically coherent formulation of our problems but exposing new obstacles, such as an increasing reliance on stylized models and a sensitivity to assumptions that are unverifiable and (likely) unmet. This talk will introduce a few recent works, providing vignettes of reliable ML’s causal turn in the areas of distribution shift, fairness, and transparency research.

Watch Online

About the speaker:

Dr. Zachary Lipton is an Assistant Professor of Machine Learning and Operations Research at Carnegie Mellon University (CMU). He holds appointments in the Machine Learning Department in the School of Computer Science (primary), the Tepper School of Business (joint), the Heinz School of Public Policy (courtesy), and Societal Computing (courtesy). Dr. Lipton's research spans core machine learning methods and theory, their applications in healthcare and natural language processing, and critical concerns, both about the mode of inquiry itself and about the impact of the technology it produces on social systems. He is director of the Approximately Correct Machine Learning Intelligence Lab.

Friday, February 24, 2023, 11:00AM



Versatile Robot Learning through ‘Easier’ Robot Teaching

Lerrel Pinto   [homepage]

New York University

A fundamental goal in robotics is to learn complex and dexterous behaviors in diverse real-world environments. But what is the fastest way to teach robots in the real world? Among the prominent options in our robot learning toolbox, sim2real requires careful modeling of the world, while real-world self-supervised learning or RL is far too slow. Currently, the only reasonably efficient approach that I know of is imitating humans. But making imitation learning feasible on real robots is not 'easy': such methods often require complicated demonstration collection setups, rely on expert roboticists to train them, and even then need a significant number of demonstrations to learn effectively. In this talk, I will present two ideas that can make robots far easier to teach than they currently are. First, to collect demonstrations more easily, we will use vision-based demonstration collection devices, which allow untrained humans to collect demonstrations with consumer-grade products. Second, to learn from these visual demonstrations, we will use new imitation learning algorithms that put data efficiency at the forefront. Together, these tools and algorithms allow a wide range of dexterous skills to be taught within an hour of human effort.

Watch Online

About the speaker:

Lerrel Pinto is an Assistant Professor of Computer Science at NYU. His research interests focus on machine learning for robots. He received a Ph.D. from CMU in 2019, after which he was a postdoc at UC Berkeley. His work on large-scale robot learning received the Best Student Paper Award at ICRA 2016 and was a Best Paper Award finalist at IROS 2019 and CoRL 2022. Several of his works have been featured in popular media such as The Wall Street Journal, TechCrunch, MIT Tech Review, Wired, and BuzzFeed, among others. His recent work can be found at www.lerrelpinto.com.

Friday, March 10, 2023, 11:00AM



Robot Learning for the Real World

Pulkit Agrawal   [homepage]

MIT

Robots are getting competent at understanding complex natural language commands describing household tasks and converting them into step-wise instructions. Yet they fare poorly at executing such instructions. Accurate and reliable execution of sensorimotor skills (e.g., locomotion, opening doors, object manipulation) is a critical missing piece in developing robotic butlers. I will outline a framework for learning sensorimotor skills involving complex contact-rich interactions. The developed systems are real-world-ready: they exhibit generalization and robustness, and they run in real time using onboard computers and commodity sensors. I will describe the framework using the following case studies: (i) a dexterous manipulation system capable of re-orienting novel objects; (ii) a quadruped robot capable of fast locomotion and manipulation on diverse natural terrains; and (iii) an object re-arrangement system tested on out-of-distribution object configurations.

Watch Online

About the speaker:

Pulkit Agrawal is the Steven and Renee Finn Chair Assistant Professor in the Department of Electrical Engineering and Computer Science at MIT, where he directs the Improbable AI Lab. His research interests span robotics, deep learning, computer vision, and reinforcement learning. His work received the Best Paper Award at the Conference on Robot Learning 2021 and the Best Student Paper Award at the Conference on Computer Supported Collaborative Learning 2011. He is a recipient of the Sony Faculty Research Award, the Salesforce Research Award, the Amazon Research Award, and a Fulbright fellowship, among others. Before joining MIT, he co-founded SafelyYou Inc., received his Ph.D. from UC Berkeley, and earned his Bachelor's degree from IIT Kanpur, where he was awarded the Director's Gold Medal.

Friday, April 21, 2023, 11:00AM



Collaborative Human-AI Systems for Databases, Diplomacy, and more

Jonathan Kummerfeld   [homepage]

University of Sydney

No task in NLP is perfectly solved, even by the latest language models, owing to intrinsic ambiguity and subtle edge cases. Meanwhile, generative models hallucinate, reproduce bias, and do not justify or explain their outputs. To effectively incorporate NLP models into deployed systems, we will need to design them with the interface between people and AI in mind. In this talk, I will describe several projects that aim to improve results by following this human-AI centered approach. First, I will describe how we improved text-to-SQL conversion by introducing human-editable explanations generated with a direct mapping from the natural language explanation back to SQL. Second, I will present work on developing a bot to play Diplomacy, a board game that requires rich communication in natural language between players to form alliances, make plans, and negotiate strategies. I will conclude with a lightning round of highlights from other work going on in my group in the broad space of human-AI systems.

Watch Online

About the speaker:

Jonathan K. Kummerfeld is an Assistant Professor in the School of Computer Science at the University of Sydney. He completed his Ph.D. at the University of California, Berkeley, and was previously a postdoc at the University of Michigan, and a visiting scholar at Harvard. Jonathan’s research focuses on interactions between people and NLP systems, developing more effective algorithms, workflows, and systems for collaboration. He has been on the program committee for over 50 conferences and workshops. He currently serves as the Co-CTO of ACL Rolling Review, and is a standing reviewer for the Computational Linguistics journal and the Transactions of the Association for Computational Linguistics journal. For more details, see his website: https://www.jkk.name

[ FAI Archives ]

Fall 2022 - Spring 2023

Fall 2021 - Spring 2022

Fall 2020 - Spring 2021

Fall 2019 - Spring 2020

Fall 2018 - Spring 2019

Fall 2017 - Spring 2018

Fall 2016 - Spring 2017

Fall 2015 - Spring 2016

Fall 2014 - Spring 2015

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000