Thursdays 3:30-6:30 pm
Unique # 54425
Instructor: Kristen Grauman
TA: Harshdeep Singh
Office hours: by appointment, CSA 114 (modular building near ENS)
Reading for next week:
Learning Tag Relevance by Neighbor Voting for Social Image Retrieval, by X. Li, C. Snoek, and M. Worring. MIR 2008.
Why We Tag: Motivations for Annotation in Mobile and Online Media, by M. Ames and M. Naaman, CHI 2007.
This is a graduate seminar course in computer vision. We will survey and discuss current vision papers relating to object recognition and content-based retrieval for images and videos. The goals of the course will be to understand current approaches to some important problems, to actively analyze their strengths and weaknesses, and to identify interesting open questions and possible directions for future research.
Topics will include:
· recognition models for objects
· image/video search and the web
· fast indexing methods
· the image annotation process
· holistic scene recognition
· considering language (text) with visual cues
· the role of context in recognition
· unsupervised and semi-supervised learning from images
See the syllabus and list of selected papers for more details.
Students will be responsible for writing paper reviews each week, participating in discussions, presenting a paper and demo, and completing a project (done in pairs). More details are below.
Prerequisites: courses in computer vision and/or machine learning, and the ability to understand and do a high-level analysis of conference papers in this area. Please talk to me if you are uncertain whether this course will be a good match for your background.
Students are expected to do the assigned reading, participate in class discussions, write two paper reviews each week, and complete a final project. In addition, everyone will be responsible for giving two presentations: one that involves doing background research on a topic (using 2-3 papers from the provided list), and one that involves an experimental demo relevant to one of the topics. The two presentations should be on different topics. Details on each of these elements are provided below.
Grades in the class will be determined as follows:
· 20% Participation (including attendance, in-class discussions, paper reviews)
· 20% Paper presentation
· 20% Demo presentation
· 40% Final project (including proposal, progress report, and final paper)
Please read the UTCS code of conduct.
March 19: Spring break, no class
March 26: Project proposals due
April 23: Project progress reports / drafts due
May 7: Final project papers due
May 7 and 8: Final project presentations, 3:30 pm – 6:30 pm [Note unusual date, Friday the 8th]
There is no required textbook for this course, as we will get most of our content from the papers we read. However, you may find these books useful references. They are on reserve at the PCL library.
· Computer Vision, Linda G. Shapiro and George C. Stockman.
· Introductory Techniques for 3-D Computer Vision, Emanuele Trucco and Alessandro Verri.
· Multiple View Geometry in Computer Vision, Richard Hartley and Andrew Zisserman.
Useful software:
· OpenCV (open source computer vision library)
· Weka (Java data mining software)
· Netlab (Matlab toolbox for data analysis techniques, written by Ian Nabney and Christopher Bishop)
Related courses at other schools:
· 6.870 Object Recognition and Scene Understanding, MIT, Antonio Torralba
· 16-721 Learning-based Methods in Vision, CMU, Alyosha Efros
· 252C Selected Topics in Vision & Learning, UCSD, Serge Belongie
· CMPT882: Recognition Problems in Computer Vision, SFU, Greg Mori
· CS 598: High-Level Recognition in Computer Vision, Princeton, Fei-Fei Li