Research Interests

I am primarily interested in active learning and in object recognition, detection, and segmentation. I worked with Prof. Kristen Grauman on active selection and annotation for recognition, segmentation, and detection problems on images and videos.
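For readers new to the area: in pool-based active learning, the system repeatedly picks the unlabeled example it is least certain about and asks a human annotator for its label, so that each label buys as much model improvement as possible. Below is a minimal uncertainty-sampling sketch in Python; it is illustrative only, not the method of any particular paper below, and the oracle_label callback stands in for a human annotator.

    import numpy as np
    from sklearn.svm import SVC

    def active_learning_loop(X_pool, oracle_label, n_seed=10, n_rounds=20):
        # Start from a small labeled seed set (assumed to contain both classes).
        labeled = list(range(n_seed))
        y = [oracle_label(i) for i in labeled]
        for _ in range(n_rounds):
            clf = SVC(kernel="linear").fit(X_pool[labeled], y)
            # Distance to the decision boundary as an uncertainty measure.
            margins = np.abs(clf.decision_function(X_pool))
            margins[labeled] = np.inf      # never re-query labeled points
            q = int(np.argmin(margins))    # most uncertain example
            labeled.append(q)
            y.append(oracle_label(q))      # ask the annotator for its label
        return clf

The papers below extend this basic loop along several axes, including annotation cost, choosing among multiple annotation types, and hashing for large-scale selection.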
"Active Visual Category Learning"
S. Vijayanarasimhan, Ph.D. Thesis, UT Austin, 2011. Abstract
"Actively Selecting Annotations from Objects and Attributes"
A. Kovashka, S. Vijayanarasimhan and K. Grauman, to appear in ICCV 2011
"Large-Scale Live Active Learning: Training Object Detectors with Crawled Data and Crowds (Oral)"
S. Vijayanarasimhan and K. Grauman, in CVPR 2011 Abstract
"Efficient Region Search for Object Detection"
S. Vijayanarasimhan and K. Grauman, in CVPR 2011 Abstract
"Hashing Hyperplane Queries to Near Points with Applications to Large-Scale Active Learning"
P. Jain, S. Vijayanarasimhan and K. Grauman, in NIPS 2010 Abstract
"Cost-sensitive Active Visual Category Learning"
S. Vijayanarasimhan and K. Grauman, to appear in IJCV 2010 Abstract
"Far-Sighted Active Learning on a Budget for Image and Video Recognition"
S. Vijayanarasimhan, P. Jain and K. Grauman, in CVPR 2010 Abstract
"Visual Recognition and Detection Under Bounded Computational Resources"
S. Vijayanarasimhan and A. Kapoor, in CVPR 2010 Abstract
"Top-Down Pairwise Potentials for Piecing Together Multi-Class Segmentation Puzzles"
S. Vijayanarasimhan and K. Grauman, in IEEE workshop on Perceptual Organization in Computer Vision 2010 Abstract
"Cost Sensitive Active Visual Category Learning"
S. Vijayanarasimhan and K. Grauman, The Learning Workshop, Clearwater, FL, April 2009
"What's It Going to Cost You?: Predicting Effort vs. Informativeness for Multi-Label Image Annotations"
S. Vijayanarasimhan and K. Grauman, in CVPR 2009 Abstract
"Multi-Level Active Prediction of Useful Image Annotations for Recognition (Oral)"
S. Vijayanarasimhan and K. Grauman, in NIPS 2008 Abstract
"Keywords to Visual Categories: Multiple-Instance Learning for Weakly Supervised Object Categorization"
S. Vijayanarasimhan and K. Grauman, in CVPR 2008 Abstract
Additional analysis to appear in the Human Computation Workshop (HCOMP) at AAAI 2011.
Annotation data for MSRC (v1) dataset
This page provides annotation (object segmentation) and timing data collected on the MSRC version 1 dataset from a large number of anonymous users through our Mechanical Turk interface. This readme file describes the data format; also check out our CVPR 2009 paper for more details. These are some example segmentations that were approved.
If you find this data useful for your research, please feel free to use it, and kindly cite it with the following BibTeX entry. BIBTEX
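A minimal sketch of how such data might be loaded, assuming the masks are stored as PNG label images and the timing records as a CSV; the file names and column names here are hypothetical, and the readme above documents the actual layout.

    import csv
    import numpy as np
    from PIL import Image

    def load_segmentation(mask_path):
        """Load one user-drawn object segmentation as an integer label mask."""
        # Each pixel holds an object/region label.
        return np.array(Image.open(mask_path))

    def load_timing(csv_path):
        """Load per-annotation timing records: (image id, user id, seconds)."""
        records = []
        with open(csv_path) as f:
            for row in csv.DictReader(f):
                records.append((row["image"], row["user"], float(row["seconds"])))
        return records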
Source code used in the Semantic Robot Vision Challenge
The Semantic Robot Vision Challenge (SRVC) is an annual competition in which fully autonomous robots receive a text list of objects they are to find. The robots use the web to automatically gather example images of those objects and learn visual models from them; these models are then used to identify the objects in the robots' camera feeds.
I participated in the 2008 event and applied a Multiple-Instance Learning (MIL) based approach to the problem. Here is the complete source code of our method; this readme file should get you started if you would like to apply it. The source files for compiling the MIL classifier can be downloaded from here. A rough sketch of the underlying MIL setting appears below.
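As an illustration of the MIL setting (not the linked source code itself): each crawled image is a bag of candidate region features, labels are available only at the bag level, and a bag is scored by its most confident region. The sketch below uses the naive single-instance baseline, in which every region simply inherits its bag's label; real MIL methods refine which instances in positive bags are treated as true positives.

    import numpy as np
    from sklearn.svm import LinearSVC

    # A bag = one crawled image, represented as a matrix of region features
    # (n_regions x feature_dim). Bag label = 1 if the keyword's object is
    # assumed present somewhere in the image, else 0.

    def train_mil_baseline(bags, bag_labels):
        """Naive MIL baseline: give every instance its bag's label."""
        X = np.vstack(bags)
        y = np.concatenate([np.full(len(b), l) for b, l in zip(bags, bag_labels)])
        return LinearSVC().fit(X, y)

    def score_bag(clf, bag):
        """Score a bag by its most confident instance (max rule)."""
        return clf.decision_function(bag).max()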
Friends

Vinodh Muralidaran Murtaza Deepak Shyamnath Vishal Anirudh Anirudh Prasad Arunachalam Srikanth Sid
Department of Computer Science
University of Texas at Austin
1 University Station
Austin, TX 78712-0233
Email: svnaras at cs dot utexas dot edu