Research
This page is devoted to my research in Computer Science at the University of Texas at Austin. My research interests are in computer vision and machine learning.

Video Super Resolution

I am currently studying video, or multi-frame, super resolution. Spatial resolution is defined by how close two lines can be in physical space and still be resolvable in the digital image. The idea of video super resolution is to increase the spatial resolution of an image by exploiting temporal information from surrounding frames, and possibly other information learned from large data sets.
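As a concrete illustration, here is a minimal sketch of the classic shift-and-add approach to multi-frame super resolution, assuming the sub-pixel motion between frames is already known (in practice it would be estimated by a registration step such as phase correlation). The function name and parameters are placeholders for illustration, not code from my project.

    import numpy as np
    from scipy import ndimage

    def shift_and_add_sr(frames, shifts, scale=2):
        """Fuse registered low-res frames onto a high-res grid.

        frames: sequence of 2D arrays (the low-res frames).
        shifts: sequence of (dy, dx) sub-pixel offsets of each frame
                relative to the reference frame, in low-res pixels.
        scale:  integer upsampling factor.
        """
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        for frame, (dy, dx) in zip(frames, shifts):
            # Upsample with zero-order hold so each low-res pixel
            # covers a scale-by-scale block on the high-res grid.
            up = ndimage.zoom(frame, scale, order=0)
            # Undo the frame's motion so all frames align with the reference.
            acc += ndimage.shift(up, (-dy * scale, -dx * scale), order=1)
        return acc / len(frames)

Because each frame samples the scene at slightly different sub-pixel positions, averaging the aligned frames recovers detail that no single frame contains.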


3D Point Cloud Analysis and Reconstruction

3D image data is increasingly prevalent and demands attention from the computer vision community. 3D object recognition and scene reconstruction methods are useful for autonomous robots (such as self-driving cars or SLAM applications), as well as for many newer consumer products such as the Xbox Kinect. In contrast to 2D images, depth cameras and 3D scanners reveal information about an object's geometry and encode surface features unambiguously.

Source code for Point Cloud Registration Experiments is available on GitHub!


I studied keypoint detection in 3D point clouds and depth image data. Keypoints describe distinct surface features of an object in the scene based on properties such as the local surface geometry. They are required for scene reconstruction (stitching) and are often the key ingredient in object recognition.
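To make the idea concrete, here is a minimal sketch of one common scoring scheme, local surface variation, which ranks points by how strongly their neighborhood deviates from a flat surface. This is an illustration rather than the specific detector I studied; the radius and top_k values are placeholders, and a real detector would typically add non-maximum suppression.

    import numpy as np
    from scipy.spatial import cKDTree

    def surface_variation_keypoints(points, radius=0.05, top_k=100):
        """Rank points by local surface variation and return the indices
        of the top_k most distinctive ones.

        points: (N, 3) array of 3D coordinates.
        The score is lambda_0 / (lambda_0 + lambda_1 + lambda_2), where
        lambda_0 <= lambda_1 <= lambda_2 are eigenvalues of the covariance
        of the neighborhood within `radius`; flat patches score near zero.
        """
        tree = cKDTree(points)
        scores = np.zeros(len(points))
        for i, p in enumerate(points):
            idx = tree.query_ball_point(p, radius)
            if len(idx) < 5:              # too few neighbors to fit a surface
                continue
            nbrs = points[idx] - points[idx].mean(axis=0)
            cov = nbrs.T @ nbrs / len(idx)
            evals = np.linalg.eigvalsh(cov)   # eigenvalues in ascending order
            scores[i] = evals[0] / evals.sum()
        return np.argsort(scores)[-top_k:]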

Here are the slides I made for a presentation on the paper Robust Global Registration, which introduces integral volume features for point cloud registration: robust_global_registration.pdf

Source code for Integral Volume Features is available on GitHub!
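At the core of integral volume features is a 3D integral image (a summed volume table) over a binary voxelization of the object, which lets the occupied volume of any axis-aligned box be counted with eight table lookups. Below is a minimal numpy sketch of that table and its box query, assuming the object has already been voxelized; note the paper actually measures a ball around each point, which a box query only approximates.

    import numpy as np

    def summed_volume_table(occupancy):
        """3D integral image: svt[x, y, z] = number of occupied voxels
        in occupancy[:x, :y, :z] (zero-padded on the low side)."""
        svt = occupancy.astype(np.int64)
        for axis in range(3):
            svt = np.cumsum(svt, axis=axis)
        return np.pad(svt, ((1, 0), (1, 0), (1, 0)))

    def box_count(svt, lo, hi):
        """Occupied-voxel count in the box [lo, hi) via 8 lookups,
        using 3D inclusion-exclusion."""
        x0, y0, z0 = lo
        x1, y1, z1 = hi
        return (svt[x1, y1, z1]
                - svt[x0, y1, z1] - svt[x1, y0, z1] - svt[x1, y1, z0]
                + svt[x0, y0, z1] + svt[x0, y1, z0] + svt[x1, y0, z0]
                - svt[x0, y0, z0])

Dividing the count for a region centered at a surface point by the region's total volume gives a value near one half on flat patches and further from one half at concave or convex features, which is what makes the descriptor useful for selecting distinctive points.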


Previously, I briefly worked on a project to extend the code for computing the generalized Voronoi diagram (GVD) to include intra-object medial axis computations.

This research is part of the Computational Visualization Center lab supervised by Dr. Chandrajit Bajaj.



Related Coursework: Computer Graphics (Fall 2011), Topology (Spring 2013), Physical Simulation and Animation for Computer Graphics (Fall 2014), Machine Learning (Spring 2014), Computational Statistics (Spring 2015), Visual Recognition (Spring 2016).




Spherical K-Means

Previous Research Project

My previous research project, in data mining, was to implement a parallel version of the spherical k-means algorithm. Spherical k-means is a variant of classic k-means clustering that is useful for automatically partitioning a large number of text documents into a collection of k clusters. The algorithm identifies meaningful connections between text documents and groups similar content together. Spherical k-means is also sometimes known as k-means with cosine similarity.

The algorithm works on pre-processed text data stored as a matrix of document vectors, each normalized to unit length (e.g., tf-idf vectors). First, the documents are randomly partitioned, and a centroid vector (the normalized average) is computed for each cluster. Then each document is assigned to the cluster whose centroid has the highest cosine similarity with it. This process repeats until the change in partition quality is small enough.
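Here is a minimal serial sketch of that iteration in numpy, assuming the rows of X are (or can be normalized to) unit-length document vectors; the function name and defaults are placeholders, and my actual implementations were the parallel versions described below.

    import numpy as np

    def spherical_kmeans(X, k, n_iter=50, seed=0):
        """Cluster the unit-normalized rows of X by cosine similarity.

        X: (n_docs, n_terms) matrix of document vectors.
        Returns (cluster labels, centroid matrix).
        """
        rng = np.random.default_rng(seed)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
        # Start from a random partition, as described above.
        labels = rng.integers(k, size=len(X))
        for _ in range(n_iter):
            # Centroid of each cluster: mean direction, re-normalized
            # so it lies on the unit sphere like the documents.
            centroids = np.zeros((k, X.shape[1]))
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    c = members.sum(axis=0)
                    centroids[j] = c / np.linalg.norm(c)
            # Assign each document to the most similar centroid
            # (dot product equals cosine similarity for unit vectors).
            new_labels = np.argmax(X @ centroids.T, axis=1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
        return labels, centroids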

I implemented the algorithm in parallel using both OpenMP and Galois. My goal was to improve performance by adding optimizations presented in several research publications, and to compare the two versions. I was also working on an online spherical k-means implementation using Galois to see whether I could obtain even better results.

Galois is a software library that provides a parallel framework for computing with large data sets. Galois works with graph data structures and handles thread synchronization automatically by locking individual nodes as they are processed. It also has a built-in ordering scheme that can prioritize graph nodes with higher importance values.


Source code for this project is available on GitHub!


Related Coursework: Machine Learning (Spring 2014), Neural Networks (Fall 2014), Linear Algebra (Spring 2012).