Integrating Visual and Linguistic Information to Describe Properties of Objects (2014)
Generating sentences that describe images has historically been performed by standalone computer vision systems, which are typically unable to produce linguistically rich descriptions. Over the past several years, the idea of combining visual and linguistic information has gained traction in the Computer Vision and Natural Language Processing communities. The motivation for a combined system is to generate richer linguistic descriptions of images: abundant available language data can be leveraged to clean up the noisy results of standalone vision systems.

This thesis investigates the performance of several models that integrate information from language and vision systems in order to describe certain attributes of objects. The attributes used were split into two categories: color attributes and other attributes. Our proposed model was found to be statistically significantly more accurate than the vision system alone for both sets of attributes.

Citation:
Undergraduate Honors Thesis, Computer Science Department, University of Texas at Austin.

Calvin MacKenzie (Undergraduate Alumni), calvinm mackenzie [at] utexas edu