Learning Language Semantics from Ambiguous Supervision (2007)
This paper presents a method for learning a semantic parser from ambiguous supervision. Each training sentence is annotated with multiple potential meaning representations, only one of which is correct. Such ambiguous supervision models the kind of supervision that is more naturally available to language-learning systems. Given this weak supervision, our approach produces a semantic parser that maps sentences into meaning representations. An existing semantic-parser learning system that requires unambiguous supervision is augmented to handle ambiguous supervision. Experimental results show that the resulting system copes with the ambiguity and learns accurate semantic parsers.
In Proceedings of the 22nd Conference on Artificial Intelligence (AAAI-07), pp. 895-900, Vancouver, Canada, July 2007.

Rohit Kate (Postdoctoral Alumnus), katerj [at] uwm edu
Raymond J. Mooney (Faculty), mooney [at] cs utexas edu
KRISPER: A semantic parser learning system that learns from ambiguous training examples. 2007.