
Humanoids.io | July 29, 2015

Deep learning, a way for an AI to learn from experience, seems to have a bright future. After enabling an AI to discover Mario or learn to read, the method is now being used to teach a robot the subtleties of language.

A study presented this week at the International Joint Conference on Artificial Intelligence describes an experiment led by a research team from the University of Texas at Austin. The study, titled “Learning to Interpret Natural Language Commands through Human-Robot Dialog”, aimed to create a dialogue agent that can be embedded in a robot to let it understand basic language. For the researchers, previous approaches to the problem were unsatisfactory, since they could not accommodate new language variations. Until now, the most common approaches relied either on keyword search to extract the meaning of a sentence, or on an initial database from which the robot could learn terms and make links between words.

The dialogue agent designed for this study uses a novel system whose learning method rests on three components. First, the robot can understand and identify terms through semantic parsing, that is, its ability to break a sentence into syntactic categories. Second, its dialogue system lets it resolve ambiguities around certain words. Finally, through paraphrase, the robot progressively learns from human-robot interactions. The conversations were held through the robot’s web-based interface via the Mechanical Turk platform. The dialogue agent was also deployed on a mobile robot, which had to learn and understand delivery and navigation requests in a working environment.
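The dialogue component can be pictured as a slot-filling loop: when a parsed command is missing an argument, the agent asks a targeted question rather than failing. The sketch below is our own illustration of that idea, not the study’s code; the action names and slots are invented.

```python
# Toy sketch (assumed structure, not the study's implementation) of the
# dialogue strategy: ask a clarification question for each missing argument.

# Hypothetical required arguments per action.
REQUIRED = {"navigate": ["room"], "deliver": ["item", "recipient"]}

def clarify(parsed, ask):
    """Fill in any missing arguments of a parsed command by asking the user."""
    for slot in REQUIRED[parsed["action"]]:
        if slot not in parsed:
            parsed[slot] = ask(f"Which {slot} do you mean?")
    return parsed

# Simulated user who answers the single clarification question that is asked.
answers = iter(["pierre"])
command = clarify({"action": "deliver", "item": "5"},
                  ask=lambda q: next(answers))
# command now holds action, item, and the recipient supplied in dialogue.
```

This mirrors the behaviour described above: the agent only speaks up about the parts of the command it could not resolve, instead of asking the user to repeat everything.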

The mobile robot was a modified Segway carrying a laptop running the Robot Operating System (ROS). The test was divided into three categories: navigation, delivery and validation. For navigation, the robot was sent to a given room with the order: “[Name] needs the robot. Send it to the office where he/she is working.” The designations mixed full names, first names, nicknames and titles. Delivery involved sending the robot to help a person with the order: “[Name] needs item [number].” The people and items were selected at random. The validation part consisted in asking a person to “give the first name and family name of the person in office [number]”. Incorrect answers were then analyzed and validated or rejected, so as not to penalize spelling mistakes, for example.
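One common way to tolerate spelling mistakes during validation, as described above, is to accept answers within a small edit distance of the expected name. The following is an assumption on our part about how such tolerance could be implemented, not the paper’s stated method.

```python
# Illustrative spelling-tolerant validation (our assumption, not the
# authors' method): accept an answer if its Levenshtein edit distance to
# the expected name is small.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    # d[i][j] = edits to turn a[:i] into b[:j]; first row/column are i and j.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[m][n]

def accept(answer, expected, tolerance=2):
    """Validate an answer despite minor typos."""
    return edit_distance(answer.lower(), expected.lower()) <= tolerance

# A transposed pair of letters counts as two substitutions, so
# "Peirre Dupont" is still accepted for "Pierre Dupont".
```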

The dialogue agent used the semantic parsing framework from the University of Washington. Lambda-calculus formulas were used to represent the meanings of words (lexical items), and a combinatory categorial grammar tagged each word with a syntactic category, complemented by a template-based GENLEX, a generic lexicon-induction procedure built on existing templates. This lets the robot, once a question has been reworded, associate unknown terms with ones it already knows. So if a person asked the robot to “deliver item 5 to Pierre” and, because the robot said it did not understand, rephrased it as “bring item 5 to Pierre”, the dialogue agent could relate “deliver” and “bring” to deduce the meaning of the order.
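The deliver/bring example above can be sketched as a crude word-level alignment: when exactly one word of a failed command is unknown, and the paraphrase contains exactly one known word that the original lacked, map the former to the latter’s meaning. This is a heavily simplified stand-in for the paper’s GENLEX-style lexicon induction, with an invented toy lexicon.

```python
# Toy sketch of paraphrase-based lexical learning (not the paper's actual
# GENLEX procedure): align a failed command with its paraphrase to infer
# the meaning of an unknown word.

def align_unknown(original, paraphrase, lexicon):
    """Map the unknown word in `original` to the meaning of the known word
    that appears only in `paraphrase` (crude one-to-one alignment)."""
    orig = original.lower().split()
    para = paraphrase.lower().split()
    unknown = [w for w in orig if w not in lexicon]
    novel = [w for w in para if w in lexicon and w not in orig]
    learned = {}
    if len(unknown) == 1 and len(novel) == 1:
        learned[unknown[0]] = lexicon[novel[0]]
    return learned

# Invented toy lexicon: meanings are placeholder symbols, "the" is empty.
lexicon = {"deliver": "DELIVER", "item": "ITEM", "5": "5", "to": "TO",
           "pierre": "PIERRE", "the": None}

new_entries = align_unknown("bring the item 5 to pierre",
                            "deliver the item 5 to pierre", lexicon)
# The agent learns that "bring" carries the same meaning as "deliver".
```

The real system works over lambda-calculus logical forms rather than atomic symbols, but the principle is the one the article describes: the rewording supplies the anchor that makes the unknown word learnable.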

Moreover, during the interactions with the people in charge of the test, the dialogue agent could correct grammar mistakes in its interlocutors’ sentences. The researchers are now considering adding speech recognition software to see whether the agent can automatically correct consistent speech recognition errors.
