
HPCwire

For all the progress we've made in IT over the last 50 years, there's one area of life that has steadfastly eluded the grasp of computers: understanding human language. Now, researchers at the Texas Advanced Computing Center (TACC) are using a Hadoop cluster on the center's Longhorn supercomputer to push the state of the art in language processing a little bit further.

Over the years, programmers have tried many different ways to improve the capability of computers to understand human language. IBM found some success with a brute-force approach in Watson, which was loaded with 4 terabytes of dictionary and encyclopedia entries. While the Power-based system won the game show Jeopardy!, it flubbed an easy one when its question for an answer in the "US cities" category was "What is Toronto?"

The problem of natural language processing is not an easy nut to crack. Exactly replicating a human's method for understanding language--which involves a combination of context, syntax, logic, and a sense of the speaker's intention (what tripped up Watson)--is not (yet) a practical option for computers.

But a new project at the University of Texas at Austin hopes to improve one aspect of natural language processing: understanding context. Katrin Erk, a professor of linguistics at the university, figures that if hard-coding word meanings into a computer doesn't work, then giving a computer a better model for determining meanings just might.

Erk's approach assumes a vast, multi-dimensional space where words and their meanings live. The further apart the words are in the model, the further apart their meanings. In particular, the approach helps the computer disambiguate words with multiple meanings, such as "charge."

"An intuition for me was that you could visualize the different meanings of a word as points in space," Erk says in a story on the TACC website. "You could think of them as sometimes far apart, like a battery charge and criminal charges, and sometimes close together, like criminal charges and accusations…The meaning of a word in a particular context is a point in this space. Then we don't have to say how many senses a word has. Instead we say: 'This use of the word is close to this usage in another sentence, but far away from the third use.'"

Erk's model required a large amount of human-generated text (books and the like) to work. In 2009, she started loading digitized works into her model, which resided on a desktop computer. That approach could handle a few million words, but fleshing out the model required billions of words of text. That's where TACC and the Hadoop subsystem on the Longhorn supercomputer come in.

"With Hadoop on Longhorn, we could get the kind of data that we need to do language processing much faster," Erk says. "That enabled us to use larger amounts of data and develop better models."

In addition to creating a word cloud, Erk and her collaborators, including University of Texas computer science professor Ray Mooney, expanded the model to measure the closeness of whole sentences, an even more computationally complex task than measuring word meanings. The researchers loaded one sentence into the system and asked the model to determine whether a second sentence was true given the first. The model scored an 85 percent accuracy rate on this test.
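
The article does not spell out how the sentence-level comparison works. A common baseline, shown below only as an illustration and not as Erk and Mooney's method, is to build a sentence vector from the vectors of its words and then compare the two sentences, treating high similarity as evidence that one follows from the other. The word vectors and the threshold here are assumptions for the sketch.

```python
# Illustrative baseline only, not the project's actual method: represent a
# sentence as the average of its word vectors and compare sentences by
# cosine similarity, accepting the pair above a chosen threshold.
import numpy as np

def sentence_vector(sentence, word_vectors):
    """Average the vectors of the words we have vectors for."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def looks_entailed(premise, hypothesis, word_vectors, threshold=0.8):
    # Crude proxy: if the hypothesis vector sits close to the premise vector,
    # guess that the hypothesis is true given the premise.
    u = sentence_vector(premise, word_vectors)
    v = sentence_vector(hypothesis, word_vectors)
    sim = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return sim >= threshold
```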

Erk and Mooney were recently awarded a DARPA grant to explore the possibilities of their model. "We want to get to a point where we don't have to learn a computer language to communicate with a computer. We'll just tell it what to do in natural language," Mooney said. "We're still a long way from having a computer that can understand language as well as a human being does, but we've made definite progress toward that goal."
