Interactive knowledge-based systems, such as advisory systems and intelligent tutoring systems, should be able to communicate effectively with their users. A key aspect of communication is the ability to explain complex phenomena. Given a query and a knowledge base, an Explanation Generation System must be able to select precisely those facts that enable it to construct a response to the query. In addition, it must be able to organize the selected information and translate the formal representational structures found in knowledge bases to natural language. The goal of this research is to design, implement, and empirically evaluate a computational model of explanation generation.

We have developed KNIGHT, a robust Explanation Generation System that dynamically constructs natural language explanations of scientific phenomena. To generate explanations, KNIGHT extracts knowledge structures from a large-scale knowledge base, organizes them into hierarchical discourse plans, and employs a unification-based, systemic grammar to translate them into smooth English prose. It offers many features not found in previous explanation generation systems. It exploits a formalism for representing ``discourse'' knowledge that is both expressive and easily maintained. It can control the level of detail in its explanations at runtime according to users' preferences. To generate user-specific explanations, it employs heuristic search techniques that find conceptual connections between newly introduced concepts and concepts that appear in an overlay user model.
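The extract/plan/realize pipeline described above can be sketched in miniature. Everything below is an illustrative assumption, not KNIGHT's actual Lisp code or API: the fact representation, the function names, and the toy knowledge base are all hypothetical, and the realization step is a trivial stand-in for a systemic grammar.

```python
# Hypothetical three-stage sketch of an explanation generation pipeline:
# extract relevant facts, organize them into a (one-level) discourse plan,
# then realize the plan as English prose.

def extract(kb, query):
    """Select exactly those facts that mention the queried concept."""
    return [fact for fact in kb if query in fact["concepts"]]

def plan(facts):
    """Organize selected facts into a discourse plan; here, a trivial
    one-level tree that groups facts by their relation type."""
    tree = {}
    for fact in facts:
        tree.setdefault(fact["relation"], []).append(fact)
    return tree

def realize(tree):
    """Translate the plan into sentences (a real system would apply a
    unification-based, systemic grammar at this stage)."""
    sentences = []
    for relation, facts in sorted(tree.items()):
        objects = " and ".join(f["object"] for f in facts)
        sentences.append(f"{facts[0]['subject']} {relation} {objects}.")
    return " ".join(sentences)

# Toy biology knowledge base (invented facts for illustration only).
kb = [
    {"subject": "Photosynthesis", "relation": "produces",
     "object": "glucose", "concepts": {"photosynthesis"}},
    {"subject": "Photosynthesis", "relation": "produces",
     "object": "oxygen", "concepts": {"photosynthesis"}},
    {"subject": "Photosynthesis", "relation": "occurs in",
     "object": "chloroplasts", "concepts": {"photosynthesis"}},
]

explanation = realize(plan(extract(kb, "photosynthesis")))
print(explanation)
# → Photosynthesis occurs in chloroplasts. Photosynthesis produces glucose and oxygen.
```

The point of the grouping step is that facts sharing a relation can be aggregated into a single sentence rather than emitted one by one, which is one of the ways discourse planning yields smoother prose than fact-by-fact translation.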

We have undertaken the most extensive and rigorous empirical evaluation ever conducted on an explanation generator. First, KNIGHT constructed explanations on randomly chosen topics from the Biology Knowledge Base, an immense structure containing more than 100,000 facts. We then enlisted a panel of domain experts to produce explanations on these same topics. Finally, we submitted all of these explanations to a second panel of domain experts, who graded them on an A-F scale. Remarkably, KNIGHT scored within ``half a grade'' of the domain experts, and its performance actually exceeded that of one of the domain experts.
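A "half a grade" gap can be made concrete by mapping letter grades to points and comparing means. The mapping and the sample ratings below are assumptions chosen only to illustrate the arithmetic; they are not data from the actual KNIGHT experiments.

```python
# Illustrative grade arithmetic (assumed 4-point mapping, invented ratings).
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def mean_grade(grades):
    """Average a list of letter grades on the 4-point scale."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

knight_grades = ["B", "A", "C", "A", "B"]  # hypothetical judge ratings
expert_grades = ["A", "B", "B", "A", "A"]  # hypothetical judge ratings

gap = mean_grade(expert_grades) - mean_grade(knight_grades)
print(f"gap = {gap:.2f} grade points")
# → gap = 0.40 grade points  (i.e., within half a grade on this scale)
```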

  • Representative publication: James C. Lester and Bruce W. Porter. Developing and Empirically Evaluating Robust Explanation Generators: The KNIGHT Experiments. Computational Linguistics, 23(1), pp. 65-101, 1997. (Abstract and compressed PostScript available.)
  • Knight: the Lisp code for the KNIGHT system.

    Created by James Lester. Maintained by Dan Tecuci.
    Last modified August 17, 2000