Most machine learning methods do not exploit prior knowledge. Theory refinement
(a.k.a. theory revision or knowledge-base refinement) is the task of modifying
an initial, imperfect knowledge base (KB) to make it consistent with empirical
data. The goal is to improve learning by exploiting prior knowledge, and to
acquire knowledge that is more comprehensible and better connected to existing
concepts in the domain. Another motivation is to automate the process of
knowledge refinement in the development of expert systems and other
knowledge-based systems.
We have developed systems for refining knowledge in various forms
including:
- Propositional logical rules (systems: EITHER, NEITHER)
- First-order logical rules, i.e. logic programs (system: FORTE)
- Certainty-factor rules (system: RAPTURE)
- Bayesian networks (system: BANNER)
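To make the propositional case concrete, here is a minimal, hypothetical sketch of revising a single conjunctive rule against labeled examples: it specializes the rule by adding antecedents that exclude wrongly covered negatives, then generalizes it by deleting antecedents that wrongly exclude positives. All names are illustrative; this is a toy greedy repair, greatly simplified relative to systems like EITHER or NEITHER.

```python
def covers(rule, example):
    """A rule (a set of required features) covers an example
    (a set of true features) if every antecedent holds."""
    return rule <= example

def refine(rule, positives, negatives):
    """Greedily revise `rule` toward consistency with the data."""
    rule = set(rule)
    # Specialize: for each negative the rule wrongly covers, add one
    # feature shared by all positives but absent from that negative.
    for neg in negatives:
        if covers(rule, neg):
            extras = set.intersection(*map(set, positives)) - neg
            if extras:
                rule.add(min(extras))
    # Generalize: drop antecedents that wrongly exclude positives,
    # provided no negative becomes covered as a result.
    for feat in sorted(rule):
        missed = [p for p in positives if not covers(rule, p)]
        if not missed:
            break
        if any(feat not in p for p in missed):
            candidate = rule - {feat}
            if not any(covers(candidate, n) for n in negatives):
                rule = candidate
    return rule

# Toy example: an imperfect initial rule for a "can fly" concept.
positives = [{"feathers", "small", "flies"}, {"feathers", "flies"}]
negatives = [{"feathers", "small"}]
assert refine({"feathers", "small"}, positives, negatives) == {"feathers", "flies"}
```

Real revision systems differ substantially, e.g. by locating the fault point in a multi-rule theory before choosing between generalization and specialization; the sketch only conveys the basic repair operations.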
These systems have demonstrated the ability to revise real knowledge
bases and improve learning in several real-world domains including: