Reviews Submitted in 1996

One of the annoying facts about contemporary KR research is that we tend to devote a disproportionate amount of effort to the most abstract issues, presumably on the assumption that the more "fundamental" a topic is, the more prestigious it is to work on. Compare, for example, the volume of work that has gone into the study of different variants of non-monotonic logic with the volume of work where these logics are actually used for specific representation purposes. Since we all know that the "space" of possible information structures is rich and varied, one would have expected more work to go into the systematic mapping of the world of common-sense knowledge.

The paper by Paolo Terenziani (from Torino, Italy) at the 1996 ECAI (European Conference on AI) is a welcome exception to this general pattern. Terenziani addresses a particular kind of information structure, namely structures involving periodic events; he analyzes and compares existing approaches to the formal representation of such structures, and then proceeds to describe his own approach and to relate it to previous work. The paper also reports that the described formalism has served as the theoretical basis for an implemented time-map management program.

There is no reason to believe that this is *the* definitive account of
the topic Terenziani has addressed. In fact, the paper raises a number
of interesting new questions on top of the ones it has answered.
For example, how would Terenziani's formalism connect to more
expressive action formalisms, for example those involving composite
and hierarchical actions or events? How would it connect to taxonomic
languages? And so on. But this is not an objection; on the contrary,
it suggests how additional work may be concretely related to the
present one.

The most interesting aspect of Terenziani's paper, in my opinion, is
that it suggests a methodological stance different from the one we are
used to. Imagine, if you will, a situation where our conferences and
journals contained *many* papers of this kind: papers each of which
addressed some well-defined and not-too-large part of the entire
world of information structures; which analyzed the topic at hand
while giving full attention and recognition to earlier approaches to
the same and related problems; and which then proposed a solution,
specified its advantages over previous approaches, and possibly also its
limitations. My expectation is that this mode of working would allow
contributions to accumulate so that, as a field, we
could gradually build up a coherent account of the representation
of knowledge. Not a "theory" in the sense of something that is
presented in a few tens of pages and professes to be the foundation
on which one could build solutions to all the problems, but a broad
account consisting of many interrelated parts.

(Review by Erik Sandewall.)

A nonmonotonic theory (formulated via circumscription, logic programs, default logic, etc.) is compiled if one has found an equivalent theory in a monotonic logic (its compiled form). There are many examples:

- Various papers by Lifschitz on determining first-order theories equivalent to circumscriptive theories.
- The Clark completion semantics for logic programs.
- The successor state axiom solution to the frame problem resulting from the circumscriptive policy of Lin and Shoham (Lin and Reiter, *State constraints revisited*), and for nondeterministic actions (Lin, *Specifying the effects of indeterminate actions*).
- The inductive definition approach of Denecker (*Inductive definitions in knowledge representation*).
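The Clark completion case can be illustrated with a toy program (my own example, not taken from any of the cited papers). The completion replaces each set of rules with the same head by a classical biconditional, so the nonmonotonic conclusions of the program become ordinary entailments of a monotonic theory:

```latex
% Program with negation as failure:
%   p <- not q.
%   q <- r.
% (r has no rules, so its completion is  r <-> false.)
\begin{align*}
\text{Completion:}\quad & p \leftrightarrow \neg q,\\
                        & q \leftrightarrow r,\\
                        & r \leftrightarrow \bot.
\end{align*}
% The completed classical theory entails \neg r, \neg q, and p --
% the same conclusions the nonmonotonic program sanctions.
```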

(Review by Ray Reiter.)

This is a unique survey of the foundations of logic programming, entirely different from any other logic programming textbook. Logic programs are regarded as sets of rules rather than sets of clauses, and both the model-theoretic and operational semantics are described in terms of rule calculi. Both classical negation and negation as failure are fully incorporated into programs, and their semantics are given by the concept of answer sets.
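For propositional programs without classical negation, the answer-set construction can be sketched with a brute-force checker: a candidate set of atoms is an answer set exactly when it is the least model of its own reduct. This is a hypothetical illustration of the definition, not code from the book:

```python
from itertools import combinations

# A rule is (head, positive_body, negative_body);
# "not a" literals go in the negative body.

def reduct(rules, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body
    intersects the candidate; delete negative literals from the rest."""
    return [(h, pos) for (h, pos, neg) in rules
            if not (set(neg) & candidate)]

def minimal_model(pos_rules):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in pos_rules:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def answer_sets(rules, atoms):
    """Enumerate all candidates; keep those equal to the least
    model of their own reduct."""
    return [set(c)
            for r in range(len(atoms) + 1)
            for c in combinations(sorted(atoms), r)
            if minimal_model(reduct(rules, set(c))) == set(c)]

# Example program:   p :- not q.    q :- not p.
rules = [("p", (), ("q",)), ("q", (), ("p",))]
print(answer_sets(rules, {"p", "q"}))   # two answer sets: {p} and {q}
```

Extending this with classical negation amounts to treating negated atoms as fresh atoms and discarding inconsistent candidates, but that refinement is omitted here for brevity.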

(Review by Katsumi Inoue.)

The Katsuno-Mendelzon theory of updates emphasizes that updates, resulting as they do from changes to the world being modeled, are a different phenomenon from belief revision, as characterized, say, by the AGM postulates. But oddly enough, the KM story does not include actions as first-class citizens. Boutilier presents a refreshingly different, and in many ways richer, account of updates. On his view, an update sentence can be treated as an observation of the world which, when it is new information, must be abductively explained as having been caused by one or more action occurrences. He formalizes this notion by explicitly providing for actions and their effects, and shows some relationships with, and departures from, the KM update postulates.

(Review by Ray Reiter.)

The author argues that indirect effects of actions cannot be adequately
specified by "state constraints" of the usual kind, and that the notion of
causation needs to be explicitly introduced. Technically, he proposes to
add a new ternary predicate *Caused* to the situation calculus;
*Caused(p,v,s)* means that the fluent *p* is caused (by something)
to have the value *v* in the situation *s*.
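The flavor of the proposal can be seen in the familiar suitcase example from this literature (a suitcase that springs open whenever both of its latches are up). The sketch below is my own rendering under the predicate just described, with *up* and *open* as the assumed fluents:

```latex
% A state constraint leaves the causal direction implicit:
\forall s.\; up(l_1, s) \land up(l_2, s) \rightarrow open(s)

% The causal rule makes it explicit, using the new predicate:
\forall s.\; up(l_1, s) \land up(l_2, s) \rightarrow Caused(open, true, s)

% Caused fluents take their caused value:
\forall p, s.\; Caused(p, true, s) \rightarrow Holds(p, s)
\forall p, s.\; Caused(p, false, s) \rightarrow \neg Holds(p, s)
```

The point of the extra predicate is that raising both latches now *causes* the suitcase to open, whereas the bare state constraint would equally license concluding that opening the suitcase somehow raises a latch.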

(Review by Vladimir Lifschitz.)

DLS is an algorithm for the elimination of second-order quantifiers whose intended applications are similar to those of SCAN (PPR review 96-3). The algorithm is implemented, and can be executed by submitting appropriate forms over the web.

(Review by Vladimir Lifschitz.)

SCAN is an algorithm for the elimination of quantified predicate variables, with applications to simplifying circumscription and to computing correspondences in modal logic. The algorithm is implemented, and can be executed by submitting appropriate forms over the web.
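A toy instance of the kind of second-order quantifier elimination involved (my example, not one from the review): resolving the clauses on the quantified predicate variable *P* leaves a first-order equivalent.

```latex
\exists P\,\big[\,\forall x\,(P(x) \lor Q(x))
  \;\land\; \forall x\,(\neg P(x) \lor R(x))\,\big]
\;\equiv\;
\forall x\,(Q(x) \lor R(x))
% The right-hand side is the resolvent on P; once all resolvents
% upon P are generated, the clauses containing P can be discarded.
```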

(Review by Vladimir Lifschitz.)

Rules in a logic program can be viewed as actions in the sense of the situation calculus. This idea, combined with Reiter's solution to the frame problem, leads to a new semantics of logic programs with negation as failure. For propositional programs, this semantics turns out to be equivalent to the stable model approach.

(Review by Vladimir Lifschitz.)

This is a collection of brief characterizations of many concepts of logical AI, with references and links to relevant publications by both McCarthy and others. The current version is an incomplete draft which has a paragraph about each of approximately 50 concepts.

(Review by Vladimir Lifschitz.)
