Page of Positive Reviews of Research in Logical AI:
Reviews Submitted in 1996

96-9

Paolo Terenziani. Towards an ontology dealing with periodic events. Proc. ECAI-96, pp. 43-47.

One of the annoying facts about contemporary KR research is that we tend to devote a disproportionate amount of effort to the most abstract issues, presumably under the assumption that the more "fundamental" a topic is, the more prestigious it is to work on it. Compare, for example, the volume of work that has gone into the study of different variants of non-monotonic logic with the volume of work in which these logics are actually used for specific representation purposes. Since we all know that the "space" of possible information structures is rich and varied, one would have expected more work to go into the systematic mapping of the world of common-sense knowledge.

The paper by Paolo Terenziani (from Torino, Italy) at the 1996 ECAI (European Conference on AI) is a welcome exception to this general pattern. Terenziani addresses a particular class of information structures, namely those involving periodic events; he analyzes and compares existing approaches to the formal representation of such structures, and then describes his own approach and relates it to previous work. The paper also reports that the described formalism has served as the theoretical basis for an implemented time-map management program.

There is no reason to believe that this is the definitive account of the topic Terenziani has addressed. In fact, the paper raises a number of interesting new questions on top of the ones it answers. For example, how would Terenziani's formalism connect to more expressive action formalisms, such as those involving composite and hierarchical actions or events? How would it connect to taxonomic languages? And so on. But this is not an objection; on the contrary, it suggests how additional work may be concretely related to the present one.

The most interesting aspect of Terenziani's paper, in my opinion, is that it suggests a methodological stance different from the one we are used to. Imagine, if you will, a situation where our conferences and journals contained many papers of this kind: papers each of which addressed some well-defined and not-too-large part of the entire world of information structures; which analyzed the topic at hand while giving full attention and recognition to earlier approaches to the same and related problems; and which then proposed a solution, specifying its advantages over previous approaches and possibly also its limitations. My expectation is that this mode of working would make it possible for contributions to accumulate so that, as a field, we could gradually build up a coherent account of the representation of knowledge. Not a "theory" in the sense of something presented in a few tens of pages that professes to be the foundation on which one could build solutions to all the problems, but a broad account consisting of many interrelated parts.

(Review by Erik Sandewall.)

96-8

Giambattista Amati, Luigia Aiello and Fiora Pirri. Definability and commonsense reasoning. To appear in Artificial Intelligence.

A nonmonotonic theory (formulated via circumscription, logic programs, default logic, etc.) is compiled if one has found an equivalent theory in a monotonic logic (its compiled form). The literature offers many examples.

The following is an empirical observation: virtually all compiled forms of nonmonotonic theories in the literature yield definitions (e.g., successor state axioms, Clark completion), or they yield disjunctions of definitions (e.g., McCarthy's isblock(A) or isblock(B), Lin's solution to the frame problem for nondeterministic actions). Is this an accident? The paper under review suggests not. Specifically, it views nonmonotonic theories as providing implicit definitions of concepts, and shows that suitable fixed-point equations act as generalizations of explicit definitions for these concepts. The paper demonstrates this point of view by showing how default logic extensions may be represented by fixed-point equations in the logic KD4Z.
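
As a minimal illustration of compilation into a definition (my own example, not taken from the paper): the logic program consisting of the single rule

    $p \leftarrow \mathit{not}\; q$

has the Clark completion

    $p \equiv \neg q, \qquad q \equiv \bot,$

a pair of explicit definitions in classical logic whose unique model ($p$ true, $q$ false) agrees with the intended nonmonotonic meaning of the program.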

(Review by Ray Reiter.)

96-7

Vladimir Lifschitz. Foundations of logic programming. In Principles of Knowledge Representation, CSLI Publications, 1996, pp. 69-127.

This is a unique survey of the foundations of logic programming, entirely different from any other text on the subject. Logic programs are regarded as sets of rules rather than sets of clauses, and both the model-theoretic and the operational semantics are described in terms of rule calculi. Both classical negation and negation as failure are fully incorporated into programs, and their semantics is given by the concept of answer sets.
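
A standard example of the two negations at work (folklore of the answer-set literature, not quoted from the survey): with no information about $\mathit{train}$, the one-rule program

    $\mathit{cross} \leftarrow \neg \mathit{train}$

has the single answer set $\emptyset$, since crossing requires the absence of a train to be explicitly established, whereas

    $\mathit{cross} \leftarrow \mathit{not}\; \mathit{train}$

has the answer set $\{\mathit{cross}\}$, since negation as failure already succeeds when $\mathit{train}$ merely cannot be derived.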

(Review by Katsumi Inoue.)

96-6

Craig Boutilier. Abduction to plausible causes: an event-based model of belief update. Artificial Intelligence 83 (1996), pp. 143-166.

The Katsuno-Mendelzon theory of updates emphasizes that updates, resulting as they do from changes to the world being modeled, are a different phenomenon from belief revision, as characterized, say, by the AGM postulates. But oddly enough, the KM story does not include actions as first-class citizens. Boutilier presents a refreshingly different, and in many ways richer, account of updates. On his view, an update sentence can be treated as an observation of the world which, when it is new information, must be abductively explained as having been caused by one or more action occurrences. He formalizes this notion by explicitly providing for actions and their effects, and shows some relationships with, and departures from, the KM update postulates.
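
Schematically (a paraphrase of the idea, not Boutilier's own notation): where KM-style update maps each believed world $w$ directly to the closest worlds satisfying an observation $\varphi$, the event-based account first abduces the plausible event occurrences that could have produced $\varphi$,

    $\mathit{Expl}(\varphi, w) = \{\, e : e \text{ is plausible at } w \text{ and } \mathit{res}(e, w) \models \varphi \,\},$

and then updates beliefs with the full effects of those events, which may go well beyond $\varphi$ itself.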

(Review by Ray Reiter.)

96-5

Fangzhen Lin. Embracing causality in specifying the indirect effects of actions. Proc. IJCAI-95.

The author argues that indirect effects of actions cannot be adequately specified by "state constraints" of the usual kind, and that the notion of causation needs to be explicitly introduced. Technically, he proposes to add a new ternary predicate Caused to the situation calculus; Caused(p,v,s) means that the fluent p is caused (by something) to have the value v in the situation s.
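
For instance, in the paper's suitcase example (reconstructed here in outline), a suitcase with two latches springs open whenever both latches are up; instead of a mere state constraint, one writes the directed causal rule

    $\mathit{up}(l_1, s) \wedge \mathit{up}(l_2, s) \rightarrow \mathit{Caused}(\mathit{open}, \mathit{true}, s),$

together with the general axiom that whatever is caused to have a truth value actually has it, e.g. $\mathit{Caused}(p, \mathit{true}, s) \rightarrow \mathit{Holds}(p, s)$. Minimizing Caused then yields the indirect effect of an action that raises the latches, without licensing the unintended converse inference that the suitcase's being closed causes a latch to be down.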

(Review by Vladimir Lifschitz.)

96-4

Patrick Doherty, Witold Lukaszewicz and Andrzej Szalas. The DLS algorithm.

DLS is an algorithm for the elimination of second-order quantifiers whose intended applications are similar to those of SCAN (PPR review 96-3). The algorithm is implemented, and can be executed by submitting appropriate forms over the web.
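
The core of this style of elimination can be conveyed by Ackermann's lemma, on which DLS is based (stated here in one of its two dual forms, from the general literature rather than from the paper): if the predicate variable $P$ does not occur in $A$ and occurs only negatively in $B$, then

    $\exists P\, [\, \forall \bar{x}\, (A(\bar{x}) \rightarrow P(\bar{x})) \wedge B(P) \,] \;\equiv\; B(P := A),$

where $B(P := A)$ replaces each occurrence of $P$ in $B$ by $A$. DLS works by transforming its input into a form to which the lemma applies.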

(Review by Vladimir Lifschitz.)

96-3

Hans Jürgen Ohlbach, Thorsten Engel, Renate Schmidt and Dov Gabbay. SCAN.

SCAN is an algorithm for the elimination of quantified predicate variables, with applications to simplifying circumscription and to computing correspondences in modal logic. The algorithm is implemented, and can be executed by submitting appropriate forms over the web.
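
A classic instance of the modal application (standard correspondence theory, used here only as an illustration): the second-order translation of the axiom $\Box p \rightarrow p$,

    $\forall P\, \forall x\, [\, \forall y\, (R(x,y) \rightarrow P(y)) \rightarrow P(x) \,],$

reduces, once the predicate variable $P$ is eliminated, to the first-order condition $\forall x\, R(x,x)$, i.e. reflexivity of the accessibility relation; this is the kind of computation SCAN automates.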

(Review by Vladimir Lifschitz.)

96-2

Fangzhen Lin and Raymond Reiter. Rules as actions: A situation calculus semantics for logic programs. To appear in the Journal of Logic Programming.

Rules in a logic program can be viewed as actions in the sense of the situation calculus. This idea, combined with Reiter's solution to the frame problem, leads to a new semantics of logic programs with negation as failure. For propositional programs, this semantics turns out to be equivalent to the stable model approach.
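
For instance (a standard illustration, not taken from the paper): the propositional program consisting of the rules

    $p \leftarrow \mathit{not}\; q, \qquad q \leftarrow \mathit{not}\; p$

has exactly two stable models, $\{p\}$ and $\{q\}$, while the single rule $p \leftarrow \mathit{not}\; p$ has none; any semantics equivalent to the stable model approach must agree on both counts.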

(Review by Vladimir Lifschitz.)

96-1

John McCarthy. Concepts of logical AI.

This is a collection of brief characterizations of many concepts of logical AI, with references and links to relevant publications by both McCarthy and others. The current version is an incomplete draft which has a paragraph about each of approximately 50 concepts.

(Review by Vladimir Lifschitz.)
