Page of Positive Reviews of Research in Logical AI:
Reviews Submitted in 1997

97-11

(a) Hector Geffner. Causal theories for nonmonotonic reasoning. Proc. AAAI-90, pp. 524-530. (b) Norman McCain and Hudson Turner. Causal theories of action and change. Proc. AAAI-97, pp. 460-465.

Much recent work on reasoning about actions stresses the difference between asserting that a proposition F holds and asserting that F has a causal explanation. Geffner's paper was apparently the first to demonstrate the importance of this distinction for solving difficult cases of the frame problem. It describes a nonmonotonic logic with a modal operator C that expresses the existence of a causal explanation. (The predicate Caused introduced by Lin in 96-5 plays a similar role.) The use of the language is illustrated by formalizing the stuffy room domain from the 1988 AIJ paper by Ginsberg and Smith.
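
For a sense of the notation (the formula below is only indicative; the paper's actual axioms are more elaborate), the central causal assertion of that domain can be written as

blocked(vent1) & blocked(vent2) -> C stuffy

which says that the two blockages together causally explain the stuffiness of the room, rather than merely materially implying it.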

McCain and Turner proposed a simple nonmonotonic logic of "causal rules"

F => G

which correspond to material implications of the form

if F then CG

in Geffner's language. Their formalization of the commonsense law of inertia as a causal rule is original in that they do not think of this law as a default ("Normally, the values of fluents do not change"). Instead, they view inertia as an assertion about the existence of a causal explanation: If the value of a fluent did not change then there is a cause for this.
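
To illustrate, in notation that may differ in detail from the paper's, the inertia rule for a fluent F can be written as the causal rule

F(t) & F(t+1) => F(t+1)

read: if F holds at time t and still holds at time t+1, then its holding at t+1 is caused (by inertia). An unexplained change of value is thereby excluded, while an explained one is not.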

(Review by Vladimir Lifschitz.)

97-10

Common Sense 98 problem page.

This collection of interesting formalization problems is being compiled in connection with the forthcoming symposium on logical formalizations of commonsense reasoning. Most problems currently on the page are by Ernie Davis and Pat Hayes. Here is a sample -- Davis's "cooking problem":

A cook is cracking a raw egg against a glass bowl. Properly performed, the impact of the egg against the edge of the bowl will crack the egg shell in half. Holding the egg over the bowl, the cook will then separate the two halves of the shell with his fingers, enlarging the crack, and the contents of the egg will fall gently into the bowl. The end result is that the entire contents of the egg will be in the bowl, with the yolk unbroken, and that the two halves of the shell are held in the cook's fingers. What happens if: The cook brings the egg to impact very quickly? Very slowly? The cook lays the egg in the bowl and exerts steady pressure with his hand? The cook, having cracked the egg, attempts to peel it off its contents like a hard-boiled egg? The bowl is made of looseleaf paper? Of soft clay? The bowl is smaller than the egg? The bowl is upside down? The cook tries this procedure with a hard-boiled egg? With a coconut? With an M&M?

(Review by Vladimir Lifschitz.)

97-9

(a) Fahiem Bacchus and Froduald Kabanza. Using temporal logic to control search in a forward chaining planner. Proc. 3rd European Workshop on Planning, 1995. (b) Fahiem Bacchus and Froduald Kabanza. Planning for temporally extended goals. Proc. AAAI-96, pp. 1215-1222. (c) Biplav Srivastava and Subbarao Kambhampati. Synthesizing customized planners from specifications. To appear in the Journal of Artificial Intelligence Research.

I take these three papers as representative of recent work on classical AI planning that takes domain-dependent control knowledge seriously.

The first paper by Bacchus and Kabanza describes a planning system that combines a simple forward-chaining, domain-independent planner with a linear temporal logic for specifying domain-specific control knowledge. The effectiveness of this planning system is demonstrated by applying it to benchmark domains such as the blocks world, the tire domain, and the logistics domain.
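
To give the flavor of such control knowledge (the formula below is a hypothetical illustration, not a quotation from the paper), a blocks world rule might require that a block already in its goal position never be moved:

always (on(x,y) & goal(on(x,y)) -> next on(x,y))

The planner can then prune from its forward search every state sequence that violates this formula.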

The second paper by Bacchus and Kabanza shows that the same linear temporal logic can also be used to specify what they call temporally extended goals, such as goals with temporal deadlines and maintenance and safety goals.
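
For instance, in notation merely suggestive of theirs, the maintenance goal of keeping a room clean throughout execution and the safety goal of never breaking a vase could be written as

always clean(room)
always not broken(vase)

while a goal with a temporal deadline additionally bounds the time by which a condition must be achieved.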

The paper by Srivastava and Kambhampati describes a system that uses domain-specific control knowledge in a different way. Instead of using it to generate an individual plan for a particular planning problem, they use it to synthesize a domain-specific planner that can be run on all planning problems in the given domain. This is essentially program synthesis, and it is naturally carried out using a program synthesis system, the one developed by Smith at Kestrel.

(Review by Fangzhen Lin.)

97-8

Fangzhen Lin. Applications of the situation calculus to formalizing control and strategic information: the Prolog cut operator. Proc. IJCAI-97, pp. 1412-1418.

In an earlier paper, joint with Ray Reiter (JLP Vol. 31), the author proposed a semantics for logic programs with negation as failure based on the situation calculus. The idea is to view a rule as an action that operates on interpretations of the underlying language. When restricted to Herbrand interpretations, this approach is equivalent to the stable model semantics of negation as failure.

In the paper under review, the situation calculus semantics is extended to programs with cut. One of the theorems demonstrates the correctness of the usual implementation of negation as failure in terms of cut relative to the stable model semantics. This is apparently the first time a connection has been established between a semantics of cut and a declarative semantics of negation as failure.
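
The implementation in question is the familiar definition of negation as failure in terms of cut, expressible in standard Prolog as

not(P) :- call(P), !, fail.
not(P).

The theorem relates the behavior of this program, under the situation calculus semantics of cut, to the stable model semantics.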

The author's long-term goal is to use the situation calculus as a general framework for representing control and strategic information in problem solving. As the title indicates, this study of the cut operator is a step in this direction.

The IJCAI-97 Program Committee included this contribution among the three IJCAI-97 Distinguished Papers.

(Review by Vladimir Lifschitz.)

97-7

Peter Schachte. Global variables in logic programming. Proc. ICLP-97, pp. 3-17.

This is a very nicely written paper on a topic that certainly deserves attention. The author argues that global variables are often useful within the logic programming framework, proposes a natural solution in the form of assignment to special variables, provides a simple semantics for the resulting programs, and discusses at length the issue of possible implementation.

Global variables destroy the declarative character of logic programs, but in many cases this loss is offset by other advantages, such as a smaller number of variables and a more concise form of the programs.
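
To suggest what the facility buys in practice, here is a global counter that spares every predicate an extra accumulator argument. The primitives are SWI-Prolog's nb_setval/nb_getval, used purely for illustration; they are not Schachte's proposed construct.

% create the counter once
init_counter :- nb_setval(counter, 0).

% increment it from anywhere, with no argument threading
bump :- nb_getval(counter, N), N1 is N + 1, nb_setval(counter, N1).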

(Review by Krzysztof Apt.)

97-6

Shan-Hwei Nienhuys-Cheng and Ronald de Wolf. Foundations of Inductive Logic Programming. LNAI 1228, Springer, 1997.

Here is my foreword to this book.

One of the most interesting recent developments within the field of automated deduction is inductive logic programming, an area that combines logic programming with machine learning. Within a short time this area grew into an impressive field, rich in spectacular applications and full of techniques that called for new theoretical insights.

This is the first book that provides a systematic introduction to the theoretical foundations of this area. It is a most welcome addition to the literature concerning learning, resolution and logic programming.

The authors offer in this book a solid, scholarly presentation of the subject. By starting their presentation with a self-contained account of the resolution method and of the foundations of logic programming, they enable the reader to place the theory of inductive logic programming in the right historical and mathematical perspective. By presenting in detail the theoretical aspects of all components of inductive logic programming, they make it clear that this field has grown into an important area of theoretical computer science.

The presentation given by the authors also allows us to reevaluate the role of some hitherto isolated results in the field of resolution, and it yields an interesting novel framework that sheds new light on the use of first-order logic in computer science.

I would like to take this opportunity to congratulate the authors on the outcome of their work. I am sure this book will have an impact on the future of inductive logic programming.

(Review by Krzysztof Apt.)

97-5

Bertram Fronhöfer. The Action-as-Implication Paradigm -- Formal Systems and Application. CS Press, München, 1996. ISSN 1432-2668, ISBN 3-931757-91-9.

In 1985 the reviewer proposed to treat actions in planning as implications in a deductively restricted part of classical logic. The approach became known as the linear connection method (LCM). Two years later Girard's linear logic (LL) was published; it may be divided into an additive part (aLL) and a multiplicative part (mLL). Later it turned out that LCM is more or less the same as mLL. The book under review provides a thorough and comprehensive study of the relationship between the two formalisms and applies the insight to planning by means of a standard theorem prover for first-order logic, SETHEO, with remarkable results in terms of efficiency.
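
The underlying idea is easy to convey with the generic example from the literature on linear-logic planning (not a quotation from the book): a blocks world move is an implication that consumes its preconditions and produces its effects,

on(x,y) & clear(x) & clear(z) -> on(x,z) & clear(x) & clear(y)

and the linearity restriction of LCM, like the resource discipline of mLL, guarantees that the facts on the left are used up rather than remaining available, so that facts behave as resources.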

The first two chapters of the book introduce LCM and LL. Chapter 3 provides a general theory of matrix characterization and instantiates it to a matrix characterization of mLL. Chapter 4 contains the proof of the book's main theoretical result. It considers mLLe, i.e. mLL with the exchange rule; mLLew, i.e. mLLe with the weakening rule; and mLLewc, i.e. mLLew with the contraction rule, and gives matrix characterizations for each of the three in terms of spanning matings with certain restrictions. mLLe is characterized by Girard's proof nets, mLLewc by unrestricted spanning matings, and mLLew by the linearity restriction of LCM together with an additional acyclicity restriction. The latter becomes superfluous in the case of implicational formulas without multiplicative disjunction, an issue studied in detail in Chapter 6. Chapter 5 provides the corresponding characterization in the presence of the cut rule. Chapter 7 introduces a logic between mLLew and mLLewc, say pLL, which allows an action rule to be used as an unlimited resource, i.e. more than once, and again gives a matrix characterization for it. In Chapter 8 the theoretical results are applied to planning.

The interesting theoretical aspect of these results is that the most suitable logic for planning is actually not LL but pLL (or LCM for that matter). The book has provided a solid basis for this logic.

The attraction of logic is that it provides a generic formalism subsuming many application-specific formalisms: a single theorem prover would suffice in place of many specialized mechanisms. The book gives a nice demonstration of this paradigm. Only a simple modification is needed to make the classical theorem prover SETHEO, incidentally the winner of the CADE-96 competition, respect the linearity restriction. After this modification Fronhöfer coded planning problems as input to the otherwise general SETHEO engine. The resulting "planner" uniformly outperformed UCPOP, a specialized and well-known planning system, on blocks world tasks, often by several orders of magnitude. True, UCPOP is not the latest achievement in planning, but only a few years ago it was considered the best planning system available. Comparisons with the more recent Graphplan technology have yet to be made.

As a final remark, the material in the book is very well presented, with many examples illustrating the definitions and the phenomena of interest. I can strongly recommend it to researchers in logic as well as in planning. This recommendation is positive on an absolute scale, and not only because the review is written for the Page of "Positive" Reviews.

(Review by Wolfgang Bibel.)

97-4

Alexander Bochman. (a) A logical foundation for logic programming I: Biconsequence relations and nonmonotonic completion. (b) A logical foundation for logic programming II: Semantics of general logic programs. To appear in the Journal of Logic Programming.

The author introduces a monotonic formalism of biconsequence relations, which provides a syntactic representation of Belnap's four-valued semantics. Biconsequence relations seem to constitute a very natural formalization of monotonic four-valued reasoning, generalizing Scott's consequence relations, which provide a syntactic formalization of classical two-valued logic.
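
For orientation: Belnap's four values can be pictured as independent records of evidence for and against a proposition, i.e. as the pairs (1,0) true, (0,1) false, (1,1) both and (0,0) neither. Accordingly, a biconsequence relation manipulates, roughly speaking, pairs of sets of formulas where an ordinary consequence relation manipulates single sets.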

After studying the properties of biconsequence relations and establishing a number of natural and useful results, the author augments his formalism with two important additions. The first is the so-called "coherence rule", which basically says that at the proof level the two different notions of negation present in Belnap's four-valued logic coincide. Depending on the language to which the coherence rule is applied, its impact on the formalism becomes weaker or stronger, thus yielding different variants of the semantics.

The second ingredient added by the author is the circumscriptive closure of his formalism, which basically allows us to infer all formulae entailed by the class of all minimal models. This important addition transforms the monotonic formalism of biconsequence relations into a powerful non-monotonic framework. An important advantage of the proposed approach is that different versions of the resulting non-monotonic formalism are obtained not by modifying the non-monotonic (circumscriptive) closure but rather by changing the class of formulae to which the coherence rule applies. As a result, the different versions of the formalism depend primarily on the language used and not on different non-monotonic closures.

Finally, the author shows how to interpret general disjunctive programs by means of biconsequence relations and how various semantics proposed for disjunctive logic programs, such as the stationary semantics (in several different versions), the D-WFS semantics, and the partial stable, stable and stable class semantics, can be easily expressed in his formalism. In a related paper he discusses a similar representation for default logic, autoepistemic logic and the static semantics. These results shed important light on these semantics and formalisms and on their mutual relationships.

This is a very well written paper, which I highly recommend to anyone interested in logic programming and non-monotonic reasoning and, in particular, in the semantics of disjunctive logic programs.

(Review by Teodor Przymusinski.)

97-3

Andrew Barto, Steven Bradtke and Satinder Singh. Learning to act using real-time dynamic programming. Artificial Intelligence 72 (1995), pp. 81-138.

Recent work on action (e.g., by Gelfond and Lifschitz, Reiter, Sandewall) has established a strong connection between action theories and dynamic systems. Indeed, action theories appear to be nothing more than a convenient way of specifying dynamic systems. The paper by Barto et al. is not about specifying dynamic systems but about using them to select optimal courses of action. The paper is beautifully written, mostly self-contained, and provides a unifying theoretical foundation for a number of crucial problems in AI related to action, including planning, learning and control. It is not logical AI, but it is AI at its best. People interested in theories of action should probably be aware of it.
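
The unifying principle is the classical one: the optimal cost-to-go function V satisfies Bellman's equation, which in generic notation (not necessarily the paper's) reads

V(s) = min over a of [ c(s,a) + sum over s' of p(s'|s,a) V(s') ]

Real-time dynamic programming performs this update only at the states actually encountered while acting, which is what ties planning, learning and control together.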

(Review by Hector Geffner.)

97-2

William McCune. Solution of the Robbins problem. To appear in the Journal of Automated Reasoning.

E.V. Huntington demonstrated in 1933 that algebras that satisfy the equation

n(n(x) + y) + n(n(x) + n(y)) = x,

where + is associative and commutative, are Boolean. Herbert Robbins conjectured that algebras that satisfy the simpler equation

n(n(x + y) + n(x + n(y))) = x

are also Boolean. This conjecture remained an open problem for over sixty years until its truth was recently established by EQP, a general-purpose theorem-proving program for equational reasoning written by William McCune at Argonne National Laboratory.

This is a historic event for automated mathematics. The result also nicely combines many research themes in automated deduction: the "Argonne approach" to theorem proving and its implementation; Steven Winker's earlier work at Argonne, which showed that Robbins algebras are Boolean if certain very weak conditions are added (EQP proved that one of these sufficient conditions is a consequence of the Robbins equation); special unification algorithms for associative-commutative functions; and new restrictions ("basic paramodulation") of the classic paramodulation rule for equality reasoning.

(Review by Mark Stickel.)

97-1

Franz Baader and Bernhard Hollunder. Embedding defaults into terminological knowledge representation formalisms. Journal of Automated Reasoning 14 (1995), pp. 149-180.

This paper addresses the problem of integrating Reiter's default logic into terminological representation systems. As part of their motivation, the authors present a few examples related to Reiter's treatment of open defaults, and these examples are of interest quite independently of terminological logic. The examples reveal a major problem with the standard definition of an extension: replacing an axiom in a default theory by a logically equivalent sentence may lead to a theory with different extensions.

The authors point out that the modification proposed by David Poole (Variables in Hypotheses, 1987) suffers from the same problem, and that the treatment of open defaults proposed by this reviewer (On Open Defaults, 1990) is not entirely satisfactory either.

(Review by Vladimir Lifschitz.)
