The Credit Card Domain: Discussion
Date: Wed, 7 Jul 1999
From: Vladimir Lifschitz
To: Michael Gelfond and Monica Nogueira
The part of your message that I found puzzling at first is the place
where you write that, in your example, expressing inertia by a "normal
default"
holds(F,T1) :- next(T,T1), holds(F,T), not holds(F,T1)
doesn't work, and the rule
holds(F,T1) :- next(T,T1), holds(F,T), not ab(F,T)
(along with appropriate cancellation rules) has to be used instead. In a
recent conversation, Michael clarified this to me. Apparently, the root
of the problem is that you expressed the "unexpected observation" as a
fact
holds(nhas(creditCard),1)
rather than as a constraint
:- holds(has(creditCard),1).
Expressing observations by constraints is an important element of the
methodology developed by Norm McCain and Hudson Turner in their recent
papers (see, for instance, 2.4 in the TAG bibliography). There is a
good reason to treat observations this way. An observation affects our
state of knowledge monotonically: it eliminates some of the pictures
of the world that we considered possible prior to the observation. And
this is exactly how adding a constraint affects the answer sets of a logic
program: it eliminates some of them. But it never adds new answer sets,
which is what typically happens when you add a fact.
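This contrast can be checked concretely with a brute-force stable model enumerator for small ground programs. The sketch below is illustrative only; the (head, positive body, negative body) encoding of rules is an assumption of the example, not anything from the papers cited here:

```python
from itertools import combinations

def stable_models(atoms, rules, constraints=()):
    """Enumerate stable models of a ground normal program by brute force.

    rules:       list of (head, pos_body, neg_body) triples
    constraints: list of (pos_body, neg_body) pairs, read as ":- body"
    """
    def closure(positive_rules):
        # least model of a negation-free program, by forward chaining
        derived, changed = set(), True
        while changed:
            changed = False
            for head, pos in positive_rules:
                if head not in derived and pos <= derived:
                    derived.add(head)
                    changed = True
        return derived

    models = []
    for r in range(len(atoms) + 1):
        for cand in map(set, combinations(sorted(atoms), r)):
            # Gelfond-Lifschitz reduct w.r.t. the candidate set
            reduct = [(h, set(p)) for h, p, n in rules if not set(n) & cand]
            violated = any(set(p) <= cand and not set(n) & cand
                           for p, n in constraints)
            if not violated and closure(reduct) == cand:
                models.append(frozenset(cand))
    return models

# {q :- not p} has the single answer set {q}.  Adding the constraint
# ":- q" merely eliminates it, while adding the fact "p." produces a
# brand new answer set {p}: constraints filter, facts can create.
```

A constraint is tested only after a candidate has passed the stability check, which is why adding one can never introduce a new answer set.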
In my new note on default logic that you will find at
http://www.cs.utexas.edu/users/vl/mypapers/deflog.ps
this idea is discussed in historical perspective. Representing inertia
by a normal default was proposed by Ray Reiter in 1980. The Yale Shooting
controversy in the mid-eighties led us to believe that his idea wouldn't
work. But now we know that Reiter's solution to the frame problem works
fine provided that the rest of the available knowledge is represented
properly. Writing observations as constraints is one of the three
principles that we follow. The other two are using rules (rather than
material implications) to describe the effects of actions and restricting
attention to complete answer sets.
================
Date: Fri, 9 Jul 1999
From: Michael Gelfond and Monica Nogueira
To: Vladimir Lifschitz
It is possible that the differences in logic programming formalizations
are determined by the differences in the underlying theories on which these
formalizations are based. This may deserve some further discussion.
We reason in an L-like language in which an action description D defines
a transition diagram with arcs labeled by (possibly compound) actions.
The axioms, A, contain observations, i.e., statements observed(f,t) and
occurs(a,t) (where f is a fluent literal, a is an action, and t is an
integer representing time).
Actions are divided into exogenous and those performed by the agent.
Axioms of A must explicitly include all the occurrences of the agent's
actions. A also contains (explicitly or implicitly) the closed world
assumption for the occurrences of exogenous actions. This
allows us to make reasonable predictions about the future and at the
same time assume that "unlisted" exogenous actions occur when it is
needed to explain some "miracle".
Technically, a model of a theory in this language is a pair consisting of a diagram
and a path. In this approach our observations are, in general,
NONMONOTONIC (due to the closed world assumption)
and therefore we do not express them as constraints.
Here is an example:
agent's action "a", exogenous action "e", fluents "f" and "g"
The causal laws are:
a causes g
e causes f
Axioms A1:
observed(neg f,0).
observed(neg g,0).
occurs(a,0)
This entails holds(g,1), holds(neg f, 1), neg occurs(e,0)
Now we observe
observed(f,1)
The new set of axioms, A2, entails
holds(f, 1), occurs(e,0)
which is obviously nonmonotonic.
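The switch is easy to see in miniature. Here is an illustrative sketch of the closed world assumption on occurrences (the names and the tuple encoding are made up for the example):

```python
def cwa_occurrences(actions, times, occurs):
    """Closed world assumption on occurrences: conclude
    neg(occurs(a,t)) exactly when occurs(a,t) is not recorded."""
    return {('neg_occurs', a, t)
            for a in actions for t in times if (a, t) not in occurs}

# Under A1 only occurs(a,0) is recorded, so neg(occurs(e,0)) is
# concluded; once occurs(e,0) is added to explain observed(f,1),
# that conclusion is withdrawn.
```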
In your approach you remove the closed world assumption about
the occurrences of actions and add axioms to say that at each moment of
time e occurs or does not occur, etc. This makes your representation
nonmonotonic with respect to causal laws only.
We believe that both approaches should be carefully investigated and
compared. Their advantages and drawbacks may depend on the type of
queries (tasks) we are planning to use our representations with.
============
Date: Mon, 12 Jul 1999
From: Hudson Turner
To: Michael Gelfond and Monica Nogueira
I am interested in how you express minimality of exogenous action
occurrences in a logic program. How do you obtain the nonmonotonic
behavior you describe below: so that adding the observation
observed(f,1)
to your program yields the new consequence
occurs(e,1)?
=============
Date: Wed, 14 Jul 1999
From: Michael Gelfond and Monica Nogueira
To: Hudson Turner
Dear Hudson,
we do not really have a general enough answer to this question.
We are thinking about it though. Let us discuss one of the possible
approaches.
First, look at the example from our previous message. We have
1. Action description, D:
a causes g
e causes f
2. H0 - First set of observations:
observed(neg f,0).
observed(neg g,0).
occurs(a,0) (We may need observed here too, but let's ignore it for now.)
We will use this in conjunction with the following domain independent
axioms, say I:
h(F,T) :- observed(F,T).
h(F,T+1) :- occurs(A,T),
causes(A,F,P),
h(P,T).
ab(neg(F),T) :- occurs(A,T),
causes(A,F,P),
h(P,T).
h(F,T) :- caused(F,P),
h(P,T).
ab(neg(F),T) :- caused(F,P),
h(P,T+1).
h(F,T+1) :- h(F,T),
not ab(F,T).
and the closed world assumption for occurrences of actions:
neg(occurs(A,T)) :- not occurs(A,T). (CWA)
(To simplify, we only allow preconditions consisting of one literal.)
I + D + H0 does entail holds(g,1), holds(neg f, 1), neg occurs(e,0)
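This entailment can be reproduced with a small procedural sketch of the effect and inertia axioms (a Python rendering for illustration only; the 'neg f' string encoding of fluent literals is an assumption of the sketch, and static causal laws are omitted):

```python
def predict(causes, observed, occurs, horizon):
    """Forward simulation of the effect and inertia axioms of I
    (static causal laws omitted, preconditions limited to one literal).

    causes:   dict action -> list of (effect, precondition) pairs
    observed: set of (fluent_literal, time) pairs, taken as h(F,T)
    occurs:   set of (action, time) pairs; the CWA is implicit, since
              unlisted occurrences simply have no effect
    """
    def contrary(f):
        return f[4:] if f.startswith('neg ') else 'neg ' + f

    h = set(observed)
    for t in range(horizon):
        ab = set()
        # h(F,T+1) :- occurs(A,T), causes(A,F,P), h(P,T).
        for a, ta in occurs:
            if ta == t:
                for f, p in causes.get(a, []):
                    if p is None or (p, t) in h:
                        h.add((f, t + 1))
                        ab.add((contrary(f), t))  # ab(neg(F),T)
        # h(F,T+1) :- h(F,T), not ab(F,T).
        for f, tf in list(h):
            if tf == t and (f, t) not in ab:
                h.add((f, t + 1))
    return h

# With D = {a causes g, e causes f} and H0 as above, the simulation
# yields h(g,1) and h(neg f,1), matching the entailment claimed here.
```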
Suppose now we observe f, i.e.,
H1 = H0 + observed(f,1)
We can check that I + D + H1 has no stable model, i.e. our
observation is not compatible with the only assumption we made,
namely CWA. This means that some exogenous unobserved action or
actions occurred somewhere in the past and we need to find these
actions. Here the situation is very similar to planning. We will enter
into a loop where, say, we will look for explanations of cardinality
one, two, etc.
By explanation we mean a minimal collection E of
statements of the form occurs(Ai,Tk) where
(a) Ai is an unobserved exogenous action and Tk a past moment of time,
(b) I + D + H + E is consistent, i.e. has a stable model.
H here is the current history of the domain which contains
unexplained observations of values of some fluents at the current
(last) moment of time.
To find an explanation of cardinality 1 we take our program
I + D + H1 and combine it with the corresponding control module
defining the space of possible explanations. It may look like
occurs(A1,1) or ...or occurs(A1,n) or ... or occurs(Ak,1) or ... or occurs(Ak,n)
where A1...Ak is the list of exogenous actions and the current time
(the time of the miracle observation) is n+1.
The resulting program will have one model containing occurs(e,0).
In general we have many models corresponding to many possible
explanations.
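The generate-and-test loop described above can be rendered as iterative deepening over candidate occurrence sets. This is an illustration of the idea only, not the smodels control module; the forward simulation and the 'neg f' literal encoding are simplifying assumptions:

```python
from itertools import combinations

def contrary(f):
    return f[4:] if f.startswith('neg ') else 'neg ' + f

def simulate(causes, observed, occurs, horizon):
    # effect + inertia axioms, as in the domain independent module I
    h = set(observed)
    for t in range(horizon):
        ab = set()
        for a, ta in occurs:
            if ta == t:
                for f, p in causes.get(a, []):
                    if p is None or (p, t) in h:
                        h.add((f, t + 1))
                        ab.add((contrary(f), t))
        for f, tf in list(h):
            if tf == t and (f, t) not in ab:
                h.add((f, t + 1))
    return h

def consistent(h):
    # no fluent literal together with its contrary at the same time
    return not any((contrary(f), t) in h for f, t in h)

def explain(causes, observed, occurs, exogenous, horizon):
    """Look for explanations of cardinality 1, 2, ... in turn; an
    explanation is a set E of unobserved exogenous occurrences that
    restores consistency with the observations."""
    space = [(a, t) for a in exogenous for t in range(horizon)]
    for k in range(1, len(space) + 1):
        found = [set(E) for E in combinations(space, k)
                 if consistent(simulate(causes, observed,
                                        occurs | set(E), horizon))]
        if found:
            return found       # all minimal explanations of size k
    return []

# For the running example, the only explanation is {occurs(e,0)}.
```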
For those of you who may want to play with the idea, the corresponding
program for smodels is attached below.
%----------------------------
% Fluents
%----------------------------
fluent(f).
fluent(g).
%----------------------------
% Actions
%----------------------------
action(a). % this means agent's action
exogAction(e).
%----------------------------
% Dynamic Causal Laws
%----------------------------
causes(a,g,none).
causes(e,f,none).
%----------------------------
% Static Causal Laws
%----------------------------
%----------------------------
% Observations
%----------------------------
observed(neg(f),0).
observed(neg(g),0).
occurs(a,0).
observed(f,1).
%----------------------------
% Domain Independent Axioms
%----------------------------
occurs(A,T) :-
time(T),
observed(A,T),
exogAction(A).
h(F,T) :-
time(T),
fluent_literal(F),
observed(F,T).
h(F,T1) :-
next(T,T1),
causes(A,F,P),
h(P,T),
occurs(A,T).
ab(NF,T) :-
time(T),
fluent_literal(F),
contrary(F,NF),
occurs(A,T),
causes(A,F,P),
h(P,T).
h(F,T) :-
time(T),
caused(F,P),
h(P,T).
ab(NF,T) :-
next(T,T1),
fluent_literal(F),
contrary(F,NF),
caused(F,P),
h(P,T1).
h(F,T1) :-
next(T,T1),
fluent_literal(F),
h(F,T),
not ab(F,T).
h(none,T) :-
time(T).
:- time(T), fluent_literal(F), contrary(F,NF), h(F,T), h(NF,T).
neg(occurs(A,T)) :-
time(T),
action(A),
not occurs(A,T).
neg(occurs(A,T)) :-
time(T),
exogAction(A),
not occurs(A,T).
% In this module we describe the space of possible occurrences
% of unobserved exogenous actions the program can use to explain
% the miracle. It assumes that the miracle can be explained by
% exactly one occurrence of one of such actions. This is a simplifying
% restriction. We plan to have it lifted in the next "release".
% In particular domains, this space can be substantially reduced
% by using extra information, e.g. relating miraculous fluents
% to a subset of relevant exogenous actions and selecting our
% explanations from this subset.
% Selects an unobserved exogenous action A. The program will
% later check if an occurrence of A in the past can explain the miracle.
occured(A) :-
exogAction(A),
not other_action(A).
% other_action(A) iff unobserved exogenous action different from A occurred in the past
other_action(A) :-
time(T),
lt(T, lasttime),
exogAction(A),
exogAction(B),
neq(A,B),
not observed(B,T),
occurs(B,T).
% Selects a possible explanation consisting of an unobserved
% exogenous action A and a moment T from the set of possible explanations.
occurs(A,T) :-
exogAction(A),
occured(A),
time(T), lt(T, lasttime),
not other_time(A,T).
% other_time(A,T) iff unobserved exogenous action A occurred at time different from T
other_time(A,T) :-
time(T),
time(T1),
neq(T,T1),
lt(T, lasttime), lt(T1, lasttime),
exogAction(A),
not observed(A,T1),
occurs(A,T1).
%--------------------------
% Auxiliary Predicates
%--------------------------
% lasttime is assumed to be defined as a constant, e.g. by adding
% "const lasttime=1." or by passing "-c lasttime=1" to lparse.
time(0..lasttime).
next(T,T1) :- time(T), time(T1), T1=T+1.
fluent_literal(F) :-
fluent(F).
fluent_literal(neg(F)) :-
fluent(F).
contrary(F,neg(F)) :-
fluent(F).
contrary(neg(F),F) :-
fluent(F).
hide next(X,Y).
hide time(X).
hide ab(X,Y).
hide fluent(X).
hide fluent_literal(X).
hide contrary(X,Y).
hide action(X).
hide causes(X,Y,Z).
hide observed(X,Y).
hide exogAction(X).
hide h(X,Y).
hide neg(X).
hide other_time(X,Y).
hide other_action(X).
hide occured(X).
hide caused(X,Y).
=================
Date: Wed, 14 Jul 1999
From: Hudson Turner
To: Michael Gelfond and Monica Nogueira
I have the impression that the preference you expressed (in response to
Vladimir) for observations of the form
observed(f,t) <-
as opposed to constraints
<- not holds(f,t)
does not play an essential role in the method you describe for identifying
minimal sets of exogenous actions sufficient to explain observations.
That is, I have the impression that your program I + D + H + E is monotone
in H. (Adding observations to H can only eliminate answer sets.)
As you say, if the program becomes inconsistent, one can add exogenous
actions to E to reestablish consistency. (The program is not monotone in
E.)
I've inserted a few remarks below.
On Wed, 14 Jul 1999, Michael Gelfond wrote:
>
> Hudson wrote:
> How do you obtain the nonmonotonic
> behavior you describe below: so that adding the observation
> observed(f,1)
> to your program yields the new consequence
> occurs(e,1)?
I should have written occurs(e,0), of course.
>
> Dear Hudson,
> we do not really have a general enough answer to this question.
It would be nice to obtain minimal explanations simply by adding new
observations to an otherwise fixed program.
It would also be nice if this did not involve a significantly more complex
representation of the action domain.
It appears that the problem of finding minimal explanations for
observations is very much like planning -- find (exogenous) actions that
achieve the observations. One difference: it is enough to find a
plausible (or as Norm and I might say, "causally possible") explanation --
it need not be sufficient (in the sense that it always achieves the
observations). Another complication: if the "initial state" isn't
completely known, then we are doing something *like* conformant
planning -- finding a single set of actions that *can* achieve the
observations in all initial states consistent with what we know.
So perhaps a nice method for finding minimal explanations would also show
us how to find minimal plans (without using a family of programs with
different numbers of time steps or action occurrences).
Then again, I suppose that complexity results suggest we can't do this,
unless we bound the length of plans/explanations?
==============
Date: Wed, 14 Jul 1999 15:28:45 -0600 (MDT)
From: Michael Gelfond
To: Hudson Turner
Hudson wrote:
"I have the impression that the preference you expressed (in response to
Vladimir) for observations of the form
observed(f,t) <-
as opposed to constraints
<- not holds(f,t)
does not play an essential role in the method you describe for identifying
minimal sets of exogenous actions sufficient to explain observations.
That is, I have the impression that your program I + D + H + E is monotone
in H. (Adding observations to H can only eliminate answer sets.)
As you say, if the program becomes inconsistent, one can add exogenous
actions to E to reestablish consistency. (The program is not monotone in
E.)"
Michael:
I am not sure I understand. Recall that H contains statements of
the form occurs(a,t). By adding such a statement you are forced
to withdraw neg(occurs(a,t)) (look at CWA). So why is it monotonic?
Am I missing something?
I am also not sure what you mean by E here? Is it one of the
explanations found?
Hudson:
"It would be nice to obtain minimal explanations simply by adding new
observations to an otherwise fixed program."
Michael:
In a sense that is exactly what we do. We have a program P = I + D
which is somewhat fixed (it only changes if D does).
We also have a control module, say C, which is also somewhat fixed.
(Of course it is not completed yet but hopefully it will be). Changes
in this module can be caused by addition of new domain dependent
information which does not happen often. (I would actually make such
information a part of D. Sometimes it can even be automatically extracted
from the causal laws)
So what we do is just take a fixed program P + C and add H6 to it
(H6 here replaces the previous input, H5, and of course is obtained by just adding
occurrences of actions performed at 5 and observations of fluents
performed at 6 to H5)
How does that differ from what you envision?
Hudson:
"It would also be nice if this did not involve a significantly more complex
representation of the action domain."
Michael:
I do not think that our representation of the action domain is
significantly more complex. Description of the domain consists of D
and H and here I do not see any increased complexity w.r.t. L
(except of course separation between agent's and exogenous actions
which is certainly needed.)
You may refer instead to the complexity of I. But this is again not
really more complex than our usual representation.
We distinguish between the occurrences of actions and values of fluents
being observed, and those being inferred from our theory. The former are
non-defeasible and the latter can be defeated. But this is a useful
refinement, and the price we pay for it is just two axioms,
occurs(A,T) :-
time(T),
observed(A,T),
exogAction(A).
h(F,T) :-
time(T),
fluent_literal(F),
observed(F,T).
(It probably would be methodologically better to use observed(A,T) in
H for all (not necessarily exogenous) actions but we didn't want to
rewrite parts of our program.)
You can also view the use of AB's in I as more complex w.r.t. your use of
normal defaults. This is probably true, but the added complexity is
negligible and we make substantial use of this difference in our control
module.
The last part, control module C, may look complex because in SMODELS
we have neither disjunction nor explicit choice operator (in the sense
of Sacca and Zaniolo). But what can we compare this complexity with?
Hudson:
"It appears that the problem of finding minimal explanations for
observations is very much like planning -- find (exogenous) actions that
achieve the observations."
Michael:
Right!
Hudson:
"One difference: it is enough to find a
plausible (or as Norm and I might say, "causally possible") explanation --
it need not be sufficient (in the sense that it always achieves the
observations). "
Michael:
Why is it enough? I am not sure I understand what "sufficient" means.
Can you elaborate?
Hudson:
"Another complication: if the "initial state" isn't
completely known, then we are doing something *like* conformant
planning -- finding a single set of actions that *can* achieve the
observations in all initial states consistent with what we know."
Michael:
You are right here. If the initial state is not completely known the
program we sent you will not work. But in this case we do what we did
for planning and in other cases, namely add
observed(f,0) or observed(neg(f),0)
for each fluent f.
==============
Date: Wed, 14 Jul 1999
From: Hudson Turner
To: Michael Gelfond
Thanks for your comments and questions about my questions and remarks.
Let me try to answer your questions, and clarify and correct my remarks.
> Hudson wrote:
> "I have the impression that the preference you expressed (in response to
> Vladimir) for observations of the form
>
> observed(f,t) <-
>
> as opposed to constraints
>
> <- not holds(f,t)
>
> does not play an essential role in the method you describe for identifying
> minimal sets of exogenous actions sufficient to explain observations.
>
> That is, I have the impression that your program I + D + H + E is monotone
> in H. (Adding observations to H can only eliminate answer sets.)
> As you say, if the program becomes inconsistent, one can add exogenous
> actions to E to reestablish consistency. (The program is not monotone in
> E.)"
>
> Michael:
> I am not sure I understand. Recall that H contains statements of
> the form occurs(a,t). By adding such a statement you are forced
> to withdraw neg(occurs(a,t)) (look at CWA). So why is it monotonic?
> Am I missing something?
>
> I am also not sure what you mean by E here? Is it one of the
> explanations found?
I should not have said monotone in H, since H includes occurrences of
non-exogenous actions, as you point out. My interest was in observations
(of facts).
And so I should have guessed instead that your program is monotone in
observations of fact. Is it? (If so, then I still have the impression
that observations could as well be expressed as constraints.)
I chose the form I + D + H + E for the general case of your program from
the point where you write
>> (b) I + D + H + E is consistent, i.e. has a stable model.
where as I understand it
I consists of domain independent axioms
D is the domain description (a translation of it)
H consists of observations of facts and non-exogenous action occurrences
E consists of exogenous action occurrences
In your example, E is initially empty, and H consists of
observed(neg f,0).
observed(neg g,0).
occurs(a,0).
You say
> Suppose now we observe f
which is captured by adding to H
observed(f,1).
As expected, the program becomes inconsistent when this observation is added
to H.
I'll write I + D + H' + E to stand for the inconsistent program obtained
at this point.
As I understand, the next step is in response to the inconsistency caused
by adding the observation to H: now we add exogenous action occurrences to E,
until the program becomes consistent. (The program is not monotone in E.)
In your example, it is enough to add
occurs(e,0)
to the program.
> Hudson:
> "It would be nice to obtain minimal explanations simply by adding new
> observations to an otherwise fixed program."
>
>
> Michael:
> In a sense that is exactly what we do. We have a program P = I + D
> which is somewhat fixed (it only changes if D does).
Yes.
> We also have a control module, say C, which is also somewhat fixed.
I imagine this is the part I do not understand (now).
> (Of course it is not completed yet but hopefully it will be). Changes
> in this module can be caused by addition of new domain dependent
> information which does not happen often. (I would actually make such
> information a part of D. Sometimes it can even be automatically extracted
> from the causal laws)
(What kind of new domain dependent information?)
> So what we do is just take a fixed program P + C and add H6 to it
> (H6 here replaces the previous input, H5, and of course is obtained by just adding
> occurrences of actions performed at 5 and observations of fluents
> performed at 6 to H5)
>
> How does that differ from what you envision?
As far as I understand, that is essentially what I envisioned. When you
described it before, I had the impression that adding a fluent observation
(that caused inconsistency) led to a succession of programs --
>> To find explanation of cardinality 1 we take our program
>>
>> I + D + H1 and combine it with the corresponding control module
>> defining the space of possible explanations. It may look like
>>
>> occurs(A1,1) or ...or occurs(A1,n) or ... or occurs(Ak,1) or ... or occurs(Ak,n)
-- I had the impression that if you failed to find an explanation of
cardinality 1 then you would replace the "control module" above with
another that could find explanations of cardinality 2, and so forth.
I am eager to understand how control module C can eliminate the need for
a succession of programs.
> Hudson:
> "It would also be nice if this did not involve a significantly more complex
> representation of the action domain."
>
> Michael:
> I do not think that our representation of the action domain is
> significantly more complex.
I did not mean to say it was. (Thank you for your more detailed remarks
on this, anyway.)
I imagined that the crucial difficulty would lie in the functionality of
what I am now calling, as you suggest, control module C.
> The last part, control module C, may look complex because in SMODELS
> we have neither disjunction nor explicit choice operator (in the sense
> of Sacca and Zaniolo). But what can we compare this complexity with?
I had in mind, in particular, the following comment from your program:
% In this module we describe the space of possible occurrences
% of unobserved exogenous actions the program can use to explain
% the miracle. It assumes that the miracle can be explained by
% exactly one occurrence of one of such actions. This is a simplifying
% restriction. We plan to have it lifted in the next "release".
% In particular domains, this space can be substantially reduced
% by using extra information, e.g. relating miraculous fluents
% to a subset of relevant exogenous actions and selecting our
% explanations from this subset.
Presumably, your translation of the domain would have to capture
this "extra information", which in general -- say, for a language
with indirect effects -- seems like it might require some work (i.e.
a "significantly more complex" translation).
But I had deeper concerns about "lifting" the assumption that
a single action explains the miracle. It is not clear to me how to
obtain minimal explanations in this manner, even when the size
of explanations is bounded. And if it is not bounded, I mention
again that I wonder if complexity arguments don't suggest that,
at best, we would have to pay dearly. What I refer to is the
result, which I admittedly know only second-hand, that classical
planning is P-Space complete. (We can do satisfiability planning
with stable models because we look for plans of bounded length, and,
in fact, do not look for minimal plans even within those bounds,
except by considering a family of programs with different bounds.)
Perhaps the simplest place to look for complexity results applicable to
minimal explanations of bounded size is abductive logic programming.
Doesn't minimal abduction bump us a step up the polynomial hierarchy?
> Hudson:
> "One difference: it is enough to find a
> plausible (or as Norm and I might say, "causally possible") explanation --
> it need not be sufficient (in the sense that it always achieves the
> observations). "
>
> Michael:
> Why is it enough? I am not sure I understand what "sufficient" means.
> Can you elaborate?
If a domain is not deterministic, then even with complete knowledge about
the initial state, satisfiability planning is not sound -- a model gives
us a "causally possible" plan, but it may not be valid, because it can fail
to be "sufficient" or even (always) executable.
Example:
Tossing a coin is a causally possible plan for achieving heads
(no matter the initial state), but it is not sufficient. It *is*
"executable."
Tossing a coin and then "truly saying heads" is also a causally
possible plan for achieving heads. It is sufficient -- whenever it is done
it achieves the goal. But it is not executable. If the coin comes up
tails, you can't "truly say heads." (Heads is an action precondition.)
For explanations though, we often satisfy ourselves with what are essentially
causally possible plans -- requiring only that it *could* have happened the
way we say.
So if the coin is unexpectedly heads, we might explain it by (someone else's)
toss.
> Hudson:
> "Another complication: if the "initial state" isn't
> completely known, then we are doing something *like* conformant
> planning -- finding a single set of actions that *can* achieve the
> observations in all initial states consistent with what we know."
>
> Michael:
> You are right here. If the initial state is not completely known the
> program we sent you will not work. But in this case we do what we did
> for planning and in other cases, namely add
> observed(f,0) or observed(neg(f),0)
> for each fluent f.
I am not sure why your program doesn't work in this case. Perhaps you
don't include "complete initial state axioms" which I guess you would
write:
holds(f,0) <- not holds(neg(f),0)
holds(neg(f),0) <- not holds(f,0)
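The effect of such axioms is to branch on every fluent undetermined at time 0. Here is a sketch of the resulting space of complete initial states (with fluent literals encoded as 'f' / 'neg f' strings purely for illustration):

```python
from itertools import product

def initial_completions(fluents, observed_at_0):
    """Enumerate the complete initial states compatible with the
    partial observations, mimicking the awareness axioms
    holds(f,0) <- not holds(neg(f),0) and its mirror image."""
    known = {f: (f, 0) in observed_at_0
             for f in fluents
             if (f, 0) in observed_at_0 or ('neg ' + f, 0) in observed_at_0}
    open_fluents = [f for f in fluents if f not in known]
    for bits in product([True, False], repeat=len(open_fluents)):
        state = dict(known)
        state.update(zip(open_fluents, bits))
        yield {(f if v else 'neg ' + f, 0) for f, v in state.items()}

# Observing only neg(g) at time 0 leaves two complete initial states,
# one containing f and one containing neg(f).
```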
===============
Date: Thu, 15 Jul 1999
From: Michael Gelfond and Monica Nogueira
To: Hudson Turner
Thanks for a quick reaction. This really makes it fun.
I think we understand what you meant now and you are basically
right. Here are a few comments:
Hudson:
I should not have said monotone in H, since H includes occurrences of
non-exogenous actions, as you point out. My interest was in observations
(of facts).
And so I should have guessed instead that your program is monotone in
observations of fact. Is it? (If so, then I still have the impression
that observations could as well be expressed as constraints.)
MM:
There is a slight difference in terminology. Our agent can actually
observe occurrences of exogenous actions and the values of fluents. So
we call both of them facts (or observations). You are talking about, say, fluent-facts.
We can replace our formalization of the inertia by that using normal
defaults and represent fluent facts as constraints. You are right that
the resulting program should be equivalent to the original one. But it looks a
little strange to represent some observations as constraints and some as
atomic statements. We cannot make up our minds about it yet.
Does anyone have a strong preference?
About the use of I + D + H + E. We understand what you meant now.
Notice though, that we allow H to contain observations of exogenous
actions.
Hudson: What kind of new domain dependent information? (can be added
to the control module)
MM.
Something like a list of possible alternative explanations for a
change in a fluent value, their plausibility, etc. For instance, the
absence of an object can be explained by the actions "lose" or "steal".
Likelihood may depend on the neighborhood or the value of the object, etc.
It is all fairly vague though.
Hudson
-- I had the impression that if you failed to find an explanation of
cardinality 1 then you would replace the "control module" above with
another that could find explanations of cardinality 2, and so forth.
MM.
This is a possible interpretation. Another way to look at it is to
view a control module as a basically procedural program consisting of
(more or less declarative) descriptions of the types of explanations
(or plans) we are looking for and the algorithm to generate and
analyze these explanations (plans) like iterative deepening or binary
search, etc. From this standpoint the control module is just one
program.
It certainly would be nice to have a completely declarative program
but we are not sure that it is possible. In fact one thing which
interests us here is to find a right language combining procedural and
declarative parts of the system. For instance, would it be better to
modify SMODELS to make it return one model at a time (i.e. each time
it is called it returns a different model)? Should we use Prolog to
represent the procedural part, or is some small procedural language a la
GOLOG better? We are curious whether any other readers are interested in
these types of questions.
Hudson:
Presumably, your translation of the domain would have to capture
this "extra information", which in general -- say, for a language
with indirect effects -- seems like it might require some work (i.e.
a "significantly more complex" translation).
MM.
We do not mean that this extra information can always be extracted from
the domain. Our guess is that it is possible if, say, the static laws of D
are of the form caused(f,g) where the premise g is a literal. But in
general, information like this should be supplied by the specifier.
Hudson
Perhaps the simplest place to look for complexity results applicable to
minimal explanations of bounded size is abductive logic programming.
Doesn't minimal abduction bump us a step up the polynomial hierarchy?
MM
A complexity concern is of course valid.
As for the abduction bump, none of us can recall the result.
Does anyone know where it can be found?
Somehow, at this moment it does not bother us too much because we
assume that the space of possible explanations will be really small
(due to the extra information supplied by the specifier).
> Michael:
> I am not sure I understand what "sufficient" means.
> Can you elaborate?
If a domain is not deterministic, then even with complete knowledge about
the initial state, satisfiability planning is not sound -- a model gives
us a "causally possible" plan, but it may not be valid, because it can fail
to be "sufficient" or even (always) executable.
For explanations though, we often satisfy ourselves with what are essentially
causally possible plans -- requiring only that it *could* have happened the
way we say.
MM:
We got it. Thanks. (Actually now we remember seeing the term).
You are right of course and it is an interesting difference.
Hudson
> Michael:
> You are right here. If the initial state is not completely known the
> program we sent you will not work. But in this case we do what we did
> for planning and in other cases, namely add
> ********observed(f,0) or observed(neg(f),0)******
> for each fluent f.
I am not sure why your program doesn't work in this case. Perhaps you
don't include "complete initial state axioms" which I guess you would
write:
holds(f,0) <- not holds(neg(f),0)
holds(neg(f),0) <- not holds(f,0)
MM
True, but that is exactly the marked statement above. We just didn't
include it in the program we sent you before.
Thanks again. It certainly helps us to clarify our thoughts.