Choice Rules and the Belief-Based View of ASP, Part 3

=========
Date: 19 Nov 2010
From: Vladimir Lifschitz
To: Marc Denecker

> if there is a match, it is between negation as failure and the
> strong introspection operator: in ASP-world programs, we need the
> strong introspection operator K to express what is expressed through
> NAF in ASP-belief applications.

Yes, this is exactly what I meant. I'll try again: The difference between negation as failure and classical negation in "ASP-belief" programs seems to correspond to the difference between the presence of K and its absence in "ASP-world" programs. Can this idea be turned into a theorem?

I have one more question. When we express the commonsense law of inertia in ASP, two negations are used, as in "ASP-belief" programs:

holds(P,T+1) :- holds(P,T), not -holds(P,T+1).

On the other hand, what we are describing here are transitions between states in the real world, not our (potentially incomplete) beliefs about them; accordingly, each answer set is a complete set of literals. Which of the two kinds of ASP is this, from your perspective?

=========
Date: 19 Nov 2010
From: Marc Denecker
To: Vladimir Lifschitz

I will argue that the USA-Advisor, written by Michael and his students, is an ASP-world program, not an ASP-belief program as you claim.

First, it is not because a program contains the connective "-" that it must be an ASP-belief program. In programs written with the ASP-world methodology, the strong negation operator "-", if it occurs, acts as a predicate constructor: for each predicate p/n, it gives you another predicate -p/n whose relation to p/n is fairly loose: it suffices that p/n and -p/n are disjoint. This surfaces in the common practice of eliminating "classical negation" by replacing -p(t) literals by p*(t) atoms, where p* is a new predicate connected to p by the constraint:

:- p(x), p*(x).
E.g., if eur(x) informally means "x is a European", then we could use -eur(x) to represent any set of objects that are not Europeans. E.g., I could use -eur(x) to represent Venezuelans. Or I could use it to encode North Americans. Or kangaroos. Etc. Stated differently, in ASP-world, the interpretation of -p/n is not logically determined by the interpretation of p/n.

On the other hand, the interpretation of "not p/n" is fully determined by the semantics: in an answer set, it denotes the complement of p/n. It follows that, if this answer set M represents a possible world, i.e., if M is a standard interpretation, then "not" corresponds to classical negation. Hence, in an ASP-world program in which eur(x) represents that x is a European, "not eur(x)" represents that x is NOT a European.

I had searched for occurrences of "-" in the USA-Advisor but didn't find them. They have been eliminated in the version in the 2002 paper [1] and the version that is available on the web. Now, in your last reply, you give an example of how they were originally used:

holds(P,T+1) :- holds(P,T), not -holds(P,T+1).

I can't see how -holds(P,T) is defined, but my guess is that -holds(P,T) is to be interpreted as "at time point T there is a cause to terminate P" rather than as "P does NOT hold at time T".

My point so far was: from the fact that the early versions of USA-Advisor contained "-", one cannot infer that it was developed as an ASP-belief program. So, is there another way to see whether USA-Advisor follows the ASP-belief or the ASP-world methodology? I think that this is fairly evident:

+ USA-Advisor will be an ASP-belief program if we can identify some relevant agent or knowledge base whose beliefs (or possible alternative states of belief) are encoded in the program. Each answer set represents the set of literals believed in one of these possible belief states.

+ It will be an ASP-world program if the different answer sets represent different possible worlds.
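Marc's observation that, in a complete answer set, "not" denotes the complement can be checked on a toy program (a hypothetical Python sketch; the predicate name noneur is invented here as a stand-in for -eur):

```python
# Toy evaluation of the stratified program
#   person(a..d).  eur(a).  eur(b).
#   noneur(X) :- person(X), not eur(X).
# Because the single answer set is complete on person/1, the set computed
# by "not eur(X)" is exactly the classical complement of eur.

person = {"a", "b", "c", "d"}
eur = {"a", "b"}                     # facts

# negation as failure, evaluated bottom-up (eur is fully known first)
noneur = {x for x in person if x not in eur}

# classical complement of eur relative to the domain of persons
complement = person - eur

print(noneur == complement)
```

On this complete interpretation the two notions of negation cannot be told apart, which is the point being made.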
I don't think there is a natural epistemic agent whose states of belief correspond to answer sets of this program. If he existed, he would have some kind of telekinetic powers :-). E.g., consider the following rule from the USA-Advisor:

h(in_state(Sw,S),T+1) :- occurs(flip(Sw,S),T), not stuck(Sw).

In ASP-belief, this rule would express that if THE EPISTEMIC AGENT DOES NOT KNOW THAT a switch is stuck, then a flip operation on that switch will modify the state of the switch accordingly -- the telekinetic effect. If the USA-Advisor is an ASP-world program, this rule expresses that in a state of the world in which a switch is NOT stuck, a flip operation will modify its state. Common sense tells me that the effect of a flip operation on a switch is independent of the epistemic state of any epistemic agent whatsoever.

In my opinion, the problem solved by USA-Advisor is a -- challenging and highly interesting -- planning problem, involving only objective knowledge about the world and no reflecting epistemic agents. Different answer sets represent different possible evolutions of the dynamic system. These worlds differ by the presence of other actions, by different initial states, perhaps by different effects of non-deterministic actions, etc. That is, each answer set represents a different possible world. So I claim that this is an ASP-world program.

Why was ASP such a useful language to express the shuttle domain? Here are two clues that to me suggest the main factor:

+ In the conclusion of the 2002 paper [1], it is written: "It was interesting to notice that many fluents of the RCS domain had natural recursive definitions, easily expressible in A-Prolog."

+ In a post on TAG on 22 Oct 2010, Michael expressed that the USA-Advisor consists mainly of "LP-functions" or "modules".

A major part of the success of the USA-Advisor seems to be the presence of informal recursive definitions, and the fact that these can be expressed as LP-functions in ASP.
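As an illustration of such a recursive definition, here is a made-up fragment in the spirit of the RCS domain (none of these names occur in the actual USA-Advisor); the recursive A-Prolog-style definition is evaluated by iterating to its least fixpoint:

```python
# Hypothetical recursive definition, in the spirit of (but not from) the
# RCS encoding:
#   reachable(N)  :- source(N).
#   reachable(N2) :- reachable(N1), link(N1,N2), not closed(N1,N2).

link = {("tank", "valve1"), ("valve1", "junction"), ("junction", "jet"),
        ("tank", "valve2"), ("valve2", "spur")}
closed = {("tank", "valve2")}        # this line is shut off
source = {"tank"}

reachable = set(source)
changed = True
while changed:                       # iterate to the least fixpoint
    changed = False
    for n1, n2 in link:
        if n1 in reachable and (n1, n2) not in closed and n2 not in reachable:
            reachable.add(n2)
            changed = True

print(sorted(reachable))
```

The minimal-model semantics of the rules matches this fixpoint computation, which is why such definitions are so directly expressible.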
In the ASP-world methodology, ASP is an excellent language to express recursive definitions. That has contributed a lot to the success of ASP in the USA-Advisor. On the other hand, the ID part of FO(ID) was developed specifically to represent recursive (= inductive) definitions and would be a more natural language to express these "natural recursive definitions".

In this post, I have not addressed the issue of combining ASP-belief and ASP-world in one program, and the related issue of symmetric and asymmetric predicates. I will come back to this in a later email.

[1] Marcello Balduccini, Michael Gelfond, Monica Nogueira, and Richard Watson. Planning with the USA-Advisor. In 3rd International NASA Workshop on Planning and Scheduling for Space, Sep 2002.

=========
Date: 19 Nov 2010
From: Marc Denecker
To: Vladimir Lifschitz

> The difference between
> negation as failure and classical negation in "ASP-belief" programs seems
> to correspond to the difference between the presence of K and its absence
> in "ASP-world" programs. Can this idea be turned into a theorem?

I'm sure it can. There is a kind of "phase transition" from ASP-belief to ASP-world. As long as one can represent the belief of the knowledge base that one needs to query by one answer set, NAF can be used. Any addition that splits up the answer set into multiple answer sets forces the introduction of the K operator to reason on the knowledge base. This is what Gelfond showed in his strong introspection paper. Such an effect does not occur in the extension of FO(.) that was defined in [1].

[1] Answer Set Programming's Contributions to Classical Logic: An Analysis of ASP Methodology. Michael's anniversary symposium, 2010. http://people.cs.kuleuven.be/~marc.denecker/ctc.pdf

=========
Date: 22 Nov 2010
From: Marc Denecker
To: Vladimir Lifschitz

You defined symmetric predicates as predicates P such that absence of an atom P(A) and of its classical negation -P(A) means "P(A) is unknown".
Asymmetric predicates were defined as those where absence of the atom in an answer set means that "P(A) is false". You suggested that there might be a connection with the ASP-belief and the ASP-world methodologies that I introduced.

In ASP-world (where answer sets are interpretations and represent possible worlds), absence of an atom P(A) in an answer set means that in this possible world, the atom is false. Could this be the same as your asymmetric predicate? Not really, it seems to me, because in a possible-world setting, one possible world does not decide whether an atom is unknown. Whether an atom is unknown depends here on the collection of answer sets/possible worlds, not just on one answer set. In particular, in an ASP-world program, P(A) is unknown exactly when it belongs to at least one answer set and is absent from at least one other answer set.

To summarize our discussion, you asked if, in my view, the USA-Advisor program was encoded in a combination of ASP-belief and ASP-world. I have argued that it is entirely ASP-world: all answer sets represent possible worlds. I have conjectured that classical negation, as originally present in and later eliminated from the USA-Advisor, was used as a predicate constructor to denote one particular relation that is disjoint from the original predicate (but not its complement, i.e., not its classical negation). E.g., for Holds(P,T), I conjectured that -Holds(P,T) meant that there is a cause at time T for P to be false. This is evidently not the same as P being false at time T. It should be easy to verify my conjecture: it suffices to look at the original rules defining -Holds(P,T).

Is it possible to combine (the advantages of) the methodologies of ASP-world and ASP-belief in one ASP program? That would mean that part of an answer set is interpreted as a representation of a possible world and another part as a representation of a state of belief of some relevant agent.
An NAF literal not P would mean that P is false for some predicates P, and that the epistemic agent does not know P for other predicates P. Is this possible? It sounds messy to me, and I don't know of any example, but I have no argument why it would be impossible. Does somebody see an example?

=========
Date: 27 Nov 2010
From: Michael Gelfond
To: Marc Denecker

I agree that different people seem to use different methodologies for representing knowledge in ASP. I do not, however, understand Marc's distinction between ASP-belief and ASP-world. I am not sure how to address the question, and thought that it may be best to start by discussing what I mean by an agent. If some agreement is reached here we can continue.

I start with the following question. According to the epistemic view, answer sets of the program represent possible beliefs of an agent associated with the program. So the question is "What agent?" Quoting from the web page of our KR lab: our main goal is to better understand how to build software components of agents capable of reasoning and acting in a changing environment. From this standpoint our KR languages, including ASP, are used to represent knowledge and beliefs of such an agent.

The agent uses the knowledge base (KB) to answer questions, to plan to achieve goals, to explain unexpected observations, etc. It is capable of performing a great many operations on such a knowledge base. It can add new knowledge to it, select parts of the KB most suitable for a particular task, find a model or models of the KB, check if the KB entails some statement, etc. The way the agent selects these and other operations depends on its architecture and on the outside world. In our original design we assume that the agent's observations of the outside world are accurate. If the agent observes that fluent F is true, it really is.

So the agent may know the following default:

cares(X,Y) :- parent(X,Y), not -cares(X,Y).
"Parents normally care about their children", and the following facts:

parent(john,mary).
person(john).
person(mary).

etc. He can use this knowledge base to answer the query cares(john,mary)? The query is understood as "Do you believe that John cares about Mary?" The answer will be YES. If however the question were "Is it true that John cares about Mary?" the careful agent will answer "I do not know". If the question were "Do you know if John cares about Mary?" the answer will be NO.

A more complex example of an agent is the one implemented in the USA-Advisor and its extensions. The agent was built using our standard methodology which, I thought, Marc called subjective or ASP-belief. According to Marc it is ASP-world. There is some confusion here (one or both of us can be confused). I suspect that much of the confusion is caused by mixing up the declarative knowledge base of the agent and a particular program we run to perform various tasks. As I already mentioned, our main concern was efficiency, so the original KB was substantially simplified. For instance, the rule

h(in_state(Sw,S),T+1) :- occurs(flip(Sw,S),T), not stuck(Sw).

is obtained from two rules of the declarative KB:

-stuck(Sw) :- not stuck(Sw).

and

h(in_state(Sw,S),T+1) :- occurs(flip(Sw,S),T), -stuck(Sw).

In searching for a plan the agent makes an assumption that things are normal and uses a standard causal law of AL to reason about the consequences of FLIP. Here h(F,T) means that F is TRUE at T (from my standpoint Marc's conjecture is wrong). The agent however understands that his conclusion about the value of IN_STATE after FLIP is only believed to be true. It does not have to be true. If the original assumption were false then so would be the conclusion. The agent's belief can be proven wrong by observation of IN_STATE after FLIP or by learning that this particular switch is stuck. In the former case the agent will do diagnostics. In the latter he can try re-planning.
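Michael's simplification step can be replayed on a toy scale (hypothetical Python; the switch names are invented, and stuck/1 is assumed to be given only by facts): closing the default for -stuck first and then applying the effect rule yields exactly what the simplified rule with negation as failure yields.

```python
# Declarative KB:
#   -stuck(Sw) :- not stuck(Sw).
#   h(in_state(Sw,S),T+1) :- occurs(flip(Sw,S),T), -stuck(Sw).
# Simplified program:
#   h(in_state(Sw,S),T+1) :- occurs(flip(Sw,S),T), not stuck(Sw).

switches = {"sw1", "sw2"}
stuck = {"sw2"}                                # fact: stuck(sw2).
occurs = {("sw1", "on", 0), ("sw2", "on", 0)}  # flip both at time 0

# KB version: first close the normality default, then apply the effect rule
neg_stuck = {sw for sw in switches if sw not in stuck}
kb = {("h", ("in_state", sw, s), t + 1)
      for (sw, s, t) in occurs if sw in neg_stuck}

# simplified version: negation as failure directly in the effect rule
simplified = {("h", ("in_state", sw, s), t + 1)
              for (sw, s, t) in occurs if sw not in stuck}

print(kb == simplified, sorted(kb))
```

Only the flip of sw1 has an effect; the flip of the stuck switch sw2 is blocked in both versions.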
It may be useful to concentrate on this explanation and see if it makes sense to the readers.

=========
Date: 29 Nov 2010
From: Marc Denecker
To: Michael Gelfond

I am well aware of how you view ASP programs: as programs encoding different belief states of some epistemic agent. I agree that it is possible to view them that way, and sometimes it works really nicely (as in the ASP-belief programs), but I find it unnecessarily complicated in most cases. My point was that there is an alternative, simpler view on many ASP programs (the ASP-world programs). We can see programs such as the USA-Advisor, colorability, Hamiltonian path, computer configuration, etc., as logic theories in a sense similar to how it is done in classical logic: programs encode collections of Herbrand models (= answer sets) which are interpreted similarly to models in classical logic: as formal representations of possible states of affairs of the problem domain (the view of Tarski and Wittgenstein). E.g.:

- an answer set of the USA-Advisor as a possible evolution of the shuttle that satisfies some goal statement and the action domain theory
- an answer set of the colorability program as a possible coloring of the graph
- an answer set as a possible Hamiltonian path
- an answer set as a possible configuration of the computer
- etc.

Arguably this is the natural interpretation of answer sets of these programs, and the computational problem associated with each of these ASP programs is exactly to compute such a possible world, or parts of it. If we forget about the agent and we view an ASP program as an encoding of the models, i.e., of the different possible worlds of the problem domain, we can use the same ASP reasoning tools to solve the many problems that you were talking of in your email: to compute possible states of affairs, to search for answers that hold in all or in some possible worlds, etc. This is the same functionality, but seen in a different, more classical and simplified way.
The next question might be why the view of an answer set as a possible world would be any simpler than the view as a belief state of an epistemic agent. I could expand on that in more than one way, but one argument is: if the history of nonmonotonic reasoning has shown one thing, it is that formal modelling of epistemic agents is really complex. If there is no clear and natural role for an epistemic agent in a KR problem, let us apply Occam's razor and eliminate the agent.

Frankly, I don't think many ASP programmers think in terms of epistemic agents, and of ASP programs as theories that tell such agents to believe this or that. The agent is not interesting; the possible worlds are what counts, since these encode the solutions of typical ASP problems. I'm pretty sure that this is how most ASP people think of answer sets -- even if they do not say so. It seems to me that you are one of the few that systematically think about ASP from the point of view of coding the beliefs of an epistemic agent. I respect that, but I can see the point of those that don't think in these terms. Thus, I argue that from a practical point of view, there is a need to explain the ASP formalism and the ASP methodology from the point of view of answer sets as possible worlds. (That is what we tried in our paper at the MG65 symposium.)

You wrote:

>> h(F,T) means that F is TRUE at T (from my standpoint
>> Marc's conjecture is wrong).

Here you misinterpreted me. I fully agree with you on this point. My conjecture was that in an answer set of the USA-Advisor, -h(F,T) is true exactly when there is an action that makes F false at time T, i.e., when there is a cause for F to become false. If that is true, it means that we can interpret -h as a new predicate such that -h(F,T) informally means: there is a cause for F to become false at time T. I think this conjecture still stands.
=========
Date: 30 Nov 2010
From: Michael Gelfond
To: Marc Denecker

Marc seems to argue that (a) few ASP programmers think about agents and their beliefs when they write ASP programs and that (b) "from a practical point of view, there is a need to explain the ASP formalism and the ASP methodology from the point of view of answer sets as possible worlds. (That is what we tried in our paper in the MG65 symposium.)"

Both statements may be correct. I certainly didn't argue against them. But I thought that the point of the original discussion was to see if there are indeed two different knowledge representation methodologies and, if so, to try to better understand them. At least I am trying to do that. My previous message, in which I started to explain my methodology (ASP-belief, I assume) was written for this purpose.

I also attempted to set the record straight. I was seriously involved in the design and implementation of the USA-Advisor and know what methodology was used! And it was used consistently. If Marc were to say that a different methodology could have been used I would not object to it. But I understood his statement "I will argue that the USA-Advisor written by Michael and his students, is an ASP-world program" to mean that it was developed using the ASP-world methodology. If so, it is simply historically not true. If "ASP-world program" has a different meaning then it is probably fine, but the precise meaning of the statement escapes me.

(c) Marc seems to also argue that ASP-world is simpler and in other ways preferable to the ASP-belief methodology. This may be the case, but since I don't yet understand the real differences in the methodologies and their ramifications I cannot really argue about that. Possibly the key difference is in the use of ASP. Marc writes: "The agent is not interesting; the possible worlds are what counts since these encode the solutions of typical ASP problems." Since my goal is to design agents which use the same knowledge to perform different tasks (e.g.
the USA-Advisor uses the same knowledge base to perform both planning and diagnostics) it is strange for me to hear that the "agent is not interesting". But if ASP is used as a programming language to solve, say, the Hamiltonian path problem then Marc can be right and an agent is not necessary.

(d) Marc writes: "my conjecture was: that in an answer set of USA-advisor, -h(F,T) is true exactly when there is an action that makes F false at time T. I.e., when there is a cause for F to become false. If that is true, it means that we can interpret -h as a new predicate such that -h(F,T) informally means: there is a cause for F to become false at time T. I think this conjecture still stands."

I do not quite follow this. Certainly -h(F,T) (as well as h(F,T)) can hold in the initial situation or be true by the law of inertia. In these cases I wouldn't say that it was caused by an action. As for having a cause for -h(F,T), this has a flavor of causal logic and the action language C, which is not what I used. But of course one can interpret -h(F,T) as a new predicate -- Vladimir and I showed that long ago.

=========
Date: 30 Nov 2010
From: Michael Gelfond
To: Vladimir Lifschitz

> I want to record information about which students in
> my class are graduate students. Consider three possibilities:
>
> (1):
>
> student(a;b;c;d;e).
> grad(a;b).
> -grad(c;d;e).
>
> (2)
>
> student(a;b;c;d;e).
> grad(a;b).
> -grad(X) :- student(X), not grad(X).
>
> (3)
>
> student(a;b;c;d;e).
> -grad(c;d;e).
> grad(X) :- student(X), not -grad(X).
>
> If I understand your distinction correctly then you will say that in
> version (1) we know (the truth about) the status of each student; in
> version (2) we know the status of a, b, but not the status of c, d, e;
> in version (3) it's the other way around. Is this right?

YES. (I assume that by WE you mean the agent which uses the program.)

> But all three formalizations represent the same complete state of
> knowledge, isn't that right?
I am not sure what you mean by "state of knowledge". Did I use this term? If so, sorry, I've forgotten.

> We can argue which of them is more concise,
> or more elaboration tolerant, or easier to understand, but it seems that
> there is no difference between them as far as the state of knowledge is
> concerned.

Again, I am not sure what "state of knowledge" means. You do not simply mean that they have the same answer set, do you? Let me try to speculate on what can be meant by the state of knowledge. One possibility -- the set of all formulas (literals, disjunctions of literals, etc.) known by the reasoner associated with the program. Pedro and I talked about what it means for such an agent to know a formula in the second part of our discussion. If this definition is adopted then of course programs (1) and (2) have different knowledge states. The agent associated with (1) knows that grad(c) is false, while the one associated with (2) does not.

=========
Date: 30 Nov 2010
From: Vladimir Lifschitz
To: Michael Gelfond

> > But all three formalizations represent the same complete state of
> > knowledge, isn't that right?
>
> I am not sure what you mean by "state of knowledge".

I'll rephrase this. These formalizations tell us the same thing about what the agent knows to be true in the real world: it knows that a,b,c,d,e are students, that a,b are graduate students, and that c,d,e are not.

=========
Date: 30 Nov 2010
From: Michael Gelfond
To: Vladimir Lifschitz

Sorry, perhaps the source of confusion is in my hasty reply to your question. Let me go back to it:

Vladimir:

> Take this example. I want to record information about which students in
> my class are graduate students. Consider three possibilities:
>
> (1):
>
> student(a;b;c;d;e).
> grad(a;b).
> -grad(c;d;e).
>
> (2)
>
> student(a;b;c;d;e).
> grad(a;b).
> -grad(X) :- student(X), not grad(X).
>
> (3)
>
> student(a;b;c;d;e).
> -grad(c;d;e).
> grad(X) :- student(X), not -grad(X).
>
> If I understand your distinction correctly then you will say that in
> version (1) we know (the truth about) the status of each student; in
> version (2) we know the status of a, b, but not the status of c, d, e;
> in version (3) it's the other way around. Is this right?

Michael: YES.

----------

I think my answer was given under an incorrect assumption. I assumed that your programs (2) and (3) allow arbitrary updates. But, since you are really describing your own class, which is static (i.e., people neither register for nor drop the class), no updates of the program are allowed. In this case the answer to your question will be NO. All three programs KNOW the same thing.

Let me explain in more detail how this answer follows from my formalization of knowledge. First, as I mentioned in various places before, the agent's knowledge base is not an ASP program --- it is a MODULE, M, i.e., a PROGRAM, P, together with a syntactically defined class of updates, say U (as well as a syntactically defined collection of queries, but I'll ignore this for now). Let us also assume that we have a notion of formula (literal, disjunction of literals, something more complex, ...).

DEF: We'll say that the agent associated with M knows F if for any set X of rules from U, P + X entails F.

Since your problem is static, the set U is empty, and hence all three programs know the same thing. If I allow the set of updates to include literals, then program (1) knows -grad(c) while the second believes that to be the case but does not know it.

If one tries to build an agent he should think about possible updates of the agent's knowledge (as well as possible queries). These can determine which way of representing knowledge is preferable. The situation is similar to that in procedural programming --- the particular data structure used to represent a set S should be selected only after one decides on the operations which will be applied to S.
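The definition above can be tried out mechanically with a toy brute-force answer-set checker (hypothetical Python helper code, not any real ASP system; -grad(x) is encoded as the invented atom ngrad(x) plus a disjointness check). With the empty update set U the three programs have the same unique answer set, while the update grad(c) separates them: it makes (1) inconsistent but merely defeats the closed world assumption in (2).

```python
# Ground rules are triples (head, positive body, naf body) over the atoms
# grad(x) and ngrad(x); the student(x) facts are folded into the groundings.

students = "abcde"
atoms = [f"grad({x})" for x in students] + [f"ngrad({x})" for x in students]

def answer_sets(rules):
    """Guess a candidate set, build the reduct, take its least model."""
    found = []
    for bits in range(2 ** len(atoms)):
        cand = {a for i, a in enumerate(atoms) if bits >> i & 1}
        reduct = [(h, pos) for (h, pos, neg) in rules if not (neg & cand)]
        lm = set()
        while True:
            new = {h for (h, pos) in reduct if pos <= lm} - lm
            if not new:
                break
            lm |= new
        consistent = all(not (f"grad({x})" in lm and f"ngrad({x})" in lm)
                         for x in students)
        if lm == cand and consistent:
            found.append(frozenset(cand))
    return found

def fact(a):
    return (a, frozenset(), frozenset())

p1 = ([fact(f"grad({x})") for x in "ab"]
      + [fact(f"ngrad({x})") for x in "cde"])
p2 = ([fact(f"grad({x})") for x in "ab"]
      + [(f"ngrad({x})", frozenset(), frozenset({f"grad({x})"}))
         for x in students])
p3 = ([fact(f"ngrad({x})") for x in "cde"]
      + [(f"grad({x})", frozenset(), frozenset({f"ngrad({x})"}))
         for x in students])

print(answer_sets(p1) == answer_sets(p2) == answer_sets(p3))
print(answer_sets(p1 + [fact("grad(c)")]))  # update makes (1) inconsistent
print(answer_sets(p2 + [fact("grad(c)")]))  # (2) retracts ngrad(c) instead
```

With U empty the three programs are indistinguishable; only the updates expose the different defaults, which is exactly the data-structure analogy.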
For instance, in the USA-Advisor, which is certainly a "dynamic system", atoms indicating faulty components, like stuck(switch1), belonged to U. So the corresponding closed world assumption was defeasible. Hence, the best the agent could do was to hope that the assumption really holds. Otherwise his plan may fail to achieve the goal. In your example the CWA assumptions in programs (2) and (3) are not really defeasible (even though it may look like they are).

=========
Date: 1 Dec 2010
From: Vladimir Lifschitz
To: Michael Gelfond and Marc Denecker

Michael's comments about the role of a syntactically defined class of updates, and about "static" domains (where that class is empty), shed new light, it seems to me, on Marc's distinction between the ASP-world and ASP-belief methodologies. In the static case, when we are not interested in updates, there is no need to distinguish between what is true in the world and what the beliefs of the agent are. Perhaps Marc's words

> I don't think many ASP programmers think in terms of epistemic
> agents, and of ASP programs as theories that tell such agents
> to believe this or that. The agent is not interesting; the possible
> worlds are what counts

can be understood as the claim that ASP programmers often deal with static domains, and in such cases thinking in terms of epistemic agents may be an unnecessary complication.

=========
Date: 1 Dec 2010
From: Pedro Cabalar
To: TAG

Regarding some comments about the use of -holds in formalisations of inertia, I would like to comment on some alternative representations of inertia that I have seen or used before. As written in the previous discussion, a possible representation of inertia using strong negation would be:

(A1) holds(F,T1) :- holds(F,T), not -holds(F,T1).
(A2) -holds(F,T1) :- -holds(F,T), not holds(F,T1).

where T1 is the situation following T. We are assuming here that fluents are Boolean.
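A quick way to see (A1),(A2) at work is to simulate one transition step (an illustrative Python sketch with invented fluent names; it assumes that at each time point every fluent is either holds or -holds, so the two rules can only be blocked by direct effects at T+1):

```python
# (A1) holds(F,T1)  :- holds(F,T),  not -holds(F,T1).
# (A2) -holds(F,T1) :- -holds(F,T), not holds(F,T1).
# One transition: fluents persist unless a direct effect at T+1
# supplies the opposite literal.

def step(pos, neg, effects_pos, effects_neg):
    new_pos = set(effects_pos)
    new_neg = set(effects_neg)
    new_pos |= {f for f in pos if f not in new_neg}   # (A1)
    new_neg |= {f for f in neg if f not in new_pos}   # (A2)
    return new_pos, new_neg

# invented fluents: initially power holds, valve_open does not
pos, neg = {"power"}, {"valve_open"}

# an action at time 0 makes valve_open true; power persists by inertia
pos, neg = step(pos, neg, effects_pos={"valve_open"}, effects_neg=set())
print(pos, neg)
```

Because holds and -holds partition the fluents at T, only one of the two rules can fire per fluent, so the apparent mutual recursion through negation is harmless here.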
In many cases, however, we must deal with a multi-valued fluent F, so that holds(F,V,T) means that F takes value V at time T. Then, inertia may amount to the single rule:

(B1) holds(F,V,T1) :- holds(F,V,T), not -holds(F,V,T1).

provided that, whenever F takes a value W, it "does not" (strong negation) take a different value V:

(B2) -holds(F,V,T) :- holds(F,W,T), V != W.

Rule (B2) seems quite reasonable for a functional fluent and independent of dynamic aspects, so I wouldn't say it has any causal flavour. Still, I understand Marc's point about causality, since some formalisations of inertia have traditionally used an "abnormality" predicate ab(F,V,T1) in place of the literal -holds(F,V,T1) in the two rules (B1),(B2) above. The meaning of ab(F,V,T) actually suggests "F=V has been caused to terminate at time T". However, as Michael said, strong negation can always be replaced by an auxiliary predicate. Calling it "aux", "ab" or "caused" is not essential if it just plays the same role. Thus, from my point of view, there is no real significant difference between using -holds(F,V,T1) or ab(F,V,T1), as both representations are strongly equivalent (modulo the original alphabet without "ab" or "-"), regardless of the informal meaning the programmer originally had in mind.

We can even go further than (B1),(B2) and use existential quantifiers to encode inertia as the single rule:

(C) holds(F,V,T1) :- holds(F,V,T), not exists W ( W != V, holds(F,W,T1) ).

This third option has the "advantage" of not requiring strong negation or auxiliary predicates. As I explained in a previous mail, it is also strongly equivalent (modulo the original alphabet) to the formalisation with "ab" or "-". I said above that not using "-" is an advantage because, from my point of view, strong negation for multi-valued fluents as in (B1),(B2) becomes a bit redundant.
Two epistemic situations are possible for a multi-valued fluent F: either holds(F,V,T) for some value V in its domain, or F has no value at all, i.e., there is no V for which holds(F,V,T) belongs to the model. Strong negation doesn't add any relevant information to this. Another hint that shows this redundancy is that we can, in fact, encode Boolean fluents as a particular case where the domain is just {true,false}. When we take (C) for this limited domain, we get

(C1) holds(F,true,T1) :- holds(F,true,T), not holds(F,false,T1).
(C2) holds(F,false,T1) :- holds(F,false,T), not holds(F,true,T1).

which is (A1),(A2) but with a reified truth value.

=========
Date: 2 Dec 2010
From: Marc Denecker
To: Michael Gelfond

I must not have been very clear. You obviously programmed the USA-Advisor using your methodology. I would be a fool to have tried to deny that. Let me go back. Your methodology is based on encoding an introspective epistemic agent with possibly different states of belief, from which answers to many problems can be derived. I too did some research on modeling introspective agents (in the context of autoepistemic and default logic) and I still find such agents a very difficult concept. So I wonder whether we can dispose of this concept: is it possible to interpret ASP programs without referring to such an epistemic agent? In my view, the answer is:

+ Sometimes this is impossible; in these cases, the only sensible interpretation of an answer set is as a state of belief of an epistemic agent -- these are the ASP-belief programs, by definition.

+ In many cases it is possible: in these cases, we CAN interpret the answer sets as Herbrand interpretations, i.e., as possible worlds of the problem domain. We CAN view these answer set programs in a similar way as in classical logic, as theories that specify a class of possible worlds -- these are the ASP-world programs, by definition.
Notice: I do not claim that an ASP-world program cannot be viewed as an encoding of an introspective agent. On the contrary, I agree with you that it is possible to do so. But we do not need to. So, my claim that the USA-Advisor is an ASP-world program is not in contradiction with your view.

An ASP-belief application.

In some applications, there is an introspective epistemic agent around in the problem domain that needs to be modeled. A very clear example is the incomplete knowledge base:

Student(a;b;c).
Minority(a).
-Minority(b).
Interview(x) <- not Minority(x), not -Minority(x).

The last rule expresses that if someone's minority status is unknown, then he or she has to be interviewed. Unknown by whom? By the knowledge base represented by the program. Or by us, the ASP programmers, whose knowledge is represented here. To model this problem domain in a natural way, we need a logic with an introspective epistemic operator. This would also be a natural application for autoepistemic logic. The (unique) answer set of this program,

{ Student(a;b;c), Minority(a), -Minority(b), Interview(c) },

indeed represents the believed literals of the knowledge base. Here the epistemic agent is real and coincides with the knowledge base (whose knowledge coincides with that of us, the ASP programmer). Does it make sense to view this program as a description of possible world(s)? Obviously not. We, the ASP programmer (or the knowledge base), have incomplete knowledge of the minority status of c, and to us, multiple worlds are possible. None of these possible worlds corresponds to the answer set. This is a prototypical example of the ASP-belief methodology. Its properties are: there is an existing introspective agent in the problem domain, and we cannot model the problem domain in a natural way without an epistemic operator for this agent.

ASP-world.

The above situation is an interesting but rather exceptional KR situation.
In most ASP applications, there is no epistemic agent in the problem domain, let alone an introspective one. E.g., the colorability domain is about graphs and colors and colorings. E.g., in the USA-Advisor, what WE know about the domain is "objective": it is about material stuff like pipes and valves and actions, and the effects and non-effects of actions. No introspection is involved so far. Do we have to understand answer sets of such programs as someone's states of belief? Or can we interpret the answer sets of the USA-Advisor as (Herbrand) interpretations representing possible worlds of that dynamic system? Here a possible world obviously corresponds to a possible evolution of the system. To me that seems evident. Why would it not be a correct view?

A minor problem might be the presence of the "classical negation" -h(p,t) in the original USA-Advisor. But that is a detail. -h can be interpreted here as one particular relation that is disjoint from h. I know your methodology reasonably well, and I guessed that -h(p,t) can be interpreted as "there is a cause for p to become false at t". As you mentioned, this may not be entirely accurate in the initial state, but that can be fixed.

The properties of an ASP-world program are: our knowledge about the problem domain is "objective"; there is no clear introspective agent in the domain; nor are there relevant properties for which we, the ASP programmer or the knowledge base, need to reflect on our own state of belief. The epistemic agent underlying the ASP program is then not us, nor an existing agent in the problem domain. Although this agent plays a central role in your view and in your methodology, it is a virtual being -- this is what we called it in the MG65 paper -- and I prefer to apply Occam's razor to it. Maybe you disagree with the term "virtual agent". It seems that this agent is very real to you.
Your methodology is based on the view of programming this agent with (introspective) knowledge and subsequently having this agent solve our problems. I view knowledge representation in a more standard way: as composing a logic description modeling the problem domain and applying inference tools to the theory to solve our problems. In this view, if there is no true introspective knowledge to be represented, there is no role for an introspective epistemic agent, and I prefer not to view my computer as an agent.

I hope I succeeded in clarifying my standpoint. Can we agree that many ASP-world programs exist, i.e., that in many answer set programs, answer sets can be interpreted as (Herbrand) interpretations (not only programs about static domains but also about dynamic domains such as the USA-Advisor; I'm referring here to the preceding email discussion between you and Vladimir)? Then we can move on to the really interesting questions (about which I have some opinions):

- For what kind of applications involving introspective agents does ASP-belief work well? (Clearly ASP offers less natural expressivity than AEL or DL.)
- What kind of logic is ASP when viewing answer sets as possible worlds? How can we explain the ASP-world methodology from this point of view, i.e., without referring to the virtual agent? What kind of knowledge can be naturally represented in it?
- E.g., what is the meaning of negation as failure in this alternative view? It cannot be an epistemic operator in this view, since there is no underlying epistemic agent!
- How do we interpret inertia rules if the negation as failure used to express inertia is not a default/epistemic operator?
- Etc.

=========

Date: 6 Dec 2010
From: Marc Denecker
To: Michael Gelfond

I would like to add a clarification to my previous email. I claimed that in most ASP applications (including the USA-Advisor), the domain does not naturally involve epistemic agents.
Nor do we, ASP programmers, have introspective propositions about the problem domain. In such applications, we can view answer sets in two ways:

- in the standard ASP view, as representations of belief states of some (virtual) introspective agent;
- but also as (Herbrand) interpretations, representations of possible worlds, where absence of an atom in an answer set means that the atom is false in the world.

I defined such applications as ASP-world programs. There is also the interesting class of ASP programs in which this alternative view (answer sets as possible worlds) is impossible; these I called ASP-belief programs.

A next question might be: why would we care? ASP has obviously developed a successful methodology for solving problems, and ASP people are used to explaining this methodology in terms of introspective (virtual) agents. Why would one care about yet another view? More than one argument can be given here, but probably none goes as deep as the following fundamental scientific argument.

The scientific discipline of "knowledge representation" might be defined as the formal empirical science that studies human knowledge. This involves discovering what kinds of knowledge human experts have in different domains, and making natural "models" of this knowledge (read: making formal languages, or adding logic constructs to existing languages, with which these forms of knowledge can be naturally and accurately described). Clearly, this is a strong fundamental view that goes to the essence of knowledge representation.

In this view, the following critique of ASP can be made. By definition of ASP-world applications, the human expert knowledge involved is of an "objective" nature; there is no natural introspective agent involved. Yet, when using ASP to encode such knowledge, the official ASP methodology requires us to encode our given knowledge as introspective statements describing different possible states of belief of an epistemic agent.
Why does the ASP language force us to introduce the complex concept of an introspective epistemic agent in applications where no such agent naturally appears? The introduction of this agent seems to me like a textbook example of a scientific "ad hoc" hypothesis. (From Wikipedia, but accurate enough: in science and philosophy, ad hoc means the addition of extraneous hypotheses to a theory to save it from being falsified. Ad hoc hypotheses compensate for anomalies not anticipated by the theory in its unmodified form.) In what way does the introduction of the virtual agent save ASP "from being falsified"? What is "not anticipated by ASP in its unmodified form"?

I conjecture that in most applications of ASP, the knowledge of the domain cannot be represented in ASP unless a virtual agent is artificially brought onto the scene. Hence, the introduction of the virtual agent is the ad hoc hypothesis that saves ASP. One can see this statement as a scientific conjecture that can be "experimentally" verified or falsified. Our experiments are the ASP-world programs. In these ASP-world programs, we need to identify (1) what is known of the domain, (2) that the ASP agent is indeed virtual, and (3) that we cannot encode the domain knowledge in ASP without introducing the virtual agent.

Here is a simple "experiment". Take an application in which it is known that "Bob is a minority student, John is not a minority student, Dave or Ann (or both) are minority students". There is clearly no natural introspection involved in this scenario. The state of belief of us, or of the knowledge base, is clear and unique. What literals do we believe?

{ Minority(Bob), -Minority(John) }

There are several ways to express this domain in ASP. One is:

Student(Bob;John;Ann;Dave).
{ Minority(x) } <- Student(x).
<- Minority(John).
<- not Minority(Bob).
<- not Minority(Ann), not Minority(Dave).

There are three answer sets. None reflects the unique state of belief of us, or of the knowledge base.
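A sanity check (my own sketch, not part of the original email): for this particular program -- facts, one choice rule, and constraints -- the answer sets are exactly the subsets of candidate Minority atoms that satisfy all three constraints, so they can be enumerated by brute force in a few lines of Python:

```python
from itertools import combinations

students = ["Bob", "John", "Ann", "Dave"]

def satisfies_constraints(minority):
    # <- Minority(John).                         John must not be chosen
    # <- not Minority(Bob).                      Bob must be chosen
    # <- not Minority(Ann), not Minority(Dave).  Ann or Dave must be chosen
    return ("John" not in minority
            and "Bob" in minority
            and ("Ann" in minority or "Dave" in minority))

# for a program consisting of facts, one choice rule and constraints, every
# subset of the choice atoms that passes the constraints is an answer set
answer_sets = [set(chosen)
               for r in range(len(students) + 1)
               for chosen in combinations(students, r)
               if satisfies_constraints(set(chosen))]

print(len(answer_sets))  # 3
# no answer set makes both Ann and Dave non-minority students:
print(any("Ann" not in m and "Dave" not in m for m in answer_sets))  # False
```

Projected on Minority, the three answer sets are {Bob, Ann}, {Bob, Dave}, and {Bob, Ann, Dave}; none of them equals the unique belief set { Minority(Bob), -Minority(John) }.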
This shows that the agent is virtual. Suppose we were not allowed to introduce an artificial agent. Could we represent this scenario in ASP? The only "real" agent is the knowledge base itself. If this is the agent of the ASP program, its answer set should be unique, and it should be:

{ Minority(Bob), -Minority(John) }

This answer set does not reflect the disjunctive knowledge about Ann and Dave. So, there is no ASP-belief program that can accurately represent the given information, for the simple reason that an answer set cannot accurately represent a belief state involving disjunctive information. As a practical consequence, we see that no answer set program with this unique answer set can accurately answer the question: "is it possible that Ann and Dave are not minority students?" Notice that this query can be correctly answered with the earlier ASP-world program. So, introducing the virtual agent saved the day. This "experiment" confirms the conjecture. The same argument can be made for most ASP programs.

The above is a fundamental critique of ASP. In view of it, the following research questions arise:

- The "real" ASP applications are the ASP-belief programs. What are the strengths and limitations of ASP as a language for representing introspective knowledge, compared to, e.g., AEL or DL?
- There is no doubt that the ASP-world methodology is a very effective methodology for solving actual problems in many domains. ASP people succeed in encoding the kind of human expert knowledge that is available in many domains. Irresistibly, the scientific question arises: what are the types of human expert knowledge that we find in so many domains and that can be encoded successfully in ASP? And how does the ASP representation work? Can we scientifically explain this without taking refuge in a virtual agent?

---------

(See Part 4 for the continuation of this discussion.)