Sam's Calculus

Date: Mon, 7 Oct 2002
From: Vladimir Lifschitz
To: TAG

These are fragments from several messages that Jonathan and I have received in connection with our paper "Making an argument more convincing."

===

From Esra (esra@cs.utexas.edu):

You introduce an interesting approach to solving a commonsense reasoning problem. However, it is not clear to me how someone should start this process of proving assertions as you describe it. I also wonder whether you have ever used the approach Vladimir told us about earlier:

  ``It occurs to me that the right approach to a difficult commonsense
  reasoning problem, such as Sam's Calculus, is to write a detailed
  informal proof first, and then formalize it.''

(to formalize each step, for instance).

===

Vladimir's reply:

Here is what I was thinking:

1. Ask yourself which subset of the given facts is the absolute minimum
   sufficient for deriving the expected conclusion.
2. Derive the conclusion from these facts informally.
3. Formalize your argument in an elaboration tolerant way.
4. Ask yourself which of the given facts that you did not use can help
   you overcome possible objections.
5. Describe these objections and your counter-arguments informally.
6. Go to Step 3.

===

From Michael (mgelfond@redwood.cs.ttu.edu):

This is a good formulation. I was trying to pinpoint the differences between our approaches, and this may help too. I DO NOT NORMALLY LOOK AT THE EXPECTED CONCLUSION BUT RATHER AT THE GIVEN (AND COMMON-SENSE) KNOWLEDGE OF THE SUBJECT. I try to find some piece of general knowledge that we really use in the given example and formalize it in a reasonably general form.

If you have time, please look at the attached solution of the same problem. I wrote it for my students, hoping to involve them in further discussion, but it didn't go too far. At that time we were discussing object orientation and modules, which explains some peculiarities of my representation.
Instead of knowledge of subjects, like math, calc, etc., I am talking about 'pieces of knowledge' which are given unique names. This slightly complicates the representation, but I do not want to change anything now - an unpolished solution can give a better idea of the methodology. Is there some difference between the approaches, or is this just my imagination? Any comments from everyone are of course very welcome. If anyone is interested in finishing the formalization, please let me know.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%
%%%%%          MODULE 1: KNOWLEDGE EVOLUTION
%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%% The module deals with the evolution of pieces of knowledge,
%%%%% defined by the relation 'knowledge(K)'. We use K as a variable
%%%%% of this type.
%%%%% We also use (possibly indexed) variables T and Q whose
%%%%% domains are defined by the relations unit_time and scale(0..2),
% where the values 0..2 are interpreted as follows:
%   0 - no knowledge
%   1 - average knowledge
%   2 - good knowledge

%%%%% NOTICE that the relations 'knowledge' and 'unit_time'
%%%%% should be imported from other modules, while 'scale'
%%%%% seems to belong to this one.

%%%%% The module will 'import' the relation
%%%%%    quality(K,Q,T)
%%%%% which holds if Q is the quality of knowledge K at time T.
%%%%% We assume that the value of Q in this relation is determined by:
%
%   1. The natural laws of knowledge retention;
%   2. The actions of upgrading and refreshing the knowledge,
%      which alter the natural progression of things.

degree(1).
degree(2).

action(upgrade(K,D,T)) :- knowledge(K), degree(D), unit_time(T).

action(refresh(K,T)) :- knowledge(K), unit_time(T).

% Executability conditions:
%
% Knowledge cannot be upgraded beyond the limit.

:- upgrade(K,D,T), quality(K,Q,T), Q + D > 2,
   knowledge(K), degree(D), scale(Q), unit_time(T).
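The executability condition above can be read off directly: an upgrade by degree D is blocked when Q + D would exceed the top of the scale. A minimal Python sketch of that check (an editor's illustration, not part of Michael's ASP program; the names MAX_QUALITY and upgrade_executable are invented):

```python
# Illustrative rendering of the executability condition above:
# upgrade(K,D,T) is blocked when Q + D > 2, where Q is the current
# quality of K on the scale 0 (none), 1 (average), 2 (good).

MAX_QUALITY = 2  # top of scale(0..2)

def upgrade_executable(current_quality, degree):
    """True iff upgrading by `degree` keeps the quality within the scale."""
    return current_quality + degree <= MAX_QUALITY
```

For instance, average knowledge (quality 1) can be upgraded by degree 1 but not by degree 2, and good knowledge (quality 2) cannot be upgraded at all.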
% The former increases the quality of knowledge K by D;
% the latter restores the knowledge lost to the
% deterioration process.

% We assume that, in the absence of intervening actions,
% retention of knowledge over time is controlled by the
% following LAWS:

% 1. Normally, good and bad knowledge is inertial,
%    i.e., it persists over time.
%
%    I guess this is really not true - after a sufficiently
%    long period even good knowledge disappears - but it's o.k.
%    for now.

inert(0).
inert(2).

quality(K,Q,T) :- quality(K,Q,T-1), not n_quality(K,Q,T),
                  knowledge(K), inert(Q), unit_time(T), T > 0.

% 2. Normally, average knowledge persists over short
%    intervals of time but drops to 0 after that.

% To express this in a Markovian model of the domain
% we introduce an auxiliary fluent
%    last_modified(K,T0,T)
% which says that K was modified at T0
% and neither upgraded nor refreshed
% in the interval (T0,T].
% The fluent is defined by the following rules:

% The initial value:

last_modified(K,0,0) :- knowledge(K).

% The effect axioms:

last_modified(K,T+1,T+1) :- upgrade(K,D,T),
                            knowledge(K), degree(D), unit_time(T).

last_modified(K,T+1,T+1) :- refresh(K,T),
                            knowledge(K), unit_time(T).

% Inertia:

last_modified(K,T0,T+1) :- last_modified(K,T0,T),
                           not n_last_modified(K,T0,T+1),
                           knowledge(K), unit_time(T0), unit_time(T).

% Functional dependency:

n_last_modified(K,T1,T) :- last_modified(K,T2,T), neq(T1,T2),
                           knowledge(K), unit_time(T),
                           unit_time(T1), unit_time(T2).

% Now we are ready to continue defining 'quality'.

% 2a. Normally, average quality persists over a short period.

quality(K,1,T+1) :- last_modified(K,T0,T), quality(K,1,T),
                    short(T0,T+1), not n_quality(K,1,T+1),
                    knowledge(K), unit_time(T0), unit_time(T).

% 2b. It drops to 0 after a long time:

quality(K,0,T+1) :- last_modified(K,T0,T), quality(K,1,T),
                    long(T0,T+1), not n_quality(K,1,T+1),
                    knowledge(K), unit_time(T0), unit_time(T).
% ... and it may or may not drop to 0 in the gray area:

quality(K,0,T+1) | quality(K,1,T+1) :- last_modified(K,T0,T),
                    quality(K,1,T), gray(T0,T+1),
                    knowledge(K), unit_time(T0), unit_time(T).

% Functional dependency:

n_quality(K,Q1,T) :- quality(K,Q2,T), neq(Q1,Q2),
                     knowledge(K), unit_time(T), scale(Q1), scale(Q2).

% Effect axioms:

quality(K,Q+D,T+1) :- upgrade(K,D,T), quality(K,Q,T),
                      knowledge(K), scale(Q), degree(D), unit_time(T).

quality(K,Q,T+1) :- refresh(K,T), quality(K,Q,T),
                    knowledge(K), scale(Q), unit_time(T).

% Finally, we assume that each K is assigned
% the initial quality value 0.

quality(K,0,0) :- knowledge(K), not n_quality(K,0,0).

% Negation (this is of course not needed if I use classical negation):

:- n_quality(K,Q,T), quality(K,Q,T),
   knowledge(K), unit_time(T), scale(Q).

:- n_last_modified(K,T0,T), last_modified(K,T0,T),
   knowledge(K), unit_time(T0), unit_time(T).

% Negation:

:- upgrade(K,D,T), n_upgrade(K,D,T),
   knowledge(K), degree(D), unit_time(T).

:- refresh(K,T), n_refresh(K,T),
   knowledge(K), unit_time(T).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%          MODULE 2: TIME
%
% This module exports the qualitative lengths of intervals
% used in Module 1.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

unit_time(0..5).

short(T1,T2) :- unit_time(T1), unit_time(T2), T2 > T1, T2 - T1 < 2.

long(T1,T2) :- unit_time(T1), unit_time(T2), T2 - T1 > 3.

gray(T1,T2) :- unit_time(T1), unit_time(T2), T2 > T1,
               not long(T1,T2), not short(T1,T2).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%          MODULE 3: AUXILIARY
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% This part requires a lot of thought and depends
% on possible applications of our theory of knowledge.
% I simply included information relevant to the problem.
% Comments and suggestions will be appreciated!

% First we need some kind of hierarchy of subjects. I'll be
% very simple here -- adding a real hierarchy is a good exercise.

subject(c1).
subject(high_school_math).
subject(math).

is_in(c1,math).
is_in(high_school_math,math).

% We do not know whether calculus is part of high school math,
% so there is some incompleteness here:

is_in(c1,high_school_math) | n_is_in(c1,high_school_math).

% Next I will include some information on the relationship of
% 'upgrade' and 'refresh' with other relevant actions. For Sam's
% calculus we need the action 'think'.

% Let think(P,S,T) mean that person P thinks about subject S at moment T.
% In this case we'll need the rules:

think(P,S,T) :- upgrade(K,D,T), who(K,P), what(K,S),
                knowledge(K), degree(D), unit_time(T).

think(P,S,T) :- refresh(K,T), who(K,P), what(K,S),
                knowledge(K), unit_time(T).

% and

think(P,S2,T) :- is_in(S1,S2), think(P,S1,T),
                 subject(S1), subject(S2), person(P), unit_time(T).

% A really interesting question is how to propagate 'quality'
% through this hierarchy. I have no really good understanding here.
% We seem to do it somehow, though - some introspection may help.

% Finally, our input -- info about Sam:

person(sam).
knowledge(k).      % k is a knowledge object which
who(k,sam).        % refers to Sam's knowledge of
what(k,c1).        % first calculus.

% Sam received 'c' in all high school math classes.

received(sam,S,c,0) :- is_in(S,high_school_math), subject(S).

% This rule relates grade C and average quality 1.

quality(K,1,0) :- knowledge(K), who(K,P), what(K,S), received(P,S,c,0).

% It is out of place here --- grades would be related to quality
% in some other module. How to do that I have no idea (compute the
% average?). Next time - I want to go home SOON!

% Finally, I'd like to start out not knowing anything about Sam
% after high school - he could have studied calculus:

0{upgrade(K,D,T)}1 :- knowledge(K), degree(D), unit_time(T).

0{refresh(K,T)}1 :- knowledge(K), unit_time(T).

% We should be able to conclude that he didn't from

n_think(sam,math,T) :- unit_time(T).   % (***) -- Sam didn't think about math.
% I also added:

:- upgrade(K,D,T), refresh(K,T),
   knowledge(K), degree(D), unit_time(T).

% Negation:

:- think(P,S,T), n_think(P,S,T),
   subject(S), person(P), unit_time(T).

:- is_in(S,Ss), n_is_in(S,Ss), subject(S), subject(Ss).

hide.
show quality(A,B,C).

% Answer: 1  (Sam didn't study calculus)
%   quality(k,0,0) quality(k,0,1) quality(k,0,2)
%   quality(k,0,3) quality(k,0,4) quality(k,0,5)

% Answer: 2  (Sam retained calculus in the gray area)
%   quality(k,1,0) quality(k,1,1) quality(k,1,2)
%   quality(k,0,3) quality(k,0,4) quality(k,0,5)

% Answer: 3  (Sam didn't retain calculus in the gray area)
%   quality(k,1,0) quality(k,1,1) quality(k,0,2)
%   quality(k,0,3) quality(k,0,4) quality(k,0,5)

% I suggest you run this with different histories.
% Notice that if you remove statement (***) --
% Sam not thinking about math -- the number of models
% increases dramatically. This is because we do not
% view this story as a complete narrative. Many things can
% happen in 20 years, and a C student can become a good
% engineer.

================

Date: Mon, 18 Nov 2002
From: Vladimir Lifschitz
To: TAG

I am forwarding to you an interesting exchange between Esra and Michael on the methodology of formalizing commonsense reasoning, which continues our earlier discussion of Sam's calculus (see http://www.cs.utexas.edu/users/vl/papers/sams.ps and http://www.cs.utexas.edu/users/vl/tag/discussions.html).

===

E.
> I have started studying your formalization of Sam's calculus, and I have
> some questions.

M. Great! Looking forward to talking to you. Maybe we'll push it in some interesting direction.

MORE ON METHODOLOGY (Esra and Michael)

First exchange:

E.
> As far as I understand, you first consider pieces of knowledge and then
> relate them to a person's knowledge of subjects (whereas Jonathan and
> Vladimir consider a person's knowledge of subjects). In the first part,
> you do not look at the expected conclusion.
> I wonder whether you take the
> conclusion into account in the second part.

M. I think about the desired conclusion twice:

(a) At the very beginning, when I determine which parts of the story are relevant from the standpoint of a 'problem designer'. Like any communication with the user, this is a difficult process. When I heard about Sam's calculus for the first time, I thought about the following argument: If I didn't think about a math subject for 20 years, I forget it completely (even if I knew it to start with). Calculus (C) is a part of math (M); Sam didn't think about M, hence he didn't think about C, hence he does not know C. It does not make sense to ask him for help.

This is fairly easy to formalize - grades, high school, etc. are irrelevant. No nonmonotonicity either.

The formalization below is quite different. Its goal is to describe the evolution of knowledge; Sam's story is secondary. (To make it more interesting I am even assuming that good knowledge does not disappear.)

(b) I also think about the desired conclusion at the very end. It determines the type of query I'd like to ask and allows me to test my representation. For instance, if I remember correctly, the query was: does it make sense to ask Sam for help? Well, if I am really in trouble and there is no one else to ask, it does, provided there is at least one answer set in which Sam MAY know calculus. This is not really expressible in the language (which shows its weakness). I can of course just ask for a model satisfying

   :- who(K,sam), not quality(K,0,20).

If there are other candidates, it may make sense to ask Sam for help only if he definitely knows calculus. I did not even address these questions here. Instead I am simply defining the possible states of Sam's knowledge.

Second exchange:

Hi Michael,

Thanks for your response.
> When I heard about Sam's calculus for the first time,
> I thought about the following argument:
> If I didn't think about a math subject for 20 years, I forget it
> completely (even if I knew it to start with). Calculus (C) is a part
> of math (M); Sam didn't think about M, hence he didn't think about C,
> hence he does not know C. It does not make sense to ask him for help.
>
> This is fairly easy to formalize - grades, high school, etc. are
> irrelevant. No nonmonotonicity either.

E. It is not clear to me why we do not need nonmonotonicity in this case. Don't we need a rule expressing that, normally, someone who knows a subject does not forget it over a short period?

M. The desired conclusion was about the current moment - twenty years after high school. Hence I was not interested in short periods.

> (b) I also think about the desired conclusion at the very end.
> It determines the type of query I'd like to ask and allows me to test
> my representation. For instance, if I remember correctly, the query
> was: does it make sense to ask Sam for help?

E. The goal is to infer that Sam is not the person to ask about a calculus problem. Jonathan and Vladimir state their goal as inferring that Sam cannot solve a calculus problem when asked.

M. So they changed the goal, right? Why? It may have something to do with the difficulties I mentioned. Remember, my solution was written soon after our meeting in Lubbock, so I am referring to what I heard there.

> Well, if I am really in trouble and there is no one else to ask,
> it does, provided there is at least one answer set in which Sam
> MAY know calculus.

E. This argument makes sense for the original goal. I don't know how it relates to Jonathan and Vladimir's goal, though.

M. Sure, I am only talking about the original goal.

> This is not really expressible in the language
> (which shows its weakness).

E. How about the following?

% You wouldn't ask a person a question if he cannot solve it but
% someone else can.
-ask(S,P,T) :- person(P), subject(S), person(P1), P != P1, unit_time(T),
               not solve(P,S,T), solve(P1,S,T).

% Otherwise, you can ask a question about subject S to anyone:

ask(S,P,T) :- person(P), subject(S), unit_time(T), not -ask(S,P,T).

% Someone whose knowledge of subject S is bad cannot solve a problem
% in that subject.

-solve(P,S,T) :- person(P), subject(S), unit_time(T),
                 knowledge(K), who(K,P), what(K,S), quality(K,0,T).

% Otherwise, a person can solve a problem in subject S.

solve(P,S,T) :- person(P), subject(S), unit_time(T), not -solve(P,S,T).

M. YES, YOU CAN DO THAT. But I do not think it solves the problem I was referring to. Suppose you have two answer sets - one contains ask(sam,calc,20) and another one does not. Do we ask Sam? I guess the answer is yes, right? In other words, I'd like to use quantifiers over answer sets, which cannot be done, I think, without extending the language. Does it make sense? If not, I'll try to give a better explanation.

ON MARKOVIAN MODELS:

E. You write:

> % 2. Normally, average knowledge persists over short
> %    intervals of time but drops to 0 after that.
>
> % To express this in a Markovian model of the domain
> % we introduce an auxiliary fluent
> %    last_modified(K,T0,T)
> % which says that K was modified at T0
> % and neither upgraded nor refreshed
> % in the interval (T0,T].

What is a Markovian model?

M. A theory in an action language describes a collection of possible trajectories of the system. Such a collection can be described via a transition diagram, i.e., the successor states of the system can be made to depend only on the current state and an action. If that is the case, the system's description is Markovian; otherwise it is not. For instance, the causal law

   h(F,T+1) :- h(g,T), o(a,T-1).

is non-Markovian. Of course, a non-Markovian description can be made Markovian by introducing new fluents, but sometimes non-Markovian descriptions are shorter and more natural.

ON LAST_MODIFIED: IS IT INERTIAL?

E.
> It is not clear to me why last_modified is inertial. Isn't it possible
> to give an inductive definition?

M. I am not sure I understand the question. I guess here is my reasoning. Consider a fluent last_modified(k,T0) - k was last modified at moment T0. Suppose this fluent holds at moment T, i.e.,

   h(last_modified(k,T0),T)

(or last_modified(k,T0,T) in the program). What will be the value of this fluent at T+1? Well, it will still be true unless some action changes it. What definition would you prefer?

E. Yes, you are right. I understand it now.

A CORRECTION!!!

E.
> The rules defining quality confuse me a little bit. For instance, what
> does the following rule mean

quality(K,1,T+1) :- last_modified(K,T0,T), quality(K,1,T),
                    short(T0,T+1), not n_quality(K,1,T+1),
                    knowledge(K), unit_time(T0), unit_time(T).

> when the knowledge K is upgraded to degree 2 at T0?

M. I am confused too. What I meant (and what is in my running program) is below. Sorry for the mistake.

% 2a. Normally, average quality persists over a short period.

quality(K,1,T) :- last_modified(K,T0,T), quality(K,1,T0),
                  short(T0,T), not n_quality(K,1,T),
                  knowledge(K), unit_time(T0), unit_time(T).

It says that if at moment T the last modification of K happened between T0-1 and T0, and the resulting quality was 1, and the interval between T0 and T is short, then, normally, the quality stays 1. In other words, it is a standard representation of the above default.

% 2b. It drops to 0 after a long time:

quality(K,0,T) :- last_modified(K,T0,T), quality(K,1,T0),
                  long(T0,T), not n_quality(K,1,T),
                  knowledge(K), unit_time(T0), unit_time(T).

% ... and it may or may not drop to 0 in the gray area:

quality(K,0,T) | quality(K,1,T) :- last_modified(K,T0,T),
                  quality(K,1,T0), gray(T0,T),
                  knowledge(K), unit_time(T0), unit_time(T).

CONSTRAINTS VERSUS RULES:

> E.
> > Is there a reason why you define functional dependencies with rules
> > that introduce negations of predicates, like
> >
> >    n_quality(K,Q1,T) :- quality(K,Q2,T), neq(Q1,Q2),
> >                         knowledge(K), unit_time(T),
> >                         scale(Q1), scale(Q2).
> >
> > instead of rules that do not introduce negations of predicates,
> > like
> >
> >    :- quality(K,Q1,T), quality(K,Q2,T), neq(Q1,Q2),
> >       knowledge(K), unit_time(T), scale(Q1), scale(Q2).
>
> M.
> Yes, even though it is not really important here.
> Consider a situation in which 'p' is true, 'q' is false,
> and 'f' is unknown. We can write it as
>
>    p.
>    ~q.
>    f | ~f.
>
> or
>
>    p.
>    ~q.
>
> In the latter case, which is sometimes preferable, I really need
> to have the negation.
>
> Let's go to our case. I'd like to be able to ask
>
>    'Is quality(k,1,20) false?'
>
> i.e., does the negation of quality(k,1,20) belong to all the answer
> sets? I need my rule to get the answer 'yes'.

E. I understand.

KNOWLEDGE UPGRADE AND THINKING.

First exchange:

E. It is not clear to me why you define think in terms of upgrade and refresh:

> % Let think(P,S,T) mean that person P thinks about subject S at moment T.
> % In this case we'll need the rules:
>
> think(P,S,T) :- upgrade(K,D,T), who(K,P), what(K,S),
>                 knowledge(K), degree(D), unit_time(T).
>
> think(P,S,T) :- refresh(K,T), who(K,P), what(K,S),
>                 knowledge(K), unit_time(T).

M. I guess I viewed upgrading and refreshing as processes which include thinking.

E. To me think is exogenous, and upgrade and refresh depend on think:

{think(P,S,T)} :- person(P), subject(S), unit_time(T).

upgrade(K,D,T) :- think(P,S,T), who(K,P), what(K,S),
                  knowledge(K), degree(D), unit_time(T).

refresh(K,T) :- think(P,S,T), who(K,P), what(K,S),
                knowledge(K), unit_time(T).

What do you think?

M. This does not look intuitive to me. First of all, for me exogenous means an action not performed by the agent. Thinking is done by the same agent who upgrades and/or refreshes.
Your choice rule seems to say that thinking about the subject just happens or does not happen to a reasoner. Moreover, thinking about calculus is normally not enough for me to upgrade and refresh. Thinking + some reading + doing some exercises may be enough for the task, but not thinking alone. But maybe I am missing something in your formalization.

Second exchange:

E. I thought that when we define an action to be exogenous we allow it to occur without a reason that can be explained by the given theory, so an exogenous action's occurrence may have some other cause not explained by the given theory.

M. That is how you guys use it in C, and it does make sense. In my experience, though, people in planning use it the way I do (I learned the word from them). An exogenous action is one which originates outside of the agent.

E. To me, someone may think about a subject for a reason other than to upgrade his knowledge or to refresh his knowledge. But as a result of his thinking he may upgrade his knowledge or he may refresh his knowledge. Actually, my two rules defining upgrade and refresh do not express this, as you point out in your message. They should rather be like:

{upgrade(K,D,T)} :- think(P,S,T), who(K,P), what(K,S),
                    knowledge(K), degree(D), unit_time(T).

{refresh(K,T)} :- think(P,S,T), who(K,P), what(K,S),
                  knowledge(K), unit_time(T).

M. These rules are better, but I am still not convinced. Here are some arguments:

Upgrade and refresh are basic actions of the theory - one can certainly argue about their appropriateness, but you do not seem to be doing that. So suppose I have a domain description which contains upgrade(sam,calc,0). I claim that the connection between 'upgrade' and 'think' should allow me to conclude that Sam thought about calculus at time 0. Your representation does not seem to allow this conclusion.

On the other hand, you are right that thinking can of course occur independently of upgrading or refreshing knowledge. I can, for instance, say 'think(sam,calc,5)'.
In my formalization, as well as in yours, we still do not know whether Sam upgraded or refreshed his knowledge of calculus. And that is how it should be. In other words, the above rules are true even if you remove 'think' from them. Am I making sense?

WHAT IS 'AFTER SCHOOL'?

> E.
> When you say ``after high school'' in the following
>
> > % Finally, I'd like to start out not knowing anything about Sam
> > % after high school - he could have studied calculus
>
> do you refer to a specific unit_time?
>
> M.
> No. I am saying that he could have worked on it at any time between
> high school and now.

E. I am confused. I was expecting a specific time unit corresponding to ``after high school'', since we want to say that Sam hasn't thought about math after high school. For instance, from the following rule

   n_think(sam,math,T) :- unit_time(T).

I understand that Sam doesn't think at any time. Does time unit 0 correspond to ``after high school''?

M. Time 0 is the moment he received the results of his exams. I guess it is the 'first moment after high school'. But any T > 0 is also 'after high school'. So you are right - he didn't think about math at any time I modeled. Is that in any way counterintuitive? If so, I need to model his life in school by more than knowing his grade.

MORE WORK TO DO.

M.
> > % A really interesting question is how to propagate 'quality'
> > % through this hierarchy.

E.
> Getting the average of the qualities may be a way.

M.
> Maybe. We can probably propagate from the parts to the whole.
> But what about from the whole to the parts?
> And how do we organize the hierarchy of knowledge?
> Suppose I know that Mary has a really good knowledge of math.
> Does this mean that she has a good knowledge of calculus?
> Probably. Logic? I am not sure. There are some standard
> and less standard parts of the hierarchy.
> Interesting question though.

E. You are right. This is interesting and difficult ;)

===

Date: Tue, 19 Nov 2002
From: Esra Erdem
To: Michael Gelfond

> E.
> The goal is to infer that Sam is not the person to ask about a calculus
> problem. Jonathan and Vladimir state their goal as inferring that Sam
> cannot solve a calculus problem when asked.
>
> M.
> So they changed the goal, right? Why? It may have something to do with
> the difficulties I mentioned.
> Remember, my solution was written soon after our meeting in Lubbock,
> so I am referring to what I heard there.

As far as I remember, the goal was modified like this when we started to study Sam's calculus problem in Austin. I don't remember why; probably to make the problem simpler.

> M. YES, YOU CAN DO THAT. But I do not think it solves the problem
> I was referring to. Here it is: Suppose you have two answer sets -
> one contains ask(sam,calc,20) and another one does not.
> Do we ask Sam? In other words, I'd like to use quantifiers over answer
> sets. Does it make sense? If not, I'll try to give a better explanation.

I understand.

> M. That is how you use it in C, and it does make sense.
> In my experience, though, people in planning use it the way I do
> (I learned the word from them). An exogenous action is one which
> originates outside of the agent.

This is nice to learn.

> E.
> To me, someone may think about a subject for a reason
> other than to upgrade his knowledge or to refresh his knowledge. But as
> a result of his thinking he may upgrade his knowledge or he may refresh
> his knowledge. Actually, my two rules defining upgrade and refresh do
> not express this, as you point out in your message. They should rather
> be like:
>
>    {upgrade(K,D,T)} :- think(P,S,T), who(K,P), what(K,S),
>                        knowledge(K), degree(D), unit_time(T).
>
>    {refresh(K,T)} :- think(P,S,T), who(K,P), what(K,S),
>                      knowledge(K), unit_time(T).
>
> M. These rules are better, but I am still not convinced.
> Here are some arguments:
>
> Upgrade and refresh are basic actions of the theory - one can certainly
> argue about their appropriateness, but you do not seem to be doing that.

I like introducing upgrade and refresh as actions. I don't understand how upgrade and refresh are different from think. I guess I would like to know what you mean by ``basic actions''. I view upgrade, refresh and think all as actions.

> So suppose I have a domain description which contains
> upgrade(sam,calc,0). I claim that the connection between 'upgrade' and
> 'think' should allow me to conclude that Sam thought about calculus
> at time 0.
> Your representation does not seem to allow this conclusion.

I understand your point. What about the following formalization?

{think(P,S,T)} :- person(P), subject(S), unit_time(T).

{upgrade(K,D,T)} :- knowledge(K), degree(D), unit_time(T).

{refresh(K,T)} :- knowledge(K), unit_time(T).

:- person(P), subject(S), unit_time(T), knowledge(K),
   refresh(K,T), not think(P,S,T).

:- person(P), subject(S), unit_time(T), knowledge(K), degree(D),
   upgrade(K,D,T), not think(P,S,T).

It says that we have three actions: upgrading, refreshing and thinking. They may occur at any time, with the constraint that upgrading and refreshing cannot occur without thinking.

> On the other hand, you are right that thinking can of course occur
> independently of upgrading or refreshing knowledge. I can, for
> instance, say 'think(sam,calc,5)'.
> In my formalization, as well as in yours, we still do not know whether
> Sam upgraded or refreshed his knowledge of calculus.
> And that is how it should be.

We can add

   think(sam,calc,5).

to the problem description, and we could still get an answer set where Sam doesn't upgrade or refresh his knowledge. Is this what you meant above? If we don't add this information, we cannot get such an answer set. I think we should be able to get such an answer set. What do you think?

===

Date: Mon, 2 Dec 2002
From: Michael Gelfond
To: Esra Erdem

> E.
> To me, someone may think about a subject for a reason
> other than to upgrade his knowledge or to refresh his knowledge. But as
> a result of his thinking he may upgrade his knowledge or he may refresh
> his knowledge. Actually, my two rules defining upgrade and refresh do
> not express this, as you point out in your message. They should rather
> be like:
>
>    {upgrade(K,D,T)} :- think(P,S,T), who(K,P), what(K,S),
>                        knowledge(K), degree(D), unit_time(T).
>
>    {refresh(K,T)} :- think(P,S,T), who(K,P), what(K,S),
>                      knowledge(K), unit_time(T).
>
> M. These rules are better, but I am still not convinced.
> Here are some arguments:
>
> Upgrade and refresh are basic actions of the theory - one can certainly
> argue about their appropriateness, but you do not seem to be doing that.

E.
>> I like introducing upgrade and refresh as actions. I don't understand
>> how upgrade and refresh are different from think. I guess I would like
>> to know what you mean by ``basic actions''. I view upgrade, refresh
>> and think all as actions.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

M. O.k., let me try. My main goal in this exercise was to outline a (very simplified) theory of the evolution of knowledge. In this theory the basic objects are pieces of knowledge with certain attributes. The only fluent is quality(K,Q), and the only actions are 'upgrade' and 'refresh'. The corresponding description is given in MODULE 1 - knowledge evolution. Since Module 1 is the 'main' part of my program, I informally referred to its actions as 'basic'. 'Think' is not included in Module 1, hence it is not basic. Do you think it should be included?

E.
> So suppose I have a domain description which contains
> upgrade(sam,calc,0). I claim that the connection between 'upgrade' and
> 'think' should allow me to conclude that Sam thought about calculus
> at time 0.
> Your representation does not seem to allow this conclusion.

>> I understand your point.
What about the following formalization?

>> {think(P,S,T)} :- person(P), subject(S), unit_time(T).
>>
>> {upgrade(K,D,T)} :- knowledge(K), degree(D), unit_time(T).
>>
>> {refresh(K,T)} :- knowledge(K), unit_time(T).
>>
>> :- person(P), subject(S), unit_time(T), knowledge(K),
>>    refresh(K,T), not think(P,S,T).
>>
>> :- person(P), subject(S), unit_time(T), knowledge(K), degree(D),
>>    upgrade(K,D,T), not think(P,S,T).

M. This is a difficult question and will require a long answer. This formalization is certainly much better than the previous one, but I do not think it is fully satisfactory. It seems to contain unnecessary rules and does not correctly answer queries about thinking. To explain my view, let me go over my solution in more detail.

You ask: 'What about the following formalization?' and I have to counter with my own question - a formalization of what information, and for what purpose? My goal in formalizing Sam's story was to test my knowledge evolution theory (KET). The conclusion I reached was: 'KET is not bad but, to make it more useful, we should extend it by a knowledge hierarchy.' But let us forget about the hierarchy and simply replace math by calculus. Your rules will not be changed by that.

KET describes the corresponding transition diagram and, to test it, I need to provide KET with some inputs, e.g., an initial situation S0 + occurrences of actions. Example:

   quality(k,average,0).     (view average as a symbolic constant)
   refresh(k,2).

An important comment: I always use such an input together with the closed world assumption: no actions occur except those mentioned in the domain:

   -refresh(K,T) :- not refresh(K,T).

and similarly for 'upgrade'. Conceptually this CWA belongs to KET, but being lazy I omitted it, which made my program 'epistemologically inadequate'. (This is the kind of sloppiness Marc Denecker often complains about.)

Of course, if I run this input together with KET, I'd know the quality of 'k' at each step.
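To make the trajectories KET sanctions for such inputs concrete, here is a small Python simulation (an editor's sketch, not Michael's program; all function names are invented). It hard-codes the interval classification of MODULE 2, applies the corrected persistence rules from the thread, reports the gray-area nondeterminism as a single "gray" token instead of branching into separate answer sets, and assumes the CWA for actions: only the listed refreshes occur.

```python
def classify(t1, t2):
    """Qualitative interval length from MODULE 2: short if it spans
    fewer than 2 steps, long if more than 3, gray in between."""
    d = t2 - t1
    if d < 2:
        return "short"
    if d > 3:
        return "long"
    return "gray"

def trajectory(initial_quality, refreshes, horizon):
    """Quality of one piece of knowledge over time.  Average (1)
    knowledge persists over short intervals since the last
    modification, drops to 0 after long ones, and is reported as
    'gray' in between; qualities 0 and 2 are inertial (law 1)."""
    quality = {0: initial_quality}
    last_modified = 0
    for t in range(1, horizon + 1):
        q = quality[t - 1]
        if (t - 1) in refreshes:   # refresh(k, t-1): the clock resets
            last_modified = t      # and the quality carries over
            quality[t] = q
            continue
        if q in (0, 2):            # inertia of 'no' and 'good' knowledge
            quality[t] = q
            continue
        kind = classify(last_modified, t)
        quality[t] = {"short": 1, "long": 0, "gray": "gray"}[kind]
    return quality
```

With no refreshes, average knowledge survives one step, sits in the gray area at steps 2-3, and is gone by step 4; a refresh at time 1 restarts the clock, postponing the decay.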
NOW I can formulate the goal of my Auxiliary module as follows: use the information about Sam to produce a proper input for KET. Sam's story really starts here. As always in translation, I can decide which information is essential and which I can ignore. So I'll have a few versions:

1. If I formulate Sam's story simply as ``Sam didn't know calculus at 0 and never thought about it since'', then

      quality(k,none,0).

   will be enough. I'll get the correct answers. Of course, quality(k,none,0) can be obtained from other info in the story by connecting grades to quality and calculus to high school math (either via prerequisites, as in Jonathan's paper with Vladimir, or in the way I've done it, etc.). But this is more or less routine. NOTICE - I do not need to mention 'think'.

2. So when do I need to mention 'thinking'? I guess it will be useful if I want my program to answer queries about it, or if I want to prevent updates like 'refresh(k,4)', or when I want to be able to extract a proof of the fact that Sam does not know calculus which uses 'think', etc. So I connect 'think' and the basic actions of KET by

      think(P,S,T) :- upgrade(K,D,T), who(K,P), what(K,S).

   etc. Now if I ask 'think(sam,calc,t)' for some moment t, the answer will be 'maybe', since neither the query nor its negation is entailed by the program. Adding the extra knowledge about Sam

      -think(sam,calc,T) :- time(T).

   will force the program to answer the same query with 'no'. Note that the 'wrong' updates are also prohibited, etc.

Note that none of the rules

>> {upgrade(K,D,T)} :- knowledge(K), degree(D), unit_time(T).
>> {refresh(K,T)} :- knowledge(K), unit_time(T).
>> {think(P,S,T)} :- person(P), subject(S), unit_time(T).

is needed. So why did I have the first two rules in my formalization? I used them simply to test KET - not to formalize this particular story about Sam. If they are not present, no action occurs, due to the CWA of KET. If they are, then actions may occur at any point - the rules override the CWA.
But I still do not need the third rule. There is no CWA for 'think', and hence nothing to override. Adding

>> {think(P,S,T)} :- person(P), subject(S), unit_time(T).

or, better,

>> 1{-think(P,S,T), think(P,S,T)}1 :- person(P), subject(S), unit_time(T).

will force me to 'consider' 'think' but will not really change the entailment I am interested in. (Note: -think is needed to get negative answers to a query.)

I hope this helps you understand my reaction. Any comments?
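A closing editorial note: the "quantifiers over answer sets" Michael asks for repeatedly in this thread are what is now standardly called brave versus cautious reasoning. A tiny Python sketch of the distinction (answer sets modelled simply as sets of ground-atom strings; the data are the quality/3 atoms of the three answer sets reported earlier in the thread):

```python
def brave(answer_sets, atom):
    """Does the atom hold in at least one answer set?"""
    return any(atom in s for s in answer_sets)

def cautious(answer_sets, atom):
    """Does the atom hold in every answer set?"""
    return all(atom in s for s in answer_sets)

# Answer 1: Sam didn't study calculus.
a1 = {f"quality(k,0,{t})" for t in range(6)}
# Answer 2: Sam retained calculus in the gray area.
a2 = ({f"quality(k,1,{t})" for t in range(3)}
      | {f"quality(k,0,{t})" for t in range(3, 6)})
# Answer 3: Sam didn't retain calculus in the gray area.
a3 = ({f"quality(k,1,{t})" for t in range(2)}
      | {f"quality(k,0,{t})" for t in range(2, 6)})
models = [a1, a2, a3]
```

Here cautious(models, "quality(k,0,5)") holds (every model agrees Sam's calculus is gone by the last time point), while "quality(k,1,2)" is only a brave consequence, holding in Answer 2 alone; it is exactly this some-but-not-all status that a single entailment query cannot express.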