#|
Copyright (C) 1994 by Computational Logic, Inc. All Rights Reserved.
This script is hereby placed in the public domain, and therefore unlimited
editing and redistribution is permitted.
NO WARRANTY
Computational Logic, Inc. PROVIDES ABSOLUTELY NO WARRANTY. THE EVENT SCRIPT
IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, ANY IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND
PERFORMANCE OF THE SCRIPT IS WITH YOU. SHOULD THE SCRIPT PROVE DEFECTIVE, YOU
ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
IN NO EVENT WILL Computational Logic, Inc. BE LIABLE TO YOU FOR ANY DAMAGES,
ANY LOST PROFITS, LOST MONIES, OR OTHER SPECIAL, INCIDENTAL OR CONSEQUENTIAL
DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THIS SCRIPT (INCLUDING BUT
NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES
SUSTAINED BY THIRD PARTIES), EVEN IF YOU HAVE ADVISED US OF THE POSSIBILITY OF
SUCH DAMAGES, OR FOR ANY CLAIM BY ANY OTHER PARTY.
|#
; An NQTHM Formalization of a Small Machine
; by J Strother Moore II
; May 30, 1991
; This file serves as a good introduction to the Nqthm approach to
; language semantics. We have carried out this approach on much larger
; examples than the one presented here. It is, for example, the approach used at all
; levels of the CLI short stack (hardware description language, machine
; language, assembly language, high-level language). The semantics of those
; levels are so large and complicated that it is difficult to see the
; basic ideas. Those ideas are highlighted here simply by dealing with
; a trivial language.
; This is a list of events to be processed by NQTHM starting from the
; GROUND-ZERO state. In it I develop
; (a) an operational semantics for a simple programming language
; (b) a program that implements multiplication by repeated addition
; (c) a proof of the correctness of the multiplication program
; directly from the "operational" semantics
; (d) a program that does exponentiation and uses the multiplier
; (e) a proof of the correctness of the exponentiation program
; (f) the most general correctness theorem about the multiplier
; (g) the definition and correctness of the "McCarthy" functional
; semantics of the multiplier
; (h) a proof of the correctness of the multiplier by the inductive
; assertion method.
; (i) May 13, 1992: a proof of the general theorem that our
; standard form of a correctness result for a subroutine
; implies our standard form of a termination result. This
; part of the file is not part of the tutorial because the
; proof is pretty messy.
; The programming language is not particularly elegant. Its only
; redeeming features are that its semantics is easily written down and
; it lets me illustrate the points I'm trying to make. This is by no
; means a complete or exemplary "library" for dealing with programs in
; this language; I have in fact kept the facts to a bare minimum.
; We start by defining our "small machine."
(boot-strap nqthm)
; States are represented by the following shell objects:
(add-shell st nil stp
((pc (none-of) zero)
(stk (none-of) zero)
(mem (none-of) zero)
(haltedp (none-of) zero)
(defs (none-of) zero)))
; Utility Functions
(defn add1-pc (pc)
(cons (car pc) (add1 (cdr pc))))
(defn get (n lst)
(if (zerop n) (car lst) (get (sub1 n) (cdr lst))))
(defn put (n v lst)
(if (zerop n)
(cons v (cdr lst))
(cons (car lst) (put (sub1 n) v (cdr lst)))))
(defn fetch (pc defs)
(get (cdr pc)
(cdr (assoc (car pc) defs))))
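; The GET/PUT/FETCH conventions can be spot-checked with a small
; Python sketch (hypothetical names, not part of the event script; it
; assumes in-range numeric indices, whereas the Nqthm defns recurse
; down the cdr):

```python
def get(n, lst):
    # (get n lst): the nth element of lst, zero-indexed
    return lst[n]

def put(n, v, lst):
    # (put n v lst): a copy of lst with position n replaced by v
    return lst[:n] + [v] + lst[n + 1:]

def fetch(pc, defs):
    # pc is a (subroutine-name . offset) pair; defs maps
    # subroutine names to instruction lists
    name, offset = pc
    return defs[name][offset]

defs = {'times': (('movi', 2, 0), ('jumpz', 0, 5))}
print(get(1, [10, 20, 30]))       # -> 20
print(put(1, 99, [10, 20, 30]))   # -> [10, 99, 30]
print(fetch(('times', 1), defs))  # -> ('jumpz', 0, 5)
```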
; The Semantics of Individual Instructions
; Move Instructions
(defn move (addr1 addr2 s)
(st (add1-pc (pc s))
(stk s)
(put addr1 (get addr2 (mem s)) (mem s))
f
(defs s)))
(defn movi (addr val s)
(st (add1-pc (pc s))
(stk s)
(put addr val (mem s))
f
(defs s)))
; Arithmetic Instructions
(defn add (addr1 addr2 s)
(st (add1-pc (pc s))
(stk s)
(put addr1
(plus (get addr1 (mem s))
(get addr2 (mem s)))
(mem s))
f
(defs s)))
(defn subi (addr val s)
(st (add1-pc (pc s))
(stk s)
(put addr
(difference (get addr (mem s))
val)
(mem s))
f
(defs s)))
; Jump Instructions
(defn jumpz (addr pc s)
(st (if (zerop (get addr (mem s)))
(cons (car (pc s)) pc)
(add1-pc (pc s)))
(stk s)
(mem s)
f
(defs s)))
(defn jump (pc s)
(st (cons (car (pc s)) pc)
(stk s)
(mem s)
f
(defs s)))
; Subroutine Call and Return
(defn call (subr s)
(st (cons subr 0)
(cons (add1-pc (pc s)) (stk s))
(mem s)
f
(defs s)))
(defn ret (s)
(if (nlistp (stk s))
(st (pc s) (stk s) (mem s) t (defs s))
(st (car (stk s)) (cdr (stk s)) (mem s) f (defs s))))
; One can imagine adding new instructions.
; The Interpreter
(defn execute (ins s)
(if (equal (car ins) 'move) (move (cadr ins) (caddr ins) s)
(if (equal (car ins) 'movi) (movi (cadr ins) (caddr ins) s)
(if (equal (car ins) 'add) (add (cadr ins) (caddr ins) s)
(if (equal (car ins) 'subi) (subi (cadr ins) (caddr ins) s)
(if (equal (car ins) 'jumpz) (jumpz (cadr ins) (caddr ins) s)
(if (equal (car ins) 'jump) (jump (cadr ins) s)
(if (equal (car ins) 'call) (call (cadr ins) s)
(if (equal (car ins) 'ret) (ret s)
s)))))))))
(defn step (s)
(if (haltedp s)
s
(execute (fetch (pc s) (defs s)) s)))
(defn sm (s n)
(if (zerop n)
s
(sm (step s) (sub1 n))))
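; The interpreter can be mimicked by a short executable Python sketch
; (names and representation are hypothetical: a state is a
; (pc, stk, mem, haltedp, defs) tuple, pc is a (subroutine, offset)
; pair, and instructions are tuples):

```python
def step(s):
    # one tick: fetch the instruction at the pc and execute it,
    # unless the machine is halted
    pc, stk, mem, halted, defs = s
    if halted:
        return s
    ins = defs[pc[0]][pc[1]]
    op = ins[0]
    nxt = (pc[0], pc[1] + 1)          # add1-pc
    mem = list(mem)
    if op == 'movi':
        mem[ins[1]] = ins[2]
    elif op == 'move':
        mem[ins[1]] = mem[ins[2]]
    elif op == 'add':
        mem[ins[1]] += mem[ins[2]]
    elif op == 'subi':
        mem[ins[1]] = max(mem[ins[1]] - ins[2], 0)   # natural difference
    elif op == 'jumpz':
        target = (pc[0], ins[2]) if mem[ins[1]] == 0 else nxt
        return (target, stk, mem, False, defs)
    elif op == 'jump':
        return ((pc[0], ins[1]), stk, mem, False, defs)
    elif op == 'call':
        return ((ins[1], 0), [nxt] + stk, mem, False, defs)
    elif op == 'ret':
        if not stk:                    # RET on an empty stack halts
            return (pc, stk, mem, True, defs)
        return (stk[0], stk[1:], mem, False, defs)
    return (nxt, stk, mem, False, defs)

def sm(s, n):
    # run the machine n ticks
    for _ in range(n):
        s = step(s)
    return s

defs = {'main': (('movi', 0, 7), ('ret',))}
final = sm((('main', 0), [], [0, 0, 0], False, defs), 2)
print(final)   # movi writes r0, then RET on the empty stack halts
```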
; This concludes our formal definition of the interpreter.
; We next prove a small collection of lemmas that tightly control the
; expansion of the interpreter. The idea is that we don't want sm
; or step to expand unless we know what the current instruction is and
; have enough time on the clock to execute it. So we will prove
; certain rewrite rules that manipulate step and sm and then disable
; those functions so that only the rules are available.
(prove-lemma step-opener (rewrite)
(and (implies (haltedp s) (equal (step s) s))
(implies (listp (fetch (pc s) (defs s)))
(equal (step s)
(if (haltedp s)
s
(execute (fetch (pc s) (defs s)) s)))))
((disable execute)))
(disable step)
(prove-lemma sm-plus (rewrite)
(equal (sm s (plus i j))
(sm (sm s i) j)))
(prove-lemma sm-add1 (rewrite)
(equal (sm s (add1 i))
(sm (step s) i)))
(prove-lemma sm-0 (rewrite)
(equal (sm s 0) s))
(disable sm)
; Now we move to our first example program. We will define a program
; that multiplies two naturals by successive addition. We will then
; prove it correct.
; The program we have in mind is:
; (times (movi 2 0)
; (jumpz 0 5)
; (add 2 1)
; (subi 0 1)
; (jump 1)
; (ret))
; Observe that the program multiplies the contents of reg 0 by the
; contents of reg 1 and leaves the result in reg 2. At the end, reg 0
; is 0 and reg 1 is unchanged. If we start at a (call times) this
; program requires 2+4i+2 instructions, where i is the initial
; contents of reg 0. To keep the proof incredibly simple, we will
; prove the program correct only for the 5 register version of our
; machine! (Why 5? Why not 3? Because eventually we will use times
; in another program that uses 5 registers. In general we should
; prove it for an arbitrarily large memory -- and we will -- but that
; just complicates the statement without contributing to the example.)
; We start by defining the constant that is this program:
(defn times-program nil
; This program multiplies r0 times r1 by adding r1 to
; an accumulator (r2) r0 times. The accumulator is
; initialized to 0.
; instruction pc comment
'(times (movi 2 0) ; 0 Initialize accumulator r2
(jumpz 0 5); 1 Jump to end if r0 is 0
(add 2 1) ; 2 Add r1 to r2
(subi 0 1) ; 3 Decrement r0 by 1
(jump 1) ; 4 Jump to pc 1
(ret))) ; 5 Return
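; Running the program on a Python sketch of the machine confirms both
; the final memory and the 2+4i+2 instruction count claimed above
; (the step/sm functions are a self-contained, hypothetical rendering
; of the interpreter, repeated here so this sketch stands alone):

```python
def step(s):
    # one tick of the machine; state is (pc, stk, mem, haltedp, defs)
    pc, stk, mem, halted, defs = s
    if halted:
        return s
    ins = defs[pc[0]][pc[1]]
    op = ins[0]
    nxt = (pc[0], pc[1] + 1)
    mem = list(mem)
    if op == 'movi':
        mem[ins[1]] = ins[2]
    elif op == 'move':
        mem[ins[1]] = mem[ins[2]]
    elif op == 'add':
        mem[ins[1]] += mem[ins[2]]
    elif op == 'subi':
        mem[ins[1]] = max(mem[ins[1]] - ins[2], 0)
    elif op == 'jumpz':
        return ((pc[0], ins[2]) if mem[ins[1]] == 0 else nxt,
                stk, mem, False, defs)
    elif op == 'jump':
        return ((pc[0], ins[1]), stk, mem, False, defs)
    elif op == 'call':
        return ((ins[1], 0), [nxt] + stk, mem, False, defs)
    elif op == 'ret':
        if not stk:
            return (pc, stk, mem, True, defs)
        return (stk[0], stk[1:], mem, False, defs)
    return (nxt, stk, mem, False, defs)

def sm(s, n):
    for _ in range(n):
        s = step(s)
    return s

TIMES = (('movi', 2, 0), ('jumpz', 0, 5), ('add', 2, 1),
         ('subi', 0, 1), ('jump', 1), ('ret',))
defs = {'times': TIMES, 'main': (('call', 'times'), ('ret',))}
pc, stk, mem, halted, _ = sm((('main', 0), [], [3, 5, 99, 0, 0], False, defs),
                             2 + 4 * 3 + 2)
print(pc, mem)   # -> ('main', 1) [0, 5, 15, 0, 0]
```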
; and a function that multiplies the "same way."
(defn times-fn (i j ans)
(if (zerop i)
ans
(times-fn (sub1 i) j (plus ans j))))
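; times-fn transcribes directly into Python (hypothetical name), and
; the transcription makes the mathematical content of the next lemma
; easy to spot-check: the accumulator version computes i*j + ans.

```python
def times_fn(i, j, ans):
    # repeated addition with an accumulator, mirroring the defn above
    while i > 0:
        i, ans = i - 1, ans + j
    return ans

print(times_fn(5, 7, 0))   # -> 35
print(times_fn(3, 4, 10))  # -> 22
```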
; In some sense, the following mathematical fact completely captures
; the correctness of the program:
(prove-lemma times-fn-is-times (rewrite)
(implies (numberp ans)
(equal (times-fn i j ans)
(plus (times i j) ans))))
; at least if one also understands
(prove-lemma plus-right-id (rewrite)
(equal (plus x 0) (fix x)))
; The real problem is proving that the program has this semantics.
; First, how much time does the program need? It takes one tick to do
; the CALL, one for the MOVI at pc 0, then 4 ticks for each iteration
; of the loop at pc 1, and then 2 more ticks to get out of the loop
; and do the RET. So:
(defn times-clock (i)
(plus 2 (times i 4) 2))
; We could have written (plus 4 (times i 4)) but by using this
; algebraically odd expression we make sm-plus, above, immediately
; applicable.
; We next address ourselves to the loop from pc 1 through 4. Consider
; an arbitrary arrival at pc 1 and suppose you have (times i 4) ticks.
; The following theorem tells us what you get:
(prove-lemma times-correct-lemma (rewrite)
(implies (and (numberp i)
(equal (assoc 'times defs) (times-program)))
(equal (sm (st '(times . 1)
stk1
(list i j ans r3 r4)
f
defs)
(times i 4))
(st '(times . 1)
stk1
(list 0 j (times-fn i j ans) r3 r4)
f
defs))))
; It is then trivial to construct the entire correctness proof:
(prove-lemma times-correct (rewrite)
(implies (and (equal (fetch pc defs) '(call times))
(equal (assoc 'times defs) (times-program))
(numberp i))
(equal (sm (st pc stk (list i j r2 r3 r4) f defs)
(times-clock i))
(st (add1-pc pc)
stk
(list 0 j (times i j) r3 r4)
f
defs))))
; We disable the clock function so that subsequent programs can
; use it without its expansion messing up their algebraic
; form.
(disable times-clock)
; It is worth noting that this file has been rather carefully crafted
; to make the above proof go through with a minimum of fuss. In
; general, we will have to prove lots of lemmas about the utility
; functions GET and PUT to handle arbitrarily sized memories. And we
; have to prove lots of lemmas about arithmetic to explain the data
; handling in our programs. To some extent those arithmetic facts get
; in the way of our desired treatment of the clock in our proofs,
; e.g., if the theorem prover knows the usual facts about PLUS and
; TIMES then (PLUS 2 (TIMES I 4) 2) would become (PLUS 4 (TIMES 4 I))
; and we'd then have to take special care to force sm to open the way
; we want in this proof. One avenue that has been used to avoid this
; problem is to define the clock functions with special arithmetic
; primitives, e.g., CLK-PLUS and CLK-TIMES (which are in fact just the
; familiar functions) but which we then disable and isolate from the
; free-wheeling arithmetic simplifications.
; We now consider the role of subroutine call and return in this
; language. To illustrate it we'll implement exponentiation, which
; will CALL our TIMES program. The proof of the correctness of the
; exponentiation program will rely on the correctness of TIMES, not on
; re-analysis of the code for TIMES.
; The mathematical function we wish to implement is:
(defn exp (i j)
(if (zerop j)
1
(times (exp i (sub1 j)) i)))
; The program we have in mind is:
(defn exp-program nil
; This program computes (exp r0 r1) and leaves the result in r1.
; Because we use times (which requires repeatedly loading r0 and r1 to
; pass in its parameters) and because times smashes r2 with its
; result, we will use r3 and r4 as our "locals." We will use r1
; as our running answer, which starts at 1. After moving r0 and r1
; to r3 and r4 respectively and initializing our running answer to 1,
; we just multiply r3 by r1 (r4 times), moving the product back into
; r1 after each multiplication.
'(exp (move 3 0) ; 0 move r0 to r3
(move 4 1) ; 1 move r1 to r4
(movi 1 1) ; 2 initialize r1 as our running ans
(jumpz 4 9) ; 3 if r4 is 0, quit
(move 0 3) ; 4 move r3 into r0
(call times) ; 5 multiply r0 times r1, result in r2
(move 1 2) ; 6 move result back to r1
(subi 4 1) ; 7 decrement r4
(jump 3) ; 8 repeat
(ret))) ; 9 return
; A recursive description of the loop (pc 3 through 8) in this
; algorithm is:
(defn exp-fn (r0 r1 r2 r3 r4)
(if (zerop r4)
r1
(exp-fn 0 (times r3 r1) (times r3 r1) r3 (sub1 r4))))
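; A direct Python transcription (hypothetical name) lets us spot-check
; that this weird recursion computes (r3 ** r4) * r1, no matter what
; r0 and r2 are:

```python
def exp_fn(r0, r1, r2, r3, r4):
    # mirrors the defn above; r0 and r2 are dead on every path
    if r4 == 0:
        return r1
    return exp_fn(0, r3 * r1, r3 * r1, r3, r4 - 1)

print(exp_fn(9, 1, 9, 2, 5))   # -> 32
```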
; Pretty weird.
; We need a little more arithmetic than we have, namely
; associativity and right identity for times:
(prove-lemma associativity-of-times (rewrite)
(equal (times (times i j) k) (times i (times j k))))
(prove-lemma times-right-id (rewrite)
(equal (times i 1) (fix i)))
; So now the system can prove that the weird exp-fn is just exp (in a
; generalized sense that accommodates the initial value of r1).
(prove-lemma exp-fn-is-exp (rewrite)
(implies (numberp r1)
(equal (exp-fn r0 r1 r2 r3 r4)
(times (exp r3 r4) r1))))
; Here is the clock function for exp. Again we use an algebraically
; odd form simply to gain instant access to the desired sm-plus
; decomposition. The "4" gets us past the CALL and the first 3
; initialization instructions; the times expression takes us around
; the exp loop j times, and the final "2" gets us out through the RET.
; Note that as we go around the loop we make explicit reference to
; TIMES-CLOCK to explain the CALL of TIMES.
(defn exp-clock (i j)
(plus 4 (times j (plus 2 (times-clock i) 3)) 2))
; Now we prove the "loop invariant" for the EXP program. We simply
; tell the system to induct according to exp-fn. We could "trick" it
; into doing that by using exp-fn in place of the (times (exp r3 r4)
; r1) expressions, but that is devious and doesn't always work.
(prove-lemma exp-correct-lemma (rewrite)
(implies (and (numberp r3)
(numberp r4)
(equal (assoc 'exp defs) (exp-program))
(equal (assoc 'times defs) (times-program)))
(equal (sm (st '(exp . 3)
stk
(list r0 r1 r2 r3 r4)
f
defs)
(times r4 (plus 2 (times-clock r3) 3)))
(st '(exp . 3)
stk
(if (zerop r4)
(list r0 r1 r2 r3 r4)
(list 0
(times (exp r3 r4) r1)
(times (exp r3 r4) r1)
r3
0))
f
defs)))
((induct (exp-fn r0 r1 r2 r3 r4))))
; The theorem prover is now set up to prove that exp is correct
; without further assistance. (But you must not underestimate how
; clever this assistance has been to make this possible!)
(prove-lemma exp-correct (rewrite)
(implies (and (numberp i)
(numberp j)
(equal (fetch pc defs) '(call exp))
(equal (assoc 'exp defs) (exp-program))
(equal (assoc 'times defs) (times-program)))
(equal (sm (st pc stk (list i j r2 r3 r4) f defs)
(exp-clock i j))
(st (add1-pc pc)
stk
(if (zerop j)
(list i (exp i j) r2 i 0)
(list 0 (exp i j) (exp i j) i 0))
f
defs))))
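; As a final spot check of the stacked result, we can run EXP (which
; CALLs TIMES) for exp-clock ticks on a Python sketch of the machine
; and compare with the state the theorem above promises (the sketch is
; self-contained and all names are hypothetical):

```python
def step(s):
    # one tick of the machine; state is (pc, stk, mem, haltedp, defs)
    pc, stk, mem, halted, defs = s
    if halted:
        return s
    ins = defs[pc[0]][pc[1]]
    op = ins[0]
    nxt = (pc[0], pc[1] + 1)
    mem = list(mem)
    if op == 'movi':
        mem[ins[1]] = ins[2]
    elif op == 'move':
        mem[ins[1]] = mem[ins[2]]
    elif op == 'add':
        mem[ins[1]] += mem[ins[2]]
    elif op == 'subi':
        mem[ins[1]] = max(mem[ins[1]] - ins[2], 0)
    elif op == 'jumpz':
        return ((pc[0], ins[2]) if mem[ins[1]] == 0 else nxt,
                stk, mem, False, defs)
    elif op == 'jump':
        return ((pc[0], ins[1]), stk, mem, False, defs)
    elif op == 'call':
        return ((ins[1], 0), [nxt] + stk, mem, False, defs)
    elif op == 'ret':
        if not stk:
            return (pc, stk, mem, True, defs)
        return (stk[0], stk[1:], mem, False, defs)
    return (nxt, stk, mem, False, defs)

def sm(s, n):
    for _ in range(n):
        s = step(s)
    return s

TIMES = (('movi', 2, 0), ('jumpz', 0, 5), ('add', 2, 1),
         ('subi', 0, 1), ('jump', 1), ('ret',))
EXP = (('move', 3, 0), ('move', 4, 1), ('movi', 1, 1),
       ('jumpz', 4, 9), ('move', 0, 3), ('call', 'times'),
       ('move', 1, 2), ('subi', 4, 1), ('jump', 3), ('ret',))

def times_clock(i):
    return 2 + 4 * i + 2

def exp_clock(i, j):
    return 4 + j * (2 + times_clock(i) + 3) + 2

defs = {'times': TIMES, 'exp': EXP, 'main': (('call', 'exp'), ('ret',))}
pc, stk, mem, halted, _ = sm((('main', 0), [], [2, 5, 0, 0, 0], False, defs),
                             exp_clock(2, 5))
print(pc, mem)   # -> ('main', 1) [0, 32, 32, 2, 0]
```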
; Ok, enough of this. Presumably the point has been made: correctness
; proofs can be "stacked."
; Recall that we have been dealing with an unnecessarily restricted
; view of the machine, namely that it has only 5 memory locations.
; Before leaving this approach and pursuing some others, let us
; quickly prove the most general form of the correctness result for
; TIMES.
; We start with the basic normalization rules for get and put.
(defn length (lst)
(if (nlistp lst)
0
(add1 (length (cdr lst)))))
(prove-lemma put-put-0 (rewrite)
(implies (and (lessp addr (length mem))
(equal (get addr mem) val))
(equal (put addr val mem) mem)))
(prove-lemma put-put-1 (rewrite)
(equal (put addr v2 (put addr v1 mem))
(put addr v2 mem)))
(prove-lemma put-put-2 (rewrite)
(implies (and (numberp addr1)
(numberp addr2)
(not (equal addr1 addr2)))
(equal (put addr2 v2 (put addr1 v1 mem))
(put addr1 v1 (put addr2 v2 mem)))))
(prove-lemma get-put (rewrite)
(implies (and (numberp addr1) (numberp addr2))
(equal (get addr1 (put addr2 val mem))
(if (equal addr1 addr2)
val
(get addr1 mem)))))
(prove-lemma length-put (rewrite)
(implies (lessp addr (length mem))
(equal (length (put addr val mem)) (length mem))))
(disable get)
(disable put)
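; The five normalization rules above are just facts about array
; update; a brute-force Python check (hypothetical helpers, in-range
; indices only) over a small memory:

```python
def get(n, lst):
    return lst[n]

def put(n, v, lst):
    return lst[:n] + [v] + lst[n + 1:]

mem = [10, 20, 30]
for a1 in range(3):
    for a2 in range(3):
        # put-put-0: rewriting a cell to its current value is a no-op
        assert put(a1, get(a1, mem), mem) == mem
        # put-put-1: the later of two writes to the same cell wins
        assert put(a1, 5, put(a1, 4, mem)) == put(a1, 5, mem)
        # put-put-2: writes to distinct cells commute
        if a1 != a2:
            assert put(a2, 9, put(a1, 8, mem)) == put(a1, 8, put(a2, 9, mem))
        # get-put: a read sees the most recent write to its cell
        assert get(a1, put(a2, 7, mem)) == (7 if a1 == a2 else get(a1, mem))
        # length-put: in-range writes preserve the length
        assert len(put(a1, 7, mem)) == len(mem)
print('all five rules hold on this memory')
```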
; And a few basic arithmetic facts.
(prove-lemma difference-1 (rewrite) (equal (difference x 1) (sub1 x)))
(prove-lemma difference-elim (elim)
(implies (and (numberp i)
(not (lessp i j)))
(equal (plus j (difference i j)) i)))
(prove-lemma associativity-of-plus (rewrite)
(equal (plus (plus i j) k) (plus i (plus j k))))
(prove-lemma commutativity-of-plus (rewrite)
(equal (plus i j) (plus j i)))
(prove-lemma commutativity2-of-plus (rewrite)
(equal (plus i (plus k j)) (plus k (plus i j))))
; Ok, now we get specific to the TIMES program. The following function
; "is" the loop in the TIMES program vis-a-vis its effect on a completely
; arbitrary memory mem. If a program is run entirely for its effect on
; memory (as opposed to the subroutine stack or the haltedp flag), then
; this function "is" the McCarthy-esque functional analogue of the loop.
(defn times-mem-fn-loop (mem)
(if (zerop (get 0 mem))
mem
(times-mem-fn-loop
(put 0 (sub1 (get 0 mem))
(put 2 (plus (get 2 mem) (get 1 mem))
mem))))
((lessp (get 0 mem))))
(defn times-mem-fn (mem)
(times-mem-fn-loop (put 2 0 mem)))
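; In Python (hypothetical names), the functional analogue and the
; "two puts" characterization proved next can be compared directly:

```python
def times_mem_fn_loop(mem):
    # mirrors the defn: add r1 into r2, r0 times, counting r0 down
    mem = list(mem)
    while mem[0] != 0:
        mem[2] += mem[1]
        mem[0] -= 1
    return mem

def times_mem_fn(mem):
    # zero the accumulator first, as the MOVI at pc 0 does
    return times_mem_fn_loop(mem[:2] + [0] + mem[3:])

print(times_mem_fn([4, 6, 99, 7]))   # -> [0, 6, 24, 7]
```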
; In proving this functional analogue correct we essentially carry
; out McCarthy's functional semantics approach. The theorem below
; establishes that times-mem-fn-loop just does two puts into mem: it
; 0's r0 and it puts (r0*r1)+r2 into location r2:
(prove-lemma times-mem-fn-loop-is-times (rewrite)
(implies (and (numberp (get 0 mem))
(numberp (get 2 mem))
(lessp 2 (length mem)))
(equal (times-mem-fn-loop mem)
(put 0 0
(put 2 (plus (times (get 0 mem) (get 1 mem)) (get 2 mem))
mem)))))
(prove-lemma times-mem-fn-is-correct nil
(implies (and (numberp (get 0 mem))
(lessp 2 (length mem)))
(equal (times-mem-fn mem)
(put 0 0
(put 2 (times (get 0 mem) (get 1 mem))
mem)))))
; Our aim, in the revisited times-correct theorem, is to establish that
; executing a CALL of TIMES has the following effect on an almost arbitrary
; state s:
(defn times-step (s)
(st (add1-pc (pc s))
(stk s)
(put 0 0
(put 2 (times (get 0 (mem s))
(get 1 (mem s)))
(mem s)))
f
(defs s)))
; The proof proceeds, as we have seen twice before, first by an
; inductive analysis of the loop itself. Note that we induct
; according to times-mem-fn-loop.
(prove-lemma times-correct-lemma-revisited (rewrite)
(implies (and (numberp (get 0 mem))
(equal (assoc 'times defs) (times-program)))
(equal (sm (st '(times . 1)
stk1
mem
f
defs)
(times (get 0 mem) 4))
(st '(times . 1)
stk1
(times-mem-fn-loop mem)
f
defs)))
((induct (times-mem-fn-loop mem))))
; Unfortunately, the above lemma is not quite applicable in our use below
; because the mem that occurs in the state in the lhs of the conclusion is
; not going to be syntactically identical to the mem that occurs in the
; (times (get 0 mem) 4) in the clock. The reason is that the clock mem is
; the original mem while the state mem is the one produced by moving a 0
; into r2. Of course, they have the same r0 value. So, having proved
; the inductive fact we need, we now "generalize" it.
(prove-lemma times-correct-lemma-revisited-and-generalized (rewrite)
(implies (and (equal r0 (get 0 mem))
(numberp (get 0 mem))
(equal (assoc 'times defs) (times-program)))
(equal (sm (st '(times . 1)
stk1
mem
f
defs)
(times r0 4))
(st '(times . 1)
stk1
(times-mem-fn-loop mem)
f
defs))))
; And now we can prove the most general form of the correctness of our
; TIMES program. It tells us that if you are interested in (sm s n),
; where the pc points to a CALL of TIMES, the definition of 'TIMES is
; ours, memory is at least 3 long, r0 is numeric, the halt flag is
; off, and there are at least (times-clock r0) ticks on the clock,
; then you can just take a times-step and decrease the clock by
; (times-clock r0). What more could you want?
(prove-lemma times-correct-revisited nil
(implies
(and (equal (fetch (pc s) (defs s))
'(CALL TIMES))
(equal (assoc 'TIMES (defs s))
(times-program))
(lessp 2 (length (mem s)))
(equal r0 (get 0 (mem s)))
(numberp r0)
(not (lessp n (times-clock r0)))
(not (haltedp s)))
(equal (sm s n)
(sm (times-step s)
(difference n
(times-clock r0)))))
((disable commutativity-of-plus
commutativity2-of-plus)
(enable times-clock)))
; The Inductive Assertion Approach
; First, we simply prove the hand-generated verification
; conditions from an informal annotation of our TIMES
; program.
(prove-lemma verification-conditions-for-times nil
(and (implies (and (numberp i0)
(numberp i1))
(and (numberp 0)
(equal (times i0 i1)
(plus 0 (times i0 i1)))))
(implies (and (numberp r2)
(equal (times i0 i1)
(plus r2 (times r0 r1)))
(not (zerop r0)))
(and (numberp (plus r2 r1))
(equal (times i0 i1)
(plus (plus r2 r1)
(times (sub1 r0) r1)))))
(implies (and (numberp r2)
(equal (times i0 i1)
(plus r2 (times r0 r1)))
(zerop r0))
(equal r2 (times i0 i1)))))
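; The three verification conditions are pure arithmetic, so they can
; also be spot-checked by brute force over small naturals (a sanity
; check, not a proof):

```python
def vcs_hold(bound):
    # check the three VCs for all naturals below bound
    for i0 in range(bound):
        for i1 in range(bound):
            # VC 1: on entry, r2 = 0 establishes the invariant
            if i0 * i1 != 0 + i0 * i1:
                return False
            for r0 in range(bound):
                for r1 in range(bound):
                    for r2 in range(bound * bound):
                        if i0 * i1 != r2 + r0 * r1:
                            continue       # invariant hypothesis fails
                        if r0 != 0:
                            # VC 2: one trip around the loop preserves it
                            if i0 * i1 != (r2 + r1) + (r0 - 1) * r1:
                                return False
                        # VC 3: at exit (r0 = 0) it gives the answer
                        elif r2 != i0 * i1:
                            return False
    return True

print(vcs_hold(5))   # -> True
```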
; Now we develop the analogue of the inductive assertion
; method formally.
; Introduce p as an arbitrary invariant under stepping. The
; everywhere true predicate witnesses this constraint.
(constrain p-step (rewrite) (implies (p s) (p (step s)))
((p (lambda (s) t))))
; Observe that such a p is invariant under arbitrary length runs of the
; machine.
(prove-lemma p-invariant (rewrite)
(implies (p s0) (p (sm s0 n)))
((enable sm)))
; That's it. It is really deep, isn't it?
; Now we'll define a p that suits our specification for TIMES. We call
; it timesp.
(defn r0 (s) (get 0 (mem s)))
(defn r1 (s) (get 1 (mem s)))
(defn r2 (s) (get 2 (mem s)))
(defn timesp (i0 i1 s)
(and (numberp i0)
(numberp i1)
(stp s)
(nlistp (stk s))
(equal (assoc 'times (defs s)) (times-program))
(equal i1 (r1 s))
(if (equal (pc s) '(times . 0))
(equal i0 (r0 s))
(if (equal (pc s) '(times . 1))
(and (numberp (r2 s))
(equal (times i0 i1) (plus (r2 s) (times (r0 s) (r1 s)))))
(if (equal (pc s) '(times . 2))
(and (not (zerop (r0 s)))
(numberp (r2 s))
(equal (times i0 i1) (plus (r2 s) (times (r0 s) (r1 s)))))
(if (equal (pc s) '(times . 3))
(and (not (zerop (r0 s)))
(numberp (r2 s))
(equal (plus i1 (times i0 i1))
(plus (r2 s) (times (r0 s) (r1 s)))))
(if (equal (pc s) '(times . 4))
(and (numberp (r2 s))
(equal (times i0 i1) (plus (r2 s) (times (r0 s) (r1 s)))))
(if (equal (pc s) '(times . 5))
(equal (r2 s) (times i0 i1))
f))))))))
; Since timesp is preserved by step:
(prove-lemma timesp-step (rewrite)
(implies (timesp i0 i1 s)
(timesp i0 i1 (step s))))
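; A cheap sanity check in Python: transcribe timesp (hypothetical
; names) and confirm that it holds at every tick of actual runs of
; the TIMES program on a self-contained sketch of the machine. This
; spot check is only evidence for the invariance argument, not a
; substitute for it:

```python
def step(s):
    # one tick of the machine; state is (pc, stk, mem, haltedp, defs)
    pc, stk, mem, halted, defs = s
    if halted:
        return s
    ins = defs[pc[0]][pc[1]]
    op = ins[0]
    nxt = (pc[0], pc[1] + 1)
    mem = list(mem)
    if op == 'movi':
        mem[ins[1]] = ins[2]
    elif op == 'move':
        mem[ins[1]] = mem[ins[2]]
    elif op == 'add':
        mem[ins[1]] += mem[ins[2]]
    elif op == 'subi':
        mem[ins[1]] = max(mem[ins[1]] - ins[2], 0)
    elif op == 'jumpz':
        return ((pc[0], ins[2]) if mem[ins[1]] == 0 else nxt,
                stk, mem, False, defs)
    elif op == 'jump':
        return ((pc[0], ins[1]), stk, mem, False, defs)
    elif op == 'call':
        return ((ins[1], 0), [nxt] + stk, mem, False, defs)
    elif op == 'ret':
        if not stk:
            return (pc, stk, mem, True, defs)
        return (stk[0], stk[1:], mem, False, defs)
    return (nxt, stk, mem, False, defs)

TIMES = (('movi', 2, 0), ('jumpz', 0, 5), ('add', 2, 1),
         ('subi', 0, 1), ('jump', 1), ('ret',))
defs = {'times': TIMES}

def timesp(i0, i1, s):
    # transcription of the timesp invariant, one case per pc
    pc, stk, mem, halted, dfs = s
    if stk != [] or dfs['times'] != TIMES or pc[0] != 'times':
        return False
    r0, r1, r2 = mem[0], mem[1], mem[2]
    if r1 != i1:
        return False
    off = pc[1]
    if off == 0:
        return r0 == i0
    if off in (1, 4):
        return i0 * i1 == r2 + r0 * r1
    if off == 2:
        return r0 != 0 and i0 * i1 == r2 + r0 * r1
    if off == 3:
        return r0 != 0 and i1 + i0 * i1 == r2 + r0 * r1
    if off == 5:
        return r2 == i0 * i1
    return False

s = (('times', 0), [], [3, 4, 99, 0, 0], False, defs)
for _ in range(4 * 3 + 3):             # exactly enough ticks to halt
    assert timesp(3, 4, s)
    s = step(s)
print(s[0], s[2][2], s[3])   # -> ('times', 5) 12 True
```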
; we can immediately conclude by functional instantiation that
; it is preserved under arbitrary runs of the machine:
(functionally-instantiate timesp-invariant nil
(implies (timesp i0 i1 s0) (timesp i0 i1 (sm s0 n)))
p-invariant
((p (lambda (s) (timesp i0 i1 s))))
((disable timesp)))
; By additionally assuming that the initial and final pcs
; are at 0 and 5 respectively in TIMES, we derive the
; desired theorem.
(prove-lemma times-correct-revisited-again nil
(implies (and (stp s0)
(nlistp (stk s0))
(equal (assoc 'times (defs s0)) (times-program))
(equal i0 (get 0 (mem s0)))
(equal i1 (get 1 (mem s0)))
(numberp i0)
(numberp i1)
(equal (pc s0) '(times . 0))
(equal (pc (sm s0 n)) '(times . 5)))
(equal (get 2 (mem (sm s0 n))) (times i0 i1)))
((use (timesp-invariant))))
; The following events are not at all easy to follow and should not be
; considered part of the tutorial. They are included in this file to
; justify the sentence, in the second edition of the Handbook, that
; our standard form of correctness theorem for a subroutine implies
; the standard form of the termination theorem for that subroutine.
; In particular, we lead the system through the proof of the following
; theorem. Suppose s is a state poised to execute a CALL of some
; subroutine fn (and the halt flag of s is F). Suppose that some
; non-zero number of steps, n, later the stack is the same as it is in
; s. Intuitively, this means that the subroutine was called and
; eventually returned. Then if the subroutine is called as the
; top-level program the halt flag is eventually set. That is to say,
; let s' be obtained from s by setting the pc to (fn . 0), the first
; instruction in fn, and let the stack be nil, i.e., this is the
; top-level, main program. Then by running s' n steps we obtain a
; state with the halt flag set. That is the theorem
; standard-correctness-implies-termination, below.
; It is a fairly difficult theorem for two reasons. First, it
; considers running fn in two different states: as part of a
; continuing computation and as the top-level main program. We
; therefore have to develop lemmas that let us modify the state, e.g.,
; change the stack, without damaging some aspects of the computation.
; Second, the hypothesis that the stack eventually (at tick n) is the
; same as before the CALL means that a balanced RET was executed. But
; it does not mean the balancing RET was executed at tick n. For all
; we know, the CALL returned immediately and during the remaining
; ticks we possibly called other routines or even returned from the
; caller and eventually re-entered! But we can convert that
; hypothesis into one that says for some k