4. Variables and relations between their values

When introducing the basic notions we have said that different happenings could take place following the same pattern of behaviour. And as the happening is fully determined by the confrontation of the pattern of behaviour with the initial state, it follows that the same pattern of behaviour can only evoke different happenings on different occasions when at these different occasions the initial states differ from each other. In this section we shall show how so-called variables are used for the description of the initial (and final) states. We find the typical use of variables when the same pattern of behaviour is followed repeatedly, i.e. when sequencing is repetitively controlled.

We begin with a very simple program: given two positive integer values A and B, a program has to be made that will compute (i.e. can cause a computer to compute) the Greatest Common Divisor of A and B. Let us use the notation GCD(A, B) for that value.

(Remark. We have restricted ourselves to positive numbers in order to make life somewhat easier. Zero is divisible by any positive integer D (for 0 = D * 0), and there would be no objection, with B > 0, to

GCD(0, B) = B    .

But admitting zero as argument is asking for trouble, because GCD(0, 0) is clearly undefined! In order to avoid these complications, we restrict ourselves to positive arguments.)

For the sake of argument we request an algorithm in which no arithmetic operators other than addition and subtraction will be used. How do we find such an algorithm?

Well, there seem in this case to be two ways of attacking the problem. The first one is more or less a direct application of the definition. One could construct a table of divisors of A (including 1) and a table of divisors of B (also including 1); as both A and B are finite and different from zero, both tables contain only a finite number of numbers. From these two tables of divisors one can construct a third table of common divisors, i.e. containing all the numbers in both of them. This third table is non-empty (because it contains the number 1) and it is finite (because it cannot be longer than any of the original tables). From this non-empty, finite table we can select the greatest number and that will, by virtue of the way in which it has been found, be the Greatest Common Divisor.
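The table-of-divisors method just described can be transcribed almost literally. The following sketch is ours, not the text's: it is written in Python and it uses division (which the requested algorithm must avoid), so it serves only to illustrate the definitional approach.

```python
def gcd_by_tables(A, B):
    # Direct application of the definition: build both tables of
    # divisors, form the table of common divisors, take the greatest.
    assert A > 0 and B > 0
    divisors_A = [d for d in range(1, A + 1) if A % d == 0]
    divisors_B = [d for d in range(1, B + 1) if B % d == 0]
    common = [d for d in divisors_A if d in divisors_B]  # non-empty: contains 1
    return max(common)
```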

We could do it along the lines just sketched (or at least use it as a source of inspiration). In the current example, however, there is a second line of attack because the GCD is a well-known mathematical function, "well-known" meaning that a number of its properties are known. If we can think of so many of its properties that they define the GCD —i.e. that the GCD is the only function satisfying them— we might try to determine GCD(A, B) by exploiting these properties. What properties can we think of?

1)     GCD(a, b) = GCD(b, a)
2)     GCD(a, b) = GCD(a + b, b) = GCD(a, a + b)
3.1)  if a > b:    GCD(a, b) = GCD(a - b, b) = GCD(a, a - b)
3.2)  if a = b:    GCD(a, b) = a = b
3.3)  if a < b:    GCD(a, b) = GCD(a, b - a) = GCD(b - a, b)

(We can think of other properties, such as

4)     for n ≥ 0:    GCD(a^n, b^n) = GCD(a, b)^n
5)     for c > 0:    GCD(c * a, c * b) = c * GCD(a, b)    ,

but they look less promising as they involve multiplication and we have to restrict ourselves to the arithmetic operations of addition and subtraction.)

The first property states that the GCD is a symmetric function. The second one states that the GCD of two numbers is equal to the GCD of one of them and the sum, while the third property states this for the difference. Because we want to restrict ourselves to positive numbers, we have dealt with the cases a < b and a > b separately. The case a = b, however, is very special: it is the only case in which the value of the GCD is given directly.

Relations 2, 3.1 and 3.3 tell us that the GCD of a pair of numbers is equal to the GCD of another pair of numbers. This suggests that we use the "current state" to fix such a number pair; the algorithm can then try to change these numbers in such a way that

firstly:  the GCD of the number pair remains constant,
secondly: eventually the two numbers are equal, so that rule 3.2 can be applied.

With the second requirement in mind, rule 2 does not look too promising: given two positive numbers, one of them never can be equal to their sum. On the other hand, given two (different!) positive numbers, one of them, viz. the smallest, can be equal to their difference. This suggests that from 3.1 and 3.3 we use:

3.1'     if a > b:     GCD(a, b) = GCD(a - b, b)

3.3'     if a < b:     GCD(a, b) = GCD(a, b - a)    .
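Rules 3.1', 3.2 and 3.3' together determine the GCD recursively. As an illustration (the recursive formulation is ours, not the text's) they can be written down directly:

```python
def gcd(a, b):
    assert a > 0 and b > 0
    if a > b:              # rule 3.1'
        return gcd(a - b, b)
    if a < b:              # rule 3.3'
        return gcd(a, b - a)
    return a               # rule 3.2: a = b
```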

Now the moment has come to consider the first version of the program:

program1:
begin integer a, b, gcd;
a := A; b := B;
while a ≠ b do
if a > b then a := a - b
else b := b - a;
gcd := a;
print(A); print(B); print(gcd)
end

(In this program we have used the well-known alternative connective "if...then...else". The construction

"if inspection then action1 else action2"

causes one of two actions, either action1 or action2 to take place. If the inspection delivers the value true, action1 will take place (and action2 will be skipped); if the inspection delivers the value false, (action1 will be skipped and) action2 will take place. We can describe the conditional connective in terms of it:

"if inspection do action"

is equivalent to

"if inspection then action else nothing" )     .

When we try to understand this program we should bear the following in mind:

While the typical use of variables manifests itself with the program loop, the way to understand such a program implies looking for the relations between the values of the variables which remain invariant during the repetition.

In this example the invariant relation P is

P :   a > 0 and b > 0 and GCD(a, b) = GCD(A, B )     .

The relation P holds after initialization (for then a = A and b = B ; from A > 0 and B > 0, relation P then follows).

The repeatable statement will only be executed under the additional condition a ≠ b, i.e. either a < b or a > b. If a > b, then the new value of a, viz. a - b, will again be positive and GCD(a, b) will remain unchanged on account of 3.1'; if a < b, the new value of b will again be positive and GCD(a, b) will remain unchanged on account of 3.3'. The invariance of relation P is therefore established.

When the loop terminates, a = b is guaranteed to hold; then GCD(A, B) = GCD(a, b) = GCD(a, a) and on account of 3.2 the assignment "gcd := a" will establish the net effect "gcd := GCD(A, B)".

To complete the proof we must demonstrate that the repetition will indeed terminate. Whenever the repeatable statement is executed, the largest of the two (different!) values is decreased by the value of the other which is positive; as a result

max(a, b) after the execution < max(a, b) before the execution.

We also know that before the repetition max(a, b) = max(A, B) is finite; from the invariance of the relation P (a > 0 and b > 0) we conclude that

max(a, b) > 0

will continue to hold. All values being integer, the maximum number of times the repeatable statement can be executed must be less than max(A, B) and therefore the repetition must terminate after a finite number of repetitions. And this completes the proof.
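The proof can also be checked mechanically: the following sketch (ours) instruments program1 with the invariant P and the decreasing measure max(a, b), using math.gcd merely as an oracle for checking, not as part of the algorithm.

```python
from math import gcd as oracle  # used only to check the proof, not by the algorithm

def program1_checked(A, B):
    assert A > 0 and B > 0
    a, b = A, B
    while a != b:
        before = max(a, b)
        if a > b:
            a = a - b
        else:
            b = b - a
        # invariant P: a > 0 and b > 0 and GCD(a, b) = GCD(A, B)
        assert a > 0 and b > 0 and oracle(a, b) == oracle(A, B)
        # termination: max(a, b) strictly decreases at every step
        assert max(a, b) < before
    return a
```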

Once we have this program, it is not difficult to think of others. Reading program1 we observe that each subtraction is preceded in time by two tests, first a ≠ b and then a > b; this seems somewhat wasteful as the truth of a > b already implies the truth of a ≠ b. What happens in time is that a number of times a will be decreased by b, then b will be decreased by a, and so on. A program in which (in general) the number of tests will be smaller is

program 2:
begin integer a, b, gcd;
a := A; b := B;
while a ≠ b do
begin while a > b do a := a - b;
while b > a do b := b - a
end;
gcd := a;
print(A); print(B); print(gcd)
end     .
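A Python transcription of program 2 (again ours, with the print statements replaced by a returned value) for comparison:

```python
def program2(A, B):
    assert A > 0 and B > 0
    a, b = A, B
    while a != b:
        # Decrease a as long as it exceeds b, then b as long as it
        # exceeds a; in general this needs fewer tests than program1.
        while a > b:
            a = a - b
        while b > a:
            b = b - a
    return a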

Exercise. Prove the correctness of program 2.

Exercise. Rewrite program 2 in such a way that the outer repeatable statement contains only one loop instead of two. Prove its correctness.

Before going on, it is desirable to give a more formal description of the kind of theorems that we use. (In the following I shall make use of a formalism introduced by C.A.R. Hoare.)

Let P, P1, P2, … stand for predicates stating a relation between values of variables. Let S, S1, S2, … stand for pieces of program text, in general affecting values of variables, i.e. changing the current state. Let B, B1, B2, … stand for either predicates stating a relation between values of variables or for pieces of program text evaluating such a predicate, i.e. delivering one of the values true or false without further affecting values of variables, i.e. without changing the current state.

Then                    P1 { S } P2

means: "The truth of P1 immediately prior to the execution of S implies the truth of P2 immediately after that execution of S ". In terms of this formalism we write down the following theorems. (Some readers would prefer to call some of them rather "axioms" or "postulates", but at present I don't particularly care about the difference.)

Theorem 1:

Given:      P1 { S1 } P2
            P2 { S2 } P3
Conclusion: P1 { S1; S2 } P3

(This theorem gives the semantic consequences of the semicolon as connective.)

Theorem 2:

Given:      B { S } non B
Conclusion: true { if B do S } non B

(Here "true" is the condition which is satisfied by definition, i.e. the conclusion that after the execution of the conditional statement "non B " will hold, is independent of any assumptions about the initial state.)

Theorem 3:

Given:      (P and B) { S } P
Conclusion: P { if B do S } P

Theorem 4:

Given:      (P1 and B) { S1 } P2
            (P1 and non B) { S2 } P2
Conclusion: P1 { if B then S1 else S2 } P2

Theorem 5:

Given:      (P and B) { S } P
Conclusion: P { while B do S } (P and non B)

Remark: This theorem only applies to the case that the repetition terminates, otherwise it is void.

Theorem 5 is one of the most useful theorems when dealing with loops. The appropriate reasoning mechanism to deal with loops is mathematical induction, but often the use of Theorem 5 (which itself can only be proved by mathematical induction) avoids a direct use of mathematical induction.

We used Theorem 5 when proving the correctness of program1. Here we had

P:     a > 0 and b > 0 and GCD(a, b) = GCD(A, B)               and

B:     a ≠ b     .

We draw attention to the fact that we could not show "P { S } P" but only "(P and B) { S } P": for a and b to remain positive it was necessary to know that initially they were different. (How is this with program2?) We also draw attention to the fact that after termination, when we wished to show that a = GCD(A, B), we did not only use the mild conclusion "P" but the strong conclusion "P and non B": we need the knowledge that a = b in order to justify the application of 3.2.

With respect to termination one often uses a somewhat stronger theorem, the formulation of which falls outside the strict Hoare formalism:

Theorem 6:

Given:      (P and B) { S } P
Conclusion: in P { while B do S } the relation (P and B) will hold immediately after each execution of the repeatable statement that is not its last execution.

This theorem often plays a role when deriving a contradiction from the assumption that the loop will not terminate.

*               *
*

There is an alternative form of repetition control which might be represented by

"repeat S until B "

(other authors write "do S until B "); it is semantically equivalent to

"S ; while non B do S "     .

(Instead of describing its semantics in terms of the other repetitive connective, we could also have given a recursive definition in terms of itself, viz.

"S ; if non B do repeat S until B "   .)

The differences with the while-clause are:

1)    the termination condition has been inverted

2)    the repeatable statement is executed at least once.
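Many current languages lack a repeat-clause; its semantics "S; while non B do S" can be expressed directly, as in the following Python sketch (the helper name repeat_until and the counting example are ours):

```python
def repeat_until(S, B, state):
    # Execute S at least once, then keep executing as long as B(state)
    # is false -- i.e. exactly "S; while non B do S".
    state = S(state)
    while not B(state):
        state = S(state)
    return state

# Example: counting down to zero; S decrements, B tests for zero.
# Note the two differences from the while-clause: the termination
# condition is inverted, and S is executed at least once.
final = repeat_until(lambda n: n - 1, lambda n: n == 0, 3)
```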

Sometimes the repeat-clause comes in really handy and the text sometimes gains in clarity when it is used. It should be used with great caution, a caution which is shown in the pertinent

Theorem 7:

Given:      P1 { S } P2
            (P2 and non B) { S } P2
Conclusion: P1 { repeat S until B } (P2 and B)

Remark: This theorem only applies to the case that the repetition terminates, otherwise it is void.

The greater complexity of the assumptions about B and S which have to be verified reflects the additional care required in the use of the repeat-clause.

Exercise. We now give three tentative alternative programs (3, 4, and 5) for the computation of GCD(A, B). Discover which of them are correct and which are faulty by trying to prove their correctness. If a program is incorrect, construct an A-B pair for which it fails.

program3:
begin integer a, b, gcd;
a := A; b := B ;
repeat if a > b then a := a - b
else b := b - a
until a = b;
gcd := a;
print(A ); print(B ); print(gcd )
end

program4:
begin integer a, b, gcd;
a := A; b := B ;
repeat while a > b do a := a - b;
while b > a do b := b - a
until a = b;
gcd := a;
print(A ); print(B ); print(gcd )
end

program5:
begin integer a, b, gcd;
a := A; b := B ;
while a ≠ b do
begin if a < b do begin x := a; a := b; b := x end;
repeat a := a - b until a < b
end;
gcd := a;
print(A ); print(B ); print(gcd )
end

(Note. If any of the above programs is correct, its occurrence in this exercise is not to be interpreted as a recommendation of its style!)
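For this exercise a machine can help hunt for a failing A-B pair. The following harness is a sketch of ours (using math.gcd as a trusted oracle); it assumes the candidate terminates on the tested pairs, so a candidate that may loop forever must be guarded against separately.

```python
from math import gcd as reference  # trusted oracle, not part of any candidate

def exhaustive_check(candidate, bound=20):
    # Try all pairs 1 <= A, B <= bound; return the first pair on which
    # the candidate disagrees with the reference, or None if no such
    # pair is found.
    for A in range(1, bound + 1):
        for B in range(1, bound + 1):
            if candidate(A, B) != reference(A, B):
                return (A, B)
    return None
```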

Exercise. Prove that the following program will print in addition to the greatest common divisor of A and B the smallest common multiple of A and B (being defined as their product divided by the greatest common divisor).

begin integer a, b, c, d, gcd, scm;
a := A; b := B ; c:= B ; d := 0;
while a ≠ b do
begin while a > b do begin a := a - b; d := d + c end;
while b > a do begin b := b - a; c := c + d end
end;
gcd := a; scm := c + d ;
print(A ); print(B ); print(gcd ); print(scm)
end

Hint. Follow the expression a * c + b * d  . (Courtesy Software Sciences Holdings Limited London)
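Following the hint, the program can be transcribed with the relation a * c + b * d = A * B checked after every subtraction (the transcription and the embedded checks are ours):

```python
def gcd_scm(A, B):
    assert A > 0 and B > 0
    a, b, c, d = A, B, B, 0          # initially a*c + b*d = A*B
    while a != b:
        while a > b:
            a, d = a - b, d + c
            assert a * c + b * d == A * B  # the hinted expression is invariant
        while b > a:
            b, c = b - a, c + d
            assert a * c + b * d == A * B
    # On exit a = b = gcd, so gcd * (c + d) = A * B, i.e. c + d is the scm.
    return a, c + d
```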

*               *
*

For a meaningful loop controlled by a while-clause we can make the following further observations. Let us consider the loop "while B do S".

Our first observation is that the execution of the repeatable statement must change the state. Suppose that it does not. If then the repeatable statement is executed once, the predicate B (that was true prior to its execution) will still be true after its execution, so that the statement S will be executed yet another time; then the argument can be repeated and the net result is that the repetition will not terminate. The only termination occurs when B is false to begin with: then the statement S will be executed zero times. This we don't call a "meaningful loop".

So, execution of statement S changes the state: let us call "s" that part of the total state that may be changed by the execution of statement S. Here "s" can be regarded as the aggregate of variables whose values can be affected by the execution of statement S. Then our second observation is that (one or more variables of) s must be involved in the predicate B. If not, a single execution of the repeatable statement would imply infinitely many executions (by the same argument as used in our previous observation). Treating "s" as a generalized variable, we can express this explicitly by rewriting the loop, using two functions of the generalized argument s:

"while B(s) do s := S(s)"     .

This is the basic form of every repetition which is controlled by a while-clause. Reading it, it is obvious that its behaviour is undefined when the initial value of s is completely undefined. In terms of programming this leads to our third observation:

Every repetition controlled by a while-clause requires a proper initialization of the variables involved.

(Although obvious, this rule is mentioned explicitly, because I have seen many programmers forgetting to initialize properly. The rule is so rigid that such an omission is not a mistake but a blunder.)
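The canonical form "while B(s) do s := S(s)", with its obligatory initialization, can be sketched generically; the formulation as a higher-order function is ours.

```python
def while_loop(B, S, s0):
    # Canonical while-controlled repetition; s0 is the obligatory
    # proper initialization of the state s.
    s = s0
    while B(s):
        s = S(s)
    return s

# The subtraction GCD as an instance: the state s is the pair (a, b).
g = while_loop(lambda s: s[0] != s[1],
               lambda s: (s[0] - s[1], s[1]) if s[0] > s[1] else (s[0], s[1] - s[0]),
               (111, 259))
```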

Let the initial state be denoted by s0 and let the state immediately after the i-th execution of the repeatable statement be denoted by si; let the loop terminate after the k-th execution of the repeatable statement. Then the following theorems hold.

Theorem 8:
Given s0 , the remaining successive values of s are given by the recurrence relation

si = S(si-1)              for 0 < i ≤ k     .

Theorem 9:
In the sequence s0, s1, …, sk no two values (with different subscripts) are equal.

Theorem 10:

B (si ) for all i satisfying 0 ≤ i < k          and

non B (sk)     .

Theorem 8 is fairly obvious. It is mentioned, however, because it shows that a repetition is an adequate tool for the evaluation of a recurrence relation. Theorem 9 is mentioned because it is so obviously correct (although I see at present no use for it). Theorem 10 —also called the "Linear Search Theorem"— which is also fairly obvious, is a very important one, comparable in importance to Theorem 5. It not only asserts that after termination the state sk will be such that non B is valid, it also asserts that in all previous states si with i < k (if any) B did hold. This theorem is used in many searching algorithms looking for the smallest number for which a predicate B is false: by inspecting the numbers in order of increasing magnitude, the smallest will be the first one found. Theorem 10 tells us that in that case a loop controlled by a while-clause is a proper vehicle. (It is not restricted to looking for a smallest number; it can be applied to any (somehow) ordered set of values of some sort.)
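As an illustration of the Linear Search Theorem (the example is ours): to find the smallest value for which a predicate B is false, we inspect candidates in increasing order with a while-clause.

```python
def smallest_with_B_false(B, start):
    # Linear search: the loop terminates at the smallest i >= start for
    # which B(i) is false; by Theorem 10, B held for every value
    # inspected before it.
    i = start
    while B(i):
        i = i + 1
    return i

# Example: the smallest divisor (> 1) of 91.
d = smallest_with_B_false(lambda i: 91 % i != 0, start=2)
```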

Remark. We have called the last three theorems "obvious". This is not meant to imply that giving a reasonably formal proof for them is trivial. I did it, but the resulting proofs were surprisingly cumbersome and boring.

Finally, in Theorems 2, 3, and 5 we always have assumptions including the truth of B prior to the execution of S. Clearly we are not interested in the net effect of the execution of S after an initial state in which B is false. As a matter of fact, with such initial states the net effect of S will often be undefined. (A similar remark can be made regarding Theorem 4.) In other words: our statements S, regarded as operators, are often not defined on the domain of all possible states; they are "partial operators", and the predicates occurring in the clauses ensure that they will not be evoked inappropriately. This observation should temper our hope of increasing computation speed by introducing more parallelism.