
EWD316: A Short Introduction to the Art of Programming


prof.dr.Edsger W.Dijkstra

August 1971


0. Contents

1. Preface

2. Some Fundamental Notions

3. Programming Languages and their Implementation

4. Variables and relations between their values

5. Programs corresponding to recurrence relations

6. A first example of step-wise program composition

7. The shortest spanning subtree of a graph

8. The towers of Hanoi

9. The problem of eight queens

10. A rearranging routine


1. Preface

     The market is already so heavily overloaded with introductory texts on computers and computer programming that one must have rather specific reasons to justify the investment of one's time and energy in the writing of yet another "Short Introduction to the Art of Programming". The sole fact that one likes to write is, in itself, an insufficient justification. Before undertaking such a task I should therefore ask myself "Why am I going to do it?" and also "What do I expect to be the distinguishing features of this little monograph?".

     There is a simple, practical reason. At my University I give, mainly for future Mathematical Engineers, an introduction to the art of programming and my students would welcome some supporting material. Besides that, without some form of lecture notes, my colleagues have very little idea of what I am really trying to teach! The contents of this course show signs of settling down —I have now given it three times— and therefore this seems the appropriate moment to produce a document that can serve as lecture notes.

     These are purely local circumstances and as far as they are concerned, a normal set of lecture notes —in Dutch, say— would do. The fact that I have not chosen this form means that I am aiming at a larger audience. Such an act is always somewhat presumptuous and the usual author's trick to save the image of his modesty is to say that from various sides he has been urged to produce his manuscript —a trick that I could apply in this case without lying. But I don't think that I shall resort to that trick because I really believe that a larger audience than just my students can benefit from it, or even enjoy it.

     The fact is that over the last years I have addressed myself to the question of whether it was conceivable to increase our programming ability by an order of magnitude and what techniques (mental, organizational or mechanical) should then be applied in the process of program composition. Personally, I felt these investigations to be very rewarding: I gained a much deeper understanding of the nature of the difficulty of the programming task, I became much more conscious about my "programming style", which improved considerably, and found myself, when programming, in much better control of what I was doing than I had ever been before. Needless to say, my teaching was heavily influenced by these experiences.

     The purpose of this little monograph is to assist the programming reader in cleaning up his own thinking, to transmit to him some mental disciplines by sticking to which he can avoid making his job unnecessarily difficult. It is born out of dissatisfaction with the usual kind of programming course, which now strikes me as being like the type of driving lessons in which one is taught how to handle a car instead of how to use a car to reach one's destination. This monograph is intended as a complement to such courses; I shall try to present programming —to quote Niklaus Wirth— "as a discipline on its own merits, as a methodology of constructive reasoning applicable to any problem capable of algorithmic solution".

     I expect the distinguishing feature of this little monograph to be its incompleteness, incompleteness in many, many respects.

     It will not be self-contained in the sense that I assume my readers somewhat familiar with a decent higher level programming language. (This assumption is a direct consequence of the local circumstance that my students have had a modest prior exposure to the cleaner aspects of ALGOL 60.)

     For those readers who identify the programmer's competence with a thorough knowledge of the idiosyncrasies of one or more of the baroque tools into which modern programming languages and systems have degenerated, the book will also be very incomplete, because I won't describe any programming language —not even the one I use— to any degree of detail. I shall use some sort of programming language, as "a communication language" say, not for the communication of algorithms, but for the communication of ways of thinking, as a vehicle for programming style.

     In yet another respect, this little monograph will be very incomplete. As said above, I shall try to present programming "as a discipline on its own merits, as a methodology of constructive reasoning applicable to any problem capable of algorithmic solution". At present, such a methodology does not yet exist in the full sense of the word; only elements of it have become apparent, others are just lurking behind our mental horizon. This, of course, is not very satisfactory, but it is a true reflection of the current, still rather poor state of the art. It is a consolation that no piece of scholarship ever reaches the state of perfection and I tell myself that the conviction that there is more to come is no justification for withholding what we have got.

     It will also be incomplete as a result of the choice of the examples and the choice of the considerations. By necessity, the examples will be "small" programs, while the need for a discipline becomes really vital in the case of "large" programs. Dealing with small examples in an ad-hoc fashion gives the student not the slightest clue as to how to keep the construction of a large program under his intellectual control. Illustrating how we can avoid unmastered complexity, I hope to deal with small examples in such a fashion that methodological extrapolation to larger tasks is feasible. The selection of considerations is also kept to a strict minimum: we restrict ourselves to programming for purely sequential machines and when we consider a trade-off question, we shall usually present this in the form of a trade-off between computation time versus store requirements. In this respect the document may strike the reader as very strongly dated, perhaps even out-dated by the time it appears in print. If so, I hope that I can defend myself with the observation that such a reader has failed to read between the lines: it is not so much the particular trade-off question chosen that matters, as the fact that the problem has been approached in such a fashion that we have created a conceptual framework in which such specific trade-off questions can be postponed until the appropriate moment. The only thing I can do at this stage is to urge my readers to read between the lines as much as possible. (If one tries to transmit ideas or methods, one can talk about them but that alone is insufficient: one must show examples illustrating them. When lecturing, it is my sad experience that after having dealt with a specific example, I find the attention of half my audience completely usurped by this example: they have forgotten that the example was only brought to their attention to illustrate something more general.
This is a sad experience and no amount of prior warning that this misunderstanding is bound to happen if they are not careful, has ever enabled me to avoid it!) To put it in another way: it is my purpose to transmit the importance of good taste and style in programming, the specific elements of style presented serve only to illustrate what benefits can be derived from "style" in general. In this respect I feel akin to the teacher of composition at a conservatory: he does not teach his pupils how to compose a particular symphony, he must help his pupils to find their own style and must explain to them what is implied by this. (It has been this analogy that made me talk about "The Art of Programming".)

     There is a further class of potential readers that will find this subject matter very incompletely dealt with, viz. those who identify the programmer's task with writing programs in, say, FORTRAN or PL/1. One of my implicit morals will be that such programming languages, each in their own way, are vehicles inadequate to guide our thoughts. If FORTRAN has been called an infantile disorder, PL/1 must be classified as a fatal disease.

     Although I would love to do it, it is impossible to give a true acknowledgement, listing all persons whose relevant influence on my thinking I gratefully remember or should remember. With my apologies to all persons unmentioned I would like to make a few exceptions and list in alphabetical order: my mother mrs.B.C.Dijkstra - Kluyver, R.W.Floyd, C.A.R.Hoare, P.Naur, B.Randell, D.T.Ross, N.Wirth and M.Woodger. None of the persons listed, however, should in any way be held responsible for the views expressed (with the possible exception of my mother who is in some sense responsible for my existence).

     I am deeply indebted to my sister-in-law, mrs.E.L.Dijkstra - Tucker for her willingness to correct my use of English in yet another manuscript and to W.H.J.Feijen for the great care with which he has screened the text for typing errors.


2. Some Fundamental Notions

     In this section a number of notions will be introduced, because they are fundamental to the whole activity of programming. They are so fundamental that they will not be dissected into more primitive concepts. As a result, this section will be a very informal one, analogy, metaphor and natural language (poetry, if I were able!) being the only available vehicles to convey their contents and connotations.

     It is not unusual —although a mistake— to consider the programmer's task to be the production of programs. (One finds terms such as "software manufacturing", proposals to measure programmer productivity by the number of lines of code produced per month etc., although I have never seen the suggestion to measure composer productivity by the number of notes, monthly scribbled on his score!) This mistake may be at the heart of the management failure which is so apparent in many large software efforts. It is a mistake because the true subject matter of the programmer's activity is not the program he composes, but the class of possible computations that may be evoked by it, the "production" of which he delegates to the machine. It seems more fruitful to describe the programmer's activity as "designing a class of computations", rather than as "making a program". In this connection it should be borne in mind that the more relevant assertions about programs —e.g. about the correctness of their resource demands— indeed pertain to the computations, rather than to the last thing that leaves the programmer's hands, viz. the program text. It is for this reason that, when introducing fundamental notions, I will start at the side of the computations, with the "happenings in time".

     The first notion is that of an action. An action is a happening, taking place in a finite period of time and establishing a well-defined, intended net effect. In this description, we have included the requirement that the action's net effect should be "intended", thereby stressing the purposefulness. If we are interested in an action at all, it will be by virtue of its net effect.

     The requirement that the action should take place in a finite period of time is most essential: it implies that we can talk about the moment T0, when the action begins, and the later moment T1, when the action ends. We assume that the net effect of the action can be described by comparing "the state at moment T0" with the "state at moment T1".

     An example of an action would be a housewife peeling the potatoes for the evening dinner. The net effect is that the potatoes for the evening dinner are at moment T0 still unpeeled, say in the potato basket in the cellar, while at moment T1 they will be peeled and, say, in the pan they are to be cooked in.

     When we dissect such a happening as a time sequence of (sub)actions, the cumulative effect of which then equals the net effect of the total happening, then we say that we regard the happening as a sequential process, or process for short.

     Whenever such a dissection is permissible, we can regard the same happening either as an action, or as a sequential process. When our interest is confined to the net effect, to the states "before and after", then we regard it as an action. If, however, we are interested in one or more intermediate states as well, then we regard it as a process. In the latter case the moment T0 coincides with the beginning of the first subaction and the end of each subaction coincides with the beginning of the next one, with the exception of the last subaction, whose end coincides with T1, the end of the whole happening.

     I must stress, that whether some happening is regarded as an action or as a process is not so much an inherent property of the happening as an expression of our mood, of the way in which we prefer to look at it. (Why we should want to look at it in different ways will be left for later discussion.) Similarly, if we have chosen to regard the happening as a process, the way in which it has been dissected is also not so much an inherent property of the happening as a result of which of its distinguishable intermediate states (for some reason or another) we wish to take into consideration.

     The happening of the potato-peeling housewife could, for instance, be described by the time-succession of the following subactions of the housewife:

"fetches the basket from the cellar;
fetches the pan from the cupboard;
peels the potatoes;
returns the basket to the cellar" .

     Here the total happening has been described as a time-succession of four subactions. In order to stress that we have given the description of a happening, we have phrased it in the form of an eye-witness account. Note, that if the eye-witness did not bother to describe that the basket was fetched from the cellar before the pan was fetched from the cupboard, the first two lines would have been condensed into a single subaction "fetches the basket from the cellar and the pan from the cupboard".

     We postulate that in each happening we can recognize a pattern of behaviour, or pattern for short; the happening occurs when this pattern is followed. The net effect of the happening is fully determined by the pattern and (possibly) by the initial state (i.e. the state at moment T0). Different happenings may follow the same pattern; if these happenings establish different net effects, the net effect must have been dependent on the initial state as well, and the corresponding initial states must have been different.

     How we can recognize the same pattern in different happenings falls outside the scope of this text. If we meet a friend, we can recognize his face, no matter what facial expression he shows: it may be an expression we have never seen on his face before! Similarly with different happenings: we recognize the same pattern, abstracting from the possibly different initial states and net effects.

     We return for a moment to the housewife. On a certain day she has peeled the potatoes for the evening dinner and we have an eye-witness account of this happening. The next day, again, she peels the potatoes for the evening dinner and the second happening gives rise to an eye-witness account equal to the previous one. Can we say, without further ado: "Obviously, the two accounts are equal to each other for on both occasions she has done exactly the same thing."?

     How correct or incorrect this last statement is depends on what we mean by "doing the same thing". We must be careful not to make the mistake of the journalist who, covering a marriage ceremony, reported that the four bridesmaids wore the same dress. What he meant to say was that the four dresses were made from material of the same design and —apart from possible differences in size— according to the same pattern.

     The two actions of the housewife are as different from each other as the dresses are: they have, as happenings, at least a different identity: one took place yesterday, one today. As each potato can only be peeled once, the potatoes involved in the two happenings have different identities as well; the first time the basket may have been fuller than the second time; the number of potatoes peeled may differ, etc.

     Yet the two happenings are so similar that the same eye-witness account is accepted as adequate for both occasions and that we are willing to apply the same name to both actions (e.g. "peeling the potatoes for the evening dinner").

     An algorithm is the description of a pattern of behaviour, expressed in terms of a well-understood, finite repertoire of named (so-called "primitive") actions of which it is assumed a priori that they can be done (i.e. can be caused to happen).

     In writing down an algorithm, we start by considering the happening to take place as a process, dissected into a sequence of subactions to be done in succession. If such a subaction occurs in the well-understood, finite repertoire of named actions, the algorithm refers to it by its name. If such a subaction does not occur in the finite repertoire, the algorithm eventually refers to it by means of a subalgorithm in which the subaction, in its turn, is regarded as a process, etc. until at the end all has been reduced to actions from the well-understood, finite repertoire.

     The notion of an algorithm, of an executable precept for the establishing of a certain net effect, is very well known from daily life: knitting patterns, directions for use, recipes and musical scores are all algorithms. And if one asks the way to the railway station in an unfamiliar town, one asks essentially for an algorithm, for the description of a pattern of behaviour which, when followed, will lead to the desired goal.

     In our definition of an algorithm we have stressed that the primitive actions should be executable, that they should be done. "Go to the other side of the square." is perfectly acceptable, "Go to hell.", however, is not an algorithm but a curse, because it cannot be done.

     Besides that we have stressed that the repertoire should be well-understood: between the one who composed the algorithm and the one who intends to follow it there should be no misunderstanding about this repertoire. (In this respect knitting patterns are, as a rule, excellent, recipes are of moderate quality while the instructions one gets when asking the way are usually incredibly bad!) How essential this absence of misunderstanding is may perhaps best be demonstrated by a recipe for jugged hare as it occurs in an old Dutch cookery-book; translated into English the recipe runs as follows: "One taketh a hare and prepareth jugged hare from it.". The recipe is not exactly wrong, but it is hardly helpful!

     Let us now contrast the eye-witness account of the potato peeling session:

"fetches the basket from the cellar;
fetches the pan from the cupboard;
peels the potatoes;
returns the basket to the cellar"

with the corresponding algorithm —the set of instructions, say, the housewife might give to a new maid—:

"fetch the basket from the cellar;
fetch the pan from the cupboard;
peel the potatoes;
return the basket to the cellar" .
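
     Rendered in a present-day notation (Python is used here purely as an illustration; the primitive actions are hypothetical stand-ins that merely record that they have happened), the maid's instructions are nothing but a fixed sequence of named actions, done in the given order:

```python
# The four primitive actions; in this sketch each one merely records
# in a log that it has taken place.
log = []

def fetch_basket_from_cellar():  log.append("fetch basket")
def fetch_pan_from_cupboard():   log.append("fetch pan")
def peel_the_potatoes():         log.append("peel potatoes")
def return_basket_to_cellar():   log.append("return basket")

# The algorithm: a concatenation of names of actions, to be done
# in the given order (the role played above by the semicolon).
fetch_basket_from_cellar()
fetch_pan_from_cupboard()
peel_the_potatoes()
return_basket_to_cellar()

print(log)
```

Following the algorithm produces a happening whose eye-witness account is exactly the recorded log.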

     Comparing the two, we may well ask what we have gained, for it seems a roundabout way of doing things: describing a pattern of behaviour which, when followed, will evoke the happening, while in the eye-witness account we had an excellent way of describing the happening itself.

     What have we gained? Well, nothing as long as we restrict ourselves to algorithms that can be given —as in our example— by a concatenation of names of actions, to be done in the given order. Under that restriction an eye-witness account of the actions "as they take place" is equally good. But the behaviour of the housewife (or the maid) could be a little bit more complicated: let us suppose that after the pan had been fetched, she puts on an apron if necessary, i.e. when she wears a light-coloured skirt and that on one day she uses the apron while on the other day she doesn't.

     On a rather abstract level —i.e. without explicit mentioning of the apron and the condition under which it is used— a uniform eye-witness account would still do (in some fashion) for both sessions, e.g.:

"fetches the basket from the cellar;
fetches the pan from the cupboard;
takes preparation with regard to clothing;
peels the potatoes;
returns the basket to the cellar"

with the implicit understanding that "takes preparation with regard to clothing" covers the empty action when her skirt is not light-coloured and covers putting on an apron when her skirt is light-coloured.

     If, however, we want to go into more detail and want to mention the apron explicitly, then "takes preparation with regard to clothing" has to be replaced in the eye-witness account of the one day's session by

"sees that her skirt is light-coloured and therefore puts on an apron"

and in the other day's session by

"sees that her skirt is not light-coloured and therefore omits putting on an apron" .

     The trouble is, that the eye-witness account cannot contain the single sentence:

"puts on an apron if her skirt is light-coloured"

for then the audience justly asks "does she do it or not?". In other words: in that degree of detail we cannot cover the two happenings by the same eye-witness account, for in that degree of detail the two happenings differ!

     It is here that the potential power of the algorithm becomes apparent, for we can recognize the same pattern of behaviour in the two happenings and by describing that pattern of behaviour we give something that is applicable under both circumstances, light- as well as dark-coloured skirt. This is possible thanks to the fact that what actually happens when a certain pattern of behaviour is followed may be co-determined by the state of affairs which is current when the action begins.

     We see two things: the inspection of whether the skirt is light-coloured or not and, depending on the outcome of this inspection, the action "put on an apron" is to take place or not. In order to express this conditional execution we need in our algorithm another connective besides the semicolon. In our example of the algorithm (I refer to the instructions to the new maid) the semicolon had a double function: in the text it separates one action name from the next action name, but besides that it implied for the happening a certain amount of what is technically called "sequencing control", i.e. it was meant to imply that the end moment of the preceding action should coincide with the beginning of the following action. We now need another connective, indicating whether or not the inspection should be followed by the next action in the text. We write for instance the following algorithm:

"fetch the basket from the cellar;
fetch the pan from the cupboard;
if skirt is light-coloured do put on an apron;
peel the potatoes;
return the basket to the cellar" .

(for historical reasons the so-called conditional connective "if...do" is split into two symbols "if" and "do", enclosing the inspection.)
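
     The conditional connective corresponds directly to the if-statement of present-day programming languages. A minimal Python sketch (the function name and the parameterization by skirt colour are my own modelling) shows how one pattern of behaviour covers both happenings:

```python
def peeling_session(skirt_is_light):
    """One pattern of behaviour covering both the apron and the
    no-apron day; which happening occurs is co-determined by the
    state of affairs current when the action begins."""
    actions = []
    actions.append("fetch basket")
    actions.append("fetch pan")
    # The conditional connective: the inspection decides whether
    # "put on an apron" takes place or is skipped entirely.
    if skirt_is_light:
        actions.append("put on apron")
    actions.append("peel potatoes")
    actions.append("return basket")
    return actions

print(peeling_session(True))   # the apron action takes place
print(peeling_session(False))  # the apron action is skipped
```

The same text describes both days; only the initial state (the colour of the skirt) differs.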

     The conditional connective connects two actions, the first of which must be a so-called "inspection". This inspection describes a state of affairs, which may be true or false ("false" is the technical term for "not true"). The happening which is to take place corresponding to the conditional compound

"if inspection do action"

may take one of two mutually exclusive forms: either the inspection gives the result true and it is followed by the action, or the inspection delivers the result false and thereby the whole compound action has been completed. The algorithm derives its superiority over the eye-witness account from the fact that it may contain connectives (such as the conditional connective) that imply a more elaborate sequencing control than the semicolon.

     We need a further connective before we can see the full superiority of the algorithm over the eye-witness account, viz. a repetitive connective.

     Suppose that we want to express that "peeling the potatoes" is in itself a process that deals with one potato at a time and that, correspondingly, our primitive action is named "peel a next potato". If the number of potatoes to be peeled is a fixed constant, say always 25, then we can replace "peel the potatoes" by 25 times "peel a next potato", separated from each other by in toto 24 semicolons. But we now assume that the number of potatoes to be peeled may differ from one day to the next; yet we want to recognize in each peeling session the same pattern of behaviour. We suppose the housewife capable of looking into the pan and judging whether the amount of peeled potatoes is sufficient or not.

If we know a priori that in the worst case (i.e. many guests and very small potatoes) she will never have to peel more than 500 potatoes, we can give a general algorithm describing the actual peeling by repeating in the text of our algorithm 500 times (separated by in toto 499 semicolons) the conditional compound:

"if number of peeled potatoes is insufficient do peel a next potato" .

     Several objections can be made to this solution. There is the practical objection that it would reduce the construction of algorithms to doing lines. Furthermore we had to make the fundamental assumption that we know in advance a maximum number. Often it is very hard to give such an upper bound a priori and if it can be given, such an upper bound is usually many times larger than the average value. And if in actual fact 25 potatoes have to be peeled, the 26th inspection "number of peeled potatoes insufficient" —i.e. the first to deliver the result "false"— gives fresh information, while the following 474 inspections (which are prescribed by the algorithm as suggested) give no new information. Once the housewife has established that the number of peeled potatoes is no longer insufficient, she should not be forced to look into the pan another 474 times in order to convince herself!

     In order to meet these objections, we introduce a repetitive connective which, again for historical reasons, is written in two parts "while...do". Using this connective we can write the algorithm:

"fetch the basket from the cellar;
fetch the pan from the cupboard;
if skirt is light-coloured do put on an apron;
while number of peeled potatoes is insufficient do
                                   peel a next potato;
return the basket to the cellar" .

     The process corresponding to

"while inspection do action"

consists of one or more executions of the conditional compound

"if inspection do action"

viz. up to and including the first time that the inspection gives the result "false".
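
     In a present-day notation the repetitive connective is the while-statement. A Python sketch (my own modelling: the housewife's judgement is rendered as a simple count against the number of potatoes needed that day, a number that may differ from day to day):

```python
def peel_until_sufficient(number_needed):
    """The repetitive connective "while ... do ...": the inspection is
    done first and the action takes place only as long as the result is
    true; the compound ends with the first inspection delivering false."""
    peeled = 0
    while peeled < number_needed:   # "number of peeled potatoes is insufficient"
        peeled += 1                 # "peel a next potato"
    return peeled

print(peel_until_sufficient(25))  # 25 peelings; the 26th inspection delivers false
print(peel_until_sufficient(0))   # the very first inspection delivers false: no peeling
```

No a-priori upper bound of 500 is needed, and no superfluous inspections take place after the first "false".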

     We can also describe the semantics of the repetitive connective in terms of the conditional one recursively:

"while inspection do action"

is semantically equivalent to

"if inspection do
     begin action; while inspection do action end"

Here the symbols "begin" and "end" are used as opening and closing bracket respectively; they are a syntactical device to indicate that the conditional connective connects the inspection (from the first line) to the whole of the second line: the value delivered by the first inspection decides whether what is described on the second line (from begin until end) will be done in its entirety or will be skipped in its entirety.
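
     The recursive description can itself be put to the test. In the following Python sketch (my own modelling, with inspection and action passed as parameterless functions) both forms are applied to the same inspection and action, and establish the same net effect:

```python
def while_do(inspection, action):
    # The iterative form of "while inspection do action".
    while inspection():
        action()

def while_do_recursive(inspection, action):
    # "if inspection do begin action; while inspection do action end"
    if inspection():
        action()
        while_do_recursive(inspection, action)

def run(connective, start, goal):
    """Drive either connective over the same state: count up from
    start as long as the count is below goal."""
    state = {"n": start}
    connective(lambda: state["n"] < goal,
               lambda: state.update(n=state["n"] + 1))
    return state["n"]

print(run(while_do, 0, 5), run(while_do_recursive, 0, 5))  # both deliver 5
```

The two connectives are semantically equivalent: for the same initial state they establish the same final state.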

Note. In the above I have approached the idea of an algorithm starting my considerations from the class of happenings in which we wanted to discern the same pattern of behaviour. In addition to the semicolon as connective in the text of the algorithm this led to other connectives such as the conditional connective "if...do" and the repetitive connective "while...do". It is not unusual to approach the relation between algorithm and computations from the side of the algorithm; such an approach leads in a very early stage to syntactical considerations, as a result of which the connectives are introduced in a somewhat different terminology. Instead of

"if inspection do action"

people write

"if condition do statement" .

     The part of the text denoted by "if condition do" is then described as "conditional clause", which is regarded as a prefix attached to the "statement"; the whole construction comprising clause and statement together is then called "a conditional statement". Similarly, in

"while condition do statement" ,

"while condition do" is called "a repetitive clause" and the statement is called "the repeatable statement". This terminology is so widely used that —in spite of its syntactical origin— I shall not refrain from using it whenever I see fit to do so.

     As a final exercise I shall describe the pattern of behaviour of a housewife who —for some obscure reason— is so conditioned that she can only peel an even number of potatoes for the evening dinner:

"fetch the basket from the cellar;
fetch the pan from the cupboard;
if skirt is light-coloured do put on an apron;
while number of peeled potatoes is insufficient do
          begin peel a next potato; peel a next potato end;
return the basket to the cellar" .

This example is included to show that the same set of primitive actions allows different patterns of behaviour.
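
     In present-day notation the bracketed pair "begin ... end" is a compound statement serving as the repeatable statement; in Python the same grouping is expressed by indenting both actions under the while-clause. A sketch (the count-based modelling is my own invention):

```python
def peel_even(number_needed):
    """The repeatable statement is the bracketed pair of two peelings,
    so potatoes can only be peeled two at a time."""
    peeled = 0
    while peeled < number_needed:  # "number of peeled potatoes is insufficient"
        peeled += 1                # begin peel a next potato;
        peeled += 1                #       peel a next potato end
    return peeled

print(peel_even(25))  # 26: an odd requirement is rounded up to the next even number
print(peel_even(24))  # 24
```

The same primitive actions, grouped differently, give a different pattern of behaviour: the total number peeled is always even.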

     The notion of an algorithm is a very powerful one, for a single algorithm "extracts" what a large number of different happenings may have in common. And it does not do so by ignoring details; on the contrary, a single algorithm covers a whole class of happenings to the very degree of detail in which the corresponding eye-witness accounts would differ from each other. The possibly large number of different corresponding happenings is generated by the different ways of sequencing as might be controlled by the conditional, the repetitive (and similar, see later) connectives.

     On the one hand we have the algorithm, a finite text, a timeless, static concept; on the other hand we have the corresponding happenings that may be evoked by it, dynamic concepts, happenings evolving in time. The intimate relation between the two —to which we refer by the term "sequencing"— lies at the heart of the algorithmic notion. (It is to this intimate relation that I refer whenever I stress that the programmer's true activity is "The design of classes of computations".) The notion of an algorithm is admittedly a very powerful one; before going on, however, I shall allow myself a little detour in order to indicate what "price" we have paid for its introduction.

     We have stated that we restrict ourselves to happenings taking place in a finite period of time. Whenever an algorithm is followed, a happening is taking place, eventually as a time-succession of primitive actions. It is only realistic to postulate that each primitive action will take a finite period of time, unequal to zero: no action will take place "infinitely fast". This implies that we confine our attention to happenings that are taking place as a time-succession of a finite number of primitive actions.

     And now we are beginning to see the price: it is very easy to write down a text that looks like an algorithm but that is not an algorithm in our sense of the word, because the effort to follow it turns out to be a never-ending task, e.g.

"while skirt is light-coloured do peel a next potato" .

When we assume that the peeling of a next potato does not influence the colour of the skirt, we have just two cases: either the skirt is not light-coloured and the only action taking place is the inspection establishing this fact, or the skirt is light-coloured and will remain so and what the pattern could be interpreted to describe is the peeling of an infinite number of next potatoes. This is usually called "an improper algorithm".

     The question of whether a text that looks like an algorithm is indeed a proper algorithm or not, is far from trivial. As a matter of fact Alan M. Turing has proved that there cannot exist an algorithm capable of inspecting any text and establishing whether it is a proper algorithm or not. The assumption of the existence of such an algorithm leads to a contradiction which will be sketched below. Suppose that we have such an algorithm, an inspection

proper(L)

which delivers the result true when the text named L is a proper algorithm and the result false when it is improper. Consider now the following text, named L:

L: "while proper(L) do whistle once"

(in which "whistle once" is assumed to be an available primitive). If we start to follow this algorithm, how many times will a whistle sound? The assumption that "proper(L)" delivers the result true will cause the algorithm to be improper and vice versa! The conclusion is that no algorithm for the inspection "proper" can exist. (Marvin Minsky concludes in "Computation, Finite and Infinite Machines", Prentice Hall, 1967, a formal treatment of this proof with the sentence: "We have only the deepest sympathy for those readers who have not encountered this type of simple yet mind-boggling argument before.".)
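
The diagonal argument above can be sketched in present-day notation (anachronistic, but perhaps helpful). Here "proper" stands for the hypothetical inspection, and "max_steps" is merely a safeguard of our simulation so that non-termination can be observed; all names are illustrative choices, not part of the original text.

```python
# A sketch of the diagonal argument: any claimed inspection "proper"
# can be defeated by the text L built from it.

def make_L(proper, max_steps=1000):
    # The text L: "while proper(L) do whistle once"
    def L():
        whistles = 0
        while proper(L):
            whistles += 1              # "whistle once"
            if whistles >= max_steps:
                return None            # loops forever, for our purposes
        return whistles
    return L

# Whatever verdict a claimed inspection gives on L, L contradicts it:
L = make_L(lambda text: True)          # verdict "proper" (terminates) ...
assert L() is None                     # ... yet L never terminates
L = make_L(lambda text: False)         # verdict "improper" ...
assert L() == 0                        # ... yet L terminates at once
```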

     The moral of this story is that it is an intrinsic part of the duty of everyone who professes to compose algorithms to supply a proof that his text indeed represents a proper algorithm.

     Our next fundamental notion is a machine (or a "computer"). A machine is a mechanism capable of causing actions to take place following a pattern of behaviour such as can be described by algorithms expressed in terms of a repertoire of primitive actions belonging to this machine.

     Above we have given two algorithms for peeling potatoes, one for a natural number of potatoes and one only for even numbers of potatoes. Both algorithms have been expressed in the same repertoire of primitive actions. They were introduced in the realm of "observing happenings"; the one could describe the pattern of behaviour of my left-hand neighbour, the other the one of my right-hand neighbour. Suppose that my own wife

1)   is also capable of performing those primitive actions
2)   will accept from me algorithms expressed in these primitives and will follow such an algorithm obediently.

Then I can make her peel either as my left-hand neighbour or as my right-hand neighbour, depending on the algorithm I have supplied to her. She is then an example of a machine.

     A mechanism that can do only one thing (such as one of the most widely-spread automata, the toilet flusher) is not called a machine. Essential for us is the associated repertoire of actions, the ability to accept patterns of behaviour and to behave accordingly.

     Machines are mechanical algorithm followers. The fact that in the last decennia increasingly powerful machines have become available to mankind is directly responsible for the increased importance of and interest in algorithms and their composition.

Note. It is a trivial matter to compose an algorithm for the fastest machine in the world, a proper algorithm in the theoretical sense of the word but somewhat impractical, as it would take the machine a million years to carry out the corresponding process to completion. The claim that "the machine is capable of causing the process to take place" is then somewhat subject to doubt: in actual fact it cannot. In what follows we shan't be bothered by the distinction between "theoretically possible" and "practically feasible". Not because we are impractical, on the contrary! The point is that in the meantime computers are so powerful that the class of practically feasible computations is by now sufficiently large —to put it mildly!— to make machines very useful and intriguing pieces of equipment, fully worthy of our attention.

     We call an algorithm intended to control the behaviour of a machine, a program. In other words, we reserve the name program for those algorithms that are intended for mechanical execution. In the general notion of an algorithm we have only required that the repertoire should be "well-understood", without bothering how this understanding is established: if a composer indicates "Andante" (= "going") for a composition in three-four time, he may do so, because, remarkably enough, we may expect this indication to be somehow meaningful even for a player with two legs. In the case of a machine, the situation is drastically different, for a machine is a finite piece of equipment which, by its very construction, has associated with it a finite, very well defined repertoire and whenever it is fed with a program it shall behave exactly as prescribed.

     The fact that machines are completely obedient slaves has caused complaints from many beginning programmers. Their obedience, they felt, makes programming cruelly difficult, for a trivial mistake in the program is sure to lead to entirely unintended behaviour. The programmer's inability to appeal to "the common sense of the machine" has been experienced as one of its major shortcomings. The more experienced programmer learns to appreciate its servile, strict obedience; thanks to it we can instruct it to do something "uncommon"! And this is something you cannot do with a servant who "rounds off" his instructions to the nearest probable interpretation.

     In the preceding paragraphs I have introduced programming as an important activity because now we have machines that can be controlled by programs and for which we have to compose programs when we want to use them, i.e. when we want to convert them into the tool solving our problem. But this is not the whole story. A computer is a many-sided thing. For its manufacturer it is primarily a product that he can (hopefully) produce and sell with profit. For many institutional buyers the computer is probably primarily a status symbol. For many users it is either a source of endless worries or, as the case may be, a highly useful tool. In University surroundings, the view of the computer as a tool to be used tends to be the predominant one. And there is no denying it: in their capacity of tools the computers are changing the face of the earth (and of the moon as well!). Yet I am convinced that we underestimate the computer's significance if we confine our appreciation of it to the aspects mentioned. They may cause shocks to the basis of our society, but I believe in the longer run these will turn out to be but ripples on the surface of our culture. I expect a much more profound influence from the advent of the modern computer, viz. in its capacity of a gigantic intellectual challenge, unprecedented in the history of mankind.

     The computer as a piece of equipment presents us with an entirely new combination of simplicity and power, which makes the programming task a unique challenge. When the electronic engineers have done their job properly, they present to the programmer a mathematically trivial piece of equipment. Its instruction code (its "repertoire") can be described perfectly well on a modest number of pages, everything is finite and discrete, there is just no place for conceptually difficult mathematical notions, such as continuity, infinity, limits, irrational numbers and whatnots. The mathematical basis of programming is just very, very simple. So simple that programming should be easy: it should be easy to conceive programs, it should be easy to convince oneself that a program is correct and that the machine working under its control will indeed produce the desired result. From its basic simplicity one derives the intuitive feeling that it should be a trivial matter to keep the happening evoked by one's program firmly within one's intellectual grip.

     But its basic simplicity is only one side of the coin: the other side presents the extreme power (both as regards capacity and speed) of currently available computers. As a result of its extreme power, both the amount of information playing a role in the computations as well as the number of operations performed in the course of a computation, escape our unaided imagination by several orders of magnitude. Due to the limited size of our skull we are absolutely unable to visualize to any appreciable degree of detail what we are going to set in motion, and programming thereby becomes an activity facing us with conceptual problems that have risen far, far above the original level of triviality.

     It is the possibility of considering realistically effective solutions of any degree of sophistication, combined with our complete intellectual grip on what we are considering, which will deeply influence our ways of thinking and our appreciation of our own thought processes. It has no precedent, for whenever in the past we were faced with something powerful (a storm or an army) we never had effective control over it. (This, for a long time, used to be the definition of "powerful", viz. that we were "powerless" in the face of it!)

Programming Languages and their Implementation

     The activity of composing programs is called "programming". In the preceding section we have introduced programs as algorithms intended to control the behaviour of machines and by virtue of the actual existence of such machines, programming is a very practical activity. It is one of the youngest branches of applied mathematics (in the broad sense of the word, i.e. not confined to mathematical physics or numerical analysis), it is as important as the applications in question, it is practical in the sense that it is the programmer's intention that a machine will actually display the behaviour as prescribed by the algorithm. For that reason a conscious programmer should respect the limitations of the (finite) machines. Alternative programs causing a machine to establish the same net result and therefore in that respect equivalent, may differ greatly in what is usually called "efficiency", i.e. in the demands they make upon the machine's resources. For many years, efficiency has been used as the sole yard-stick along which to compare the relative quality of alternative programs for the same task. In the meantime, programming has turned out to be so difficult, that other quality aspects have gained relative importance, such as the ease with which we can understand a program, can convince ourselves of its correctness or can modify it, etc. Yet, efficiency concerns cannot be ignored and in order to give the reader some feeling for the nature of the limitations he should respect, we shall give in this section an overall view of computing machines and the way in which they execute programs.

     In this little monograph we shall confine our attention to sequential algorithms, i.e. algorithms describing what actions should happen in succession, one after the other. Such algorithms have a property for which they have been blamed (and not entirely without justification), viz. that they are often "overspecific" as regards the order in which things have to happen. If two actions, say "A" and "B", have both to be done, a purely sequential algorithm will prescribe

either    "A; B"              or    "B; A"          ,

viz. action A followed in time by action B or the other way round. It will not express that the order is immaterial and, what is possibly more important, it will not express that the two actions are so "non-interfering" with each other that they may take place concurrently, or —to use the jargon— may be done in parallel.

     For various reasons I have decided to restrict my attention to purely sequential programs. The most obvious reason is to be found in the structure of the machines that are currently available or can be expected to become available in the next years. One or two decades ago, machines used to be purely sequential. In the meantime we have got equipment allowing for a limited amount of parallelism (dual processor machines, independent communication channels etc.), but such pieces of equipment are at best an aggregate of a small number of individual sequential components. In such machines the potential parallelism of activities is exploited by standard control programs (so-called "operating systems"), while the individual user still works in a strictly sequential environment. And it is to the individual user that this little monograph addresses itself.

     With the advent of what is called "large scale integration" (being a term from the computer field, its acronym LSI is better known!) it seems to become technically feasible to build machines more like "clouds of arithmetic units" with information processing activities going on simultaneously all over the place, for shorter periods of time even independently of each other. Programming for such machines will pose completely different trade-off problems: one will be willing to invest in potentially useful computing activity before its actual usefulness has been established, all for the sake of speeding up the whole computation. But although I know such machines may be coming, I shall not touch these problems for the following reasons. First, as far as general purpose applications are concerned, I have my doubts about the effectiveness with which such forms of parallelism can ever be exploited. Second —and that is the most important consideration— parallel programming is an order of magnitude more difficult than sequential programming. (This statement will be doubted, but I have enough experience in multiprogramming to feel myself entitled to say so. The point is that with parallelism a great variety of happenings may take place under control of the same program(s). On account of undefined speed ratios a set of parallel programs is written for a partly non-deterministic machine and special care is required to ensure that, on a higher level of abstraction, its total behaviour can again be regarded as uniquely determined by the program(s).) Third, I am not over-impressed by the complaints that sequential programs specify a more stringent time-succession than logically necessary: I often have the somewhat uneasy feeling that these complaints find their origin in the mathematical tradition of the pre-computer age. In classical mathematics the notion of an algorithm has been neglected; mind you, I am not blaming the mathematicians of the past for this, because before the actual existence of automatic computers, algorithms were hardly a relevant subject. But we should not close our eyes to the fact that the course of history has caused mathematics to be more tuned to timeless problems, to static relations, to functional dependence. (The existence of classical mechanics does not contradict this observation: renaming the independent variable in the differential equations "k", say, instead of the usual "t" does not influence the mathematics involved.) Some of the efforts to remove the overspecification of the time-succession —they rely heavily on functional dependence— strike me as tackling the programming problem with classical concepts that have been developed for other purposes. So much for my decision to restrict my considerations to sequential machines.

     To get some feeling for the demands made upon the modern automatic computer, let us focus our attention for a moment upon an average sizable computation, for instance, the computation of (a good approximation of) the inverse of a given matrix of, say, 100 by 100 elements. Such a job has two markedly quantitative aspects:

a)     a vast amount of numbers is involved: posing the problem implies the specification of 10.000 numbers, the answer is also given by 10.000 numbers (each of which is, in general, a function of all 10.000 elements of the given matrix)
b) a vast amount of computation has to be done: if it is done by elimination, the number of operations (i.e. multiplications and additions) is of the order of magnitude of 1.000.000.

     The construction of machines able to cope (reliably!) with these two very different aspects of "multitude" is one of the greater triumphs of electronics. It has been achieved by applying the old and well-known principle: "Divide and Rule". In modern computers one can distinguish two vital components, each of which has the specific task to cope with one of the forms of multitude.

a)     the store (called "memory" in American); this is the component able to receive, store and return vast amounts of information; its primary function is to be large, to be able to contain very much information
b) the arithmetic unit or processor; this is the component in which the actual work —adding, subtracting, multiplying, comparing etc.— is done; its primary function is to be very fast so that it may do a great deal in a limited period of time.

     It is not the function of the arithmetic unit to be large in the sense that it should contain large amounts of information. On the contrary: while nearly all the information, relevant for the computation at large, lies "sleeping" in the store, at any moment of time only the tiny fraction actually involved in the information processing activity is found (copied) in the arithmetic unit, which is only capable of dealing with a few numbers at a time, say the two numbers to be added and the sum formed by the act of addition. Whenever two numbers (in store) are to be added, they are transported from the store to the arithmetic unit, where the sum will be formed; once formed the sum will either be kept in the arithmetic unit for immediate further processing or it will be sent back to store for later processing. Microscopically, the store acts as the icebox in which all information is kept which is not involved in the current activity of the arithmetic unit. If small letters indicate variables in store and R indicates a register in the arithmetic unit, the computation

x := (a + b)*(c + d)

might be evoked by the following sequence of instructions:

R:= a;
R:= R + b;
t:= R;
R:= c;
R:= R + d;
R:= t * R;
x:= R

     The first instruction fetches the value of a from store into the register R, the next one increases (the contents of) R by the value of b (from store). At this stage one of the two factors to be multiplied has been computed. Before the multiplication can take place, the second factor has to have been computed as well; in a machine with a single register R for arithmetic results, this second addition implies again the use of the register R. In order to make this register available for this purpose, the third instruction sends the value of the first factor —a so-called "intermediate result"— back to store, assigning it to a variable here named "t": the first sum is sent back to store for later usage. The fourth and fifth instructions compute the second factor, the value of which is left in R, ready for multiplication by the stored value called "t". The last instruction stores the product, now formed in R, so that it can be retrieved under the name "x" for later usage.
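
The single-register machine just described can be simulated in a few lines of present-day Python, purely by way of illustration; the operation names and the use of a dictionary as the store are assumptions of this sketch, not features of any actual machine.

```python
# A minimal simulator for the one-register machine of the text:
# a store of named cells and one register R in the arithmetic unit.

def run(store, instructions):
    R = 0
    for op, s in instructions:
        if op == "load":     R = store[s]          # R := s
        elif op == "add":    R = R + store[s]      # R := R + s
        elif op == "mul":    R = store[s] * R      # R := s * R
        elif op == "store":  store[s] = R          # s := R
    return store

store = {"a": 1, "b": 2, "c": 3, "d": 4, "t": 0, "x": 0}
program = [("load", "a"), ("add", "b"), ("store", "t"),
           ("load", "c"), ("add", "d"), ("mul", "t"), ("store", "x")]
run(store, program)
assert store["x"] == (1 + 2) * (3 + 4)    # x := (a + b)*(c + d)
```

Note that the intermediate result "t" survives in the store after the computation, exactly as in the walk-through above.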

     The above example illustrates many things. It shows how

x:= (a + b)*(c + d) ,

which on one level of interest can be regarded as a single action, on closer inspection turns out to be a sequential process taking place as a time-succession of seven more primitive sub-actions ("program steps"). It also shows that at any moment in time only a tiny portion of the algorithm is in active control of what actually happens: while the first or the second addition is performed, the fact that the two sums will have to be multiplied is still "dormant". (If the total action had been

x:= (a + b)/(c + d) ,

the only modification necessary would have been the replacement of the sixth instruction by

R:= t / R ;

the first five instructions would have been insensitive to this change. That is what I meant by "dormant".)

     It also demonstrates that, just as at any moment in time, only a tiny fraction of the numerical information is involved in actual processing, also only a tiny fraction of the program exercises control, viz. the instruction currently executed.

     It also demonstrates that it is no good just to divide the machine into two components, store and arithmetic unit, but that one must also provide for a (dense) information traffic between the two; this is provided for by what connects the two together, the so-called "selection".

     We have said that the store should be able to store information; it must, for instance, be able to store "numbers", e.g. the intermediate result called "t". Obviously, these numbers cannot be kept in store like balls in an urn: when the instruction

R:= t * R ;

has to be executed, the store must not return just any number, but quite definitely it must return the value sent to it two instructions earlier. For that reason the numbers in store are not arranged as balls in an urn, on the contrary! Store is arranged as a number of so-called "storage cells", each capable of holding the value of one number at a time. Each storage cell is identified by its so-called "address"; each time contact with the store is required —either to receive or to return information— this request is accompanied by a statement of the address of the storage cell involved. If the store is to receive information —this is called "writing into store"— the value to be stored and the address of the storage location involved (plus a "write request") are sent to store and selection respectively; as a result of the writing operation the original contents of the storage cell, which get lost, are replaced by the new value. If the store is to return information —this is called "reading from store"— the address of the storage cell involved (plus a "read request") is sent to the selection; as a result the contents of the storage cell are returned from store (and kept in the storage cell as well for later reading if desired). As far as destruction, reception and reproduction of the information contained in a storage cell are concerned, the situation shows great analogy to a tape in a tape recorder. You can use the tape to record as many pieces of music as you want, but only one at the same time: whenever you record a new piece of music on an old tape, its previous contents are wiped out; the piece of music currently recorded, however, can be played back as many times as you wish. (To make the analogy a true one, we must restrict ourselves to pieces of music of equal duration, precisely matched to the length of the tape, matched to its (finite) information capacity.)

     Storage cells can store information by virtue of the fact that they can be in a finite number of distinct states. In practically all computers they are composed of elementary components, each of which can be in one of two possible states. (The most common form is a little ring of ferromagnetic material that will be circularly magnetized in one of the two possible directions.) One such component can be in 2 different states (say "North" and "South"), two such components can be together in 4 different total states ("North-North", "North-South", "South-North" and "South-South"), and N such components together can be in 2^N mutually different states. The number of elementary components associated with each storage cell is a characteristic constant of the machine's store and is called "the word length". If the word length is 32, the number of different possible total states per word is 2^32, i.e. slightly over 4*10^9; the arithmetic unit will associate with each state a numerical value; in terms of these numerical values a storage cell can then hold, for instance, any integer value ranging from (roughly) -2*10^9 to +2*10^9.
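
The counting argument can be checked mechanically; the following sketch interprets a 32-bit word as a two's-complement integer, which is one common convention and an assumption of this sketch (the text only says that the arithmetic unit associates a numerical value with each state).

```python
# N two-state components together admit 2^N distinct total states.
from itertools import product

N = 2
states = list(product(("North", "South"), repeat=N))
assert len(states) == 2 ** N       # the 4 states listed in the text

# A 32-bit word, read as a two's-complement integer, yields the
# range quoted above:
word_length = 32
assert 2 ** word_length == 4294967296                 # just over 4*10^9
lo, hi = -2 ** (word_length - 1), 2 ** (word_length - 1) - 1
assert (lo, hi) == (-2147483648, 2147483647)          # roughly ±2*10^9
```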

     The finite capacity of the storage cell is something the user must be aware of: it is matched to the abilities of the arithmetic unit, i.e. if the latter deals with integer values it is geared to operations up to a certain maximum absolute value; if it deals with (approximations of) real numbers, it is geared to dealing with them in a certain precision, maximum absolute value and precision respectively being chosen such that the numerical values to be distinguished between can be stored in one (or possibly two successive) storage cells. If greater integers or reals in higher precision have to be manipulated, special measures have to be taken which will be more expensive.

     In the meantime we have explained enough about the general machine structure to mention two aspects of the "costs" involved in the execution of a program. One of them is computation time. We have seen that the arithmetic unit performs one operation after the other, and the more operations a program prescribes, the longer the total amount of time the arithmetic unit will have to spend to carry the computation to completion. The other one is storage usage. If a thousand values have to be computed in order to be added together, we may compare the following two algorithms. The first one first computes all thousand values and stores them, after which they are added; the second algorithm immediately adds each number to the partial sum as soon as it has been computed. With regard to storage usage the first algorithm is more demanding: at some stage of the computation it requires a sufficient amount of store to hold all thousand values, an amount of store which, in the second algorithm, remains available for other (perhaps more useful) purposes.
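
The two alternatives can be sketched as follows; the function "compute" is an arbitrary stand-in for producing the i-th value, an assumption of this illustration.

```python
# Two ways of adding a thousand computed values, differing in their
# peak storage demand but not in their result.

def compute(i):
    return i * i                  # stand-in for "computing the i-th value"

def sum_store_all(n):
    values = [compute(i) for i in range(n)]   # needs room for all n values
    return sum(values)                        # ... which are then added

def sum_running(n):
    total = 0
    for i in range(n):            # needs room for one value and the
        total += compute(i)       # partial sum only
    return total

assert sum_store_all(1000) == sum_running(1000)
```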

     So much for the finite capacity of each storage cell and the fact that a store contains only a finite number of such cells. Let us return to their addresses: a while ago we hinted that each storage cell is identified by an "address" and that each reference to store takes place under control of the address of the storage cell concerned, but up till now we have not been very explicit about what an address really is. Well, this is very simple: the storage cells are numbered 0, 1, 2, 3, 4, 5, ... up to and including M – 1 if the store comprises M different storage cells (M between 16.000 and 1.000.000 being typical figures), and the ordinal number of each storage cell is used as "its address" (like houses in a street!). This implies that the storage cells have a natural order, viz. the order of increasing address. Given the address of a storage cell, the address of the next storage cell can be computed by adding 1 to it.

     This natural ordering of the storage cells is heavily exploited. If a vector, i.e. a sequence of numbers a_0, a_1, ... , a_n, has to be stored, its elements can be stored in successive storage cells. If the address of element a_0 is known, the address of element a_i can then be computed, viz. by adding (the value of) i to the address of the element a_0.
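
The address arithmetic can be shown with a store modelled as a Python list of numbered cells; the base address and vector contents below are illustrative choices.

```python
# A vector stored in successive cells: address(a_i) = address(a_0) + i.

store = [0] * 100          # a small store of 100 numbered cells
base = 40                  # address chosen for element a_0 (illustrative)
a = [5, 7, 11, 13]         # the vector a_0 .. a_3

for i, value in enumerate(a):
    store[base + i] = value

i = 2
assert store[base + i] == a[i]    # fetching a_i via its computed address
```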

     The natural order of the storage cells is also exploited in the program representation. Remember, we have postulated that a machine could "accept" a program, and that, once the program had been accepted, the machine could execute the program (i.e. cause the happening as prescribed by the program). In other words, when the machine is executing the program, this program —i.e. "the information describing how to behave"— must be somewhere in the machine! Where? Well, in the store, the store being specifically the machine component able to hold information. In other words, the store is used for two different purposes: it holds the numerical information to be manipulated, but it also holds —in some other part of it— the program, i.e. the information describing what manipulations have to be performed.

     To sketch briefly how this can be done, we return to our previous example, where

x := (a + b)*(c + d)

was decomposed into the sequence of seven instructions denoted by

R:= a;
R:= R + b;
t:= R;
R:= c;
R:= R + d;
R:= t * R;
x:= R

and the question is: by means of what conventions do we represent the above information in a store capable of holding numbers? This is achieved by a two-stage convention, one for representing single instructions and one for representing a sequence.

     The first convention is to choose for each instruction a unique number code. In the above notation we have denoted variables (or: the addresses of the storage cells associated with the variables and containing their current value) with small letters (a, b, c, d, t, and x); but addresses are numbers and that component is therefore already numerical. Using "s" for "any address" we see that in the above example, we can distinguish instructions of four different types:

1.         R := s
2. R := R + s
3. R := s * R
4. s := R

The second part of the convention associates with each type of instruction a number (e.g. the numbers 1, 2, 3, and 4 to the types shown above; it should be mentioned that in actual machines the number of instruction types is considerably larger than 4). By concatenating the digits describing the instruction type number with those giving the address we have a number code for each possible instruction and we assume that the word length of the storage cell is sufficient to contain such a number. The first convention, as just described, reduces the problem of storing a sequence of instructions to storing a sequence of numbers. Now the second convention is that the sequence as such is represented by storing these numbers in successive storage cells, i.e. storage cells with successive addresses. And this completes (roughly) the trick.
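
The two-stage convention can be made concrete in a short sketch: each instruction becomes one number (a type digit concatenated with an address), the program occupies successive cells, and a fetch-decode-execute loop walks through them. All particular codes, cell numbers and layouts here are illustrative assumptions, chosen only to demonstrate the possibility.

```python
# A stored-program toy: instructions encoded as numbers in the store.

LOAD, ADD, MUL, STORE = 1, 2, 3, 4     # the four instruction types

def encode(op, address):
    return op * 1000 + address         # type digit concatenated with address

def decode(word):
    return word // 1000, word % 1000

# Variables a,b,c,d,t,x live in cells 0..5; the program in cells 100..106.
store = [0] * 200
a, b, c, d, t, x = range(6)
store[a], store[b], store[c], store[d] = 1, 2, 3, 4
store[100:107] = [encode(LOAD, a), encode(ADD, b), encode(STORE, t),
                  encode(LOAD, c), encode(ADD, d), encode(MUL, t),
                  encode(STORE, x)]

R, pc = 0, 100                         # fetch from successive addresses
while pc < 107:
    op, s = decode(store[pc])
    if op == LOAD:     R = store[s]
    elif op == ADD:    R = R + store[s]
    elif op == MUL:    R = store[s] * R
    elif op == STORE:  store[s] = R
    pc += 1

assert store[x] == (1 + 2) * (3 + 4)   # x := (a + b)*(c + d)
```

Note how the same store holds both the numbers being manipulated and the numbers describing the manipulations, which is exactly the dual role discussed below.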

     (Note. This is by no means the only way to represent programs inside the machine's store; in so-called "stack machines" other conventions are chosen. The above elaboration is only shown by way of example, demonstrating the possibility.)

     The dual role of the store —storage of instructions and storage of variables— implies another way in which a program can be expensive to execute: if the program text is very long, by that very fact the program text will make a heavy demand on storage capacity. If we have two alternative programs for the same job, one requiring 5.000 instructions to describe it and the other requiring 10.000 instructions to describe it, then —all other things being equal— the first alternative will be cheaper.

     The above concludes our bird's eye view of the so-called hardware machine, i.e. the physical piece of electronic equipment as it is delivered by the manufacturer: a machine that can accept and then execute programs written as long sequences of instructions from an instruction repertoire that is specific for this particular machine (and its copies). Until the late fifties programmers indeed produced their programs as long sequences of such instructions, but when machines became faster and when more came on the market, the shortcomings of this way of working became more and more apparent.

     Because the programmer expressed his program in terms of the instruction repertoire of that particular machine, he was forced to tailor his program to that machine. He needed a thorough and ready knowledge of all the details of the instruction repertoire of that machine —which for the more intricate machines was no mean task— and worse: once his program was written, it could only be executed by that particular machine. Exchange of programs between institutes equipped with different machines was impossible; furthermore, whenever an institute replaced its old machine by a new and different one, all programs for the old machine became obsolete. From that point of view it was clear that tailoring one's programs so closely to the idiosyncrasies of a specific piece of hardware was not a very wise investment of the intellectual energies of one's programmers.

     But even without the problem of transferring programs from one hardware machine to another, this way of programming, expressing programs as a monotonous stream of machine instructions, showed great drawbacks. One serious drawback was that this close contact between programmer and physical machine did not only enable the programmer to lard his program with all sorts of coding tricks, it actually invited him to do so. For many a programmer this temptation became irresistible; there even has been a time when it was generally believed that one of the most vital assets of a virtuoso programmer was that he be "puzzle-minded", and it was only slowly recognized that a clear and systematic mind was more essential! When "tricky programming" was en vogue, programming was not only very expensive —it took too much time— it also turned out to be too difficult to get a program correct. Looking back, the period of tricky programming now strikes us as a generation of programmers walking on a tight-rope, in full confidence because they were unaware of the abysmal depth beneath it! The modern competent programmer is more humble and avoids clever tricks like the plague.

     It was not only the preponderance of coding tricks that made programming "in machine code", as it is called nowadays, too difficult and too risky.

     Firstly, a program in machine code contains very little redundancy and as a result it is very sensitive to even small writing errors —errors of the level of "spelling mistakes" or "printing errors".

     Secondly, the programmer who thinks in terms of variables has to denote these variables in his program text by the addresses of the storage cells dedicated (by him) to hold their values. As a result the programmer has the burden of storage layout and all the clerical work implied by this.
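The clerical work described above can be made concrete with a small sketch of my own (it does not appear in the text): in machine code the programmer keeps, by hand, a table mapping each variable to the address of the storage cell dedicated to hold its value. A translator automates exactly this bookkeeping.

```python
def allocate(variables, first_free_cell=0):
    """Assign consecutive storage cells to the named variables,
    as the machine-code programmer had to do by hand."""
    layout = {}
    address = first_free_cell
    for name in variables:
        layout[name] = address
        address += 1
    return layout

# The programmer's private "symbol table" for three variables:
layout = allocate(["i", "sum", "limit"], first_free_cell=100)
# layout == {"i": 100, "sum": 101, "limit": 102}
```

Every reference to `sum` in the program text must then be written as a reference to cell 101; inserting one extra variable shifts all subsequent addresses, which is precisely the burden the text complains about.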

     Thirdly —and this is the major reason— machine code is an improper vehicle to represent "structure": it is just a single, monotonous sequence of machine instructions, incapable of expressing in a direct and useful form the structure of the algorithm. In what follows it will become abundantly clear that when we wish to compose programs in a reliable fashion, we can only do so by structuring the happenings we intend to evoke, and that we are in urgent need of a descriptive vehicle such that in the program text itself the structure of the happenings —i.e. of the computations— can be adequately reflected.

     The above shortcomings led to the design of so-called "(higher level) programming languages". A programming language can be regarded as the machine code for a fictitious, idealized machine. Whereas the old machine codes were tailored to the needs of the hardware —i.e. the equipment electronic engineers could make— programming languages are more tailored to the intellectual needs and conceptual difficulties of the programmer who has to design the computations.

     The problem now is that on the one hand we have the hardware machine A, that can be built but for which we don't like to program because it is too cumbersome, and on the other hand we have "dreamed up" the fictitious machine B, for which we would love to program but which the engineers cannot build. How do we bridge that gap?

     The gap is bridged by "software": given machine A, we can make, once and for all, a program (in the machine code for machine A) which prescribes to machine A the pattern of behaviour it should follow if it is to simulate machine B. Such a program is called "software for machine A". Given hardware machine A, loaded with software, we have a mechanism —partly "hard", partly "soft"— that is able to execute programs written for the fictitious machine B.

     Usually this combination of hard- and software processes such programs in two stages. In the first stage (the "translation stage") the program written in programming language B is subjected to a translation process. In this process a storage layout is decided, the necessary bookkeeping is carried out and an equivalent program —but now expressed in machine code A— is produced. In the second stage (the "execution stage") the output of the first one is interpreted by machine A as a program and the intended computation is evoked.
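The two stages can be illustrated with a deliberately tiny sketch, wholly my own: "machine B" here accepts postfix arithmetic expressions, and "machine A" is a simple stack machine. The translation stage produces machine-A instructions; the execution stage interprets them.

```python
def translate(postfix_tokens):
    """Translation stage: turn a machine-B program (postfix tokens)
    into an equivalent machine-A program (stack instructions)."""
    code = []
    for tok in postfix_tokens:
        if tok in ("+", "*"):
            code.append(("OP", tok))
        else:
            code.append(("PUSH", int(tok)))
    return code

def execute(code):
    """Execution stage: machine A interprets the translated program."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if arg == "+" else a * b)
    return stack.pop()

result = execute(translate(["2", "3", "+", "4", "*"]))  # (2 + 3) * 4 = 20
```

The essential point of the two-stage arrangement is that the translation is done once, while the translated program may be executed many times.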

     The standard software that goes with the machine shields the user from the idiosyncrasies of the specific machine; apart from that it invokes —behind the user's back, so to say— the standard ways of dealing with the tougher properties of the hardware, such as the possible parallelism (i.e. concurrence in time) of the computation proper and information transfers from and to peripheral devices and multilevel stores. Up till now we have described the hardware as if all storage cells were equally well accessible for the arithmetic unit. In practice this is seldom the case, two storage levels being quite common: primary store (usually ferrite cores) and secondary store (usually magnetic drums). The cells in primary store are the only ones that are directly and immediately accessible for the arithmetic unit; the information in secondary store (which in capacity is an order of magnitude larger than primary store) is not directly accessible for the arithmetic unit, but the possibility of bulk transfers between primary store and secondary store is available instead. In such machines, the software may move the information around between the two stores, all the time keeping track of where everything is to be found at any moment and trying to keep in primary store all "currently relevant" information. (This is called "the implementation of a virtual store".)
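The bookkeeping just described can be sketched in a few lines; the code is my own illustration, not the text's, and uses a "least recently used" policy as one plausible guess at which information is "currently relevant".

```python
from collections import OrderedDict

class VirtualStore:
    """A toy virtual store: a small primary store holds the most
    recently used blocks; the rest live in the large secondary store
    and are fetched on demand by bulk transfer."""

    def __init__(self, primary_capacity, secondary):
        self.capacity = primary_capacity
        self.primary = OrderedDict()   # block -> value, oldest first
        self.secondary = secondary     # the large, slow store
        self.transfers = 0             # bulk transfers performed

    def read(self, block):
        if block in self.primary:
            self.primary.move_to_end(block)        # mark as recently used
        else:
            self.transfers += 1                    # fetch from secondary
            if len(self.primary) >= self.capacity:
                old, value = self.primary.popitem(last=False)
                self.secondary[old] = value        # write back the evictee
            self.primary[block] = self.secondary[block]
        return self.primary[block]

store = VirtualStore(primary_capacity=2, secondary={0: "a", 1: "b", 2: "c"})
for block in (0, 1, 0, 2):
    store.read(block)
# three bulk transfers: blocks 0, 1 and 2 each fetched once;
# the second read of block 0 found it already in primary store
```

The software keeps track of where everything is at any moment, exactly as the text says; the programmer addresses his information as if it were all in one store.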

     We have mentioned the concept of the virtual store because it is related to an efficiency aspect over which the programmer has some control and in regard to which the programmer therefore has some responsibility. This is called "vagrancy". A program has a small degree of vagrancy whenever for larger periods of time accesses are confined to a small, dense subset of the total amount of information; in that case the hope is justified that during that period of time this dense subset will be kept in primary store and that therefore the computation can go on at full speed. In computations with high vagrancy, the probability that needed information resides only in secondary store is much larger and the transport facilities between the storage levels then tend to become the bottle-neck. Therefore, if possible, high vagrancy should be avoided.
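The difference can be made quantitative with a small sketch of my own (the term "vagrancy" is the text's; the code and the numbers are an illustration): we count how often an accessed block is absent from a ten-block primary store, managed with a least-recently-used policy, for a low-vagrancy and a high-vagrancy access pattern over one hundred blocks.

```python
from collections import OrderedDict

def misses(accesses, capacity):
    """Count accesses to blocks absent from the primary store,
    i.e. the bulk transfers the software would have to perform."""
    primary, faults = OrderedDict(), 0
    for block in accesses:
        if block in primary:
            primary.move_to_end(block)
        else:
            faults += 1
            if len(primary) >= capacity:
                primary.popitem(last=False)   # evict least recently used
            primary[block] = True
    return faults

low  = [b for _ in range(10) for b in range(10)]   # dense subset, revisited
high = [(7 * i) % 100 for i in range(100)]         # strides over all 100 blocks

# low vagrancy:  10 faults (only the first visit to each block misses)
# high vagrancy: 100 faults (every single access misses)
assert misses(low, 10) < misses(high, 10)
```

The low-vagrancy program pays for its ten blocks once and then runs at full speed out of primary store; the high-vagrancy program makes the transport facilities the bottle-neck, exactly as the text warns.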

Variables and relations between their values

(In progress)

Programs corresponding to recurrence relations

(In progress)

A first example of step-wise program composition

(In progress)

The shortest spanning subtree of a graph

(In progress)

The towers of Hanoi

(In progress)

The problem of eight queens

(In progress)

A rearranging routine

(In progress)

transcribed by Bert Put
revised Sat, 22 Oct 2005