But that was the time that the ball had travelled, and therefore the required distance is
And this was the end of their derivation, completed without any computation of the trajectory, the maximum height, etc. (It was typical that all professional mathematicians who produced the latter solution without hesitation had already discovered it in their schooldays!)
Compared with the first solution, the second one is so much simpler that it is worthwhile to try to analyse how this simplification could be achieved. The first solution uses the parabola that describes the trajectory, i.e. it gives the full connection between the horizontal and the vertical movement. The second solution, however, consists of two isolated arguments, a consideration about the horizontal movement and a consideration about the vertical movement, the two being connected by what we computer scientists would call "a thin interface", viz. the total travelling time. It is an example of a "modularized" argument; it is apparently characterized by concise modules connected via thin interfaces. (It is worth noticing that in our little cannon ball example the interface was expressed in terms of "time", a concept that hardly occurred in the problem statement! Refusal to introduce this concept or —what amounts to the same thing— its immediate elimination leads directly to the timeless aspects of the problem, to the shape of the trajectory, i.e. to the clumsy first solution.)
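Since the problem statement itself is not reproduced above, the modular argument can only be sketched for an assumed instance: a ball fired from level ground with horizontal speed vx and vertical speed vy (the names, the numbers and the level-ground assumption are all illustrative, not taken from the original problem).

```python
# A minimal sketch of the "modularized" cannon-ball argument, for an
# assumed instance: a ball fired from level ground with horizontal
# speed vx and vertical speed vy (names and numbers are illustrative).

G = 9.81  # gravitational acceleration in m/s^2


def flight_time(vy):
    # Vertical module: the ball rises and falls back in time 2*vy/G,
    # quite independently of the horizontal motion.
    return 2 * vy / G


def distance(vx, vy):
    # Horizontal module: constant speed vx for the whole flight.
    # The flight time is the "thin interface" between the two modules.
    return vx * flight_time(vy)


print(distance(30.0, 9.81))  # prints 60.0
```

Note that neither module mentions the parabola: the shape of the trajectory, which dominated the first solution, never enters the picture.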
I have used the computing term "modularization" intentionally. Too often in the past the justification of "modularization" of a large software project has been that it allowed a number of groups to work in parallel on different parts of the same project. And we also know from sad experience that unless the interfaces are indeed sufficiently thin, communication problems become prohibitively severe and that trying to solve these problems by improving the communication facilities between the groups is fighting the symptoms instead of the cause. Here, however, we see in all its clarity a quite different and more fundamental purpose of modularization, viz. to reduce the total amount of reasoning needed (irrespective of whether it is thereafter done by different groups in parallel, or by a single one in succession).
How we choose our "modules" is apparently so critical that we should try to say something about it in general. History can supply us with many illuminating examples; I shall just pick one of them, viz. Galileo's discovery of the laws of the falling body. Since Aristotle, who had observed feathers, leaves and pebbles, it had been accepted that heavy bodies fell faster than light ones. It was a first-rate discovery of Galileo's to separate the two influences, the pull of gravity and the resistance of the air, and to study the influence of gravity in the hypothetical absence of air resistance, regardless of the fact that, due to the "horror vacui", this absence was not only unrealistic but philosophically unimaginable to the late medieval mind. Galileo could also have chosen the other abstraction, viz. to study the influence of air resistance in the equally unimaginable absence of gravity's pull, but he did not, and the reason is quite obvious: he would not have known what to say about it!
The moral of the story is two-fold and, after some consideration, clear. Firstly, whenever an intellectual subuniverse, such as that of Galileo's falling bodies, is created, it is only justified to the extent that you can say something about that subuniverse, about its laws and their consequences. Secondly, the subuniverse must be concise and, therefore, we must have a "thin" interface. Without assumptions, laws or axioms, or whatever you call them, one can say nothing about the subuniverse, so something must be taken into account. But it should be the minimum with the maximum gain, for the more is dragged into the picture, the less concise the reasoning that relies on it: at some stage in the game some sort of Law of Diminishing Returns comes into action, and that is the moment to stop extending the module of our reasoning.
The relevance of this type of consideration to our own trade of programming I can easily illustrate with an example from my own experience, viz. the introduction of the concept of cooperating sequential processes as a way of coming to grips with the problem of operating system design. It is one thing to decide to try to parcel out what should happen under control of an operating system as what happens under control of a set of cooperating sequential processes. It is quite another thing to decide to regulate their cooperation without any assumptions about their speed ratios, not even between such processes for which those ratios are known well enough. But the decision is easily justified: only by refusing to take such analogue data into account could we restrict our subuniverse to one for which discrete reasoning was sufficient. The gain was two-fold: firstly, our more general considerations were applicable to a wider class of instances; secondly, the simplicity of our subuniverse made the study of phenomena such as deadlock and individual starvation feasible. Furthermore, it showed the great benefits that could be derived from the availability of primitives catering for the so-called "mutual exclusion".
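A minimal sketch of such a mutual exclusion primitive, here rendered with Python threads standing in for the cooperating sequential processes (the particular primitive, a lock, and all names are chosen for this illustration; they are not the historical semaphores):

```python
import threading

# Cooperating sequential "processes", modelled as threads, increment a
# shared counter.  Nothing is assumed about their relative speeds; the
# only discrete fact relied upon is that the lock admits at most one
# thread into the critical section at a time.

counter = 0
lock = threading.Lock()


def worker(steps):
    global counter
    for _ in range(steps):
        with lock:  # critical section: mutual exclusion
            counter += 1


threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000, regardless of how the threads were interleaved
```

The point of the discrete subuniverse is visible in the final assertion: the result is provable without knowing anything about the speeds at which the four threads actually ran.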
The idea of trying to reduce the demands on our reasoning powers is very close to a philosophical principle that has been known for many centuries as "Occam's Razor": if two competing hypotheses explain the same phenomenon, the one that embodies the weaker assumptions should be preferred. (And we must assume that William of Occam knew how to compare the relative weaknesses of competing hypotheses.)
Of one thing we should always be aware: our reasoning powers get strained by a case analysis in which we have to distinguish between a great number of different cases. When the number of cases to be distinguished builds up multiplicatively, they quickly get strained beyond endurance, and that is almost always a clear indication that we have separated our concerns insufficiently. (From this point of view it is not surprising that many who have given it thought feel that the technique of the so-called "Decision Tables" is self-defeating. In view of their tendency towards exponential growth, the advice to use them seems hardly justified.)
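The multiplicative build-up is easily made concrete: with n independent binary conditions, an exhaustive decision table needs one column per combination of truth values, i.e. 2 to the power n columns (the little enumeration below is merely this arithmetic spelled out, not a claim about any particular decision-table tool):

```python
from itertools import product

# With n independent binary conditions, an exhaustive decision table
# has one column per combination of truth values: 2**n columns.


def decision_table_columns(n_conditions):
    # Enumerate every combination of condition outcomes.
    return list(product([False, True], repeat=n_conditions))


for n in (1, 2, 5, 10):
    print(n, len(decision_table_columns(n)))  # 2, 4, 32, 1024
```

Ten conditions already demand over a thousand cases; the table grows faster than anyone's willingness to check it.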
In the meantime we have encountered a very different kind of thinking activity. Besides the reasoning that actually solves the problem, we have the —hopefully preliminary!— thinking that reduces the amount of reasoning needed. Let us call it "pondering", thereby indicating that, ideally, it is done before the actual reasoning starts. It is, however, intended to include as well the "supervision" during the reasoning, the ongoing "efficiency control".
The ability to "ponder" successfully is absolutely vital. When we encounter a "brilliant, elegant solution", it strikes us because the argument, in its austere simplicity, is so shatteringly convincing. And don't think that such a vein of gold was struck by pure luck: the man who found the conclusive argument was someone who knew how to ponder well.
Reasoning, we know, can be taught. Can we also teach "pondering"? We certainly can teach pondering, as long as we do not flatter ourselves with the hope that all our students will also learn it. But this should not deter us, for in that respect "pondering" is no different from the subject "reasoning": for the latter subject, too, it holds that some students will never learn it.
Among mathematicians I have encountered much skepticism about the teachability of pondering, but the more I see of the background of that skepticism, the less discouraging it becomes. Sometimes the skepticism is no more than the expression of the justified doubt whether anyone can learn how to ponder well, but, as just stated, that need not deter us: let us focus our attention on that part of the population that could learn how to ponder provided that they are taught how to ponder. I see no reason why that part of the population should be empty. Sometimes the skepticism is the result of the inability to form a mental picture of what such pondering lessons would look like, but that inability should not deter us either, for it is so easily explained. Today's mathematical culture suffers from a style of publication in which the results and the reasoning justifying them are published quite explicitly, but in which all the pondering is rigorously suppressed, as if the need to ponder were a vulgar infirmity about which we don't talk in civilized company. (And if the author has not already suppressed it, there is a fair chance that the editor will do so for him!) In earlier centuries —read Euler, who quite openly and apparently with great delight mentioned all his hunches with what is now surprising frankness!— we did not suffer from such a cramped style. And as a result of this fashion to regard explicit reference to one's pondering as "unscientific", many contemporary mathematicians even lack the vocabulary in which they could describe it, and this lack makes them quite unconscious of their own methodology. Their way of pondering being unknown to themselves, it becomes something "unknowable" and highly personal; it becomes regarded as a gift with which one must be "born". And here we find the third source of skepticism: the mere suggestion that pondering could be taught is experienced as a threat to their ego.
To those who have never tried to visualize what lectures in pondering could look like and, therefore, doubt their feasibility, I can only give one piece of advice. Suppose that you stop teaching results and solutions, but start to solve problems in the lecture room, and that you try to be as explicit as possible about your own pondering. What will happen? The need to get some sort of verbal grip on your own pondering will by sheer necessity present your ponderings as something in which, as time progresses, patterns become distinguishable. But once you have established a language in which to do your own pondering, in which to plan and to supervise your reasoning, you have presented a tool that your students could use as well, for the planning and supervision of their reasoning. In all probability it will have your own personal flavour —I hope that it will, I am tempted to add— but that by no means excludes that it will help some of your students: you would not be the first to found a school of thought! They will learn in your way never to embark unnoticed on an extensive new line of reasoning; they will learn in your way never to do so without a prior evaluation of the chance of simplification versus the chance of further complication, etc. And, eventually, when they grow up, they will learn to ponder in their own way, although the traces of your teaching will probably remain visible all through their lives.
In the above I have presented "pondering" as an optimization problem, viz. how to get the maximum result out of the minimum amount of reasoning. In doing so I followed a quite respectable tradition that presents certain forms of "laziness" as an indispensable mathematical virtue. This may surprise all those who know my profound distrust of the simplifications that underlie the "Homo Economicus" as a picture of Man, a picture in which the self-sacrificing person (if admitted as a human being at all!) is classified as an uninteresting fool. To correct a wrong impression I may have made, let me qualify my considerations. If you would like to think effectively (because you like doing so), then you had better try to become an expert at it! But if you would prefer to dream (because you like having beautiful dreams), then you had better become an expert at dreaming! (Remarks to which we can add that the two faculties do not exclude each other.)
There is a third, indispensable mental activity which, for lack of a better name, I indicate with "revolting". This is what has to take place when, no matter what we try, our maximized output remains nil, i.e. when we are stuck with an impossible task. The only effective reactions are either to change the problem until it becomes manageable, or to throw it away and to turn to something else. Obvious as this may seem, I must mention it explicitly, because experience has shown that these are difficult decisions to take, and that, therefore, we must teach the courage to revolt.
The decision is always difficult because it has the connotation of failure —usually without justification, for most tasks are impossible anyhow—; it is, however, particularly difficult if the impossibility of the task is politically unpalatable to the society of which one is a member. In serious cases the victim who has embarked on a popular but impossible task can hardly afford to admit even to himself its impossibility, and the resulting inner conflicts may form a serious threat to his integrity, his health and his sanity. For those who doubt their courage to revolt, it seems a safe precaution to stay away from popular projects, particularly if they work in an intolerant environment. For those who have learned to revolt well, the act is always one of liberation: it is an ability shared by most innovators of science.
13th February 1975
Burroughs Research Fellow