How do we contribute? (30.9.1986)

I know many scientists. Some of them are exceptionally mean, but the vast majority are honest and decent. They would like their designs, be they systems, methods or theories, to be used, and their writings to be read. And they wish this not only for their personal gratification, but also because they would honestly like to be helpful and to contribute. Their eagerness to see their work accepted at least somewhere is in general more than mere concern about their careers; it is a genuine reflection of their feeling that, if their work is ignored, they have failed to contribute. So far, so good. The danger, however, is to treat the degree of acceptance as a measure of your success, or, even worse, as a measure of the quality of your work.

The reason is that there is often a great discrepancy between what the world asks for and what it needs. The dilemma confronts the computing scientist in its crudest form: the magic of giant brains or machines that think is still so strong that what the world asks of them is Philosophers' Stones by the dozen and all other forms of transistorized snake oil. Some of our colleagues have been unable to resist the temptation: if the world asks for Panchromatic Concept Animation Based on Deep-Structured 3D Intuition Nets, they dutifully produce Panchromatic Concept Animation Based on Deep-Structured 3D Intuition Nets. I trust that I don't need to argue that embracing so fully the morals of the bestseller society is tantamount to academic dishonesty in one of its severest forms.

But there are much subtler forms of the dilemma. What the world needs most, but never asks for, is improvement of its ways of doing things. But you know how the world is: of course it is greatly in favour of improvement, but on closer inspection it insists on improvement without change. [Such is, in particular, the greatest ideal of all managers.] As a result, many authors are rather reluctant to deviate too visibly from established practice and current conventions. It is a problem all of us face. Do we continue to start numbering at 1 instead of at 0, merely because that is the way we used to do it? Do we stick to that habit even though we know that adherence to it —e.g. via Fortran— costs the computing community, by a conservative estimate, at least $100 million per year? Perhaps —I hope— it comes so "naturally" to you to start numbering at 0 that you no longer realize that this change of convention has been a hot issue; if so, let me inform you that, a quarter of a century ago, adopting the new convention was enough to make you an outcast in the mathematical world. The problem with this deviation was that it was so visible.
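[Editorial aside, not part of the original text: a minimal Python sketch of why the zero-based, half-open-interval convention composes so cleanly; the function partition and the numbers below are purely illustrative assumptions, not anything the essay specifies.]

    # Illustrative sketch (not from the essay): with zero-based numbering and
    # half-open ranges lo <= i < hi, a range's length is simply hi - lo, and
    # adjacent ranges abut without overlap or gaps.

    def partition(n, k):
        """Split the indices 0 <= i < n into k contiguous half-open ranges."""
        bounds = [j * n // k for j in range(k + 1)]
        return [(bounds[j], bounds[j + 1]) for j in range(k)]

    ranges = partition(10, 3)
    print(ranges)  # [(0, 3), (3, 6), (6, 10)]
    assert sum(hi - lo for lo, hi in ranges) == 10  # piece lengths sum to n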

Consequently, many scientists feel much more comfortable if they can have their impact incrementally, by steps so small that the public at large is not frightened by them. Where feasible, such an approach can be defended as an effective strategy. But in this discrete world, such continuous transitions are often impossible: you can try to buffer the shock with eloquence and metaphors, but no matter how hard you try, there is no gentle transition from classical mechanics to quantum mechanics, nor to the theory of relativity.

The moral is obvious. If, for the sake of acceptance, you stay closer to the status quo than you would like to, your cautiousness may render you ineffective, in the sense that you will not bring about the more radical improvement you had in mind. The successful scientist fits into the future he is aiming for; in the present he is living in, he is by definition a misfit.

Austin, 26 September 1986


transcribed by Martijn van der Veen
revised Thu, 27 May 2010