Verified Software: Frequently Asked Questions


by Tony Hoare and Jayadev Misra


DRAFT: 19 September 2005





Project Management


Q1. Why do you think that the time is ripe for a concerted effort of

this nature?


A: The challenge of constructing a routinely usable Program Verifier was first proposed in 1969 by Jim King in his Doctoral Dissertation. Since that time, the power, capacity and sheer number of installed computers have each increased more than a thousand-fold; and a thousand-fold advance has been made in certain aspects of mechanised theorem proving (e.g., SAT solving). There have been useful conceptual advances in programming theory, and there is already useful experience of constructing manual proofs for critical programs. The cost of programming error to the world economy has escalated similarly.


If all the advances in software and hardware technology can be brought to bear in a coordinated project for the delivery of a program verifier, the prospect of success is much greater than it has ever been in the past. Even though the risks are still high, this is the kind of challenge that must be attempted again, because the potential reward far outweighs the risk. Even in the event of failure, some beneficial spin-off is likely; and another attempt will surely be made later.


Q2. It has been over 35 years since the publication of the classic paper, An Axiomatic Basis for Computer Programming, by Tony Hoare. Yet very few schools in the U.S.A. teach formal methods, and most practitioners avoid any formal work. What do you expect it will take to effect a paradigm shift?


A: The existence (or even the near prospect) of an automatic program verifier may well be the trigger for the necessary paradigm shift. It may break the vicious circle of students who do not wish to learn a technology that is not used in industry, and of an industry that is reluctant to apply a technology in which its recruits are not trained.


The crucial motivation for education in the principles of verification is that it is driven by scientific ideals: it addresses questions fundamental to all branches of engineering science. For any product of engineering, the qualified engineer must be able to answer questions about what it does and how it works. Scientists should be able to answer deeper questions, explaining why it works, and how we know that the answers to the previous questions are correct. Even if the fundamental achievements of pure science seem remote from practical application, good engineers and entrepreneurs eventually find ways of exploiting established scientific theories and tools.


Q3. How will you educate managers, software designers and programmers for this paradigm shift?


A: There is good evidence that intellectually alive programmers can be educated in the principles and practice of specification and verification. For example, the Department of Continuing Education at Oxford runs a part-time professional development Master's degree, which attracts many students from the British software industry. The United Nations University in Macau has concentrated its research and education on formal methods of software engineering. This experience is available to be exploited and generalised in the development of tools and texts for general education.


Q4: How would you train verification engineers? Should they have domain knowledge, say in banking or the medical profession? What kind of CS background should they have? How long should they be trained?

A: The patterns for professional education in other branches of Engineering should be followed as closely as possible. A good general engineering education is a mixture of mathematics, general science and general engineering. It is illustrated by more detailed studies and exercises in a particular domain of application. But good engineers will always look forward to mastering or at least understanding many other specialisations, as the need arises during their professional career.


Q5: Can anything be learnt by doing a time-and-motion study of programmers: what do programmers do, and how do they achieve results, however imperfectly?


A: There are many experienced programmers who can articulate their intuitions and their practices in program design and implementation. They provide a valuable resource for scientific and engineering research. Theorists will take up the challenge of generalising from the experience of individual programmers, to make it as widely applicable as possible; and tool-builders have the challenge of providing widely usable formal support for recognised best practice.


Q6: People work with different time-scales; some work on problems of immediate interest, and some others on longer-term projects. How will you find a team of people to work on a very long-term project? And, can researchers contribute to the goals of this project without being formal members of it?


A: We expect that the recognised leaders of the project will make plans to split the long-term project into many shorter-term subprojects. Each subproject will provide some identifiable theoretical result, scientific tool or experimental application. Each subproject will have identifiable beneficiaries, and will provide a basis on which further development can progress. There will be considerable movement of experienced staff into and out of these subprojects. Many researchers trained on this longer-term and more idealistic project will move into industry, and apply their experience on projects delivering more immediate and more commercial benefit.

Any researcher will become a formal member of the project quite informally, simply by announcing his/her commitment to one of the subtasks that have been recognised as an integral part of the project.



Q7: Personnel turnover: the success of any project depends on the quality of the people working on it. How can you maintain consistent quality over a span of 15 years?


A: Science is by nature cumulative and transferable; it does not depend on continuity of employment of individual scientists. By concentrating on scientific methods and results which are cumulative, the project can tolerate considerable turnover of staff, and will actually benefit from the increased variety of experience brought in by newcomers, and from their lack of preconceptions about what is too difficult to tackle.


But many (probably most) scientists working in the area will not make a fifteen-year commitment to it. They will prefer to maintain their freedom to work more independently on novel lines of personal research, and only publish their results in the normal way when their research has been successful. Their research could still be highly relevant to the goals of the project, and may well lead to unexpected breakthroughs, which excite the enthusiasm of tool-builders and experimentalists to exploit their results as soon as possible.


Q8: You expect this project to be an international collaboration. How do you expect to coordinate the activities of geographically diverse groups?


A: The primary medium for discussion and coordination will be the internet. This will be supplemented by frequent meetings of representatives of each of the teams that have committed themselves to the project. These could be organised as workshops at the standard series of specialist international conferences.


In order to plan coordination among differing specialisations, a new mechanism may be needed. If a group of representative leaders of the project can be identified, it may help for them to hold more frequent (say six-monthly) meetings, perhaps under the aegis of an IFIP Working Group.


Q9: How much coordination do you need, among the tool builders and the basic researchers, to meet the milestones?


A: The main purpose of setting up a Grand Challenge project will be to stimulate such coordination. The Challenge will motivate theorists to develop their theories to remedy perceived deficiencies in existing toolsets, and carry through their development to the stage of possible implementation in the context of established languages and tools. Tool-builders will be motivated to scan the literature for theories that could offer significant advantages to the users of their tools.


It will be for the researchers themselves to agree how much it is worthwhile to coordinate at any given time. No target can be imposed from the top. In all cases, the prospect of giving better support to users must be the primary motive.


Q10: Do you envision cooperation or competition as the main driver of the project?


A: Both. There will be cooperation on the long-term goals, and short-term competition between those who propose different means of achieving them. The shortest-term and most measurable goals, like speed of decision procedures, may be the subject of regular formal competitions; the rules must be carefully formulated (and occasionally changed) to ensure that winning entrants are directly contributing to the long-term goal. In the end, it is by attracting the most successful users that a tool will win the greatest esteem.


Q11: Where can the results of this project be published? Are there journals which will report on excruciatingly detailed work which may be quite specific to this project?


A: Successful verification of each new kind of application or module will be reported in a learned journal, in the usual way of a scientific experiment. The article will report, in convincing scientific prose, the background, presuppositions, caveats, conclusions and suggested generalisations of each experiment.


The excruciating details of the verification will be publicly available in the Repository, whose guardians will be responsible for ensuring that the experiment can be repeated. This will mean that the performance of the original tools can be compared with those that are developed later.


Q12: What is the role of industry? In the initial phase when you are developing the verifier? Later, when it is operational?


A: In the initial phase, industry can:

1. release staff to participate in the project, possibly part time;

2. commission academic consultancy and research to apply formal methods to projects of industrial importance, and to use prototype tools on a speculative basis;

3. release codes and specifications for inclusion in the repository, to provide experiments for further improvement of the tools.

Later, when the verifier is operational, commercial suppliers of programming tools should track the progress of the project, and incorporate parts of the technology that appear sufficiently mature into their existing program analysers and other tools.


Q13: Do you envision a top-down management structure which will create the major guidelines and take the important decisions? Will it decide what goes into the repository, or the guidelines on when it is acceptable to put something in the repository?


A: The project should be driven by the experimental researchers, who have undertaken the task of building up a repository of a million lines of verified code. It is the experimentalists who will essentially decide the eventual content of the repository, by choosing which new programs, and of which new kinds, they wish to verify.


For overall coordination and advance planning of the project, it may be helpful to identify a group of broadly representative researchers who have the confidence of the research community engaged in the project. They would meet at regular intervals to review progress and to suggest plans for further development.


Q14: Can you deliver the results earlier if, for example, the resources were doubled?


A: The hypothesis that underlies the Grand Challenge project is that progress in the design and use of tool-sets can be hastened in two ways: by integration of tools exploiting different technologies for different parts of the task, and by continuous interaction between tool-builders and reasonably large communities using them for ever more difficult tasks. Such interactions drive an evolutionary process of development, which may take ten or twenty years to reach fruition. Evolution takes time, and it is not easy to speed up.


In the early years, when the project is building up its culture, and its structure for collaboration, inappropriate amounts of funding could even be deleterious.

Q15: What are the most likely ways in which the project could fail?


A: Our promotion of this project as a Grand Challenge is based on two major assumptions, either of which may turn out to be unjustified.


We have assumed that hardware performance, proof technology and programming theory are on the point of crossing a threshold, so that the goal of automatic verification, which has defeated all previous attempts, is now within our reach.


We have also assumed that by concerted endeavour the research community can hasten the achievement of that goal. There are at least three ways in which this second assumption may fail.


Firstly, we hope that the power of a tool (or of a coherent toolset) will be increased in the long run if it is assembled from powerful components, in a way that permits continued, more or less independent, improvement of each component. This may involve serious loss of performance; and worse, the combination may require too wide a range of expertise for most potential users. Certainly, this kind of integration will require a degree of cooperation that has not previously been common in Computer Science research. The validity of the assumption should be tested by pilot experiments before the main project starts.


Secondly, we hope that the usability of tools can be significantly increased by close interaction between tool-building teams and a substantial community of knowledgeable scientists attempting to apply the tools to an ever-extending corpus of challenge codes. It may be hard to recruit, train and motivate the users, especially at the beginning of the project, when the tools are most feeble. The tool-building teams require continuity of employment, and in return they must accept a service role in the project.


Thirdly, we hope that the development and extension of the toolset to deal with early experiments will make the tools more capable of meeting the challenge of later, more substantial experiments. We hope also that the results of individual experiments will be cumulative. For example, inference of assertions and other low-level documentation for a legacy program should contribute to any subsequent, more substantial reverse engineering of the same program. We may be disappointed in such hopes if it turns out that every correct program is correct for a different reason.


Finally, even if the project succeeds fully in its scientific goals, many of its scientific advances may take a long time to find widespread application. There are many possible reasons for this; but it is not worth speculating on them so far in advance.


Q16: Who gets credit if the project is successful? And whom should we blame for failure?


A: Everybody engaged in the project must get the credit for success. Particular praise must go to the experimentalists. It is they who labour hard to apply early prototypes of tools to programs of increasing complexity, when they know that later development of better tools will make their early work so much easier that their efforts will seem redundant.


In the event of failure, there should be no apportionment of blame. It is known from the start that the project carries many risks. Even a failed project can have many beneficial spin-offs.


Technical Aspects


Q17: If the programmers were better trained, and they had access to appropriate tools, would not the verifier be redundant?


A: On the contrary: the verifier will surely be a central component in any future tool-set contributing to programmer productivity. Such checkers of performance and safety are now standard, even compulsory, in all other branches of engineering.


Q18: Your concern seems to be to detect errors caused by unintentional human failings. What about intentional ones such as viruses and Trojan horses? Can the verifier detect their presence?


A: Yes, in principle. The specifications, the codes and the proofs will be subject to check by independent verifiers. The specifications must continue to be subject to human scrutiny, to ensure that they are both comprehensible and appropriate. The main continuing risk will be a failure of diligence in human checking. There is still the danger that the specifications themselves may be deliberately designed to mislead.


Q19: How much of a sure bet is this project, given that we can count only on modest improvements in machine speed, and not on any extraordinary development such as fast SAT solving?


A: The project is not a sure bet, and for many reasons. At present, machine speeds and memory capacity do not appear to be the main impediments to the development of the science; though higher speeds will always be welcome in its eventual application.


Q20: Which features of programming languages will help, and which ones will hinder, your efforts?


A: One of the goals of this project is to answer these questions, and support the answers by convincing scientific evidence. The answers may be used by programming teams to select design patterns and coding disciplines to avoid or severely restrict features that involve more intricate proofs. In the long run, programming languages may evolve to remove such features, or control them by confining their use to recognisable and verifiable design patterns.


Initially, the most problematic features are likely to be twofold: general aliasing, which prevents easy determination of the extent of changes made by a program module; and general inheritance, which makes it difficult to determine which method body is called by which method call.
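
To make these two difficulties concrete, here is a small illustrative sketch in Python (the names are invented for the example): aliasing defeats the obvious 'frame' argument about what a module leaves unchanged, and inheritance obscures which method body a call will execute.

```python
def zero_first(xs):
    """Sets xs[0] to 0. A verifier must also establish a frame
    condition: that nothing other than xs is changed."""
    xs[0] = 0

a = [1, 2, 3]
b = a              # aliasing: b and a name the same list
zero_first(b)
print(a[0])        # prints 0, not 1: the update through b reached a

class Account:
    def withdraw(self, amount):
        print("plain withdrawal of", amount)

class OverdraftAccount(Account):
    def withdraw(self, amount):   # overrides the inherited body
        print("withdrawal with overdraft check of", amount)

def pay(account, amount):
    # inheritance: which body runs depends on account's dynamic class,
    # so the verifier must reason about all possible overrides
    account.withdraw(amount)

pay(Account(), 10)
pay(OverdraftAccount(), 10)
```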


Q21: How can a careful designer of software exploit his design decisions to guide the verifier? Is there a design for verification?


A: Yes. As in other branches of engineering, the requirement for verification is a significant factor in the organisation of a project, and can legitimately influence many of the detailed design decisions. One goal of this project will be to apply verification technology to early design documents. An even greater challenge will be to maintain the validity of the design documentation as the product evolves to meet changing requirements.
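
As a minimal sketch of what design for verification might look like at the level of code (the example and its names are ours), the designer records the intended contract alongside the interface, so that the verifier has something precise to check:

```python
def transfer(balance_from, balance_to, amount):
    # precondition, fixed as a design decision
    assert amount >= 0 and balance_from >= amount
    new_from = balance_from - amount
    new_to = balance_to + amount
    # postcondition: no money is created or destroyed
    assert new_from + new_to == balance_from + balance_to
    return new_from, new_to

print(transfer(100, 50, 30))   # (70, 80)
```

In a verified development, such assertions would be proved once and for all, rather than checked at run time as they are here.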


Q22: How extensive is the set of properties you plan to verify? Full functional verification? Reactive properties? Space and time utilisation?


A: The aim of the project is to support reasoning about any aspect of correctness or dependability or performance that can be precisely and clearly stated for input to a checker. In many cases, the specifications will be considerably weaker, and hopefully easier to prove, than total functional correctness.
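
Sorting gives a standard illustration of the difference (the sketch below, with invented helper names, uses runtime assertions as a stand-in for proof): the weak property that the output is ordered is much easier to establish than full functional correctness, which also requires the output to be a permutation of the input.

```python
from collections import Counter

def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def sort(xs):
    result = sorted(xs)   # stand-in for the implementation under study

    # weak specification: the output is ordered
    assert is_sorted(result)

    # full functional correctness adds: output is a permutation of input
    assert Counter(result) == Counter(xs)

    return result

print(sort([3, 1, 2]))   # [1, 2, 3]
```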


Q23: Will your first large verified system be a stand-alone system or will it communicate with other systems?


A: The repository should contain systems of both kinds (and many other kinds as well). It is for the experimental community to choose which challenges to tackle first. Which of them succeeds first will be a matter for competition.


Q24: Given the interests of both of you in concurrency, is concurrency going to play a major role?


A: It is the interests of the general research community that will determine the range of issues selected for research at each stage. The challenge of concurrency is likely to feature strongly, for two reasons: the development of multi-core hardware architectures makes it more relevant; and there is likely to be a need for a range of solutions, involving design patterns of lesser or greater complexity and generality.
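
A small hedged illustration of why concurrency resists testing (the example is ours): the unsynchronised counter below may lose updates, but whether any particular test run exposes the loss depends on the scheduler, whereas a verifier must account for every possible interleaving.

```python
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1   # read-modify-write: not atomic across threads

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The result is not guaranteed to be 400000; whether a given run loses
# updates depends on the interpreter and the scheduler, which is exactly
# why testing alone is an unreliable guide here.
print(counter)
```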


Q25: Which of the following kinds of systems would you expect to verify?

A word processor
A web server
An air-traffic controller
An operating-system kernel
Medical instruments

A: All of the above, and more. For the larger systems, a selection of modules may be adequate to test applicability of the technology. It is an important goal of the project to establish whether a single verification tool or toolset will be adequate for as wide a range of applications as possible.


Q26: Since this is a long-term project, will not the tools be applicable only to the systems of today?


A: Yes, there is likely to be a lag between the first development of any new kind of system and the first application of verification tools to it. The process of catching up will be continuous, even after ‘completion’ of the main project.


But first, let us see if we can catch up, and develop a verifier for systems developed fifteen years ago.


Q27: We have had extraordinary improvements in SAT solving. What will be the role of decision procedures in this project? Does the project rely on inventions of new decision procedures? Are fast decision procedures key to the success of this project?


A: Fast decision procedures (or rather, constraint solvers) will play an essential role in the project. They will provide the reasoning engines for all the other kinds of verification technology, including heuristic proof search, resolution, and model checking. New and faster decision procedures will be very welcome, and they must also work well together with each other and with the tools which exploit them.
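
For readers unfamiliar with the underlying decision problem, here is a deliberately naive satisfiability check by exhaustive search (an illustration of ours, not a project artefact; real solvers decide the same question on formulas with millions of clauses by far cleverer means, such as clause learning):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Clauses are lists of non-zero integers: k means variable k,
    -k means its negation; e.g. (x1 or not x2) is [1, -2]."""
    for bits in product([False, True], repeat=n_vars):
        def holds(literal):
            value = bits[abs(literal) - 1]
            return value if literal > 0 else not value
        if all(any(holds(l) for l in clause) for clause in clauses):
            return bits        # a satisfying assignment
    return None                # unsatisfiable

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(brute_force_sat([[1, -2], [2, 3], [-1, -3]], 3))
```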


But there are many other technologies whose development is essential to the project, and all of them deserve the title of ‘key’.


Q28: Will the verifier eliminate the need for manual verification, much as the development of calculators has eliminated manual long division?


A: That is the impression we would like to give. But we expect that any significant new application of computers will require the development of new application-specific theories, whose proofs will first be constructed manually by experts in proof technology. These proofs will be checked by a proof tool, and the theorems will then be available for routine application by programmers.
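
As a (necessarily anachronistic) illustration of this division of labour, a specialist might prove an application-specific lemma once in a proof assistant such as Lean, after which the mechanically checked theorem is available for routine reuse; the toy lemma below is our own invention.

```lean
-- A toy application-specific lemma, proved once and checked by the tool;
-- programmers may then apply it without redoing the proof.
theorem balance_bound (deposit balance : Nat) :
    balance ≤ balance + deposit :=
  Nat.le_add_right balance deposit
```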


Q29: Is it likely that some of your verification components will be eventually included in compilers to warn the programmer about certain possible behaviours, or even to optimise code generation?


A: Yes, such warnings may progressively be included in program analysers that are currently in widespread use. In fact, the generation of more comprehensive and more accurate warnings may be the first of the industrial applications of the research of this project.


If an optimiser is going to exploit information conveyed in program assertions, it is clearly important that the assertions should be verified: otherwise, debugging will be a nightmare.
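
A small sketch of the point (the function and its names are invented for illustration): an assertion that has been verified, rather than merely assumed, can soundly license the removal of runtime checks.

```python
def sum_prefix(xs, n):
    # If this assertion is verified rather than merely tested, a compiler
    # may soundly elide the bounds check on every access xs[i] below.
    assert 0 <= n <= len(xs)
    total = 0
    for i in range(n):
        total += xs[i]
    return total

print(sum_prefix([5, 10, 20], 2))   # 15
```

If the assertion were assumed but false, the 'optimised' program would misbehave far from the site of the error, which is why unverified assertions make such a poor basis for optimisation.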


Q30: How interactive will the verifier be? How much of its internal structure should one know to drive it effectively?


A: For most programmers, interaction should probably be at the level of test cases rather than proof. We hope that interactive proving can be confined to specialists, whose task is to develop libraries of re-usable theories and implementations.


Q31: Can you truly base your verifier design on an existing language? Are they not too ephemeral? Should you not base the tools on meta-level constructs so that they can be adapted for specific languages easily?


A: Languages in widespread use are not (perhaps unfortunately) ephemeral. The scientific problems that they pose are worth solving, both for their practical and for their scientific significance. If the scientific theory can model and in principle solve the problems posed by existing languages, the scientist’s advice to the engineer to avoid the problems will be much more credible.


It is expected that many of the tools will be applicable to more than one language, and even to systems incorporating modules written in more than one language, for example a procedural language and a functional language. Many systems of the present day are put together from modules written in scripting languages, spreadsheet languages, and database query languages. It will be a significant long-term challenge to extend verification technology to these.


Q32: Can you achieve both modularity and performance in tool building and integration?


A: It is a basic hypothesis of the Grand Challenge project that different technologies can be combined into a single tool that is conveniently usable as a scientific instrument, while each technology remains subject to independent improvement by specialists in its field. Such a style of evolution is now standard in the software industry; and one day the tradition of collaboration will spread to more ambitious kinds of research in Computer Science as well.


Q33: Is this the best repository structure? What about code which depends on libraries? On seldom-used features of the language?


A: The repository will contain some programs that use libraries, and others that exploit seldom-used language features. Perhaps no-one will be willing to tackle the more difficult challenges until the tool-sets are more adequate than they are today. Then it will be a matter of competition which programs are the first to be verified.


Q34: How can you be sure that the compiler implements the semantics with respect to which you have verified the code?


A: This is a lively topic of research, and a very challenging one. In future, more programs will be constructed by generation from specification, and correctness of generators will be increasingly an issue, because they do not have the advantage of the universal application that increases confidence in the soundness of general-purpose language compilers.


One method of increasing confidence in generated programs is to make the generator produce specifications and proofs at the same time as code, so that correctness can be verified by an independent tool.
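
The same idea appears in miniature in 'certifying' algorithms, which emit a certificate alongside their result so that a small independent checker can validate it; the sketch below (extended GCD with a Bézout certificate, names ours) is only an analogy for the compiler-scale version.

```python
def extended_gcd(a, b):
    """Returns (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def check_certificate(a, b, g, x, y):
    # The independent checker: tiny, and therefore easy to trust.
    return a % g == 0 and b % g == 0 and a * x + b * y == g

g, x, y = extended_gcd(252, 105)
assert check_certificate(252, 105, g, x, y)
print(g, x, y)   # 21 -2 5
```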


For a general-purpose language, perhaps critical parts like the optimisers and the code generators will one day be generated from semantic specifications. It will be interesting to see whether complete compilers will be checkable by a general-purpose program verifier. Meanwhile, we will continue to rely on testing both the compilers and the object programs, on the reasonable engineering hypothesis that rare errors will be discovered, and will not mask each other.


Q35: Would it be more effective to build large systems by combining components according to safe composition rules, where the role of the machine is to check the safety of the composition?


A: Yes, that will often be the most effective way of exploiting verification technology. In fact, we expect that the repository will contain examples of the verification of useful design patterns, in which the strategy for combination of modules has been verified in advance.
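
A minimal sketch of such a composition rule (invented names, with runtime assertions standing in for proof): once the rule itself is trusted, combining independently verified stages requires checking only that their interface invariant matches.

```python
def compose(stage1, stage2, interface_invariant):
    """Safe composition: stage1's postcondition must establish the
    invariant that stage2's precondition assumes."""
    def combined(x):
        y = stage1(x)
        assert interface_invariant(y)   # the verified seam between stages
        return stage2(y)
    return combined

# Example: split, then clean, with 'non-empty list' as the interface.
pipeline = compose(lambda s: s.split(","),
                   lambda items: [item.strip() for item in items],
                   lambda items: len(items) > 0)
print(pipeline("a, b, c"))   # ['a', 'b', 'c']
```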


Q36: Can we trust the verification efforts by an unknown team any more than the code from an unknown source?


A: Yes, definitely. Trust in a programming team can be justified if reading (or sampling) their specifications shows them to be clear, comprehensive and accurate, and if their proofs are subject to an independent verification. This should be complemented by the normal methods of professional inspection and interview, such as those conducted by financial auditors.


Q37: How long do we have to wait to apply the verifier in practice? At what point would I use the verifier in preference to a debugger, even though it is incomplete?


A: We hope that a primitive verifier, usable by experts as a scientific instrument, will be available early in the project. The speed of transfer of the technology into widely used toolsets is not a matter on which scientists are qualified to pronounce; it depends as much on chance evolution of the market as on planned evolution of the technology. The project will collect convincing evidence that verification is possible, and that it can be done at reasonable cost. The technology transfer will take place some time after it is established that this cost is less than the short-term costs of testing code, the medium-term costs of delay in delivery, and the long-term costs of residual error.

These considerations will be much more persuasive than pursuit of an ideal of correctness.


Q38: Which will be more cost-effective, debugging or verifying, if I want to be reasonably confident about a piece of software without necessarily getting a full guarantee? What is an acceptable price for verification?


A: We suspect that the good engineer on a large project will always use a verifier in conjunction with a debugger. Good engineering judgement will be needed to decide which aspects of a programming project are most worthy of the detailed specification necessary for the verifier, and which can safely be left to testing.


Here is an encouraging prediction: after the project is complete, the engineer will seldom regret the use of the verifier, and will sometimes resolve to use it more substantially in the next project.


Q39: Is this challenge grand enough? Would it possibly suck money away from other worthy projects where advances in basic sciences can be made?


A: The initiation of a good scientific challenge should not be dependent on any significant increase of funding. We could recommend this project as just one of the more efficient ways of spending whatever money is allocated for the advancement of Computer Science. Because Science is cumulative, progress can be made, albeit more slowly, even with the same absolute amounts of funding, provided the funds are distributed wisely.


Because of the risks, we think it would be dangerous if this project absorbed more than a few percent of the total world-wide expenditure on research in Computer Science. Verification only solves one of the problems of effective computer application, and continued research support is essential in all the other areas relevant to the delivery of dependable and useful software.