Formal Methods in
Computer-Aided Design
30 October – 2 November, 2018
University of Texas, Austin, Texas

Tutorials and Keynotes


Tutorials

The tutorials will take place on 30 October, 2018.

Nina Narodytska, VMware Research

Formal Verification of Deep Neural Networks

Abstract: Deep neural networks are among the most successful artificial intelligence technologies, making an impact in a variety of practical applications. However, many concerns have been raised about the ‘magical’ power of these networks. It is disturbing that we lack a real understanding of the decision-making process behind this technology. A natural question, therefore, is whether we can trust decisions that neural networks make. One way to address this issue is to define properties that we want a neural network to satisfy. Verifying whether a neural network fulfills these properties sheds light on the properties of the function that it represents. In this tutorial, we overview several approaches to verifying neural network properties. The first set of methods encodes neural networks as Integer Linear Programs or Satisfiability Modulo Theories formulas and develops domain-specific algorithms to solve the resulting verification problems. The second approach treats the neural network as a non-linear function and uses global optimization techniques for verification. The third line of work uses abstract interpretation to certify neural networks. Finally, we consider a special class of neural networks, Binarized Neural Networks, that can be represented and analyzed using Boolean satisfiability. We discuss how we can take advantage of the structure of neural networks in the search procedure.
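
For concreteness, here is a minimal sketch of the first approach, encoding a tiny network as a satisfiability query with the z3 Python API (the z3-solver package). The two-input ReLU network, its weights, and the output bound are invented purely for illustration and are not taken from the tutorial.

    # A toy instance of the SMT-encoding approach: a 2-input, 2-hidden-unit ReLU
    # network with made-up weights, checked against a made-up output bound.
    # Requires: pip install z3-solver
    from z3 import Reals, Real, If, Solver, unsat

    x1, x2 = Reals('x1 x2')      # network inputs
    y = Real('y')                # network output

    def relu(e):
        # ReLU is expressed exactly as an if-then-else term.
        return If(e > 0, e, 0)

    # Hidden layer (illustrative weights, not from a trained model).
    h1 = relu(2 * x1 - 1 * x2 + 0.5)
    h2 = relu(-1 * x1 + 3 * x2 - 0.2)

    s = Solver()
    s.add(0 <= x1, x1 <= 1, 0 <= x2, x2 <= 1)   # input domain
    s.add(y == h1 + h2)                          # output layer

    # Property: y <= 4 on the whole input box. Assert its negation and search
    # for a counterexample; 'unsat' means the property holds.
    s.add(y > 4)
    if s.check() == unsat:
        print('property holds on the input box')
    else:
        print('counterexample:', s.model())

The if-then-else structure of each ReLU is also what the ILP encodings and domain-specific procedures exploit, essentially by managing the case split on whether each unit is active.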

Oded Padon, Stanford University

Deductive Verification of Distributed Protocols in First-Order Logic

Abstract: Formal verification of infinite-state systems, and of distributed systems in particular, is a long-standing research goal. In the deductive verification approach, the programmer provides inductive invariants and pre/post specifications of procedures, reducing the verification problem to checking the validity of logical verification conditions. This check is often performed by automated theorem provers and SMT solvers, substantially increasing productivity in the verification of complex systems. However, the unpredictability of automated provers presents a major hurdle to the usability of these tools. The problem is particularly acute for provers that handle undecidable logics, for example, first-order logic with quantifiers and theories such as arithmetic. The resulting extreme sensitivity to minor changes has a strong negative impact on the convergence of the overall proof effort.
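
As a (hypothetical) micro-example of this reduction, the sketch below uses the z3 Python API to check one verification condition: that the candidate invariant x >= 0 is preserved by the update x := x + 1. The VC is valid exactly when its negation is unsatisfiable.

    # Checking a toy verification condition with an SMT solver (z3-solver).
    # Program step: x := x + 1.  Candidate invariant: x >= 0.
    from z3 import Ints, Implies, Not, Solver, unsat

    x, x_next = Ints('x x_next')

    # Preservation VC:  Inv(x) /\ step(x, x_next)  =>  Inv(x_next)
    vc = Implies(x >= 0,
                 Implies(x_next == x + 1, x_next >= 0))

    s = Solver()
    s.add(Not(vc))            # the VC is valid iff its negation is unsatisfiable
    print('VC valid' if s.check() == unsat else s.model())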

On the other hand, there is a long history of work on decidable logics and decidable fragments of logics. Generally speaking, decision procedures for these logics perform more predictably and fail more transparently than provers for undecidable logics. In particular, in the case of a false proof goal, they can usually provide a concrete counter-model to help diagnose the problem. However, decidable logics pose severe limitations on expressiveness, and it is not immediately clear that such logics can be applied to proving complex protocols or systems.
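
Continuing the toy example above, a false proof goal is answered with a concrete counter-model rather than an unpredictable failure: the candidate invariant x <= 10 is not preserved by x := x + 1, and the solver exhibits the offending pre/post states (the numbers are again purely illustrative).

    # A false proof goal: x <= 10 is not inductive for the step x := x + 1.
    # The decision procedure returns a concrete counter-model.
    from z3 import Ints, And, Implies, Not, Solver, sat

    x, x_next = Ints('x x_next')
    vc = Implies(And(x <= 10, x_next == x + 1), x_next <= 10)

    s = Solver()
    s.add(Not(vc))
    if s.check() == sat:
        m = s.model()
        print('not inductive, offending step:', m[x], '->', m[x_next])   # e.g. 10 -> 11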

In this tutorial, we will explore a practical approach to using first-order logic, and a decidable fragment thereof, to prove complex distributed protocols and systems. The approach, implemented in the Ivy verification tool, applies abstraction and modular reasoning techniques to mitigate the expressiveness limitations of decidable fragments. The high-level strategy involves the following ideas (a small illustrative sketch follows the list):

  • Abstracting infinite-state systems using first-order logic.
  • Carefully controlling quantifier-alternations to ensure decidability.
  • Using modular reasoning principles to decompose a proof into decidable lemmas.
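
As a small illustration of the first two points (written with the z3 Python API, not in Ivy's own language): the protocol state is modeled as a relation over an uninterpreted sort of nodes, the mutual-exclusion invariant uses only universal quantifiers, and the inductiveness check for an invented 'acquire' action stays in a decidable, theory-free fragment.

    # First-order abstraction of a toy lock protocol, checked with z3-solver.
    # No arithmetic and no function symbols over Node: the query stays decidable.
    from z3 import (DeclareSort, Function, BoolSort, Consts, ForAll,
                    Implies, And, Not, Solver, unsat)

    Node = DeclareSort('Node')                     # uninterpreted sort of nodes
    holds = Function('holds', Node, BoolSort())    # pre-state:  holds(n)
    holds2 = Function('holds2', Node, BoolSort())  # post-state after 'acquire'

    n, m, a = Consts('n m a', Node)

    def mutex(h):
        # Universally quantified invariant: at most one node holds the lock.
        return ForAll([n, m], Implies(And(h(n), h(m)), n == m))

    # Action acquire(a): enabled only if nobody holds the lock;
    # afterwards exactly node 'a' holds it.
    acquire = And(ForAll([n], Not(holds(n))),
                  ForAll([n], holds2(n) == (n == a)))

    # Inductiveness check:  Inv /\ acquire  =>  Inv'
    s = Solver()
    s.add(mutex(holds), acquire, Not(mutex(holds2)))
    if s.check() == unsat:
        print('mutual exclusion is inductive for acquire')
    else:
        print('counter-model:', s.model())

Keeping the invariant purely universal and avoiding function symbols over the node sort is what the quantifier-alternation discipline buys: after Skolemizing the negated check, the query falls into the decidable EPR fragment.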

Experience to date indicates that this first-order approach is surprisingly powerful: using decidable logics, it is possible to prove safety and liveness properties of complex protocols (e.g., Paxos variants) and to produce verified low-level implementations. Moreover, the effort required to structure the proof in this way is more than repaid by the greater reliability of proof automation, which significantly reduces the overall verification effort. Better matching human reasoning capabilities to the capabilities of automated provers results in a more stable and predictable formal development process.

This tutorial is based on joint work with Jochen Hoenicke, Neil Immerman, Aleksandr Karbyshev, Giuliano Losa, Kenneth L. McMillan, Aurojit Panda, Andreas Podelski, Mooly Sagiv, Sharon Shoham, Marcelo Taube, James R. Wilcox, and Doug Woos.

Marijn Heule, The University of Texas at Austin

Formal Verification of Unsatisfiability Results

Abstract: Satisfiability (SAT) solvers are used for determining the correctness of hardware and software systems. It is therefore crucial that these solvers justify their claims by providing proofs that can be independently verified. The same holds for various other applications that use SAT solvers. Recently, long-standing mathematical problems have been solved using SAT, including the Erdős Discrepancy Problem, the Pythagorean Triples Problem, and Schur Number Five. Especially in such cases, proofs are at the center of attention, and without them, the result of a solver is almost worthless.

What the mathematical problems and the industrial applications have in common is that the proofs are often of considerable size: in the case of Schur Number Five, about 2 petabytes in a highly compressed format. To demonstrate how to increase trust in the correctness of multi-CPU-year computations, we validated the proof of Schur Number Five. We certified the proof using the ACL2 theorem-proving system. Given the enormous size of the proof, we argue that any result produced by SAT solvers can now be validated using highly trustworthy systems with reasonable overhead.

The tutorial also covers how to use tools that validate proofs of unsatisfiability. Apart from verifying SAT-solving results, these tools support producing unsatisfiable cores and optimized proofs. Unsatisfiable cores can be useful in various debugging settings, while optimized proofs allow for fast validation by a formally verified tool and an independent party.
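
To make the clausal-proof format concrete, here is a self-contained toy checker (plain Python, no SAT solver required) for DRUP-style proofs containing only clause additions: each proof clause must follow from the clauses accumulated so far by unit propagation (the RUP criterion), and the proof must end with the empty clause. The formula and the two-step refutation are invented for illustration; production checkers such as DRAT-trim and the formally verified checkers mentioned above additionally handle RAT steps, clause deletions, and vastly larger proofs.

    # A toy checker for clausal (DRUP-style) proofs of unsatisfiability.

    def propagates_to_conflict(clauses, assignment):
        """Unit-propagate 'assignment' (a set of literals); return True on conflict."""
        assignment = set(assignment)
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                unassigned = [l for l in clause if -l not in assignment]
                if not unassigned:
                    return True                      # clause falsified: conflict
                if len(unassigned) == 1 and unassigned[0] not in assignment:
                    assignment.add(unassigned[0])    # unit clause: extend assignment
                    changed = True
        return False

    def check_drup(formula, proof):
        """Each proof clause must be RUP w.r.t. the accumulated clause set,
        and the proof must end with the empty clause."""
        clauses = list(formula)
        for lemma in proof:
            # Assume the negation of the lemma; propagation must yield a conflict.
            if not propagates_to_conflict(clauses, [-l for l in lemma]):
                return False
            clauses.append(lemma)
        return bool(proof) and proof[-1] == []

    # Tiny unsatisfiable formula (DIMACS-style literals) and a made-up refutation.
    formula = [[1, 2], [1, -2], [-1, 2], [-1, -2]]
    proof = [[1], []]          # derive the unit clause 1, then the empty clause
    print(check_drup(formula, proof))   # True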


Keynotes

Grant Olney Passmore, Aesthetic Integration

Formal Verification of Financial Algorithms with Imandra

Abstract: Many deep issues plaguing today’s financial markets are symptoms of a fundamental problem: The complexity of algorithms underlying modern finance has significantly outpaced the power of traditional tools used to design and regulate them. At Aesthetic Integration, we’ve pioneered the use of formal verification for analysing the safety and fairness of financial algorithms. With a focus on financial infrastructure (e.g., the matching logics of exchanges and dark pools), we’ll describe the landscape, and illustrate our Imandra formal verification system on a number of real-world examples. We’ll sketch many open problems and future directions along the way.

Grigore Rosu, University of Illinois at Urbana-Champaign

Formal Design, Implementation and Verification of Blockchain Languages

Abstract: Many of the recent cryptocurrency bugs and exploits are due to flaws or weaknesses in the underlying blockchain programming languages or virtual machines. The usual post-mortem approach to formal language semantics and verification, where the language is first implemented and used in production for many years before a need for formal semantics and verification tools naturally arises, simply does not work anymore. New blockchain languages or virtual machines are proposed at an alarming rate, followed by new versions every few weeks, together with programs (or smart contracts) in these languages that are responsible for financial transactions of potentially significant value. Formal analysis and verification tools are therefore needed immediately for such languages and virtual machines. We present recent academic and commercial results in developing blockchain languages and virtual machines that come directly equipped with formal analysis and verification tools. The main idea is to generate all of these tools automatically, correct by construction, from a formal specification. We demonstrate the feasibility of the proposed approach by applying it to two blockchains, Ethereum and Cardano.