FMCAD 2006
Formal Methods in Computer Aided Design
San Jose, CA, USA
November 12 - 16
Paper Abstracts

Model checking with SAT

Edmund M. Clarke

Symbolic model checking with binary decision diagrams was a major
breakthrough and led to the adoption of model checking for hardware
verification by semiconductor companies. However, because binary
decision diagrams are a canonical form for Boolean functions and
variables must occur in the same order on each path from the root to a
leaf, the number of state variables that can be handled by this means
is limited.

The development of powerful programs for determining if a Boolean
function is satisfiable promises to have an equally dramatic effect on
the success of model checking.  Bounded model checking for linear
temporal logic was the first technique to exploit the power of SAT
solvers. While this approach is useful for finding counterexamples, it
cannot in general be used to show that a formula is true in a model. A
number of methods have been proposed for making bounded model checking
complete or extending traditional fix-point algorithms for model
checking to use SAT solvers instead of binary decision diagrams. These
techniques include bounding the diameter of the model, induction,
efficient image computation using cube enlargement or circuit
co-factoring, and Craig interpolation.
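
The unrolling idea behind bounded model checking can be illustrated with a toy sketch: encode the initial states, k copies of the transition relation, and the bad-state predicate, then search for a path that satisfies all three. Here a brute-force search over a tiny hand-written counter model stands in for a real SAT solver; the model and all names are illustrative, not taken from the tutorial.

```python
from itertools import product

# Toy bounded model checking of a 3-bit counter. A real BMC tool would
# hand the unrolled formula init(s0) & trans(s0,s1) & ... & bad(sk) to a
# SAT solver; here we brute-force all assignments instead.
def init(s):
    return s == 0

def trans(s, t):
    return t == (s + 1) % 8

def bad(s):
    return s == 5

def bmc(k):
    """Search all length-k paths for a counterexample (brute force)."""
    for path in product(range(8), repeat=k + 1):
        if (init(path[0])
                and all(trans(path[i], path[i + 1]) for i in range(k))
                and bad(path[-1])):
            return path  # counterexample trace
    return None
```

At depths below 5 no counterexample exists; at depth 5 the trace 0,1,2,3,4,5 is found.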

My tutorial will describe how bounded model checking for linear
temporal logic works, including new linear translations into SAT;
explain the complete algorithms for SAT-based model checking; and
discuss the advantages and disadvantages of each approach.

Theorem Proving in Verification: The ACL2 System

J. Strother Moore 

I plan to mention syntax, axioms, rules of inference, the basic idea
of symbol manipulation to transform formulas, mechanization, the
cooperative roles of heuristics and decision procedures, and
user-interface issues.  I will show how you use a formal logic to
formalize different kinds of computing systems including a netlist
language, a simple compiler, a JVM.  I will do a variety of simple
proofs, first just in the functional setting (e.g., append and
reverse) and then about computational models, like a multiplier
netlist and properties of JVM bytecode programs.  Along the way I will
mention, but not dwell on, some ACL2 industrial applications, just so
the audience knows this is real.  The applications will include the AMD Athlon
FPU, a microcoded operating system kernel, and the JVM.
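
The append/reverse exercises mentioned above can be sketched outside ACL2 as well. The following Python sketch states the classic lemma rev(app(a, b)) = app(rev(b), rev(a)) and merely tests it on concrete lists; ACL2 would prove it by induction.

```python
# Functional append and reverse, mirroring the classic ACL2 exercises.
def app(xs, ys):
    return ys if not xs else [xs[0]] + app(xs[1:], ys)

def rev(xs):
    return [] if not xs else app(rev(xs[1:]), [xs[0]])

# The classic lemma, tested on concrete inputs (proved by induction in ACL2):
#   (rev (app a b)) = (app (rev b) (rev a))
def lemma_holds(a, b):
    return rev(app(a, b)) == app(rev(b), rev(a))
```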

SMT Solvers: Theory & Practice

Leonardo de Moura 

Decision procedures for checking satisfiability of logical formulas
are crucial for many verification applications. Of particular recent
interest are solvers for Satisfiability Modulo Theories (SMT). SMT
solvers decide logical satisfiability (or dually, validity) of
formulas in classical multi-sorted first-order logic with equality,
with respect to a background theory. The success of SMT for
verification applications is largely due to the suitability of
supported background theories for expressing verification
conditions. These theories include: the empty theory, which gives rise
to the so-called logic of equality and uninterpreted functions (EUF);
real or integer arithmetic; and theories of program or hardware
structures such as bitvectors and arrays.
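
Reasoning in the EUF theory mentioned above rests on congruence closure. A minimal union-find sketch, restricted to ground unary terms and with all names illustrative, might look like:

```python
# A minimal congruence-closure sketch for EUF (ground terms only).
# Terms: strings for constants, tuples ("f", arg) for unary applications.
parent = {}

def find(t):
    """Union-find root with path halving."""
    parent.setdefault(t, t)
    while parent[t] != t:
        parent[t] = parent[parent[t]]
        t = parent[t]
    return t

def union(a, b):
    parent[find(a)] = find(b)

def congruence_close(terms):
    """Propagate a = b  ==>  f(a) = f(b) until a fixpoint is reached."""
    changed = True
    while changed:
        changed = False
        apps = [t for t in terms if isinstance(t, tuple)]
        for i, s in enumerate(apps):
            for t in apps[i + 1:]:
                if (s[0] == t[0] and find(s[1]) == find(t[1])
                        and find(s) != find(t)):
                    union(s, t)
                    changed = True
```

Asserting a = b then makes f(a) and f(b), and in turn g(f(a)) and g(f(b)), equal.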

In this talk we will discuss how modern SMT solvers work, the theory
behind them, and the SAT extensions required for SMT. I will also
focus on some specific (and important) theories, such as linear
arithmetic and bitvectors, and describe how SMT solvers are used in
industry, and at Microsoft in particular.

Integrating FV Into Main-Stream Verification: The IBM Experience

Jason Baumgartner

In this talk we discuss the evolving role of the (semi-)formal
SixthSense project at IBM. IBM has a long history of successful FV
deployment, which has secured the role of a dedicated model checking
team on virtually all IBM hardware products. However, that team is
invariably too small to impact the majority of the design logic,
resulting in FV being treated as a prioritized fire-fighting technique
peripheral to the main-stream design and (simulation-based)
verification methodology. To bring formal methods to the masses, the
SixthSense philosophy has been that of: (1) adopting reusable
specification across FV and simulation frameworks, and (2) scaling
automated formal algorithms to the size of testbenches common in
simulation frameworks.  This paradigm has resulted in substantially
wider-scale usage of SixthSense, often by users who are not versed in
formal methods, and on problems whose size and complexity far exceed
those thought amenable to automated formal analysis.

We discuss the impact this need for scalability has placed on
SixthSense technologies and implementation. In particular, we discuss
the core technologies implemented within the SixthSense system, from
semi-formal and proof algorithms to the novel transformation-based
verification framework interconnecting these with a variety of
reduction and abstraction engines. This overall framework leverages
the synergy between these algorithms to bring large testbenches within
the capacity of the core automated solver engines. In many cases, this
scalability enables bugs to be identified semi-formally, and even
proofs to be completed, without a need for any manual decomposition or
attributing of large-scale testbenches.

We lastly discuss the impact that this technical solution has had on
design and verification methodologies within IBM. We discuss numerous
applications of this technology, including designer-level verification
and design optimization, sequential equivalence checking,
reference-model style verification, and hardware-failure recreation
efforts. These include cases where the entire SixthSense testbench is
reused with main-stream simulation and emulation, and cases where
large-scale simulation-based verification tasks have been outright
replaced by SixthSense.

What can the SAT experience teach us about abstraction? 

Ken McMillan 

Enabling Large-Scale Pervasive Logic Verification through
Multi-Algorithmic Formal Reasoning

Jason Baumgartner, Tilman Gloekler, Devi Shanmugam, Rick Seigler, Gary
Van Huben, Hari Mony, Paul Roessler, and Barinjato Ramanandray

Pervasive Logic is a broad term applied to the variety of logic
present in hardware designs, yet not a part of their primary
functionality. Examples of pervasive logic include initialization and
self-test logic. Because pervasive logic is intertwined with the
functionality of chips, the verification of such logic tends to require
very deep sequential analysis of very large slices of the design. For
this reason, pervasive logic verification has hitherto been a task for
which formal algorithms were not considered applicable.

In this paper, we discuss several pervasive logic verification tasks
for which we have found the proper combination of algorithms to enable
formal analysis. We describe the nature of these verification tasks,
and the testbenches used in the verification process. We furthermore
discuss the types of algorithms needed to solve these verification
tasks, and the type of tuning we performed on these algorithms to
enable this analysis.

Post-reboot Equivalence and Compositional Verification of Hardware 

Zurab Khasidashvili, Marcelo Skaba, Daher Kaiss, and Ziyad Hanna

We introduce a finer concept of a Hardware Machine, where the set of
post-reboot operation states is explicitly a part of the FSM
definition. We formalize an ad-hoc flow of combinational equivalence
verification of hardware, as it has been performed in industry over
the years. We define the concept of post-reboot bisimulation, which
better suits Hardware Machines, and show that the right form of
combinational equivalence is in fact a form of post-reboot
bisimulation. Further, we show that alignability equivalence is a form
of post-reboot bisimulation, too, and the latter is a refinement of
alignability in the context of compositional hardware verification. We
find that post-reboot bisimulation has important advantages over
alignability also in the wider context of formal hardware
verification, where equivalence verification is combined with formal
property verification and with validation of a reboot sequence. As a
result, we propose a more comprehensive, compositional, and fully
formal framework for hardware verification. Our results are extendible
to other forms of labeled transition systems and adaptable to other
forms of bisimulation used to model and verify complex hardware and
software systems.

Synchronous Elastic Networks

Sava Krstic, Jordi Cortadella, Mike Kishinevsky, and John O'Leary

We formally define--at the stream transformer level--a class of
synchronous circuits that tolerate any variability in the latency of
their environment. We study behavioral properties of networks of such
circuits and prove fundamental compositionality results. The paper
contributes to bridging the gap between the theory of
latency-insensitive systems and the correct implementation of
efficient control structures for them.

Finite Instantiations for Integer Difference Logic

Hyondeuk Kim and Fabio Somenzi

The last few years have seen the advent of a new breed of decision
procedures for various fragments of first-order logic based on
propositional abstraction. A lazy satisfiability checker for a given
fragment of first-order logic invokes a theory-specific decision
procedure (a theory solver) on (partial) satisfying assignments for the
abstraction. If the assignment is found to be consistent in the given
theory, then a model for the original formula has been found.
Otherwise, a refinement of the propositional abstraction is extracted
from the proof of inconsistency and the search is resumed. We
describe a theory solver for integer difference logic that is effective
when the formula to be decided contains equality and disequality
(negated equality) constraints so that the decision problem partakes
of the nature of the pigeonhole problem. We propose a reduction of the
problem to propositional satisfiability by computing bounds on a
sufficient subset of solutions, and present experimental evidence for
the efficiency of this approach.
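
As background, a plain conjunction of integer difference constraints (without the equality/disequality machinery this paper adds) can be checked with negative-cycle detection. A Bellman-Ford sketch, with all names illustrative:

```python
# Difference-logic consistency: each constraint x - y <= c becomes an
# edge y --c--> x; the conjunction is satisfiable iff the constraint
# graph has no negative cycle (detected with Bellman-Ford).
def diff_consistent(variables, constraints):
    """constraints: list of (x, y, c) meaning x - y <= c."""
    dist = {v: 0 for v in variables}  # implicit zero-weight source
    for i in range(len(variables)):
        for (x, y, c) in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
                if i == len(variables) - 1:
                    return False  # relaxation on the n-th pass: negative cycle
    return True
```

For example, x - y <= 1 together with y - x <= -2 forms a cycle of weight -1 and is unsatisfiable.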

Tracking MUSes and Strict Inconsistent Covers

Eric Gregoire, Bertrand Mazure, and Cedric Piette

In this paper, a new heuristic-based approach is introduced to
extract minimally unsatisfiable subformulas (in short, MUSes) of SAT
instances. It is shown that it often outperforms current competing
methods. Then, the focus is on inconsistent covers, which represent
sets of MUSes that cover enough independent sources of infeasibility
for the instance to regain satisfiability if they were repaired. As
the number of MUSes can be exponential with respect to the size of
the instance, it is shown that such a concept is often a viable
trade-off since it does not require us to compute all MUSes but
provides us with enough mutually independent infeasibility causes
that need to be addressed in order to restore satisfiability.

Ario: A Linear Integer Arithmetic Logic Solver

Hossein Sheini and Karem Sakallah 

In this paper we describe our solver for systems
of linear integer arithmetic logic. Such systems are commonly
used in design verification applications and are classified under
Satisfiability Modulo Theories (SMT) problems. Recognizing the
fact that in many such applications the majority of atoms are
equalities or integer unit-two-variable inequalities (UTVPIs), we
present a framework that integrates specialized theory solvers
for those atoms within a SAT solver. The unique feature of our
strategy is its simultaneous adoption of both a congruence-closure
equality solver and a transitive-closure UTVPI solver to find a
satisfiable set of those atoms. A full-scale ILP solver is then
utilized to check the consistency of all integer constraints within
the solution. Other notable features of our solver include its
combined deduction and learning schemes that collectively make
our solver distinct among similar solvers.

Understanding the Dynamic Behaviour of Modern DPLL SAT Solvers
through Visual Analysis

Cameron Brien and Sharad Malik 

Despite the many recent improvements in the speed and robustness of
DPLL-based SAT solvers, we still lack a thorough understanding of the
working mechanisms and dynamic behaviour of these solvers at
run-time. In this paper, we present TIGERDISP, a tool designed to
allow researchers to visualize the dynamic behaviour of modern DPLL
solvers in terms of time-dependent metrics such as decision depth,
implications and learned conflict clauses. It is our belief that
inferences about dynamic behaviour can be drawn more easily by
visual analysis than by purely aggregate post-execution metrics such as
total number of decisions/implications/conflicts. These inferences can
then be validated through detailed quantitative analysis on larger
sets of data. To this end, we have used TIGERDISP with the HAIFASAT
and MINISAT solvers and have generated a few specific inferences about
their relatively efficient and inefficient solving runs. We have then
tested one of these inferences through quantitative analysis on a
larger data set and have presented our findings in this paper. An
important application of TIGERDISP would be in the development of a
solver that employs adaptive algorithms. This is an area that has
intrigued researchers in the past, but has not seen significant
results for lack of a clear understanding as to what constitutes good
progress during the run of a SAT solver. With better knowledge of
dynamic behaviour, it is conceivable that an adaptive solver could be
designed such that it switches between several competitive heuristics
at runtime based on a quantitative analysis of its own dynamic behaviour.

Over-Approximating Boolean Programs with Unbounded Thread Creation

Byron Cook, Daniel Kroening and Natasha Sharygina 

This paper describes a symbolic algorithm for overapproximating
reachability in Boolean programs with unbounded thread creation. The
fix-point is detected by projecting the state of the threads to the
globally visible parts, which are finite. Our algorithm models
recursion by over-approximating the call stack that contains the
return locations of recursive function calls, as reachability is
undecidable in this case. The algorithm may obtain spurious
counterexamples, which are removed iteratively by means of an
abstraction refinement loop. Experiments show that the symbolic
algorithm for unbounded thread creation scales to large abstract models.
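
The fixpoint detection referred to above builds on the standard least-fixpoint reachability loop. A generic explicit-state sketch (not the paper's symbolic, projection-based algorithm):

```python
# A generic least-fixpoint reachability loop: start from the initial
# states and apply the transition relation until no new states appear.
def reachable(init, step):
    """init: set of states; step: state -> iterable of successor states."""
    seen = set(init)
    frontier = list(init)
    while frontier:
        s = frontier.pop()
        for t in step(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen  # the least fixpoint: all reachable states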

An Improved Distance Heuristic Function for Directed Software 
Model Checking

Neha Rungta and Eric Mercer

State exploration in directed software model checking is guided
using a heuristic function to move states near
errors to the front of the search queue. Distance heuristic
functions rank states based on the number of transitions needed
to move the current program state into an error location. Lack
of calling context information causes the heuristic function to
underestimate the true distance to the error; however, inlining
functions at call sites in the control flow graph to capture calling
context leads to an exponential growth in the computation. This
paper presents a new algorithm that implicitly inlines functions
at call sites to compute distance data with unbounded calling
context that is polynomial in the number of nodes in the control
flow graph. The new algorithm propagates distance data through
call sites during a depth-first traversal of the program. We show
in a series of benchmark examples that the new heuristic function
with unbounded distance data is more efficient than the same
heuristic function that inlines functions at their call sites up to
a certain depth.
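
A plain distance heuristic, ignoring the calling-context refinement this paper contributes, can be computed by a backward breadth-first search from the error location. A sketch, with the graph format invented for illustration:

```python
from collections import deque

# Distance heuristic on a control-flow graph: rank each node by the
# number of transitions needed to reach the error location, computed
# by a backward BFS from the error node.
def distance_to_error(cfg, error):
    """cfg: node -> list of successor nodes."""
    preds = {}
    for n, succs in cfg.items():
        for s in succs:
            preds.setdefault(s, []).append(n)
    dist = {error: 0}
    queue = deque([error])
    while queue:
        n = queue.popleft()
        for p in preds.get(n, []):
            if p not in dist:
                dist[p] = dist[n] + 1
                queue.append(p)
    return dist  # nodes missing from the map cannot reach the error
```

A directed search would then expand the state whose control location has the smallest distance first.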

Liveness and Boundedness of Synchronous Data Flow Graphs

Amir Hossein Ghamarian, Marc Geilen, Twan Basten, Bart Theelen,
Mohammad Reza Mousavi, and Sander Stuijk

Synchronous Data Flow Graphs (SDFGs) have proven to be suitable for
specifying and analyzing streaming applications that run on single- or
multi-processor platforms. Streaming applications essentially
continue their execution indefinitely. Therefore, one of the key
properties of an SDFG is liveness, i.e., whether all parts of the SDFG
can run infinitely often. Another elementary requirement is whether an
implementation of an SDFG is feasible using a limited amount of
memory. In this paper, we study two interpretations of this property,
called boundedness and strict boundedness, that were either already
introduced in the SDFG literature or studied for other models. A
third and new definition is introduced, namely self-timed boundedness,
which is very important to SDFGs, because self-timed execution
results in the maximal throughput of an SDFG. Necessary and
sufficient conditions for liveness in combination with all variants of
boundedness are given, as well as algorithms for checking those
conditions. As a by-product, we obtain an algorithm to compute the
maximal achievable throughput of an SDFG that relaxes the requirement
of strong connectedness in earlier work on throughput analysis.
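
SDFG analyses of this kind start from the repetition vector: the smallest positive integer solution of the balance equations, one per edge. A sketch assuming a connected, consistent graph, with the edge format invented for illustration:

```python
from fractions import Fraction
from math import lcm

# Repetition vector of an SDF graph. Each edge (a, p, b, c) means actor
# a produces p tokens per firing and actor b consumes c per firing, so
# the balance equation is r[a] * p == r[b] * c. We propagate rational
# firing rates over edges, then scale to the smallest integers.
def repetition_vector(actors, edges):
    rate = {actors[0]: Fraction(1)}
    work = [actors[0]]
    while work:
        a = work.pop()
        for (x, p, y, c) in edges:
            if x == a and y not in rate:
                rate[y] = rate[a] * p / c
                work.append(y)
            elif y == a and x not in rate:
                rate[x] = rate[a] * c / p
                work.append(x)
    scale = lcm(*(f.denominator for f in rate.values()))
    return {a: int(f * scale) for a, f in rate.items()}
```

For an edge on which A produces 2 tokens and B consumes 3, the vector is A: 3, B: 2 (6 tokens produced and consumed per iteration).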

Model Checking Data-Dependent Real-Time Properties of the European 
Train Control System

Johannes Faber and Roland Meyer 

The behavior of embedded hardware and software systems is determined
by at least three dimensions: control flow, data aspects, and
real-time requirements. To specify the different dimensions of a
system with the best-suited techniques, the formal language CSP-OZ-DC
[1] integrates Communicating Sequential Processes (CSP) [2], Object-Z
(OZ) [3], and Duration Calculus (DC) [4] into a declarative
formalism equipped with a unified and compositional semantics. In this
paper, we provide evidence that CSP-OZ-DC is a convenient language for
modeling systems of industrial relevance. To this end, we examine the
emergency message handling in the European Train Control System (ETCS)
[5] as a case study with uninterpreted constants and infinite data
domains. We automatically verify that our model ensures real-time
safety properties, which crucially depend on the system's data.
Related work on ETCS case studies focuses on stochastic examinations
of the communication reliability [6], [7]. The components' data
aspects are neglected, though.

The design of a distributed model checking algorithm for Spin

Gerard Holzmann 

For as long as many of us can remember, PC speeds and memory
sizes have been increasing, following a roughly exponential trend
that is known as Moore's curve. With it, the speed and scope of
formal methods tools has also steadily improved. It is as if the
formal methods community has been getting a free ride for many
years. But this trend has ended. Instead of continuing to
increase clock speeds, the chip makers are now focusing on the
development of multi-core systems: doubling the number of cores
on a single chip at regular intervals, while keeping clock speeds
constant. Designers of logic model checkers must now switch
strategies as well if they want to continue the free ride.  We
report on early work on the design and implementation of a
multi-core version of the Spin model checker. In this talk I will
explain the rationale for the chosen approach, and present the
first experimental results.

Reducing Verification Complexity of a Multicore 
Coherence Protocol Using Assume/Guarantee

Xiaofang Chen, Yu Yang, Ganesh Gopalakrishnan, and Ching-Tsun Chou

We illustrate how to employ metacircular assume/guarantee reasoning
to reduce the verification complexity of finite instances of protocols
for safety, using nothing more than an explicit state model
checker. The formal underpinnings of our method are based on
establishing a simulation relation between the given protocol M, and
several overapproximations thereof, M1, ..., Mk. Each Mi simulates
M and represents one "view" of it. The Mi depend on each other both
to define the abstractions and to justify them. We show that in
case of our hierarchical coherence protocol, its designer could easily
construct each of the Mi in a counterexample guided manner. This
approach is practical, considerably reduces the verification
complexity, and has been successfully applied to a complex
hierarchical multicore cache coherence protocol which could not be
verified through traditional model checking.
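
For small explicit-state systems, the simulation relations underpinning such methods can be computed by a greatest-fixpoint pruning loop. A sketch (not the paper's metacircular construction, and with all names illustrative):

```python
# Greatest-fixpoint computation of the largest simulation relation:
# start from all state pairs with matching labels and remove any pair
# where some concrete move cannot be matched by an abstract move.
def simulates(conc, abst, label_c, label_a):
    """conc/abst: state -> successor list; label_*: state -> observable."""
    rel = {(s, t) for s in conc for t in abst
           if label_c(s) == label_a(t)}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            for s2 in conc[s]:
                if not any((s2, t2) in rel for t2 in abst[t]):
                    rel.discard((s, t))
                    changed = True
                    break
    return rel  # (s, t) in the result means t simulates s
```

If (s0, t0) survives for the initial states, the abstract system is a sound overapproximation of the concrete one.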

Advanced Unbounded Model Checking Based on AIGs, BDD Sweeping, and
Quantifier Scheduling

Florian Pigorsch, Christoph Scholl, and Stefan Disch 

In this paper we present a complete method for verifying properties
expressed in the temporal logic CTL. In contrast to the majority of
verification methods presented in recent years, we support unbounded
model checking based on symbolic representations of characteristic
functions. Among others, our method is based on an advanced
And-Inverter Graph (AIG) implementation, quantifier scheduling, and
BDD sweeping. For several examples, our method outperforms BDD based
symbolic model checking by orders of magnitude. However, our approach
is also able to produce competitive results for cases where BDDs are
known to perform well.
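
An And-Inverter Graph with structural hashing, one ingredient the abstract names, can be sketched compactly. The literal encoding and simplification rules below follow common AIG-package conventions (constant, idempotence, and complement rules), not necessarily the authors' implementation:

```python
# A tiny And-Inverter Graph with structural hashing: every node is an
# AND of two (possibly inverted) fan-ins, and structurally identical
# AND nodes are shared. Literals are integers: 2*node for a plain
# reference, 2*node + 1 for an inverted one; literal 0 is constant false.
class AIG:
    def __init__(self):
        self.nodes = [None]      # node 0: constant false
        self.strash = {}         # structural hash table: (lit0, lit1) -> lit

    def new_input(self):
        self.nodes.append(None)
        return 2 * (len(self.nodes) - 1)

    def AND(self, a, b):
        if a > b:
            a, b = b, a          # normalize operand order
        if a == 0:
            return 0             # false AND x = false
        if a == 1:
            return b             # true AND x = x
        if a == b:
            return a             # x AND x = x
        if a ^ b == 1:
            return 0             # x AND not x = false
        key = (a, b)
        if key not in self.strash:
            self.nodes.append(key)
            self.strash[key] = 2 * (len(self.nodes) - 1)
        return self.strash[key]

    def NOT(self, a):
        return a ^ 1
```

Structural hashing makes semantically trivial equivalences (such as commuted operands) collapse to the same literal without any search.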

Symmetry reduction for STE model checking

Ashish Darbari 

In spite of the tremendous success of STE model checking, one cannot
verify circuits with an arbitrarily large number of state-holding
elements. In this paper we present a methodology of symmetry reduction
for STE model checking, using a novel set of STE inference rules. For
symmetric circuit models these rules provide a very effective
reduction strategy. When used as tactics, these rules help decompose a given
STE property to a set containing several classes of equivalent STE
properties. A representative from each equivalence class is picked and
verified using an STE simulator, and the correctness of the entire
class of assertions is deduced, using a theorem that we
provide. Finally, the inference rules are used in the forward direction
to deduce the overall statement of correctness. Our experiments on
verifying arbitrarily large CAMs and circuits with multiple CAMs show
that these can be verified using a fixed, small number of BDD variables.

Thorough Checking Revisited

Shiva Nejati, Mihaela Gheorghiu, and Marsha Chechik

Recent years have seen a proliferation of 3-valued models for
capturing abstractions of systems, since these enable verifying both
universal and existential properties. Reasoning about such systems is
either inexpensive and imprecise (compositional checking), or
expensive and precise (thorough checking). In this paper, we prove
that thorough and compositional checks for temporal formulas in their
disjunctive forms coincide, which leads to an effective procedure for
thorough checking of a variety of abstract models and the entire

Life in the Jungle: Simulation vs. Verification

Robert Jones 

Ecological Niche or Survival Gear? - Improving an
industrial simulation methodology with Formal methods

Wolfgang Roesner

Giving the Gorilla some Brains: How can Formal complement Simulation?

Andreas Kuehlmann 

Despite much progress in the area of formal methods, simulation-based
techniques are still the main workhorse for functional verification
and will remain so for the foreseeable future.  However, contrary to
this practical reality, most research efforts in functional
verification are devoted to formal methods.  On the other hand,
current techniques applied in random-biased simulation for testing
functional correctness are mostly ad-hoc and may greatly benefit from
a more formal approach.  Examples are algorithms for efficient
constraint solving, coverage analysis, and systematic coverage
maximization.  During this panel we will discuss the pros and cons of
the various approaches to functional verification and attempt to
outline a possible synergy between them.  The goal is to increase the
awareness of large-scale verification requirements and trade-offs and
to start a discussion that might broaden the scope of current
verification research.

For introducing the topic, two industry experts will first give an
overview lecture describing requirements for large scale verification
projects and then join a larger panel to discuss the various
trade-offs and opportunities.

Synthesis of Designs from Property Specifications 

Amir Pnueli 

Optimizations for LTL Synthesis

Barbara Jobstmann and Roderick Bloem 

We present an approach to automatic synthesis of specifications given
in Linear Time Logic. The approach is based on a translation through
universal co-Büchi tree automata and alternating weak tree automata
[1]. By careful optimization of all intermediate automata, we achieve
a major improvement in performance.

We present several optimization techniques for alternating tree
automata, including a game-based approximation to language emptiness
and a simulation-based optimization. Furthermore, we use an
incremental algorithm to compute the emptiness of nondeterministic
Büchi tree automata. All our optimizations are computed in time
polynomial in the size of the automaton on which they are computed.

We have applied our implementation to several examples and show a
significant improvement over the straightforward
implementation. Although our examples are still small, this work
constitutes the first implementation of a synthesis algorithm for full
LTL. We believe that the optimizations discussed here form an
important step towards making LTL synthesis practical.

From PSL to NBA: A Modular Symbolic Encoding

Alessandro Cimatti, Marco Roveri, Simone Semprini and Stefano Tonetta 

The IEEE standard Property Specification Language (PSL) makes it
possible to express all ω-regular properties by mixing Linear Temporal
Logic (LTL) with Sequential Extended Regular Expressions (SEREs), and
is increasingly used in many phases of the hardware design cycle, from
specification to verification.

Many verification engines are able to manipulate Nondeterministic
Büchi Automata (NBA), which can represent ω-regular
properties. Thus, the ability to convert PSL into NBA is an important
enabling factor for the reuse of a large wealth of verification tools.

Recent works propose a two-step conversion from PSL to NBA: first, the
PSL property is encoded into an Alternating Büchi Automaton (ABA);
then, the ABA is converted into an NBA with variants of
Miyano-Hayashi's construction. These approaches are problematic in
practice: in fact, they are often unable to carry out the conversion
in acceptable time, even for PSL specifications of moderate size. 

In this paper, we propose a modular encoding of PSL into symbolically
represented NBA. We convert a PSL property into a normal form that
separates the LTL and the SERE components. Each of these components
can be processed separately, so that the NBA corresponding to the
original PSL property is presented in the form of an implicit product,
delaying composition until search time.

Our approach has two other advantages: first, we can leverage mature
techniques for the LTL components; second, we leverage the particular
form of the PSL components that appear in the normal form to improve
over the general translation.

The transformation is proved correct. A thorough experimental analysis
over large sets of paradigmatic properties (from patterns of
properties commonly used in practice) shows that our approach
drastically reduces the construction time of the symbolic NBA, and
positively affects the overall verification time.

Assume-Guarantee Reasoning for Deadlock

Sagar Chaki and Nishant Sinha

We extend the learning-based automated assume guarantee paradigm to
perform compositional deadlock detection. We define Failure
Automata, a generalization of finite automata that accept regular
failure sets. We develop a learning algorithm LF that constructs the
minimal deterministic failure automaton accepting any unknown regular
failure set using a minimally adequate teacher. We show how LF can be
used for compositional regular failure language containment, and
deadlock detection, using non-circular and circular assume guarantee
rules. We present an implementation of our techniques and encouraging
experimental results on several non-trivial benchmarks.

A Refinement Method for Validity Checking of Quantified First-Order
Formulas in Hardware Verification

Husam Abu-Haimed, David Dill, and Sergey Berezin 

We introduce a heuristic for automatically checking the validity of
first-order formulas of the form ∀αm ∃βn. ψ(αm, βn) that are encountered in
inductive proofs of hardware correctness. The heuristic introduced in
this paper is used to automatically check the validity of k-step
induction formulas needed to verify hardware designs. The heuristic
works on word-level designs that can have data and address buses of
arbitrary widths. Our refinement heuristic relies on the idea of
predicate instantiation introduced in [2]. The heuristic proves
quantified formulas by the use of a validity checker, CVC [21], and a
first-order theorem prover, Otter [16]. Our heuristic can be used as a
stand-alone technique to verify word-level designs or as a component
in an interactive theorem prover. We show the effectiveness of this
heuristic for hardware verification by verifying a number of
hardware designs completely automatically.  The large size of the
quantified formulas encountered in these examples shows the
effectiveness of our heuristic as a component of a theorem prover.

An Integration of HOL and ACL2

Michael Gordon, Warren Hunt, Matt Kaufmann, and James Reynolds 

A link between the ACL2 and HOL4 proof assistants is described. This
allows each system to be deployed smoothly within a single formal
development. Several applications are being considered: using ACL2's
execution environment for simulating HOL models; using ACL2's proof
automation to discharge HOL proof obligations; and using HOL to
specify and verify properties of ACL2 functions that cannot easily be
stated in the first-order ACL2 logic.

Care has been taken to ensure sound translations between the logics
supported by HOL and ACL2. The initial ACL2 theory has been defined
inside HOL, so that it is possible to prove mechanically that
first-order ACL2 functions on S-expressions correspond to higher-order
functions operating on a variety of types. The translation between
the two systems operates at the level of S-expressions and is intended
to handle large hardware and software models.

ACL2SIX: A Hint used to Integrate a Theorem Prover and an Automated
Verification Tool

Jun Sawada and Erik Reeber 

We present a hardware verification environment that integrates the
ACL2 theorem prover and SixthSense, an IBM internal formal
verification tool. In this environment, SixthSense is invoked through
an ACL2 function acl2six that makes use of a general-purpose external
interface added to the ACL2 theorem prover. This interface allows
decision procedures and model checkers to be connected to ACL2 by
simply writing ACL2 functions. Our environment also exploits a unique
approach to connect the logic of a general-purpose theorem prover with
machine designs in VHDL without a language embedding.  With an example
of a pipelined multiplier, we show how our environment can be used to
divide a large verification problem into a number of simpler problems,
which can be verified using automated verification engines.

Automatic Generation of Schedulings for Improving the
Test Coverage of Systems-on-a-Chip

Claude Helmstetter, Florence Maraninchi, Laurent Maillet-Contoz, and
Matthieu Moy 

SystemC is becoming a de-facto standard for the early simulation of
Systems-on-a-chip (SoCs). It is a parallel language with a
scheduler. Testing a SoC written in SystemC implies that we execute
it, for some well-chosen data. We are bound to use a particular
deterministic implementation of the scheduler, whose specification is
non-deterministic. Consequently, we may fail to discover bugs that
would have appeared using another valid implementation of the
scheduler. Current methods for testing SoCs concentrate on the
generation of the inputs, and do not address this problem at all. We
assume that the selection of relevant data is already done, and we
generate several schedulings allowed by the scheduler
specification. We use dynamic partial-order reduction techniques to
avoid the generation of two schedulings that have the same effect on
the system's behavior. Exploring alternative schedulings during
testing is a way of guaranteeing that the SoC description, and in
particular the embedded software, is scheduler-independent, hence more
robust. The technique extends to the exploration of other non-fully
specified aspects of SoC descriptions, like timing.
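
The effect-based pruning described above can be shown with a deliberately tiny Python sketch (an illustration of the underlying idea only, not the authors' dynamic partial-order reduction algorithm; the two "processes" and their shared variables are invented for the example): interleavings of independent steps collapse to one representative, while conflicting steps genuinely require exploring both orders.

```python
from itertools import permutations

# Toy illustration: two SystemC-like processes whose steps touch
# shared variables. Interleavings reaching the same final state are
# redundant; a partial-order reduction keeps one representative per
# equivalence class.

def run(schedule, steps):
    """Execute process steps in the given order on a fresh state."""
    state = {"x": 0, "y": 0}
    for pid in schedule:
        steps[pid](state)
    return tuple(sorted(state.items()))

# A writes x, B writes y: the two steps are independent.
independent = {
    "A": lambda s: s.__setitem__("x", 1),
    "B": lambda s: s.__setitem__("y", 1),
}
# A and B both write x: the steps conflict, so order matters.
conflicting = {
    "A": lambda s: s.__setitem__("x", 1),
    "B": lambda s: s.__setitem__("x", 2),
}

def distinct_outcomes(steps):
    """Final states reachable under all schedulings of the steps."""
    return {run(sched, steps) for sched in permutations(steps)}

print(len(distinct_outcomes(independent)))  # 1: schedulings equivalent
print(len(distinct_outcomes(conflicting)))  # 2: a bug could hide in either order
```

Real dynamic partial-order reduction prunes redundant interleavings before executing them, by analyzing which transitions commute, rather than by comparing final states afterwards as this toy does.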

Simulation Bounds for Equivalence Verification of Arithmetic Datapaths with 
Finite Word-Length Operands

Namrata Shekhar, Priyank Kalla, M. Brandon Meredith, and Florian Enescu 

This paper addresses simulation-based verification of high-level
descriptions of arithmetic datapaths. Instances of such designs are
commonly found in DSP for audio, video and multimedia applications,
where the word-lengths of input/output bit-vectors are fixed according
to the desired precision. Initial descriptions of such systems are
usually specified as Matlab/C code. These are then automatically
translated into behavioural/RTL descriptions (HDL) for subsequent
hardware synthesis.

In order to verify that the initial Matlab/C model is bit-true
equivalent to the translated RTL, how many simulation vectors need to
be applied?  This paper explores results from number theory and
commutative algebra to show that exhaustive simulation is not
necessary for testing their equivalence. In particular, we derive an
upper bound on the number of simulation vectors required to prove
equivalence or identify bugs. These vectors cannot be arbitrarily
generated; we determine exactly those vectors that need to be
simulated. Extensive experiments are performed within practical CAD
settings to demonstrate the validity and applicability of these results.
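
The flavor of such number-theoretic bounds can be shown with a small Python sketch (a hedged illustration, not the paper's actual derivation; the classical facts assumed here are that a product of k consecutive integers is divisible by k!, and that the least k with 2^m | k! bounds the relevant degree): over Z_8, the degree-4 polynomial x(x-1)(x-2)(x-3) is divisible by 4! = 24, hence by 8, so it is bit-true equivalent to the zero function even though its coefficients are nonzero.

```python
from math import factorial

# Hedged toy sketch of why exhaustive simulation can be unnecessary
# for datapaths over Z_{2^m}: low-degree polynomials can vanish on the
# whole ring despite having nonzero coefficients.

def least_k_dividing_factorial(n):
    """Least k such that n divides k! (used here with n = 2**m)."""
    k = 1
    while factorial(k) % n != 0:
        k += 1
    return k

m = 3            # word length of 3 bits, i.e. the ring Z_8
mod = 2 ** m

# f(x) = x(x-1)(x-2)(x-3): a product of 4 consecutive integers,
# divisible by 4! = 24 and therefore by 8 = 2^3.
f = lambda x: x * (x - 1) * (x - 2) * (x - 3)

assert least_k_dividing_factorial(mod) == 4   # 2^3 divides 4! = 24
assert all(f(x) % mod == 0 for x in range(mod))
print("f vanishes on all of Z_8; degree bound =", least_k_dividing_factorial(mod))
```

A naive bit-true comparison of f against the zero function would simulate all 2^m inputs; bounds of this kind show that a much smaller, specifically chosen vector set suffices.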

Design for Verification of the PCI-X Bus

Ali Habibi, Haja Moinudeen, and Sofiene Tahar 

The importance of reusable Intellectual Property (IP) cores is
increasing due to the growing complexity of today's systems-on-chip and
the need for rapid prototyping. In this paper, we provide a
design-for-verification approach for a PCI-X bus model, the fastest and
latest extension of PCI technologies. We use two different modeling
levels, namely UML and AsmL. We integrate the verification within the
design phases, where we use model checking and model-based testing,
respectively at the AsmL and SystemC levels. This case study presents
an illustration of the integration of formal methods and simulations
for the purpose of providing better verification results of SystemC designs.

Formal Analysis and Verification of an OFDM Modem Design using HOL	

Abu Nasser Mohammed Abdullah, Behzad Akbarpour, and Sofiene Tahar 

In this paper we formally specify and verify an implementation of the
IEEE802.11a standard physical layer based OFDM (Orthogonal Frequency
Division Multiplexing) modem using the HOL (Higher Order Logic)
theorem prover. The versatile expressive power of HOL helped model the
original design at all abstraction levels, starting from a
floating-point model and moving to the fixed-point design, which was
then synthesized and implemented in FPGA technology. The paper also
investigates the rounding error accumulated during the ideal-real to
floating-point and fixed-point transitions at the algorithmic level.
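
The real-to-floating-point-to-fixed-point error chain mentioned above can be made concrete with a toy Python sketch (the 8 fractional bits and round-to-nearest mode are assumptions for illustration, not the modem's actual parameters):

```python
from fractions import Fraction

# Hedged toy sketch: quantify the error introduced at each transition
# ideal real -> floating point -> fixed point. The Q.8 format (8
# fractional bits) and round-to-nearest are illustrative assumptions.

def to_fixed(x, frac_bits):
    """Round a value to the nearest multiple of 2**-frac_bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

ideal = Fraction(1, 3)           # an ideal real coefficient
fl = float(ideal)                # real -> floating point (double)
fx = to_fixed(fl, frac_bits=8)   # floating point -> fixed point

fixed_err = abs(fx - fl)
print(f"fixed-point step error = {fixed_err:.6f}")
assert fixed_err <= 2 ** -9      # within half an LSB of the Q.8 grid
```

Bounding each quantization step by half an LSB, and summing the bounds along the datapath, is the standard way such accumulated error is analyzed.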

A Formal Model of Lower System Layers	

Julien Schmaltz

We present a formal model of the bit transmission between
registers with arbitrary clock periods. Our model considers precise
timing parameters, as well as metastability. We formally define the
behavior of registers over time. From that definition, we prove, under
certain conditions, that data are properly transmitted. We discuss how
to incorporate the model in a purely digital model. The hypotheses of
our main theorem define conditions that must be satisfied by the purely
digital part of the system to preserve correctness.
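
A minimal timing-closure check conveys the kind of side condition such a theorem imposes. The following Python sketch uses assumed parameter names and is not Schmaltz's formal model: a launched bit is sampled correctly only if its worst-case propagation delay plus the receiver's setup time fits within the clock period, while its best-case delay still covers the hold requirement.

```python
# Hedged toy sketch (assumed parameter names, not the paper's model):
# conditions under which a bit launched from one register is sampled
# correctly by the next one.

def transmission_ok(period, t_prop_min, t_prop_max, t_setup, t_hold):
    """True if the data bit is stable around the sampling edge."""
    setup_ok = t_prop_max + t_setup <= period   # arrives early enough
    hold_ok = t_prop_min >= t_hold              # stays stable long enough
    return setup_ok and hold_ok

print(transmission_ok(period=10.0, t_prop_min=1.0, t_prop_max=7.0,
                      t_setup=2.0, t_hold=0.5))   # True: data transmitted
```

Violating either inequality leaves the receiving register's input unstable in the sampling window, which is exactly when metastability can occur.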

Formal Verification in Industry -- Current State and Future Directions 

Christian Jacobi

Recent years have shown a dramatic increase in the capabilities
of formal and semi-formal verification methods. Despite those
improvements, the main verification work-horse in industry
remains random simulation. In this talk we will summarize the
verification approaches applied in large-scale microprocessor
development projects, and will describe where and how formal
verification is already being applied. We then describe some
obstacles that prevent a wider range of FV usage, and discuss
some ideas on how the research community could help to overcome
those obstacles. One of those ideas is the development of
"verification templates": blueprints for how to verify certain
types of logic. For example, there are well-known and proven
strategies for arithmetic circuits, but there is little
experience on the automatic verification of complex cache
controllers at the RTL level. Another obstacle is the lack of synergy
between simulation and FV techniques, which could be improved
with synthesizable testbenches that can run in simulation, formal
verification, and also on hardware accelerators.

Balancing Formal and Dynamic Techniques in Validation of Industrial
Arithmetic Designs 

Roope Kaivola

Formal verification of microprocessor components has been pursued
in various forms at Intel for roughly a decade now. One of the
most successful target areas has been the verification of
floating-point arithmetic units against the IEEE floating-point
standard. In this area, formal verification has become an
established practice applied in most recent processor designs. I
will describe our verification framework for the task, combining
symbolic simulation and BDD analysis with theorem-proving, inside
a general-purpose functional programming language.  I will also
discuss the tradeoffs between dynamic testing and formal
verification in a product development project setting.

Perils of Post-Silicon Validation

Piyush Desai

Efficient and timely post-silicon validation of contemporary general
purpose CPUs is a daunting task due to three formidable obstacles.

1. Explosion of configuration, state, and data space that must be
validated to warrant production quality coverage.

2. Lack of necessary and sufficient control (stimuli) to accelerate
validation and debug.

3. Lack of necessary and sufficient visibility for coverage
measurements and failure debug.

The explosion of the coverage space that must be targeted for
necessary and sufficient post-silicon validation is the most fundamental
barrier that deserves immediate attention. The problem is rooted in
the fact that there are no formal and deterministic methods to specify
a necessary and sufficient coverage sub-space (N+S sub-space) from the
virtually infinite coverage space of a typical CPU
implementation. Once a formal specification paradigm is available to
capture the total coverage space, algorithms can be developed to prune
the space to identify inherent redundancies, factor in user specified
constraints, and deduce the N+S sub-space.  Tools based on such
coverage pruning algorithms can be directed to generate test cases to
exercise the N+S sub-space, and/or sweep through run-time execution
traces to gather coverage data and even recognize functional
anomalies. Such approaches have been applied with limited success to
specific functional domains such as the instruction-set architecture
and the architecturally defined memory ordering. More general
techniques and algorithms must be developed however to specify and
prune large state machines, and data and configuration spaces.

A critical variation of the coverage theme is sustained stress testing
around design corners. As opposed to testing for functional coverage,
sustained stress testing of complex control logic and datapaths tends
to expose rare functional failures and circuit marginalities that
almost always involve intricate sequences of unpredictable events and
data patterns. Staging sustained stress scenarios requires precise
analytical or queuing models in order to characterize the optimal
instruction and data flows that will saturate the underlying control
logic and datapath structures with the maximal achievable throughput
rate. Applicable random events are introduced in such instruction/data
flows at different rates to stress the design. A framework for
building formal stochastic models for major control and datapaths in a
CPU implementation would significantly accelerate stress testing and
the discovery of the gnarly design errata that are usually associated
with such testing.
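
As a deliberately simple illustration of the kind of queuing model called for above (an M/M/1 toy with invented rates, not a model of any real CPU datapath), utilization and mean occupancy show how close a chosen injection rate drives a structure toward saturation:

```python
# Hedged toy sketch: an M/M/1 queue standing in for a datapath
# structure. lam = op injection rate, mu = service rate; the standard
# M/M/1 formulas give utilization and mean occupancy. Rates are
# invented for illustration.

def mm1_stats(lam, mu):
    """Utilization and mean number of ops in an M/M/1 system."""
    assert lam < mu, "at or beyond lam = mu the queue grows without bound"
    rho = lam / mu                    # utilization of the structure
    mean_occupancy = rho / (1 - rho)  # mean number of ops in the system
    return rho, mean_occupancy

rho, n = mm1_stats(lam=9.0, mu=10.0)
print(f"utilization = {rho:.2f}, mean occupancy = {n:.1f}")
```

Sweeping the injection rate toward the service rate identifies the flow that holds the structure near full utilization, which is the operating point sustained stress testing tries to maintain.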

These are just a couple of examples of the challenges we face and need
to address in the industry. Given that the realm of post-silicon functional
validation has remained relatively unexplored in academia, while
the architecture of general purpose CPUs has been evolving rapidly to
satisfy a myriad of demanding workloads of multimedia, database, and
networking/communication tasks, significant challenges and
opportunities exist for researching breakthrough solutions for the
aforementioned barriers. In this talk I will give a prelude to more of
these challenges and opportunities.

Sequential Equivalence Checking across Arbitrary Design Transformations:
Technologies and Applications

Viresh Paruthi

High-end hardware design flows mandate a variety of sequential
transformations to address needs such as performance, power,
post-silicon debug and test. Furthermore, it is often the case that
functional design modifications are required to preserve
backward-compatibility of specific aspects and modes of design
behavior. The industrial demand for robust sequential equivalence
checking (SEC) solutions capable of efficiently and exhaustively
proving the required equivalence criteria between two designs is thus
becoming increasingly prevalent. In this talk, we discuss the role of
SEC within IBM. We motivate the need for a highly-automated scalable
solution, which is robust against a variety of design transformations
-- including those that alter initialization sequences, and preserve
equivalence only conditionally. This motivation has caused us to
embrace the paradigm of SEC with respect to designated initial states.

We next describe the unique technologies that comprise our SEC
framework, embodied in the SixthSense toolset. To achieve the
necessary degree of scalability, we leverage a diverse set of
synergistic transformation and verification algorithms within our SEC
framework, which we have found necessary for the automated solution of
the most complex SEC problems. Lastly, we discuss numerous types of
applications of this SEC solution in an industrial setting, and its
significance in reshaping design and verification methodologies.

On-Chip Instrumentation as a Verification Tool

Rick Leatherman

On-Chip Instrumentation, or OCI, the addition of IP to perform
in-silicon analysis and optimization of complex systems, has been widely
used for processor subsystems that are not amenable to purely
software-oriented verification approaches. First Silicon Solutions is
extending this approach to address larger portions of the architecture
and more global instrumentation of the SoC. We will discuss approaches
to OCI in current use, as well as work being done by related ongoing
standards groups and activities with regard to open problems in this
area.