FMCAD 2007
Formal Methods in Computer Aided Design
Austin, TX, USA
November 11 - 14
Paper Abstracts


The Synergy between Logic Synthesis and Equivalence Checking

R. Brayton

It is well known that combinational and sequential synthesis (Syn) and
equivalence checking (EC) are closely related. However, the ties
between the two are becoming ever more interconnected, for the
following reasons. First, advances in EC are spurring further research
and interest in Syn, while the potential advantages that can be shown
for Syn are spurring further research in sequential EC (SEC). Second,
techniques discovered in SEC are being found useful in Syn, and vice
versa. Third, both Syn and EC must be made scalable for application to
present and future designs, and we believe that scalable synthesis
implies scalable verification and vice versa. This talk will present a
number of examples illustrating both the synergy and the technology
transfer between the two areas. These include:

* unifying representation (sequential AIGs)
* cut/window-based techniques and the notion of locality, which 
  guarantees scalability
* use of the assume-then-prove paradigm through simulations and SAT
* use of invariants
* use of interpolation
* recording a history of synthesis transformations.
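As a flavor of the first item, a sequential AIG boils down to a very
small data structure: two-input AND nodes, with inversion encoded in
edge polarity and structural hashing merging identical nodes. The
sketch below is a hypothetical illustration only; production AIG
packages add registers, levels, and many refinements.

```python
# Minimal AND-inverter graph (AIG) sketch. A literal is 2*node_id,
# with the low bit encoding complementation (edge polarity).
class AIG:
    def __init__(self):
        self.nodes = ["const0"]   # node 0: constant false (literal 0)
        self.table = {}           # structural-hashing table

    def new_input(self):
        self.nodes.append("input")
        return 2 * (len(self.nodes) - 1)   # even literal = positive edge

    @staticmethod
    def neg(lit):
        return lit ^ 1            # flip the complement bit

    def and_(self, a, b):
        if a > b:
            a, b = b, a           # canonical fanin order
        key = (a, b)
        if key not in self.table:
            self.nodes.append(key)
            self.table[key] = 2 * (len(self.nodes) - 1)
        return self.table[key]    # hashing shares identical nodes

aig = AIG()
x, y = aig.new_input(), aig.new_input()
f = aig.and_(x, AIG.neg(y))
g = aig.and_(AIG.neg(y), x)       # structurally identical AND
assert f == g                     # structural hashing detects the sharing
```

The same structural-hashing step is what makes the representation
attractive for both synthesis (redundancy removal) and equivalence
checking (candidate equivalences fall out of the hash table).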


Power Management for VLSI Circuits and the Need for
High-Level Power Modeling and Design Exploration

Farid Najm

With every new generation of semiconductor technology, design teams
need to pay increased attention to the power dissipation of VLSI
chips. This trend, which started in the early 1990s, has become
serious enough that managing power is now the primary design
constraint for many design groups. In order to manage the power
dissipation of the final design, much care has to be taken during
system partitioning, architecture definition, and circuit design. Much
work has been done in the EDA community on power estimation, power
modeling, and power optimization. However, the practical impact of
low-level (circuit, physical) EDA solutions for power management has
not been great.  Indeed, the only practical solutions that have
emerged for power management are not EDA-driven, but are based on
design/technology techniques such as using multiple supply voltages or
voltage islands, using different types of transistors (with different
Vt's for example), using sleep transistors to control leakage, etc.
To be sure, some of these design techniques present their own EDA
problems, such as finding the best assignment of the different Vt's or
the best choice of voltage domains. However, it is generally felt that
EDA for power management can have more impact at a higher level of
abstraction, in the architecture or system domain. The purpose of this
tutorial is to illuminate the link between high-level design decisions
and the power dissipation of the final silicon chip. I will provide a
general introduction to the issue of power, describe why it is a
serious concern, explain how power depends on various physical level
variables and on signal switching activity, and work upwards from
there to review existing techniques for high-level power modeling and
design exploration.


Practical SAT

Niklas Een

In this tutorial I will try to convey some of the lessons that I have
learned and experiences that I have had when applying SAT in an
industrial setting.  There are numerous engineering decisions that one
faces when using SAT practically. Many of these are not treated in the
published literature, largely because they are considered too
"minor". Still, making the wrong decision can lead to unnecessarily
poor performance or complicated code. In particular, I will use this
tutorial to put focus on issues related to incremental SAT and to CNF
encoding of SAT-problems derived from circuits.
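As a small taste of the CNF-encoding topic, the standard Tseitin
encoding of a single AND gate, checked exhaustively against the gate's
truth table, looks like this (a self-contained sketch, independent of
any particular solver):

```python
from itertools import product

def tseitin_and(out, a, b):
    """CNF clauses asserting out <-> (a AND b). Positive integers are
    variables, negative integers their negations."""
    return [[-a, -b, out],   # a & b  ->  out
            [a, -out],       # out -> a
            [b, -out]]       # out -> b

def satisfies(clauses, assign):
    # assign maps variable -> bool; a clause is satisfied if any
    # literal agrees with the assignment.
    return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

# The encoding is satisfied exactly when out carries the gate's value.
clauses = tseitin_and(out=3, a=1, b=2)
for a, b, o in product([False, True], repeat=3):
    assert satisfies(clauses, {1: a, 2: b, 3: o}) == (o == (a and b))
```

Engineering choices like this (which gates to encode, when to merge
clauses, how to reuse a solver incrementally across related queries)
are exactly the "minor" decisions the tutorial examines.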


Modeling Data in Formal Verification: Bits, Bit Vectors, or Words

Randal E. Bryant

Classical methods for specifying and reasoning about programs consider
each word to be an arbitrary numeric value.  This has carried over
into program analysis tools, where decision procedures for integer or
real arithmetic are used to reason about program behavior.  At the
other extreme, most automatic methods for formally verifying hardware
operate at the bit level, viewing each signal as a Boolean function of
the input and state bits.  In between these extremes, data can be
modeled by finite-precision arithmetic and bit-wise operations on
fixed-width data words.  A decision procedure for such a bit-vector
model can accurately capture the behavior of actual systems, while
also exploiting abstraction techniques that are difficult to recognize
when the system is modeled purely at the bit level.

In this tutorial, we survey the different approaches to modeling data
and data operations in verification tools.  We present recent results
on decision procedures for bit-vector and word-level representations.
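The gap between the two extremes can be made concrete with a one-line
property: a fact valid over unbounded integers fails under
fixed-width, wrap-around semantics (a 32-bit width is assumed here
purely for illustration):

```python
MASK = (1 << 32) - 1  # 32-bit words

def bv_add(x, y):
    """Addition with 32-bit bit-vector (wrap-around) semantics."""
    return (x + y) & MASK

x = MASK                         # 0xFFFFFFFF, the largest 32-bit word
assert x + 1 > x                 # holds for mathematical integers
assert not (bv_add(x, 1) > x)    # fails for bit vectors: the sum wraps
assert bv_add(x, 1) == 0
```

A decision procedure that models words as unbounded integers would
wrongly certify `x + 1 > x`; a bit-vector procedure captures the
wrap-around while still reasoning above the level of individual bits.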


Formal Verification: A Business Perspective

Moderator:  Aarti Gupta, NEC

Panelists: Robert Kurshan
           Hillel Miller 
           Rajeev Ranjan 
           Harry Foster
           Husam Abu-Haimed
           Prakash Narain

Formal verification (FV) technology has been around for about two
decades, and several EDA vendors have products incorporating FV. While
most deployments are targeted at functional verification of
user-written assertions, FV technology is also showing up in other key
verification hot spots, such as verification of clock domain
crossings, and verification of false and multi-cycle timing paths. In
this panel, we will take stock of the current status of the business
of FV, and of its future needs in research.

Senior executives from EDA vendors and user companies will discuss
their view of the current acceptance of FV technology vis-à-vis the
conventional validation flow in semiconductor companies: where it is
successfully deployed and by whom, and where it is not yet deployed or
tends to fail in practice.
They will discuss roadblocks in its wider-spread usage and challenges
in building the FV market. They will finish with an outline of
research areas that they feel are most relevant to realize the full
potential of FV technologies.

The panel is intended to be of interest not only to academic and
industrial researchers, but also to current and future users of
verification tools and technologies. The aim is to build a community
that can channel scientific research into successful business.


Verification of embedded software in industrial microprocessors

Eli Singerman

The validation of embedded software in VLSI designs is becoming
increasingly important with their growing prevalence and
complexity. This talk will provide an overview of state-of-the-art
methods for verification of embedded software in industrial
microprocessors. First, a compositional approach to generating a
formal model of the design will be described. I will then discuss
formal verification techniques, including both property and
equivalence verification. I will also explain how formal methods
improve traditional simulation-based validation methodologies through
automatic coverage and test generation, and conclude by briefly
describing the application of these techniques.


Exploiting Resolution Proofs to Speed Up LTL Vacuity Detection for BMC

Jocelyn Simmonds, Jessica Davies, Arie Gurfinkel and Marsha Chechik

When model-checking reports that a property holds on a model, vacuity
detection increases user confidence in this result by checking that
the property is satisfied in the intended way.  While vacuity
detection is effective, it is a relatively expensive technique
requiring many additional model-checking runs.  We address the problem
of efficient vacuity detection for Bounded Model Checking (BMC) of LTL
properties, presenting three partial vacuity detection methods based
on the efficient analysis of the resolution proof produced by a
successful BMC run. In particular, we define a characteristic of
resolution proofs -- peripherality -- and prove that if a variable is
a source of vacuity, then there exists a resolution proof in which
this variable is peripheral. Our vacuity detection tool, VaqTree, uses
these methods to detect vacuous variables, decreasing the total number
of model-checking runs required to detect all sources of vacuity.


Improved Design Debugging using Maximum Satisfiability

Sean Safarpour, Mark Liffiton, Hratch Mangassarian, Andreas Veneris
and Karem Sakallah

Within the VLSI and SoC design cycles, debugging is one of the most
time consuming manual tasks. CAD solutions strive to reduce the
inefficiency of debugging by identifying error sources in designs
automatically. Unfortunately, the capacity and performance of such
automated techniques remain inadequate for industrial
applicability. This work aims to improve the performance of current
state-of-the-art debugging techniques, thus making them more
practical. More specifically, this work proposes a novel design
debugging formulation based on maximum satisfiability (max-sat) and
approximate max-sat. The developed technique can quickly discard many
potential error sources in designs, thus drastically reducing the size
of the problem passed to a conventional debugger.  The max-sat
formulation is used as a pre-processing step to construct a highly
optimized debugging framework. Empirical results demonstrate the
effectiveness of the proposed framework, as run-time improvements of
orders of magnitude are consistently realized over a state-of-the-art
debugger.
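For background, maximum satisfiability asks for an assignment that
satisfies as many clauses as possible. A brute-force sketch of the
problem follows (illustration only; practical debuggers rely on
dedicated max-sat solvers rather than exhaustive search):

```python
from itertools import product

def max_sat(clauses, nvars):
    """Return (best_count, assignment) by exhaustive search.
    Literals are nonzero integers; negative means negated."""
    best = (-1, None)
    for bits in product([False, True], repeat=nvars):
        assign = dict(enumerate(bits, start=1))   # var i -> bits[i-1]
        count = sum(any(assign[abs(l)] == (l > 0) for l in c)
                    for c in clauses)
        best = max(best, (count, bits))
    return best

# x1 and not-x1 cannot both hold, so at most 2 of these 3 clauses
# are simultaneously satisfiable.
clauses = [[1], [-1], [1, 2]]
count, _ = max_sat(clauses, nvars=2)
assert count == 2
```

In a debugging setting, the clauses left unsatisfied by a maximal
assignment point at circuit locations that cannot be reconciled with
the expected behavior, which is what lets the formulation discard
candidate error sources early.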


Industrial Strength SAT-based Alignability Algorithm for Hardware
Equivalence Verification

Daher Kaiss, Marcelo Skaba, Ziyad Hanna and Zurab Khasidashvili

Automatic synchronization (or reset) of sequential synchronous
circuits is considered one of the most challenging tasks in the domain
of formal sequential equivalence verification of hardware designs.
Earlier attempts were based on Binary Decision Diagrams (BDDs) or
classical reachability analysis, which by nature suffer from capacity
limitations. A previous attempt to attack this problem using non-BDD
techniques was essentially a collection of heuristics aimed at
toggling the latches, and it did not guarantee that a synchronization
sequence would be computed if one exists.

In this paper we present a novel approach for computing reset
sequences (and reset states) in order to perform sequential hardware
equivalence verification between circuit models. This approach is
based on the dual-rail modeling of circuits and utilizes efficient
SAT-based engines for Bounded Model Checking (BMC). It is
implemented in Intel's sequential verification tool, Seqver, and has
proven highly successful in establishing equivalence of complex
industrial designs. The synchronization method described in this paper
can be used in many other CAD applications including formal property
verification, automatic test generation and power estimation.


Boosting Verification by Automatic Tuning of Decision Procedures

Frank Hutter, Domagoj Babic, Holger Hoos and Alan Hu

Parameterized heuristics abound in CAD, and the manual tuning of the
respective parameters is difficult and time-consuming. Very recent
results from the artificial intelligence (AI) community suggest that
this tuning process can be automated, and that doing so can lead to
significant performance improvements; furthermore, automated parameter
optimization can provide valuable guidance during the development of
heuristic algorithms. In this paper, we study how such an AI approach
can improve a state-of-the-art SAT solver for large, real-world
bounded model-checking and software verification instances. The
resulting, automatically-derived parameter settings yielded runtimes
on average 4.5 times faster on bounded model checking instances and
500 times faster on software verification problems than extensive
hand-tuning of the decision procedure. More interestingly, the
availability of automatic tuning influenced the design of the solver,
and the automatically-derived parameter settings provided a deeper
insight into the properties of problem instances.


Verifying Correctness of Transactional Memories

Ariel Cohen, John O'Leary, Amir Pnueli, Mark R. Tuttle and Lenore Zuck

We show how to verify the correctness of transactional memory
implementations with a model checker.  We give a framework for
describing transactional memory that is expressive enough to capture
the various notions of transactional memory provided by most of the
implementations in the literature.  We give proof rules for showing
that an implementation satisfies its specification, and we show how to
use them as part of mechanical verification.  We demonstrate our work
using the TLC model checker to verify several well-known
implementations described abstractly in the TLA+ specification
language.


Algorithmic Analysis of Piecewise FIFO Systems

Naghmeh Ghafari, Arie Gurfinkel, Nils Klarlund and Richard Trefler

Systems consisting of several components that communicate via
unbounded perfect FIFO channels (i.e. FIFO systems) arise naturally in
modeling distributed systems. Despite well-known difficulties in
analyzing such systems, they are of significant interest as they can
describe a wide range of communication protocols. Previous work has
shown that piecewise languages play an important role in the study of
FIFO systems. In this paper, we present two algorithms for computing
the set of reachable states of a FIFO system composed of piecewise
components. The problem of computing the set of reachable states of
such a system is closely related to calculating the set of all
possible channel contents, i.e. the limit language. We present new
algorithms for calculating the limit language of a system with a
single communication channel and a class of multi-channel systems in
which messages are not passed around in cycles through different
channels. We show correctness of our algorithms and discuss their
worst case complexity.


Transaction Based Modeling and Verification of Hardware Protocols

Xiaofang Chen, Steven German and Ganesh Gopalakrishnan

Modeling hardware through atomic guard/action transitions with
interleaving semantics is popular, owing to the conceptual clarity of
modeling and verifying the high level behavior of hardware. In mapping
such specifications into hardware, designers often decompose each
specification transition into sequences of implementation transitions
taking one clock cycle each. Some implementation transitions realizing
a specification transition overlap. The implementation transitions
realizing different specification transitions can also overlap.

We present a formal theory of refinement, showing how a collection of
such implementation transitions can be shown to realize a
specification. We present a modular refinement verification approach
by developing abstraction and assume-guarantee principles that allow
implementation transitions realizing a single specification transition
to be situated in sufficiently general environments. Illustrated on a
non-trivial VHDL cache coherence engine, our work may allow designers
to design high performance controllers without being constrained by
fixed automated synthesis scripts, and still conduct modular
verification.


Automating Hazard Checking in Transaction-Level Microarchitecture Models

Yogesh Mahajan and Sharad Malik

Traditional hardware modeling using RTL presents a time-stationary
view of the design state space which can be used to specify temporal
properties for model checking. However, high-level information in
terms of computation being performed on units of data (transactions)
is not directly available. In contrast, transaction-level
microarchitecture models view the computation as sequences of
(data-stationary) transactions. This makes it easy to specify
properties which involve both transaction sequencing and temporal
ordering (e.g. data hazards). In RTL models, additional book-keeping
state must be manually introduced in order to specify and check these
properties. We show here that a transaction-level microarchitecture
model can help automate checks for such properties via the automated
creation of the book-keeping structures, and illustrate this for a
simple pipeline using SMV. A key challenge in model-checking the
transaction-level microarchitecture is representing the dynamic
transaction state space. We describe a fixed point computation which
uses a suitable representation.


Computing Abstractions by Integrating BDDs and SMT Solvers

Roberto Cavada, Alessandro Cimatti, Anders Franzen, Kalyanasundaram
Krishnamani, Marco Roveri and R.K. Shyamasundar

The efficient computation of the abstractions of the concrete program
is a key to the efficiency of Counter-Example Guided
Abstraction-Refinement (CEGAR).  Recent works propose carrying out
this computation with DPLL-based SMT solvers turned into enumerators.
This technique has been successfully applied in the realm of software,
where a control flow graph directs the exploration of the Boolean
space. However, when applied to programs with a substantial Boolean
component, this approach breaks down: it intrinsically relies on the
enumeration of all the models, which essentially requires enumerating
all the disjuncts in the DNF of the abstraction.

In this paper we propose a new technique to improve the construction
of abstractions. We complement DPLL-based SMT solvers with the use of
BDDs, which enables us to avoid the model explosion. Essentially, we
exploit the fact that BDDs are DAG representations of the space that a
DPLL-based enumerator treats as a tree. A preliminary experimental
evaluation shows the potential of the approach.


Induction in CEGAR for Detecting Counterexamples

Chao Wang, Aarti Gupta and Franjo Ivancic

Induction has been studied in model checking for proving the validity
of safety properties, i.e., showing the "absence" of counterexamples.
To our knowledge, induction has not been used to refute safety
properties.  Existing algorithms, including bounded model checking,
predicate abstraction, and interpolation, are still not efficient in
detecting long counterexamples.  In this paper, we propose the use of
induction inside the counterexample guided abstraction and refinement
(CEGAR) loop to prove the "existence" of counterexamples.  We target
bugs whose counterexamples are long and yet can be captured by regular
patterns.  We identify the pattern algorithmically by analyzing the
sequence of spurious counterexamples generated in the CEGAR loop, and
perform the induction proof automatically.  The new method adds little
overhead to CEGAR, and this overhead is insensitive to the actual
length of the concrete counterexample.


Lifting Propositional Interpolants to the Word-Level

Daniel Kroening and Georg Weissenbacher

Craig interpolants are often used to approximate inductive invariants
of transition systems. Arithmetic relationships between numeric
variables require word-level interpolants, which are derived from
word-level proofs of unsatisfiability. The computation of these proofs
relies on word-level theorem provers. While automatic theorem provers
have made significant progress in the past few years, all competitive
solvers for bit-vector arithmetic are based on flattening the
word-level structure to the bit-level. We propose an algorithm that
lifts a resolution proof obtained from a bit-flattened formula up to
the word-level, which enables the computation of word-level
interpolants. We apply this technique to the verification of low-level
software given in ANSI-C and properties of data-paths of circuitry
given in Verilog.


COSE: a Technique for Co-optimizing Embedded Systems

Fadi Zaraket, John Pape, Margarida Jacome, Adnan Aziz and Sarfraz
Khurshid

Embedded systems typically consist of a composition of a set of
hardware and software IP modules. Each module is heavily optimized by
itself. However, when these modules are composed together, significant
additional opportunities for optimizations are introduced because only
a subset of the entire functionality is actually used. We propose
COSE, a technique to jointly optimize such designs.  We use symbolic
execution to compute invariants in each component of the design. We
propagate these invariants as constraints to other modules using
dependency analysis of the composition of the design. This captures
optimizations that go beyond and are qualitatively different from
those achievable by compiler optimization techniques such as common
subexpression elimination, which are localized. We again employ static
analysis techniques to perform optimizations subject to these
constraints. We implemented COSE in the Metropolis platform and
achieved significant optimizations using reasonable computation.


Cross-Entropy Based Testing

Hana Chockler, Benny Godlin, Eitan Farchi and Sergey Novikov

In simulation-based verification, or testing, we check the correctness
of a given program by executing it on some input vectors. For even
medium-size programs, exhaustive testing is impossible. Thus, many
errors are left undetected. The problem of increasing the
exhaustiveness of testing and decreasing the number of undetected
errors is the main problem of software testing.  In this paper, we
present a novel approach to software testing, which allows us to
dramatically raise the probability of catching rare errors in large
programs.  Our approach is based on the cross-entropy method. We
define a performance function, which is higher in the neighborhood of
an error or a pattern we are looking for. Then, the program is
executed many times, choosing input vectors from some random
distribution.  The starting distribution is usually uniform, and it is
changed at each iteration based on the value of the performance
function in the previous iteration. The cross-entropy method was shown
to be very efficient in measuring the probabilities of rare events and
in finding solutions for hard optimization problems. Our experiments
show that the cross-entropy method is also very efficient in locating
rare bugs and patterns in large programs. We show the experimental
results of our cross-entropy based testing tool and compare them to
random testing and to other testing tools.
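The iteration described above, sampling inputs, selecting elite
samples by the performance function, and re-fitting the distribution,
can be sketched as follows (a toy illustration using independent
Bernoulli bits; all names and parameter values are hypothetical):

```python
import random

def cross_entropy_search(score, nbits, iters=30, samples=200, elite=20):
    """Drive a random-input distribution toward high-scoring inputs.
    Bit i of each input vector is drawn with probability p[i]."""
    p = [0.5] * nbits                      # start from the uniform distribution
    for _ in range(iters):
        pop = [[random.random() < pi for pi in p] for _ in range(samples)]
        pop.sort(key=score, reverse=True)
        best = pop[:elite]                 # elite samples of this iteration
        # Re-fit each bit's probability to the elite set (the
        # cross-entropy update for Bernoulli parameters).
        p = [sum(v[i] for v in best) / elite for i in range(nbits)]
    return p

# Toy "performance function": closeness to a rare 16-bit pattern,
# standing in for proximity to an error neighborhood.
TARGET = [True] * 16
score = lambda v: sum(a == b for a, b in zip(v, TARGET))
random.seed(0)
p = cross_entropy_search(score, nbits=16)
assert sum(p) / len(p) > 0.8   # the distribution has converged on the pattern
```

Uniform random sampling would hit the target pattern with probability
2^-16 per trial; the re-fitted distribution concentrates the sampling
budget where the performance function is high.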


FMCAD 2027: Will the 'FM' Have a Real Impact on the 'CAD'?

Moderator:  William Joyner, SRC

Panelists: Robert Jones
           Andreas Kuehlmann
           Carl Pixley

Imagine FMCAD twenty years from now.  What will be the state of formal
methods?  Has their major impact already been realized, as
indicated by continuing industrial and university interest and
research, by an array of companies with formal-content solutions, and
by articles in EE Times?  Will their impact be more pervasive, with
formal techniques overtaking simulation in verification, including for
software?  Will formal methods have applications besides functional
validation, spreading to timing, synthesis, layout, and other areas?
Or will the status quo prevail, with formal methods getting better and
better but matched by the rising tide of whining: if only more people
understood, if only we had more cycles and capacity, if only people
would use our language...

Our panelists will present the top challenges and/or opportunities
that they see on the horizon, and give an indication of the landscape,
not just of verification or formal methods, but of design automation
overall.  Audience members will respond with open mic, confronting or
supporting the panelists and giving their own (brief) views of the
future.


Automatic Abstraction Refinement for Generalized Symbolic Trajectory
Evaluation

Yan Chen, Yujing He, Fei Xie and Jin Yang

In this paper, we present AutoGSTE, a comprehensive approach to
automatic abstraction refinement for generalized symbolic trajectory
evaluation (GSTE). This approach addresses imprecision of GSTE's
quaternary abstraction caused by under-constrained input circuit
nodes, quaternary state set unions, and existentially quantified-out
symbolic variables. It follows the counterexample-guided abstraction
refinement framework and features an algorithm that analyzes
counterexamples (symbolic error traces) generated by GSTE to identify
causes
of imprecision and two complementary algorithms that automate model
refinement and specification refinement according to the causes
identified. AutoGSTE completely eliminates false negatives due to
imprecision of quaternary abstraction. Application of AutoGSTE to
benchmark circuits from small to large size has demonstrated that it
can quickly converge to an abstraction upon which GSTE can either
verify or falsify an assertion graph efficiently.


A Logic for GSTE

Edward Smith

The formal hardware verification technique of Generalized Symbolic
Trajectory Evaluation (GSTE) traditionally uses diagrams called
assertion graphs to express properties. Although the graphical nature
of assertion graphs can be useful for understanding simple properties,
it places limitations on formal reasoning. Clean reasoning is
important for high-level verification steps, such as property
decomposition. In GSTE, formal reasoning is also required to justify
abstraction refinement steps, which are achieved via property
transformation. This paper proposes a linear temporal logic,
generalized trajectory logic, to provide a basis for reasoning about
GSTE properties. The logic is textual and algebraic in nature, so it
induces clean reasoning rules. We describe model checking for the
logic, and look at rules for semantic equality, property decomposition
and abstraction refinement.


Automatic Abstraction in Symbolic Trajectory Evaluation

Sara Adams, Magnus Bjork, Tom Melham and Carl-Johan Seger

Symbolic trajectory evaluation (STE) is a model checking technology
based on symbolic simulation over a lattice of abstract state
sets. The STE algorithm operates over families of these abstractions
encoded by Boolean formulas, enabling verification with many different
abstraction cases in a single model-checking run. This provides a
flexible way to achieve partitioned data abstraction. It is usually
called 'symbolic indexing' and is widely used in memory verification,
but has seen relatively limited adoption elsewhere, primarily because
users typically have to create the right indexed family of
abstractions manually. This work provides the first known algorithm
that automatically computes these partitioned abstractions given a
reference-model specification. Our experimental results show that this
approach not only simplifies memory verification, but also enables
handling completely different designs fully automatically.


A Coverage Analysis for Safety Property Lists

Koen Claessen

We present a new coverage analysis that can be used in property-based
verification. The analysis helps identify "forgotten cases":
scenarios where the property list under analysis does not constrain a
certain output at a certain point in time. These scenarios can then be
manually investigated, possibly leading to new, previously forgotten
properties being added. As there often exist cases in which outputs
are not supposed to be specified, we also provide means for the
specifier to annotate properties in order to control which cases are
supposed to be underconstrained. The presented analysis differs from
earlier similar proposals in two main ways: it is design-independent,
and it makes an explicit distinction between intentionally and
unintentionally underspecified behavior.


What Triggers a Behavior?

Orna Kupferman and Yoad Lustig

We introduce and study trigger querying.  Given a model "M" and a
temporal behavior "varphi", trigger querying is the problem of finding
the set of scenarios that trigger "varphi" in "M". That is, if a
computation of "M" has a prefix that follows the scenario, then its
suffix satisfies "varphi". Trigger querying enables one to find, for
example, given a program with a function "f", the scenarios that lead
to calling "f" with some parameter value, or to find, given a hardware
design with signal "err", the scenarios after which the signal "err"
ought to be eventually raised.

We formalize trigger querying using the temporal operator "triggers",
which is the most useful operator in modern industrial specification
languages.  A regular expression "r" triggers an LTL formula "varphi"
in a system "M", denoted "M models r triggers varphi", if for every
computation "pi" of "M" and index "i >= 0", if the prefix of "pi" up
to position "i" is a word in the language of "r", then the suffix of
"pi" from position "i" satisfies "varphi".  The solution to the
trigger query "M models ? triggers varphi" is the maximal regular
expression that triggers "varphi" in "M".  Trigger querying is useful
for studying systems, and it significantly extends the practicality of
traditional query checking. Indeed, in traditional query checking,
solutions are restricted to propositional assertions about states of
the systems, whereas in our setting the solutions are temporal

We show that the solution to a trigger query "M models ? triggers
varphi" is regular, and can be computed in polynomial
space. Unfortunately, the polynomial-space complexity is in the size
of "M". Consequently, we also study partial trigger querying, which
returns a (non-empty) subset of the solution, and is more
feasible. Other extensions we study are observable trigger querying,
where the partial solution has to refer only to a subset of the atomic
propositions, constrained trigger querying, where in addition to "M"
and "varphi", the user provides a regular constraint "c" and the
solution is the set of scenarios respecting "c" that trigger "varphi"
in "M", and relevant trigger querying, which excludes vacuous triggers
--- scenarios that are not induced by a prefix of a computation of
"M".  Trigger querying can be viewed as the problem of finding
sufficient conditions for a behavior "varphi" in "M". We also consider
the dual problem of finding necessary conditions for "varphi", and
show that it can be solved in space complexity that is only
logarithmic in "M".


Two-Dimensional Regular Expressions for Compositional Bus Protocols

Kathi Fisler

Bus and interconnect protocols contain a few core operations (such as
read and write transfers) whose behaviors interleave to form complex
executions.  Specifications of the core operations should be flexible
enough that simple composition operators suffice for capturing most
interleavings, even in the presence of common hardware issues such as
glitches.  Oliveira and Hu proposed a form of pipelined regular
expressions to specify atomic protocol compositions, but they
abstracted away clocking and glitches.  This paper uses the AMBA-2
specification to argue that a two-dimensional form of regular
expressions with annotations handles such timing subtleties while
retaining the simplicity of Oliveira and Hu's pipelined compositions.


A Quantitative Completeness Analysis for Property-Sets

Martin Oberkönig, Martin Schickel and Hans Eveking

The demand for error-free designs has increased dramatically. To
prevent cost-intensive corrections of errors found in the late phases
of a design process, a "good" formal specification is needed already
at the beginning of the development process. For property-based design
this is essential. To evaluate whether a specification is "good", an
analysis tool is needed. In the field of simulation it has been state
of the art for more than ten years to have several kinds of coverage
analysis, such as code coverage or transition coverage. This paper
defines a quantitative metric for the completeness of a formal
specification. The presented method considers only the formal
specification; an implementation or model is not needed, so a
specification can be analyzed as soon as it is written. Furthermore,
we describe how the properties can be transformed to calculate the
metric.


Formal Verification of Systems-on-Chip --- Industrial Experiences and
Scientific Perspectives

Wolfgang Kunz

Even after years of progress in the field of formal property checking
many designers in industry still consider simulation as their most
powerful and versatile instrument when verifying complex SoCs. Often,
formal techniques are only conceded a minor role. At best, they are
viewed as nice-to-have additions to conventional simulation, for
example as a tool for "hunting bugs" in corner cases. Fortunately, in
some parts of industry, a paradigm shift can be observed. Verification
methodologies have emerged that involve property checking
comprehensively, and in a systematic way. This has led to major
innovations in industrial design flows. There are more and more
applications where formal property checking not only complements
but actually replaces simulation.

In this talk, we report on experiences from large-scale industrial
projects documenting this emancipation of property checking. We
present a systematic methodology as it has evolved in some parts of
the industry. Furthermore, we attempt to identify the bottlenecks of
today's technology and to outline specific scientific challenges.

While formal property checking for individual SoC modules can be
considered quite mature, it is well known that there are tremendous
obstacles when moving the focus from modules to the entire system. On
the system level, today's verification tools run into severe capacity
problems. These result not only from the sheer size of the system
but also from the different nature of the verification tasks. In our
presentation, we will analyze and discuss these challenges, relating
to well-known abstraction approaches and to techniques for state space
approximation. More specifically, as a first step towards formal
chip-level verification, we will talk about techniques for verifying
communication structures (interfaces) between the individual SoC
modules. We will outline some new ideas on how certain abstraction
techniques can be tailored to a specific verification methodology
such that a correctness proof becomes tractable even for complex SoCs.


Automated Extraction of Inductive Invariants to Aid Model Checking

Michael Case, Alan Mishchenko and Robert Brayton

Model checking can be aided by inductive invariants, small local
properties that can be proved by simple induction.  We present a way
to automatically extract inductive invariants from a design and then
prove them.  The set of candidate invariants is broad and expensive
to prove, and many invariants turn out not to help model checking.
In this work, we develop a new method for systematically
exploring the space of candidate inductive invariants, which allows us
to find and prove invariants that are few in number and immediately
help the problem at hand.  This method is applied to interpolation
where invariants are used to refute an error trace and help discard
spurious counterexamples.
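As a toy illustration of the kind of check involved (not the paper's algorithm, and with all names invented here): a candidate invariant is inductive if it holds in every initial state and is preserved by every transition. For a small explicit-state system the check can be brute-forced:

```python
# Toy illustration (not the paper's method): checking that a candidate
# invariant is inductive for a small explicit-state transition system.

def is_inductive(states, init, trans, inv):
    """inv is inductive if it holds in all initial states and every
    transition from an inv-state lands in an inv-state."""
    # Base case: the invariant holds in every initial state.
    if not all(inv(s) for s in init):
        return False
    # Inductive step: the invariant is preserved by the transition relation.
    return all(inv(t) for s in states if inv(s) for t in trans(s))

# Example: a modulo-4 counter that steps by 2, so it stays even forever.
states = range(4)
init = [0]
trans = lambda s: [(s + 2) % 4]
even = lambda s: s % 2 == 0

print(is_inductive(states, init, trans, even))   # True: "even" is inductive
```

In practice the two checks are discharged symbolically, e.g. with a SAT solver, rather than by state enumeration.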


Checking Safety by Inductive Generalization of Counterexamples to Induction

Aaron Bradley and Zohar Manna

Scaling verification to large circuits requires some form of
abstraction relative to the asserted property.  We describe a safety
analysis of finite-state systems that generalizes from counterexamples
to the inductiveness of the safety specification to inductive
invariants.  It thus abstracts the system's state space relative to
the property.  The analysis either strengthens a safety specification
to be inductive or discovers a counterexample to its correctness.  The
analysis is easily made parallel.  We provide experimental data
showing how the analysis time decreases with the number of processes
on several hard problems.


Fast Minimum Register Retiming Via Binary Maximum-Flow

Aaron Hurst, Alan Mishchenko and Robert Brayton

We present a formulation of retiming to minimize the number of
registers in a design by iterating a maximum network flow
problem. Because all flows are unitary, the problem can be simplified
to binary marking. Existing methods solve this as an instance of
minimum cost network flow, an algorithmically and practically more
difficult problem than maximum flow. Our algorithm has a worst-case
bound of O(R²E), and we demonstrate on a set of circuits that our
formulation is 5x faster than minimum-cost-based methods. Furthermore,
the retiming returned is the optimum one that minimizes the total
distance of register movement.
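Not part of the abstract, and certainly not the authors' implementation: as a sketch of the subproblem the retiming is reduced to, here is maximum flow on a unit-capacity ("binary") network via BFS augmenting paths, where every augmentation pushes exactly one unit:

```python
# Illustrative sketch only: max flow on a unit-capacity network via
# BFS augmenting paths (each augmenting path carries exactly one unit).
from collections import deque

def max_flow(n, edges, s, t):
    """edges: list of (u, v) unit-capacity arcs; returns the max s-t flow."""
    cap = {}
    adj = [[] for _ in range(n)]
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)        # residual (reverse) arc
        adj[u].append(v)
        adj[v].append(u)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # All residual capacities on the path are 1, so augment by 1.
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Two vertex-disjoint s-t paths => max flow 2.
print(max_flow(4, [(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3))  # 2
```

Because all capacities are binary, no minimum-cost machinery is needed; the flow values degenerate to a 0/1 marking of the arcs.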


Formal Verification of Partial Good Self-Test Fencing Structures

Adrian Seigler, Gary Van Huben and Hari Mony

The concept of applying partial fencing to logic built-in self test
(LBIST) hardware structures for the purpose of using partially good
chips is well known in the chip design industry. Deceptively
difficult, though, is the task of verifying that any particular
implementation of partial fencing logic actually blocks the
downstream impact of all signals from fenced interfaces while not
inadvertently precluding any common logic from being fully tested.

In this paper we discuss a case study for a verification method which
exploits the power of formal verification to prove that any given
partial fencing design satisfies all behavioral expectations. We
describe the details of the verification method and discuss the
benefits of using this approach versus using traditional simulation
methods. We also discuss the testbenches created as part of applying
this new method. Furthermore, we discuss the formal verification
algorithms that were employed during application of the method along
with the tuning that was done to enable efficient completion of the
verification tasks at hand.


Case study: Integrating FV and DV within the Verification of Intel®
Core Microprocessor

Alon Flaisher, Alon Gluska and Eli Singerman

The ever-growing complexity of Intel® CPUs, together with shortened
time-to-market requirements, imposes significant challenges on
pre-silicon logic verification. To address the increasing verification
gap, major improvements to verification practices are required. In
Merom, Intel's Core 2 Duo microprocessor, we integrated Formal
Verification (FV) with Dynamic Verification (DV) such that FV was also
practiced by non-FV experts and replaced some traditional,
simulation-based verification activities. This led to both higher
productivity and better quality compared to previous projects. In this
paper we report on the integration we used including two examples,
results, and future directions.


Circuit-Level Verification of a High-Speed Toggle

Chao Yan and Mark Greenstreet

As VLSI fabrication technology progresses to 65nm feature sizes and
smaller, transistors no longer operate as ideal switches.  This
motivates the verification of digital circuits using continuous
models.  This paper presents the verification of the high-speed toggle
flip-flop proposed by Yuan and Svensson.  Our approach builds on the
projection based methods originally proposed by Greenstreet and
Mitchell.  While they were only able to demonstrate their approach
with two- and three-dimensional systems, we apply projection based
analysis to a seven-dimensional model of the flip-flop.  We believe
this is the largest non-linear circuit or hybrid system verified to
date.

In this paper, we describe how we overcame problems of numerical
accuracy and stability associated with the original projection based
methods.  In particular, we present a novel linear-program solver and
new methods for constructing accurate linear approximations of
non-linear dynamics.  We use the toggle flip-flop as an example and
consider how these methods could be extended to verify a complete,
standard cell library for digital design.


Combining Symbolic Simulation and Interval Arithmetic for the
Verification of AMS Designs

Mohamed Zaki, Ghiath Al Sammane, Sofiene Tahar and Guy Bois

Analog and mixed signal (AMS) designs are important integrated
circuits that are usually needed at the interface between the
electronic system and the real world. Recently, several formal
techniques have been introduced for AMS verification. In this paper,
we propose a difference equations based bounded model checking
approach for AMS systems.  We define model checking using a combined
system of difference equations for both the analog and digital parts,
where the state space exploration algorithm is handled with Taylor
approximations over interval domains. We illustrate our approach on
the verification of several AMS designs, including a delta-sigma
modulator (DSM) and oscillator circuits.


Analyzing Gene Relationships for Down Syndrome with Labeled
Transitions Graphs

Neha Rungta, Hyrum Carroll, Eric Mercer, Randall Roper, Mark Clement
and Quinn Snell

Down Syndrome (DS) patients have an extra copy of chromosome 21 that
causes certain phenotypes including mental retardation and distinct
physical characteristics. In many cases, chromosome 21 genes cause
these phenotypes by modulating the expression of other genes in
complex gene regulatory networks. The increasing number and size of
gene regulatory networks make it infeasible to manually derive gene
relationships. This work combines gene regulatory networks to build a
labeled transition graph. We then perform a reachability analysis
using a randomized depth-first search to generate traces of gene
interaction relationships between chromosome 21 and the genes
responsible for DS phenotypes. The gene interactions presented in this
work provide new information to DS researchers for the amelioration or
prevention of specific DS phenotypes.


A Formal Model of Clock Domain Crossing and Automated Verification of
Time-Triggered Hardware

Julien Schmaltz

We develop formal arguments about a bit clock synchronization
mechanism of time-triggered hardware. The architecture is inspired by
the FlexRay standard and described at the gate level. The
synchronization algorithm relies on specific values of a counter. We
show that under our theory, values proposed in the literature do not
allow correct transmission. Our framework is based on a general and
precise model of clock domain crossing, which considers meta-stability
and clock imperfections. Our approach combines this model with the
state transition representation of hardware. The result is a clear
separation of analog and digital behaviors. Analog considerations are
formalized in the logic of the Isabelle/HOL theorem prover. Digital
arguments are discharged using the NuSMV model checker. To the best of
our knowledge, this is the first verification effort tackling
asynchronous transmission at the gate level.


Modeling Time-Triggered Protocols and Verifying their Real-Time Schedules

Lee Pike

Time-triggered systems are distributed systems in which the nodes are
independently-clocked but maintain synchrony with one another.
Time-triggered protocols depend on the synchrony assumption the
underlying system provides, and the protocols are often formally
verified in an untimed or synchronous model based on this assumption.
An untimed model is simpler than a real-time model, but it abstracts
away timing assumptions that must hold for the model to be valid.  In
the first part of this paper, we extend previous work by Rushby to
prove, using mechanical theorem-proving, that for an arbitrary
time-triggered protocol, its real-time implementation satisfies its
untimed specification.  The second part of this paper shows how the
combination of a bounded model-checker and a satisfiability modulo
theories (SMT) solver can be used to prove that the timing
characteristics of a hardware realization of a protocol satisfy the
assumptions of the time-triggered model.  The upshot is a
formally-verified connection between the untimed specification and the
hardware realization of a time-triggered protocol with respect to its
timing parameters.
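The paper discharges such timing obligations with a bounded model checker and an SMT solver; purely as an illustrative stand-in (all names and numbers below are invented), one can brute-force a bounded version of a typical obligation, e.g. that two clocks with bounded per-tick drift stay within a skew bound for k ticks:

```python
# Illustrative stand-in only (the paper uses BMC + an SMT solver, not
# enumeration): check that two drift-bounded clocks stay within a skew
# bound along every prefix of k ticks.
from itertools import product

def skew_bounded(k, lo, hi, bound):
    """Each tick, each clock advances by lo or hi time units.
    Returns True iff |c1 - c2| <= bound after every prefix of k ticks."""
    for steps1, steps2 in product(product((lo, hi), repeat=k), repeat=2):
        c1 = c2 = 0
        for d1, d2 in zip(steps1, steps2):
            c1 += d1
            c2 += d2
            if abs(c1 - c2) > bound:
                return False
    return True

# Worst-case skew after k ticks is k * (hi - lo), so bound 3 suffices
# for k = 3 with per-tick drift hi - lo = 1.
print(skew_bounded(3, 9, 10, 3))   # True
print(skew_bounded(3, 9, 10, 2))   # False (skew can reach 3)
```

An SMT solver performs the same bounded check symbolically, with the per-tick advances as real-valued variables constrained to [lo, hi] rather than enumerated endpoints.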


A Mechanized Refinement Framework for Analysis of Custom Memories

Sandip Ray and Jayanta Bhadra

We present a framework for formal verification of embedded custom
memories.  Memory verification is complicated by the difficulty of
abstracting design parameters, which is induced by the inherently
analog nature of transistor-level designs.  We develop behavioral
formal models that
specify a memory as a system of interacting state machines, and relate
such models with an abstract read/write view of the memory via
refinements.  The operating constraints on the individual state
machines can be validated by readily available data from analog
simulations.  The framework handles both static RAM (SRAM) and flash
memories, and we show initial results demonstrating its applicability.