Trustworthy Hardware from Certified Behavioral Synthesis

Background, Motivation, and Goals

Recent years have witnessed continuing miniaturization of VLSI design and fabrication technologies, producing chips with ever-higher transistor density. A consequence of this advancement, together with the high computational demands of modern applications, is that hardware designs have continued to rise in complexity to make use of all the available transistors. This makes it challenging to produce reliable, high-quality hardware designs through hand-crafted Register-Transfer Level (RTL) or gate-level implementations. The problem is exacerbated by aggressive time-to-market requirements, leading to what is often referred to as the design productivity gap. Electronic System Level (ESL) design is often seen as a solution to this gap. The idea of ESL is to raise the design abstraction by specifying a hardware design behaviorally with a high-level language such as SystemC, C, or C++. ESL specifications are synthesized to hardware through a process known as behavioral synthesis. Behavioral synthesis consists of a number of interdependent components, e.g., compilation, scheduling, resource allocation, and control synthesis, each of which involves applying a sequence of transformations that ultimately compile an ESL description into RTL.

Research in behavioral synthesis is now relatively mature, with several available tools such as AutoPilot, Bluespec, Cynthesizer, SPARK, etc. Nevertheless, and despite the great need for it, behavioral synthesis has not yet found wide acceptance in current industrial practice. A major barrier to its adoption is the lack of designers' confidence in the correctness of the synthesis tools themselves. Behavioral synthesis tools are complex software systems, often with thousands of lines of code, and consequently they are prone to logical and implementation errors. Errors in synthesis tools can result in RTL designs with bugs, with expensive ramifications. Thus, it is critical to develop techniques for certifying that the synthesized RTL conforms to its behavioral specification. Indeed, the state of behavioral synthesis today has sometimes been likened to that of logic synthesis (automatic translation of synthesizable RTL to gates), whose adoption in the 1990s was boosted significantly by the advancement of equivalence checking technology to automatically check conformance of the synthesized netlist with the RTL. Unfortunately, the significant gap in abstraction level between an ESL description and RTL — the very reason for adopting ESL for design specification — makes it difficult to check conformance between the two. Consequently, behaviorally synthesized RTL designs are either (a) error-prone or (b) overly conservative, producing circuits of poor quality and performance.

The goal of this project is to develop a generic, mechanized framework for applying formal analysis and verification techniques to certify the correctness of hardware designs produced through behavioral synthesis. Here, "correctness" means that the synthesized RTL implements the input ESL description. Our research involves investigating how to decompose the certification of correctness into pieces that can be effectively handled by a synergistic combination of different formal verification techniques. We have used our framework to certify several RTL designs synthesized by AutoPilot. Our experiments indicate that our framework can scale to designs with tens of thousands of lines of synthesized RTL.

Funding, Personnel, and Organizations

This is a collaborative research project between Portland State University and the University of Texas at Austin. The research is funded by the National Science Foundation under Grants CCF-0917188 and CCF-0916772.[1] In addition, we collaborate closely with industry (e.g., researchers from Intel Corporation and AutoESL) to ensure that our frameworks and reasoning tools meet industry needs.

Here are the people who have been involved in this research so far in various capacities.

We're looking for more help, so if you're interested in the work feel free to talk to us.


The publications below are posted roughly in reverse chronological order. The ATVA 2009 paper provides a general introduction to our framework, and explains how we decompose the problem into pieces that can be handled by the complementary techniques of theorem proving and model checking. The DATE 2010 paper presents several optimizations that scale these techniques up to practical designs. The DAC 2012 paper shows how to perform equivalence checking in the presence of loop pipelining; this is non-trivial since pipelining destroys the mapping of internal operations between the high-level design description and the synthesized hardware.

[1] Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.