A quick comparison of sv and its predecessor, esim.

SV and ESIM have many similarities. For instance:

- They largely share the same goal: to provide a representation for (RTL) hardware designs that can be automatically derived from Verilog/SystemVerilog, can be concretely or symbolically simulated, and can be formally analyzed by ACL2.
- They are both, like Verilog or SystemVerilog, based on four-valued logics where each "bit" of a wire can be either 1, 0, X, or Z; see ACL2::why-4v-logic for additional background on why four-valued logics are useful in hardware modeling.
- They both provide similar user-interfacing tools. For instance, the symbolic-test-vectors of SV are intentionally very similar to the symbolic test vectors of ESIM.
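To make the four-valued idea concrete, here is a minimal sketch (not the actual 4v or SV implementation) of a four-valued AND, where a floating (Z) input to a gate behaves like an unknown (X):

```python
# Hypothetical sketch of four-valued AND.  Each "bit" is one of
# 1, 0, 'X' (unknown), or 'Z' (undriven/floating).
def and4v(a, b):
    # A floating (Z) value used as a gate input acts like an unknown (X).
    a = 'X' if a == 'Z' else a
    b = 'X' if b == 'Z' else b
    if a == 0 or b == 0:
        return 0          # 0 dominates: 0 AND anything is 0
    if a == 1 and b == 1:
        return 1
    return 'X'            # any remaining combination is unknown

print(and4v(0, 'X'))  # -> 0
print(and4v(1, 'Z'))  # -> X
```

Note that 0 AND X is definitely 0 even though one input is unknown; this kind of partial information is exactly what a four-valued logic captures.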

However, SV and ESIM have significantly different implementations. Below, we discuss some of these differences, their motivations, and their consequences.

ESIM is a **bit-level** backend. That is, the expression language used
by ESIM is 4v. Each expression in this language represents a
single (four-valued) bit.

In contrast, SV is a **vector-level** backend: each expression in its language, svex, represents an entire (four-valued) bit vector rather than a single bit.
The main motivation for this change was to make it easier to translate large Verilog/SystemVerilog designs with reasonable performance. In ESIM, bit-blasting every expression has proven to be a major performance bottleneck:

- In some cases it required the creation of millions of symbols, which could be slow just to intern. We typically built ACL2 with a much larger ACL2 package to improve performance.
- These symbols were (of course) then consed together into various structures, which could become large and require lots of memory to represent (and to hons).
- We needed separate expressions for each bit of the circuit. For instance, an assignment like `assign foo[7:0] = bar[7:0] & baz[7:0]` would require eight separate, distinct `and` expressions, i.e., `(and bar[7] baz[7])`, `(and bar[6] baz[6])`, and so on. Each of these was an ACL2::4v-sexpr, typically a hons, so there was a significant memory cost.
- Traversing so many individual bit expressions added significant time overhead to algorithms such as evaluating, reducing, topologically sorting, and composing update functions, and it also led to much larger memo tables.
- It takes some memory to record the association of Verilog-level vectors to their corresponding bits, and some computation to reassemble vectors out of their bits, which you need to do in tools like waveform dumping and symbolic test vector processing.
- There are other undesirable consequences for the front-end (i.e., Verilog translation), which we will come back to later.
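The representational difference behind these costs can be sketched in a few lines (a hypothetical illustration, not SV's actual data structures): bit-blasting turns one vector assignment into one expression per bit, while a vector-level language keeps it as a single expression.

```python
# Hypothetical sketch of the representation cost described above.
# Bit-level (ESIM-style): `assign foo[7:0] = bar[7:0] & baz[7:0]`
# becomes one AND expression per bit of the vector.
bit_level = [("and", f"bar[{i}]", f"baz[{i}]") for i in range(8)]

# Vector-level (SV-style): a single expression over whole vectors,
# regardless of their width.
vector_level = ("bitand", "bar", "baz")

print(len(bit_level))  # -> 8 expressions to build, hons, and traverse
print(vector_level)    # -> one expression
```

For a 64-bit datapath the bit-level list would have 64 entries, while the vector-level form stays the same size; this is the source of the memory and traversal savings described above.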

A vector-level language largely avoids these problems. Since each variable represents an entire vector, we need far fewer symbols and expressions, and correspondingly far less memory is needed to represent structures such as the state of the circuit at a particular instant in time and its update functions. The symbolic expressions for our update functions are also more compact, requiring less memory for memo tables and less time for traversals. We don't need to track associations between vectors and their individual bits for waveforms/etc., which further reduces memory overhead and computation time.

A vector-level expression language may also lend itself to more specialized reasoning strategies than just bit-blasting. Bit-blasting is still an important tool and is still well supported by SV, but delaying it opens up opportunities for certain kinds of vector-level analysis, such as rewriting.
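As a rough illustration of why rewriting benefits from the vector level, here is a hypothetical toy rewriter (not SV's actual rewriter) applying a rule like `(bitand x x) -> x`. At the vector level the rule fires once per expression; after bit-blasting, the same simplification would have to be found once per bit.

```python
# Hypothetical sketch of a vector-level rewrite.  Expressions are
# tuples like ("bitand", a, b); variables are plain strings.
def rewrite(expr):
    if isinstance(expr, tuple):
        # Rewrite arguments bottom-up first.
        expr = (expr[0],) + tuple(rewrite(e) for e in expr[1:])
        # Rule: ANDing a vector with itself is the vector itself.
        if expr[0] == "bitand" and expr[1] == expr[2]:
            return expr[1]
    return expr

print(rewrite(("bitand", "x", "x")))                   # -> x
print(rewrite(("bitand", ("bitand", "x", "x"), "x")))  # -> x
```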

In ESIM's module representation, each module is either:

- a primitive, defined by an explicit representation of outputs/next-state bits in terms of inputs/previous-states, or
- a compound module, defined by a list of submodule instances.
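This two-kind structure can be pictured with some hypothetical Python data (the field names here are illustrative, not ESIM's actual representation): a primitive carries explicit output functions, while a compound module is nothing but a list of instances.

```python
# Hypothetical sketch of ESIM's two module kinds.
# A primitive defines its output bits directly in terms of its inputs.
and_gate = {
    "kind": "primitive",
    "outs": {"o": ("and", "a", "b")},  # output o = a AND b
}

# A compound module has no behavior of its own; it is just a list of
# submodule instances.
two_ands = {
    "kind": "compound",
    "insts": [
        {"mod": "and_gate", "name": "u0"},
        {"mod": "and_gate", "name": "u1"},
    ],
}

print([i["name"] for i in two_ands["insts"]])  # -> ['u0', 'u1']
```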

This approach meant that to translate a typical Verilog module containing assignments as well as submodules, the assignments first needed to be broken down and represented as a series of submodules themselves. Historically this was done by vl2014 transforms such as vl2014::split and vl2014::occform.

In SV, we eliminate this process by directly representing assignments of expressions to wires in the modules; this removes the need for these two steps, and helps to make the actual SV representation look more like the original Verilog. (Well, slightly, anyway.)

One advantage of ESIM's module representation is that the semantics of a module hierarchy are relatively straightforward and well-defined: each module has its output and next-state functions in terms of its inputs and previous states, and the functions for a parent module are computed by taking a fixpoint of the compositions of the functions of all submodules.
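The fixpoint idea can be sketched as follows (a hypothetical, simplified model: single bits rather than vectors, and no handling of oscillation): start every internal wire at X and repeatedly apply each wire's update function until nothing changes.

```python
# Hypothetical sketch of fixpoint composition.  `updates` maps each
# wire name to a function from an environment (wire -> value) to that
# wire's new value.  Values are 1, 0, or 'X'; internal wires start at X.
def fixpoint(updates, inputs):
    env = dict(inputs)
    env.update({w: 'X' for w in updates})
    changed = True
    while changed:
        changed = False
        for wire, fn in updates.items():
            val = fn(env)
            if env[wire] != val:
                env[wire] = val
                changed = True
    return env

# Wire b follows input a, and wire c follows b; at the fixpoint,
# c's value is expressed purely in terms of the primary input a.
updates = {"b": lambda e: e["a"], "c": lambda e: e["b"]}
print(fixpoint(updates, {"a": 1})["c"])  # -> 1
```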

In SV, rather than expressing the semantics in terms of a module hierarchy,
only the expression language has a real concrete interpretation. To obtain the
meaning of a module hierarchy, we first **flatten** it and resolve aliasing
between wires, obtaining a flattened set of assignments and state updates. We
then compose these together to obtain update functions for each wire and state
element in terms of primary inputs and previous states.
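As a toy illustration of the alias-normalization step (hypothetical names and data structures, not SV's actual algorithm): pick a canonical name for each alias class, then rewrite the flattened assignments to use only canonical names before composing them.

```python
# Hypothetical sketch of alias normalization after flattening.
def canon(name, alias):
    # Follow alias chains until we reach a canonical (unaliased) name.
    while name in alias:
        name = alias[name]
    return name

# Suppose flattening revealed that submodule port u0.in is the same
# wire as top-level a, and u0.out is the same wire as top-level f.
alias = {"u0.in": "a", "u0.out": "f"}
assigns = {"u0.out": ("bitnot", "u0.in")}

normalized = {
    canon(lhs, alias): tuple(canon(x, alias) if isinstance(x, str) else x
                             for x in rhs)
    for lhs, rhs in assigns.items()
}
print(normalized)  # -> {'f': ('bitnot', 'a')}
```

After this step every assignment mentions only canonical wires, so the composition phase never has to reason about aliasing.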

While arguably some meaning is lost here—the flattening and alias-normalization transforms are sufficiently complicated that we can't claim that the module hierarchy has a nice, crisp semantics—an important advantage is that this process offers a much simpler way to deal with Verilog constructs that violate the module hierarchy. For example, consider a Verilog assignment to a hierarchical identifier. This sort of construct can't be accommodated in ESIM without drastically modifying its simple, elegant hierarchical semantics, but it is fairly straightforward in SV.

Meanwhile, ESIM's support for bidirectionally-driven wires relies on a subtle, unverified transformation to the module hierarchy which is necessary to ensure a fundamental assumption that every wire has only one driving module. In contrast, with SV's flattening approach, a wire that is bidirectionally driven reduces to one that is just multiply driven; it is relatively easy to resolve any multiply-driven wires after flattening and before composition.
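A standard way to resolve two drivers on one wire, once everything is flattened, can be sketched like this (a hypothetical single-bit resolution function, ignoring drive strengths): a Z driver yields to the other driver, and two conflicting strong values produce X.

```python
# Hypothetical sketch of resolving a multiply-driven wire.
def resolve(d1, d2):
    if d1 == 'Z':
        return d2           # undriven yields to the other driver
    if d2 == 'Z':
        return d1
    return d1 if d1 == d2 else 'X'  # agreement wins; conflict is unknown

print(resolve('Z', 1))  # -> 1   (only one active driver)
print(resolve(0, 1))    # -> X   (conflicting strong drivers)
```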

SV's approach to deriving an FSM from a module hierarchy via flattening, alias normalization, and composition has proven to be very convenient for expressing many SystemVerilog features such as compound data structures. Thanks to this, SV can support a richer set of SystemVerilog designs than ESIM.

The following timings and performance discussion are for one of Oracle's main hardware proof stacks. Perhaps these statistics should be taken lightly, as they were only single runs, and it's possible (though somewhat unlikely) that there was contention for the cores. The times were:

- In ESIM: 12 hours and 14 minutes
  `19758.892u 434.856s 12:14:18.48 45.8% 0+0k 692208+2144728io 18pf+0w`
- In SV: 7 hours and 7 minutes
  `16128.279u 410.741s 7:07:16.04 64.5% 0+0k 520448+1000408io 12pf+0w`

Generally speaking, proofs are either the same speed or faster under the SV framework. One *composition* (svex-decomp) proof using GL (the old method for doing such proofs) took approximately 5386 seconds in ESIM and now takes 69 seconds in SV. As another example, one proof that describes the functionality of a Verilog circuit took 166 seconds in ESIM and now takes 170 seconds in SV.