High-level description of the decomposition methodology for SVTVs.

In hardware verification, it is important to be able to break down proofs into pieces. An important technique for this purpose is to assume that certain internal signals have particular values at particular times, and prove something about their fanout logic. For symbolic simulation, the best performance for this method is obtained by replacing the fanin logic of these signals with free symbolic values, which we call "overriding" those signals. In order to compose together multiple proofs using this technique, it is important to be able to use a theorem proved about a design's behavior with certain signals overridden to prove a theorem about the design without (some of) those overrides.

See also the sv-tutorial and in particular the topic decomposition-proofs, which walks through a runnable example.

As a high-level example, suppose we prove that the partial product summation
of a multiplier is correct by overriding the partial product signals. Note:
the following uses a made-up syntax, for illustration purposes only; in our
real framework, theorems like these are stated about SVTVs.

(defthm multiplier-pp-sum-correct-override
  (implies (unsigned-byte-p 128 partial-products-value)
           (let* ((run (run-design (multiplier)
                                   :inputs `((opcode . ,*multiply-opcode*))
                                   :overrides `((partial-product-signal . ,partial-products-value)))))
             (equal (lookup 'multiplier-output run)
                    (pp-sum-spec partial-products-value)))))

Additionally, suppose that we've proved that the same multiplier computes a correct Booth encoding as its partial products signal:

(defthm multiplier-booth-encoding-correct
  (implies (and (unsigned-byte-p 16 a)
                (unsigned-byte-p 16 b))
           (let* ((run (run-design (multiplier)
                                   :inputs `((opcode . ,*multiply-opcode*)
                                             (a-input . ,a)
                                             (b-input . ,b)))))
             (equal (lookup 'partial-product-signal run)
                    (booth-enc-spec a b)))))

We'd like to be able to compose these two theorems and prove the multiplier correct from end to end. But the design here has been run with two different settings for the inputs and overrides, and these can't be reconciled as-is. At a high level, our decomposition methodology works by allowing theorems such as these -- which are convenient to prove using symbolic simulation -- to be generalized into theorems that can be straightforwardly composed. Using the same made-up syntax:

(defthm multiplier-pp-sum-correct-gen
  (let* ((opcode (lookup 'opcode input-env))
         (run (run-design (multiplier) :inputs input-env))
         (partial-products-value (lookup 'partial-product-signal run))
         (result (lookup 'multiplier-output run)))
    (implies (and (equal opcode *multiply-opcode*)
                  (unsigned-byte-p 128 partial-products-value))
             (equal result (pp-sum-spec partial-products-value)))))

Notice this has been changed in two ways:

- First, the :inputs argument, which used to be an explicit alist, is now a
free variable input-env, with a hypothesis assuming that the inputs that were
given before (opcode bound to *multiply-opcode*) are correctly bound there.

- Second, the overrides are replaced by references to observations of
internal signals. That is, we're no longer replacing the fanin cone of
partial-product-signal with a variable, as we wanted for symbolic simulation;
rather, we're looking at whatever value that signal happens to take, and
proving that the multiplier output satisfies its spec relative to that value.

The booth encoding theorem can be changed similarly, although since it had no overrides only the first change applies here:

(defthm multiplier-booth-encoding-correct-gen
  (let* ((opcode (lookup 'opcode input-env))
         (a (lookup 'a-input input-env))
         (b (lookup 'b-input input-env))
         (run (run-design (multiplier) :inputs input-env))
         (partial-products-value (lookup 'partial-product-signal run)))
    (implies (and (equal opcode *multiply-opcode*)
                  (unsigned-byte-p 16 a)
                  (unsigned-byte-p 16 b))
             (equal partial-products-value (booth-enc-spec a b)))))

Notice that these theorems can now be composed! In fact, given a lemma about
the bit width of booth-enc-spec (discharging the unsigned-byte-p 128
hypothesis), they suffice to prove the multiplier correct end to end:

(defthm multiplier-computes-pp-sum-of-booth-enc
  (let* ((opcode (lookup 'opcode input-env))
         (a (lookup 'a-input input-env))
         (b (lookup 'b-input input-env))
         (run (run-design (multiplier) :inputs input-env))
         (result (lookup 'multiplier-output run)))
    (implies (and (equal opcode *multiply-opcode*)
                  (unsigned-byte-p 16 a)
                  (unsigned-byte-p 16 b))
             (equal result (pp-sum-spec (booth-enc-spec a b))))))

This methodology's main user interface is def-svtv-generalized-thm. See that topic for usage. In the rest of this topic we'll discuss how this works and the solutions to certain complications.

To produce the generalized forms of the theorems above, we want a particular
property of the design: that if we override a signal with the value that it
would have had anyway, then the values of all signals are preserved. Stated in
terms of our imaginary syntax:

(defthm multiplier-partial-product-overrides-transparent
  (let* ((base-run (run-design (multiplier) :inputs input-env))
         (partial-products-value (lookup 'partial-product-signal base-run))
         (override-run (run-design (multiplier)
                                   :inputs input-env
                                   :overrides `((partial-product-signal . ,partial-products-value)))))
    (all-lookups-equivalent override-run base-run)))

This theorem, along with a couple of other simpler facts that allow the
generalization of the input environment, would be sufficient to let us
generalize multiplier-pp-sum-correct-override into
multiplier-pp-sum-correct-gen.

Generally we also want to allow for multiple potentially overridden signals in different combinations:

(defthm multiplier-partial-product-overrides-transparent
  (let* ((base-run (run-design (multiplier) :inputs input-env))
         (override-run (run-design (multiplier)
                                   :inputs input-env
                                   :overrides override-env)))
    (implies (override-signals-consistent-with-base-run override-env base-run)
             (all-lookups-equivalent override-run base-run))))

Here we imagine that override-signals-consistent-with-base-run checks that each signal bound in override-env is bound to the value that same signal takes in base-run.

This overrides-transparent condition seems intuitively obvious, and we would like to prove it for all designs. However, it is actually a fairly deep property of the design object, and as we'll see, depending on how we compose the design together it may or may not be true.
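To make the condition concrete, here is a minimal Python sketch. This is a toy model with hypothetical names (run_design, the signal names, and the update functions are all illustrative assumptions), not the SV library's actual representation; its only purpose is to show what "overriding a signal cuts off its fanin logic" and "overriding with the value the signal would have had anyway preserves every lookup" mean.

```python
def run_design(inputs, overrides=None):
    """Evaluate a tiny acyclic netlist.  Each internal signal either
    takes its value from `overrides` (cutting off its fanin logic)
    or is computed by its update function."""
    overrides = overrides or {}
    env = dict(inputs)

    def drive(name, compute):
        env[name] = overrides[name] if name in overrides else compute()

    # Stand-ins for the booth encoding and the partial-product summation:
    drive('partial-product-signal', lambda: env['a'] * env['b'])
    drive('multiplier-output', lambda: env['partial-product-signal'])
    return env

base_run = run_design({'a': 5, 'b': 7})

# Override the internal signal with the value it would have had anyway;
# the overrides-transparent condition says every lookup is preserved.
override_run = run_design(
    {'a': 5, 'b': 7},
    {'partial-product-signal': base_run['partial-product-signal']})
assert override_run == base_run
```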

We have two underlying methods for ensuring that this property holds. The
first method proves this property of the actual SVTV we're used to
working with. This uses a syntactic check, which is fast and reliable but may
fail on designs with apparent combinational loops, such as designs with
latch-based logic or signals that feed into their own clock gates. If the
syntactic check fails, an equivalence check using FGL can be attempted
instead.

The second method does not prove the overrides-transparent condition of the actual SVTV, but instead proves it of an uncomputed idealized version of the SVTV. This method doesn't require a syntactic check or equivalence check and will work on any design. However, proofs by composition will then be about the idealized SVTV, even though the lower-level proofs by symbolic simulation are done on the usual SVTV. This can be a disadvantage because we don't then know whether the properties shown by composition are true of the SVTV. See svtv-decomposition-choosing-a-method for more on this topic.

The syntactic check method works by checking the expressions that make up
the finite-state machine derived from the circuit. Overrides are accomplished
by replacing references to signals with override muxes: essentially, when an
override condition is active the mux produces the override value, and
otherwise it produces the value driven by the signal's original fanin logic.
The correct way to deal with 0-delay combinational loops is to compute a
*fixpoint*. That is, for a given setting of the circuit inputs and
stateholding elements, begin at a state with those values for the
inputs/states and all internal signals set to X. Apply the internal signals'
update functions to obtain a new state. Repeat this until we reach a fixpoint.
It is a theorem that if all signals have finite bit widths and the update
functions respect X-monotonicity, then a fixpoint is reached by the time we
have repeated this n+1 times, where n is the total number of bits in all the
internal signals.
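The iteration can be sketched as follows, in a toy three-valued Python model where None stands for X. The names and the latch-style example are illustrative assumptions, not the library's actual code; the update functions here happen to be X-monotone, so iteration from all-X converges.

```python
X = None  # "unknown"; monotone updates only ever refine X into a real value

def fixpoint(update_fns, inputs):
    """Iterate the internal signals' update functions, starting from an
    all-X state, until the state stops changing."""
    state = {sig: X for sig in update_fns}
    while True:
        env = {**inputs, **state}
        new_state = {sig: fn(env) for sig, fn in update_fns.items()}
        if new_state == state:
            return state
        state = new_state

# A latch-style 0-delay loop: q reads itself when the enable is off.
fns = {'q':   lambda e: e['d'] if e['en'] else e['q'],
       'out': lambda e: e['q']}
assert fixpoint(fns, {'en': True, 'd': 5}) == {'q': 5, 'out': 5}
# With the enable off, the loop never resolves and q conservatively stays X.
assert fixpoint(fns, {'en': False, 'd': 5}) == {'q': X, 'out': X}
```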

Because of the number of repetitions needed, it isn't always practical to actually compute the fixpoint. Instead, in the second method we use an approximate composition method that is efficient, practically useful, and conservative with respect to the fixpoint: that is, if a signal's value in this approximate composition is non-X, then its fixpoint value must be the same.
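A sketch of the contrast, in the same toy three-valued model as above (hypothetical names, None standing for X): the approximate composition applies the update functions only a fixed number of times, so a signal it leaves at X may be non-X in the fixpoint, but a non-X approximate value always agrees with the fixpoint value.

```python
X = None  # "unknown"

def approx(update_fns, inputs, n):
    """Approximate composition: apply the update functions a fixed
    number of times starting from all-X, rather than iterating until
    a fixpoint is reached."""
    state = {sig: X for sig in update_fns}
    for _ in range(n):
        env = {**inputs, **state}
        state = {sig: fn(env) for sig, fn in update_fns.items()}
    return state

fns = {'q':   lambda e: e['d'] if e['en'] else e['q'],
       'out': lambda e: e['q']}

# With too few compositions, `out` is still X -- but conservatively so:
# it is never a non-X value that disagrees with the fixpoint.
assert approx(fns, {'en': True, 'd': 5}, 1) == {'q': 5, 'out': X}
# One more composition reaches the fixpoint values.
assert approx(fns, {'en': True, 'd': 5}, 2) == {'q': 5, 'out': 5}
```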

The approximate composition does not always satisfy the overrides-transparent condition above. For example, if we override a signal that is part of a 0-delay combinational loop, then its fanin logic appears more than once in different forms (composed with itself different numbers of times), and overriding all occurrences of that fanin logic with the signal's final value isn't guaranteed to preserve the values of other signals.

However, the overrides-transparent condition is true of the fixpoint; one
version of this is proved in "centaur/sv/svex/fixpoint-override".

This leads us to the following methodology for composing hardware proofs with overrides:

- Compute a symbolic representation *approx* of the desired run of the design
based on the approximate composition -- in particular, using defsvtv$.

- Define a non-executable function *ideal* that logically computes the
analogous run with the fixpoint composition. This can be done using
def-svtv-ideal.

- To prove composition-friendly facts about *ideal*, first prove a lemma with
overrides about *approx* (such as multiplier-pp-sum-correct-override), then
use the conservativity of *approx* with respect to *ideal* along with the
overrides-transparent property of *ideal* to prove the composition-friendly
fact (such as multiplier-pp-sum-correct-gen) about *ideal*. This is automated
by def-svtv-generalized-thm.

Alternatively, to use the syntactic check method instead:

- Compute a symbolic representation of the desired run of the design based on the approximate composition -- in particular, using defsvtv$.
- Check that the approximate composition satisfies the syntactic check
described above regarding override muxes, or fall back on the FGL equivalence
check to prove override transparency if the syntactic check fails. This can be
done using def-svtv-refinement.

- To prove composition-friendly facts about the SVTV, first prove a lemma
with overrides (such as multiplier-pp-sum-correct-override), then use the
overrides-transparent property of the SVTV to prove the composition-friendly
fact (such as multiplier-pp-sum-correct-gen). This is automated by
def-svtv-generalized-thm.

See svtv-decomposition-choosing-a-method for pros and cons of the above methods.

Subtopics:

- Def-svtv-generalized-thm: Prove a theorem about an idealized SVTV via a symbolic simulation lemma about the SVTV itself.
- Svtv-decomposition-choosing-a-method: Summary of the relative advantages of the ideal-based vs syntactic-check-based methods for SVTV decomposition.
- Def-svtv-refinement: For a given SVTV, prove the theorems necessary to use that SVTV in (de)composition proofs using def-svtv-generalized-thm, as in the svex-decomposition-methodology.
- Def-svtv-ideal: Define a non-executable, fixpoint-based analogue of a symbolic-test-vector and prove properties that allow it to be used in decomposition proofs.