Discussion of the least-fixpoint theory used in SV

One of the important steps in our method for interpreting the semantics of a
hardware design is transforming (composing) a network of local variable
assignments into a set of full 0-delay update functions comprising a finite
state machine. The practical method of doing this is discussed in svex-composition. The input to this step is a set of assignments each giving
the value of a variable in terms of inputs, previous states, and other internal
variables, and the output of the step is a new set of assignments that
eliminates the dependencies on other internal variables. As a simple example,
suppose

assign a = f(b); assign b = g(c);

Then the output of the composition step should be a new set of assignments of the same variables:

assign a = f(g(c)); assign b = g(c);

This is straightforward if the dependency graph formed by all the assignments is acyclic. However, this is not always the case, even in relatively conventional hardware designs. Latch-based logic tends to create loops that will be broken under each setting of the clock and enable signals. Even flipflop-based designs can have similar apparent self-loops if clock gate signals are derived from the outputs of flops that they gate.
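
As an illustration only (a Python sketch of the idea, not the SV library's implementation), acyclic composition can be modeled as recursive substitution; the tuple encoding of expressions and the name `subst` are our own:

```python
# A minimal sketch of acyclic composition. Expressions are modeled as nested
# tuples like ("f", "b") for f(b); bare strings are variables.
# Illustrative only -- not the SV library's implementation.

def subst(expr, assigns):
    """Recursively replace internally-assigned variables by their expressions.
    Note this diverges on cyclic networks, which is the problem the
    X-substitution approach described next is meant to handle."""
    if isinstance(expr, str):
        return subst(assigns[expr], assigns) if expr in assigns else expr
    op, *args = expr
    return (op,) + tuple(subst(a, assigns) for a in args)

assigns = {"a": ("f", "b"), "b": ("g", "c")}
composed = {v: subst(e, assigns) for v, e in assigns.items()}
# composed["a"] is ("f", ("g", "c")) -- i.e., a = f(g(c))
```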

The general practical approach to such situations (as used in svex-composition) is to substitute an X (signifying an unknown value) for each variable involved in the loop and then unroll the loop a few times. E.g., supposing we have

assign a = f(b, c); assign b = g(a, d);

we can unroll this to:

assign a = f(g(f(g('X, d), c), d), c); assign b = g(f(g(f('X, c), d), c), d);
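
Continuing the Python sketch (our own illustrative encoding, not the SV implementation), this unrolling amounts to substituting the network into itself a fixed number of times, starting every internal variable at X:

```python
# Illustrative sketch: unroll a cyclic network by repeated one-level
# substitution, starting from the unknown value "X" for each internal wire.

def subst_once(expr, current):
    """One level of substitution; does not recurse into the substituted text."""
    if isinstance(expr, str):
        return current.get(expr, expr)
    op, *args = expr
    return (op,) + tuple(subst_once(a, current) for a in args)

def unroll(network, n):
    current = {v: "X" for v in network}
    for _ in range(n):
        current = {v: subst_once(e, current) for v, e in network.items()}
    return current

assigns = {"a": ("f", "b", "c"), "b": ("g", "a", "d")}
four = unroll(assigns, 4)
# four["a"] == ("f", ("g", ("f", ("g", "X", "d"), "c"), "d"), "c"),
# i.e. a = f(g(f(g(X, d), c), d), c), matching the example above.
```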

To formalize the relationship between the outputs and inputs of this
process, we define a relation that holds when each output expression is some
composition of the input network's assignments, possibly with X substituted
for variables wherever the composition was cut short.

But this raises the question of whether such a relation is correct, or strong enough. It would be more satisfying to have a stronger connection between the network of assignments parsed from the hardware design and the FSM produced by composing them together. This weak connection also makes it difficult to support hierarchical and temporal decomposition -- e.g., with only this connection we can't argue that a network composed from a submodule has anything to do with a network composed from its parent module, because we might have chosen different ways to do the composition, and there is no a priori connection between them.

Fortunately, there is a stronger theoretical basis available: when the
assignment expressions are suitably monotonic, the network has a unique least
fixpoint, which serves as a canonical semantics against which any composition
result can be judged.

The fixpoint also has good decomposition properties. We show that if we take
the fixpoint of a network with certain internal signals replaced by artificial
values, then we get the same result as the fixpoint of the original network if
those artificial values match the values of the respective signals in the
original fixpoint. This lets us take theorems proved about a network
decomposed in this way and transfer them to the fixpoint of the original
network.
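
A toy Python illustration of this decomposition property (our own 4-valued model, not SV's): pinning an internal signal to the value it takes in the fixpoint, then recomputing the fixpoint, yields the same result.

```python
# Toy 4-valued bits "0", "1", "X"; gates send X to X unless the output is
# forced (0 dominates AND, 1 dominates OR). Illustrative only.

def and4(a, b):
    if a == "0" or b == "0": return "0"
    if a == "1" and b == "1": return "1"
    return "X"

def or4(a, b):
    if a == "1" or b == "1": return "1"
    if a == "0" and b == "0": return "0"
    return "X"

def fixpoint(network, n=3):
    """Iterate a (hypothetical) cyclic 2-wire network from the all-X
    valuation; 3 iterations suffice for this 2-bit example."""
    env = {"a": "X", "b": "X"}
    for _ in range(n):
        env = network(env)
    return env

def orig_net(env):          # a = and(b, 1);  b = or(a, 1)
    return {"a": and4(env["b"], "1"), "b": or4(env["a"], "1")}

fp = fixpoint(orig_net)     # {"a": "1", "b": "1"}

def cut_net(env):
    # Decomposed network: b is pinned to its value in the original fixpoint.
    return {"a": and4(env["b"], "1"), "b": fp["b"]}

assert fixpoint(cut_net) == fp   # same fixpoint as the original network
```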

Finally, this fixpoint should allow hierarchical decomposition, i.e. we should be able to use theorems proved about the fixpoint FSM of a submodule to prove similar facts about the fixpoint FSM of a larger scope.

The least fixpoint depends conceptually on the view of svex expressions and
4vec values as partially ordered according to how much information they
contain. Svex expressions take values that are called 4vecs, which
are (conceptually) vectors of 4-valued 'bits' that are either X, Z, 1, or 0.
Here 1 and 0 signify good Boolean values, Z means the wire is undriven, and X
means unknown. In the information ordering, X is the bottom, and the other
three values are above it but unrelated to each other. This ordering is
recognized by the function 4vec-<<=, and it extends pointwise to vectors and
to environments.
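
As a concrete (hypothetical, Python) model of this ordering on single bits and its pointwise lift:

```python
# Information ordering on 4-valued bits: "X" is the unique bottom;
# "0", "1", "Z" sit above it and are incomparable to each other.

def bit_lte(a, b):
    """True when a is below-or-equal-to b in the information ordering."""
    return a == "X" or a == b

def vec_lte(v, w):
    """Pointwise lift of the ordering to bit vectors (the 4vec idea)."""
    return len(v) == len(w) and all(bit_lte(a, b) for a, b in zip(v, w))
```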

An important property of some SVEX expressions is monotonicity with respect
to this information ordering. That is, an expression is monotonic if its
evaluation on two environments related by this ordering (extended pointwise
to environments) yields two values that are likewise related.
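
For a small operator, monotonicity can be checked exhaustively. Here a 4-valued AND (our own definition, chosen to be X-monotonic; not the SV library's) is verified over all ordered pairs of inputs:

```python
from itertools import product

BITS = ["X", "0", "1", "Z"]

def bit_lte(a, b):
    # X is the bottom of the information ordering; 0, 1, Z are incomparable.
    return a == "X" or a == b

def and4(a, b):
    """A monotonic 4-valued AND: 0 dominates; 1 AND 1 is 1; else unknown."""
    if a == "0" or b == "0": return "0"
    if a == "1" and b == "1": return "1"
    return "X"

# Whenever (a1, b1) <<= (a2, b2) pointwise, the outputs are ordered too.
for a1, b1, a2, b2 in product(BITS, repeat=4):
    if bit_lte(a1, a2) and bit_lte(b1, b2):
        assert bit_lte(and4(a1, b1), and4(a2, b2))
```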

We also sometimes use weaker forms of monotonicity which say that an
expression is monotonic on certain variables, or monotonic on all variables
except a certain set. This holds if, for any two environments that (1) are
related by the (pointwise) information ordering and (2) agree exactly on the
variables outside the given set, the expression's evaluations on the two
environments are likewise related.

If we have a set of assignments of expressions to variables, and those expressions are monotonic on all the variables assigned, and each expression has a maximum bit width, then we show how to create a least fixpoint.

Essentially, we just compose the entire list of expressions with itself
repeatedly, starting from a valuation that binds every internal variable to
X, for a sufficient number of iterations.
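
A toy Python sketch of this iteration (our own model and gate definitions, not the SV algorithm), on a cyclic two-wire network with inputs i and j:

```python
def and4(a, b):
    if a == "0" or b == "0": return "0"
    if a == "1" and b == "1": return "1"
    return "X"

def or4(a, b):
    if a == "1" or b == "1": return "1"
    if a == "0" and b == "0": return "0"
    return "X"

def least_fixpoint(i, j, nbits=2):
    """Iterate the cyclic toy network  a = and(b, i);  b = or(a, j)
    from the all-X valuation. For this 2-bit network a small fixed
    iteration count suffices."""
    env = {"a": "X", "b": "X"}
    for _ in range(nbits + 1):
        env = {"a": and4(env["b"], i), "b": or4(env["a"], j)}
    return env

# j = "1" breaks the loop: b is forced to 1, then a follows.
# i = "1", j = "0" is a genuine combinational loop: both wires stay X.
```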

How can we know we've reached a fixpoint?

First note that subsequent iterations only increase in the information
ordering. The starting valuation binds every variable to X, the bottom of the
ordering, so the first iteration's result is above it; and by monotonicity of
the expressions, if iteration n's result is above iteration n-1's, then
iteration n+1's result is above iteration n's. By induction, each iteration
only moves upward.

Next note that the valuations cannot strictly increase forever: each strict
increase must move at least one bit of some variable from X up to 0, 1, or Z,
and in an increasing chain such a bit can never change again. Since each
expression has a maximum bit width, only finitely many bits can leave X.

Therefore, it suffices to set the iteration count to the total number of bits
assigned by the network (the sum of the variables' maximum bit widths): after
that many iterations, no further strict increase is possible, so we must have
reached a fixpoint.

In fact, the fixpoint we arrive at this way is the unique least fixpoint of
the network. To see this, suppose that we have our fixpoint valuation and any
other fixpoint valuation. The all-X starting valuation is below the other
fixpoint, and by monotonicity each iteration step preserves that
relationship; therefore the valuation we compute is below the other fixpoint
in the information ordering. Since this holds for every fixpoint of the
network, ours is the least, and the least fixpoint is unique.
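
In the toy Python model used above (ours, not SV's), leastness can be observed directly: the combinational-loop network a = and(b, 1); b = or(a, 0) has three fixpoints, and the all-X valuation reached by iteration is below the other two.

```python
def bit_lte(a, b):
    return a == "X" or a == b

def and4(a, b):
    if a == "0" or b == "0": return "0"
    if a == "1" and b == "1": return "1"
    return "X"

def or4(a, b):
    if a == "1" or b == "1": return "1"
    if a == "0" and b == "0": return "0"
    return "X"

def step(env):
    # The loop network: a = and(b, 1);  b = or(a, 0)
    return {"a": and4(env["b"], "1"), "b": or4(env["a"], "0")}

# Enumerate all valuations and keep the fixpoints of the network.
fps = [{"a": a, "b": b} for a in "X01Z" for b in "X01Z"
       if step({"a": a, "b": b}) == {"a": a, "b": b}]

least = {"a": "X", "b": "X"}
assert least in fps
# The iterated fixpoint sits below every other fixpoint, pointwise.
assert all(bit_lte(least[v], fp[v]) for fp in fps for v in ("a", "b"))
```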

- flatnorm->ideal-fsm -- returns the fixpoint FSM derived from the assignment network and state updates (delays) given by the input.