5. Using def-gl-thm
The def-gl-thm command is the main interface for using GL to prove
theorems. Here is the command we used in the bit-counting example from 1. An Example GL Proof:
(def-gl-thm fast-logcount-32-correct
  :hyp (unsigned-byte-p 32 x)
  :concl (equal (fast-logcount-32 x)
                (logcount x))
  :g-bindings `((x ,(g-int 0 1 33))))
Unlike an ordinary defthm command, def-gl-thm takes separate
hypothesis and conclusion terms (its :hyp and :concl arguments). This
separation allows GL to use the hypothesis to limit the scope of the symbolic
execution it will perform. You also have to provide GL with a :g-bindings
argument that describes the symbolic object to use for each free variable in
the theorem.
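For comparison, a successful def-gl-thm stores the resulting theorem as an
ordinary implication of the hypothesis and conclusion; roughly, the exported
theorem here amounts to the following sketch:

(defthm fast-logcount-32-correct
  (implies (unsigned-byte-p 32 x)
           (equal (fast-logcount-32 x)
                  (logcount x))))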
What are these bindings? In the fast-logcount-32-correct theorem, we
used a convenient function, g-int, to construct the :g-bindings.
Expanding away this call, here are the actual bindings:
((x (:g-integer 0 1 2 ... 32)))
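That is, g-int generates evenly spaced Boolean variable indices. Below is a
minimal sketch of that index generation, written only to illustrate the
(start by count) arguments used above; it is not GL's actual definition:

;; Illustration only: build count variable indices starting at start
;; and stepping by by.
(defun g-int-indices (start by count)
  (if (zp count)
      nil
    (cons start (g-int-indices (+ start by) by (1- count)))))

;; (g-int-indices 0 1 33) yields (0 1 2 ... 32), the indices in the
;; expanded binding shown above.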
The :g-bindings argument uses a slight modification of the symbolic
object format where the Boolean expressions are replaced by distinct natural
numbers, each representing a Boolean variable. In this case, our binding for
x stands for the following symbolic object:
Xinit = (:g-integer b0 b1 ... b31 b32)
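Here b0 is the least significant bit and b32 is the sign bit. Assuming the
usual LSB-first, two's-complement reading of such bit lists, here is a small
sketch of the value this object denotes under an assignment to its Boolean
variables:

;; Illustration only: the integer denoted by an LSB-first list of bit
;; values whose final element is the (two's-complement) sign bit.
(defun bits->signed-int (bits)
  (cond ((atom bits) 0)
        ((atom (cdr bits)) (if (car bits) -1 0)) ; sign bit
        (t (+ (if (car bits) 1 0)
              (* 2 (bits->signed-int (cdr bits)))))))

;; As the 33 bits vary, the result ranges over -2^32 through 2^32 - 1.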
Note that Xinit is not the same object as Xbest from 4. Proving Theorems by Symbolic Execution—its sign bit is b32
instead of false, so Xinit can represent any 33-bit signed integer
whereas Xbest only represents 32-bit unsigned values. In fact, the
:g-bindings syntax does not even allow us to describe objects like
Xbest, which has the constant false instead of a variable as one of
its bits.
There is a good reason for this restriction. One of the steps in our proof
strategy is to prove coverage: we need to show that the symbolic objects we
start with have a sufficient range of values to cover all cases for
which the hypothesis holds; more on this in 7. Proving Coverage. The
restricted syntax permitted by :g-bindings ensures that the range of
values represented by each symbolic object is easy to determine. Because of
this, coverage proofs are usually automatic.
Despite these restrictions, GL will still end up using Xbest to carry
out the symbolic execution. GL optimizes the original symbolic objects
inferred from the :g-bindings by using the hypothesis to reduce the space
of objects that are represented. In BDD mode this optimization uses BDD parametrization, which
restricts the symbolic objects so they cover exactly the inputs recognized by
the hypothesis. In AIG mode we use a lighter-weight transformation that
replaces variables with constants when the hypothesis sufficiently restricts
them. In this example, either optimization transforms Xinit into
Xbest.