    Asmtest

    A simple framework for testing execution of small programs against the x86isa model.

    Overview

    The asmtest framework is designed to allow us to test many small programs ("snippets"), each many times on different data, and compare the results obtained by the x86 model with those obtained by running the programs on hardware.

    A snippet is a C function or a piece of assembly code that reads a fixed-size (per-snippet) memory block of input and writes a fixed-size memory block of output. A pointer to the input memory block is passed in RDI (the first function argument), and a pointer to the output memory block is passed in RSI (the second function argument).
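
    For illustration, a hypothetical snippet obeying this convention might look as follows in NASM syntax (the name and computation are made up; see the existing .nasm files under snippets/ for the exact conventions used by the build):

        section .text
        global my_snippet
        my_snippet:             ; hypothetical snippet: 8-byte input block, 8-byte output block
            mov rax, [rdi]      ; load the 8-byte input block (rdi = input pointer)
            add rax, 1          ; some computation (here: increment)
            mov [rsi], rax      ; store the 8-byte output block (rsi = output pointer)
            ret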

    To run a snippet on hardware, we use an executable into which all snippets have been built. This executable chooses the snippet to run, the input file to read, and the output file to write based on its command-line arguments. The input file is read into memory and interpreted as N input memory blocks; the executable then runs the snippet on each input block, writing N output memory blocks to the output file. It can also optionally emit a "conf" file that describes the test (listing the snippet name, input file, and output file).

    To run a snippet on the x86 model, we read the asmtest executable into the x86 state using binary-file-load; we can then use the test-snippet or test-snippetconf utilities to repeatedly run the snippet on the model on the various inputs, comparing the model's results to the outputs recorded by the executable.
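
    As a rough sketch, an ACL2 session might look like the following (the call forms are illustrative, not the utilities' actual signatures; see tests/example.lsp for the real pattern):

        (binary-file-load "asmtest")                 ; illustrative: load the asmtest executable into the x86 state
        (test-snippetconf-event "my_testname.conf")  ; run the recorded test on the model and compare outputs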

    We provide scripts to automate this process for randomly generated inputs. You can run make test in the asmtest directory to run all the snippets on random input. This will create a results directory with a file for each test that contains the ACL2 output for that test. Generally, successful tests have little output while unsuccessful tests have large output, so sorting the contents of the directory by size makes it easy to find failing tests.
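
    For example, assuming a POSIX shell:

        $ cd books/projects/x86isa/tools/execution/asmtest
        $ make test
        $ ls -lrS results/    # list smallest files first; conspicuously large files suggest failures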

    Snippets are listed (along with their input and output block sizes) in snippets.txt (under the asmtest directory, books/projects/x86isa/tools/execution/asmtest). This file is parsed by a script that then generates the file snippets.h, used for building the asmtest executable. Once the executable is built, the same script can be used to build the file snippets.lsp, which tells ACL2 the input/output block sizes and entry point address of each snippet.
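
    As a purely hypothetical illustration (consult snippets.txt itself for the actual syntax), an entry might pair a snippet's name with its block sizes:

        my_snippet 8 8    ; hypothetical entry: name, input block size (bytes), output block size (bytes)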

    Adding Snippets

    To add and test a new snippet:

    • Add an entry to snippets.txt giving the name and input/output sizes.
    • Add an assembly function (with the same name as given in snippets.txt) to a .nasm file in the snippets/ subdirectory (or add a new .nasm file). See existing examples. (C functions could be supported too with small adjustments to the makefile.)
    • Run make in the asmtest directory to rebuild the asmtest executable and regenerate snippets.lsp. (Note this should be done on an x86 machine with gcc, nasm, readelf, and python3 installed. Perhaps we could use cross-compilation to do this on other kinds of machines.)
    • Somehow generate a binary input file for your snippet. Its size should be an integer multiple of the snippet's input block size. One way to generate basic random inputs is head -c nbytes /dev/urandom > my_inputs.bin.
    • Execute your snippet on the input file, e.g. asmtest -i my_inputs.bin -n my_testname my_snippet
    • Ensure that asmtest.lisp is certified (this doesn't depend on previous steps).
    • Test the outputs from the execution on the x86 model, following the examples in tests/example.lsp, e.g. (test-snippetconf-event "my_testname.conf"). See the worked example after this list.
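
    Putting the last few steps together, a hypothetical end-to-end run for a snippet with an 8-byte input block might look like this (the snippet, test, and file names are made up):

        $ head -c 8000 /dev/urandom > my_inputs.bin    # 1000 random 8-byte input blocks
        $ ./asmtest -i my_inputs.bin -n my_testname my_snippet

    and then, in an ACL2 session (after certifying asmtest.lisp):

        (test-snippetconf-event "my_testname.conf")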

    Generating Tests

    Some snippets are generated with testgen, which uses Intel's XED to iterate over instructions and generate tests for them. This is done automatically when you build asmtest with the provided Makefile. Tests generated by testgen are named testgen_<idx>, where <idx> is the index of the instruction in the XED database.

    A Python script, text_to_binary.py, is provided to help generate test input files; mainly, it writes a bunch of numbers into a binary file. See the script's usage message for details. Currently it supports only explicitly specified or randomly generated inputs.

    Future Work

    Regression Tests

    If this proves to be a successful framework for testing the x86 model, we'll want to set things up so that we can easily run a regression of the x86 model that includes snippet tests. This could be (a) part of the ACL2 regression suite or (b) a separate regression used only when working on the x86isa project.

    If (a) we include snippet tests as part of the ACL2 regression, we don't want the regression's success to depend on having an x86 machine and all the tools used to build and run the snippets. So we either need to (1) conditionalize the snippet testing on having an appropriate machine with all the necessary tools, or (2) include in version control everything necessary to run the tests. Option (1) seems restrictive and likely prone to errors. Option (2) would require including the asmtest binary executable, all the input data files we want to regress (or a platform-independent way of recreating them), and the output files (so that we don't require an x86 machine to run the asmtest executable).

    Perhaps instead of having a single version-controlled, monolithic asmtest executable, we should support multiple such executables based on a shared codebase but containing different sets of snippets. That way, some executables could be checked into version control for inclusion in the ACL2 regression while others remain source-only, and testing new snippets wouldn't require modifying a version-controlled executable.

    If (b) we keep the snippet tests out of the ACL2 regression, perhaps we only include source code in the repository and construct the regression test by creating books in a subdirectory excluded from the main ACL2 books makefile.

    Test Input Generation

    We'll need some tools to generate good inputs for testing various sorts of instructions. Should these tools be written in ACL2 or in another language?