Calvin Lin
    Professor of Computer Science
    University of Texas, Austin
 
   
 
Research Summary

  My research takes a broad approach to increasing programmer productivity by improving system performance, correctness, and ease of programming. This approach is not constrained by system boundaries, as I have done work in languages, compilers, and micro-architecture. In particular, I have designed new languages, introduced the notion of library-level analysis and optimization, invented new pointer analysis algorithms, and developed new micro-architectural techniques for improving branch prediction and memory system performance.

Indeed, one theme of my work is to increase information flow across system boundaries, including the boundaries between the programmer & machine, library & compiler, compiler & branch predictor, pointer analysis & pointer analysis client, and CPU & DRAM. At the same time, another theme is to encapsulate powerful mechanisms to make them more usable. For example, the Broadway compiler encapsulates sophisticated static analysis mechanisms to provide compiler support to software libraries, and my vision of Turnkey Pointer Analysis encapsulates both pointer analysis algorithms and policies in the hopes of making pointer analysis vastly simpler to use.

From the perspective of the system stack, my projects can be grouped as follows:

  • Languages: ZPL, Java Layers
  • Compilers and program analysis: The Broadway Compiler, C-Breeze, Turnkey Pointer Analysis
  • Micro-architecture: Branch Prediction for Future Technologies, Improving Memory System Performance

The remainder of this page describes these projects in reverse chronological order, with active projects listed first.

Turnkey Pointer Analysis

  Pointer analysis is an enabling technology for many program analyses and transformations. Our near-term goal is to qualitatively improve the efficiency of pointer analysis along several dimensions: (1) we have made inclusion-based analysis 4 times faster while using 7 times less memory; (2) we have increased the scalability of flow-sensitive pointer analysis by two orders of magnitude, from tens of thousands of lines of C to 1.9 million lines of C; and (3) we believe that context-sensitive pointer analysis can be improved using similar techniques.
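
To give a flavor of what an inclusion-based (Andersen-style) analysis computes, here is a minimal Python sketch of the textbook worklist algorithm over the four standard constraint kinds; it illustrates only the baseline algorithm, not our optimized implementation, and the toy input program at the bottom is hypothetical.

    from collections import defaultdict

    def andersen(addr_of, copies, loads, stores):
        # addr_of: (p, x) for p = &x      copies: (p, q) for p = q
        # loads:   (p, q) for p = *q      stores: (p, q) for *p = q
        pts = defaultdict(set)       # variable -> points-to set
        succ = defaultdict(set)      # inclusion edge a -> b: pts(a) must flow into pts(b)
        for p, x in addr_of:
            pts[p].add(x)
        for p, q in copies:
            succ[q].add(p)
        work = list(pts)
        while work:
            n = work.pop()
            for p, q in loads:                   # p = *q: every v in pts(q) flows into p
                if q == n:
                    for v in pts[n]:
                        if p not in succ[v]:
                            succ[v].add(p); work.append(v)
            for p, q in stores:                  # *p = q: q flows into every v in pts(p)
                if p == n:
                    for v in pts[n]:
                        if v not in succ[q]:
                            succ[q].add(v); work.append(q)
            for m in succ[n]:                    # propagate along inclusion edges
                if not pts[n] <= pts[m]:
                    pts[m] |= pts[n]; work.append(m)
        return pts

    # Toy input: p = &a; q = &b; p = q; r = *p   ==>  pts(p) = {a, b}
    print({v: s for v, s in andersen([('p', 'a'), ('q', 'b')],
                                     [('p', 'q')], [('r', 'p')], []).items() if s})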

Our larger vision is to make pointer analysis substantially easier to use, transforming it from a black art practiced by few into a turnkey technology used by everyone. Currently, sophisticated pointer analyses are rarely used. Not only are they difficult to implement and debug, but it is also difficult to know which algorithm to use for any given situation, as the decision can be affected by the source language, by the particular program to be analyzed, and by the imperfectly understood precision requirements of the client, i.e., the analysis that uses the pointer information.

To fulfill our vision, we plan to significantly rethink our earlier work on Client-Driven pointer analysis, which adaptively applies extra precision to those parts of the program that require it, as determined by the specific client analysis. However, Client-Driven pointer analysis is limited in the types of pointer analysis algorithms that it can employ, so it has limited scalability. Our new approach will require a new algorithmic framework that automatically customizes the pointer analysis algorithm, customizes the interface between the client and the pointer analysis, and may even customize the representation of the pointer information itself.

Impact:

    Our optimizations for inclusion-based pointer analysis have been incorporated into the gcc and LLVM compilers, and our techniques have helped Semantic Designs, Inc. significantly improve the scalability of their industrial-strength software engineering tools.

Publications:

Project Page:
          See Ben Hardekopf's home page for downloads

Personnel:
          Samuel Guyer, Ben Hardekopf, Teck Bok Tok, Jia Chen


 

Domain-Specific Compilation: The Broadway Compiler

  Programmers typically reason at a higher level of abstraction than is provided by any particular general-purpose programming language, such as C or Java. The Broadway project seeks to raise the level of abstraction at which compilers and tools operate. By matching the programmer's level of abstraction, and in particular by incorporating domain-specific information, such tools can greatly improve programmer productivity.

This project currently focuses on making existing and future libraries more usable, more efficient, and more portable. Our Broadway Compiler is a C compiler that can be configured with domain-specific information to optimize and analyze the use and implementation of software libraries. The key is a simple annotation language that can express domain-specific information which cannot be inferred through conventional static analysis. Our approach provides a separation of concerns that shields the application programmer from the annotations and that allows programmers to use simple, clean interfaces.
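
As a loose illustration of annotation-driven, library-level checking (this is not the Broadway annotation language, and the annotations below are hypothetical), the following Python sketch shows how facts written once by a library expert can drive a simple checker over a client program's calls into the Standard C library:

    # A loose illustration of annotation-driven, library-level checking.
    # These are NOT Broadway annotations; the dictionary below is a hypothetical
    # stand-in for domain-specific facts that a library expert writes once.

    ANNOTATIONS = {
        'fopen':  {'after': 'open'},                        # returns a handle in the "open" state
        'fclose': {'requires': 'open', 'after': 'closed'},
        'fread':  {'requires': 'open'},
    }

    def check(trace):
        """trace: straight-line list of (library_call, handle) pairs."""
        state = {}
        for fn, h in trace:
            spec = ANNOTATIONS.get(fn, {})
            need = spec.get('requires')
            if need and state.get(h) != need:
                print(f'error: {fn}({h}) needs a handle in state "{need}", '
                      f'but it is "{state.get(h, "unopened")}"')
            if 'after' in spec:
                state[h] = spec['after']

    # Hypothetical client code: fopen(f); fclose(f); fread(f)  -->  use after close
    check([('fopen', 'f'), ('fclose', 'f'), ('fread', 'f')])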

The Broadway system has many uses:

  • We have shown that Broadway can essentially match the performance of guru-optimized programs written using the PLAPACK parallel linear algebra library.
  • We have used the same compiler and annotation language to detect errors and security vulnerabilities in C programs that use the Standard C library, producing significantly lower false-positive rates than competing solutions.
  • We have used Broadway to translate untrusted C programs into programs that dynamically enforce annotation-specified security policies. When used to perform taint analysis, our system has orders of magnitude lower overhead than previous systems because its deep static analysis dramatically reduces the amount of dynamic instrumentation that is necessary.
  • We are currently extending Broadway to introduce a new type of software test-input generation, which we refer to as targeted test generation, and which we believe will significantly increase the scalability of automated test-generation systems.
  • We are currently extending Broadway to help create and tune parallel libraries.

Publications:

Personnel:
          Samuel Guyer, Walter Chang

 

Improving Memory System Performance

  As both CPUs and DRAM have become increasingly complex parallel systems, it makes sense to revisit the interface between these two systems, namely, the memory controller. This project explores novel techniques for making small changes to the memory controller to improve memory system bandwidth, to reduce DRAM latency through prefetching, and to improve energy efficiency.
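
As one deliberately generic example of the kind of logic that can live in a memory controller, the sketch below implements a textbook sequential-stream prefetcher in Python; it is not the scheme developed in this project, and the block size and prefetch degree are assumed parameters.

    # A generic, textbook-style sequential stream prefetcher (not the scheme
    # developed in this project).  BLOCK and DEGREE are assumed parameters.

    BLOCK = 64            # cache-block size in bytes (assumed)
    DEGREE = 2            # how many blocks to prefetch ahead (assumed)

    class StreamPrefetcher:
        def __init__(self):
            self.last_block = None

        def on_miss(self, addr):
            block = addr // BLOCK
            prefetches = []
            if self.last_block is not None and block == self.last_block + 1:
                # two sequential misses: assume a stream and fetch ahead
                prefetches = [(block + i) * BLOCK for i in range(1, DEGREE + 1)]
            self.last_block = block
            return prefetches

    pf = StreamPrefetcher()
    for a in (0, 64, 128):                         # a sequential miss stream
        print(hex(a), '->', [hex(p) for p in pf.on_miss(a)])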

Publications:

Personnel:
          Akanksha Jain, Hao Wu, Curtis Dunham, Ibrahim Hur


 

Branch Prediction for Future Technologies

  This project studies dynamic branch prediction with a focus on the constraints of future technologies, in which latencies within a chip become increasingly significant. Such environments present a tradeoff between predictor accuracy and predictor delay: complex predictors with large tables can achieve high accuracy, which would normally increase IPC, but large tables lead either to multi-cycle accesses or to limits on the clock rate, both of which reduce IPC. We have studied several mechanisms for dealing with this tradeoff between delay and accuracy:
  • Hierarchical predictor organizations
  • Neural predictors that replace two-bit counters
  • Techniques that shift much of the burden of branch prediction to the compiler, including a novel way of encoding in each branch instruction a Boolean function, which removes the prediction tables altogether

Impact:

    Our most influential work has been the introduction of the Perceptron predictor, which has led to a flurry of new neural-based branch predictors.
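
For readers unfamiliar with the idea, the following minimal Python sketch shows the core of a perceptron predictor: each branch indexes a vector of small integer weights, the dot product of those weights with the global branch history gives the prediction, and the weights are trained only on a misprediction or a low-confidence prediction. The table size, history length, and indexing function below are simplified assumptions.

    # A minimal sketch of the perceptron-predictor idea.  Table size, history
    # length, and the PC hash are simplified/assumed; the real hardware design
    # also addresses issues, such as predictor delay, that this sketch ignores.

    HIST = 12                                  # global history length (assumed)
    THETA = int(1.93 * HIST + 14)              # training threshold from the original paper
    TABLE = 128                                # number of perceptrons (assumed)

    weights = [[0] * (HIST + 1) for _ in range(TABLE)]    # w[0] is the bias weight
    history = [1] * HIST                                   # +1 = taken, -1 = not taken

    def predict_and_train(pc, taken):
        w = weights[pc % TABLE]                            # select this branch's perceptron
        y = w[0] + sum(wi * xi for wi, xi in zip(w[1:], history))
        prediction = y >= 0                                # predict taken if the dot product is non-negative
        t = 1 if taken else -1
        if prediction != taken or abs(y) <= THETA:         # train on a misprediction or low confidence
            w[0] = max(-128, min(127, w[0] + t))
            for i, xi in enumerate(history, start=1):
                w[i] = max(-128, min(127, w[i] + t * xi))  # weights saturate (8-bit signed)
        history.pop(); history.insert(0, t)                # shift the outcome into the global history
        return prediction

    # toy example: a loop branch that is taken seven times, then falls through
    print([predict_and_train(0x400, o) for o in [True] * 7 + [False]])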

Publications:

Personnel:
          Daniel Jiménez, Heather Hanson, Stephen W. Keckler


 

Component-Based Programming with Java Layers

  This project aims to simplify the creation and maintenance of large, complex software, particularly families of related software versions. Java Layers (JL) is a language that accomplishes this goal by extending Java in three ways:
  • JL lets users trivially create applications from components by writing simple type equations.
  • JL provides a simple mechanism for creating components. These components, or layers, are Java classes that obey certain restrictions on their interfaces.
  • JL allows programmers to specify design rules, which provide semantic constraints about how layers can be meaningfully composed.
We have evaluated our Java Layers compiler by using it to build a family of Widget libraries for diverse platforms, including PCs, PalmOS devices, and cell phones. A key scientific question is whether a domain-independent language such as JL can achieve the same level of performance as domain-specific languages, which can hard-code domain-specific optimizations into their compilers. A related open question is whether domain-specific information can be cleanly specified and passed to the JL compiler.
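
As a loose analogy for the layered composition described above (this is Python, not Java Layers syntax, and the widget classes are hypothetical), each "layer" below refines the class beneath it, and the choice of which layers to stack plays the role of a JL type equation:

    # A loose Python analogy for layer composition (not Java Layers syntax).
    # Each "layer" refines the behavior of the class beneath it; choosing
    # which layers to stack plays the role of a type equation.

    class BaseWidget:
        def draw(self):
            return 'widget'

    class Border:                       # a layer that refines draw()
        def draw(self):
            return '[' + super().draw() + ']'

    class Title:                        # another, independent layer
        def draw(self):
            return 'Title: ' + super().draw()

    class PlainWidget(BaseWidget):                  # "equation": just the base
        pass

    class FancyWidget(Title, Border, BaseWidget):   # "equation": Title and Border stacked on the base
        pass

    print(PlainWidget().draw())     # widget
    print(FancyWidget().draw())     # Title: [widget]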

JL is an instance of Batory's Feature-Oriented model of software generators. For more information on this model, see Don Batory's research page.

Publications:

Project Page:
          Java Layers home page

Personnel:
          Rich Cardone, Adam Brown

 

C-Breeze Compiler Infrastructure

  C-Breeze is a compiler infrastructure that is intended to be small and easy to use, both as a teaching tool and as a research tool (the Broadway compiler is built on top of C-Breeze). C-Breeze is currently a source-to-source translator for ANSI C (ISO/IEC 9899), but we are in the process of retargeting the compiler to the LLVM infrastructure so that we can support C++.

C-Breeze is written in C++ and is organized as a series of phases. Phases are provided to parse source code into an AST, to dismantle the AST into a low-level intermediate representation, to build a control-flow graph, and to convert to SSA form. Various other analysis and optimization phases are available, including a context-sensitive interprocedural pointer analysis, partial redundancy elimination (PRE) using SSA form, a configurable interprocedural dataflow analysis engine, and more mundane phases such as liveness analysis and constant propagation. New phases are added using the Visitor design pattern, which C-Breeze supports with built-in machinery for traversing and changing the AST.
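
To illustrate the phase-as-visitor style (the node and method names below are hypothetical Python stand-ins, not the actual C-Breeze C++ API), a new "phase" is simply an object whose visit methods are invoked as the AST is traversed:

    # A generic sketch of the visitor idea used to write new phases.  The node
    # and method names are hypothetical; C-Breeze itself is written in C++.

    class Node:
        def accept(self, visitor): ...

    class Constant(Node):
        def __init__(self, value):
            self.value = value
        def accept(self, visitor):
            visitor.visit_constant(self)

    class BinaryOp(Node):
        def __init__(self, op, left, right):
            self.op, self.left, self.right = op, left, right
        def accept(self, visitor):
            self.left.accept(visitor)       # traverse children first
            self.right.accept(visitor)
            visitor.visit_binary(self)

    class CountConstants:                   # a tiny "phase": walk the AST, gather a fact
        def __init__(self):
            self.count = 0
        def visit_binary(self, node):
            pass
        def visit_constant(self, node):
            self.count += 1

    ast = BinaryOp('+', Constant(1), BinaryOp('*', Constant(2), Constant(3)))
    phase = CountConstants()
    ast.accept(phase)
    print(phase.count)                      # 3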

Project Page:
          C-Breeze home page

Personnel:
  Samuel Guyer, Daniel Jiménez, Mike Kistler, Viktor Lapinskii, Michael Michael, Teck Bok Tok


 

ZPL Parallel Programming Language

  As a postdoc, I led the design and implementation of the ZPL programming language until 1996. ZPL focuses on arrays as a source of data parallelism. Its key goals are (1) to provide a language that allows programmers to reason, at a coarse level, about performance in a machine-independent manner, and (2) to reduce programming effort through concise and expressive syntax. A key design principle was to omit features, such as direct array indexing, that are at odds with these goals.
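
As a rough analogy (in NumPy rather than ZPL syntax), the snippet below expresses a four-point stencil over the interior "region" of a grid as a single whole-array statement, with no per-element index arithmetic in the source program; this is the style of array-level reasoning that ZPL's regions encourage:

    # A rough analogy in NumPy (not ZPL syntax) for region-based array computation:
    # one statement updates the whole interior region of the grid at once.
    import numpy as np

    A = np.random.rand(8, 8)
    B = A.copy()

    # interior "region" update: average of the four neighbors, no explicit loops
    B[1:-1, 1:-1] = 0.25 * (A[:-2, 1:-1] + A[2:, 1:-1] + A[1:-1, :-2] + A[1:-1, 2:])
    print(B.shape)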

Impact:

    ZPL's convenience and conciseness have influenced the design of two prominent parallel languages that emphasize programmer productivity, namely, Chapel and X10, which are being developed by Cray and IBM, respectively.

Publications:

  • ZPL: A Machine Independent Programming Language for Parallel Computers
    with B. Chamberlain, S. Choi, E. Lewis, L. Snyder, and W. Weathersby
    IEEE Transactions on Software Engineering, 26(3), March 2000. Special Issue on Architecture-Independent Languages and Software Tools for Parallel Processing. pp. 197-211.

  • Regions: An Abstraction for Expressing Array Computation
    with B. Chamberlain, E. Lewis, and L. Snyder
    ACM International Conference on Array Programming Languages, August, 1999, pp. 41-49.

  • The Case for High Level Parallel Programming in ZPL
    with B. Chamberlain, S. Choi, E. Lewis, L. Snyder, and W. Weathersby
    IEEE Computational Science and Engineering, 5(3), July-September 1998, pp. 76-86.

  • Abstractions for Portable, Scalable Parallel Programming
    with G. Alverson, W. Griswold, D. Notkin and L. Snyder
    IEEE Trans. on Parallel and Distributed Systems, 9(1), 1998, pp. 1-17.

  • The Implementation and Evaluation of Fusion and Contraction in Array Languages
    with E. Lewis and L. Snyder
    1998 ACM SIGPLAN Conference on Programming Language Design and Implementation, Montreal, June 1998, pp. 50-59.

  • ZPL's WYSIWYG Performance Model
    with B. Chamberlain, S. Choi, E. Lewis, L. Snyder, and W. Weathersby
    Third International Workshop on High-Level Parallel Programming Models and Supportive Environments, Orlando, March 1998, pp. 50-61.

  • Factor-Join: A Unique Approach to Compiling Array Languages for Parallel Machines
    with B. Chamberlain, S. Choi, E. Lewis, L. Snyder, and W. Weathersby
    Languages and Compilers for Parallel Computing, D. Sehr, U. Banerjee, D. Gelernter, A. Nicolau and D. Padua eds., Springer-Verlag 1996, pp. 481-500.

  • The Portable Parallel Implementation of Two Novel Mathematical Biology Algorithms in ZPL
    with M. D. Dikaiakos, D. Manoussaki, and D. Woodward
    9th Int'l Conf. on Supercomputing, Barcelona, 1995, pp. 365-374.

  • A Portable Parallel N-Body Solver
    with E. Lewis, L. Snyder and G. Turkiyyah
    Proceedings of the 7th SIAM Conference on Parallel Processing for Scientific Computing, San Francisco, 1995, pp. 231-236.

  • SIMPLE Performance Results in ZPL
    with L. Snyder
    Languages and Compilers for Parallel Computing, K. Pingali, U. Banerjee, D. Gelernter, A. Nicolau and D. Padua eds., Springer-Verlag 1994, pp. 361-375.

  • ZPL: An Array Sublanguage
    with L. Snyder
    Languages and Compilers for Parallel Computing, U. Banerjee, D. Gelernter, A. Nicolau and D. Padua eds., Springer-Verlag 1994, pp. 96-11.

  • A Portable Implementation of SIMPLE
    with Lawrence Snyder
    International Journal of Parallel Programming, 20(5), 1991, pp. 363-401.

  • A Comparison of Programming Models for Shared Memory Multiprocessors
    with Lawrence Snyder
    Proceedings of the International Conference on Parallel Processing 1990, II:163-170, 1990.

Project Page:
          ZPL home page

Personnel:
          Brad Chamberlain, Sung-Eun Choi, Steven Deitz, George Forman, E. Lewis, Ton Ngo, Larry Snyder, W. Derrick Weathersby


  Last updated: November 11, 2021