Finding Good Compilation Sequences
For more than forty years, we have built compilers that follow a simple structural model: they apply a fixed set of passes to every program. A typical compiler offers a handful of optimization levels that add extra passes to the sequence. This approach has let us make steady progress on compilation and optimization, and it has made compilers practical to build and debug. It has not, however, put us in a position to deliver high-quality compilers in a timely fashion. Despite forty-five years of research on optimization and code quality, we still have trouble producing compilers that generate excellent code for new architectures and new applications.
One area that research has largely avoided is the structure of the compilers that we build. Modern compilers are organized along the same basic lines that were used in the first Fortran compiler, in the late 1950s. The time has come to fundamentally rethink the way that we organize and execute optimizing compilers. In our research, we are building and evaluating an adaptive compiler. This compiler changes its behavior in response to both the application and the target machine. It uses a simple feedback mechanism in an attempt to minimize an explicit objective function, such as execution time or code size.
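To make the idea concrete, the feedback loop described above can be sketched as a search over pass sequences. The pass names and the cost function below are invented for illustration (a real adaptive compiler would compile the program and measure execution time or code size); the sketch only shows the shape of the mechanism: mutate a candidate sequence, re-evaluate the explicit objective function, and keep the change when it improves.

```python
import random

# Hypothetical pass names for illustration only.
PASSES = ["inline", "dce", "licm", "gvn", "peel", "coalesce"]

def cost(sequence):
    # Toy objective function standing in for measured code size.
    # It rewards running dead-code elimination right after inlining
    # and penalizes longer sequences and back-to-back repeats.
    size = 100 + 2 * len(sequence)
    for a, b in zip(sequence, sequence[1:]):
        if a == "inline" and b == "dce":
            size -= 15
        if a == b:
            size += 5
    return size

def hill_climb(seed_sequence, steps=200, rng=None):
    """Greedy feedback loop: mutate one pass at a time and keep the
    new sequence only if the objective function improves."""
    rng = rng or random.Random(0)
    best = list(seed_sequence)
    best_cost = cost(best)
    for _ in range(steps):
        candidate = list(best)
        i = rng.randrange(len(candidate))
        candidate[i] = rng.choice(PASSES)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

if __name__ == "__main__":
    start = ["peel", "peel", "gvn", "coalesce"]
    seq, c = hill_climb(start)
    print(seq, c)
```

A hill climber like this is only one point in the design space; the search spaces characterized in the talk motivate the choice among such techniques (random restarts, genetic algorithms, and so on).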
This talk will describe results from large-scale experiments that characterize the search spaces in which our adaptive compiler operates, and some of the ways that we can use those insights to design effective search techniques. It will discuss some of the engineering challenges that arise in building adaptive compilers. It will suggest directions for future research.
This talk will be accessible to a broad audience.
Keith D. Cooper is a professor in the Department of Computer Science and in the Department of Electrical Engineering at Rice University, as well as chair of the Computer Science Department. His research has focused on techniques for compiler-based optimization and code generation. His work has included algorithms for interprocedural analysis and optimization, for register allocation and scheduling, and for a variety of individual optimization problems.
His current interests include work on adaptive compilers, on code generation for aggressive microprocessors, on smoothing the behavior of Grid-style programs, on code understanding, and on link-time and run-time reoptimization. He has been deeply involved in construction of two buildings, and advises several Houston-area schools on matters of technology. With Dr. Linda Torczon, he has written an introductory textbook on compiler construction, Engineering a Compiler (Morgan Kaufmann, 2003).
Solving General Purpose Computing Problems using GPUs
A few short years ago, single-chip PC 3D graphics solutions arrived on the market. Since then, graphics performance has approximately doubled every 6-9 months, far exceeding Moore's Law. There is evidence that this geometric performance growth is not only possible, but inevitable. The reason lies in the way that graphics architectures have evolved, and the fact that this evolution has taken a very different path than CPUs. As GPUs become more flexible, powerful, and programmable, their architecture is well-suited to embrace the massive parallelism and data flow that is inherent in graphics, shading, and other hard computational problems. Consequently, many previously difficult problems in graphics can now be solved interactively, and some in real time. What will become of graphics research as the previously hard problems become easy? Which other computationally hard problems are well-suited to GPU solution?
David Kirk is Chief Scientist and Vice President of Architecture at NVIDIA. He was previously Chief Scientist and head of technology for Crystal Dynamics, and prior to that worked on developing graphics hardware for engineering workstations at Apollo/Hewlett-Packard. David holds B.S. and M.S. degrees from MIT and M.S. and Ph.D. degrees from the California Institute of Technology, and is the author/inventor of over 100 technical publications and patents in the area of computer graphics and hardware. At Siggraph 2002 in San Antonio, TX, David was the recipient of the ACM Siggraph Computer Graphics Achievement Award, honoring him for his contributions to the field.