UTCS Colloquium-James Larus/Microsoft: "Spending Moore's Dividend" ACES 2.402, Tuesday, February 3, 2009 2:00 p.m.

Contact Name: Jenna Whitney
Feb 3, 2009 2:00pm - 3:30pm

There is a sign-up schedule for this event.


Type of Talk:  UTCS Colloquium

 Speaker/Affiliation:  James Larus/Microsoft


Date/Time:  Tuesday, February 3, 2009 2:00 p.m.

Location:  ACES 2.402

Host:  Kathryn McKinley

Talk Title:  "Spending Moore's Dividend"

Talk Abstract:

Over the past three decades, regular, predictable improvements in computers were the norm. This progress is attributable to Moore's Law, the steady 40% per year increase in the number of transistors per unit area. These decades were the period in which the personal computer and packaged software industries were born and matured. Software development was facilitated by the comforting knowledge that every generation of processors would run much faster than its predecessor.

This era is over and the industry has embarked on a historic transition from sequential to parallel computation. The introduction of mainstream parallel (multicore) processors in 2004 marked the end of a remarkable 30-year period during which sequential computer performance increased 40-50% per year.

Fortunately, Moore's Law has not been repealed. Semiconductor technology is still doubling the transistors on a chip every two years. However, this flood of transistors is now used to increase the number of independent processors on a chip, rather than to make an individual processor run faster. The challenge that the industry now faces is how to make parallel computing mainstream. This talk looks at one facet of this problem by asking how software consumed previous performance growth and whether multicore processors can satisfy the same needs. In short, how did we spend the dividends of Moore's Law, and what can we do in the future?
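The abstract's two growth figures are mutually consistent; as a quick back-of-the-envelope check in Python (illustrative only, not part of the talk):

```python
# Doubling transistor counts every two years is the same as compounding
# at 2**(1/2) - 1, roughly the "40% per year" the abstract cites.
annual_rate = 2 ** 0.5 - 1
print(f"annual growth implied by biennial doubling: {annual_rate:.1%}")

# Compounding the sequential era's 40-50% per-year performance growth
# over its 30-year run shows why software could simply wait for faster
# processors rather than parallelize.
low, high = 1.40 ** 30, 1.50 ** 30
print(f"30-year cumulative speedup: {low:,.0f}x to {high:,.0f}x")
```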

Speaker Bio:

James Larus is Director of Software Architecture for the Data Center Futures team in Microsoft Research.

Larus has been an active contributor to the programming languages, compiler, and computer architecture communities. He has published many papers and served on numerous program committees and NSF and NRC panels. Larus became an ACM Fellow in 2006.

Larus joined Microsoft Research as a Senior Researcher in 1998 to start and, for five years, lead the Software Productivity Tools (SPT) group, which developed and applied a variety of innovative techniques in static program analysis and constructed tools that found defects (bugs) in software. This group's research has had considerable impact on the research community and has shipped in Microsoft products such as the Static Driver Verifier and FX/Cop, as well as in other widely used internal software development tools. Larus then became the Research Area Manager for programming languages and tools and started the Singularity research project, which demonstrated that modern programming languages and software engineering techniques could fundamentally improve software architecture.


Before joining Microsoft, Larus was an Assistant and Associate Professor of Computer Science at the University of Wisconsin-Madison, where he published approximately 60 research papers and co-led the Wisconsin Wind Tunnel (WWT) research project with Professors Mark Hill and David Wood. WWT was a DARPA- and NSF-funded project that investigated new approaches to simulating, building, and programming parallel shared-memory computers. Larus's research spanned a number of areas, including new and efficient techniques for measuring and recording executing programs' behavior, tools for analyzing and manipulating compiled and linked programs, programming languages for parallel computing, tools for verifying program correctness, and techniques for compiler analysis and optimization.


Larus received his MS and PhD in Computer Science from the University of California, Berkeley in 1989, and an AB in Applied Mathematics from Harvard in 1980. At Berkeley, Larus developed one of the first systems to analyze Lisp programs and determine how to best execute them on a parallel computer.