Subsection 0.3.2 Best algorithms of the 20th century

An article published in SIAM News, a publication of the Society for Industrial and Applied Mathematics, lists the ten most important algorithms of the 20th century [10]:

  1. 1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method.

  2. 1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.

  3. 1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiate the development of Krylov subspace iteration methods.

  4. 1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations.

  5. 1957: John Backus leads a team at IBM in developing the Fortran optimizing compiler.

  6. 1959–61: J.G.F. Francis of Ferranti Ltd., London, finds a stable method for computing eigenvalues, known as the QR algorithm.

  7. 1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.

  8. 1965: James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform.

  9. 1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm.

  10. 1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm.

Of these, we will explicitly cover three: the decompositional approach to matrix computations, Krylov subspace iteration methods, and the QR algorithm. Although the others are not explicitly covered, your understanding of numerical linear algebra will also be a first step towards understanding some of those algorithms.
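To give a first taste of one of these three topics, the sketch below shows the basic (unshifted) QR iteration for eigenvalues, using NumPy. This is an illustrative simplification only: the practical QR algorithm treated later in the text adds shifts and a reduction to condensed form, which this sketch omits.

```python
import numpy as np

def qr_iteration(A, num_iters=200):
    """Unshifted QR iteration: repeatedly factor A_k = Q R and form
    A_{k+1} = R Q.  Each iterate is similar to A, so eigenvalues are
    preserved; for a symmetric matrix with distinct eigenvalues the
    iterates converge to a diagonal matrix of eigenvalues."""
    Ak = np.array(A, dtype=float)
    for _ in range(num_iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q  # similarity transform: (Q^T A_k Q)
    return Ak

# A small symmetric example with eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.sort(np.diag(qr_iteration(A))))
```

After enough iterations, the diagonal of the result approximates the eigenvalues of A; without shifts, however, convergence can be slow, which is part of what makes Francis's full algorithm remarkable.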