
Section 2 What will I learn in this course?

Computers are now essential in everyday life. Incorrect and/or slow programs lead to frustration in the best case and disaster in the worst. Thus, constructing correct programs that attain high performance is a skill that everyone who programs computers must master.
In this course, we teach “goal-oriented programming” the way Edsger Dijkstra intended: you will learn how to derive programs hand in hand with their proofs of correctness. Matrix computations (linear algebra) are the domain from which we draw examples. Typically, we end up with a family of algorithms (programs), all of which compute a given operation. From this family we can then pick the algorithm with desirable properties: for example, one algorithm may be easier to parallelize than another, or may inherently attain better performance on a given architecture. You will then learn techniques for mapping the appropriate algorithms to computer architectures so that they attain high performance.

Subsection 2.1 Prerequisites

You need to have taken a course on linear algebra. You need prior experience with basic proof techniques and predicate logic, as taught in CS311 or a discrete mathematics class. Major programming assignments will be in the C programming language, so you need to either know rudimentary C or be able to learn it quickly.

Subsection 2.2 Text/Materials

This class is based on materials developed by Prof. Robert van de Geijn, Dr. Maggie Myers, and Dr. Devangi N. Parikh. You can access these materials from ulaff.net.
You need to install MATLAB on your computer. UT has a site license; instructions on how to access it will be provided.

Subsection 2.3 Learning Objectives

By the end of the semester you should be able to:
  • Code in C, use makefiles to compile code, and use pointer arithmetic to compute the addresses of array elements.
  • Understand how the implementation of your code affects the performance of the code.
  • Transform your implementation such that it takes advantage of the various architecture features available.
  • Translate your code so that you can use vector instructions.
  • Block code for the cache hierarchy.
  • Parallelize (not paralyze) your code.
  • Calculate the peak performance of your machine.
  • Prove that simple code segments of your code are correct.
  • Derive code that is correct by construction.
  • Derive a family of algorithms for a given linear algebra operation.
  • Compare/contrast/analyze the performance of the members of a family of algorithms and reason which algorithm will perform better.
  • Typeset in LaTeX.

Subsection 2.4 Detailed Calendar

Date Day Topic
Jan 16 2024 Tuesday Motivating Activity
Jan 18 2024 Thursday Review: Linear algebra operations
Jan 23 2024 Tuesday Accessing and storing matrices in memory
Jan 25 2024 Thursday Floating point error, absolute and relative error, project support
Jan 30 2024 Tuesday Loop ordering and its effect on performance
Feb 01 2024 Thursday Matrix multiplication as a loop around other matrix operations
Feb 06 2024 Tuesday Vector registers, instruction latency and throughput
Feb 08 2024 Thursday Importance of hiding instruction latency, microkernels
Feb 13 2024 Tuesday Memory hierarchy
Feb 15 2024 Thursday Amortizing data movement
Feb 20 2024 Tuesday Importance of contiguous memory access
Feb 22 2024 Thursday Multicore programming
Feb 27 2024 Tuesday FLAME worksheet
Feb 29 2024 Thursday FLAME worksheet
Mar 05 2024 Tuesday Review: logic and reasoning
Mar 07 2024 Thursday Hoare triple and weakest precondition
Mar 12 2024 Tuesday Spring Break
Mar 14 2024 Thursday Spring Break
Mar 19 2024 Tuesday Deriving simple code segments
Mar 21 2024 Thursday Deriving if statements
Mar 26 2024 Tuesday Deriving while loops
Mar 28 2024 Thursday Advanced Matrix Operations
Apr 02 2024 Tuesday Advanced Matrix Operations
Apr 04 2024 Thursday Advanced Matrix Operations
Apr 09 2024 Tuesday Explorations: Extrapolating Goto Algorithm to other operations
Apr 11 2024 Thursday Explorations: Extrapolating Goto Algorithm to other operations
Apr 16 2024 Tuesday Exam
Apr 18 2024 Thursday Explorations: Beyond linear algebra operations
Apr 23 2024 Tuesday Project Presentations
Apr 25 2024 Thursday Project Presentations