Section 2 What will I learn in this course?

Computers are now essential in everyday life. Programs that are incorrect or slow lead to frustration at best and disaster at worst. Constructing correct programs that attain high performance is therefore a skill that everyone who programs computers must master.

In this course, we teach “goal-oriented programming” the way Edsger Dijkstra intended: you will learn how to derive programs hand-in-hand with their proofs of correctness. Matrix computations (linear algebra) are the domain from which we draw examples. Typically, we end up with a family of algorithms (programs), each of which computes a given operation. From this family we can then pick the algorithm with the most desirable properties. For example, one algorithm may be easier to parallelize than another, or it may inherently attain better performance on a given architecture. You will then learn techniques for mapping the chosen algorithm onto computer architectures so that it attains high performance. The sketch below illustrates what a (small) family of algorithms looks like.
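
As a taste of what a “family of algorithms” means, here is a minimal sketch in C of two variants of the matrix-vector multiplication y := A x + y, assuming column-major storage (the function names and interfaces are illustrative, not taken from the course materials):

    #include <stddef.h>

    /* Column-major access: element (i,j) of an m x n matrix stored in
       array a with leading dimension ldA. */
    #define A(i, j) a[(j) * ldA + (i)]

    /* Variant 1: y := A x + y as m dot products
       (traverses A by rows, i.e., with stride ldA). */
    void gemv_dots(size_t m, size_t n, const double *a, size_t ldA,
                   const double *x, double *y)
    {
        for (size_t i = 0; i < m; i++)
            for (size_t j = 0; j < n; j++)
                y[i] += A(i, j) * x[j];
    }

    /* Variant 2: y := A x + y as n axpy operations
       (traverses A by columns, i.e., contiguously in memory). */
    void gemv_axpys(size_t m, size_t n, const double *a, size_t ldA,
                    const double *x, double *y)
    {
        for (size_t j = 0; j < n; j++)
            for (size_t i = 0; i < m; i++)
                y[i] += A(i, j) * x[j];
    }

Both variants compute the same result, but the second traverses A contiguously in memory and therefore typically runs faster; choosing among such variants, and then mapping the choice onto the hardware, is exactly the reasoning this course develops.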

Subsection 2.1 Prerequisites

You need to have taken a course on linear algebra, and you need prior experience with basic proof techniques and predicate logic, as taught in CS311 or a discrete mathematics class. Major programming assignments will be in the C programming language; you should either know rudimentary C or be able to pick it up quickly.

Subsection 2.2 Text/Materials

This class is based on materials developed by Prof. Robert van de Geijn, Dr. Maggie Myers, and Dr. Devangi N. Parikh. You can access these materials from ulaff.net.

You need to install MATLAB on your computer. UT has a site license; instructions on how to access it will be provided.

Subsection 2.3 Learning Objectives

By the end of the semester you should be able to:

  • Code in C, use makefiles to compile code, and use pointer arithmetic to compute the addresses of array elements (see the first sketch after this list).

  • Understand how the way your code is implemented affects its performance.

  • Transform your implementation so that it takes advantage of the architectural features available.

  • Rewrite your code so that it uses vector (SIMD) instructions (see the second sketch after this list).

  • Block your code for the cache hierarchy (see the third sketch after this list).

  • Parallelize (not paralyze) your code.

  • Calculate the peak performance of your machine (a worked example follows this list).

  • Prove that simple code segments of your code are correct (see the final sketch after this list).

  • Derive code that is correct by construction.

  • Derive a family of algorithms for a given linear algebra operation.

  • Compare, contrast, and analyze the performance of the members of a family of algorithms, and reason about which algorithm will perform best.

  • Typeset your work in LaTeX.
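
A minimal sketch of the pointer arithmetic referred to in the first objective, assuming the column-major matrix storage used throughout the course (the variable names and sizes are illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t m = 3, n = 2, ldA = m;   /* 3x2 matrix, leading dimension m */
        double *A = malloc(ldA * n * sizeof(double));

        /* In column-major storage, element (i,j) lives at offset
           j*ldA + i, so its address is A + j*ldA + i. */
        size_t i = 2, j = 1;
        double *alpha = A + j * ldA + i;
        *alpha = 42.0;

        printf("A(%zu,%zu) = %f\n", i, j, *alpha);
        free(A);
        return 0;
    }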
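
For the vectorization objective, a sketch of what explicit vector instructions look like, assuming an x86 processor with AVX2 and FMA support (compile with, e.g., -mfma; the function and its interface are illustrative):

    #include <immintrin.h>
    #include <stddef.h>

    /* y := alpha x + y for n a multiple of 4, using 256-bit vector
       registers, each of which holds four doubles. */
    void axpy_avx(size_t n, double alpha, const double *x, double *y)
    {
        __m256d valpha = _mm256_set1_pd(alpha);
        for (size_t i = 0; i < n; i += 4) {
            __m256d vx = _mm256_loadu_pd(&x[i]);
            __m256d vy = _mm256_loadu_pd(&y[i]);
            vy = _mm256_fmadd_pd(valpha, vx, vy);   /* fused multiply-add */
            _mm256_storeu_pd(&y[i], vy);
        }
    }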
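
For the blocking objective, a sketch of loop blocking applied to the matrix-matrix multiplication C := A B + C, with n x n column-major matrices; the block size NB is illustrative and would in practice be tuned to the cache sizes of the target machine:

    #include <stddef.h>

    #define NB 64   /* illustrative block size */

    /* Blocking keeps NB x NB submatrices of the operands in cache while
       they are reused, amortizing the cost of moving them from memory. */
    void gemm_blocked(size_t n, const double *A, const double *B, double *C)
    {
        for (size_t jb = 0; jb < n; jb += NB)
            for (size_t pb = 0; pb < n; pb += NB)
                for (size_t ib = 0; ib < n; ib += NB)
                    for (size_t j = jb; j < jb + NB && j < n; j++)
                        for (size_t p = pb; p < pb + NB && p < n; p++)
                            for (size_t i = ib; i < ib + NB && i < n; i++)
                                C[j * n + i] += A[p * n + i] * B[j * n + p];
    }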
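
For the peak-performance objective, a worked example of the calculation involved; the machine parameters here are hypothetical:

    peak = cores x clock rate x flops per cycle per core
         = 4 x 3.0 GHz x 16
         = 192 GFLOPS (double precision),

    where 16 flops/cycle assumes two 256-bit FMA units per core, each
    completing one fused multiply-add (2 flops) on 4 doubles per cycle.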
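
For the correctness objectives, a sketch of the Hoare-triple notation used to prove code segments correct: the triple { P } S { Q } asserts that if precondition P holds before statement S executes, then postcondition Q holds afterwards. Here X and Y denote the (fixed) initial values of x and y:

    #include <assert.h>

    int main(void)
    {
        int x = 3, y = 7, t;   /* so X == 3 and Y == 7 */

        /* { x == X && y == Y } */
        t = x;  x = y;  y = t;
        /* { x == Y && y == X } */
        assert(x == 7 && y == 3);

        /* The weakest precondition wp(S, Q) is the least restrictive P
           for which { P } S { Q } holds; e.g., wp("x = x + 1", x <= 10)
           is x <= 9. */
        return 0;
    }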

Subsection 2.4 Detailed Calendar

Date          Day       Topic                                                               Due Dates
------------  --------  ------------------------------------------------------------------  -----------------
Jan 10, 2023  Tuesday   Motivating Activity
Jan 12, 2023  Thursday  Review: Linear algebra operations
Jan 17, 2023  Tuesday   Accessing and storing matrices in memory
Jan 19, 2023  Thursday  Floating point error, absolute and relative error, project support
Jan 24, 2023  Tuesday   Loop ordering and its effect on performance                         Project One Due
Jan 26, 2023  Thursday  Matrix multiplication as a loop around other matrix operations
Jan 31, 2023  Tuesday   Vector registers, instruction latency and throughput
Feb 02, 2023  Thursday  Importance of hiding instruction latency, microkernels
Feb 07, 2023  Tuesday   Memory hierarchy
Feb 09, 2023  Thursday  Amortizing data movement
Feb 14, 2023  Tuesday   Importance of contiguous memory access
Feb 16, 2023  Thursday  Multicore programming
Feb 21, 2023  Tuesday   FLAME worksheet
Feb 23, 2023  Thursday  FLAME worksheet                                                     Project Two Due
Feb 28, 2023  Tuesday   Review: logic and reasoning
Mar 02, 2023  Thursday  Hoare triple and weakest precondition
Mar 07, 2023  Tuesday   Deriving simple code segments
Mar 09, 2023  Thursday  Deriving if statements
Mar 14, 2023  Tuesday   Spring Break
Mar 16, 2023  Thursday  Spring Break
Mar 21, 2023  Tuesday   Deriving while loops
Mar 23, 2023  Thursday  Advanced Matrix Operations                                          Project Three Due
Mar 28, 2023  Tuesday   Advanced Matrix Operations
Mar 30, 2023  Thursday  Advanced Matrix Operations
Apr 04, 2023  Tuesday   Explorations: Extrapolating Goto Algorithm to other operations
Apr 06, 2023  Thursday  Explorations: Extrapolating Goto Algorithm to other operations
Apr 11, 2023  Tuesday   Explorations: Beyond linear algebra operations
Apr 13, 2023  Thursday  Explorations: Beyond linear algebra operations
Apr 18, 2023  Tuesday   Project Four Presentations
Apr 20, 2023  Thursday  Project Four Presentations                                          Project Four Due