Is my code fast? Can it be faster? Scientific computing, machine learning, and data science are about solving problems that are compute-intensive. Choosing the right algorithm, extracting parallelism at multiple levels, and amortizing the cost of data movement are vital to achieving scalable speedup and high performance.
LAFF-On Programming for High Performance uses the simple but important example of matrix-matrix multiplication to illustrate fundamental techniques for attaining high performance on modern CPUs. A carefully designed sequence of exercises leads the learner from a naive implementation to one that effectively utilizes instruction-level parallelism, culminating in a high-performance multithreaded implementation. Along the way, learners discover that careful attention to data movement is key to efficient computing.
The free-to-audit, four-week, self-paced course “LAFF-On Programming for High Performance,” taught by Texas Computer Science faculty Robert van de Geijn, Maggie Myers, and Devangi Parikh, launches its first offering on June 4, 2019, on the edX platform. The Linear Algebra - Foundations to Frontiers (LAFF) online courses connect the theory of linear algebra to issues encountered in computer architecture, software engineering, and program correctness. The loosely connected LAFF courses build on ongoing research in high-performance linear algebra libraries, exposing participants to cutting-edge research while teaching them the fundamentals of the field. More information can be found on the edX website.
“We will include this course as one of the main onboarding materials for our team.”
- Dr. Misha Smelyanskiy, Technical Lead and Manager of AI System Co-design Group at Facebook.