In modern systems, concurrency and parallelism are no longer niche topics reserved for specialists; they are cross-cutting concerns to which all designers and developers are exposed. Technology trends suggest that concurrency and parallelism will increasingly be cornerstone subjects to which all successful programmers need significant exposure. The objective of this course is to give students a strong grounding in parallel systems fundamentals, along with hands-on experience with a range of classical and modern approaches to managing and exploiting concurrency, including shared-memory synchronization, massively parallel architectures such as GPUs, and distributed parallel frameworks such as MPI and MapReduce.
This course explores parallel systems from languages to hardware, from large-scale parallel computers to multicore chips, and from traditional parallel scientific computing to modern uses of parallelism. It also includes discussion of research methods in graphics, languages, compilers, architecture, and scientific computing. Topics include:
- Basic background/terminology/theory
- Shared memory synchronization
- Massively parallel architectures
- Distributed execution frameworks
- Runtimes and front-end programming
- Latency vs. throughput
- Hidden vs. exposed parallelism
- Performance issues
- Parallel algorithms: instructive examples