- Principles of Parallel Programming, Calvin Lin and Lawrence Snyder, Pearson
- Parallel Programming for Multicore and Cluster Systems, Thomas Rauber and Gudula Rünger, Springer
- Programming Massively Parallel Processors, David B. Kirk and Wen-mei W. Hwu, Morgan Kaufmann
- An Introduction to Parallel Programming, Peter Pacheco, Morgan Kaufmann
Learning Objectives
The goal of this course is to introduce students to techniques of parallel programming and HPC.
At the end of the course the student will know the basics of parallel programming for multicore systems, clusters and GPGPU, and the basic paradigms of parallel programming in Java, C++11, Pthreads, OpenMP, MPI and CUDA.
For each project, students must write a technical report and prepare a presentation that describe the work done and report the performance of the parallel version of the program versus the sequential one.
Programming projects are chosen by the students from a list proposed by the instructor. They can be developed individually or in pairs.
The goal of these projects is to demonstrate the ability to:
- implement a parallel program using one (6-credit version of the course) or two (9-credit version) of the frameworks and languages presented in the lectures
- evaluate the effects and differences of parallel programming vs. sequential programming
- measure the performance of a parallel program vs. a sequential one
- write a technical report and give a technical presentation.
The goal of the mid-term written examination is to assess the student's knowledge of multi-core and GPU programming techniques.
Course program
Types of parallelism (instruction, transaction, task, thread, memory, ...)
Parallelism models (SIMD, MIMD, SPMD, ...)
CPUs and parallel architectures
Design Patterns for parallel computing (Master/Worker, Message passing)
Parallelization strategies, task parallelism, data parallelism, work sharing
Parallel programming in C/C++ (C++11) and Java
Concurrent data structures