15/02/2021 NO LESSON
17/02/2021 Course introduction – Parallel programming frameworks and high-level approach to parallel programming over different platforms: MPI, TBB, OpenCL as main examples, oneAPI and SYCL as unifying approaches; course organization and prerequisites; reference books and studying material.
– MPI (Message Passing Interface) standard – brief history and aims of the standard; single program / multiple data (SPMD) execution model; compilation and linkage model; issues in supporting multiple programming languages and uses (application code, utility libraries and programming-language support) with a static compilation and linkage approach. Portability in parallel programming: functional and non-functional aspects, performance tuning and performance debugging. MPI basic concepts: MPI as a parallel framework that supports a structured approach to parallel programming; communicators (definition, purpose, difference between inter- and intra-communicators, process ranks).
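The SPMD model and the communicator/rank concepts above can be illustrated by a minimal sketch (not part of the lesson material): every process executes the same binary and distinguishes itself only by its rank in MPI_COMM_WORLD. It assumes a working MPI installation (mpicc/mpirun).

```c
/* Minimal SPMD MPI program: one source, many processes.
 * Compile:  mpicc hello.c -o hello
 * Run:      mpirun -n 4 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);                 /* enter the MPI environment */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id in the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* leave the MPI environment */
    return 0;
}
```

Each of the n processes prints its own line; the branching logic of a real application is driven entirely by the rank value.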
22/02/2021 NO LESSON
24/02/2021 MPI basic concepts – Point-to-point communication (concepts of envelope, local/global completion, blocking/non-blocking primitives, send modes); collective communications (definition, communication scope, global serialization, freedom of implementation left to the MPI implementation by the standard); MPI datatypes (basic meaning and use, primitive/derived datatypes) and their relationship with sequential-language types. MPI library initialization and basic MPI usage; point-to-point communication semantics (buffer behaviour, receive, status objects, MPI_PROC_NULL); MPI primitive datatypes.
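A small sketch of the point-to-point semantics listed above (assumes an MPI installation and at least 2 processes): the receiver posts a buffer larger than the incoming message and uses wildcards, then inspects the status object to recover the actual envelope (source, tag) and message size.

```c
/* Blocking point-to-point communication with envelope inspection.
 * Run with:  mpirun -n 2 ./a.out
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int data[4] = {1, 2, 3, 4};
        MPI_Send(data, 4, MPI_INT, 1, 7 /* tag */, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int buf[8];                     /* larger than what will arrive */
        MPI_Status status;
        MPI_Recv(buf, 8, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        int count;
        MPI_Get_count(&status, MPI_INT, &count);  /* actual item count */
        printf("rank 1 received %d ints from rank %d, tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }
    /* Note: a send to (or receive from) MPI_PROC_NULL completes
     * immediately and has no effect, which simplifies boundary code. */
    MPI_Finalize();
    return 0;
}
```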
01/03/2021 MPI – Derived MPI datatypes (purpose as explicitly defined metadata provided to the MPI implementation, multiple language bindings, code-instantiated metadata, examples). MPI datatype semantics: typemap and type signature (matching rules for communication, role in MPI-performed packing and unpacking); core primitives for datatype creation (MPI_Type_*: contiguous, vector, hvector, indexed, hindexed, struct; commit, free) and examples. Point-to-point communication modes (MPI_BSEND, MPI_SSEND, MPI_RSEND usage); non-blocking communication (Wait and Test groups of primitives, semantics, MPI_Request object handles to active requests); cancelling and testing cancellation of non-blocking primitives (issues and pitfalls, interaction with the MPI implementation, e.g. MPI_Finalize).
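A sketch combining the two topics of this lesson (assumes an MPI installation and exactly 2 processes): a derived datatype built with MPI_Type_vector describes one column of a row-major 4x4 matrix, and the transfer uses non-blocking primitives completed with MPI_Wait. Note the type-signature matching rule: one "column" element on the send side matches four MPI_INT on the receive side.

```c
/* Send a matrix column via a derived datatype, non-blocking primitives.
 * Run with:  mpirun -n 2 ./a.out
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One column of a 4x4 int matrix: 4 blocks of 1 int, stride 4. */
    MPI_Datatype column;
    MPI_Type_vector(4, 1, 4, MPI_INT, &column);
    MPI_Type_commit(&column);           /* make the datatype usable */

    if (rank == 0) {
        int m[16];
        for (int i = 0; i < 16; i++) m[i] = i;   /* row-major 0..15 */
        MPI_Request req;
        MPI_Isend(m, 1, column, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);       /* local completion */
    } else if (rank == 1) {
        int col[4];
        MPI_Request req;
        /* Signatures match: 1 x column == 4 x MPI_INT. The MPI library
         * performs the strided packing/unpacking. */
        MPI_Irecv(col, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("column 0: %d %d %d %d\n", col[0], col[1], col[2], col[3]);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}
```

Rank 1 receives the first column (0, 4, 8, 12) as a contiguous buffer, even though the elements are non-contiguous on the sender's side.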
03/03/2021 Interested students (either of this academic year or any previous one) need to contact the teacher
08/03/2021
10/03/2021
15/03/2021
17/03/2021
22/03/2021
24/03/2021
29/03/2021
31/03/2021
07/04/2021
12/04/2021
14/04/2021
19/04/2021
21/04/2021
26/04/2021
28/04/2021
03/05/2021
05/05/2021
10/05/2021
12/05/2021
17/05/2021
19/05/2021