
This is an old revision of the document!


Journal of Lessons, SPD year 2016-2017

Journal

  • 20/02/2017 Course introduction: parallel programming frameworks and a high-level approach to parallel programming over different platforms, with MPI, TBB and OpenCL as the main examples; course organization and prerequisites; reference books and study material.
    MPI (Message Passing Interface) standard: brief history and aims of the standard; single program / multiple data execution model; compilation and linkage model; issues in supporting multiple programming languages and uses (application, utility library and programming-language support) with a static compilation and linkage approach. Portability in parallel programming: functional and non-functional aspects, performance tuning and performance debugging.
  • 22/02/2017 MPI basic concepts: MPI as a parallel framework that supports a structured approach to parallel programming. Basic concepts of MPI: communicators (definition, purpose, difference between inter- and intra-communicators, process ranks); point-to-point communication (concepts of envelope, local/global completion, blocking/non-blocking primitives, send modes); collective communications (definition, communication scope, global serialization, freedom of implementation in the standard); MPI datatypes (basic meaning and use, primitive/derived datatypes, relationship with sequential-language types).
  • 27/02/2017 MPI: MPI library initialization and basic MPI usage; point-to-point communication semantics (buffer behaviour, receive, wildcards, status objects, MPI_PROC_NULL); basic and derived MPI datatypes (purpose as explicitly defined metadata provided to the MPI implementation, multiple language bindings, code-instantiated metadata, examples). MPI datatypes (semantics, typemap and type signature, matching rules for communication, role in MPI-performed packing and unpacking); core primitives for datatype creation (MPI_Type_*: contiguous, vector, hvector, commit, free) and examples.
  • 01/03/2017 MPI: more derived datatypes (indexed, hindexed, struct); point-to-point communication modes (MPI_BSEND, MPI_SSEND; MPI_RSEND usage); non-blocking communication (the Wait and Test groups of primitives, their semantics, MPI_Request object handles to active requests); cancelling and testing cancellation of non-blocking primitives (issues and pitfalls, interaction with the MPI implementation, e.g. MPI_Finalize); communicators and groups (communicator design aims and programming abstraction, local and global information, attributes and virtual topologies, groups as local objects, primitives for locally creating and managing groups).
  • 06/03/2017 MPI: intracommunicators (basic primitives concerning size, rank, comparison); communicator creation as a collective operation, MPI_Comm_create in the basic and general case; MPI_Comm_split; collective communications (definition and semantics, execution environment, basic features, agreement of key parameters among the processes, constraints on datatypes and typemaps for collective operations, overall serialization vs synchronization, potential deadlocks); taxonomy of MPI collectives (blocking/non-blocking; synchronization / communication / communication+computation; asymmetry of the communication pattern; variable-size and all- versions).
  • 08/03/2017 MPI Lab: basic program structure; examples with derived datatypes.
  • 13/03/2017 MPI Lab: implementing communication with an assigned asynchronicity degree in MPI; structured parallel programming in MPI, separation of concerns in practice; structured parallel patterns in MPI and communicator handling.
  • 15/03/2017, 16/03/2017 MPI Lab: farm skeleton implementation. MPI collectives with both computation and communication: Reduce (and variants) and Scan (and variants); using MPI operators with Reduce and Scan; defining custom user operators, issues and implementation of operator functions.
  • 20/03/2017 MPI Lab: asynchronous channel implementation, farm skeleton implementation; basic debugging of parallel code.
  • 22/03/2017 Lesson postponed.
  • 27/03/2017
  • 29/03/2017
  • 03/04/2017
  • 05/04/2017
  • 26/04/2017 Intro to GPU-based computing: GPGPU and OpenCL. Development history of modern GPUs: the graphic pipeline, HW/FW implementations, load imbalance related to the distribution of the graphic primitives executed, and the move to more “general purpose”, programmable core designs; general constraints and optimizations of the GPU approach; modern GPU architecture, memory optimizations and constraints, memory spaces. GPGPU and the transition to explicitly general-purpose programming languages for GPUs. Management of large sets of thread processors; the concept of command queue and concurrent execution of tasks; consequences for the constraints on synchronization of large computations split among several thread processors.
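The SPMD model and point-to-point primitives recalled in the 22/02 and 27/02 entries (ranks, envelope, wildcards, status objects) can be sketched in a minimal C program. This is only an illustration: the tag value and message layout are arbitrary choices, and an installed MPI implementation (e.g. MPICH or Open MPI) is assumed.

```c
/* Minimal SPMD sketch: every rank except 0 sends its rank to rank 0,
 * which receives with wildcards and reads the message envelope from
 * the MPI_Status object. Tag 42 is an arbitrary choice.
 * Build with mpicc, run with e.g. mpirun -n 4 ./a.out. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start of the MPI epoch */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        MPI_Send(&rank, 1, MPI_INT, 0, 42, MPI_COMM_WORLD);
    } else {
        for (int i = 1; i < size; i++) {
            int value;
            MPI_Status st;
            /* Wildcards accept any sender and any tag; the status
             * object reports who actually sent the matched message. */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            printf("rank 0 received %d from rank %d (tag %d)\n",
                   value, st.MPI_SOURCE, st.MPI_TAG);
        }
    }
    MPI_Finalize();                         /* end of the MPI epoch */
    return 0;
}
```

Note that the same executable runs on every process (single program / multiple data); behaviour diverges only through the rank.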
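The MPI_Type_vector / commit / free sequence from the 27/02 entry can be shown with a strided column of a row-major matrix; matching then relies on the type signature, as discussed in the datatype matching rules. The matrix size N and the chosen column are illustrative.

```c
/* Derived-datatype sketch: one column of an N x N row-major matrix
 * described as MPI_Type_vector(N blocks, blocklength 1, stride N). */
#include <mpi.h>
#include <stdio.h>
#define N 4

int main(int argc, char **argv) {
    int rank, size;
    double a[N][N];
    MPI_Datatype column;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);   /* mandatory before use in communication */

    if (size >= 2) {
        if (rank == 0) {
            for (int i = 0; i < N; i++)
                for (int j = 0; j < N; j++)
                    a[i][j] = i * N + j;
            /* Send column 2: base address &a[0][2], one 'column' element;
             * MPI performs the strided packing itself. */
            MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            double col[N];
            /* Matching is by type signature (N doubles), so a strided
             * send can land in a contiguous receive buffer. */
            MPI_Recv(col, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            for (int i = 0; i < N; i++)
                printf("col[%d] = %g\n", i, col[i]);
        }
    }
    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}
```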
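One possible shape for the "assigned asynchronicity degree" exercise of the 13/03 lab is a ring of MPI_Request slots driven by MPI_Isend and MPI_Wait; this is a sketch of one design, not the official lab solution, and K and NMSG are illustrative parameters.

```c
/* Bounded-asynchrony sketch: a "channel" from rank 0 to rank 1 that
 * keeps at most K sends pending, cycling over a ring of requests. */
#include <mpi.h>

#define K 4       /* assigned asynchronicity degree */
#define NMSG 16

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            int buf[K];
            MPI_Request req[K];
            for (int i = 0; i < K; i++) req[i] = MPI_REQUEST_NULL;
            for (int m = 0; m < NMSG; m++) {
                int slot = m % K;
                /* Reusing a slot only after its previous send completed
                 * bounds the number of in-flight messages to K.
                 * (Waiting on MPI_REQUEST_NULL returns immediately.) */
                MPI_Wait(&req[slot], MPI_STATUS_IGNORE);
                buf[slot] = m;
                MPI_Isend(&buf[slot], 1, MPI_INT, 1, 0,
                          MPI_COMM_WORLD, &req[slot]);
            }
            MPI_Waitall(K, req, MPI_STATUSES_IGNORE);
        } else if (rank == 1) {
            for (int m = 0; m < NMSG; m++) {
                int v;
                MPI_Recv(&v, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            }
        }
    }
    MPI_Finalize();
    return 0;
}
```

Each buffer slot stays untouched between its MPI_Isend and the matching MPI_Wait, as the non-blocking semantics require.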
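The custom user operators of the 15-16/03 entry follow the operator-function signature fixed by the MPI standard; the "absolute maximum" operator and the sample per-rank data below are illustrative choices, not taken from the lessons.

```c
/* User-defined reduction sketch: an "absolute maximum" operator built
 * with MPI_Op_create and used in MPI_Reduce. */
#include <mpi.h>
#include <stdio.h>

/* Signature fixed by the standard: combine 'in' into 'inout'. */
static void absmax(void *in, void *inout, int *len, MPI_Datatype *dt) {
    int *a = (int *)in, *b = (int *)inout;
    (void)dt;   /* single-type use; a general operator would inspect it */
    for (int i = 0; i < *len; i++) {
        int x = a[i] < 0 ? -a[i] : a[i];
        int y = b[i] < 0 ? -b[i] : b[i];
        b[i] = x > y ? x : y;   /* result goes into the inout argument */
    }
}

int main(int argc, char **argv) {
    int rank, size, result;
    MPI_Op op;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = (rank % 2 ? -1 : 1) * (rank + 1);   /* e.g. 1, -2, 3, -4 */
    MPI_Op_create(absmax, 1 /* commutative */, &op);
    MPI_Reduce(&local, &result, 1, MPI_INT, op, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("abs-max over %d ranks: %d\n", size, result);
    MPI_Op_free(&op);
    MPI_Finalize();
    return 0;
}
```

Declaring the operator commutative lets the implementation reorder the combine steps, one of the implementation freedoms mentioned for collectives.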
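Communicator creation as a collective operation (06/03 entry) is easiest to see with MPI_Comm_split; the process-grid interpretation and the grid width Q below are illustrative.

```c
/* Communicator sketch: MPI_Comm_split partitions MPI_COMM_WORLD into
 * the rows of a logical process grid of width Q. */
#include <mpi.h>
#include <stdio.h>
#define Q 2

int main(int argc, char **argv) {
    int rank, row_rank;
    MPI_Comm row;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collective over MPI_COMM_WORLD: processes with the same color
     * end up in the same new communicator; the key (here the global
     * rank) orders the members inside it. */
    MPI_Comm_split(MPI_COMM_WORLD, rank / Q, rank, &row);
    MPI_Comm_rank(row, &row_rank);
    printf("global rank %d -> row %d, local rank %d\n",
           rank, rank / Q, row_rank);

    MPI_Comm_free(&row);
    MPI_Finalize();
    return 0;
}
```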

Slides, Notes and References to papers

Date         | Slides                     | Notes | References / Info
20/02, 22/02 | Course introduction        |       |
22/02, 27/02 | MPI Lesson 1               |       |
27/02, 01/03 | MPI Lesson 2               |       |
01/03, 06/03 | MPI Lesson 3, MPI Lesson 4 |       |
06/03        | MPI Lesson 5               |       |
08/03, 13/03 | MPI Lab slides             |       |
16/03        | MPI Lesson 6               |       |
20/03        |                            |       |
22/03        |                            |       |
magistraleinformaticanetworking/spd/lezioni16.17.1493818672.txt.gz · Last modified: 03/05/2017 at 13:37 by Massimo Coppola
