Distributed and shared memory parallelization with MPI and OpenMP
From Monday 23 September 2013, 08:00 to Friday 27 September 2013, 17:00
Distributed memory parallelization with the Message Passing Interface MPI (Mon+Tue, for beginners):
On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives a full introduction to MPI-1; further topics are domain decomposition, load balancing, and debugging. An overview of MPI-2 and its one-sided communication is also taught. Hands-on sessions (in C and Fortran) allow participants to immediately test and understand the basic constructs of the Message Passing Interface (MPI).
Shared memory parallelization with OpenMP (Wed, for beginners):
The focus is on shared memory parallelization with OpenMP, the key programming model on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) allow participants to immediately test and understand the directives and other interfaces of OpenMP. Tools for debugging race conditions are also presented.
Advanced topics in parallel programming (Thu+Fri):
Topics are MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, OpenMP on clusters, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. Hands-on sessions are included.
Location: Großer Seminarraum, HLRS (Höchstleistungsrechenzentrum Stuttgart), Universität Stuttgart, Allmandring 30, D-70550 Stuttgart