Detailed description: This training course covers parallel programming with the Message Passing Interface (MPI) and with OpenMP. The course combines lectures with examples worked through as hands-on exercises. The topics covered are directly applicable to almost every parallel computer architecture. Participants are advised to acquire basic knowledge of parallel programming before the course.
To exploit large, massively parallel clusters, a hybrid approach combining MPI and OpenMP is commonly used. Moreover, the MPI and OpenMP standards continue to evolve, adding new ideas and features that make them increasingly effective on new machines. This gives developers of HPC applications a smooth evolution path for their codes, without heavy refactoring to take up new technologies.
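To make the hybrid idea concrete, the short sketch below (an illustration rather than course material; the printed message and thread-support level are arbitrary choices) starts an OpenMP thread team inside each MPI process:

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Ask for MPI_THREAD_FUNNELED: only the main thread will call MPI. */
    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI process opens its own OpenMP thread team. */
    #pragma omp parallel
    printf("Hello from thread %d of %d on MPI rank %d of %d\n",
           omp_get_thread_num(), omp_get_num_threads(), rank, size);

    MPI_Finalize();
    return 0;
}
```

With common toolchains such a program would typically be built with mpicc and the -fopenmp flag and launched with mpirun, with the number of threads per process controlled through the OMP_NUM_THREADS environment variable.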
The 3-day course will cover topics including parallelism, OpenMP tasks, the OpenMP memory model, performance tuning, hybrid MPI + OpenMP programming, and OpenMP implementations. The course is aimed at programmers seeking to deepen their understanding of OpenMP.
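For readers unfamiliar with the tasking topic, a minimal illustration (not taken from the course materials) is the classic recursive Fibonacci, where each recursive call becomes an OpenMP task and taskwait joins the children:

```c
#include <omp.h>
#include <stdio.h>

/* Naive recursive Fibonacci; each recursive call becomes an OpenMP task. */
static long fib(int n)
{
    if (n < 2)
        return n;

    long x, y;
    #pragma omp task shared(x)
    x = fib(n - 1);
    #pragma omp task shared(y)
    y = fib(n - 2);
    #pragma omp taskwait   /* wait for both child tasks before combining */
    return x + y;
}

int main(void)
{
    long result;
    #pragma omp parallel
    {
        #pragma omp single   /* one thread creates the initial task tree */
        result = fib(20);
    }
    printf("fib(20) = %ld\n", result);
    return 0;
}
```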
The course is delivered in an intensive format using UL-FME's training facilities. It is taught using a variety of methods, including formal lectures, practical exercises, programming examples, and informal tutorial discussions. After the course, participants should be able to write more efficient OpenMP programs.
Target audience: Postgraduate students and young researchers in the natural and technical sciences, as well as engineers from industries where supercomputing can provide a competitive advantage (automotive, electronics, materials, logistics, etc.).
Prerequisite knowledge: For the hands-on sessions, participants should be able to work on the Unix/Linux command line and have intermediate programming skills in C/C++. Since the focus of the course is on parallelization, participants must already be familiar with the topic and have basic knowledge of OpenMP and MPI.
Skills to be gained:
- Set up and run a simulation in parallel on an HPC cluster
- Use OpenMP (Open Multi-Processing)
- Use the Message Passing Interface (MPI)
- Get an introduction to the Open MPI library project
- Express numerical problems in parallel programming paradigms (see the sketch after this list)
- Be aware of potential design and performance pitfalls on heterogeneous architectures
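As a hedged illustration of expressing a numerical problem in parallel paradigms, the sketch below approximates pi with the midpoint rule in both models at once: intervals are distributed cyclically across MPI ranks, each rank's share is summed with an OpenMP reduction, and a final MPI_Reduce combines the partial results. The interval count and the cyclic distribution are arbitrary choices made for this example.

```c
#include <mpi.h>
#include <stdio.h>

/* Approximate pi = integral of 4/(1+x^2) over [0,1] with the midpoint rule. */
int main(int argc, char **argv)
{
    const long n = 100000000;          /* number of intervals (illustrative) */
    const double h = 1.0 / (double)n;  /* interval width */

    int provided, rank, size;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles every size-th interval; OpenMP threads share the sum. */
    double local = 0.0;
    #pragma omp parallel for reduction(+ : local)
    for (long i = rank; i < n; i += size) {
        double x = (i + 0.5) * h;
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine the per-rank partial sums on rank 0. */
    double pi = 0.0;
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi approx = %.12f\n", pi);

    MPI_Finalize();
    return 0;
}
```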