Course: Advanced parallel programming

Timezone: Europe/Ljubljana
Description


This training course covers advanced parallel programming with the Message Passing Interface (MPI) and OpenMP. The course consists of discussions delivered through lectures and examples in the form of hands-on exercises. The topics covered are directly applicable to almost every parallel computer architecture. Participants are advised to acquire basic knowledge of parallel programming before the course.


Organizer

This training is a EuroHPC event. It is organized by the LECAD laboratory at the Faculty of Mechanical Engineering, University of Ljubljana, Slovenia.


  • Wednesday 10 February
    • 09:00 10:30
      Introduction to EuroHPC: How to write efficient OpenMP programs; Hybrid MPI + OpenMP programming

      How to identify performance bottlenecks and perform numerical computations efficiently. Hybrid application programs using MPI + OpenMP are now commonplace on large HPC systems.
      There are two main motivations for this combination of programming models (a minimal skeleton is sketched after the conveners' listing):
      - Reduced memory footprint
      - Improved performance

      Conveners: Janez Povh (University of Ljubljana, Faculty of Mechanical Engineering), Leon Kos
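
      A minimal hybrid MPI + OpenMP skeleton is sketched below. It is illustrative only (the file name hybrid.c and the printed message are our own, not course material): each MPI rank spawns an OpenMP team, so a node can be covered by one rank plus threads instead of many ranks, which is where the memory-footprint saving comes from.

      /* hybrid.c - minimal hybrid MPI + OpenMP sketch (illustrative) */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int provided, rank, nranks;

          /* MPI_THREAD_FUNNELED: only the master thread makes MPI calls */
          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nranks);

          #pragma omp parallel
          {
              #pragma omp master
              printf("rank %d of %d runs %d OpenMP threads\n",
                     rank, nranks, omp_get_num_threads());
          }

          MPI_Finalize();
          return 0;
      }

      Build and run, for example: mpicc -fopenmp hybrid.c && OMP_NUM_THREADS=4 mpirun -np 2 ./a.out
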
    • 10:30 12:00
      Profiling OpenMP and MPI applications; performance evaluation and optimization of OpenMP applications

      Design: choosing a parallel algorithm, discussion of the paradigms, moving from a serial code towards parallelization, testing!
      Optimization: premature optimization, unnecessary optimization, optimizing communication rather than computation, data transfer, MPI collective operations (a short sketch follows the profiling commands below).

      Convener: Leon Kos

      # Load the TAU profiler and copy the prepared examples
      module load tau
      cp -r /home/leon/PTC_OpenMPI-MP_profiling $HOME/PTC_OpenMPI-MP_profiling
      cd $HOME/PTC_OpenMPI-MP_profiling/examples/openmpi/simple-work
      # Compile with TAU's wrapper, using the MPI+OpenMP makefile and compiler instrumentation
      tau_cc.sh -tau_makefile=/opt/pkg/software/tau/2.29.1/x86_64/lib/Makefile.tau-mpi-openmp -tau_options=-optCompInst simple.c
      # Run under tau_exec with I/O tracking enabled
      mpirun -np 4 tau_exec -io ./a.out

      # Inspect the generated profiles (text and GUI viewers)
      pprof
      paraprof
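
      As an illustration of the collective-operations point above, the sketch below (our own example, not course material) replaces a hand-written loop of point-to-point messages with a single MPI_Allreduce, letting the MPI library choose an efficient reduction tree.

      /* allreduce.c - illustrative collective-communication sketch */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank, nranks;
          double local, global = 0.0;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nranks);

          local = (double)rank;   /* each rank's partial result */

          /* One collective call instead of nranks-1 individual messages */
          MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

          if (rank == 0)
              printf("sum over %d ranks = %g\n", nranks, global);

          MPI_Finalize();
          return 0;
      }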
       

    • 12:00 13:00
      Lunch break 1h
    • 13:00 15:00
      Advanced MPI: User-defined datatypes

      Explains user-defined datatypes, used for communication purposes, which are required for advanced use of MPI I/O. This feature is particularly useful to library writers (illustrative sketches follow each exercise below).

      Convener: Leon Kos

      Exercise 1

      cd MPI
      cp tasks/C/Ch12/derived-contiguous-skel.c 04    # copy the skeleton into working directory 04
      cd 04
      gedit derived-contiguous-skel.c                 # fill in the missing datatype calls
      mpicc derived-contiguous-skel.c
      srun -n 4 --partition=haswell ./a.out
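
      For orientation, a minimal use of MPI_Type_contiguous is sketched below. We do not have the skeleton's actual contents, so the buffer and type names are illustrative; run with at least two ranks.

      /* contiguous.c - illustrative MPI_Type_contiguous sketch */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank;
          double buf[4] = {0};
          MPI_Datatype vec4;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* Describe four contiguous doubles as one unit and commit it */
          MPI_Type_contiguous(4, MPI_DOUBLE, &vec4);
          MPI_Type_commit(&vec4);

          if (rank == 0) {
              buf[0] = 1.0; buf[1] = 2.0; buf[2] = 3.0; buf[3] = 4.0;
              MPI_Send(buf, 1, vec4, 1, 0, MPI_COMM_WORLD);   /* one vec4 element */
          } else if (rank == 1) {
              MPI_Recv(buf, 1, vec4, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              printf("rank 1 received %g %g %g %g\n", buf[0], buf[1], buf[2], buf[3]);
          }

          MPI_Type_free(&vec4);
          MPI_Finalize();
          return 0;
      }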

      Exercise 2

      ls tasks/C/Ch12/
      cp tasks/C/Ch12/derived-contiguous-skel.c 04
      ls 04
      cd 04
      gedit derived-contiguous-skel.c
      bg                                # resume the suspended editor in the background
      man MPI_Type_contiguous
      mpicc derived-contiguous-skel.c   # edit, recompile, and rerun as needed
      env --unset=LD_PRELOAD srun -n 3 --partition=haswell ./a.out
      mpicc derived-contiguous-skel.c
      env --unset=LD_PRELOAD srun -n 3 --partition=haswell ./a.out
      cd ..
      cp tasks/C/Ch12/derived-struct-skel.c 04
      cp tasks/C/Ch12/solutions/derived-struct.c 04
      cd 04
      diff -u derived-struct-skel.c derived-struct.c | less
      emacs derived-struct-skel.c derived-struct.c &
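
      derived-struct-skel.c builds a datatype for a C struct; a minimal sketch of MPI_Type_create_struct follows. The particle_t layout is our assumption for illustration, not the exercise's actual struct.

      /* struct_type.c - illustrative MPI_Type_create_struct sketch */
      #include <mpi.h>
      #include <stddef.h>
      #include <stdio.h>

      typedef struct { int id; double value[3]; } particle_t;

      int main(int argc, char *argv[])
      {
          int rank;
          particle_t p = {0, {0.0, 0.0, 0.0}};
          int          blocklens[2] = {1, 3};
          MPI_Datatype types[2]     = {MPI_INT, MPI_DOUBLE};
          MPI_Aint     displs[2]    = {offsetof(particle_t, id),
                                       offsetof(particle_t, value)};
          MPI_Datatype ptype;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* Describe the struct layout: counts, byte offsets, base types */
          MPI_Type_create_struct(2, blocklens, displs, types, &ptype);
          MPI_Type_commit(&ptype);

          if (rank == 0) {
              p.id = 42; p.value[0] = 1.5;
              MPI_Send(&p, 1, ptype, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              MPI_Recv(&p, 1, ptype, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              printf("rank 1: id=%d value[0]=%g\n", p.id, p.value[0]);
          }

          MPI_Type_free(&ptype);
          MPI_Finalize();
          return 0;
      }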

    • 15:00 17:00
      Parallel File I/O with MPI

      MPI I/O is an API standard for parallel I/O that allows multiple processes of a parallel program to access data in a common file simultaneously. MPI I/O maps I/O reads and writes to message-passing sends and receives. Implementing parallel I/O can improve the performance of your parallel application.

      Convener: Leon Kos
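
      A minimal MPI I/O sketch (the file name out.dat and the block layout are our own, for illustration): each rank writes its own block of a common file with a collective call.

      /* mpi_io.c - illustrative parallel file write with MPI I/O */
      #include <mpi.h>

      #define N 4   /* values written per rank */

      int main(int argc, char *argv[])
      {
          int rank, i, buf[N];
          MPI_File fh;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          for (i = 0; i < N; i++)
              buf[i] = rank * N + i;

          MPI_File_open(MPI_COMM_WORLD, "out.dat",
                        MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

          /* Collective write: every process targets its own offset in the
             common file, so all ranks access it simultaneously */
          MPI_File_write_at_all(fh, (MPI_Offset)rank * N * sizeof(int),
                                buf, N, MPI_INT, MPI_STATUS_IGNORE);

          MPI_File_close(&fh);
          MPI_Finalize();
          return 0;
      }
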
  • Thursday 11 February