Parallel Programming: Glossary

Key Points

Introduction
  • Parallelization is essential for achieving good performance on modern computer architectures.

  • There are many different forms of parallelization. The optimal approach depends on both the type of software problem and the type of hardware available.

Introduction to Distributed-Memory Parallelization
  • Distributed-memory parallelization is the primary mechanism for achieving parallelization between nodes.

  • Distributed-memory parallelization tends to have larger memory requirements than other parallelization techniques, because data needed by more than one process must be replicated in each process's private memory.
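
As a minimal sketch of the distributed-memory model (the variable and its printed value are illustrative, not from the lesson), each MPI process below runs in its own private address space, so every rank holds an independent copy of its data; replicating data across many ranks in this way is what drives up memory requirements:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Each process has a private address space: this variable exists
        // independently on every rank, once per process.
        int local_value = 100 * rank;
        printf("Rank %d of %d holds local_value = %d\n", rank, size, local_value);

        MPI_Finalize();
        return 0;
    }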

MPI Hands-On - mpi4py
  • Where possible, use collective communication operations instead of point-to-point communication for improved efficiency and simplicity.

  • Intelligent design choices can help you reduce the memory footprint required by MPI-parallelized codes.

MPI Hands-On - C++
  • Where possible, use collective communication operations instead of point-to-point communication for improved efficiency and simplicity (see the sketch after this list).

  • Intelligent design choices can help you reduce the memory footprint required by MPI-parallelized codes.
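
Both points can be sketched in one short program (the problem size and its even divisibility across ranks are assumptions made for brevity; in mpi4py the same pattern would use comm.reduce). Each rank below sums only its own slice of the index range, so no process ever stores the full problem, and a single collective MPI_Reduce replaces a hand-written loop of point-to-point sends and receives:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Illustrative problem size; assumed to divide evenly among ranks.
        const long N = 1000000;
        const long chunk = N / size;
        const long start = rank * chunk;

        // Each rank works on its own slice only, keeping the per-process
        // memory footprint at roughly 1/size of the full problem.
        double local_sum = 0.0;
        for (long i = start; i < start + chunk; ++i) {
            local_sum += static_cast<double>(i);
        }

        // One collective call combines all partial sums on rank 0,
        // replacing size - 1 explicit MPI_Send / MPI_Recv pairs.
        double total = 0.0;
        MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            printf("Total = %.0f\n", total);
        }

        MPI_Finalize();
        return 0;
    }

Built with mpicxx and launched with, e.g., mpirun -np 4, this keeps per-process memory at roughly 1/size of the whole problem, and the collective call is typically faster than equivalent point-to-point code because MPI implementations optimize reductions internally.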

Introduction to Shared-Memory Parallelization
  • Shared-memory parallelization typically has lower memory requirements than distributed-memory parallelization, because all threads operate on a single shared copy of the data (see the sketch after this list).

  • Subtle bugs that are difficult to identify and fix are common when using shared-memory parallelization.
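
A minimal sketch of the first point (the array size is an illustrative assumption; compile with -fopenmp on GCC or Clang): every thread created by the OpenMP directive below works on the same single allocation, so memory use does not grow with the number of threads:

    #include <cstdio>
    #include <vector>

    int main() {
        // One copy of the data, visible to every thread; nothing is
        // replicated per thread.
        std::vector<double> data(1000000, 1.0);

        // Each thread updates a distinct portion of the same shared array,
        // so these writes do not conflict with one another.
        #pragma omp parallel for
        for (long i = 0; i < static_cast<long>(data.size()); ++i) {
            data[i] *= 2.0;
        }

        printf("data[0] = %.1f\n", data[0]);
        return 0;
    }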

OpenMP Hands-On
  • It is extremely important to avoid race conditions, in which multiple threads modify the same data concurrently (see the sketch after this list).

  • Achieving good performance with OpenMP often requires significant code refactoring.
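
A concrete sketch of the race-condition point (the loop bound is illustrative): a plain parallel-for over a shared accumulator loses updates, because every thread reads and writes the same variable concurrently; OpenMP's reduction clause fixes this by giving each thread a private copy of sum and combining the copies at the end of the loop:

    #include <cstdio>

    int main() {
        const long N = 1000000;
        double sum = 0.0;

        // RACE CONDITION: with a plain "#pragma omp parallel for" here,
        // all threads would update 'sum' at once, and the result would be
        // wrong in a way that varies from run to run.

        // The reduction clause makes the accumulation safe.
        #pragma omp parallel for reduction(+ : sum)
        for (long i = 0; i < N; ++i) {
            sum += 1.0;
        }

        printf("sum = %.0f (expected %ld)\n", sum, N);
        return 0;
    }

Compiled with -fopenmp, this prints 1000000 every time; without the reduction clause the result would typically be smaller and would vary from run to run whenever more than one thread is used.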

Glossary

FIXME