ENCCS – GPU programming: when, why and how?

Graphics Processing Units (GPUs) are the workhorses of many high-performance computing (HPC) systems worldwide. Today, the majority of HPC computing power available to researchers and engineers comes from GPUs or other accelerators. As a result, programming GPUs has become increasingly important for developers working on HPC software.

At the same time, the GPU ecosystem is complex. Multiple vendors compete in the high-end GPU market, each offering their own software stack and development tools. Beyond that, there is a wide variety of programming languages, libraries, and frameworks for writing GPU code. This makes it challenging for developers and project leaders to navigate the landscape and select the most appropriate GPU programming approach for a given project and its technical requirements.

This workshop is a follow-up to the webinar series held the previous week. We will provide a comprehensive overview of GPU programming concepts and models, including:

  • Directive-based models (OpenACC, OpenMP); a short code sketch follows this list
  • Non-portable kernel-based models (CUDA, HIP)
  • Portable kernel-based models (Kokkos, alpaka, OpenCL, SYCL, etc.)
  • High-level language support (Python, Julia)
  • Multi-GPU programming with MPI
  • Hands-on examples implemented using several models
  • Notes on preparing code for GPU porting
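
As a taste of the directive-based style in the first bullet, the sketch below (a minimal illustration, not part of the workshop material) uses an OpenMP target directive to offload a simple loop. The array size and update are illustrative assumptions; with a compiler that lacks GPU offload support, the loop simply runs on the CPU.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double x[N], y[N];

        /* Initialise the input arrays on the host. */
        for (int i = 0; i < N; ++i) { x[i] = 1.0; y[i] = 2.0; }

        /* Offload the loop to a GPU if one is available; otherwise it runs on the host. */
        #pragma omp target teams distribute parallel for map(to: x[0:N]) map(tofrom: y[0:N])
        for (int i = 0; i < N; ++i)
            y[i] = 2.0 * x[i] + y[i];   /* simple axpy-style update */

        printf("y[0] = %f\n", y[0]);    /* expected: 4.000000 */
        return 0;
    }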

Who is this workshop for?

This workshop is most relevant to researchers and engineers who already develop software that runs on CPUs in workstations or supercomputers. Familiarity with one or more programming languages such as C/C++, Fortran, Python, or Julia is recommended.

If you are not yet familiar with the basics of GPU programming concepts and models, we recommend attending the introductory webinar series offered the week before. These sessions provide the necessary background to help you get the most out of this workshop.

This training is hosted by ENCCS. Register and find out more HERE