High Performance Computing - Services for Research
Puhti is a supercomputer that caters to a wide range of use cases. Its powerful CPU partition comprises almost 700 nodes with a range of memory sizes and local storage options, all connected by a fast interconnect. Puhti lets users reserve compute and memory resources flexibly, and they can run anything from interactive single-core data processing to medium-scale simulations spanning multiple nodes.
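In practice this flexible reservation is expressed through a batch job script submitted to the Slurm scheduler. A minimal sketch of a small single-node job might look like the following; the partition name, project identifier, and application name are illustrative assumptions, not taken from this page:

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a small Puhti job.
# Partition, account, and program names below are assumed for illustration.
#SBATCH --job-name=example
#SBATCH --partition=small        # assumed partition name
#SBATCH --account=project_12345  # assumed project identifier
#SBATCH --time=00:30:00          # 30 minutes of wall time
#SBATCH --cpus-per-task=4        # reserve 4 cores...
#SBATCH --mem=8G                 # ...and 8 GB of memory

srun ./my_analysis               # assumed application binary
```

Because resources are reserved per core and per gigabyte rather than per node, a job like this shares its node with other users' jobs.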
There are also 80 GPU nodes, with a total of 320 GPUs. This partition is suitable for all kinds of workloads capable of utilizing GPUs, even heavy AI models that span multiple nodes.
Puhti has a wide selection of scientific software installed.
Mahti is a supercomputer designed for massively parallel jobs requiring high floating point performance and a fast interconnect. The system has in total 1404 nodes equipped with powerful AMD Rome CPUs. These are connected with a fast interconnect, allowing jobs to scale across the full system. On Mahti, users reserve full nodes, so that jobs can extract the full performance of each node. Mahti is in particular geared towards medium to large scale simulations requiring petaflops of compute power. Smaller parallel workloads that can use full nodes efficiently can also utilize Mahti.
- The pilot usage period starts on 6 July 2020
- The general availability of Mahti for all users is expected in early August.
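The full-node reservation model described above shows up directly in the job script: instead of asking for cores and memory, a job asks for whole nodes and places one task per core. A hedged sketch, where the partition name, project identifier, and application are assumptions:

```shell
#!/bin/bash
# Hypothetical Slurm script for a multi-node MPI job on Mahti.
# Partition, account, and program names are assumed for illustration.
#SBATCH --partition=medium        # assumed partition name
#SBATCH --account=project_12345   # assumed project identifier
#SBATCH --time=02:00:00
#SBATCH --nodes=4                 # reserve 4 full nodes
#SBATCH --ntasks-per-node=128     # one MPI rank per core on a 128-core AMD Rome node

srun ./my_simulation              # assumed MPI application
```

Reserving nodes exclusively avoids interference from other jobs, which matters for tightly coupled MPI workloads whose speed is set by the slowest rank.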
When LUMI starts operating in early 2021, it will be one of the most competitive supercomputers in the world. The design philosophy for LUMI was to create a platform on which AI, especially deep learning, and traditional large-scale simulations can be combined with high-performance data analytics to solve a single research problem. The theoretical peak performance of LUMI will be over 200 petaflops (2×10¹⁷ floating point operations per second), placing it among the fastest supercomputers in the world.
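The quoted figure is a unit conversion: one petaflop/s is 10¹⁵ floating point operations per second, so 200 petaflop/s comes out to 2×10¹⁷ flop/s, which can be checked directly:

```shell
# 1 petaflop/s = 10^15 flop/s; multiply out 200 * 10^15.
flops=$((200 * 1000000000000000))
echo "$flops"   # prints 200000000000000000, i.e. 2 * 10^17
```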
- Storage: over 60 petabytes, with a sizeable flash layer providing more than 1 terabyte/s of bandwidth.
- Used technologies: the supercomputer achieves its high performance with a large number of nodes with accelerators (GPUs). In addition, the system is complemented by a CPU-only partition, IaaS cloud services, and a large object storage solution.
- System procurement: November 2019 – June 2020
- Data center preparation: March 2020 – October 2020
- System installations: Q4/2020
- Operations: Q1/2021-Q4/2026
Allas is CSC's general-purpose research data storage for both the new and the old infrastructure. The system is based on Ceph object storage technology, provides 12 PB of storage capacity, and will be the backbone of a rich environment for storing, sharing, and analyzing data across the CSC compute infrastructure.
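Since Allas is an object store rather than a file system, data lives in buckets of objects and is typically accessed through object-storage protocols such as S3. As a hedged sketch of what such a workflow looks like with a generic S3 client (the bucket name, file name, and choice of client tool are assumptions, not taken from this page):

```shell
# Hypothetical S3-style workflow against an object store such as Allas.
# Bucket and file names are invented for illustration; s3cmd must be
# configured with the service endpoint and credentials beforehand.
s3cmd mb s3://my-project-bucket                 # create a bucket
s3cmd put results.csv s3://my-project-bucket/   # upload an object
s3cmd ls s3://my-project-bucket                 # list objects in the bucket
```

Objects uploaded this way are then reachable from any of the compute systems, which is what makes an object store a natural hub for sharing data across the infrastructure.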