Technical information about the new environment

Phase 1, Puhti and Allas

Supercomputer partition

  • Theoretical peak performance of 2.0 Petaflops
  • Latest-generation Intel Xeon processors (code name Cascade Lake), 30 480 cores
  • Compute nodes have a mix of memory sizes, ranging from 96 GB up to 1.5 TB
  • 4 PB Lustre parallel storage system by DDN
  • Mellanox HDR InfiniBand (200 Gbps) interconnect; nodes connected with 100 Gbps HDR100 links
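As a sanity check (not stated in the source), the quoted 2.0 Petaflops peak is consistent with the core count if one assumes AVX-512-capable cores with two FMA units (32 double-precision FLOP per cycle per core) and a roughly 2.1 GHz clock; both figures are assumptions for illustration:

```python
# Theoretical peak = cores x FLOP/cycle/core x clock rate.
cores = 30_480
flops_per_cycle = 32   # assumed: AVX-512, 2 FMA units -> 32 DP FLOP/cycle/core
clock_hz = 2.1e9       # assumed clock rate, not given in the source
peak = cores * flops_per_cycle * clock_hz
print(f"{peak / 1e15:.2f} PFLOPS")  # ≈ 2.05 PFLOPS, close to the quoted 2.0
```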

Artificial intelligence partition

  • The total peak performance of the AI system is 2.5 Petaflops from the GPUs
  • Comprises 80 nodes, each with 4 Nvidia V100 GPUs and 2 CPUs (320 GPUs in total)
  • Each node has 3.2 TB of fast local storage
  • Dual-rail HDR100 interconnect providing 200 Gbps of aggregate bandwidth
  • This partition is engineered to allow GPU-intensive workloads to scale well across multiple nodes
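The GPU peak is likewise easy to check (again a sanity check, not from the source): assuming the V100's double-precision peak of roughly 7.8 TFLOPS per GPU, 320 GPUs give about 2.5 Petaflops:

```python
# Aggregate GPU peak = GPU count x per-GPU peak throughput.
gpus = 320
fp64_per_gpu = 7.8e12  # assumed: Nvidia V100 (SXM2) peak FP64 throughput
peak = gpus * fp64_per_gpu
print(f"{peak / 1e15:.1f} PFLOPS")  # ≈ 2.5 PFLOPS, matching the quoted figure
```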

Data management solution

In the first phase, CSC will also offer Allas, a new common data management solution for both the new and the old infrastructure. The system is based on Ceph object storage technology and provides 12 PB of storage capacity. It will be the backbone of a rich environment for storing, sharing, and analyzing data across the CSC compute infrastructure.
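Object storage systems of this kind are typically reached through standard protocols such as S3 rather than a mounted file system. As a hedged sketch only, an S3-compatible client (here rclone) might be configured along these lines; the remote name, endpoint, and credential placeholders are illustrative assumptions, not values from the source:

```
# Hypothetical rclone remote for an S3-compatible object store.
# All values below are placeholders, not actual Allas settings.
[allas]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = object-store.example.org
```

With such a remote defined, data would be copied to a bucket with a command like "rclone copy data/ allas:mybucket/", where the bucket name is again only an example.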

Phase 2, Mahti

  • Atos BullSequana XH2000 supercomputer with 6.4 Petaflops of theoretical peak performance
  • AMD EPYC processors (code name Rome), 200 000 cores
  • Each node will be equipped with 256 GB of memory
  • 8 PB Lustre parallel storage system
  • Interconnect network HDR InfiniBand by Mellanox