High Performance Computing

Puhti Supercomputer

The new air-cooled supercomputer Puhti and the data management system Allas will be installed during the summer of 2019. More information about the utilization project can be found here.

Link to the user guide TBA.

Puhti supercomputer

  • In total 682 CPU nodes, with a theoretical peak performance of 1.8 Petaflops (see the worked check after this list)
  • Each node is equipped with two latest-generation Intel Xeon processors, code name Cascade Lake, with 20 cores each running at 2.1 GHz (Xeon Gold 6230)
  • The 682 compute nodes have a mix of memory sizes:
    • 192 GB on 532 nodes
    • 384 GB on 132 nodes, with 40 also containing a 3.2 TB NVMe disk for fast local storage
    • 768 GB on 12 nodes
    • 1.5 TB on 6 nodes
  • HDR InfiniBand (200 Gbps) interconnect network by Mellanox; nodes connected with 100 Gbps HDR100 links
  • 4+ PB Lustre parallel storage system by DDN
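
The 1.8 Petaflops figure can be sanity-checked from the specifications above. A minimal Python sketch, assuming 32 double-precision FLOPs per cycle per core (two AVX-512 FMA units, a Cascade Lake property not stated in the list itself):

    # Peak = nodes x sockets x cores x clock x FLOPs/cycle
    nodes = 682
    sockets_per_node = 2
    cores_per_socket = 20
    clock_ghz = 2.1
    flops_per_cycle = 32  # assumption: two AVX-512 FMA units, double precision

    peak_gflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * flops_per_cycle
    print(f"{peak_gflops / 1e6:.2f} PFLOPS")  # ~1.83 PFLOPS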
     

Puhti-AI artificial intelligence partition

  • In total 80 nodes, with a combined peak performance of 2.7 Petaflops (see the worked check after this list)
  • Each node has
    • Two latest-generation Intel Xeon processors, code name Cascade Lake, with 20 cores each running at 2.1 GHz (Xeon Gold 6230)
    • Four Nvidia Volta V100 GPUs with 32 GB of memory each
    • 384 GB of main memory
    • 3.2 TB of fast local storage
    • Dual-rail HDR100 interconnect connectivity providing 200 Gbps of aggregate bandwidth
  • This partition is engineered to allow GPU-intensive workloads to scale well across multiple nodes
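
The 2.7 Petaflops figure combines the CPU and GPU peaks. A rough Python check, assuming about 7.8 TFLOPS of double-precision peak per SXM2 V100 (an assumption, not a figure stated above):

    nodes = 80
    cpu_tflops_per_node = 2 * 20 * 2.1 * 32 / 1000  # same formula as the CPU partition check
    gpu_tflops_per_node = 4 * 7.8                   # assumption: ~7.8 TFLOPS FP64 per V100

    total_pflops = nodes * (cpu_tflops_per_node + gpu_tflops_per_node) / 1000
    print(f"{total_pflops:.2f} PFLOPS")  # ~2.71 PFLOPS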
     

Allas - data management solution

In the first phase, CSC will also offer Allas, a new common data management solution for both the new and old infrastructure. The system is based on Ceph object storage technology, provides 12 PB of storage capacity, and will be the backbone of a rich environment for storing, sharing, and analyzing data across the CSC compute infrastructure.
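
As an illustration of what object-storage access typically looks like, here is a minimal Python sketch against an S3-compatible interface of the kind Ceph commonly exposes. The endpoint URL, credentials, and bucket name are placeholders, not actual Allas parameters:

    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://object-store.example.csc.fi",  # placeholder endpoint
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    s3.create_bucket(Bucket="my-dataset")                       # buckets group objects
    s3.upload_file("results.csv", "my-dataset", "results.csv")  # store an object

    # List what is stored in the bucket.
    for obj in s3.list_objects_v2(Bucket="my-dataset").get("Contents", []):
        print(obj["Key"], obj["Size"])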

Sisu: Cray XC40 Supercomputer

The Sisu supercomputer (sisu.csc.fi) is the most powerful supercomputer in Finland and one of the most powerful in Northern Europe. Sisu's Cray XC40 system architecture is designed from the ground up for High Performance Computing (HPC).

Sisu is targeted at massively parallel applications that can run effectively on hundreds to thousands of compute cores in parallel. Even very tightly coupled parallel computations can reach this level of scaling thanks to the extremely high-bandwidth, low-latency Aries interconnect.
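
As an illustration of the tightly coupled style of computation Sisu targets, the following Python sketch (using mpi4py) has every rank compute a partial result and combine it with a global reduction. It is a generic MPI example, not Sisu-specific code:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank integrates its own strided slice of f(x) = 4 / (1 + x^2)
    # over [0, 1] with the midpoint rule; the global sum approximates pi.
    n = 10_000_000
    local = sum(4.0 / (1.0 + ((i + 0.5) / n) ** 2)
                for i in range(rank, n, size)) / n

    # A latency-sensitive collective: this is where the interconnect matters.
    pi = comm.allreduce(local, op=MPI.SUM)
    if rank == 0:
        print(f"pi ~ {pi:.10f} on {size} ranks")

Launched with, e.g., mpirun -n 512 python pi.py, the same script runs unchanged on any core count, which is what makes the reduction a convenient minimal model of tightly coupled communication.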

The current second phase of Sisu consists of nine cabinets, with a total theoretical peak performance of 1688 TFLOPS (see the check after the list below).

  • 1688 compute nodes, each with two 12-core Intel Xeon E5-2690v3 (Haswell) 2.6 GHz CPUs, for a total of 40512 cores
  • 64 GB of memory (2.67 GB/core) in each node
  • Aries interconnect between compute nodes
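
A quick check of the 1688 TFLOPS figure, assuming 16 double-precision FLOPs per cycle per core (two AVX2 FMA units on Haswell, an assumption not stated above). Each node then delivers just under 1 TFLOPS, which is why the node count and the TFLOPS figure nearly coincide:

    cores = 40512
    clock_ghz = 2.6
    flops_per_cycle = 16  # assumption: two AVX2 FMA units, double precision

    print(f"{cores * clock_ghz * flops_per_cycle / 1000:.0f} TFLOPS")  # ~1685 TFLOPS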

Sisu's performance was significantly improved in the second phase upgrade in August 2014.

Using more than 1024 cores for a single calculation requires that the application's scalability be demonstrated. Instructions for demonstrating scalability on Sisu can be found here.
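
A scalability demonstration typically amounts to running the same problem at increasing core counts and reporting speedup and parallel efficiency relative to the smallest run. A minimal Python sketch of such a report; the timings below are placeholders to be replaced by measured values:

    timings = {          # cores -> wall-clock seconds (placeholder data)
        256: 1000.0,
        512: 520.0,
        1024: 275.0,
        2048: 150.0,
    }

    base_cores = min(timings)
    base_time = timings[base_cores]

    print("cores  speedup  efficiency")
    for cores in sorted(timings):
        speedup = base_time / timings[cores]
        efficiency = speedup / (cores / base_cores)
        print(f"{cores:5d}  {speedup:7.2f}  {efficiency:9.1%}")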

Sisu will be decommissioned on 16.8.2019. More information about the new supercomputers Puhti and Mahti and the data management system Allas can be found here.

Sisu User Guide ►


Taito: HP Apollo 6000 XL230a/SL230s Supercluster

The Taito supercluster (taito.csc.fi) is intended for serial (single-core) and small to medium-sized parallel jobs. There are also several "fat nodes" for jobs requiring a large amount of memory.

Taito consists of sixteen cabinets, with a total theoretical peak performance of 600 TFLOPS. Taito was deployed in two phases; the first-phase nodes have since been decommissioned.

Second phase (available January 2015 - December 2019):

  • 407 Apollo 6000 XL230a Gen9 server blades, each with two 12-core Intel Xeon E5-2690v3 (Haswell) 2.6 GHz CPUs, for a total of 9768 cores
  • 128 GB of memory on normal nodes
  • 256 GB of memory on 10 "big memory" nodes

Decommissioned: First phase (available April 2013 - February 2019):

  • 576 HP ProLiant SL230s Gen8 servers, each equipped with two 8-core Intel Xeon E5-2670 (Sandy Bridge) 2.6 GHz CPUs, for a total of 9216 cores
  • 64 GB of memory on normal nodes
  • 256 GB (16 GB/core) of memory in 16 "big memory" nodes
  • 1.5 TB (48 GB/core) of memory in 2 "hugemem" nodes, with 32 cores each


The compute nodes are connected with a high-bandwidth, low-latency FDR InfiniBand interconnect.

A subset of the compute nodes is allocated to the Pouta cloud service. The exact number of nodes allocated to Pouta is adjusted depending on the demand for cloud resources.

After the decommissioning of the Taito Sandy Bridge nodes, computing capacity is reduced until the new environment is taken into use. To compensate for this, we will allow smaller workloads on the Cray XC40 supercomputer Sisu: the minimum number of nodes in the "small" batch job queue will be reduced from three to one. Access to Sisu can be applied for by the project manager of a CSC project in the customer portal at https://sui.csc.fi/group/sui/resources-and-applications.

The Haswell nodes on Taito will remain in use until the end of 2019. More information about the new supercomputers Puhti and Mahti and the data management system Allas can be found here.

Taito User Guide ►

 

Taito GPU

The Taito GPU system contains servers with Nvidia GPU accelerators. These specialized processors are designed for high-performance computing and can provide superior performance compared to traditional CPUs. However, to utilize their capabilities fully, one needs either an existing application that is compatible with these processors or to carry out the porting and optimization work.
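
As a small illustration of what porting work involves, the sketch below moves a simple element-wise loop onto a GPU as a CUDA kernel, written in Python with Numba for brevity. This is one possible route among several, not a Taito-specific recipe:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def axpy(a, x, y, out):
        i = cuda.grid(1)        # global thread index
        if i < x.size:          # guard: the grid may overshoot the array
            out[i] = a * x[i] + y[i]

    n = 1_000_000
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(x)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    axpy[blocks, threads_per_block](2.0, x, y, out)  # Numba moves the arrays to the GPU and back

    assert np.allclose(out, 2.0 * x + y)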

CSC and other European HPC centers provide regular courses on porting and optimizing for these architectures.

The following GPU resources are currently available:

  • 12 nodes with 2 Nvidia K80 GPUs in each node
    • Each K80 card contains 2 GPUs, so there are 4 GPUs per node; these are all connected to the first CPU
    • 2 Xeon E5-2680 v3 CPUs with 12 cores each running at 2.5 GHz
    • 256 GB of DDR4 memory
    • 850 GB of HDD scratch space
  • 20 nodes with 4 Nvidia P100 (Pascal) GPUs in each node
    • The number of nodes may change, as they are shared with cPouta
    • The P100 GPUs are connected in pairs to each CPU
    • 2 Xeon E5-2680 v4 CPUs with 14 cores each running at 2.4 GHz
    • 512 GB of DDR4 memory
    • 2 × 800 GB of SATA SSD scratch space

Access and usage instructions can be found in the Taito user guide (chapter NVidia GPU (Taito-GPU)).

 
