High Performance Computing

Sisu: Cray XC40 Supercomputer

The Sisu supercomputer (sisu.csc.fi) is the most powerful supercomputer in Finland and one of the most powerful in Northern Europe. Sisu's Cray XC40 system architecture is designed from the ground up for High Performance Computing (HPC).

Sisu is targeted at massively parallel applications that can run effectively on hundreds to thousands of compute cores in parallel. Scaling even very tightly coupled parallel computations to this level is made possible by the extremely high-bandwidth, low-latency Aries interconnect.
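
To make this concrete, below is a minimal sketch (a generic example, not a CSC-provided code) of the kind of tightly coupled MPI program Sisu is aimed at: each rank exchanges boundary data with its neighbours on every iteration, so overall speed depends directly on the latency and bandwidth of the interconnect. The problem size, step count, and update rule are illustrative placeholders.

    #include <mpi.h>
    #include <stdio.h>

    #define N     1024   /* local work size per rank (illustrative) */
    #define STEPS 100    /* number of halo-exchange iterations      */

    int main(int argc, char **argv)
    {
        int rank, size;
        double local[N + 2];   /* interior cells plus one halo cell per side */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (int i = 0; i < N + 2; i++)
            local[i] = (double)rank;

        int left  = (rank - 1 + size) % size;   /* periodic neighbours */
        int right = (rank + 1) % size;

        for (int step = 0; step < STEPS; step++) {
            /* Exchange halo cells with both neighbours on every step;
             * this per-iteration communication is what makes the
             * computation tightly coupled. */
            MPI_Sendrecv(&local[N], 1, MPI_DOUBLE, right, 0,
                         &local[0], 1, MPI_DOUBLE, left,  0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&local[1],     1, MPI_DOUBLE, left,  1,
                         &local[N + 1], 1, MPI_DOUBLE, right, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Placeholder local update that reads the halo values. */
            for (int i = 1; i <= N; i++)
                local[i] = 0.5 * (local[i - 1] + local[i + 1]);
        }

        if (rank == 0)
            printf("completed %d halo-exchange steps on %d ranks\n",
                   STEPS, size);

        MPI_Finalize();
        return 0;
    }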

The current second phase of Sisu consists of nine cabinets, with a total theoretical peak performance of 1688 TFLOPS.

  • 1688 compute nodes, each with two 12-core Intel Xeon E5-2690 v3 (Haswell) 2.6 GHz CPUs, for a total of 40512 cores
  • 64 GB of memory (2.67 GB/core) in each node
  • Aries interconnect between compute nodes
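
For reference, the peak figure follows directly from the node specifications above, assuming the 16 double-precision floating-point operations per clock cycle that a Haswell core can perform with AVX2 fused multiply-add instructions:

    $1688\ \text{nodes} \times 24\ \text{cores} \times 2.6\ \text{GHz} \times 16\ \text{FLOP/cycle} \approx 1685\ \text{TFLOPS}$

which is consistent with the quoted 1688 TFLOPS (roughly 1 TFLOPS per node).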

Sisu's performance was significantly improved in the second-phase upgrade in August 2014.

Using more than 1024 cores for a single calculation requires that the application's scalability be demonstrated. The instructions for demonstrating scalability on Sisu can be found here.
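
The exact reporting procedure is defined in those instructions, but the measurement itself typically amounts to timing a fixed problem at several core counts and comparing the speedup. A generic strong-scaling harness might look like the sketch below; do_work() is a hypothetical placeholder for the application's real computation.

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical placeholder for the application's real computation. */
    static double do_work(long iterations)
    {
        double sum = 0.0;
        for (long i = 0; i < iterations; i++)
            sum += 1.0 / (double)(i + 1);
        return sum;
    }

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long total_work = 1L << 30;    /* fixed total problem size */
        long my_share = total_work / size;   /* strong-scaling split     */

        MPI_Barrier(MPI_COMM_WORLD);         /* align the start time     */
        double t0 = MPI_Wtime();

        double partial = do_work(my_share);
        double result;
        MPI_Reduce(&partial, &result, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("%d cores: %.3f s (result %.6f)\n", size, t1 - t0, result);

        MPI_Finalize();
        return 0;
    }

Running this at, say, 1024, 2048, and 4096 cores and comparing the times against the 1024-core baseline gives the speedup curve that a scalability demonstration is based on.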

Sisu User Guide ►


Taito: HP Apollo 6000 XL230a/SL230s Supercluster

The Taito supercluster (taito.csc.fi) is intended for serial (single-core) jobs and small- to medium-sized parallel jobs. There are also several "fat nodes" for jobs requiring a large amount of memory.

Taito consists of sixteen cabinets, with a total theoretical peak performance of 600 TFLOPS. Taito has been deployed in two phases that presently coexist.

Second phase (available since January 2015):

  • 407 Apollo 6000 XL230a Gen9 server blades, each with two 12-core Intel Xeon E5-2690 v3 (Haswell) 2.6 GHz CPUs, for a total of 9768 cores
  • 128 GB of memory on normal nodes
  • 256 GB of memory on 10 "big memory" nodes

First phase (available since April 2013):

  • 576 HP ProLiant SL230s Gen8 servers, each equipped with two 8-core Intel Xeon E5-2670 (Sandy Bridge) 2.6 GHz CPUs, for a total of 9216 cores
  • 64 GB of memory on normal nodes
  • 256 GB (16 GB/core) of memory in 16 "big memory" nodes
  • 1.5 TB (48 GB/core) of memory in 2 "hugemem" nodes, with 32 cores each
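
The quoted 600 TFLOPS total can be cross-checked from the two phases, assuming 16 double-precision FLOPs per cycle per Haswell core (AVX2 with FMA) and 8 per Sandy Bridge core (AVX without FMA):

    $9768 \times 2.6\ \text{GHz} \times 16 + 9216 \times 2.6\ \text{GHz} \times 8 \approx 406 + 192 \approx 598\ \text{TFLOPS}$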


The compute nodes are connected with a high-bandwidth, low-latency InfiniBand FDR interconnect.
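
The practical effect of the interconnect is easy to probe with a classic ping-pong microbenchmark; the sketch below (a generic example, not a CSC-provided tool) bounces a 1 MiB message between two ranks and reports the round-trip time and effective bandwidth. Placing the two ranks on different nodes measures the InfiniBand fabric itself.

    #include <mpi.h>
    #include <stdio.h>

    #define REPS 1000

    int main(int argc, char **argv)
    {
        int rank, size;
        static char buf[1 << 20];   /* 1 MiB message buffer */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size != 2) {
            if (rank == 0)
                fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {   /* send, then wait for the echo */
                MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {           /* echo everything back         */
                MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            double rtt = (t1 - t0) / REPS;              /* round-trip time */
            double bw  = 2.0 * sizeof buf / rtt / 1e9;  /* GB/s, both legs */
            printf("round trip %.1f us, bandwidth %.2f GB/s\n",
                   rtt * 1e6, bw);
        }

        MPI_Finalize();
        return 0;
    }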

A subset of the compute nodes is allocated for the Pouta cloud service. The exact number of nodes allocated to Pouta is adjusted depending on the demand for cloud resources.

Taito User Guide ►

 

Taito GPU

The Taito GPU system contains servers with NVidia GPU accelerators. These specialized processors are designed specifically for high performance computing and can provide superior performance compared to traditional CPUs. To utilize their capabilities fully, however, one needs either an existing application that supports these processors or to carry out the porting and optimization work; a small device-query sketch follows the hardware list below.

CSC and other European HPC centers provide regular courses on porting and optimizing for these architectures.

The following GPU resources are currently available:

  • 12 nodes with 2 NVidia K80 GPUs in each node
    • Each K80 card contains 2 GPUs, so there are 4 GPUs per node; these are all connected to the first CPU
    • 2 Xeon E5-2680 v3 CPUs with 12 cores each, running at 2.5 GHz
    • 256 GB of DDR4 memory
    • 850 GB of HDD scratch space
  • 20 nodes with 4 NVidia P100 (Pascal) GPUs in each node
    • The number of nodes may change as they are shared with cPouta
    • The P100 GPUs are connected in pairs to each CPU
    • 2 Xeon E5-2680 v4 CPUs with 14 cores each, running at 2.4 GHz
    • 512 GB of DDR4 memory
    • 2 x 800 GB of SATA SSD scratch space
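
As mentioned above, a quick way to verify which GPUs a job actually sees is a small device query. The sketch below is plain C host code using the CUDA runtime API (assuming the CUDA toolkit is available on the GPU nodes); on the nodes listed above it should report four devices, either the 2 x 2 GPUs of the K80 cards or the four P100 cards.

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            fprintf(stderr, "no usable GPUs: %s\n", cudaGetErrorString(err));
            return 1;
        }

        /* List every visible device with its name and memory size. */
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("device %d: %s, %.1f GB memory\n",
                   i, prop.name, prop.totalGlobalMem / 1e9);
        }
        return 0;
    }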

Access and usage instructions can be found in the Taito user guide (chapter NVidia GPU (Taito-GPU)).

 
