1.1 Sisu supercomputer

1.1.1 User policy

Sisu (sisu.csc.fi) is a Massively Parallel Processor (MPP) supercomputer managed by CSC - IT Center for Science. Like all the computing services of CSC, Sisu aims to enhance science and research in Finland. Thus, usage of Sisu is free of charge for researchers working at Finnish universities.

Researchers who want to use Sisu should first register as CSC users and then apply for a computing project. The registration process is described in chapter 1.2.1 of the CSC computing environment user guide.

A computing project at CSC has a shared computing quota that can be extended by application. Use of Sisu or any other CSC server consumes the computing quota granted to the project. Jobs running in Sisu reserve resources in full nodes (i.e., in chunks of 24 cores). One node hour consumes 48 billing units from the computing quota of the project.
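The billing arithmetic is easy to check in the shell. In this sketch the node and hour counts are example values, not defaults:

```shell
# Estimate billing units for a Sisu job: jobs reserve whole 24-core nodes,
# and one node hour costs 48 billing units.
nodes=10     # nodes reserved (example value)
hours=6      # wall-clock hours (example value)
echo $(( nodes * hours * 48 ))
```

A 10-node job running for 6 hours would thus consume 2880 billing units.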

The Sisu supercomputer is intended for well-scaling parallel jobs; serial and modestly scaling tasks should be run on the taito.csc.fi supercluster instead. The partitions (i.e., the batch job queues) available in Sisu are listed in Table 1.1. In Sisu, the minimum size of a parallel job is 72 compute cores and the maximum size is 19200 compute cores. The number of simultaneously running jobs is limited to 30 per user. Jobs that use 72-1008 cores (3-42 nodes) can be run in Sisu without scaling tests, but if you wish to use more than 1008 cores you should first demonstrate efficient usage of the resources with scaling tests. The instructions for scaling tests can be found at:


Even if you use fewer than 1008 cores for a job, you must make sure that you are using the resources efficiently, i.e. that your code, with the input used, scales to the selected number of cores. The rule of thumb is that when you double the number of cores, the job should run at least 1.5 times faster. If it does not, use fewer cores. If your task does not scale to at least 72 cores, use Taito and run your scaling tests there. Note that scaling depends on the input (model system) as well as on the code used. If you are unsure, contact the CSC Service Desk.
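The rule of thumb above can be checked from two timing runs of the same job. A minimal sketch, where the run times are illustrative placeholders:

```shell
# Compare wall-clock times measured on N cores and on 2N cores.
# Doubling the cores should give a speedup of at least 1.5x.
t_n=3600      # seconds on N cores (example value)
t_2n=2200     # seconds on 2N cores (example value)
awk -v a="$t_n" -v b="$t_2n" \
    'BEGIN { s = a / b; printf "speedup %.2f -> %s\n", s, (s >= 1.5 ? "OK" : "use fewer cores") }'
```

With the example times above the speedup is about 1.64, so doubling the core count would be acceptable.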


Table 1.1 Batch job partitions in Sisu

Partition    Min nodes   Max nodes   Max cores   Max running time   Notes
test         1           24          576         30 minutes
test_large   1           800         19200       4 hours            Jobs in this partition have very low priority.
small        3           24          576         12 hours
small_long   3           24          576         72 hours
large        24          400         9600        72 hours           Scaling tests are required for jobs that use more than 42 nodes.
gc           -           -           -           -                  Special queue for grand challenge jobs; limits are set based on the needs of the project.
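Since jobs in Sisu are submitted through the SLURM batch system (queried by the `sinfo` commands later in this chapter), a minimal batch script for the small partition might look like the sketch below. The program name `my_mpi_prog` is a placeholder, and on Cray XC systems the MPI launcher may be `aprun` rather than `srun`, depending on the configuration:

```shell
#!/bin/bash
#SBATCH --partition=small       # partition from Table 1.1
#SBATCH --nodes=3               # 3 nodes x 24 cores = 72 cores (the minimum job size)
#SBATCH --ntasks-per-node=24    # one MPI task per core
#SBATCH --time=12:00:00         # must not exceed the 12 h limit of the partition

# my_mpi_prog is a placeholder for the actual executable; on Cray XC
# systems the launcher may be aprun instead of srun.
srun ./my_mpi_prog
```

The script would be submitted with `sbatch`, after which its state can be followed with the `sinfo`/`squeue` family of SLURM commands.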

1.1.2 Hardware

Sisu (sisu.csc.fi) is a Massively Parallel Processor (MPP) supercomputer produced by Cray Inc., belonging to the XC40 family. It consists of nine high-density water-cooled cabinets for the compute nodes and one cabinet for login and management nodes. In November 2012 the first phase of Sisu reached position #118 on the Top500 list of the fastest supercomputers in the world, with a theoretical peak of 244.9 teraflop/s (TF). The system was updated in September 2014 with additional cabinets and new processors for the whole system. After the update the theoretical peak performance of the system is 1688 TF, which ranked Sisu 37th on the Top500 list of the fastest supercomputers in the world (November 2014 release). More information on the Linpack test and how it is used to rank the fastest supercomputers of the world can be found on the TOP500 website (http://www.top500.org/).

Sisu is composed of 422 compute blades, each of which hosts 4 compute nodes; there are thus 1688 compute nodes (called XC40 Scalar Compute nodes) in total. Each node has 2 sockets. In the whole system there are 40512 cores available for computing, provided by 3376 12-core Intel Xeon Haswell processors (E5-2690v3, 64-bit) operating at a clock rate of 2.6 GHz. The Haswell processors are well suited for high-performance computing and comprise several components: twelve cores with individual L1 and L2 caches, an integrated memory controller, three QPI links, and an L3 cache shared within the socket. The processor supports several instruction sets, most notably the Advanced Vector Extensions 2 (AVX2) instruction set; older instruction sets are still supported as well. Each Haswell core has a dedicated 32 KB of L1 cache and 768 KB of L2 cache. The 30 MB L3 cache is shared among the cores of the socket. Each node has 8 slots of 8 GB DDR4 DIMMs, operating at 2133 MHz, for a total of 64 GB per compute node. This means that there are 2.67 GB of memory available per core. The compute nodes have no local storage and run a lightweight Linux kernel provided by Cray, called Compute Node Linux (CNL).

Figure 1. Configuration of a Sisu compute node.

In addition to the compute nodes, Sisu has 6 login nodes, used for logging into the system, submitting jobs, I/O, and service usage. Unlike the compute nodes, each login node has 2 Intel Sandy Bridge processors and 256 GB of memory. The operating system is SUSE Linux Enterprise Server 11, installed on 2 TB of local storage. The system also comprises other servers for managing the supercomputer, the network, and the storage connections.

As mentioned above, each compute blade contains 4 compute nodes, as well as one Aries chip for high-speed networking, connected to the compute nodes with PCIe x16 Gen3 links. Aries is a proprietary interconnect fabric designed by Cray, using proprietary protocols. The topology of the network is called dragonfly: an "n-dimensional" torus in which the ring in each dimension is replaced with an all-to-all connection among the nodes of that dimension. Dragonfly is considered a direct network topology, as it requires fewer optical links and no external top switches. The heart of the performance for massively parallel runs lies in this Aries interconnect between the compute nodes.

Table 1.2 Configuration of the Sisu.csc.fi supercomputer. All nodes consist of two 12-core Intel Haswell 2.6 GHz processors. The aggregate performance of the system is 1688 TF.

Node type      Number of nodes   Cores/node   Total cores   Memory/node
Login node     6                 16           96            256 GB
Compute node   1688              24           40512         64 GB


The following commands give some useful information about the whole Sisu system or about the node a user is currently logged in to.

To get a quick overview of the Sisu compute node characteristics, use the following commands:

sinfo -Nel
(prints information in a compute-node-oriented format)
sinfo -el
(prints information in a partition/queue-oriented format)

For information about the disk systems one can use the following command:

df -h

Details about the available processors on the current node can be checked with:

cat /proc/cpuinfo
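The full listing is long; as an example, the number of logical cores can be counted from it, since each core appears as one `processor` entry:

```shell
# Each logical core appears as one "processor : N" line in /proc/cpuinfo.
grep -c '^processor' /proc/cpuinfo
```

On a Sisu compute node this should reflect the 24 cores of the two Haswell processors (more if hyper-threading is enabled).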

Details about the current memory usage on the node are shown with:

cat /proc/meminfo
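Again, the most relevant fields can be picked out with `grep`; for instance:

```shell
# Total, free, and currently available memory on the node.
grep -E '^(MemTotal|MemFree|MemAvailable):' /proc/meminfo
```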


