Implementing CSC's next generation computing and data management environment
Supercomputer Mahti is now available for researchers
The Mahti supercomputer is designed for medium and large parallel jobs requiring high floating-point performance and a fast interconnect. The system has a total of 1,404 compute nodes, each equipped with two powerful AMD Rome CPUs and 256 GiB of memory. The CPUs (AMD EPYC 7H12) are the fastest AMD processors available, with 64 cores per processor running at 2.6 GHz. The system is currently among the 50 fastest in the world (https://www.top500.org) and the fastest supercomputer in the Nordics. The scratch storage is 8.7 PiB in size and has twice the bandwidth of Puhti's scratch storage.
The user experience in Mahti is similar to Puhti, and many tools and policies are identical. At the same time there are some key differences, and you should evaluate which system is better suited for your workloads:
- On Mahti, users reserve full nodes so that jobs can extract the full performance of each node; on Puhti, even serial jobs are possible.
- Much larger jobs are possible: up to 200 nodes (25,600 cores) are available for a single job. Note that access to the large queue requires a scalability test.
- The CPU architecture is different, and this impacts usage of compilers and numerical libraries.
- The list of preinstalled software is more limited since it only includes applications that are able to scale to a full node with 128 cores.
- No special purpose nodes are available, and workloads utilizing large amounts of memory, local disks or GPUs are better suited for Puhti.
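To illustrate the full-node reservation policy above, a Mahti batch script might look like the following sketch. The partition name and project number are placeholders, not actual Mahti values; check the Mahti user guide for the real partition names and your own project identifier:

```shell
#!/bin/bash
#SBATCH --job-name=mpi_demo       # job name shown in the queue
#SBATCH --account=project_XXXXXX  # placeholder CSC project number
#SBATCH --partition=medium        # assumed partition name; see the user guide
#SBATCH --nodes=2                 # Mahti allocates full nodes only
#SBATCH --ntasks-per-node=128     # one MPI task per core (2 x 64-core Rome CPUs)
#SBATCH --time=00:30:00

# Launch the MPI program across all 256 allocated cores.
srun ./my_mpi_app
```

Because whole nodes are reserved, requesting fewer than 128 tasks per node would leave cores idle; jobs should be sized in multiples of full nodes.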
Further links and info
- Webinar: Getting started with Mahti on Wed 2 September at 10:00
- Getting access to Mahti
- User guide
- Mahti quick guide
Taito and HPC archive decommissioning
As CSC's new supercomputer Puhti and the Allas object storage are now available, and Taito and HPC archive are about to close, we advise you to carefully read the recent news published on this topic: https://research.csc.fi/web/guest/-/taito-and-hpc-archive-are-about-to-be-closed
Please make sure that your workflows run on Puhti, and move all relevant data from your home, work, and project directories to Allas (or elsewhere). For instructions on moving data from Taito, please read the migration tutorial in our docs.csc.fi user guide: https://docs.csc.fi/data/Allas/migration_tutorial/
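As a minimal sketch of such a migration, the `a-put` helper from CSC's allas-cli-utils can pack and upload a directory to Allas. The project number and bucket name below are placeholders, and the exact module and configuration steps may differ; the migration tutorial linked above is the authoritative reference:

```shell
# Load the Allas tools and authenticate for your project
# (project_XXXXXX is a placeholder for your CSC project number).
module load allas
allas-conf project_XXXXXX

# Pack and upload a directory from the work area to an Allas bucket.
# The bucket name is a hypothetical example.
a-put $WRKDIR/mydata -b project_XXXXXX-taito-backup
```

Uploaded data can later be listed with `a-list` and retrieved with `a-get` on Puhti or Mahti.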
To ease the transition, CSC will operate Taito with limited support and a reduced service level in early 2020. In practice this means that:
- Support and user documentation are not available for Taito in 2020 (support for data migration is provided)
- New features are not implemented, and new software is not installed
- Taito, or parts of Taito, may be permanently closed at any time with no warning.
The HPC archive service will also be shut down later in 2020. No new data should be written to the archive anymore, and in January 2020 the archive will be set to read-only mode. For larger datasets (more than 1 TiB), we kindly ask you to contact the CSC service desk so that the data transfers can be done efficiently. Further details on transferring data from HPC archive will be provided later.
CSC chose Atos as the single vendor to deliver the next CSC computing environment that consists of the next-generation supercomputer and data management systems.
In the first phase, the new air-cooled supercomputer Puhti and the Allas data management system were installed in 2019. The first phase includes:
- A supercomputer partition with total peak performance of 1.8 Petaflops.
- A partition for Artificial Intelligence research with total peak performance of 2.7 Petaflops from GPUs.
- A new common data management solution for both new and old infrastructure. This system is based on Ceph object storage technology and provides 12 PB of storage capacity.