3.1 Virtual machine flavors, Billing Unit rates and quotas

cPouta consumes the same billing units as Sisu and Taito. You can find more information in the CSC Computing environment user guide.

Users can use whole nodes or fractions of the available resources. The Virtual Machine flavors available in cPouta are listed in Table 3.1.
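The flavors and their properties can also be listed from the command line with the OpenStack client. A minimal sketch, assuming you have sourced your project's openrc file first:

```shell
# List all flavors visible to your project, with cores, RAM and disk sizes.
openstack flavor list
# Show the details of one flavor, e.g. standard.small:
openstack flavor show standard.small
```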

Starting from January 1st 2017, cPouta billing will be based on flavor-hours instead of CPU-hours. Table 3.1 has been updated with new Billing Unit coefficients related to this change.

Table 3.1 Available virtual machine flavors in cPouta and their Billing Unit coefficients for 2017. Note that the default cPouta user account allows users to launch only a subset of the available virtual machine flavors.
Standard flavors

Flavor          | Cores | Memory   | Disk (root) | Disk (ephemeral) | Disk (total) | Memory/core | Billing Units/h
standard.tiny   | 1     | 1000 MB  | 80 GB       | 0 GB             | 80 GB        | 1000 MB     | 0.5
standard.small  | 2     | 2000 MB  | 80 GB       | 0 GB             | 80 GB        | 1000 MB     | 1
standard.medium | 3     | 4000 MB  | 80 GB       | 0 GB             | 80 GB        | 1333 MB     | 2
standard.large  | 4     | 8000 MB  | 80 GB       | 0 GB             | 80 GB        | 2000 MB     | 4
standard.xlarge | 6     | 16000 MB | 80 GB       | 0 GB             | 80 GB        | 2667 MB     | 8

HPC flavors

Flavor          | Cores | Memory    | Disk (root) | Disk (ephemeral) | Disk (total) | Memory/core | Billing Units/h
hpc-gen1.1core  | 1     | 3750 MB   | 80 GB       | 0 GB             | 80 GB        | 3750 MB     | 2
hpc-gen1.4core  | 4     | 15000 MB  | 80 GB       | 0 GB             | 80 GB        | 3750 MB     | 8
hpc-gen1.8core  | 8     | 30000 MB  | 80 GB       | 0 GB             | 80 GB        | 3750 MB     | 16
hpc-gen1.16core | 16    | 60000 MB  | 80 GB       | 0 GB             | 80 GB        | 3750 MB     | 32
hpc-gen2.2core  | 2     | 10000 MB  | 80 GB       | 0 GB             | 80 GB        | 5000 MB     | 4
hpc-gen2.8core  | 8     | 40000 MB  | 80 GB       | 0 GB             | 80 GB        | 5000 MB     | 15
hpc-gen2.16core | 16    | 80000 MB  | 80 GB       | 0 GB             | 80 GB        | 5000 MB     | 30
hpc-gen2.24core | 24    | 120000 MB | 80 GB       | 0 GB             | 80 GB        | 5000 MB     | 45
hpc-gen2.48core | 48    | 240000 MB | 80 GB       | 0 GB             | 80 GB        | 5000 MB     | 90

I/O flavors

Flavor   | Cores | Memory   | Disk (root) | Disk (ephemeral) | Disk (total) | Memory/core | Billing Units/h
io.70GB  | 2     | 10000 MB | 20 GB       | 70 GB            | 90 GB        | 5000 MB     | 5
io.160GB | 4     | 20000 MB | 20 GB       | 160 GB           | 180 GB       | 5000 MB     | 10
io.340GB | 8     | 40000 MB | 20 GB       | 340 GB           | 360 GB       | 5000 MB     | 20
io.700GB | 16    | 80000 MB | 20 GB       | 700 GB           | 720 GB       | 5000 MB     | 40

GPU flavors

Flavor     | GPUs | Cores | Memory    | Disk (root) | Disk (total) | Memory/core | Billing Units/h
gpu.1.1gpu | 1    | 14    | 125000 MB | 80 GB       | 80 GB        | 8928 MB     | 60
gpu.1.2gpu | 2    | 28    | 250000 MB | 80 GB       | 80 GB        | 8928 MB     | 120
gpu.1.4gpu | 4    | 56    | 500000 MB | 80 GB       | 80 GB        | 8928 MB     | 240

Deprecated flavors

Flavor   | Cores | Memory   | Disk (root) | Disk (ephemeral) | Disk (total) | Memory/core | Billing Units/h
tiny     | 1     | 1024 MB  | 10 GB       | 110 GB           | 120 GB       | 1024 MB     | 2
mini     | 1     | 3500 MB  | 10 GB       | 110 GB           | 120 GB       | 1750 MB     | 2
small    | 4     | 15360 MB | 10 GB       | 220 GB           | 230 GB       | 3840 MB     | 8
medium   | 8     | 30720 MB | 10 GB       | 440 GB           | 450 GB       | 3840 MB     | 16
large    | 12    | 46080 MB | 10 GB       | 660 GB           | 670 GB       | 3840 MB     | 24
fullnode | 16    | 61440 MB | 10 GB       | 900 GB           | 910 GB       | 3840 MB     | 32
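With flavor-hour billing, the billing units an instance consumes are simply its Billing Unit coefficient multiplied by the hours it exists. A worked sketch using the standard.large coefficient (4 BU/h) from Table 3.1:

```shell
# Billing units consumed = BU/h coefficient x hours the instance exists.
# Example: a standard.large (4 BU/h) running for a 30-day month.
bu_per_hour=4
hours=$((30 * 24))              # 720 hours
echo $((bu_per_hour * hours))   # 2880 billing units
```

Note that the instance consumes billing units for the whole time it exists, not only while it is doing useful work.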


Which type of flavor should I use?

Standard flavors

Typical use cases:

  • Web services (non-HPC)
  • Software development

These are generic flavors that are useful for running regular web services like a web server with a database backend or some other relatively light usage. They provide better availability compared to HPC flavors for several reasons:

  • Cloud administrators can move these virtual machines from one host machine to another without causing a break in service
  • The servers used to host these instances have redundant power and networking
  • The root disks are stored on a central storage system (Ceph) that stores three copies of all data

However, these flavors are not suitable for computationally intensive workloads. The virtual CPUs used in these instances are overcommitted, which means 32 hyperthreaded CPU cores are used to provide more than 32 virtual cores.

HPC flavors

Typical use cases:

  • Scientific applications

If your use case is computationally intensive, you should use one of the HPC flavors. The availability for these instances is not as high as with the standard flavors, but you get better performance:

  • Faster CPUs
  • No overcommitment of CPU cores
    • hpc-gen1: one virtual core maps to one physical core (hyperthreading not enabled)
    • hpc-gen2: one virtual core maps to one hyperthreaded core
  • Faster networking between virtual machines (40 Gb/s vs. 10 Gb/s)

However, there are some limitations:

  • No redundant networking
  • No redundant power on hpc-gen1 (hpc-gen2 does have redundant power)
  • No redundancy for root disks - they are stored locally on SATA disks on each physical server

I/O flavors

Typical use cases:

  • Hadoop/Spark
  • Non-critical databases
  • Clustered databases

I/O flavors are intended to give you the best I/O performance on the virtual machine root and ephemeral disks. They are backed by local SSDs on the servers they run on. The SSDs are configured in RAID-0 for maximal performance, which means an increased risk of losing a virtual machine in case of hardware problems. Since the risk of disk failure is higher than on the other flavors, it is especially important to be aware of the risk of data loss on these flavors.

As these instances are tightly tied to the hardware, you should expect instance downtime during hardware maintenance. Resize/migration functionality also does not work for these instances. The bulk of the storage is available as an ephemeral disk, normally under /dev/vdb.
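A minimal sketch for taking the ephemeral disk into use, assuming it appears as /dev/vdb as described above and is not already formatted and mounted by the image's cloud-init:

```shell
# Format and mount the ephemeral disk of an io.* instance.
# WARNING: mkfs destroys any existing data on the device.
sudo mkfs.ext4 /dev/vdb
sudo mkdir -p /mnt/ephemeral
sudo mount /dev/vdb /mnt/ephemeral
```

Remember that ephemeral disk contents are lost when the instance is terminated.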

Often you will want to create clusters of servers with the io.* flavors. In these cases you probably want your virtual machines to land on different physical servers. This cannot currently be done in the web interface. To achieve this, please refer to the anti-affinity group commands in our command line instructions.
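As a sketch of those command line instructions, an anti-affinity server group and an instance launched into it could look like the following; the image and key names are placeholders:

```shell
# Create a server group with the anti-affinity scheduling policy.
openstack server group create --policy anti-affinity io-cluster
# Launch instances into the group; the scheduler will then place
# each member on a different physical host.
GROUP_ID=$(openstack server group show io-cluster -f value -c id)
openstack server create --flavor io.160GB --image <your-image> \
  --key-name <your-key> --hint group="$GROUP_ID" io-vm-1
```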

The availability for these instances is not as high as with the standard flavors, but you get significantly better I/O performance:

  • No overcommitment of hyperthreaded CPU cores
  • Fast disk I/O speeds

However, there are some limitations:

  • You must be aware of the possibility of disk failures.
  • Maintenance work can cause longer disruptions, and resize functionality is not available.

GPU Flavors

Typical use cases:

  • High-performance computing applications leveraging GPUs
  • Machine learning and deep learning (e.g. TensorFlow)
  • Rendering

GPU flavors are intended to give you high-performance computing using GPGPU (General Purpose computing on Graphics Processing Units). GPGPUs can significantly speed up some algorithms and applications. The gpu.1.* flavor family in cPouta has NVIDIA Tesla P100 GPGPUs.

The GPGPUs are suitable for deep learning and scientific computing as well as for remote desktop, rendering or visualization. The GPU flavors are backed by local SSDs on the servers. The SSDs are configured in RAID-1, and this is where the OS root disk is stored. You can use volumes for storing larger data sets. If you need to read and write a lot of data between the disk and the GPGPU, disk I/O may limit performance.

To take advantage of the acceleration which GPGPUs provide, the applications you run must have support for using them. If you write your own applications, the Optimization Service can offer help in leveraging the GPGPUs.

We know GPGPUs can be used for a lot of cool and interesting things, but please remember the resource usage must comply with the Terms of Use.

Flavor characteristics:

  • No overcommitment of hyperthreaded CPU cores
  • Root disks are on redundant SSDs in RAID-1
  • Networking between machines is 10 Gb/s
  • Networking is redundant
  • Power is redundant
  • Plenty of memory per GPU

However, there are some limitations & caveats:

  • As we use PCI passthrough to give the whole GPGPU to the instance, the administrators are not able to access the GPGPU and check its health. Please report errors or problems with the GPGPUs to CSC (and attach the output of the command "nvidia-smi -q" when you do so).
  • Applications need to be able to utilize the GPU to get a speedup.

As these instances are also tightly tied to the hardware, you may expect downtime of instances during maintenance of the hardware.

The same type of GPGPUs are also available in the batch system on Taito https://research.csc.fi/taito-gpu.

Installation and Configuration of GPU images

We have specific CUDA images available for use with the GPU nodes. These images come pre-installed with the latest CUDA version. You may use other images with the GPU flavors, but in that case you will have to install the required libraries yourself. If you want to use your own images, https://research.csc.fi/pouta-adding-images has more details about how CSC customizes the images.
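For illustration, launching a GPU instance from one of the CUDA images could look like the sketch below; the image and key names are placeholders, so check the actual image names first:

```shell
# List the available images and launch a GPU instance from a CUDA image.
openstack image list
openstack server create --flavor gpu.1.1gpu --image <cuda-image-name> \
  --key-name <your-key> gpu-vm-1
# Once logged in to the instance, verify the GPU is visible:
nvidia-smi -q | head
```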

Deprecated flavors

This is the set of original flavors that has been available since cPouta was launched. You should not launch any new virtual machines using any of these flavors. Existing virtual machines that use these flavors will continue to work. We will maintain these flavors for a period of time, but they will be removed at some point in the near future.

Quota

Each cPouta project has a cloud computing quota that limits the usage of simultaneous cloud resources. The default quota listed below is quite small, but users can apply for extensions through the resource allocation process of CSC.

Table 3.2 Default size of the cPouta project quota.

Instances    | 8
Cores        | 8
Memory       | 32 GB
Floating IPs | 2
Storage      | 1 TB

 

With the default resources, a cPouta user could launch 8 standard.tiny instances or 2 standard.large instances or 1 hpc-gen1.8core instance. The purpose of the quota is to prevent individual users from reserving the entire cluster and thus preventing access for other users.
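You can check such combinations against the quota with simple arithmetic. The sketch below verifies that two standard.large instances fit within the default quota, treating 32 GB as 32000 MB to match the flavor table's units:

```shell
# Default quota: 8 cores, 32 GB (here 32000 MB) of memory.
cores_quota=8
mem_quota=32000
# 2 x standard.large: 4 cores and 8000 MB of memory each.
echo "cores:  $((2 * 4)) / $cores_quota"     # cores:  8 / 8
echo "memory: $((2 * 8000)) / $mem_quota"    # memory: 16000 / 32000
```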

cPouta usage is also limited by the computing time quota of the project. This quota is consumed by all the users in the project in all the servers of CSC.

Storage is also limited by a quota. The default quota for new projects is 1 TB. Additional allocations can be requested through our resource allocation process. CSC will implement time-based billing for storage in the future and reserves the right to charge for storage above a certain threshold.
