
How do I migrate away from legacy Pouta instances?

There are multiple ways to migrate away from the legacy virtual machine types (tiny, small etc.) and several options for where to migrate to. Base the decision on your workload type, how the legacy instance is currently configured and how you store your data.

Do I need to migrate?

If you no longer need your virtual machine (VM), please remove it. You will then not be contacted about it again. If you no longer need your project, please contact servicedesk@csc.fi.

If you want to keep your virtual machine, you will need to migrate it if it uses any of the following cPouta flavors: tiny, mini, small, medium, large or fullnode, or the following ePouta flavor: tb.westmere.32core.

Please note that these are generic migration instructions. Depending on your internal infrastructure you may need to adjust these procedures. They are suggestions; if you know your infrastructure well, you may have easier or better migration options available.

How to determine where to migrate

Please check the available flavors you can use from:

https://research.csc.fi/pouta-flavours

For workloads that are not compute intensive, we recommend migrating to standard flavors. For compute intensive workloads, the hpc flavors are recommended. The tb flavors should be used for workloads with high memory requirements.

Differences between cPouta legacy flavors and migration options

The change that requires your action is that the new flavor options do not have an ephemeral disk. An ephemeral disk is an extra disk that you may use for data, but this data is not stored in a snapshot of the virtual machine. You may or may not be using the ephemeral disk in your current legacy virtual machine. Any data on the ephemeral disk can be moved to a volume.

hpc-gen1 flavors are very similar to the tiny, small etc. legacy flavors in other respects. Please note that hpc-gen1/hpc-gen2 flavor disks are backed only by the compute node's local disk and are thus just as vulnerable to disk or server failure as the legacy flavors. Their primary use case is HPC clusters.

standard flavors have their disks on shared storage, and they are more failure tolerant. You will not lose your virtual machine in case the hypervisor that is hosting your standard flavor fails.

Differences between ePouta legacy flavors and migration options

The hpc/io/tb VM flavors and underlying compute servers typically do not offer redundant root disks. The tb.westmere.32core VM flavor was an exception to the rule, since it provided many terabytes of redundant server-local storage. This type of storage is discontinued with the deprecation of the tb.westmere.* flavors. All important data in Pouta should be stored in volumes, which are superior to server-local storage in terms of redundancy.

Due to technical constraints in the server infrastructure, a direct one-click conversion from tb.westmere.32core to the replacement tb.3 or tb.4 flavors is not possible. Thus we have outlined a slightly different migration path for them. Please see "ePouta: Migrating tb.westmere.32core VMs" below for more details.

How to check if you have data on the ephemeral disks

With Linux images, ephemeral disks should appear as /dev/vdb and most often get automounted to /mnt, so first you should check if there's any data in that particular directory.

However, it's also possible to mount the ephemeral disk elsewhere, so to be certain you should also check with e.g. lsblk that there isn't a device called /dev/vdb mounted somewhere else, or with LVM tools that the disk is not serving any logical volumes.

To help you identify the ephemeral disk, its size for each flavor is listed in the following table.

Flavor               Ephemeral disk size in GB (approx.)
tiny                 110
mini                 110
small                220
medium               440
large                660
fullnode             990
tb.westmere.32core   3250

Please note that if you have volumes attached, the ephemeral disk may in some cases be something other than /dev/vdb (e.g. /dev/vdc or /dev/vdd).
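The checks described above can be sketched as a few shell commands run inside the VM. This is a suggestion only; device names and mount points may differ on your system:

```shell
# List block devices with their sizes; compare the SIZE column
# against the flavor table above to spot the ephemeral disk.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

# Check how much data is under the default mount point /mnt.
sudo du -sh /mnt

# Check that the ephemeral disk is not backing any LVM volumes.
sudo pvs | grep -w vdb || echo "/dev/vdb is not an LVM physical volume"
```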

Useful links to our documentation

Virtual machine flavors

Creating snapshots (5.1.1)

Using volumes

Ephemeral storage

Launching instances and floating IPs (3.2.2 and 3.2.3)

cPouta: Moving the VM if you have data on a legacy flavor ephemeral disk

In this case it is important to understand that the data on the ephemeral disk will be lost if you have not moved it away by the time the legacy servers are decommissioned and their disks are wiped. If you have determined that there is no data on the ephemeral disk you need to keep, you can follow the simpler "cPouta: Moving the VM if you don't have data on a legacy flavor ephemeral disk" section instead.

If the amount of data on the ephemeral disk is reasonably small, you can also consider copying it to the root disk of the source VM. If you do, you can likewise follow the "Moving the VM if you don't have data on a legacy flavor ephemeral disk" section instead.

With the following step-by-step instructions, the data is transferred from the server's local disk to a volume that is stored in the Ceph storage cluster, which is much more resilient to hardware failure.

  1. Create a persistent volume that is big enough to fit at least the data you currently have on the ephemeral disk.
  2. Attach volume to the legacy VM from OpenStack web UI.
  3. On the VM, figure out the device name the volume got when it was attached. This is most likely /dev/vdc, unless you have more volumes attached.
  4. In a shell on the VM, create a filesystem:
    1. $ sudo mkfs.xfs /dev/vdc
  5. Mount /dev/vdc to a suitable path:
    1. $ sudo mkdir /my-volume && sudo mount /dev/vdc /my-volume
  6. Copy over everything you want (the trailing slashes make rsync copy the contents of /mnt rather than the /mnt directory itself):
    1. $ sudo rsync -av -A -X /mnt/ /my-volume/
  7. Unmount the volume
    1. $ sudo umount /dev/vdc
  8. Detach the volume from the legacy VM
  9. If the VM has any additional volumes, unmount those volumes, then detach those volumes.
  10. Comment out all volumes except the / (root) partition in /etc/fstab.
  11. Shutdown the VM.
  12. Create snapshot of VM instance, and give it a descriptive name. Snapshots become OpenStack images.
  13. A snapshot can take a long time to complete (if it fails, see Caveats).
  14. Launch a new instance, choosing the snapshot you created as the image source. Select the appropriate flavor. Remember to add the correct security groups.
  15. Attach the volume to the new instance.
  16. The volume should show up as /dev/vdb.
  17. Add an entry to /etc/fstab for /dev/vdb so that the mount persists across reboots.
  18. Attach any additional volumes to the instance.
  19. If you had a public IP on your original instance, detach it from that instance and add it to the new instance.
  20. If all went well, you can now continue running your workload as you did in the legacy VM.
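Step 17 above can look like the following sketch. The mount point /my-volume is the example used earlier, and the UUID shown is a placeholder; use blkid to find the real UUID of your volume:

```shell
# Find the filesystem UUID of the attached volume.
sudo blkid /dev/vdb

# Append a matching entry to /etc/fstab. Referring to the UUID is more
# robust than /dev/vdb, since device names can change between reboots.
# The "nofail" option lets the VM boot even if the volume is detached.
echo 'UUID=<uuid-from-blkid>  /my-volume  xfs  defaults,nofail  0  2' | sudo tee -a /etc/fstab
```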

cPouta: Moving the VM if you don't have data on a legacy flavor ephemeral disk

  1. If the VM has any volumes, unmount those volumes, then detach those volumes.
  2. Comment out all volumes in /etc/fstab that are not / (root).
  3. Shutdown the VM.
  4. Create snapshot of VM instance, and give it a descriptive name. Snapshots become OpenStack images.
  5. A snapshot can take a long time to complete (if it fails, see Caveats).
  6. Launch a new instance, choosing the snapshot you created as the image source. Select the appropriate flavor. Remember to add the correct security groups.
  7. Attach any additional volumes to the instance.
  8. If you had a public IP on your original instance, detach it from that instance and add it to the new instance.
  9. If all went well, you can now continue running your workload as you did in the legacy VM.
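If you prefer the command line over the web UI, steps 3. through 6. above can be sketched with the OpenStack CLI. This is a suggestion only; the names legacy-vm, legacy-vm-snapshot, new-vm, my-key and my-sg are placeholders, and the flavor must be one available to your project:

```shell
# 3. Shut down the legacy VM.
openstack server stop legacy-vm

# 4. Create a snapshot of the VM; it becomes an OpenStack image.
openstack server image create --name legacy-vm-snapshot legacy-vm

# 6. Launch a new instance from the snapshot with a new flavor,
#    remembering to add the correct security groups.
openstack server create --image legacy-vm-snapshot --flavor standard.large \
    --key-name my-key --security-group my-sg new-vm
```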

ePouta: Migrating tb.westmere.32core VMs

First determine if you have data on the ephemeral disk. See "How to check if you have data on the ephemeral disks" above.

If you have data on the ephemeral disk, perform steps 1. through 8. from the "cPouta: Moving the VM if you have data on a legacy flavor ephemeral disk" section above to copy the data to a volume. Then continue with the steps below.

If you do not have data on the ephemeral disk, continue directly with the steps below. The extra steps compared to the cPouta legacy VM migration are steps 6 and 7 (creating a bootable volume from the snapshot and booting from it).

  1. If the VM has any additional volumes, unmount those volumes, then detach those volumes.
  2. Comment out all volumes except the / (root) partition in /etc/fstab.
  3. Shutdown the VM.
  4. Create snapshot of VM instance, and give it a descriptive name. Snapshots become OpenStack images.
  5. A snapshot can take a long time to complete (if it fails, see Caveats).
  6. From the Pouta web UI, choose Project -> Compute -> Images -> [the snapshot you just created] -> Create Volume. For size, input 80 GB. Leave all other values at their defaults.
  7. From Project -> Compute -> Instances, perform the "Launch Instance" action and select Instance Boot Source to be "Boot from Volume". Select the volume that was created in the previous step. For flavor, select the tb.3 or tb.4 VM that best fits your requirements.
  8. Remember to add correct security groups.
  9. Attach the data volume to the new instance.
  10. On tb.3 and tb.4 flavors there is a non-redundant RAID0 disk, typically as device /dev/vdb. This NVMe-backed disk should be used as scratch space for operations that require fast I/O. The data volume you attached in the previous step should show up as /dev/vdc.
  11. Add an entry to /etc/fstab for /dev/vdc so that the mount persists across reboots. Also add an entry for /dev/vdb if the scratch disk is required.
  12. Attach any additional volumes to the instance.
  13. If all went well, you can now continue running your workload as you did in the legacy VM.
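Steps 6. through 9. above can also be done with the OpenStack CLI. This is a sketch only; legacy-vm-snapshot, boot-volume, data-volume, my-key and my-sg are placeholder names, and the flavor variable must be set to an actual tb.3 or tb.4 flavor available to your project:

```shell
# Placeholder: set this to the exact tb.3/tb.4 flavor name you chose.
FLAVOR="tb.3.example"

# 6. Create an 80 GB bootable volume from the snapshot image.
openstack volume create --image legacy-vm-snapshot --size 80 boot-volume

# 7.-8. Boot a new instance from that volume, remembering to add
#       the correct security groups.
openstack server create --volume boot-volume --flavor "$FLAVOR" \
    --key-name my-key --security-group my-sg new-vm

# 9. Attach the data volume to the new instance.
openstack server add volume new-vm data-volume
```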

Caveats

Please note that each computing project has 1 TiB of space for snapshots. Unfortunately, the error message you get if you run out of space during a snapshot is not very intuitive. If a snapshot fails, running out of this space is a likely reason.

We recommend that you keep the snapshots for as long as the virtual machines keep running.

With this migration method, the internal IP address of the new instance will not be the same as that of the old instance. If you use the internal IP address anywhere, please take this into account. In cPouta, floating IP addresses can be moved over to the new instance.

If you do not have enough resources (number of VMs, cores, RAM) to complete the migration, contact servicedesk@csc.fi and ask for your quota to be increased.
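Before starting, you can check your project's current limits and usage with the OpenStack CLI (a suggestion; this requires the CLI to be configured against your project):

```shell
# Show absolute compute limits (instances, cores, RAM) and current usage.
openstack limits show --absolute

# Show the quotas of the project.
openstack quota show
```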

No Pouta VM instances are backed up by CSC. Please remember to have backups of any important data in case of problems.

If you still have questions

If none of the above scenarios fit your use case, or you have questions or concerns about the migration procedure, please contact servicedesk@csc.fi.