
How do I migrate my data away from legacy volumes?

Migration options

There are multiple ways to migrate data from the legacy "hpc-storage"/"-" type volumes to the newer "standard" type volumes. Below are two examples; alternatively, you can use e.g. NFS, SCP or any other approach that best suits your use case.

In-VM Migration (recommended)

With this option the data is transferred inside the VM, from one mounted volume to another. Expected migration throughput is approximately 100 MB/s, so transferring a 1 TB volume, for example, takes about 3 hours.

  1. Consider whether the current volume size is still optimal, e.g.
    • Should the replacement volume be smaller if the previous one is underutilized?
    • Should the replacement volume be bigger if the previous one is nearly full?
  2. Once you have decided on the size, create a new "standard" type volume
  3. Attach the "standard" type volume to the VM to which the "hpc-storage"/"-" type volume is currently attached
  4. Partition, format and mount the new volume (see the first sketch after this list)
  5. Stop all processes and software that would be writing data to the old volume
  6. Copy the data over, for example with:
    • $ rsync -arv /mnt/hpc-storage-volume/directory /mnt/standard-volume/
  7. Run the same rsync command a couple more times to verify that nothing is still writing to the old volume
  8. Once you are sure that all data has been copied over, unmount and detach the old "hpc-storage"/"-" type volume
  9. If the old volume was specified in /etc/fstab or other boot scripts:
    • Remove it and add a respective "standard" volume entry in its place
    • We highly recommend referencing the new volume in /etc/fstab by its UUID (which you can find, for example, by running lsblk -f), example syntax:
      • UUID=dc975f4e-e8bd-47eb-b07b-854d91d64eb1   /home   xfs   defaults   0   0
    • Once modifications to /etc/fstab are complete, reboot the VM
  10. Verify that the system mounts the new volume and that software relying on volume-based mounts operates as expected (see the second sketch after this list)
  11. Delete the "hpc-storage" or "-" type volume and its snapshots (if present).
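
For step 4, a minimal sketch of partitioning, formatting and mounting the new volume. The device name /dev/vdb and the mount point /mnt/standard-volume are assumptions; check the actual device name with lsblk and adjust to your setup:

  $ sudo parted -s /dev/vdb mklabel gpt
  $ sudo parted -s /dev/vdb mkpart primary 0% 100%
  $ sudo mkfs.xfs /dev/vdb1
  $ sudo mkdir -p /mnt/standard-volume
  $ sudo mount /dev/vdb1 /mnt/standard-volume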
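
For steps 9 and 10, a sketch of looking up the UUID for the /etc/fstab entry and verifying the mount after reboot (again assuming /dev/vdb1 and /mnt/standard-volume):

  $ lsblk -f /dev/vdb1             # prints the filesystem UUID to use in /etc/fstab
  $ findmnt /mnt/standard-volume   # after the reboot, confirms the volume is mounted
  $ df -h /mnt/standard-volume     # shows the size and usage of the mounted volume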

One caveat for this approach is that SELinux contexts are not retained.

Volume Retyping As Self Service

OpenStack provides a retyping feature which can be used to convert volumes. The new, converted volume retains the UUID of the old volume. However, there are some downsides:

  • Retyping is currently available only as an offline operation; in other words, the volume must be detached before retyping is started
  • The storage gateway is able to process only two retyping operations simultaneously
  • Performance is not as good as with the recommended option above.

Expected throughput is approximately 40 MB/s, so retyping a 1 TB volume, for example, takes about 7 hours.

  1. Stop all processes and software that are writing data to the old volume
  2. Unmount and detach the volume from the VM
  3. Delete snapshots of the volume (if any)
  4. Source your project's openrc.sh into a shell environment where you have OpenStack CLI tools installed
  5. Execute the following command: $ cinder retype VOLUME_UUID standard --migration-policy on-demand
  6. Wait (retyping can take a long time; the sketch after this list shows how to check the status)
  7. Once retyping is complete, attach the volume back to the VM.
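
A sketch of the whole flow using the OpenStack command line clients; VM_UUID, VOLUME_UUID and the openrc filename are placeholders for your own values:

  $ source project-openrc.sh                                          # load the project's credentials
  $ openstack server remove volume VM_UUID VOLUME_UUID                # detach the volume from the VM
  $ openstack volume snapshot list --volume VOLUME_UUID               # list snapshots that must be deleted first
  $ cinder retype VOLUME_UUID standard --migration-policy on-demand   # start the retyping
  $ openstack volume show VOLUME_UUID                                 # repeat until status is "available" and type is "standard"
  $ openstack server add volume VM_UUID VOLUME_UUID                   # re-attach the volume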

If you have started retyping a volume and it takes much longer than expected, please contact cloud-support@csc.fi.

Volume Retyping By Cloud Support

Same as above, but we'll take care of the retyping for you.

  1. Stop all processes and software that are writing data to the old volume
  2. Unmount and detach the volume from the VM
  3. Delete snapshots of the volume (if any)
  4. Notify cloud-support@csc.fi that you'd like this particular volume retyped.


About root volumes

At this time we are unable to provide a migration method that works for bootable (root) volumes. For these volumes, we suggest copying the data out of the root file system, for example with rsync, as in the sketch below.
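
A minimal sketch of copying data out of the root file system with rsync; the source path, destination host and destination path are placeholders for your own values:

  $ rsync -av /home/ /mnt/standard-volume/home/       # copy to an attached "standard" volume
  $ rsync -av /home/ user@backup-host:/backup/home/   # or copy over SSH to another machine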