4.4.6 Accessing the object storage from Taito
How you access the object storage depends strongly on what you want to do with the data. The "swift", "s3" and "s3cmd" command line tools are already installed on Taito.
|Command line tool|Requirements|
|---|---|
|swift|Computing project openrc.sh file downloaded from https://pouta.csc.fi and sourced to the shell.|
|s3|The required environment variables present in the environment (more info here).|
|s3cmd|Configuration file .s3cfg populated (more info here).|
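For s3cmd, the .s3cfg file holds the endpoint and the access keys. A minimal sketch is shown below; the keys are placeholders and the endpoint is an example, so check the Pouta documentation for the current values:

```ini
# Minimal ~/.s3cfg sketch -- keys and endpoint are placeholders
[default]
access_key = <YOUR_ACCESS_KEY>
secret_key = <YOUR_SECRET_KEY>
host_base = object.pouta.csc.fi
host_bucket = object.pouta.csc.fi
use_https = True
```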
You can use any of these commands to stage the data you need onto the project/scratch disk and then process it like you would process any other data.
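As a sketch, a job on Taito could stage an object in, process it, and stage the results out roughly as follows (the container and object names are made-up examples; $WRKDIR is the Taito scratch area):

```bash
#!/bin/bash
# Sketch of staging data with the swift client in a compute job.
# "my-container" and the object names are example placeholders.
cd $WRKDIR

# Stage in: download the input object from the object storage
swift download my-container input.dat

# ... process input.dat as you would any local file ...

# Stage out: upload the results back to the object storage
swift upload my-container results.dat
```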
For S3 use cases, you can also store the ec2 credentials with the job; this is the recommended way of accessing objects from a compute job. When you no longer need the credentials, you can delete them with:
$ openstack credential delete <credential-id>
Please note that ec2 credentials do not work against other OpenStack services.
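As a sketch, the ec2 credentials can be created and inspected with the openstack client roughly as follows (run after sourcing the project's openrc.sh):

```bash
# Create ec2-style credentials for the current project
$ openstack ec2 credentials create

# List existing credentials, e.g. to find the ID to delete later
$ openstack ec2 credentials list
```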
There is also the possibility to create temp URLs for the objects you need to access, and use those URLs to access the data from compute jobs on Taito. The benefit of using temp URLs is that no credentials are needed on Taito.
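A Swift temp URL is signed with HMAC-SHA1 over the request method, the expiry time, and the object path, using the account's Temp-URL key; the `swift tempurl` subcommand performs the same computation. A minimal sketch in Python, where the key and path values are made-up placeholders:

```python
import hmac
import time
from hashlib import sha1

def make_temp_url(key: str, method: str, path: str, valid_seconds: int) -> str:
    """Return a Swift-style temp URL query for `path`.

    `key` is the account's X-Account-Meta-Temp-URL-Key value and
    `path` has the form /v1/AUTH_<project>/<container>/<object>.
    """
    expires = int(time.time()) + valid_seconds
    # Swift signs "METHOD\nEXPIRES\nPATH" with HMAC-SHA1
    body = f"{method}\n{expires}\n{path}".encode()
    sig = hmac.new(key.encode(), body, sha1).hexdigest()
    return f"{path}?temp_url_sig={sig}&temp_url_expires={expires}"

# Example with placeholder values; prepend the storage host to use the URL
url = make_temp_url("my-secret-key", "GET", "/v1/AUTH_project/container/object", 3600)
```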