My job runs out of memory in Taito-shell

In Taito-shell each user is allowed to use up to 128 GB of memory. However, the memory is over-subscribed and there are no session-specific memory allocations (unlike in normal batch jobs). In practice this means that if several memory-intensive jobs from different users are running on the same Taito-shell node, your job may run out of memory even though it is using much less than the limit would allow.

You can see the current amount of free memory in Taito-shell with the command:

    free -g
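In a script you can pick out just the available memory from the `free` output, for example with `awk`. A minimal sketch (the column layout assumed here is that of recent procps versions of `free`, where the seventh column of the `Mem:` row is "available"):

```shell
# Print the available memory, in gigabytes, from the "Mem:" row of free -g
free -g | awk '/^Mem:/ {print $7}'
```

This prints a single number that you can compare against your job's expected memory footprint before starting it.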

If you need more memory than what is available, you can reconnect to Taito-shell and hope to end up on a node with more free memory, or run your task as a batch job.
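Unlike a Taito-shell session, a batch job gets its own memory reservation, so it does not compete with other users for the same memory. A minimal sketch of such a batch script, assuming the Slurm batch system (the partition name, memory figure, and program name below are illustrative, not actual defaults):

```shell
#!/bin/bash
#SBATCH --partition=serial     # illustrative partition name
#SBATCH --mem=32000            # reserve 32 GB of memory for this job (in MB)
#SBATCH --time=02:00:00        # two-hour run time limit

# Run the memory-intensive task with its own guaranteed memory reservation
./my_memory_intensive_task     # hypothetical program name
```

Submit the script with `sbatch`, and the scheduler will place the job on a node where the requested memory is actually available.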