FAQ

How Do I …?

How do I add a new SSH key or replace my existing one?

Simply fill out the user access application form with your new SSH key and send it to hpc+applications@ruhr-uni-bochum.de. Existing keys are not invalidated if you send in additional keys. If your old key needs to be invalidated, please inform us immediately.
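For reference, a new key pair can be generated locally as sketched below; the file name and comment are only examples and can be chosen freely.

# Generate a new ed25519 key pair (file name and comment are examples)
ssh-keygen -t ed25519 -f ~/.ssh/elysium_new -C "elysium key"

# Print the public key; this is the key to put into the application form
cat ~/.ssh/elysium_new.pub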

How do I add or remove members of my project?

Simply edit the user list in your project application, save it under the exact same filename as before, and send it to hpc+applications@ruhr-uni-bochum.de.

Can someone else fill out or submit an application for me?

Unfortunately, it is not possible to delegate any of the applications to other people. Someone else may fill out the application proposal for you, but the actual applicant has to send it in from their own RUB email address, to prevent fraud.

How do I connect to a compute node my job is running on?

If your job is the only job on the node (e.g. you specified the --exclusive flag), you can simply use ssh <nodename>. SSH connections to compute nodes are only permitted while you have a running job on that node.

If your job shares the node with other jobs, use srun --pty --overlap --jobid=<jobid> /bin/bash, which connects your terminal to the already running job. You will have access to exactly the resources that your job allocated, so you cannot accidentally use resources belonging to other jobs.
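For example, assuming the job ID 123456 (use squeue to look up your own job IDs and node names):

# List your running jobs, their job IDs and the nodes they run on
squeue -u $USER

# Exclusive job: connect directly to the node
ssh <nodename>

# Shared node: attach an interactive shell to the allocation of job 123456
srun --pty --overlap --jobid=123456 /bin/bash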

Why do I get “Permission denied (publickey,hostbased)” when connecting?

When you try to connect to the cluster, the following error might occur:

$ ssh <username>@login1.elysium.hpc.ruhr-uni-bochum.de -i ~/.ssh/elysium 
<username>@login1.elysium.hpc.ruhr-uni-bochum.de: Permission denied (publickey,hostbased)

One possibility is that you are not a member of an HPC project. Please verify that your supervisor added you to one of their projects.

It might also be that you are using the wrong key. Please verify that the specified key file (the one after the -i flag) contains the key you supplied with your user application. If you are using an SSH config entry, please make sure that the IdentityFile path is set correctly.

If you have verified that you are using the correct key, please add the -vvv flag to your ssh command and send the output to hpc-helpdesk@ruhr-uni-bochum.de.
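For example, to capture the verbose output in a file (the log file name is just an example):

# Write the verbose SSH debug output (printed on stderr) to a file
ssh -vvv -i ~/.ssh/elysium <username>@login1.elysium.hpc.ruhr-uni-bochum.de 2> ssh_debug.log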

How do I copy data to and from the cluster?

The following commands expect an SSH config entry like the one shown below.
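A possible entry in ~/.ssh/config could look like this; the host alias, user name, and key path are examples and need to be adjusted:

# ~/.ssh/config (host alias, user name and key path are examples)
Host elysium
    HostName login1.elysium.hpc.ruhr-uni-bochum.de
    User <username>
    IdentityFile ~/.ssh/elysium

If the alias points at the login node you want to use, you can also write elysium:<path> instead of the full host name in the commands below.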

Data can be copied to and from the cluster using scp or rsync. We strongly recommend rsync due to its many quality-of-life features.

# Copy local to cluster
rsync -r --progress --compress --bwlimit=10240 <local_source_path> login001.elysium.hpc.rub.de:<remote_destination_path>

# Copy cluster to local
rsync -r --progress --compress --bwlimit=10240 login001.elysium.hpc.rub.de:<remote_source_path> <local_destination_path>

The source path, the destination path, and the username need to be adjusted. Note that there is no trailing “/” at the end of the source path; if there were one, the directory’s contents, not the directory itself, would be copied.
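To illustrate the difference (results is just an example directory name):

# Creates <remote_destination_path>/results on the cluster
rsync -r results login001.elysium.hpc.rub.de:<remote_destination_path>

# Copies only the files inside results/ directly into <remote_destination_path>
rsync -r results/ login001.elysium.hpc.rub.de:<remote_destination_path>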

Flags:

  • -r enables recursive copies (directories and their content)
  • --progress gives you a live update about the amount that has been copied already and an estimate of the remaining time
  • --compress attempts to compress the data on the fly to speed up the data transfer even more
  • --bwlimit limits the data transfer rate in order to leave some bandwidth for other users who want to copy data or work interactively.

If multiple files are to be copied to/from the cluster, the data should be packed into a tar archive before sending:

# create a tar archive
tar -cvf myfiles.tar <dirs_or_files>

# extract a tar archive
tar -xvf myfiles.tar

Note that running multiple instances of rsync or scp in parallel will not speed up the copy process, but will instead slow it down!
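Putting this together, a packed transfer from your local machine to the cluster might look like this (the archive name, paths, and directory names are examples):

# Pack locally, transfer the single archive, then unpack it on the cluster
tar -cvf myfiles.tar <dirs_or_files>
rsync --progress --compress --bwlimit=10240 myfiles.tar login001.elysium.hpc.rub.de:<remote_destination_path>
ssh login001.elysium.hpc.rub.de "cd <remote_destination_path> && tar -xvf myfiles.tar"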

How do I access the internet from a compute node?

Compute nodes can only connect to hosts within the university network by default, and for good reason. Only the login nodes have internet access.

Please organize your computations in such a way that internet access is only required for preparation and postprocessing, i.e. before your computations start or after they end. For these purposes, internet access from the login nodes is sufficient.

If you absolutely must access hosts outside of the university network from a compute node, you can use the RUB WWW Proxy Cache: export https_proxy=https://www-cache.rub.de:443 (see the sketch after the list below). However, make sure to use the cache responsibly, and keep in mind the following drawbacks:

  • Your computations depend on the availability of external network resources, which introduces the risk of job failure and therefore waste of resources.
  • The proxy cache may be bandwidth limited.
  • Network transfer times on compute nodes are fully billed in the FairShare system.
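A minimal sketch of a job script that uses the proxy; the SBATCH parameters and the downloaded URL are placeholders:

#!/bin/bash
#SBATCH --job-name=proxy-example   # example job name
#SBATCH --time=00:10:00            # example time limit

# Route outgoing HTTPS traffic through the RUB WWW Proxy Cache
export https_proxy=https://www-cache.rub.de:443

# Example download; keep such transfers short, since the time is billed via FairShare
wget https://example.org/dataset.tar.gz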

How do I check my disk quota?

The rub-quota tool reports disk usage on both /home and /lustre.
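For example, on a login node:

# Report your current disk usage on /home and /lustre
rub-quota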

How do I acknowledge the use of HPC resources in publications?

According to the Terms of Use, publications must contain an acknowledgement if HPC resources were used. For example: “Calculations (or parts of them) for this publication were performed on the HPC cluster Elysium of the Ruhr University Bochum, subsidised by the DFG (INST 213/1055-1).”

More answers coming soon.

In the meantime, please see Help.