
Mease Lab HPC Setup

Instructions / reference information for using mease-lab-to-nwb on the bwForCluster Helix.

Important: The previous bwForCluster MLS&WISO has been replaced by bwForCluster Helix

Migration to the new cluster

If you were already using MLS&WISO with username hd_ab123:

Account registration

Register for an account (see BwForCluster_User_Access for full instructions):

Login

Log in to the cluster with username hd_UNIID, where UNIID is your uni-id:

ssh hd_ab123@helix.bwservices.uni-heidelberg.de

You will be asked for your bwForCluster service password and your OTP (the current 6-digit token displayed in your authenticator app).
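
If you log in often, an SSH config entry on your own machine saves some typing. A minimal sketch, assuming your uni-id is ab123 (put this in ~/.ssh/config):

Host helix
    HostName helix.bwservices.uni-heidelberg.de
    User hd_ab123

After that, ssh helix is enough to connect.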

SDS

The files on SDS are located at

/mnt/sds-hd/sd19b001
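
A quick way to check that you can access the share after logging in is to list it:

ls /mnt/sds-hd/sd19b001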

Initial setup

Once you are logged in and can access SDS, do

source /mnt/sds-hd/sd19b001/HPC_INSTALLATION_HELIX/init.sh

You should then be in the measelab conda environment, with these programs installed and on your path:

To do this automatically every time you log on or run a job (recommended), add the above line to your ~/.bashrc with this command:

echo "source /mnt/sds-hd/sd19b001/HPC_INSTALLATION_HELIX/init.sh" >> ~/.bashrc

Each time you log in you should then see (measelab) at the start of your command line, showing that you are in the measelab conda environment.
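
If you want to check this from inside a script or batch job rather than by eye, these standard commands will confirm which environment is active (the expected output is an assumption based on a typical conda setup):

echo $CONDA_DEFAULT_ENV   # should print "measelab"
which python              # should point to a python inside the measelab environment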

Interactive Jupyter use

There is a helper script for setting up and using a remote Jupyter server on the cluster. To use it, type

setup-jupyter

Run setup-jupyter --help or see the setup-jupyter script for more information.
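
If you ever need to reach the Jupyter server from your own machine by hand, a generic SSH tunnel looks like the sketch below; NODE and the port 8888 are placeholders, use whatever compute node and port your Jupyter server is actually running on:

ssh -L 8888:NODE:8888 hd_ab123@helix.bwservices.uni-heidelberg.de

Then open http://localhost:8888 in your local browser.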

Jupyter tips

bwVisu

bwVisu is a new service for using graphical user interface programs, in particular Phy, on the HPC cluster.

Account registration

Login

Use

Interactive command-line use

To see how many idle nodes are currently available:

sinfo_t_idle
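
If you want to look at the state of a particular partition instead, the standard Slurm sinfo command also works, for example (the partition name here is just an example):

sinfo --partition=gpu-single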

To run an interactive job on a node with a GPU (i.e. log on to it and run commands there):

srun --partition=single --time=0:30:00 --nodes=1 --ntasks-per-node=1 --mem=16gb --gres=gpu:A40:1 --pty /bin/bash

This asks for 30 minutes with 1 CPU, 1 A40 GPU, and 16GB of RAM.

(Note that not all of the system RAM is available, e.g. if the machine has 64GB the most you can ask for is around 60GB.)

If you don’t mind which type of GPU you get, you can simply use

--gres=gpu:1
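
For example, the same interactive request as above but without pinning the GPU type would be:

srun --partition=single --time=0:30:00 --nodes=1 --ntasks-per-node=1 --mem=16gb --gres=gpu:1 --pty /bin/bash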

Once the job starts you will be logged into the node. If you didn't add the source line to your ~/.bashrc file, you will have to run it again manually.

Batch jobs

Longer jobs can be submitted as batch jobs to a queue, and will run when resources are available.

See mease-hpc-setup examples or wiki.bwhpc.de/e/Helix/Slurm for more information on the batch system.
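
As a rough sketch of the workflow (the script name is just an example, see submit.sh below): submit your job script with sbatch and keep an eye on it with the usual Slurm commands:

sbatch submit.sh    # submit the job; prints the job id
squeue -u $USER     # list your queued and running jobs
scancel 123456      # cancel a job by its id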

If you have a running batch job and you want to log in to the node where it is running you can do

srun --jobid=123456 --pty /bin/bash

Then run e.g. htop to see CPU/RAM use, or nvidia-smi to see GPU use.

MUA-Analysis

The MUA-Analysis repo is cloned at /mnt/sds-hd/sd19b001/MUA-analysis.

See /mnt/sds-hd/sd19b001/Liam/MUA_examples/example1 for an example of how to run this on HPC.

Files

The two MATLAB files are copied from Example_experiment, with these changes:

After modifying example_experimental_parameters.m, I ran matlab -batch example_experimental_parameters to regenerate the .mat file.

There is also a file submit.sh, which describes what resources your analysis job needs and what command it should run.

The file slurm-1271232.out and the folder /mnt/sds-hd/sd19b001/Liam/ECE_testing_data/2021-08-20_M6_S1_ECE_Processing_example are generated outputs from running the analysis.

submit.sh

#!/bin/bash

# Resources requested from Slurm: one GPU node, 12 CPU cores,
# one GPU of any type, for up to 1 hour.
#SBATCH --partition=gpu-single
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=12
#SBATCH --gres=gpu:1
#SBATCH --time=1:00:00

# Run the MATLAB analysis workflow and report how long it took.
time matlab -batch ECE_Workflow_example

To run the analysis
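
A minimal sketch, assuming you have copied the example folder somewhere you can write to: submit submit.sh from inside that folder and follow the Slurm output file it produces (the job id in the filename will differ):

cd /path/to/your/copy/of/example1
sbatch submit.sh
tail -f slurm-<jobid>.out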

Notes