diff --git a/docs/bids/bidskit.md b/docs/bids/bidskit.md new file mode 100644 index 0000000..c3ee89d --- /dev/null +++ b/docs/bids/bidskit.md @@ -0,0 +1,271 @@ +--- +layout: default +title: BIDS +nav_order: 4 +has_children: true +permalink: /docs/bids +--- + +# BIDS +{:.no_toc} + +## Table of contents +{: .no_toc .text-delta } + +1. TOC +{:toc} + +The [Brain Imaging Data Structure (BIDS)](http://bids.neuroimaging.io) is an increasingly adopted standard for organizing neuroimaging files into a consistent, self-documenting file tree that can be easily processed with general purpose analysis pipelines. Many of the pipelines covered in this guide (and [many others](http://bids-apps.neuroimaging.io/apps/)) are [BIDS Apps](https://bids-apps.neuroimaging.io). A BIDS app provides a consistent process for analyzing data, and usually requires little to no study-specific configuration since filenames and parameters can be identified directly from the BIDS input data. + +## DICOM to BIDS conversion +To reap the benefits of BIDS and take advantage of many prebuilt processing pipelines, you first need to convert your raw data into a BIDS-compliant structure. This guide will focus on converting raw MRI data to BIDS, but there are an increasing number of [BIDS extensions](https://bids-specification.readthedocs.io/en/latest/06-extensions.html#bids-extension-proposals) for storing non-MRI data as well as processed data outputs. + +### Overview of the conversion process + +Conversion from DICOM to BIDS usually involves the following initial configuration steps: + +1. Organize your DICOM files into folders. At a minimum, DICOMs will need to be organized into different folders by subject. If you anticipate multiple imaging sessions, DICOMs should be further organized by session. Depending on the conversion tool you use, you may also need to have each scan series stored in a separate directory. You will need to do this for every new subject. +2. Analyze DICOM metadata. 
DICOM files contain a lot of useful information about the scan—most of the imaging parameters, the name of the scan protocol, and what type of scan sequence was used. This information will be used to name your BIDS files, in conjunction with some rules that you specify. Some conversion programs provide a 'first pass' conversion step that will extract available metadata into a template that you can fill out.
3. Define naming rules. You will need to determine a rule for uniquely identifying each different type of scan based on the scan metadata. Sometimes this can be as simple as using the name of the scan protocol (provided you have clearly named your scans), but you can also use more sophisticated rules, such as distinguishing scans based on post-processing filters or scan duration, depending on the tool you use.
4. Create study metadata files. To be a valid BIDS dataset, some information about the study and participants is also required. Some conversion tools will generate template files for you to complete.

Once you have completed steps 1-4, you are ready to actually convert your DICOMs into the BIDS format. Most of the work is in defining the initial conversion rules, which only needs to be done once per study, provided you do not change the scan parameters involved in the naming rules.

Converting your DICOM data to BIDS format is required prior to using BIDS apps such as `mriqc`, `fmriprep`, and the BIRC implementation of [Containerized HCP](http://birc-int.psy.uconn.edu/wiki/index.php/Containerized_HCP).

There are several options for converting your raw DICOM files into a BIDS directory structure. This guide will cover the basics of using `bidskit` on the Storrs computing cluster. **Throughout this guide, replace `abc12345` with your own NetID.**

## Converters

### Bidskit

#### _Prepare data for conversion_

1. First, download your files to your local machine from NiDB as described in the [NiDB User Guide](http://birc-int.psy.uconn.edu/wiki/index.php/NiDB_User_Guide).
2. 
Then, create a dataset folder with a semi-descriptive name and with a `sourcedata/` subfolder containing your raw DICOM data, organized by subject, or by subject and session.
   - e.g. A typical DICOM directory tree might look something like the following, where `Ab0001` and `Ab0002` are subject IDs, and `first` and `second` are session names.

   ```
   CoolNameForYourData/
   ├── sourcedata
   │   ├── Ab0001
   │   │   ├── first
   │   │   │   └── [DICOM Images]
   │   │   └── second
   │   │       └── [DICOM Images]
   │   └── Ab0002
   │       ├── first
   │       │   └── [DICOM Images]
   │       └── second
   │           └── [DICOM Images]
   ...
   ```

3. Copy your files to the Storrs HPC cluster.
   - Use `scp` to copy files to and from the cluster.
   - e.g. To copy a file named `TEST.txt` from the desktop on your local machine to your `/scratch` folder on the cluster: `scp $HOME/Desktop/TEST.txt abc12345@login.storrs.hpc.uconn.edu:/scratch/abc12345`
   - e.g. To copy a folder named `test` and its contents to the HPC, use the `-r` option: `scp -r -p $HOME/Desktop/test abc12345@login.storrs.hpc.uconn.edu:/home/abc12345`
4. In your dataset folder on the cluster (`/scratch/abc12345/CoolNameForYourData`), use the `mkdir` command to create a folder named `scripts`.

#### _Convert raw data to BIDS structure using bidskit_

- There are two steps in the conversion process. The first pass conversion creates some template files that tell `bidskit` how to name your files. You will need to manually edit the output of this first stage once when setting up your project. Once set up, you do not need to repeat this step.
- The second pass conversion will convert data from any new participants into BIDS format.

**Step 1: First Pass Conversion**

- The first pass conversion will identify the protocols in your DICOM tree and construct a translation template for you to use in the second pass conversion. This step is done once when you are setting up your analysis and does **not** need to be repeated for every participant.

*Note: Don't forget to log in to the cluster using `ssh` with your NetID.*

1. 
On your local computer, create a batch script that contains the following code after the SLURM job directives:

   ```
   module load singularity
   singularity run /scratch/birc_ro/birc-bids_latest.sif \
       bidskit -d /scratch/abc12345/CoolNameForYourData/sourcedata --no-sessions
   ```
   *Note: See the [Storrs computing cluster wiki](https://wiki.hpc.uconn.edu/index.php/SLURM_Guide) for how to create a SLURM batch script.*

2. Use `scp` to copy this file to the `/scratch/abc12345/CoolNameForYourData/scripts` folder on the cluster. Then submit your job to the cluster (e.g. `sbatch myJob.sh`).

   *Note: If you have multiple scanning sessions in your study (usually this means your study has a longitudinal component), omit the `--no-sessions` argument.*

   - Your output directory structure will look something like this:

   ```
   CoolNameForYourData/
   ├── CHANGES
   ├── README
   ├── code
   │   └── Protocol_Translator.json
   ├── dataset_description.json
   ├── derivatives
   ├── participants.json
   ├── participants.tsv
   ├── sourcedata
   │   ├── Ab0001
   │   │   ├── first
   │   ...
   │
   │   └── Ab0002
   │       ...
   │
   └── work
       ├── sub-Ab0001
       │   ├── ses-first
       │   └── ses-second
       └── sub-Ab0002
           ├── ses-first
           └── ses-second
   ```

3. Edit the Protocol_Translator.json file: `bidskit` creates a JSON series name translator in the `code` folder during the first pass conversion. You'll use this file to specify how you want individual series data to be renamed into the output BIDS source directory. This step is done once when you are setting up your analysis and does not need to be repeated for every participant.
   - Open a new terminal and copy Protocol_Translator.json to your desktop:
     ```
     scp abc12345@login.storrs.hpc.uconn.edu:/scratch...Protocol_Translator.json $HOME/Desktop
     ```
   - Open the Protocol_Translator.json file in a text editor. 
Initially, the BIDS directory, filename suffix, and IntendedFor fields will be set to their default values of "EXCLUDE_BIDS_Directory", "EXCLUDE_BIDS_Name", and "UNASSIGNED" (the IntendedFor field is only relevant for fieldmap series and links the fieldmap to one or more series for distortion correction). It will look something like this:

   ```
   {
       "Localizer":[
           "EXCLUDE_BIDS_Directory",
           "EXCLUDE_BIDS_Name",
           "UNASSIGNED"
       ],
       "rsBOLD_MB_1":[
           "EXCLUDE_BIDS_Directory",
           "EXCLUDE_BIDS_Name",
           "UNASSIGNED"
       ],
       "T1_2":[
           "EXCLUDE_BIDS_Directory",
           "EXCLUDE_BIDS_Name",
           "UNASSIGNED"
       ],
       "Fieldmap_rsBOLD":[
           "EXCLUDE_BIDS_Directory",
           "EXCLUDE_BIDS_Name",
           "UNASSIGNED"
       ],
       ...
   }
   ```
   *Note: the double quotes are a JSON requirement.*

   - Edit the BIDS directory and filename suffix entries for each series with the BIDS-compliant filename suffix (excluding the sub-xxxx_ses-xxxx_ prefix and any file extensions) and the BIDS purpose directory name (anat, func, fmap, etc.). In the example above, this might look something like the following:

   ```
   {
       "Localizer":[
           "EXCLUDE_BIDS_Directory",
           "EXCLUDE_BIDS_Name",
           "UNASSIGNED"
       ],
       "rsBOLD_MB_1":[
           "func",
           "task-rest_acq-MB_run-01_bold",
           "UNASSIGNED"
       ],
       "T1_2":[
           "anat",
           "T1w",
           "UNASSIGNED"
       ],
       "Fieldmap_rsBOLD":[
           "fmap",
           "acq-rest_epi",
           ["func/task-rest_acq-MB_run-01_bold"]
       ],
       ...
   }
   ```

   - If multiple runs are found with identical names, run numbers will automatically be added to the filenames, e.g. task-rest_acq-MB_bold becomes task-rest_acq-MB_run-01_bold, task-rest_acq-MB_run-02_bold, etc. Review the [BIDS specification](https://bids.neuroimaging.io/) for more information on the appropriate naming convention.
   - If you want to use distortion correction in the HCP pipeline, one of the field maps must specify your anatomical files as targets in the IntendedFor section:

   ```
   ...
   "Fieldmap_rsBOLD":[
       "fmap",
       "acq-rest_run-01_epi",
       ["anat/run-01_T1w", "anat/run-01_T2w"]
   ],
   ...
   ```

4. **Edit the dataset_description.json**: In the root of the BIDS hierarchy (the bids directory in this example), a dataset_description.json template is created. This is a JSON file describing the dataset. Every dataset needs to include this file with the following mandatory fields:

   - Name: the name of the dataset
   - BIDSVersion: the version of the BIDS standard that was used

   Edit the provided template to include, at a minimum, the Name and BIDSVersion fields:

   ```
   {
       "Name": "My First BIDS Dataset",
       "BIDSVersion": "1.1.1"
   }
   ```

   - In addition, the following fields can be provided:
     - License: what license is this dataset distributed under? (see Appendix II of the [BIDS specification](https://bids-specification.readthedocs.io/en/stable/99-appendices/02-licenses.html) for a list of common licenses with suggested abbreviations)
     - Authors: a list of individuals who contributed to the creation/curation of the dataset
     - Acknowledgements: who should be acknowledged in helping to collect the data
     - Funding: sources of funding (grant numbers)

**Step 2: Second Pass Conversion**

- `bidskit` now has enough information to correctly organize the converted NIfTI images and JSON sidecars into a BIDS directory tree. Any protocol series in the `Protocol_Translator.json` file with a BIDS name or directory beginning with "EXCLUDE" will be skipped (useful for excluding localizers, teleradiology acquisitions, etc. from the final BIDS directory).

1. Rerun the exact same `bidskit` command you used for the first pass conversion above. 
This will populate the BIDS source directory from the working conversion directory (e.g., /scratch/abc12345/CoolNameForYourData).

```
module load singularity
singularity run /scratch/birc_ro/birc-bids_latest.sif \
    bidskit -d /scratch/abc12345/CoolNameForYourData/sourcedata --no-sessions
```

Your output data structure should look something like this:

```
CoolNameForYourData/
├── CHANGES
├── README
├── code
│   └── Protocol_Translator.json
├── dataset_description.json
├── derivatives
├── participants.json
├── participants.tsv
├── sourcedata
│   ├── Ab0001
│   │   ├── first
│   ...
│
│   └── Ab0002
│       ...
│
├── sub-Ab0001
│   ├── ses-first
│   │   ├── anat
│   │   ├── dwi
│   │   ├── fmap
│   │   └── func
│   └── ses-second
│       ...
│
├── sub-Ab0002
│   ├── ses-first
│   ...
│
└── work
    ├── sub-Ab0001
    │   ├── ses-first
    │   └── ses-second
    └── sub-Ab0002
        ├── ses-first
        └── ses-second
```

diff --git a/docs/fmri-preprocessing/hcp.md b/docs/fmri-preprocessing/hcp.md
index 53dffc2..e568659 100644
--- a/docs/fmri-preprocessing/hcp.md
+++ b/docs/fmri-preprocessing/hcp.md
@@ -12,53 +12,193 @@ has_children: false

## Table of contents
{: .no_toc .text-delta }

1. TOC
{:toc}

## Overview

The following guide contains instructions for executing a standardized minimal preprocessing pipeline for Human Connectome Project (HCP) data. Using a modified FreeSurfer pipeline in combination with FSL preprocessing and surface projection, this pipeline implements surface-based processing for high-resolution fMRI and readout distortion correction for high-resolution anatomical images. It also allows for [multimodal surface mapping](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/MSM) to align cortical surfaces in a way that improves SNR. 
**Included preprocessing steps**
(These are the minimal steps required before performing any statistics or group-level comparisons.)

Anatomical:
- Distortion correction
- Surface construction
- Alignment to standard space

Functional:
- Distortion correction
- Motion correction
- Alignment to standard space
- Surface projection

[**Major stages of the preprocessing pipeline**](https://www.ncbi.nlm.nih.gov/pubmed/23668970)

- PreFreeSurfer
  - Prepares anatomical data for FreeSurfer
  - Corrects for gradient distortions
  - Aligns T1w and T2w
  - Corrects for bias field (magnetic field inhomogeneities)
  - Downsamples to 1mm
  - Main output is a corrected T1 anatomical file
- FreeSurfer
  - Runs a modified FreeSurfer pipeline
- PostFreeSurfer
  - Creates CIFTI and GIFTI formats
  - Creates a midthickness surface between white and pial
  - Calculates myelin maps
  - Registration to standard space (via MSMSulc or MSMAll)
- fMRIVolume
  - Distortion correction
  - Motion correction
  - Registration to T1 and MNI space
- fMRISurface
  - Maps volume fMRI to the surface (surface data is not in MNI space!) 
  - Creates CIFTI files with a 32k mesh
- There are also stages for ICA cleanup of fMRI and diffusion data

## Required Inputs

- An HCP-compatible dataset
  - [Collect](https://github.com/Washington-University/HCPpipelines/wiki/FAQ#3-what-mri-data-do-i-need-to-use-the-hcp-pipelines) high-resolution fMRI (2-2.5mm), spin echo field maps, and submillimeter T1w and T2w anatomical images
  - [Freely available scan data](https://db.humanconnectome.org)
- Data organized into the [BIDS](http://bids.neuroimaging.io/) structure
  - Convert data using [bidskit](http://birc-int.psy.uconn.edu/wiki/index.php/Using_bidskit)
  - If you want to use distortion correction in the HCP pipeline, one of the field maps must specify your anatomical files (T1w and T2w) as targets in the IntendedFor section. If you are using bidskit, you might include the following code in your `Protocol_Translator.json` file:

    ```
    ...
    "Fieldmap_rsBOLD":[
        "fmap",
        "acq-rest_run-01_epi",
        ["anat/run-01_T1w", "anat/run-01_T2w"]
    ],
    ...
    ```
    - `Fieldmap_rsBOLD` is the name of your field map protocol
    - `acq-rest_run-01_epi` is the name you wish to give the field map
    - `"anat/run-01_T1w"` and `"anat/run-01_T2w"` refer to the BIDS names for your T1w and T2w scans
- T1w and T2w anatomicals
- Fieldmaps (`/fmap`) for anatomical, fMRI, and diffusion scans
  - Fieldmaps need to be specified correctly during BIDS conversion!
- Gradient coefficients (optional): `coeff.grad`

## Running the container

### Singularity

- This container runs on the high performance computing (HPC) cluster
- The BIDS-compatible HCP pipeline container is located at `/scratch/birc_ro/bids_hcp_birc.sif`
- The HCP pipeline script is called `run.py` and is located at the root directory `/` in the container.
  - Positional arguments for `run.py`:
    - `bids_dir`: the directory with the input dataset, formatted according to the BIDS standard. 
    - `output_dir`: the directory where the output files should be stored. (If you are running group-level analysis, this folder should be prepopulated with the results of the participant-level analysis.)
    - `{participant}`: the level of the analysis that will be performed. Multiple participant-level analyses can be run in parallel using the same output_dir.
  - Optional arguments for `run.py`:
    - `-h, --help`: show this help message and exit
    - `--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]`: the label of the participant that should be analyzed. The label corresponds to `sub-<participant_label>` from the BIDS spec (**so it does not include "sub-"**). If this parameter is not provided, all subjects will be analyzed. Multiple participants can be specified with a space-separated list.
    - `--n_cpus N_CPUS`: number of CPUs/cores available to use.
    - `--stages {PreFreeSurfer,FreeSurfer,PostFreeSurfer,fMRIVolume,fMRISurface} [{PreFreeSurfer,FreeSurfer,PostFreeSurfer,fMRIVolume,fMRISurface} ...]`: which stages to run, as a space-separated list. By default, all stages will be run.
    - `--anat_unwarpdir`: direction in which to unwarp 3D anatomicals. Required if distortion correction and PreFreeSurfer are specified. One of x, y, z, -x, -y, -z. **(For most cases at the BIRC, `--anat_unwarpdir z` would be the way to go.)**
    - `--license_key LICENSE_KEY`: FreeSurfer license key - letters and numbers after "*" in the email you received after registration. To register (for free) visit this [link](https://surfer.nmr.mgh.harvard.edu/registration.html).
    - `-v, --version`: show the program's version number and exit

See the [Containerized HCP page](http://birc-int.psy.uconn.edu/wiki/index.php/Containerized_HCP) on the BIRC wiki for more information.

## General instructions

1. 
Create a BIDS directory for your data on the HPC
   - e.g. `/scratch/abc12345/bids` (replace abc12345 with your NetID)
2. Create a directory to save the output from the HCP pipeline (e.g. `mkdir hcp_output`; subject directories will be created within it)
   - e.g. `/scratch/abc12345/hcp_output`
3. Create a SLURM script using the template below. (This example script will run on a single subject named 26494191. It will process all of the NIfTI files under `/scratch/abc12345/bids/sub-26494191` and place the output under `/scratch/abc12345/hcp_output/sub-26494191`.)

Example code:

    #!/bin/bash
    #SBATCH --mail-type=ALL                  # Mail events (NONE, BEGIN, END, FAIL, ALL)
    #SBATCH --mail-user=first.last@uconn.edu # Your email address
    #SBATCH --nodes=1                        # OpenMP requires a single node
    #SBATCH --ntasks=1                       # Run a single serial task
    #SBATCH --cpus-per-task=8                # Number of cores to use
    #SBATCH --mem=32gb                       # Memory limit
    #SBATCH --time=48:00:00                  # Time limit hh:mm:ss
    #SBATCH -e error_%A_%a.log               # Standard error
    #SBATCH -o output_%A_%a.log              # Standard output
    #SBATCH --job-name=HCP                   # Descriptive job name
    #SBATCH --partition=serial               # Use a serial partition 24 cores/7days

    export OMP_NUM_THREADS=8                       #<= cpus-per-task
    export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=8  #<= cpus-per-task
    ##### END OF JOB DEFINITION #####

    module load singularity
    singularity run /scratch/birc_ro/bids_hcp_birc.sif \
        /run.py /scratch/abc12345/bids /scratch/abc12345/hcp_output participant \
        --participant_label 26494191 \
        --license_key "41240" --gdcoeffs /scratch/psyc5171/hcp_example/to_process/coeff.grad --anat_unwarpdir z

4. Save the code above as `/scratch/abc12345/sbatch_hcp.sh`
   - Change `first.last@uconn.edu` to your own email address.
5. SSH to the cluster: `ssh abc12345@login.storrs.hpc.uconn.edu`
6. Go to where your script is located: `cd /scratch/abc12345`
7. Run `sbatch sbatch_hcp.sh` to submit your job to the cluster. 
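Before submitting, it can help to confirm which `--participant_label` values your BIDS directory actually contains (remember the label must omit the `sub-` prefix). The helper below is a hypothetical sketch, not part of the pipeline; the `/scratch/abc12345/bids` path is the example from step 1:

```shell
# Hypothetical helper: print the --participant_label value for every
# sub-* folder in a BIDS directory (labels must NOT include "sub-").
participant_labels() {
  local bids_dir="$1"
  for d in "$bids_dir"/sub-*/; do
    [ -d "$d" ] || continue                      # no sub-* folders: print nothing
    printf '%s\n' "$(basename "$d" | sed 's/^sub-//')"
  done
}

# Example (path from step 1 above):
# participant_labels /scratch/abc12345/bids
```

Any label this prints can be passed directly to `--participant_label` in the script above.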
You will get an email when the job finishes or if anything goes wrong.

There is no need to modify any of the HCP scripts or pass additional parameters when using BIDS-compliant data. Information about effective echo spacing, phase encode direction, resolution, etc. is taken from the BIDS files.

## Troubleshooting

- All runs must be acquired with the same number and order of slices so that the fMRI data matches the fieldmap.

- Example error from an analysis with an incorrect scan protocol (the slice timing for the third run is different from the previous runs):

      Traceback (most recent call last):
        File "/run.py", line 421, in <module>
          stage_func()
        File "/run.py", line 140, in run_generic_fMRI_volume_processsing
          run(cmd, cwd=args["path"], env={"OMP_NUM_THREADS": str(args["n_cpus"])})
        File "/run.py", line 30, in run
          raise Exception("Non zero return code: %d"%process.returncode)
      Exception: Non zero return code: 1

  - To locate the differences, use `diff` to compare the .json files for each condition
    - e.g. `diff sub-26494191_task-oploc_run-01_bold.json sub-26494191_task-adapt_run-01_bold.json`
  - Suggested solutions:
    - Prevent this error by making sure the scan protocol is correct before running the pipeline.
    - Pad or resample data so that the dimensions match.
    - Cut slices from the overall dataset using `fslroi` (the direction is design-specific)
      - e.g. `fslroi big.nii.gz resized.nii.gz 0 -1 0 -1 0 59`

## Outputs

### Terms for spaces where data is located

- Native: the subject's anatomy (not to be confused with fsaverage spaces)
- MNI: volumetric standard space (MNI152)
  - Surface data is not in MNI space!
- fsaverage (`reg`): FreeSurfer average surface space
- `fs_LR`: standard HCP space, with left-right correspondence (use these files in analysis)
- `MNINonLinear` contains volumetric data in MNI space and data in various non-MNI surface spaces
- `reg.native` contains files not in native space
  - e.g. 
`L.sphere.native.surf.gii` is in native space, but `L.sphere.reg.native.surf.gii` is in fsaverage space
- `fsaverage_LR` contains files **not** in fsaverage space

### Surface Mesh Resolution

- 164k: high resolution
  - use for anatomical analysis
- 32k: low resolution
  - use for overlaying fMRI results

### Registration

- FreeSurfer (native): FreeSurfer registration
- MSMSulc: MSM curvature-based registration
- MSMAll: MSM registration by curvature, myelin, and rsfMRI
  - Use files with `MSMAll` or `MSMSulc` for the best registration

### Anatomical Filename Structure

- `${subject}.${hemisphere}.${surface}_${registration}.${mesh}k_fs_LR.surf.gii`
  - e.g. `130619.R.midthickness_MSMAll.164k_fs_LR.surf.gii`

### CIFTI file format ("grayordinates")

- Contains multiple structures
- Can mix volumetric and surface data
- Commonly left and right surfaces, subcortex (voxels), cerebellum (voxels)
- Spatial locations can be dense (all voxels/vertices) or parcels (anatomically/functionally defined regions)
- Values can be scalar, series, label, or connectivity.
  - `.dtseries.nii` is a dense timeseries (e.g. BOLD data)
  - `.pscalar.nii` is a parcellation with scalar values (e.g. 
a statistic)

### GIFTI file format

- Contains only surface data (vs. multiple surfaces and/or voxels in CIFTI)
  - `.surf.gii`: surface geometry of vertices and triangles
  - `.label.gii`: functional/anatomical labels
  - `.shape.gii` and `.func.gii`: metric files of scalar values (triangle area, thickness, curvature, statistics from one hemisphere)

### Partially documented outputs from an example HCP subject

- Located at `/scratch/psyc5171/hcp_example`
  - `$SubjectID/`
    - `T1w`
      - `fsaverage_LR32k`: fs_LR space 32k mesh anatomy
    - `MNINonLinear`
      - `fsaverage_LR32k`: fs_LR space 32k mesh metrics
      - `Native`: fsaverage space 164k meshes (high resolution)
      - `Results`: fs_LR space fMRI on 32k mesh

diff --git a/docs/fmri-preprocessing/sbatch_hcp.sh b/docs/fmri-preprocessing/sbatch_hcp.sh
new file mode 100755
index 0000000..244c831
--- /dev/null
+++ b/docs/fmri-preprocessing/sbatch_hcp.sh
@@ -0,0 +1,22 @@
#!/bin/bash
#SBATCH --mail-type=ALL                      # Mail events (NONE, BEGIN, END, FAIL, ALL)
#SBATCH --mail-user=emily.yearling@uconn.edu # Your email address
#SBATCH --nodes=1                            # OpenMP requires a single node
#SBATCH --ntasks=1                           # Run a single serial task
#SBATCH --cpus-per-task=8                    # Number of cores to use
#SBATCH --mem=32gb                           # Memory limit
#SBATCH --time=48:00:00                      # Time limit hh:mm:ss
#SBATCH -e error_%A_%a.log                   # Standard error
#SBATCH -o output_%A_%a.log                  # Standard output
#SBATCH --job-name=HCP                       # Descriptive job name
#SBATCH --partition=serial                   # Use a serial partition 24 cores/7days

export OMP_NUM_THREADS=8                      #<= cpus-per-task
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=8 #<= cpus-per-task
##### END OF JOB DEFINITION #####

module load singularity
singularity run /scratch/birc_ro/bids_hcp_birc.sif \
/run.py /scratch/psyc5171/hcp_example/to_process/bids /scratch/psyc5171/eay15101/hcp_output participant \
--participant_label 26494191 \
--license_key "41240" --gdcoeffs /scratch/psyc5171/hcp_example/to_process/coeff.grad --anat_unwarpdir z
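As a worked example of the anatomical filename structure described in the Outputs section (`${subject}.${hemisphere}.${surface}_${registration}.${mesh}k_fs_LR.surf.gii`), this small shell sketch splits a surface filename into its fields. The function name is hypothetical and not part of the HCP tools:

```shell
# Sketch: split an HCP-style anatomical surface filename into its fields.
parse_surf_name() {
  local f="${1%.surf.gii}"      # strip the .surf.gii extension
  local subject="${f%%.*}"      # text before the first dot
  local rest="${f#*.}"
  local hemi="${rest%%.*}"      # L or R
  rest="${rest#*.}"             # e.g. midthickness_MSMAll.164k_fs_LR
  local surf_reg="${rest%%.*}"  # e.g. midthickness_MSMAll
  local mesh="${rest#*.}"       # e.g. 164k_fs_LR
  # surface = everything before the last underscore, registration = after it
  echo "subject=$subject hemi=$hemi surface=${surf_reg%_*} registration=${surf_reg##*_} mesh=${mesh%%_*}"
}

parse_surf_name 130619.R.midthickness_MSMAll.164k_fs_LR.surf.gii
# → subject=130619 hemi=R surface=midthickness registration=MSMAll mesh=164k
```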