This lesson is in the early stages of development (Alpha version)

Functional Neuroimaging Analysis in Python

Exploring Preprocessed fMRI Data from fMRIPREP

Overview

Teaching: 20 min
Exercises: 5 min
Questions
  • How does fMRIPrep store preprocessed neuroimaging data?

  • How do I access preprocessed neuroimaging data?

Objectives
  • Learn about fMRIPrep derivatives

  • Understand how preprocessed data is stored and how you can access key files for analysis

Exploring Preprocessed fMRI Data from fMRIPREP

BIDS applications such as fMRIPREP output data into a full data structure that closely follows BIDS organization principles. In fact, there is a specification for derivatives of (outputs derived from) BIDS datasets; although this is currently a work in progress, details can be found in BIDS Derivatives.

In this tutorial, we’ll explore the outputs generated by fMRIPREP and get a handle on how the data from this preprocessing pipeline is organized.

Luckily, the semi-standardized output of fMRIPrep is organized in such a way that the data is easily accessible using pyBIDS! We’ll first show what the full data structure looks like, then demonstrate how you can pull specific types of outputs using pyBIDS.

The fMRIPrep Derivative Data Structure

First let’s take a quick look at the fMRIPrep data structure:

tree -L 1 '../data/ds000030/derivatives/fmriprep'

../data/ds000030/derivatives/fmriprep/
├── sub-10171
├── sub-10292
├── sub-10365
├── sub-10438
├── sub-10565
├── sub-10788
├── sub-11106
├── sub-11108
├── sub-11122
├── sub-11131
├── sub-50010
├── sub-50035
├── sub-50047
├── sub-50048
├── sub-50052
├── sub-50067
├── sub-50075
├── sub-50077
├── sub-50081
└── sub-50083

First note that inside the fMRIPrep folder, we have a folder per-subject. Let’s take a quick look at a single subject folder:

tree '../data/ds000030/derivatives/fmriprep/sub-10788'
../data/ds000030/derivatives/fmriprep/sub-10788/
├── anat
│   ├── sub-10788_T1w_brainmask.nii.gz
│   ├── sub-10788_T1w_preproc.nii.gz
│   ├── sub-10788_T1w_space-MNI152NLin2009cAsym_brainmask.nii.gz
│   └── sub-10788_T1w_space-MNI152NLin2009cAsym_preproc.nii.gz
└── func
    ├── sub-10788_task-rest_bold_confounds.tsv
    ├── sub-10788_task-rest_bold_space-fsaverage5.L.func.gii
    ├── sub-10788_task-rest_bold_space-fsaverage5.R.func.gii
    ├── sub-10788_task-rest_bold_space-MNI152NLin2009cAsym_brainmask.nii.gz
    ├── sub-10788_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz
    ├── sub-10788_task-rest_bold_space-T1w_brainmask.nii.gz
    └── sub-10788_task-rest_bold_space-T1w_preproc.nii.gz

2 directories, 11 files

As you can see above, each subject folder is organized into an anat and a func sub-folder.

Specifically:

  • anat contains the preprocessed T1w image and its brain mask, in both native T1w space and MNI152NLin2009cAsym standard space
  • func contains the preprocessed BOLD data (in MNI152NLin2009cAsym, T1w and fsaverage5 surface space), the matching brain masks, and a confounds spreadsheet

This data is single-session, so there is no session folder here - with multiple sessions you will see an anat folder alongside ses-[session] folders, where each session folder contains a func folder.
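For instance, a hypothetical multi-session subject might be laid out like this (illustrative only - not part of this dataset):

sub-XXXX
├── anat
├── ses-01
│   └── func
└── ses-02
    └── func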

Hopefully you’re now convinced that the outputs of fMRIPREP roughly follow BIDS organization principles and are, in fact, quite simple. The filenames themselves give you a full description of what each file is (check the slides to get an idea of what each file means)!

Now let’s see how we can pull data in using pyBIDS!

Let’s import pyBIDS through the bids module first:

import bids

We can make a bids.BIDSLayout object as usual by just feeding in the fmriprep directory! However, one caveat is that since the fmriprep outputs are not really BIDS but BIDS-like, we have to turn off bids validation:

layout = bids.BIDSLayout('../data/ds000030/derivatives/fmriprep/',validate=False)

Now that we have a layout object, we can pretend like we’re working with a BIDS dataset! Let’s try some common commands that you would’ve used with a BIDS dataset:

First, we’ll demonstrate that we can grab a list of pre-processed subjects much like in the way we would grab subjects from a raw BIDS dataset:

layout.get_subjects()
['10171',
 '10292',
 '10365',
 '10438',
 '10565',
 '10788',
 '11106',
 '11108',
 '11122',
 '11131',
 '50010',
 '50035',
 '50047',
 '50048',
 '50052',
 '50067',
 '50075',
 '50077',
 '50081',
 '50083']

We can also do the same for tasks

layout.get_tasks()
['rest']

Now let’s try fetching specific files. Similar to how you would fetch BIDS data using pyBIDS, the exact same syntax will work for fMRIPREP derivatives. Let’s try pulling just the preprocessed anatomical data.

Recall that the anatomical folder is named as follows:

tree '../data/ds000030/derivatives/fmriprep/sub-10788/anat'
../data/ds000030/derivatives/fmriprep/sub-10788/anat
├── sub-10788_T1w_brainmask.nii.gz
├── sub-10788_T1w_preproc.nii.gz
├── sub-10788_T1w_space-MNI152NLin2009cAsym_brainmask.nii.gz
└── sub-10788_T1w_space-MNI152NLin2009cAsym_preproc.nii.gz

0 directories, 4 files

The files that we’re interested in are of the form sub-[subject]_T1w_preproc.nii.gz. Now we can construct a pyBIDS call to pull these types of files specifically:

preproc_T1 = layout.get(datatype='anat',suffix='preproc',extension='.nii.gz')
preproc_T1
[<BIDSImageFile filename='/mnt/tigrlab/projects/jjeyachandra/scwg2018_python_neuroimaging/data/ds000030/derivatives/fmriprep/sub-10171/anat/sub-10171_T1w_preproc.nii.gz'>,
 <BIDSImageFile filename='/mnt/tigrlab/projects/jjeyachandra/scwg2018_python_neuroimaging/data/ds000030/derivatives/fmriprep/sub-10171/anat/sub-10171_T1w_space-MNI152NLin2009cAsym_preproc.nii.gz'>,
 <BIDSImageFile filename='/mnt/tigrlab/projects/jjeyachandra/scwg2018_python_neuroimaging/data/ds000030/derivatives/fmriprep/sub-10292/anat/sub-10292_T1w_preproc.nii.gz'>,
 ...

Note that we also pulled in MNI152NLin2009cAsym_preproc.nii.gz data - data that has been transformed into MNI152NLin2009cAsym template space. We can pull this data out by further specifying our layout.get call using the space argument:

mni_preproc_T1 = layout.get(datatype='anat',suffix='preproc',extension='.nii.gz',space='MNI152NLin2009cAsym')
mni_preproc_T1
[<BIDSImageFile filename='/mnt/tigrlab/projects/jjeyachandra/scwg2018_python_neuroimaging/data/ds000030/derivatives/fmriprep/sub-10171/anat/sub-10171_T1w_space-MNI152NLin2009cAsym_preproc.nii.gz'>,
 <BIDSImageFile filename='/mnt/tigrlab/projects/jjeyachandra/scwg2018_python_neuroimaging/data/ds000030/derivatives/fmriprep/sub-10292/anat/sub-10292_T1w_space-MNI152NLin2009cAsym_preproc.nii.gz'>,
 ...

What if we wanted to pull out the data in T1 “native space” (strictly speaking it is still a template space, since it is built from merged T1s)? Unfortunately, for this specific version of fMRIPREP this isn’t directly possible using layout.get. Instead we’ll use a bit of Python to pull out the data that we want:

In newer versions of fMRIPREP, space is included in the native T1w filename as space-T1w - in that case you can pull the data by using layout.get(..., space='T1w')

native_preproc_T1 = [f for f in preproc_T1 if f not in mni_preproc_T1]
native_preproc_T1

Similarly, fMRI data can be pulled by specifying datatype='func' and using the suffix argument as appropriate - try this in the exercise below:

Exercise 1

  1. Get the list of all preprocessed functional data
  2. Get the list of functional data in MNI152NLin2009cAsym space
  3. Get the list of functional data in T1w space (native)

    Solution

    All the functional data

    func_data = layout.get(datatype='func', suffix='preproc')
    

    MNI152NLin2009cAsym Functional Data

    mni_func_data = layout.get(datatype='func', suffix='preproc', space='MNI152NLin2009cAsym')
    mni_func_data
    

    Native/T1w space functional data

    native_func_data = [b for b in func_data if b not in mni_func_data]
    native_func_data
    

Now that we have a handle on how fMRIPREP preprocessed data is organized and how to pull it, let’s start working with the actual data itself!

Key Points

  • fMRIPrep stores preprocessed data in a ‘BIDS-like’ fashion

  • You can pull files using pyBIDS much like how you can navigate raw BIDS data


Introduction to Image Manipulation using Nilearn

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How can we perform arithmetic operations on MR images?

Objectives
  • Use Nilearn to perform masking and mathematical operations

  • Learn how to resample across modalities for image viewing and manipulation

Introduction

Nilearn is a functional neuroimaging analysis and visualization library that wraps a whole bunch of high-level operations (machine learning, statistical analysis, data cleaning, etc…) in easy-to-use commands. The neat thing about Nilearn is that it uses Nibabel under the hood: everything you do in Nilearn can be represented as a set of operations on Nibabel objects. This has the important consequence of allowing you to perform a high-level operation (like resampling) in Nilearn, drop into Nibabel for more custom data processing, then jump back up to Nilearn for interactive image viewing. Pretty cool!

Setting up

The first thing we’ll do is import some Python modules that will allow us to use Nilearn:

import os
import matplotlib.pyplot as plt
from nilearn import image as img
from nilearn import plotting as plot
from bids import BIDSLayout

#for inline visualization in jupyter notebook
%matplotlib inline 

Notice that we imported two things:

  1. image as img - allows us to load NIFTI images using nibabel under the hood
  2. plotting as plot - allows us to use Nilearn’s plotting library for easy visualization

First let’s grab some data from where we downloaded our FMRIPREP outputs:

#Base directory for fmriprep output
fmriprep_dir = '../data/ds000030/derivatives/fmriprep/'
layout= BIDSLayout(fmriprep_dir, validate=False)
T1w_files = layout.get(subject='10788', datatype='anat', suffix='preproc')
brainmask_files = layout.get(subject='10788', datatype='anat', suffix='brainmask')

Here we used pyBIDS (as introduced in earlier sections) to pull data for a single participant. Specifically, we pulled all preprocessed (MNI152NLin2009cAsym and T1w) anatomical files as well as their respective masks. Let’s quickly view these files for review:

#Display preprocessed files inside of anatomy folder
for f in T1w_files:
    print(f.path)
/mnt/tigrlab/projects/jjeyachandra/scwg2018_python_neuroimaging/data/ds000030/derivatives/fmriprep/sub-10788/anat/sub-10788_T1w_preproc.nii.gz
/mnt/tigrlab/projects/jjeyachandra/scwg2018_python_neuroimaging/data/ds000030/derivatives/fmriprep/sub-10788/anat/sub-10788_T1w_space-MNI152NLin2009cAsym_preproc.nii.gz

Now that we have our files set up, let’s start performing some basic image operations.

Basic Image Operations

In this section we’re going to deal with the following files:

  1. sub-10788_T1w_preproc.nii.gz - the T1 image in native space
  2. sub-10788_T1w_brainmask.nii.gz - a mask with 1’s representing the brain and 0’s elsewhere

t1 = T1w_files[0].path
bm = brainmask_files[0].path

t1_img = img.load_img(t1)
bm_img = img.load_img(bm)

Using the plotting module (which we’ve aliased as plot), we can view our MR image:

plot.plot_anat(t1_img)


Try viewing the mask as well!

plot.plot_anat(bm_img)

Let’s start performing some image operations. The simplest type of operation is element-wise: we apply some mathematical operation to each voxel of the MR image. Since voxels are represented in a 3D array, this is equivalent to performing an operation on each element (i,j,k) of a 3D array. Let’s try inverting the image, that is, flipping the colour scale such that all blacks appear white and vice-versa. To do this, we’ll use the method:

img.math_img(formula, **imgs)

Where:

  • formula is a string describing the mathematical operation to perform (e.g. '-a')
  • imgs is a set of keyword arguments mapping each variable in the formula to an image

In order to invert the image, we can simply flip the sign, which will set the most positive elements (white) to the most negative (black), and the least positive elements (black) to the least negative (white). This effectively flips the colour-scale:

invert_img = img.math_img('-a', a=t1_img)
plot.plot_anat(invert_img)


Applying a Mask

Let’s extend this idea of applying operations to each element of an image to multiple images. Instead of specifying just one image like the following:

img.math_img('a+1',a=img_a)

We can specify multiple images by tacking on additional variables:

img.math_img('a+b', a=img_a, b=img_b)

The key requirement here is that when dealing with multiple images, the sizes of the images must be the same. This is because we’re dealing with element-wise operations: some voxel (i,j,k) in img_a is paired with voxel (i,j,k) in img_b when performing operations. So every voxel in img_a must have a pair in img_b; the sizes must be the same.

We can take advantage of this property when masking our data using multiplication. Masking works by multiplying a raw image (our T1) with some mask image (our bm). Wherever voxel (i,j,k) has a value of 0 in the mask, multiplying with voxel (i,j,k) in the raw image results in a product of 0. Conversely, wherever voxel (i,j,k) in the mask has a value of 1, multiplying with voxel (i,j,k) in the raw image leaves the value unchanged. Let’s try this out in practice and see what the result is:

masked_t1 = img.math_img('a*b', a=t1_img, b=bm_img)
plot.plot_anat(masked_t1)


As you can see, areas where the mask image had a value of 1 were retained; everything else was set to 0.

Exercise #1

Try applying the mask such that the brain is removed, but the rest of the head is intact!

Solution

inverted_mask_t1 = img.math_img('a*(1-b)', a=t1_img, b=bm_img)
plot.plot_anat(inverted_mask_t1)


Slicing

Recall that our data matrix is organized in the following manner:

(figure: the 3D image data matrix, with voxels indexed by (x, y, z))

Slicing does exactly what it seems to imply. Given our 3D volume, we can pull out 2D subsets (called “slices”). Here’s an example of slicing moving from left to right via an animated GIF:

(animated GIF: sagittal slices sweeping from left to right)

What you see here is a series of 2D images that start from the left, and move toward the right. Each frame of this GIF is a slice - a 2D subset of a 3D volume. Slicing can be useful for cases in which you’d want to loop through each MR slice and perform a computation; importantly in functional imaging data slicing is useful for pulling out timepoints as we’ll see later!

Sourced from: https://en.wikipedia.org/wiki/Neuroimaging#/media/File:Parasagittal_MRI_of_human_head_in_patient_with_benign_familial_macrocephaly_prior_to_brain_injury_(ANIMATED).gif
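As a quick illustration of the “loop through each slice and perform a computation” idea, here is a minimal sketch (not from the original lesson) that computes the mean intensity of every sagittal slice of the T1 image, using the underlying data array introduced below:

import numpy as np

#Grab the underlying data array and loop over the first (x) axis
t1_data = t1_img.get_data()
slice_means = [np.mean(t1_data[i, :, :]) for i in range(t1_data.shape[0])]

#Slices near the edge of the volume are mostly background, so their means are near zero
print(slice_means[:5])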

Slicing is done easily on an image file using the attribute .slicer of a Nilearn image object. For example we can grab the $10^{\text{th}}$ slice along the x axis as follows:

x_slice = t1_img.slicer[10:11,:,:]

The range $10:11$ is intentional and is required by .slicer - slicing with a range (rather than a single integer) keeps the result a valid 3D image instead of dropping a dimension. Alternatively we can slice along the x-axis using the data matrix itself:

t1_data = t1_img.get_data()
x_slice = t1_data[10,:,:]

This will yield the same result as above. Notice that when using the t1_data array we can just specify which slice to grab instead of using :. We can use slicing in order to modify visualizations. For example, when viewing the T1 image, we may want to specify at which slice we’d like to view the image. This can be done by specifying which coordinates to cut the image at:

plot.plot_anat(t1_img,cut_coords=(50,30,70))

The cut_coords option specifies 3 numbers: the (x, y, z) coordinates at which to cut each view.

Remember plot.plot_anat yields 3 images, so cut_coords lets you choose where to take a cross-section of the brain for each perspective (axial, sagittal, coronal).

This covers the basics of image manipulation using T1 images. To review, in this section we covered:

  • loading and viewing images with img.load_img and plot.plot_anat
  • element-wise operations and masking using img.math_img
  • slicing, via .slicer on the image or by indexing into the data array

In the next section we will cover how to integrate additional modalities (functional data) to what we’ve done so far using Nilearn. Then we can start using what we’ve learned in order to perform analysis and visualization!

Key Points

  • MR images are essentially 3D arrays where each voxel is represented by an (i,j,k) index

  • Nilearn is Nibabel under the hood, knowing how Nibabel works is key to understanding Nilearn


Integrating Functional Data

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How is fMRI data represented?

  • How can we access fMRI data along spatial and temporal dimensions?

  • How can we integrate fMRI and structural MRI together?

Objectives
  • Extend the idea of slicing to 4 dimensions

  • Apply resampling to T1 images to combine them with fMRI data

Integrating Functional Data

So far most of our work has been with anatomical images - the reason being that they provide a nice visual way of exploring the effects of data manipulation, and visualization is easy. In practice, you will most likely not analyze anatomical data using nilearn since there are other tools better suited for that kind of analysis (freesurfer, connectome-workbench, mindboggle, etc…).

In this notebook we’ll finally start working with functional MR data - the modality of interest in this workshop. First we’ll cover some basics about how the data is organized (similar to T1s but slightly more complex), and then how we can integrate our anatomical and functional data together using tools provided by nilearn.

Functional data consists of full 3D brain volumes sampled at multiple time points. You therefore have a sequence of 3D brain volumes; stepping through the sequence is stepping through time, so time is our 4th dimension! Here’s a visualization to make this concept more clear:

(figure: fMRI data as a sequence of 3D brain volumes, one volume per timepoint)

Each index along the 4th dimension (called TR for “Repetition Time”, or Sample) is a full 3D scan of the brain. Pulling out volumes from 4-dimensional images is similar to the 3-dimensional case, except you’re now dealing with:

img.slicer[x,y,z,time]!

Let’s try a couple of examples to familiarize ourselves with dealing with 4D images. But first, let’s pull some functional data using PyBIDS!

import os
import matplotlib.pyplot as plt #to enable plotting within notebook
from nilearn import image as img
from nilearn import plotting as plot
from bids.layout import BIDSLayout
import numpy as np
%matplotlib inline

These are the usual imports. Let’s now pull some structural and functional data using pyBIDS:

fmriprep_dir = '../data/ds000030/derivatives/fmriprep/'
layout=BIDSLayout(fmriprep_dir, validate=False)
T1w_files = layout.get(subject='10788', datatype='anat', suffix='preproc')
brainmask_files = layout.get(subject='10788', datatype='anat', suffix='brainmask')
func_files = layout.get(subject='10788', datatype='func', suffix='preproc')
func_mask_files = layout.get(subject='10788', datatype='func', suffix='brainmask')

We’ll be using the functional file in MNI space rather than T1w space. Recall that MNI space data has been warped into standard space. These are the files you would typically use for a group-level functional imaging analysis!

func_mni = func_files[0].path
func_mni_img = img.load_img(func_mni)

fMRI as a time-series signal

First note that fMRI data contains both spatial dimensions (x,y,z) and a temporal dimension (t). This would mean that we require 4 dimensions in order to represent our data. Let’s take a look at the shape of our data matrix to confirm this intuition:

func_mni_img.shape
(60, 86, 65, 152)

Notice that the functional MR scan contains 4 dimensions, in the form $(x,y,z,t)$, where $t$ is time. We can use slicer as usual, except now we use 4 dimensions instead of 3.

For example:

func.slicer[x,y,z]

vs.

func.slicer[x,y,z,t]

Exercise

Try pulling out the 5th TR and visualizing it using plot.plot_epi. plot_epi is exactly the same as plot_anat except it displays using colors that make more sense for functional images…

#Pull the 5th TR
func_vol5 = func_mni_img.slicer[:,:,:,4]
plot.plot_epi(func_vol5)


What fMRI actually represents

We’ve represented fMRI as a snapshot of MR signal over multiple timepoints. This is a useful way of understanding the organization of fMRI, however it isn’t typically how we think about the data when we analyze fMRI data. fMRI is typically thought of as time-series data. We can think of each voxel (x,y,z coordinate) as having a time-series of length T. The length T represents the number of volumes/timepoints in the data. Let’s pick an example voxel and examine its time-series using func_mni_img.slicer:

#Pick one voxel at coordinate (59,45,30)
single_vox = func_mni_img.slicer[59:60,45:46,30:31,:].get_data()
single_vox.shape
(1, 1, 1, 152)

As you can see we have 1 element in each of the (x,y,z) dimensions, representing a single voxel, and 152 elements in the fourth dimension. In total, this means we have a single voxel with 152 timepoints. Four-dimensional arrays are awkward to work with - since we have a single element across the first 3 dimensions, we can squish this down to a 1-dimensional array with 152 timepoints. We no longer need the first 3 spatial dimensions since we’re only looking at one voxel:

single_vox = single_vox.flatten()
single_vox.shape
(152,)

Now we have a single 1-D array with 152 elements, representing the voxel we pulled out earlier and its 152 timepoints. We can visualize the signal coming from this single voxel using a time-series plot!

First let’s import the standard python plotting library matplotlib:

import matplotlib.pyplot as plt

Let’s now plot this:

# Make an array counting from 0 --> 151, this will be our x-axis
x_axis = np.arange(0, single_vox.shape[0])

# Plot our x and y data, the 'k' just specifies the line color to be black
plt.plot( x_axis, single_vox, 'k')

# Label our axes
plt.xlabel('Timepoint')
plt.ylabel('Signal Value')

(figure: signal value over 152 timepoints for the selected voxel)

As you can see from the image above, fMRI data really is just a signal per voxel over time!

Resampling

Recall from our introductory exploration of neuroimaging data:

If we’d like to overlay our functional on top of our T1 (for visualization purposes, or analyses), then we need to match the size of the voxels!

Think of this like trying to overlay a 10x10 JPEG and a 20x20 JPEG on top of each other. To get perfect overlay we need to resize (or more accurately resample) our JPEGs to match!

Resampling is a method of interpolating in between data-points. When we stretch an image we need to figure out what goes in the spaces that are created via stretching - resampling does just that. In fact, resizing any type of image is actually just resampling to new dimensions.
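To build intuition, here is a toy 1D sketch (not from the lesson) of what resampling does - stretching 4 samples to 7 and filling the gaps by linear interpolation:

import numpy as np

x_old = np.linspace(0, 1, 4)            #4 sample positions
y_old = np.array([0.0, 1.0, 0.5, 2.0])  #4 sample values
x_new = np.linspace(0, 1, 7)            #7 positions after "stretching"
y_new = np.interp(x_new, x_old, y_old)  #interpolate the in-between values
print(y_new)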

Let’s resample some MRI data using nilearn:

#Files we'll be using (Notice that we're using _space-MNI..._ which means they are normalized brains)
T1_mni = T1w_files[1].path
T1_mni_img = img.load_img(T1_mni)

Let’s take a quick look at the sizes of both our functional and structural files:

print(T1_mni_img.shape)
print(func_mni_img.shape)
(193, 229, 193)
(60, 86, 65, 152)

Taking a look at the spatial dimensions (the first three), we can see that the number of voxels in the T1 image does not match that of the fMRI image. This is because the fMRI data (which has fewer voxels) is a lower resolution image. We either need to upsample our fMRI image to match the T1 image, or downsample our T1 image to match the fMRI image. Typically, since the fMRI data is the one we ultimately use for analysis, we leave it alone and downsample our T1 image: resampling requires interpolating values, which may contaminate the data with artifacts. We don’t mind having artifacts in our T1 data (used for visualization purposes) since the fMRI data is the one actually being analyzed.

Resampling in nilearn is as easy as telling it which image you want to resample and what the target image is. The structure of the function is:

img.resample_to_img(source_img,target_img,interpolation)

A note on interpolation: nilearn supports 3 types of interpolation; the one you’ll use depends on the type of data you’re resampling!

  1. continuous - interpolates but maintains some edge features. Ideal for structural images where edges are well-defined. Uses $3^\text{rd}$-order spline interpolation.
  2. linear (default) - interpolates using a combination of neighbouring voxels - will blur. Uses trilinear interpolation.
  3. nearest - matches the value of the closest voxel (majority vote from neighbours). This is ideal for masks, which are binary, since it preserves the 0’s and 1’s and will not produce in-between values (ex: 0.342). Also ideal for numeric labels where values are 0,1,2,3… (parcellations). Uses nearest-neighbours interpolation with majority vote.

#Try playing around with methods of interpolation
#options: 'linear','continuous','nearest'
resamp_t1 = img.resample_to_img(source_img=T1_mni_img,target_img=func_mni_img,interpolation='continuous')
print(resamp_t1.shape)
print(func_mni_img.shape)
plot.plot_anat(resamp_t1)


Now that we’ve explored the idea of resampling let’s do a cumulative exercise bringing together ideas from resampling and basic image operations.

Exercise

Using the native T1 image and the T1w-space resting state functional image, do the following:

  1. Resample the native T1 image to resting state size
  2. Replace the brain in the T1 image with the first frame of the resting state brain

Files we’ll need

Structural Files

#T1 image
ex_t1 = img.load_img(T1w_files[0].path)

#mask file
ex_t1_bm = img.load_img(brainmask_files[0].path)

Functional Files

#This is the pre-processed resting state data that hasn't been standardized
ex_func = img.load_img(func_files[1].path)

#This is the associated mask for the resting state image.
ex_func_bm = img.load_img(func_mask_files[1].path)

The first step we need to do is to make sure the dimensions for our T1 image and resting state image match each other:

#Resample the T1 to the size of the functional image!
resamp_t1 = img.resample_to_img(source_img=??, target_img=??, interpolation='continuous')
plot.plot_anat(??)
print(resamp_t1.shape)


Next we want to make sure that the brain mask for the T1 is also the same dimensions as the functional image. This is exactly the same as above, except we use the brain mask as the source.

What kind of interpolation should we use for masks?

resamp_bm = img.??(??)

#Plot the image
??

print(resamp_bm.shape)

Once we’ve resampled both our T1 and our brain mask, we want to remove the brain from the T1 image so that we can replace it with the functional image instead. Remember, to do this we need to:

  1. Invert the T1 mask
  2. Apply the inverted mask to the brain

inverted_bm_t1 = img.math_img(??,a=resamp_bm)
plot.plot_anat(inverted_bm_t1)

Now apply the mask using basic image arithmetic:

resamp_t1_nobrain = img.??(??)
plot.plot_anat(resamp_t1_nobrain)

We now have a skull missing the structural T1 brain. The final step is to stick the brain from the functional image into the now-brainless head. First we need to remove the surrounding signal from the functional image.

Since a functional image is 4-Dimensional, we’ll need to pull the first volume to work with. This is because the structural image is 3-dimensional and operations will fail if we try to mix 3D and 4D data.

#Let's visualize the first volume of the functional image:
first_vol = ex_func.slicer[??,??,??,??]
plot.plot_epi(first_vol)

As shown in the figure above, the image has some “signal” outside of the brain. In order to place this within the now brainless head we made earlier, we need to mask out the functional MR data as well!

#Mask first_vol using ex_func_bm
masked_func = img.math_img('??', a=??, b=??)
plot.plot_epi(masked_func)

The final step is to stick this data into the head of the T1 data. Since the hole in the T1 data is represented as $0$’s, we can add the two images together to place the functional data into the void:

#Now overlay the functional image on top of the anatomical
combined_img = img.math_img(??)
plot.plot_anat(combined_img)

Solution

#Resample the T1 to the size of the functional image!
resamp_t1 = img.resample_to_img(source_img=ex_t1, target_img=ex_func, interpolation='continuous')
plot.plot_anat(resamp_t1)
print(resamp_t1.shape)

resamp_bm = img.resample_to_img(source_img=ex_t1_bm, target_img=ex_func,interpolation='nearest')
plot.plot_anat(resamp_bm)
print(resamp_bm.shape)

inverted_bm_t1 = img.math_img('1-a',a=resamp_bm)
plot.plot_anat(inverted_bm_t1)

resamp_t1_nobrain = img.math_img('a*b',a=resamp_t1,b=inverted_bm_t1)
plot.plot_anat(resamp_t1_nobrain)

#Let's visualize the first volume of the functional image:
first_vol = ex_func.slicer[:,:,:,0]
plot.plot_epi(first_vol)

masked_func = img.math_img('a*b', a=first_vol, b=ex_func_bm)
plot.plot_epi(masked_func)

#Now overlay the functional image on top of the anatomical
combined_img = img.math_img('a+b',a=resamp_t1_nobrain,b=masked_func)
plot.plot_anat(combined_img)

Key Points

  • fMRI data is represented by spatial (x,y,z) and temporal (t) dimensions, totalling 4 dimensions

  • fMRI data is at a lower resolution than structural data. To be able to combine data requires resampling your data


Preprocessing fMRI Data

Overview

Teaching: 30 min
Exercises: 0 min
Questions
  • What are the standard preprocessing steps?

  • What existing pipelines help with preprocessing?

Objectives
  • Understand the common preprocessing steps

Preprocessing Steps

T1w Preprocessing

Despiking

Motion Correction

Slice Timing Correction

Susceptibility Distortion Correction

Artifact and Structured Noise Removal

Volume Censoring (Scrubbing)

Bandpass Filtering

Spatial Smoothing

Normalization

Using fmriprep

fmriprep is a package developed by the Poldrack lab to do the minimal preprocessing of fMRI data required. It covers brain extraction, motion correction, field unwarping, and registration. It uses a combination of well-known software packages (e.g., FSL, SPM, ANTS, AFNI) and selects the ‘best’ implementation of each preprocessing step. Once installed, fmriprep can be invoked from the command line. We can even run it inside this notebook! The following command should work after you remove the ‘hashtag’ #. However, running fmriprep takes quite some time (we included the hashtag to prevent you from accidentally running it). You’ll most likely want to run it in parallel on a computing cluster.
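The original command is not shown here; as a rough sketch (paths and flags are illustrative and depend on your fmriprep version and setup), a call for one participant of this dataset might look like:

# !fmriprep ../data/ds000030 ../data/ds000030/derivatives participant --participant-label 10788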

Key Points

  • fmriprep takes care of several of the preprocessing steps


Cleaning Confounders in your Data with Nilearn

Overview

Teaching: 30 min
Exercises: 10 min
Questions
  • How can we clean the data so that it more closely reflects BOLD instead of artifacts?

Objectives
  • Understand the motivation behind confound/nuisance regression

  • Learn how to implement cleaning using nilearn and fmriprep

Introduction

Movement is the enemy of Neuroimagers

Movement is an important problem that we need to deal with in our data. In resting state fMRI, movement can induce false correlations between brain regions, leading to inaccurate conclusions about brain connectivity. However, resting state fMRI faces a unique problem when dealing with movement: we never observe the true underlying BOLD signal. This is unlike task-based fMRI, where the expectation that we’ll observe a BOLD response upon event onset gives us some information about what the true underlying signal might look like. In order to deal with movement in resting state fMRI, we model our fMRI signal as being comprised of true brain signal and motion (confounder) signals. We can estimate how motion influences our data, then remove it from the recorded signal; the leftover signal is our estimate of the BOLD signal.

This process of removing motion-based artifacts from our data is called confound regression, which is essentially fitting a linear model using motion estimates as regressors then subtracting it out from the signal. Hopefully in this process we get a closer estimate of what the actual brain-induced BOLD signal looks like.
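Conceptually, confound regression is just ordinary least squares. A toy numpy sketch (with random stand-in data, not the lesson’s code):

import numpy as np

T = 152                               #number of timepoints
signal = np.random.randn(T)           #stand-in voxel time-series
confounds = np.random.randn(T, 18)    #stand-in confound matrix (18 regressors)

#Fit signal ~ confounds * beta by least squares
beta, _, _, _ = np.linalg.lstsq(confounds, signal, rcond=None)

#The residuals are our estimate of the confound-free BOLD signal
cleaned = signal - confounds.dot(beta)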

In this section we’ll implement confound regression for resting-state data using nilearn’s high-level functionality.

Setting up

Let’s load in some modules as we’ve done before

import os
from nilearn import image as img
from nilearn import plotting as plot
import matplotlib.pyplot as plt
import numpy as np
import nibabel as nib
%matplotlib inline

Setting up our Motion Estimates

The beauty of FMRIPREP is that it estimates a number of motion-related signals for you and outputs them into:

sub-xxxx_task-xxxx_space-xxxxx…_confounds.tsv

This is basically a spreadsheet with a column for each motion estimate type and a row for each timepoint. We can view it using a Python package called pandas.

import pandas as pd

Let’s pick a functional file to clean and pull out the confounds.tsv that FMRIPREP computed for us. The func_dir path below points at the subject’s fMRIPrep func folder:

#Directory containing the subject's preprocessed functional outputs
func_dir = '../data/ds000030/derivatives/fmriprep/sub-10788/func'

#Load functional data, confounds and mask
func = os.path.join(func_dir, 'sub-10788_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz')
confound = os.path.join(func_dir, 'sub-10788_task-rest_bold_confounds.tsv')
mask = os.path.join(func_dir,'sub-10788_task-rest_bold_space-MNI152NLin2009cAsym_brainmask.nii.gz')

Using pandas we can read in the confounds.tsv file as a spreadsheet and display some rows:

#Delimiter is \t --> tsv is a tab-separated spreadsheet
confound_df = pd.read_csv(confound, delimiter='\t')
confound_df.head()

Each of these confounds is computed automatically by fmriprep.

Picking your Confounds

The choice of which confounds to use in functional imaging analysis is a matter of much debate. We recommend that you check out these sources for a start:

  1. https://www.sciencedirect.com/science/article/pii/S1053811917302288#f0005
  2. https://www.sciencedirect.com/science/article/pii/S1053811917302288

For now we’re going to replicate the pre-processing (mostly) from the seminal Yeo 1000 17-networks paper: https://www.ncbi.nlm.nih.gov/pubmed/21653723

The (mostly) Yeo 2011 Pre-processing schema

Confound regressors

  1. 6 motion parameters (X, Y, Z, RotX, RotY, RotZ)
  2. Global signal (GlobalSignal)
  3. 2 Largest Principal components of non-grey matter (aCompCor01, aCompCor02)

This is a total of 9 base confound regressor variables. Finally we add temporal derivatives of each of these signals as well (1 temporal derivative for each), the result is 18 confound regressors.

Low/High pass filtering

  1. Low pass filtering cutoff: 0.08
  2. High pass filtering cutoff: 0.009

A low pass filter removes high frequency signals from our data. fMRI signals are slow-evolving processes, so any high frequency signals are likely due to noise. A high pass filter removes any very low frequency signals (below 0.009 Hz), which may be due to intrinsic scanner instabilities.

Drop dummy TRs

During the initial stages of a functional scan there is a strong signal decay artifact; the first 4 or so TRs have very high intensity signals that don’t reflect the rest of the scan. Therefore we drop these timepoints.

Censoring + Interpolation (leaving out)

Censoring involves removal and interpolation of high-movement frames from the fMRI data. Interpolation is typically done using sophisticated algorithms much like Power et al. 2014.

We won’t be using censoring + interpolation since it’s fairly complicated and would take up too much time.


Setting up Confound variables for regression

Computing temporal derivatives for confound variables

First we’ll select our confound variables from our dataframe. You can do this by specifying a list of confounds, then using that list to pull out the associated columns

confound_vars = ['X','Y','Z','RotX','RotY','RotZ','GlobalSignal','aCompCor01','aCompCor02']
confound_df = confound_df[confound_vars]
confound_df.head()

For each of these confounds, we want to compute the temporal derivatives. The temporal derivatives are defined as the difference between consecutive timepoints:

for col in confound_df.columns:

    #Example X --> X_dt
    new_name = '{}_dt'.format(col)

    #Compute differences for each pair of rows from start to end.
    new_col = confound_df[col].diff()

    #Make new column in our dataframe
    confound_df[new_name] = new_col

confound_df.head()

What the NaN???

As you might have noticed, we have NaN’s in our confound dataframe. This happens because the first row has no prior value to take a difference with. This isn’t a problem, since we’re going to be dropping the first few timepoints from our data and confounds anyway!

Dummy TR Drop

Now we’ll implement our dummy TR drop. Remember, this means removing the first few timepoints from our functional image (we’ll also have to drop the same timepoints from our confounds!):

#First we'll load in our data and check the shape
raw_func_img = img.load_img(func)
raw_func_img.shape

Recall that the fourth dimension represents frames/TRs (timepoints). We want to drop the dummy timepoints entirely; to do so we use nibabel’s slicer feature. Note that we drop the first five volumes in total - enough to cover the four dummy TRs and the leading NaN row that .diff() created in our confounds, keeping the functional image and the confound dataframe aligned:

#Drop the first five volumes (indices 0-4)
func_img = raw_func_img.slicer[:,:,:,5:]
func_img.shape

#Drop confound dummy TRs from the dataframe to match the size of our new func_img
drop_confound_df = confound_df.loc[5:]
print(drop_confound_df.shape) #number of rows should match that of the functional image
drop_confound_df.head()

Applying confound regression

Now we’d like to clean our data of our selected confound variables. There are two ways to go about this:

  1. If you have nilearn version 0.5.0 or higher use nilearn.image.clean_img(image,confounds,...)
  2. If you want full control over specific parts of the image you’re cleaning use nilearn.signal.clean(signals,confounds,...)

The first method is probably most practical and can be done in one line given what we’ve already set-up. However, in cases of very large datasets (HCP-style), the second method might be preferable for optimizing memory usage.

First note that both methods take an argument confounds. This is a matrix of shape (timepoints × confound variables):

$$\begin{bmatrix} c_{1,1} & c_{1,2} & \cdots & c_{1,k} \\ c_{2,1} & c_{2,2} & \cdots & c_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ c_{T,1} & c_{T,2} & \cdots & c_{T,k} \end{bmatrix}$$

where each row is a timepoint and each column is a confound variable.

Therefore our goal is to take our confound dataframe and work it into a matrix of the form above: one with 147 rows and a column for each confound variable (9 confounds x 2 = 18, counting the temporal derivatives).

Luckily this is a one-liner!

confounds_matrix = drop_confound_df.values

#Confirm matrix size is correct
confounds_matrix.shape
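For reference, the second (signal-level) approach might look something like the sketch below, using nilearn’s masking utilities to move between the 4D image and a (timepoints x voxels) array. This is a sketch of an alternative we won’t use here:

from nilearn import masking
from nilearn import signal as sgl

#Extract in-brain voxels as a (timepoints x voxels) array
func_data = masking.apply_mask(func_img, mask)

#Clean the 2D signal array directly
cleaned = sgl.clean(func_data, confounds=confounds_matrix,
                    detrend=True, standardize=True,
                    low_pass=0.08, high_pass=0.009, t_r=2)

#Project the cleaned signals back into a 4D image
cleaned_img = masking.unmask(cleaned, mask)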

Let’s clean our image!

Using nilearn.image.clean_img

First we’ll describe a couple of this function’s important arguments. Any argument enclosed in [arg] is optional.

nilearn.image.clean_img(image,confounds,[low_pass],[high_pass],[t_r],[mask_img],[detrend],[standardize])

Required:

  • image - the fMRI image to clean
  • confounds - the confound matrix (timepoints x confound variables)

Optional:

  • low_pass - low-pass filter cutoff in Hz
  • high_pass - high-pass filter cutoff in Hz
  • t_r - the repetition time of the scan, needed when filtering
  • mask_img - restrict cleaning to voxels within a mask
  • detrend - remove linear trends from the signal
  • standardize - standardize the resulting signal

What we’re using:

The Repetition Time of our data is 2 seconds; in addition, since we’re replicating (mostly) Yeo 2011’s analysis, we’ll use a low-pass cutoff of 0.08, a high-pass cutoff of 0.009, and both detrending and standardization.

In addition we’ll use a mask of our MNI transformed functional image (mask) to speed up cleaning.

#Set some constants
high_pass= 0.009
low_pass = 0.08
t_r = 2

#Clean!
clean_img = img.clean_img(func_img,confounds=confounds_matrix,detrend=True,standardize=True,
                         low_pass=low_pass,high_pass=high_pass,t_r=t_r, mask_img=mask)

#Let's visualize our result! Doesn't really tell us much, but that's the data we're using for analysis!
plot.plot_epi(clean_img.slicer[:,:,:,50])

Key Points

  • Nuisance regression is an attempt to make sure your results aren’t driven by non-brain signals

  • With resting state, we don’t actually ever know the true signal - we can only attempt to estimate it


Applying Parcellations to Resting State Data

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How can we reduce the amount of noise-related variance in our data?

  • How can we frame our data as a set of meaningful features?

Objectives
  • Learn about the utility of parcellations as a data dimensionality reduction tool

  • Understand what the tradeoffs are when using parcellations to analyze your data

Introduction

What is a Brain Atlas or Parcellation?

A brain atlas/parcellation is a voxel-based labelling of your data into “structural or functional units”. In a parcellation schema each voxel is assigned a numeric (integer) label corresponding to the structural/functional unit that the voxel is thought to belong to, based on some criteria. You might wonder why someone would average together a bunch of voxels in a way that reduces the richness of the data. This boils down to a few problems inherent in functional brain imaging:

  1. Resting state data is noisy; averaging groups of “similar” voxels reduces the effect of random noise
  2. It provides an interpretative framework for functional imaging data. For example, one parcel might be defined as the Default Mode Network, which is thought to be functionally significant. Averaging voxels belonging to the Default Mode Network provides an estimate of the average Default Mode Network signal. In addition, the discovery of the Default Mode Network has yielded important insights into the organizational principles of the brain.
  3. It limits the number of statistical tests, thereby reducing potential Type I errors without resorting to strong statistical correction techniques that might reduce statistical power.
  4. It is a simpler way to work with your data: instead of 40x40x40 = 64,000 data points, you might have 17 or up to 200 parcels - significantly less data to deal with!

Applying a Parcellation to your Data

Since the parcellation of a brain is defined (currently) by spatial locations, applying a parcellation to fMRI data only concerns the first 3 dimensions; the last dimension (time) is retained. Thus a parcellation assigns every voxel (x,y,z) to a particular parcel ID (an integer).

Nilearn supports a large selection of different atlases, available through nilearn.datasets. For information about how to select which parcellation to use for analysis of your data we refer you to Arslan et al. 2018.

Retrieving the Atlas

For this tutorial we’ll be using a set of parcellations from Yeo et al. 2011. This atlas was generated from fMRI data from 1000 healthy control participants.

First we’ll load in our packages as usual:

import numpy as np
import nibabel as nib
from nilearn import datasets #provides nilearn example datasets as well as brain parcellation atlases
from nilearn import image
from nilearn import plotting
import matplotlib.pyplot as plt

To retrieve the Yeo atlas we’ll use the fetch_atlas_* family of functions provided by nilearn.datasets and download it into a local directory:

parcel_dir = '../resources/rois/'
atlas_yeo_2011 = datasets.fetch_atlas_yeo_2011(parcel_dir)

The method datasets.fetch_atlas_yeo_2011() returns a dict object. Examining the keys of the dictionary yields the following:

atlas_yeo_2011.keys()

Each of the values associated with a key in atlas_yeo_2011 is a .nii.gz image which contains a 3D NIFTI volume with a label for a given (x,y,z) voxel. Since these images are 3D volumes (sort of like structural images), we can view them using nilearn’s plotting utilities:

#Define where to slice the image
cut_coords = (8, -4, 9)
#Show a colorbar
colorbar = True
#Color scheme to show when viewing image
cmap = 'Paired'

#Plot all parcellation schemas referred to by atlas_yeo_2011
plotting.plot_roi(atlas_yeo_2011['thin_7'], cut_coords=cut_coords, colorbar=colorbar, cmap=cmap, title='thin_7')
plotting.plot_roi(atlas_yeo_2011['thin_17'], cut_coords=cut_coords, colorbar=colorbar, cmap=cmap, title='thin_17')
plotting.plot_roi(atlas_yeo_2011['thick_7'], cut_coords=cut_coords, colorbar=colorbar, cmap=cmap, title='thick_7')
plotting.plot_roi(atlas_yeo_2011['thick_17'], cut_coords=cut_coords, colorbar=colorbar, cmap=cmap, title='thick_17')


The 7 and 17 network parcellations correspond to the two most stable clustering solutions from the algorithm used by the authors. The thin/thick designation refers to how strict the voxel inclusion is (thick might include white matter/CSF, thin might exclude some regions of grey matter due to partial voluming effects).

For simplicity we’ll use the thick_7 variation which includes the following networks:

  1. Visual
  2. Somatosensory
  3. Dorsal Attention
  4. Ventral Attention
  5. Limbic
  6. Frontoparietal
  7. Default

The parcel areas labelled with 0 are background voxels not associated with a particular network.

Spatial Separation of Network

A key feature of the Yeo2011 networks is that they are spatially distributed, meaning that the locations of two voxels in the same network need not be part of the same region. However, there could be some cases in which you might want to examine voxels belonging to a network within a particular region. To do this, we can separate parcels belonging to the same network based on spatial continuity. If there is a gap between two sets of voxels belonging to the same parcel group, we can assign new labels to separate them out. Nilearn has a feature to handle this:

from nilearn.regions import connected_label_regions

#Separate spatially disconnected parcels of the thick_7 atlas
atlas_yeo = atlas_yeo_2011['thick_7']
region_labels = connected_label_regions(atlas_yeo)
plotting.plot_roi(region_labels,
                  cut_coords=(-20,-10,0,10,20,30,40,50,60,70),
                  display_mode='z',
                  colorbar=True,
                  cmap='Paired',
                  title='Relabeled Yeo Atlas')


Resampling the Atlas

Let’s store the separated version of the atlas into a NIFTI file so that we can work with it later:

region_labels.to_filename('../resources/rois/yeo_2011/Yeo_JNeurophysiol11_MNI152/relabeled_yeo_atlas.nii.gz')
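As a quick sanity check (a sketch, not in the original lesson), we can count how many distinct parcels the relabeling produced:

import numpy as np

#Label 0 is background, so subtract it from the count
n_parcels = np.unique(region_labels.get_data()).size - 1
print('Number of separated parcels:', n_parcels)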

Resampling Exercise

Our goal is to match the parcellation atlas dimensions to our functional file so that we can use it to extract the mean time series of each parcel region. Using Nilearn’s resampling capabilities, match the dimensions of the atlas file to the functional file. First let’s pick our functional file. Atlases are typically defined in standard space, so we will use the MNI152NLin2009cAsym version of the functional file:

func_file = '../data/ds000030/derivatives/fmriprep/sub-10788/func/sub-10788_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz'
func_img = nib.load(func_file)

Solution

First examine the size of both files, if they match we are done:

print('Size of functional file:', func_img.shape)
print('Size of atlas file:', region_labels.shape)

Turns out that they aren't the same! We can match the file sizes simply using image.resample_to_img:

resampled_yeo = image.resample_to_img(region_labels, func_img, interpolation='nearest')
plotting.plot_roi(resampled_yeo, func_img.slicer[:,:,:,54])

Recall that we use interpolation='nearest' because parcel regions are integers. Nearest interpolation preserves the values in the original image. Something like continuous or linear will pick in-between values; a parcel value of 2.2215 is not meaningful in this context.

Visualizing ROIs

For the next section, we’ll be performing an analysis using the Yeo parcellation on our functional data. Specifically, we’ll be using two ROIs: 44 and 46.

Exercise

Visualize ROIs 44 and 46. HINT: Try using image.math_img to select ROIs using a conditional statement

Solution

from nilearn import image

roi = 44
roi_mask = image.math_img('a == {}'.format(roi), a=resampled_yeo)
masked_resamp_yeo = image.math_img('a*b',a=resampled_yeo,b=roi_mask)
plotting.plot_roi(masked_resamp_yeo)


roi = 46
roi_mask = image.math_img('a == {}'.format(roi), a=resampled_yeo)
masked_resamp_yeo = image.math_img('a*b',a=resampled_yeo,b=roi_mask)
plotting.plot_roi(masked_resamp_yeo)


Key Points

  • Parcellations group voxels based on criteria such as similarity or orthogonality

  • Nilearn stores several standard parcellations that can be applied to your data

  • Parcellations are defined by assigning each voxel a parcel ‘membership’ value telling you which parcel that voxel belongs to

  • Parcellations provide an interpretative framework for understanding resting state data. But beware, some of the techniques used to form parcellations may not represent actual brain functional units!


Functional Connectivity Analysis

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How can we estimate brain functional connectivity patterns from resting state data?

Objectives
  • Use parcellations to reduce fMRI noise and speed up computation of functional connectivity

Introduction

Now we have an idea of three important components to analyzing neuroimaging data:

  1. Data manipulation
  2. Cleaning and confound regression
  3. Parcellation and signal extraction

In this notebook the goal is to integrate these 3 basic components and perform a full analysis of group data using Intranetwork Functional Connectivity (FC).

Intranetwork functional connectivity is essentially the result of performing correlational analysis on mean signals extracted from two ROIs. Using this method we can examine how well certain resting state networks, such as the Default Mode Network (DMN), are synchronized across spatially distinct regions.

ROI-based correlational analysis forms the basis of many more sophisticated kinds of functional imaging analysis.
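In its simplest form, this is just a Pearson correlation between two mean time-series. A toy sketch with random stand-in data (not part of the lesson):

import numpy as np

roi_a = np.random.randn(147)          #stand-in mean time-series for one ROI
roi_b = np.random.randn(147)          #stand-in mean time-series for another ROI
fc = np.corrcoef(roi_a, roi_b)[0, 1]  #their functional connectivity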


Lesson Outline

The outline of this lesson is divided into two parts. The first part directly uses what you’ve learned and builds upon it to perform the final functional connectivity analysis on group data.

The second part shows how we can use Nilearn’s convenient wrapper functionality to perform the same task with significantly less effort.

Part A: Manual computation

  1. Functional data cleaning and confound regression
  2. Applying a parcellation onto the data
  3. Computing the correlation between two ROI time-series

Part B: Using Nilearn’s high-level features

  1. Using NiftiLabelsMasker to extract cleaned time-series
  2. Computing the correlation between two ROI time-series
  3. Performing analysis on all subjects
  4. Visualization of final results

Using Nilearn’s High-level functionality to compute correlation matrices

Nilearn has built-in functionality for applying a parcellation to a functional image, cleaning the data, and computing the mean, all in one step. First let’s get our functional data and our parcellation file:

import os
from nilearn import signal as sgl
from nilearn import image as img
from nilearn import plotting as plot
from nilearn import datasets
import nibabel as nib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import bids
%matplotlib inline

First let’s grab the data that we want to perform our connectivity analysis on using PyBIDS:

layout = bids.layout.BIDSLayout('../data/ds000030/')
subjects = layout.get_subjects()

Now that we have a list of subjects to perform our analysis on, let’s load up our parcellation template file:

#Load separated parcellation
parcel_file = '../resources/rois/yeo_2011/Yeo_JNeurophysiol11_MNI152/relabeled_yeo_atlas.nii.gz'
yeo_7 = img.load_img(parcel_file)

With all our files loaded, we first create an input_data.NiftiLabelsMasker object.

from nilearn import input_data


masker = input_data.NiftiLabelsMasker(labels_img=yeo_7,
                                      standardize=True,
                                      memory='nilearn_cache',
                                      verbose=1,
                                      detrend=True,
                                      low_pass=0.08,
                                      high_pass=0.009,
                                      t_r=2)
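The examples below use two pieces defined in Part A of this lesson that aren’t shown here: a tr_drop variable (the number of dummy TRs to drop) and an extract_confounds helper. A minimal sketch of the assumed behaviour (not the lesson’s exact implementation):

import pandas as pd

#Number of dummy TRs to drop (assumed from Part A)
tr_drop = 4

def extract_confounds(confound_file, confound_vars):
    #Read the fMRIPrep confounds spreadsheet and keep the selected columns
    confound_df = pd.read_csv(confound_file, delimiter='\t')
    confound_df = confound_df[confound_vars]

    #Add a temporal derivative column for each confound
    for col in confound_vars:
        confound_df['{}_dt'.format(col)] = confound_df[col].diff()

    #Return a plain numpy matrix (timepoints x confounds)
    return confound_df.values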

The input_data.NiftiLabelsMasker object is a wrapper that applies parcellation, cleaning and averaging to a functional image. For example let’s apply this to our first subject:


#Pick the first subject from our list
example_sub = subjects[0]

#Get functional file and confounds file
func_file = layout.get(subject=example_sub, modality='func',
			type='preproc', return_type='file')[0]
confound_file=layout.get(subject=example_sub, modality='func',
                             type='confounds', return_type='file')[0]

#Load functional file and perform TR drop
func_img = img.load_img(func_file)
func_img = func_img.slicer[:,:,:,tr_drop+1:]

#Convert confounds file into required format
confounds = extract_confounds(confound_file,
                                 ['X','Y','Z',
                                 'RotX','RotY','RotZ',
                                 'GlobalSignal','aCompCor01',
                                 'aCompCor02'])

#Drop TR on confound matrix
confounds = confounds[tr_drop+1:,:]

#Apply cleaning, parcellation and extraction to functional data
time_series = masker.fit_transform(func_img,confounds)
time_series.shape
(147,46)

After performing our data extraction we’re left with data containing 147 timepoints and 46 regions, which matches the number of regions in our parcellation atlas.

Exercise

Apply the data extract process shown above to all subjects in our subject list and collect the results. Here is some skeleton code to help you think about how to organize your data:

pooled_subjects = []
ctrl_subjects = []
schz_subjects = []

for sub in subjects:
	#FILL LOOP
	

Solution

pooled_subjects = []
ctrl_subjects = []
schz_subjects = []

for sub in subjects:
    func_file = layout.get(subject=sub, modality='func',
                           type='preproc', return_type='file')[0]
    
    confound_file=layout.get(subject=sub, modality='func',
                             type='confounds', return_type='file')[0]
    
    func_img = img.load_img(func_file)
    func_img = func_img.slicer[:,:,:,tr_drop+1:]
    
    confounds = extract_confounds(confound_file,
                                 ['X','Y','Z',
                                 'RotX','RotY','RotZ',
                                 'GlobalSignal','aCompCor01',
                                 'aCompCor02'])
    
    confounds = confounds[tr_drop+1:,:]
    
    time_series = masker.fit_transform(func_img,confounds)
    pooled_subjects.append(time_series)
    
    if sub.startswith('1'):
        ctrl_subjects.append(time_series)
    if sub.startswith('5'):
        schz_subjects.append(time_series)

Once we have the extracted time series for each subject we can compute correlation matrices. Once again, Nilearn provides functionality to do this. We’ll use the module nilearn.connectome.ConnectivityMeasure to automatically apply a Pearson r correlation to our schizophrenia and control data:

from nilearn.connectome import ConnectivityMeasure
correlation_measure = ConnectivityMeasure(kind='correlation')

ctrl_correlation_matrices = correlation_measure.fit_transform(ctrl_subjects)
schz_correlation_matrices = correlation_measure.fit_transform(schz_subjects)

To help us visualize the data, we’ve created a helper function for visualization purposes.

def plot_matrices(matrices, matrix_kind):
    n_matrices = len(matrices)
    fig = plt.figure(figsize=(n_matrices * 4, 4))
    for n_subject, matrix in enumerate(matrices):
        plt.subplot(1, n_matrices, n_subject + 1)
        matrix = matrix.copy()  # avoid side effects
        # Set diagonal to zero, for better visualization
        np.fill_diagonal(matrix, 0)
        vmax = np.max(np.abs(matrix))
        title = '{0}, subject {1}'.format(matrix_kind, n_subject)
        plot.plot_matrix(matrix, vmin=-vmax, vmax=vmax, cmap='RdBu_r',
                             title=title, figure=fig, colorbar=False)

Now we can plot the two resulting sets of correlation matrices for our two groups:

plot_matrices(ctrl_correlation_matrices, 'correlation')


plot_matrices(schz_correlation_matrices, 'correlation')


Let’s look at the data returned by correlation_measure.fit_transform:

ctrl_correlation_matrices.shape
(10, 46, 46)

We can see that we have a 3D array where the first index corresponds to a particular subject, and the last two indices refer to the correlation matrix (46 regions x 46 regions).

Finally we can extract our two regions of interest by picking the entries in the correlation matrix corresponding to the connection between regions 44 and 46:

ctrl_roi_vec = ctrl_correlation_matrices[:,43,45]
schz_roi_vec = schz_correlation_matrices[:,43,45]

Now that we’ve extracted a value per participant (the connectivity between regions 44 and 46), we can visualize this data. To do this, we’ll use seaborn:

import seaborn as sns

#Create dataframes so we can visualize in seaborn
ctrl_df = pd.DataFrame(data={'dmn_corr':ctrl_roi_vec, 'group':'control'})
scz_df = pd.DataFrame(data={'dmn_corr':schz_roi_vec, 'group' : 'schizophrenia'})

#Stack dataframes
df = pd.concat([ctrl_df, scz_df], ignore_index=True)

#Pick 5 random people to display
df.sample(n=5)

OUTPUT:

    dmn_corr          group
13  0.520369  schizophrenia
9   0.517139        control
16  0.755913  schizophrenia
0   0.387923        control
17  0.530103  schizophrenia

Finally, we can visualize the results we have:

#Visualize results
#Note: we use fig rather than plot to avoid shadowing the nilearn plotting module
fig = plt.figure(figsize=(5, 5))
ax = sns.boxplot(x='group', y='dmn_corr', data=df, palette='Set3')
ax = sns.swarmplot(x='group', y='dmn_corr', data=df, color='0.25')
ax.set_title('DMN Intra-network Connectivity')
ax.set_ylabel('Intra-network connectivity')
plt.show()

[Figure: box and swarm plot of DMN intra-network connectivity by group]

Although the results here aren’t significant, they seem to indicate that there might be three subclasses in our schizophrenia group - of course, we’d need a lot more data to confirm this! The interpretation of these results should ideally be based on an a priori hypothesis!
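The lesson stops at visual inspection, but if you wanted to quantify the group difference, a minimal sketch using SciPy’s independent-samples t-test (our addition; SciPy is not otherwise used in this lesson) might look like this:

from scipy.stats import ttest_ind

# two-sided t-test comparing DMN connectivity between groups
t_stat, p_val = ttest_ind(ctrl_roi_vec, schz_roi_vec)
print('t = {:.2f}, p = {:.3f}'.format(t_stat, p_val))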

Key Points

  • Functional connectivity can be estimated by correlating parcel time series, using nilearn.connectome.ConnectivityMeasure

  • Connectivity values can be organized into dataframes for group-level visualization and comparison


Neuroimaging Fundamentals & Nibabel

Overview

Teaching: 20 min
Exercises: 10 min
Questions
  • How are images loaded in Python?

Objectives
  • Learn about imaging data structures.

Why Python?

Types of MR Scans

[Figure: types of MR scans]

For this tutorial, we’ll be focusing on T1w and resting state fMRI scans.

Neuroimaging File Formats

Format Name   File Extension      Origin
Analyze       .img/.hdr           Analyze Software, Mayo Clinic
DICOM         none                ACR/NEMA Consortium
NIfTI         .nii or .img/.hdr   Neuroimaging Informatics Technology Initiative
MINC          .mnc                Montreal Neurological Institute
NRRD          .nrrd               —

[Figure: DICOM to NIfTI conversion]

From the MRI scanner, images are initially collected in the DICOM format and can be converted to NIfTI using dcm2niix.
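For reference, a typical dcm2niix call might look like this (the paths here are hypothetical; -z y gzips the output, -o sets the output directory):

dcm2niix -z y -o ../data/nifti ../data/dicom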

Intro to NIfTI

NIfTI is one of the most ubiquitous file formats for storing neuroimaging data. We’ll cover a few details to get started working with them. If you’re interested in learning more about NIfTI images, we highly recommend this blog post about the NIfTI format.

Reading NIfTI Images

NiBabel is a Python package for reading and writing neuroimaging data. To learn more about how NiBabel handles NIfTIs, check out the Working with NIfTI images page of the NiBabel documentation.

import nibabel as nib

First, use the load() function to create a NiBabel image object from a NIfTI file. We’ll load in a T1w image from the dataset we’ll be using for this tutorial.

t1_img = nib.load('../data/ds000030/sub-10788/anat/sub-10788_T1w.nii.gz')
type(t1_img)

There are three main components of a NIfTI image:

1. Header: contains metadata about the image, such as image dimensions, data type, etc.

t1_hdr = t1_img.header

You can easily access specific metadata from the NiBabel image header object through dictionary keys.

t1_hdr.keys()

Exercise #1

Extract the value of pixdim from t1_hdr

Solution

t1_hdr['pixdim']


2. Data

The data is a multidimensional array containing the image’s intensity values.

t1_data = t1_img.get_data()
t1_data

The data is stored in a numpy array.

type(t1_data)

We can check some basic properties of the array.

Exercise #2

How many dimensions does t1_data have? What is the size of each dimension? What is the data type?

Solution

t1_data.ndim
t1_data.shape
t1_data.dtype


The shape of the data always has at least 3 dimensions (X, Y, and Z) and sometimes a 4th, T (time).
This T1w image has 3 dimensions. The brain was scanned in 176 slices with a resolution of 256 x 256 voxels per slice.

The data type of an image controls the range of possible intensities. As the number of possible values increases, the amount of space the image takes up in memory also increases.

Data Type   Range               Number of Values
uint8       0 to 255            256
int8        -128 to 127         256
uint16      0 to 2^16 - 1       2^16
int16       -2^15 to 2^15 - 1   2^16
float16     ~-2^16 to ~2^16     2^16 (unevenly spaced)
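To see the memory trade-off concretely, here’s a small sketch (assuming t1_data from above) comparing footprints across data types:

import numpy as np

print(t1_data.dtype)                                   # the image's native data type
print(t1_data.nbytes / 1024 ** 2)                      # memory footprint in MiB
print(t1_data.astype(np.float64).nbytes / 1024 ** 2)   # casting to a larger type multiplies the footprint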

3. Affine: tells the position of the image array data in a reference space

The affine translates between voxel coordinates in the data array (data space) and spatial coordinates in millimetres (world space).

t1_affine = t1_img.affine
t1_affine
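As a minimal sketch of what the affine does, NiBabel’s apply_affine helper maps a voxel index (i, j, k) into world-space millimetre coordinates (the voxel index below is just an arbitrary example):

from nibabel.affines import apply_affine

# world-space (mm) coordinate of the voxel at index (87, 127, 127)
apply_affine(t1_affine, [87, 127, 127])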

Exercise

Explore some of the other methods that can be called on the NIfTI image.

Working With Image Data

Slicing

n-dimensional images are just stacks of numpy arrays. Each value in the array is assigned an (x, y, z) coordinate.

[Figure: slicing a 3D image]

You’ll recall our example T1w image is a 3D image with dimensions $176 \times 256 \times 256$.

Exercise #3

Select the central slice along the first dimension by indexing t1_data, e.g. t1_data[x, y, z]

Solution

central_slice = t1_data[t1_data.shape[0]//2 - 1, :, :]
central_slice


Instead of indexing the data array, we can also use the image’s slicer attribute:

central_slice = t1_img.slicer[87:88, :, :].get_data()[0]
central_slice

Visualizing

Let’s visualize the central slice.

import matplotlib.pyplot as plt
%matplotlib inline

plt.imshow(central_slice, cmap='gray')

You’ll notice that the image is rotated. :( Don’t worry, we can fix this!

import numpy as np

rot_central_slice = np.rot90(central_slice, k=1)
plt.imshow(rot_central_slice, cmap='gray')

You’ll notice that so far, we’ve only seen a sagittal slice. Let’s visualize the sagittal, axial and coronal slices.

# function to display a row of slices
def show_slices(slices):
    """ Function to display a row of image slices """
    fig, axes = plt.subplots(1, len(slices))
    for i, slice in enumerate(slices):
        axes[i].imshow(slice.T, cmap="gray", origin="lower")
    # hide the axis ticks on every panel
    for ax in axes:
        ax.axis('off')
slice_0 = t1_data[87, :, :]
slice_1 = t1_data[:, 127, :]
slice_2 = t1_data[:, :, 127]
show_slices([slice_0, slice_1, slice_2])
plt.suptitle("Center slice")

All this is fine but NiBabel makes it even easier to visualize all three planes. Call the orthoview() method on the NiBabel image object.

t1_img.orthoview()

Reshaping

NumPy has a reshape() function for reshaping the data array. Let’s say we want to convert this 3D array into a 2D array.

t1_data_2d = t1_data.reshape(np.prod(t1_data.shape[:-1]), t1_data.shape[-1])
t1_data_2d.shape

Masks

Next, we will see how to segment the brain from the black background.

# look at the intensity distribution to pick a threshold
plt.hist(t1_data.flatten(), bins=50)
# create a boolean mask: True where intensity exceeds the threshold
t1_mask = t1_data > 100
plt.imshow(t1_mask[87, :, :], cmap='gray')
# apply the mask: keep intensities inside the mask, zero elsewhere
test = np.where(t1_mask, t1_data, 0)
plt.imshow(test[87, :, :], cmap='gray')

Writing NIfTI Images

Let’s save the mask we just created to a file.

img_mask = nib.Nifti1Image(test, t1_affine, t1_hdr)
img_mask.to_filename('../data/test_mask.nii.gz')
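As a quick sanity check (our addition), we can reload the file we just wrote and confirm that it round-trips:

# reload the mask we just saved and confirm its dimensions
reloaded = nib.load('../data/test_mask.nii.gz')
print(reloaded.shape)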

Key Points

  • NIfTI images have three main components: a header, the image data array, and an affine

  • NiBabel lets you load, slice, visualize, and save NIfTI images as numpy arrays


Introduction to Image Manipulation using Nilearn

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How can we perform arithmetic operations on MR images?

Objectives
  • Use Nilearn to perform masking and mathematical operations

  • Learn how to resample across modalities for image viewing and manipulation

Introduction

Nilearn is a functional neuroimaging analysis and visualization library that wraps a number of high-level operations (machine learning, statistical analysis, data cleaning, etc.) in easy-to-use commands. The neat thing about Nilearn is that it uses Nibabel under the hood: everything you do in Nilearn can be represented as a set of operations on Nibabel objects. The important consequence is that you can perform high-level operations (like resampling) in Nilearn, drop down into Nibabel for custom data processing, and then jump back up to Nilearn for interactive image viewing. Pretty cool!

Setting up

The first thing we’ll do is to import some Python modules that will allow us to use Nilearn:

import os
import matplotlib.pyplot as plt
from nilearn import image as img
from nilearn import plotting as plot

# for inline visualization in the Jupyter notebook
%matplotlib inline

Notice that we imported two things:

  1. image as img - allows us to load NIfTI images using nibabel under the hood
  2. plotting as plot - allows us to use Nilearn’s plotting library for easy visualization

First let’s grab some data from where we downloaded our FMRIPREP outputs:

fmriprep_dir='../data/ds000030/derivatives/fmriprep/{subject}/{mod}/'
t1_dir = fmriprep_dir.format(subject='sub-10788', mod='anat')
func_dir = fmriprep_dir.format(subject='sub-10788', mod='func')

We can view the files as follows:

os.listdir(t1_dir)
['sub-10788_T1w_midthickness.L.surf.gii',
 'sub-10788_T1w_midthickness.R.surf.gii',
 'sub-10788_T1w_space-MNI152NLin2009cAsym_brainmask.nii.gz',
 'sub-10788_T1w_preproc.nii.gz',
 'sub-10788_T1w_pial.R.surf.gii',
 'sub-10788_T1w_space-MNI152NLin2009cAsym_class-GM_probtissue.nii.gz',
 'sub-10788_T1w_space-MNI152NLin2009cAsym_class-WM_probtissue.nii.gz',
 'sub-10788_T1w_space-MNI152NLin2009cAsym_class-CSF_probtissue.nii.gz',
 'sub-10788_T1w_brainmask.nii.gz',
 'sub-10788_T1w_dtissue.nii.gz',
 'sub-10788_T1w_inflated.L.surf.gii',
 'sub-10788_T1w_smoothwm.R.surf.gii',
 'sub-10788_T1w_pial.L.surf.gii',
 'sub-10788_T1w_space-MNI152NLin2009cAsym_preproc.nii.gz',
 'sub-10788_T1w_smoothwm.L.surf.gii',
 'sub-10788_T1w_inflated.R.surf.gii',
 'sub-10788_T1w_space-MNI152NLin2009cAsym_warp.h5']

Warming up with Nilearn

Basic Image Operations

In this section we’re going to deal with the following files:

  1. sub-10788_T1w_preproc.nii.gz - the T1 image in native space
  2. sub-10788_T1w_brainmask.nii.gz - a mask with 1’s representing the brain and 0’s elsewhere

T1 = os.path.join(t1_dir, 'sub-10788_T1w_preproc.nii.gz')
bm = os.path.join(t1_dir, 'sub-10788_T1w_brainmask.nii.gz')

We can view our data using Nilearn’s plotting module as follows:

plot.plot_anat(T1)

[Figure: T1w anatomical image]

Try viewing the mask as well!

Let’s start performing some image operations. The simplest operations are element-wise: we apply some mathematical operation to each voxel of the MR image. Since voxels are represented in a 3D array, this is equivalent to performing an operation on each element (i,j,k) of a 3D array. Let’s try inverting the image, that is, flipping the colour scale such that all blacks appear white and vice-versa. To do this, we’ll use the method:

img.math_img(formula, **imgs)

where formula is a string describing the mathematical operation to perform (written in terms of variable names such as 'a'), and imgs supplies the named images those variables refer to.

In order to invert the image, we can simply flip the sign, which will map the most positive elements (white) to the most negative elements (black) and the least positive elements (black) to the least negative elements (white). This effectively flips the colour scale:

invert_img = img.math_img('-a', a=T1)
plot.plot_anat(invert_img)

[Figure: inverted T1w image]

Applying a Mask

Let’s extend this idea of applying operations to each element of an image to multiple images. Instead of specifying just one image like the following:

img.math_img('a+1',a=img_a)

We can specify multiple images by tacking on additional variables:

img.math_img('a+b', a=img_a, b=img_b)

The key requirement when dealing with multiple images is that their dimensions must be the same. This is because we’re dealing with element-wise operations: voxel (i,j,k) in img_a is paired with voxel (i,j,k) in img_b, so every voxel in img_a must have a partner in img_b.
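A quick way to check that two images are compatible before combining them is to compare their shapes; a minimal sketch using the files from above:

# element-wise operations require matching dimensions
print(img.load_img(T1).shape)
print(img.load_img(bm).shape)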

We can take advantage of this property when masking our data using multiplication. Masking works by multiplying a raw image (our T1) with a mask image (our bm). Wherever the mask has a value of 0, the product with the raw image is 0; wherever the mask has a value of 1, the raw image keeps its original value. Let’s try this out in practice and see what the result is:

masked_t1 = img.math_img('a*b', a=T1, b=bm)
plot.plot_anat(masked_t1)

[Figure: brain-masked T1w image]

As you can see, areas where the mask had a value of 1 were retained and everything else was set to 0.

Exercise #1

Try applying the mask such that the brain is removed, but the rest of the head is intact!

Solution

inverted_mask_t1 = img.math_img('a*(1-b)', a=T1, b=bm)
plot.plot_anat(inverted_mask_t1)

[Figure: T1w image with the brain masked out]

Resampling

Recall from the previous lesson, where we used Nibabel to explore neuroimaging data, that our T1w and functional images do not share the same dimensions. This poses a problem…

Imagine that you have two images: one is a 256x256 JPEG, the other a 1024x1024 JPEG. If you loaded both into an image editor, the first would appear much smaller than the second. To make the two images overlay each other perfectly, you could resize them, for example by shrinking the high-resolution 1024x1024 JPEG down to the low-resolution 256x256 one.

This JPEG problem is analogous to our situation! The T1 image has smaller voxels (higher resolution) and the functional image has larger voxels (lower resolution). Both images represent the same real object and so must cover the same physical extent (in mm); therefore you need more T1 voxels than functional voxels to represent the brain, and the array dimensions don’t match. To fix this issue we need to resize (or, more accurately, resample) our images so that the dimensions match (same number of voxels).

Resampling

Resampling is a method of interpolating between data points. When we stretch an image, we need to figure out what goes in the gaps created by stretching - this is what resampling does! Similarly, when we squish an image, we have to throw away some pixels - here resampling figures out how to replace values so that the smaller image best represents the original.

Let’s implement resampling so that our functional image (called EPI) matches our T1 image.

For this section, we’ll use two new files:

mni_T1 = os.path.join(t1_dir,'sub-10788_T1w_space-MNI152NLin2009cAsym_preproc.nii.gz')
mni_epi = os.path.join(func_dir,'sub-10788_task-rest_bold_space-MNI152NLin2009cAsym_preproc.nii.gz')

where mni_T1 is the preprocessed T1w image in MNI space, and mni_epi is the preprocessed resting-state functional image in the same MNI space.

First let’s load in our data so we can examine it in more detail. Remember, Nilearn loads the image as a nibabel object:

mni_t1_img = img.load_img(mni_T1)
mni_epi_img = img.load_img(mni_epi)
print("Data is type", type(mni_t1_img))
print("T1 dimensions", mni_t1_img.shape)
print("EPI dimensions", mni_epi_img.shape)
Data is type	<class 'nibabel.nifti1.Nifti1Image'>
T1 dimensions	(193, 229, 193)
EPI dimensions	(65, 77, 49, 152)

This confirms that the T1 image has many more voxels than the EPI image. Note that the 4th dimension of the EPI image is time (one entry per volume acquired), which we can safely ignore for now.
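If you ever do need a single 3D volume out of the 4D EPI image, nilearn’s index_img pulls one out; a minimal sketch using the image loaded above:

# pull out the first volume (timepoint) of the 4D EPI image
first_vol = img.index_img(mni_epi_img, 0)
print(first_vol.shape)  # (65, 77, 49)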

We can resample an image using nilearn’s img.resample_to_img function, which has the following structure:

img.resample_to_img(source_img, target_img, interpolation)

Interpolation

Nilearn supports 3 types of interpolation; the one you’ll use depends on the type of data you’re resampling:

  1. continuous (the default) - interpolates with 3rd-order splines while maintaining edge features. Ideal for structural images where edges are well-defined.
  2. linear - trilinear interpolation using a weighted combination of neighbouring voxels; will blur the image.
  3. nearest - takes the value of the closest voxel. Ideal for binary masks, since it preserves the 0’s and 1’s and never produces in-between values (e.g. 0.342), and for numeric labels such as 0, 1, 2, 3, … in parcellations. See the sketch after this list.
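To illustrate the point about masks, here’s a short sketch (our own check, using the MNI-space brain mask from the folder listing earlier) confirming that 'nearest' interpolation keeps a mask strictly binary:

import numpy as np

# resample the MNI-space brain mask down to the EPI grid
mni_bm = os.path.join(t1_dir, 'sub-10788_T1w_space-MNI152NLin2009cAsym_brainmask.nii.gz')
resamp_mask = img.resample_to_img(source_img=mni_bm, target_img=mni_epi_img, interpolation='nearest')

# only 0s and 1s should survive
print(np.unique(resamp_mask.get_data()))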

Let’s implement this:

resamp_mni_T1 = img.resample_to_img(source_img=mni_t1_img, target_img=mni_epi_img, interpolation='continuous')
print("Resampled T1 dimensions", resamp_mni_T1.shape)
print("EPI dimensions", mni_epi_img.shape)
Resampled T1 dimensions		(65, 77, 49)
EPI dimensions			(65, 77, 49, 152)

[Figure: T1 image resampled to the EPI grid]

As you might notice, we have a blockier version of our T1 image – we’ve reduced the resolution to match that of the EPI image.

Challenge

Using the native T1 and the resting-state image in T1 space, do the following:

  1. Resample the Native T1 to match the Resting State image
  2. Replace the brain in the T1 image with the first frame of the resting state brain

Some files you’ll need

ex_T1 = os.path.join(t1_dir,'sub-10788_T1w_preproc.nii.gz')
ex_t1_bm = os.path.join(t1_dir,'sub-10788_T1w_brainmask.nii.gz')
ex_func = os.path.join(func_dir,'sub-10788_task-rest_bold_space-T1w_preproc.nii.gz')
ex_func_bm = os.path.join(func_dir,'sub-10788_task-rest_bold_space-T1w_brainmask.nii.gz')

Solution

#Step 1: Resample the native T1 to match the resting state image
resamp_t1 = img.resample_to_img(source_img=ex_T1, target_img=ex_func, interpolation='continuous')

#Step 2: We need to resample the mask as well!
resamp_bm = img.resample_to_img(source_img=ex_t1_bm, target_img=resamp_t1, interpolation='nearest')

#Step 3: Mask out the T1 image
removed_t1 = img.math_img('a*(1-b)',a=resamp_t1,b=resamp_bm)

#Visualize the resampled and removed brain
plot.plot_anat(removed_t1)

[Figure: resampled T1 with the brain removed]

#Load in the first frame of the resting state image
func_img = img.load_img(ex_func)
first_func_img = func_img.slicer[:,:,:,0]

#Mask the functional image and visualize
masked_func = img.math_img('a*b', a=first_func_img, b=ex_func_bm)
plot.plot_img(masked_func)

[Figure: masked first functional volume]

#Now overlay the masked functional brain on top of the anatomical image with the brain removed

combined_img = img.math_img('a+b', a=removed_t1, b=masked_func)
plot.plot_anat(combined_img)

[Figure: functional brain overlaid on the brainless anatomical image]

Key Points

  • MR images are essentially 3D arrays where each voxel is represented by an (i,j,k) index

  • Nilearn is Nibabel under the hood, knowing how Nibabel works is key to understanding Nilearn


Exploration of Open Neuroimaging Datasets in BIDS format

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • How does standardization of neuroimaging data ease the data exploration process?

Objectives
  • Gain a grasp of the BIDS format

  • Use PyBIDS to easily explore a BIDS dataset

Tutorial Dataset

For this tutorial, we will be using a subset of a publicly available dataset, ds000030, from openneuro.org. The dataset is structured according to the Brain Imaging Data Structure (BIDS). BIDS is a simple and intuitive way to organize and describe your neuroimaging and behavioural data. Neuroimaging experiments result in complicated data that can be arranged in several different ways. BIDS tackles this problem by suggesting a new standard (based on consensus from multiple researchers across the world) for the arrangement of neuroimaging datasets.

Using the same structure for all of your studies will allow you to easily reuse all of your scripts between studies. Additionally, sharing code with other researchers will be much easier.

Let’s take a look at the participants.tsv file to see what the demographics for this dataset look like.

import pandas as pd

participant_metadata = pd.read_csv('../data/ds000030/participants.tsv', sep='\t')
participant_metadata

OUTPUT:

participant_id diagnosis age gender bart bht dwi pamenc pamret rest scap stopsignal T1w taskswitch ScannerSerialNumber ghost_NoGhost
0 sub-10159 CONTROL 30 F 1.0 NaN 1.0 NaN NaN 1.0 1.0 1.0 1.0 1.0 35343.0 No_ghost
1 sub-10171 CONTROL 24 M 1.0 1.0 1.0 NaN NaN 1.0 1.0 1.0 1.0 1.0 35343.0 No_ghost
2 sub-10189 CONTROL 49 M 1.0 NaN 1.0 NaN NaN 1.0 1.0 1.0 1.0 1.0 35343.0 No_ghost
3 sub-10193 CONTROL 40 M 1.0 NaN 1.0 NaN NaN NaN NaN NaN 1.0 NaN 35343.0 No_ghost
4 sub-10206 CONTROL 21 M 1.0 NaN 1.0 NaN NaN 1.0 1.0 1.0 1.0 1.0 35343.0 No_ghost

From this table we can easily view unique diagnostic groups:

participant_metadata['diagnosis'].unique()
array(['CONTROL', 'SCHZ', 'BIPOLAR', 'ADHD'], dtype=object)

For this tutorial, we’re just going to work with participants that are either CONTROL or SCHZ (diagnosis) and have both a T1w (T1w == 1) and rest (rest == 1) scan. Also, you’ll notice some of the T1w scans included in this dataset have a ghosting artifact. We’ll need to filter these out as well (ghost_NoGhost == 'No_ghost').

We’ll filter this data out like so:

participant_metadata = participant_metadata[(participant_metadata.diagnosis.isin(['CONTROL', 'SCHZ'])) &
                                            (participant_metadata.T1w == 1) &
                                            (participant_metadata.rest == 1) &
                                            (participant_metadata.ghost_NoGhost == 'No_ghost')]
participant_metadata['diagnosis'].unique()
array(['CONTROL', 'SCHZ'], dtype=object)

From this, we have a list of participants of interest for our analysis. We can then use this list to download the corresponding data with tools such as aws or datalad - in fact, this is exactly how we set up the list of participants to download for this workshop!
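As a hypothetical sketch of that step (the output filename is our own), you could write the filtered participant IDs to a text file to feed into a download tool:

#Save the filtered participant IDs, one per line
participant_metadata['participant_id'].to_csv('participants_to_download.txt',
                                              index=False, header=False)

Since we’ve already downloaded the dataset, we can now explore its structure using PyBIDS: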

import bids.layout
layout = bids.layout.BIDSLayout('../data/ds000030')

The pybids layout object lets you query your BIDS dataset according to a number of parameters by using a get_*() method. We can get a list of the subjects we’ve downloaded from the dataset.

layout.get_subjects()
['10171',
 '10292',
 '10365',
 '10438',
 '10565',
 '10788',
 '11106',
 '11108',
 '11122',
   ...
 '50083']

We can also pull a list of imaging modalities in the dataset:

layout.get_modalities()
['anat', 'func']

As well as tasks and more:

#Task fMRI
print(layout.get_tasks())

#Data types (bold, brainmask, confounds, smoothwm, probtissue, warp...)
print(layout.get_types())
['rest']

['bold',
 'brainmask',
 'confounds',
 'description',
 'dtissue',
 'fsaverage5',
 'inflated',
 'midthickness',
 'participants',
 'pial',
 'preproc',
 'probtissue',
 'smoothwm',
 'warp']

In addition we can specify sub-types of each BIDS category:

layout.get_types(modality='func')
['brainmask', 'confounds', 'fsaverage5', 'preproc']

We can use this functionality to pull all our fMRI NIfTI files:

layout.get(task='rest', type='bold', extensions='nii.gz', return_type='file')
TO FILL

Finally, we can convert the data stored in the bids.layout object into a pandas.DataFrame:

df = layout.as_data_frame()
df.head()

OUTPUT:

path modality subject task type
0 /home/jerry/projects/scwg2018_python_neuroimag... NaN NaN rest bold
1 /home/jerry/projects/scwg2018_python_neuroimag... NaN NaN NaN participants
2 /home/jerry/projects/scwg2018_python_neuroimag... NaN NaN NaN NaN
3 /home/jerry/projects/scwg2018_python_neuroimag... anat 10565 NaN brainmask
4 /home/jerry/projects/scwg2018_python_neuroimag... anat 10565 NaN probtissue

Key Points

  • BIDS is an organizational principle for neuroimaging data for transparent data sharing

  • PyBIDS is a Python-based tool that allows for easy exploration of BIDS-formatted neuroimaging data