Summary and Schedule
This Data Carpentry lesson aims to introduce learners to the analysis of diffusion Magnetic Resonance Imaging (dMRI) data, primarily using Python. Its target audience includes researchers new to the dMRI field, as well as undergraduate or graduate students from related fields, such as neuroscience, computational neuroscience, or medical image analysis, who wish to broaden their skills.
Prerequisites for instructors
This lesson assumes that the instructors have a solid knowledge of diffusion MRI and of the scientific programming tools used throughout the lesson (see the BIDS-dMRI Setup page), as well as teaching experience. If you are teaching this lesson in a workshop, please see the Instructor notes.
Prerequisites for learners
This lesson assumes that the learners have a basic understanding of how a signal or image is represented digitally, entry-level knowledge of medical image analysis, and a minimal set of computing and programming skills to install the required tools and to follow the lesson (see the BIDS-dMRI Setup page). This lesson requires learners to have completed the Introduction to MRI and BIDS lesson of the Neuroimaging Curriculum.
| Duration | Episode | Questions |
| --- | --- | --- |
| | Setup Instructions | Download files required for the lesson |
| 00h 00m | 1. Introduction to Diffusion MRI data | How is dMRI data represented? What is diffusion weighting? |
| 00h 25m | 2. Preprocessing dMRI data | What are the standard preprocessing steps? How do we register with an anatomical image? |
| 00h 55m | 3. Local fiber orientation reconstruction | What information can dMRI provide at the voxel level? |
| 03h 15m | 4. Diffusion Tensor Imaging (DTI) | What is diffusion tensor imaging? What metrics can be derived from DTI? |
| 03h 50m | 5. Constrained Spherical Deconvolution (CSD) | What is Constrained Spherical Deconvolution (CSD)? What does CSD offer compared to DTI? |
| 04h 25m | 6. Tractography | What information can dMRI provide at the long range level? |
| 06h 45m | 7. Local tractography | What input data does a local tractography method require? Which steps does a local tractography method follow? |
| 07h 15m | 8. Deterministic tractography | What computations does deterministic tractography require? How can we visualize the streamlines generated by a tractography method? |
| 07h 45m | 9. Probabilistic tractography | Why do we need tractography algorithms beyond the deterministic ones? How is probabilistic tractography different from deterministic tractography? |
| 08h 20m | Finish | |
The actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.
This lesson uses JupyterLab as a web-based interactive computational environment for learning. JupyterLab integrates a text editor, a terminal, and Jupyter Notebooks through a single user-friendly interface, which will be used to run all of the code for the lesson. The JupyterLab instance and all of the necessary software have already been pre-installed on Binder. Users may choose to use this platform to get up and running more quickly, or may choose to run the notebooks locally. If running locally, there are several pieces of software that must be installed. Although the main components required are Python-based, there are a few additional non-Python tools required. Instructors should allocate some time to have all components installed.
Internet
Binder requires users to have internet access to launch the Jupyter Notebooks. If users choose to run the notebooks locally, they will require internet access to download and install the software dependencies.
Binder
Binder enables the user to open the collection of notebooks in this lesson in a web-based executable and interactive environment. No additional software needs to be installed locally in this case; it suffices to click on the launch binder badge in the repository.
Local
If users choose to run the Jupyter notebooks locally, the following dependencies will need to be installed:
- ANTs: used to register different anatomical data.
- FSL: used for different data preprocessing steps.
- DIPY: used for diffusion MRI data processing.
- FURY: used for anatomical data visualisation purposes.
- Matplotlib: used for data visualisation purposes.
- Nilearn: used for anatomical data visualisation purposes.
- osfclient: used to download the necessary data.
- PyBIDS: used to check the BIDS compliance of the data structure.
Bash
The installation instructions require basic terminal application (bash, zsh, or others) skills.
Git
In order to run the notebooks locally, users will need to retrieve the code using Git or directly as a zipped file from the lesson’s GitHub repository.
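As an illustration, the code can be retrieved with a standard git clone command; the placeholders below stand for the lesson's GitHub repository URL and folder name, which are not reproduced here:

```bash
# Clone the lesson repository (replace the placeholders with the actual
# URL and folder name of the lesson's GitHub repository).
git clone <lesson-repository-url>
cd <lesson-repository-folder>
```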
Virtual environments
A virtual environment is a tool that helps to keep the dependencies required by different projects separate by creating an isolated, self-contained directory tree containing the Python packages required by a particular project for a particular version of Python. This allows one to have different Python versions, or different versions of a given package, on the same computer without running the risk of mixing incompatible versions of different packages, and hence eliminates the risk of affecting the development or execution of a project.
There are multiple package management tools available, whether or not a virtual environment is used. These setup instructions will use pip.
For setup purposes, it will be assumed that Python is already installed on the local machine. Note that only Python 3 versions are supported.
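As an illustration of the above, a virtual environment can be created and activated with Python's built-in venv module before installing any packages with pip; the environment name dmri-lesson below is an arbitrary choice, not something prescribed by the lesson:

```bash
# Create a virtual environment named "dmri-lesson" (arbitrary name)
# using Python's built-in venv module.
python3 -m venv dmri-lesson

# Activate it (Linux/macOS; on Windows use dmri-lesson\Scripts\activate).
source dmri-lesson/bin/activate

# Any subsequent "pip install" now installs into this isolated environment.
pip install --upgrade pip
```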
Linux
ANTs and FSL are command line tools. Given the disparity and extent of the steps involved in their installation, users should follow the specific installation instructions in the corresponding official documentation pages.
The rest of the dependencies of the lesson (DIPY, FURY, Matplotlib, Nilearn, osfclient, and PyBIDS) are Python packages. These also rely on a number of other Python packages, which are automatically installed when installing the former. The Python dependencies are installed by running a single pip command from the root of the repository folder.
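A minimal sketch of that command, assuming the repository ships a requirements.txt file listing the Python dependencies (check the repository root for the actual file name):

```bash
# Install the Python dependencies listed in the repository's requirements
# file (assumed here to be named requirements.txt) into the active environment.
pip install -r requirements.txt
```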
Additionally, a helpful module to view anatomical slices has been included in the utils folder. To use this module in Python, it needs to be made importable, for example as sketched below.
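One possible way to do this, assuming the notebooks are launched from the repository root (the lesson repository may provide its own installation command for this module), is to add the utils folder to the Python path:

```bash
# Make the helper module in the utils/ folder importable by adding the
# folder to the Python path for the current shell session (assumption:
# the notebook server is launched from the repository root).
export PYTHONPATH="$PWD/utils:$PYTHONPATH"
```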
Users may choose to run the notebooks using Jupyter or IPython. The Jupyter dependency can be installed by running:
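A minimal sketch, assuming JupyterLab is the desired Jupyter interface (the package name on PyPI is jupyterlab):

```bash
# Install JupyterLab (and its dependencies) with pip.
pip install jupyterlab
```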
IPython can be installed through pip by running:
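For example (the package name on PyPI is ipython):

```bash
# Install IPython with pip.
pip install ipython
```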
Some Python distributions, such as the one provided by Anaconda, might ship these packages by default.
Test the installation
Installation information for a package can be checked by running, for example:
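A minimal sketch, using DIPY as the example package (any of the packages listed above could be substituted):

```bash
# Show the installed version and metadata of the DIPY package.
pip show dipy
```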
Similarly, it can be checked that a given package can be imported in Python by running, for example:
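Again using DIPY as the example package:

```bash
# Exit with an error message if the dipy package cannot be imported.
python -c "import dipy"
```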
Alternatively, the package version can also be checked by running, for example:
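For instance, still using DIPY as the example package:

```bash
# Print the installed DIPY version.
python -c "import dipy; print(dipy.__version__)"
```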
You can also see the packages and versions of all pip-installed dependencies by typing:
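For example:

```bash
# List all packages (and their versions) installed in the current environment.
pip list
```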
In order to run the notebooks, the notebook server needs to be started. Once the current directory has been changed to the root of the code directory, the server is started by running the relevant command below, depending on whether IPython or JupyterLab is being used.
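A minimal sketch of the two commands (the first launches the classic notebook interface, the second launches JupyterLab):

```bash
# Classic Jupyter Notebook interface (if using IPython/Jupyter Notebook):
jupyter notebook

# JupyterLab interface:
jupyter lab
```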
In either case, the commands will print some information about the notebook server in the terminal, and a web browser will be opened to the URL of the web application (by default, http://127.0.0.1:8888). Users will be presented with the directory structure of the current directory, and they will be able to run the notebook of interest.
For additional information about Python setups, besides the package manuals, users are encouraged to read the Programming with Python Carpentries lesson.
The data used in the lesson is hosted on OSF. It can be downloaded by running:
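A minimal sketch of the download command using osfclient; the project identifier below is a placeholder, not the lesson's actual OSF project ID:

```bash
# Clone the entire OSF project holding the lesson data.
# <osf_project_id> is a placeholder for the lesson's actual OSF project ID.
osf -p <osf_project_id> clone
```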
Notebooks expect the data to be placed in the data folder that exists in the root of the repository.
Note that the above command clones the entire OSF project, which may be quite large and take a while to download. Alternatively, data from a single subject is available and can be downloaded by running:
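A minimal sketch using osfclient's fetch command; both the project identifier and the remote and local file paths below are placeholders, not the lesson's actual values:

```bash
# Fetch a single file (e.g. one subject's data) from the OSF project.
# <osf_project_id>, <remote_path_to_subject_data>, and <local_file_name>
# are placeholders for the lesson's actual values.
osf -p <osf_project_id> fetch <remote_path_to_subject_data> data/<local_file_name>
```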