Summary and Schedule
This lesson aims to teach responsible machine learning practices with a focus on fairness, explainability, reproducibility, and safety.
Prerequisites
- Participants should have experience using Python.
- Participants should have a basic understanding of machine learning (e.g., be familiar with concepts like train/test split and cross-validation) and should have trained at least one model in the past.
- Participants should care about the interpretability, reproducibility, and/or fairness of the models they build.
- Participants should have domain knowledge of the field they work in and want to build models for.
| Duration | Episode | Questions |
| --- | --- | --- |
|  | Setup Instructions | Download files required for the lesson |
| 00h 00m | 1. Overview | What do we mean by “Trustworthy AI”? How is this workshop structured, and what content does it cover? |
| 00h 31m | 2. Preparing to train a model | For what prediction tasks is machine learning an appropriate tool? How can inappropriate target variable choice lead to suboptimal outcomes in a machine learning pipeline? What forms of “bias” can occur in machine learning, and where do these biases come from? |
| 00h 31m | 3. Model evaluation and fairness | What metrics do we use to evaluate models? What are some common pitfalls in model evaluation? How do we define fairness and bias in machine learning outcomes? What types of bias and unfairness can occur in generative AI? What techniques exist to improve the fairness of ML models? |
| 00h 31m | 4. Model fairness: hands-on | How can we use AI Fairness 360, a common toolkit, for measuring and improving model fairness? |
| 00h 31m | 5. Interpretability versus explainability | What are model interpretability and model explainability? Why are they important? How do you choose between interpretable models and explainable models in different contexts? |
| 00h 33m | 6. Explainability methods overview | TODO |
| 00h 33m | 7. Explainability methods: deep dive | TODO |
| 00h 33m | 8. Explainability methods: linear probe | TODO |
| 00h 33m | 9. Explainability methods: GradCAM | TODO |
| 00h 33m | 10. Estimating model uncertainty | TODO |
| 00h 33m | 11. OOD detection: overview, output-based methods | What are out-of-distribution (OOD) data and why is detecting them important in machine learning models? How do output-based methods like softmax and energy-based methods work for OOD detection? What are the limitations of output-based OOD detection methods? |
| 00h 33m | 12. OOD detection: distance-based and contrastive learning | How do distance-based methods like Mahalanobis distance and KNN work for OOD detection? What is contrastive learning and how does it improve feature representations? How does contrastive learning enhance the effectiveness of distance-based OOD detection methods? |
| 00h 33m | 13. OOD detection: training-time regularization | What are the key considerations when designing algorithms for OOD detection? How can OOD detection be incorporated into the loss functions of models? What are the challenges and best practices for training models with OOD detection capabilities? |
| 00h 33m | 14. Documenting and releasing a model | Why is model sharing important in the context of reproducibility and responsible use? What are the challenges, risks, and ethical considerations related to sharing models? How can model-sharing best practices be applied using tools like model cards and the Hugging Face platform? |
| 00h 33m | Finish |  |
The actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.
1) Software setup
Installing Python using Anaconda
[Python][python] is a popular language for scientific computing and a frequent choice for machine learning. Installing all of its scientific packages individually can be difficult, so we recommend the installer [Anaconda][anaconda], which includes most (but not all) of the software you will need.
Regardless of how you choose to install it, please make sure you install Python version 3 (a recent release, e.g., 3.9 or later). Also, please set up your Python environment at least a day in advance of the workshop. If you encounter problems with the installation procedure, ask your workshop organizers via e-mail for assistance so you are ready to go as soon as the workshop begins.
If you are using Windows, check out the [video tutorial][video-windows] or follow the steps below:
- Open [https://www.anaconda.com/products/distribution][anaconda-distribution] with your web browser.
- Download the Python 3 installer for Windows.
- Double-click the executable and install Python 3 using most of the default settings. The only exception is to check the “Make Anaconda the default Python” option.
If you are using macOS, check out the [video tutorial][video-mac] or follow the steps below:
- Open [https://www.anaconda.com/products/distribution][anaconda-distribution] with your web browser.
- Download the Python 3 installer for macOS. Make sure to use the correct version for your hardware, i.e., choose the option labelled “(M1)” if your Mac is one of the more recent models containing Apple’s own chip.
- Install Python 3 using all of the defaults for installation.
If you are using Linux, note that the following installation steps require you to work from the shell. If you run into any difficulties, please request help before the workshop begins.
- Open [https://www.anaconda.com/products/distribution][anaconda-distribution] with your web browser.
- Download the Python 3 installer for Linux.
- Install Python 3 using all of the defaults for installation.
- Open a terminal window.
- Navigate to the folder where you downloaded the installer.
- Type `bash Anaconda3-` and then press Tab; the name of the installer file you downloaded should be completed for you (it will look something like `Anaconda3-<version>-Linux-x86_64.sh`).
- Press Enter.
- Follow the text-only prompts. When the license agreement appears (a colon will be present at the bottom of the screen), hold the down arrow until you reach the bottom of the text. Type `yes` and press Enter to approve the license. Press Enter again to approve the default location for the files. Type `yes` and press Enter to prepend Anaconda to your `PATH` (this makes the Anaconda distribution the default Python).
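Once the installer finishes, open a new terminal window (so the updated `PATH` takes effect) and confirm that the installation worked:

```sh
# A zero exit status and a printed version number mean conda is on your PATH
conda --version
```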
Installing the required packages
Conda is the package management system that comes with Anaconda and runs on Windows, macOS, and Linux. Once you have installed Anaconda successfully, Conda should already be available on your system, regardless of operating system.
Make sure you have an up-to-date version of Conda running. See these instructions for updating Conda if required.
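If you do need to update, the standard command below updates Conda in the base environment (shown here for convenience; the linked instructions are the authoritative reference):

```sh
# Update the conda tool itself in the base environment
conda update -n base conda
```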
- Create the Conda Environment: To create a conda environment called `trustworthy_ML` with the required packages, open a terminal (Mac/Linux) or Anaconda prompt (Windows) and type the command below. This command creates a new conda environment named `trustworthy_ML` and installs the necessary packages from the `conda-forge` and `pytorch` channels. When prompted to Proceed ([y]/n) during environment setup, press y. It may take around 10-20 minutes to complete the full environment setup. Please reach out to the workshop organizers sooner rather than later to fix setup issues prior to the workshop.
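  The exact `conda create` command is not preserved in this copy of the instructions. The sketch below is a plausible stand-in that installs, from the `conda-forge` and `pytorch` channels, the packages this lesson checks for or uses later (Python 3.9, scikit-learn, pandas, PyTorch, and JupyterLab); your workshop organizers may supply a different, definitive command.

  ```sh
  # Sketch only: the package list is inferred from the version checks and pip installs below
  conda create --name trustworthy_ML -c conda-forge -c pytorch \
      python=3.9 scikit-learn pandas jupyterlab pytorch
  ```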
- Activate the Conda Environment: After creating the environment, activate it using the following command.
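  ```sh
  # Activate the environment created in the previous step
  conda activate trustworthy_ML
  ```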
- Install `pytorch-ood`, `fairlearn`, `aif360[Reductions]`, and `aif360[inFairness]` using pip. Make sure to do this AFTER activating the environment.

  ```sh
  pip install torchaudio
  pip install pytorch-ood
  pip install fairlearn
  pip install 'aif360[Reductions]'
  pip install 'aif360[inFairness]'
  ```

  (The quotes around the `aif360[...]` arguments prevent some shells, such as zsh, from interpreting the square brackets.) Depending on your AIF360 installation, the final two `pip install` commands may or may not work. If they do not work, then installing these sub-packages is not necessary – you can continue on.
- Deactivate the environment (complete at the end of each day). Deactivating environments is part of good workflow hygiene. If you keep this environment active and then start working on another project, you may inadvertently use the wrong environment, which can lead to package conflicts or incorrect dependencies being used. To deactivate your environment, use:
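  ```sh
  # Return to the base environment
  conda deactivate
  ```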
Starting Jupyter Lab
We will teach using Python in JupyterLab, a programming environment that runs in a web browser. Jupyter requires a reasonably up-to-date browser, preferably a current version of Chrome, Safari, or Firefox (note that Internet Explorer version 9 and below are not supported). If you installed Python using Anaconda, Jupyter should already be on your system. If you did not use Anaconda, use the Python package manager pip to acquire Jupyter (see the Jupyter website for details).
To start JupyterLab, open a terminal (Mac/Linux) or Anaconda prompt (Windows), make sure the `trustworthy_ML` environment is active, and type the command:
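```sh
# Launches JupyterLab in your default web browser
jupyter lab
```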
Check your software setup
To check whether all packages installed correctly, start a Jupyter notebook in JupyterLab as explained above and run the following lines of code:
```python
import sklearn
print('sklearn version: ', sklearn.__version__)

import pandas
print('pandas version: ', pandas.__version__)

import torch
print('torch version: ', torch.__version__)
```
This should output the versions of all required packages without giving errors. Most versions will work fine with this lesson, but:

- For pytorch, the minimum version is 2.0
- For sklearn, the minimum version is 1.2.2
Fallback option: cloud environment
If a local installation does not work for you, it is also possible to run this lesson in Google Colab. Some packages may need to be installed on the fly within the notebook (TBD).
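If you do work in Colab, the cell below is a rough sketch of the on-the-fly installs that are likely needed: common packages such as scikit-learn, pandas, and PyTorch are typically preinstalled there, while the lesson-specific packages are not. Treat this as a starting point, not a definitive list.

```python
# Run in a Colab notebook cell; the exact package list may change (TBD in the lesson)
!pip install pytorch-ood fairlearn 'aif360[Reductions]' 'aif360[inFairness]'
```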
2) Download and move the data needed
For the fairness evaluation episode, you will need access to the Medical Expenditure Panel Survey Dataset. Please complete these steps to ensure you have access:
- Download the AI Fairness 360 example data: Medical Expenditure Panel Survey data (zip file)
- Unzip h181.zip (right-click and extract all on Windows; double-click the zip file on Mac)
- In the unzipped folder, find the h181.csv file. If you installed conda with Anaconda, i.e., as described earlier in this document, move this file to the following location:
  - Windows: `C:\Users\[Username]\anaconda3\envs\trustworthy_ML\Lib\site-packages\aif360\data\raw\meps\h181.csv`
  - Mac: `/Users/[Username]/opt/anaconda3/envs/trustworthy_ML/lib/python3.9/site-packages/aif360/data/raw/meps/h181.csv`
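  For example, on a Mac you could move the file from the terminal with a command along these lines (a sketch only: it assumes the unzipped `h181.csv` sits in your Downloads folder and that Anaconda is installed in the default location shown above; adjust both paths for your system):

  ```sh
  # Adjust [Username] and both paths to match your machine
  mv ~/Downloads/h181.csv /Users/[Username]/opt/anaconda3/envs/trustworthy_ML/lib/python3.9/site-packages/aif360/data/raw/meps/h181.csv
  ```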
  If you installed conda in a different way, or don’t remember how you installed it, check the location of your `trustworthy_ML` environment (make sure this environment is active first!):

  - Windows: `where python3.9`
  - Mac: `which python3.9`

  Follow the instructions above, but replace everything before `/trustworthy_ML` with the printed path to `/trustworthy_ML`.
3) Create a Hugging Face account and access Token
You will need a Hugging Face account for the workshop episode on model sharing. Hugging Face is a very popular machine learning (ML) platform and community that helps users build, deploy, share, and train machine learning models.
Create account: To create an account on Hugging Face, visit: huggingface.co/join. Enter an email address and password, and follow the instructions provided via Hugging Face (you may need to verify your email address) to complete the process.
Set up access token: Once you have your account created, you’ll need to generate an access token so that you can upload/share models to your Hugging Face account during the workshop. To generate a token, visit the Access Tokens settings page after logging in. Once there, click “New token” to generate an access token. We’ll use this token later to log in to Hugging Face via Python.
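As a preview of how the token gets used, logging in from Python typically looks like the sketch below. It assumes the `huggingface_hub` package is available in your environment (it ships as a dependency of most Hugging Face tooling), and the token string is a placeholder for your own token.

```python
# Sketch: authenticate to Hugging Face from Python with your access token
from huggingface_hub import login

login(token="hf_xxxxxxxxxxxxxxxxxxxx")  # placeholder: paste your own token here
```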