Instructor Notes


Overview





Preparing to train a model


Discussion points

  • The algorithm could help doctors figure out what rare disease a patient has.
  • First, in the wrong hands the algorithm could be used to discriminate against people with diseases. Second, if the algorithm is not accurate (producing false positives or false negatives), trusting its results could lead to improper medical care.
  • Safeguards could include: requiring extensive testing to ensure that the algorithm maintains similar accuracy across racial and gender groups, making sure the algorithm is only accessible to medical professionals, and requiring follow-up testing to confirm the algorithm’s diagnosis.
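
The testing safeguard in the last bullet can be made concrete as a disaggregated evaluation: compute the same metrics separately for each demographic group and compare. Below is a minimal sketch, assuming predictions have been collected in a pandas DataFrame with hypothetical y_true, y_pred, and group columns; these names are placeholders, not part of the lesson materials.

```python
# Report accuracy and sensitivity (recall) separately for each group.
# A large gap between groups is a red flag before any deployment.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

def metrics_by_group(results):
    rows = []
    for group, subset in results.groupby("group"):
        rows.append({
            "group": group,
            "n": len(subset),
            "accuracy": accuracy_score(subset["y_true"], subset["y_pred"]),
            "sensitivity": recall_score(subset["y_true"], subset["y_pred"]),
        })
    return pd.DataFrame(rows)

# Made-up example: two groups, eight patients.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print(metrics_by_group(results))
```

The same per-group comparison applies to the facial recognition case study discussed further down in these notes.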


Discussion points

  • The algorithm proposes detecting an individual’s sexual orientation from images of their face. It is unclear why this is something that needs to be algorithmically detected by any entity.
  • If the algorithm is used by anti-LGBTQ entities, it could be used to harass members of the LGBTQ community (as well as non-LGBTQ people who are flagged as LGBTQ by the algorithm). If the algorithm is accurate, it could be used to “out” individuals who are gay but do not want to publicly share that information.
  • The first case study aims to detect disease. The implication of this research – at least, as suggested by the linked article – is that it can help doctors with diagnosis and give individuals affected by the disease access to treatment. Conversely, if there is a medical reason for knowing someone’s sexual orientation, it is not necessary to use AI – the doctor can just ask the patient.


Discussion points

  1. The goal (the ideal target variable) is healthcare need.
  2. Patients’ healthcare needs are unknown unless they have seen a doctor. Patients who have less access to medical care may have under-documented needs. There might also be differences between doctors or hospital systems in how health conditions are documented.
  3. In the US, there are standard ways that healthcare billing information needs to be documented (this information may be more standardized than medical conditions). There may be more complete data from acute medical emergencies (i.e., emergency room visits) than there is for chronic conditions.
  4. In the US, healthcare access is often tied to employment, which means that wealthier people (who also tend to be white) have more access to healthcare.


Discussion points

  • The algorithm developers could have specifically tested for racial bias in their solution.
  • Discussion about the choice of target variable, including implementing models with different targets and comparing the results, could have exposed the bias (a minimal sketch of such an audit follows this list).
  • A more diverse development team – e.g., one including individuals who have firsthand experience struggling to access healthcare – might have spotted the issue.
  • Note that it’s easier to see the mistake in hindsight than in the moment.
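
One way to act on the second bullet is to fit the same model on different candidate target variables and compare who each version flags as high-risk. Below is a minimal sketch, assuming a pandas DataFrame with hypothetical cost, n_chronic_conditions, and race columns plus a list of feature columns; none of these names come from the case study's actual data.

```python
# Train on a given target, flag the top few percent of predicted scores as
# "high-risk", and report the share of each racial group that gets flagged.
# All column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

def high_risk_share_by_race(df, features, target, top_frac=0.03):
    model = LinearRegression().fit(df[features], df[target])
    scores = model.predict(df[features])
    threshold = pd.Series(scores).quantile(1 - top_frac)
    return df.assign(high_risk=scores >= threshold).groupby("race")["high_risk"].mean()

# Comparing the two targets side by side can expose the proxy problem, e.g.:
#   high_risk_share_by_race(df, features, target="cost")
#   high_risk_share_by_race(df, features, target="n_chronic_conditions")
```

If the cost-trained model flags a noticeably smaller share of Black patients than the needs-proxy model, that is exactly the kind of discrepancy this discussion point is after.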


Discussion points

  • Social media companies optimize for engagement to maximize profits: the longer users stay on the service, the more ads the companies can sell and the more revenue they bring in.
  • It’s hard to come up with an alternative optimization target that could be easily operationalized. Alternative goals could be social connection, learning, broadening one’s worldview, or even entertainment. But how would any of these goals be measured?


Discussion points

  1. Policing data does not provide a complete picture of crime: it only contains data about crimes that are reported. Some neighborhoods (in the US, usually poor neighborhoods with predominantly Black and Brown residents) are over-policed relative to other neighborhoods. As a result, the data will suggest that the over-policed neighborhoods have more crime, more officers will then be sent to patrol those areas, and the extra patrols will record still more crime there, creating a feedback loop (a toy simulation of this loop follows this list). Using techniques to clean and balance the data could help, but the article’s authors also point towards non-technical solutions, such as algorithmic accountability and oversight frameworks.
  2. Commercially available facial recognition systems have much higher accuracy on white men than on darker-skinned women, a discrepancy attributed to imbalances in the training data. This problem could have been caught earlier if development teams were more diverse: e.g., if someone had thought to evaluate the model on darker-skinned people during the development process. Collecting more data from underrepresented groups could then improve accuracy on those individuals.
  3. Amazon tried to automate the resume-screening part of its hiring process, relying on data (e.g., resumes) from existing employees. However, the AI learned to discriminate against women because Amazon’s existing technical staff skewed heavily male. This could have been avoided in a couple of ways: first, if Amazon did not have an existing gender skew, the training data would not have encoded it. Second, given the gender skew in Amazon’s workforce, model developers could have built in safeguards, e.g., mechanisms to satisfy some notion of fairness, such as deciding to interview an equal proportion of male and female job applicants.
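
The feedback loop in point 1 can be illustrated with a toy simulation. The neighborhoods, rates, and allocation rule below are invented for illustration (they are not taken from the linked article); the point is only that when patrols follow recorded crime, and crime is only recorded where patrols go, a tiny initial difference snowballs even though the true crime rates are identical.

```python
# Toy model of a predictive-policing feedback loop. All names and numbers are
# hypothetical; both neighborhoods have the SAME true crime rate.
import random

random.seed(0)
true_rate = {"A": 0.3, "B": 0.3}   # identical underlying crime rates
recorded = {"A": 1, "B": 0}        # one extra historical report in A

for day in range(1000):
    # The patrol goes to the neighborhood with the most recorded crime,
    # so only that neighborhood's incidents can be observed today.
    patrolled = max(recorded, key=recorded.get)
    if random.random() < true_rate[patrolled]:
        recorded[patrolled] += 1

print(recorded)  # A accumulates hundreds of reports while B stays at zero
```

Likewise, the “interview an equal proportion of each group” safeguard in point 3 corresponds to a per-group selection rule. This is a minimal sketch with hypothetical applicant records (dicts with score and group keys); real fairness interventions involve much more than a single selection rule.

```python
# Invite the top `interview_rate` fraction of applicants WITHIN each group,
# instead of ranking all applicants together. Field names are hypothetical.
def interview_equal_rates(applicants, interview_rate=0.2):
    invited = []
    for group in {a["group"] for a in applicants}:
        members = sorted(
            (a for a in applicants if a["group"] == group),
            key=lambda a: a["score"],
            reverse=True,
        )
        k = max(1, round(interview_rate * len(members)))
        invited.extend(members[:k])
    return invited
```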


Model evaluation and fairness


Model fairness: hands-on


Interpretability versus explainability


Explainability methods overview


Explainability methods: deep dive


Explainability methods: linear probe


Explainability methods: GradCAM


Estimating model uncertainty


OOD detection: overview


OOD detection: softmax


OOD detection: energy


OOD detection: distance-based


OOD detection: training-time regularization


Documenting and releasing a model