Introduction


  • Machine learning is the process by which computers learn to recognise patterns in data.
  • Artificial neural networks are a machine learning technique based on a model inspired by groups of neurons in the brain.
  • Artificial neural networks can be trained on example data.
  • Deep learning is a machine learning technique based on using many artificial neurons arranged in layers.
  • Neural networks learn by minimizing a loss function.
  • Deep learning is well suited to classification and prediction problems such as image recognition.
  • To use deep learning effectively we need to follow a workflow: define the problem, identify inputs and outputs, prepare the data, choose the type of network, choose a loss function, train the model, refine the model, and measure its performance before we can use it to classify new data.
  • Keras is a deep learning library that is easier to use than many of the alternatives such as TensorFlow and PyTorch.

Classification by a neural network using Keras


  • The deep learning workflow is a useful tool to structure your approach; it helps to make sure you do not forget any important steps.
  • Exploring the data is an important step to familiarize yourself with the problem and to help you determine the relevant inputs and outputs.
  • One-hot encoding is a preprocessing step to prepare labels for classification in Keras.
  • A fully connected layer is a layer which has connections to all neurons in the previous and subsequent layers.
  • keras.layers.Dense is an implementation of a fully connected layer; you can set the number of neurons in the layer and the activation function used.
  • To train a neural network with Keras we need to first define the network using layers and the Model class. Then we can train it using the model.fit function (see the first sketch after this list).
  • Plotting the loss curve helps you monitor and troubleshoot the training process.
  • The loss curve on the training set does not provide any information on how well a network performs in a real setting.
  • Creating a confusion matrix with results from a test set gives better insight into the network’s performance (see the second sketch after this list).
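
A minimal sketch of these steps, assuming the scikit-learn Iris data purely for illustration (the lesson's own dataset may differ), shows one-hot encoding the labels, defining a network from Dense layers with the Model class, and training it with model.fit:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from tensorflow import keras

    # Load a small example dataset and split off a test set
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # One-hot encode the integer class labels (0, 1, 2) for the softmax output
    y_train_onehot = keras.utils.to_categorical(y_train, num_classes=3)

    # Define the network: two hidden fully connected (Dense) layers and a softmax output
    inputs = keras.Input(shape=(4,))
    hidden = keras.layers.Dense(10, activation="relu")(inputs)
    hidden = keras.layers.Dense(10, activation="relu")(hidden)
    outputs = keras.layers.Dense(3, activation="softmax")(hidden)
    model = keras.Model(inputs=inputs, outputs=outputs)

    # Choose a loss function and an optimizer, then train with model.fit
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(X_train, y_train_onehot, epochs=50, verbose=0)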
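
Continuing the sketch above, the loss curve recorded by model.fit can be plotted, and predictions on the held-out test set can be summarised in a confusion matrix (scikit-learn's confusion_matrix is used here for brevity):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.metrics import confusion_matrix

    # Plot the training loss curve stored in the History object
    plt.plot(history.history["loss"])
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.show()

    # Predict on the test set and compare predictions with the true labels
    y_pred = np.argmax(model.predict(X_test), axis=1)
    print(confusion_matrix(y_test, y_pred))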

Monitor the training process


  • Using separate training, validation, and test sets allows you to monitor and evaluate your model.
  • Batch normalization scales the data as part of the model (see the sketch after this list).
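
A minimal sketch of both points, reusing X_train and y_train_onehot from the earlier sketch (the layer sizes are illustrative assumptions), adds a BatchNormalization layer to the model and holds out part of the training data as a validation set:

    from tensorflow import keras

    # Batch normalization scales the data as part of the model itself
    inputs = keras.Input(shape=(4,))
    x = keras.layers.BatchNormalization()(inputs)
    x = keras.layers.Dense(10, activation="relu")(x)
    outputs = keras.layers.Dense(3, activation="softmax")(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Hold out part of the training data as a validation set to monitor training;
    # the test set stays untouched until the final evaluation
    history = model.fit(X_train, y_train_onehot, epochs=50,
                        validation_split=0.2, verbose=0)

    # Diverging training and validation loss is a sign of overfitting
    print(history.history["loss"][-1], history.history["val_loss"][-1])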

Advanced layer types


  • Convolutional layers make efficient reuse of model parameters.
  • Pooling layers decrease the resolution of your input.
  • Dropout is a way to prevent overfitting (all three layer types appear in the sketch after this list).
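
A minimal sketch of these layer types in a small image-classification network (the input shape, filter counts, and number of classes are illustrative assumptions):

    from tensorflow import keras

    # A small convolutional network; the 32x32 RGB input shape is purely illustrative
    inputs = keras.Input(shape=(32, 32, 3))

    # Convolutional layers reuse the same small set of weights across the whole image
    x = keras.layers.Conv2D(16, kernel_size=(3, 3), activation="relu")(inputs)
    # Pooling layers reduce the spatial resolution of their input
    x = keras.layers.MaxPooling2D(pool_size=(2, 2))(x)
    x = keras.layers.Conv2D(32, kernel_size=(3, 3), activation="relu")(x)
    x = keras.layers.MaxPooling2D(pool_size=(2, 2))(x)

    x = keras.layers.Flatten()(x)
    # Dropout randomly disables a fraction of the neurons during training,
    # which helps to prevent overfitting
    x = keras.layers.Dropout(0.5)(x)
    outputs = keras.layers.Dense(10, activation="softmax")(x)

    model = keras.Model(inputs=inputs, outputs=outputs)
    model.summary()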

Transfer learning


  • Large pre-trained models capture generic knowledge about a domain.
  • Use the keras.applications module to easily use pre-trained models for your own datasets (see the sketch after this list).
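
A minimal sketch of reusing a pre-trained model as a frozen feature extractor; DenseNet121 is just one example from keras.applications, and the input shape and number of classes are illustrative assumptions:

    from tensorflow import keras

    # Load a pre-trained model without its classification head
    base_model = keras.applications.DenseNet121(
        weights="imagenet", include_top=False,
        input_shape=(160, 160, 3), pooling="avg")

    # Freeze the pre-trained weights so only the new head is trained on your data
    base_model.trainable = False

    # Add a new classification head for your own dataset
    inputs = keras.Input(shape=(160, 160, 3))
    x = base_model(inputs, training=False)
    outputs = keras.layers.Dense(10, activation="softmax")(x)
    model = keras.Model(inputs=inputs, outputs=outputs)

    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])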

Outlook


  • Although real-world data preparation and model architectures are somewhat more complex, what we have learned in this course can be applied directly to real-world problems.
  • Use what you have learned in this course as a basis for your own learning trajectory in the world of deep learning.