Content from Introduction


Last updated on 2025-10-27

Estimated time: 15 minutes

Overview

Questions

  • What is automation in the context of software development, and why is it beneficial?
  • How does Continuous Integration (CI) enhance the software development process?
  • What tasks can be automated using CI?
  • Why is integrating small code changes regularly preferable to integrating large changes infrequently?
  • How can CI be extended to Continuous Delivery (CD) for automating deployment processes?

Objectives

  • Understand the concept of automation and its role in improving efficiency and consistency in software development.
  • Learn the principles and benefits of Continuous Integration.
  • Identify common tasks that can be automated within a CI pipeline, such as code compilation, testing, linting, and documentation generation.
  • Recognise the importance of integrating code changes frequently to minimise conflicts and maintain a stable codebase.
  • Explore how Continuous Integration can be extended to Continuous Delivery to automate the deployment of packages and applications.

Doing tasks manually can be time-consuming, error-prone, and hard to reproduce, especially as the software project’s complexity grows. Using automation allows computers to handle repetitive, structured tasks reliably, quickly, and consistently, freeing up your time for more valuable and creative work.

Task automation is the process of using scripts or tools to perform tasks without manual intervention. In software development, automation helps streamline repetitive or complex tasks, such as running tests, building software, or processing data.

By automating these actions, you save time, reduce the chance of human error, and ensure that processes are reproducible and consistent. Automation also provides a clear, documented way to understand how things are run, making it easier for others to replicate or build upon your work.
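
As a very simple illustration, even a short shell script can capture a repeatable process so that it runs the same way every time. The script below is purely hypothetical (the file names and commands are placeholders), but it shows the idea: the steps live in a script rather than in someone's memory.

BASH

#!/usr/bin/env bash
# check.sh - a hypothetical script capturing our routine checks
set -e                              # stop at the first failing step

pip install -r requirements.txt     # install the project's dependencies
python -m pytest tests/             # run the test suite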

Continuous Integration


Building on the concept of automation, Continuous Integration (CI) is the practice of regularly integrating code changes into a shared code repository and automatically running tasks and key checks each time this happens (e.g. when changes are merged from a development or feature branch into main, or even after each commit). This helps maintain code quality and ensures new contributions do not break existing functionality.

A variety of CI services and tools, like GitHub Actions, GitLab CI, or Jenkins, make it easy to set up automated workflows triggered by code changes.

CI can also be extended into Continuous Delivery (CD), which automates the release or deployment of code to production or staging environments.

Principles of Continuous Integration

Software development typically progresses in incremental steps and requires a significant time investment. It is not realistic to expect a complete, feature-rich application to emerge from a blank page in a single step. The process often involves collaboration among multiple developers, especially in larger projects where various components and features are developed concurrently.

Continuous Integration (CI) is based on the principle that software development is an incremental process involving ongoing contributions from one or more developers. Integrating large changes is often more complex and error-prone than incorporating smaller, incremental updates. So, rather than waiting to integrate large, complex changes all at once, CI encourages integrating small updates frequently to check for conflicts and inconsistencies and ensure all parts of the codebase work well together at all times. This becomes even more critical for larger projects, where multiple features may be developed in parallel - CI helps manage the complexity of merging such contributions by making integrations a regular, manageable part of the workflow.

Common Tasks

When code is integrated, a range of tasks can be carried out automatically to ensure quality and consistency, including:

  • compiling the code
  • running a test suite across multiple platforms to catch issues early and checking test coverage to see what tests are missing
  • verifying that the code adheres to project, team, or language style guidelines with linters
  • building documentation pages from docstrings (structured documentation embedded in the code) or other source pages
  • other custom tasks, depending on project needs.

These steps are typically executed as part of a structured sequence known as the “CI pipeline”.
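
To make this more concrete, the sketch below shows roughly what such a pipeline might look like when written down for a CI service (GitHub Actions-style YAML is used here; the actual syntax is introduced later in this lesson, and the linting step uses flake8 purely as an example tool):

YAML

# A sketch of a CI pipeline - the real syntax is covered later in this lesson
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4            # obtain the code
      - run: pip install -r requirements.txt # install dependencies
      - run: python -m pytest tests/         # run the test suite
      - run: flake8 .                        # check code style with a linter (example tool)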

Why use Continuous Integration?

From what we have covered so far, it is clear that CI offers several advantages that can significantly improve the software development process.

It saves time and effort for you and your team by automating routine checks and tasks, allowing you to focus on development rather than manual verification.

CI also promotes good development practices by enforcing standards. For instance, many projects are configured to reject changes unless all CI checks pass.

Modern CI services make it easy to run tasks and checks across multiple platforms, operating systems, and software versions, providing capabilities far beyond what could typically be achieved with local infrastructure and manual testing.

While there can be a learning curve when first setting up CI, a wide variety of tools are available, and the core principles are transferable between them, making these valuable and broadly applicable skills.

Services & Tools

There is a wide range of CI-focused workflow services and tools available to support the various aspects of a CI pipeline. Many of these services have web-based interfaces and run on cloud infrastructure, providing easy access to scalable, platform-independent pipelines. However, local and self-hosted options are also available for projects that require more control or need to operate in secure environments. Most CI tools are language- and tool-agnostic; if you can run a task locally, you can likely incorporate it into a CI pipeline.

Popular cloud-based services include GitHub Actions, Travis CI, CircleCI, and TeamCity, while self-hosted or hybrid solutions such as GitLab CI, Jenkins, and Buildbot are also available.

Beyond Continuous Integration - Continuous Deployment/Delivery

You may frequently come across the term CI/CD, which refers to the combination of Continuous Integration (CI) and Continuous Deployment or Delivery (CD).

While CI focuses on integrating and testing code changes, CD extends the process by automating the delivery and deployment of software. This can include building installation packages for various environments and automatically deploying updates to test or production systems. For example, a web application could be redeployed every time a new change passes the CI pipeline (an example is this website - it is rebuilt each time a change is made to one of its source pages).

CD helps streamline the release process for packages or applications, for example by doing nightly builds and deploying them to a public server for download, making it easier and faster to get working updates into the hands of users with minimal manual intervention.
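
As an illustration, GitHub Actions workflows (covered later in this lesson) can also be triggered on a schedule rather than on a code change, which is one way a nightly build could be set up. A sketch, where the cron expression is just an example:

YAML

on:
  schedule:
    - cron: "0 2 * * *"   # run every night at 02:00 UTC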

Practical Work


In the rest of this session, we will walk you through setting up a basic CI pipeline using GitHub Actions to help you integrate, test, and potentially deploy your code with confidence.

Key Points
  • Automation saves time and improves reproducibility by capturing repeatable processes like testing, linting, and building code into scripts or pipelines.
  • Continuous Integration (CI) is the practice of automatically running tasks and checks each time code is updated, helping catch issues early and improving collaboration.
  • Integrating smaller, frequent code updates is more manageable and less error-prone than merging large changes all at once.
  • CI pipelines can run on many platforms and environments using cloud-based services (e.g. GitHub Actions, Travis CI) or self-hosted solutions (e.g. Jenkins, GitLab CI).
  • CI can be extended to Continuous Delivery/Deployment (CD) to automatically package and deliver software updates to users or deploy changes to live systems.

Content from Example Code


Last updated on 2025-10-28

Estimated time: 10 minutes

Overview

Questions

  • How do we create our own copy of the example code repository?
  • How do we run the example code and its unit tests locally?

Objectives

  • Create a personal copy of the example code repository from a GitHub template and clone it locally.
  • Run the example factorial code and its pytest unit tests within a virtual environment.

Creating a Copy of the Example Code Repository


For this lesson we’ll need to create a new GitHub repository based on the contents of another repository.

  1. Once logged into GitHub in a web browser, go to https://github.com/UNIVERSE-HPC/ci-example.
  2. Select ‘Use this template’, and then select ‘Create a new repository’ from the dropdown menu.
  3. On the next screen, ensure your personal GitHub account is selected in the Owner field, and fill in Repository name with ci-example.
  4. Ensure the repository is set to Public.
  5. Select Create repository.

You should be presented with the new repository's main page. Next, we need to clone this repository onto our own machines using the Bash shell. So firstly, open a Bash shell (via Git Bash on Windows or Terminal on a Mac). Then, on the command line, navigate to where you'd like the example code to reside, and use Git to clone it. For example, to clone the repository into our home directory (replacing github-account-name with our own account name) and change into the repository directory:

BASH

cd
git clone https://github.com/github-account-name/ci-example
cd ci-example

Examining the Code


Next, let's take a look at the code: the file is called factorial.py and lives in the repository's mymath directory, so open mymath/factorial.py in an editor. You may recall we used this example in the last session on unit testing.

As a reminder, the example code is a basic Python implementation of the factorial function. Essentially, it multiplies all the whole numbers from a given number down to 1, e.g. given 3, that's 3 x 2 x 1 = 6, so the factorial of 3 is 6.
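
The implementation in the repository may differ in its details, but a minimal version of such a factorial function might look like this:

PYTHON

def factorial(n):
    """Return the factorial of a non-negative integer n."""
    if n < 0:
        raise ValueError("factorial is not defined for negative numbers")
    result = 1
    for i in range(2, n + 1):   # multiply 2 x 3 x ... x n
        result = result * i
    return result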

We can also run this code from within Python to show it working. In the shell, ensure you are in the root directory of the repository, then type:

BASH

python

PYTHON

Python 3.10.12 (main, Feb  4 2025, 14:57:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 

Then at the prompt, import the factorial function from the mymath library and run it:

PYTHON

>>> from mymath.factorial import factorial
>>> factorial(3)

This gives us 6, which is some evidence that the function is working. But it isn't really enough evidence to give us confidence in its overall correctness.

Running the Tests


For this reason, the code repository already has a series of unit tests that allow us to automate this checking of results, written using a Python unit testing framework called pytest. Note that this is a different unit testing framework from the one we looked at in the last session!

Navigate to the repository’s tests directory, and open a file called test_factorial.py:

PYTHON

import pytest
from mymath.factorial import factorial


def test_3():
    assert factorial(3) == 6

def test_5():
    assert factorial(5) == 120

def test_negative():
    with pytest.raises(ValueError):
        factorial(-1)

The key difference when writing tests for pytest, as opposed to unittest, is that we don't need to wrap the tests in a class: we only need to write a function for each test, which is a bit simpler. Otherwise, tests work very similarly in both frameworks.
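
For comparison, the first of these tests written for unittest, as in the previous session, would need to be wrapped in a test case class, along these lines (a sketch, not taken from the previous session's materials):

PYTHON

import unittest
from mymath.factorial import factorial


class TestFactorial(unittest.TestCase):
    def test_3(self):
        # the unittest equivalent of pytest's plain "assert factorial(3) == 6"
        self.assertEqual(factorial(3), 6)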

So essentially, this series of tests will check whether calling our factorial function gives us the correct result, given a variety of inputs:

  • factorial(3) should give us 6
  • factorial(5) should give us 120
  • factorial(-1) should raise a Python ValueError which we need to check for
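
As an aside, once pytest is installed (see below), it also lets us run just one of these tests by referring to it by name, which can be handy when investigating a single failure:

BASH

python -m pytest tests/test_factorial.py::test_negative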

Setting up a Virtual Environment for pytest


So how do we run these tests? First, we need to create a virtual environment, since the unit testing framework we're using is supplied by a separate Python package that we need to install and have access to.

You may remember we used virtual environments previously. So in summary, we need to:

  • Create a new virtual environment to hold packages
  • Activate that new virtual environment
  • Install pytest into our new virtual environment

So:

BASH

python -m venv venv

Then to activate it:

BASH

[Linux] source venv/bin/activate
[Mac] source venv/bin/activate
[Windows] source venv/Scripts/activate

To install pytest:

BASH

pip install pytest

Then, in the shell, we can run these tests by ensuring we’re in the repository’s root directory, and running the following (very similar to how we ran our previous unittest tests):

BASH

python -m pytest tests/test_factorial.py 

You’ll note the output is slightly different:

OUTPUT

============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0
rootdir: /home/steve/test/ci-example2
collected 3 items

tests/test_factorial.py ...                                              [100%]

============================== 3 passed in 0.00s ===============================

But essentially, we receive the same information: a . if a test is successful, and an F if there is a failure.

We can also ask for verbose output, which shows us the results for each test separately, in the same way as we did with unittest, using the -v flag:

BASH

python -m pytest -v tests/test_factorial.py 

OUTPUT

============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-8.3.5, pluggy-1.5.0 -- /home/steve/test/ci-example2/venv/bin/python
cachedir: .pytest_cache
rootdir: /home/steve/test/ci-example2
collected 3 items

tests/test_factorial.py::test_3 PASSED                                   [ 33%]
tests/test_factorial.py::test_5 PASSED                                   [ 66%]
tests/test_factorial.py::test_negative PASSED                            [100%]

============================== 3 passed in 0.00s ===============================


Key Points
  • The example repository contains a factorial function in mymath/factorial.py and a set of pytest unit tests in tests/test_factorial.py.
  • pytest tests are plain functions containing assert statements, so they do not need to be wrapped in a test case class.
  • Install pytest into a virtual environment and run the tests with python -m pytest, adding -v for verbose, per-test output.

Content from Defining a Workflow


Last updated on 2025-10-27

Estimated time: 10 minutes

Overview

Questions

  • How do we describe a GitHub Actions workflow?
  • What is YAML, and how is it used to define a workflow?

Objectives

  • Write simple YAML key-value pairs, arrays, and maps.
  • Create a GitHub Actions workflow that checks out the repository, sets up Python, installs dependencies, and runs the unit tests on every push.

How to Describe a Workflow?


Before we move on to defining our workflow in GitHub Actions, we'll take a very brief look at YAML, the language GitHub Actions uses to describe workflows.

Originally, the acronym stood for Yet Another Markup Language but, since it's not actually used for document markup, its meaning was changed to the recursive YAML Ain't Markup Language.

Essentially, YAML is based around key-value pairs, for example:

YAML

name: Kilimanjaro
height_metres: 5892
first_scaled_by: Hans Meyer

We can also define more complex data structures. Using YAML arrays, for example, we could define more than one entry for first_scaled_by by replacing it with:

YAML

first_scaled_by:
  - Hans Meyer
  - Ludwig Purtscheller

Note that, similarly to languages like Python, YAML uses spaces for indentation (2 spaces are recommended). Also, YAML arrays are sequences, so the order of entries is preserved.

There’s also a short form for arrays:

YAML

first_scaled_by: [Hans Meyer, Ludwig Purtscheller]

We can also define nested, hierarchical structures using YAML maps. For example:

YAML

name: Kilimanjaro
height:
  value: 5892
  unit: metres
  measured:
    year: 2008
    by: Kilimanjaro 2008 Precise Height Measurement Expedition

We are also able to combine maps and arrays, for example:

YAML

first_scaled_by:
  - name: Hans Meyer
    date_of_birth: 22-03-1858
    nationality: German
  - name: Ludwig Purtscheller
    date_of_birth: 06-10-1859
    nationality: Austrian

So that’s a very brief tour of YAML, which demonstrates what we need to know to write GitHub Actions workflows.

Enabling Workflows for our Repository


So let’s now create a new GitHub Actions CI workflow for our new repository that runs our unit tests whenever a change is made.

Firstly, we should ensure GitHub Actions is enabled for the repository. In a browser:

  1. Go to the main page for the ci-example repository you created in GitHub.
  2. Go to repository Settings.
  3. From the sidebar on the left, select Actions, then General underneath it.
  4. Under Actions permissions, ensure Allow all actions and reusable workflows is selected, otherwise, our workflows won’t run!

Creating Our First Workflow


Next, we need to create a new file in our repository to contain our workflow, and it needs to be located in a particular directory. We'll create it directly using the GitHub interface, since we're already there:

  1. Go back to the repository main page in GitHub.
  2. Select Add file (you may need to expand your browser window to see Add file), then Create new file.
  3. We need to add the workflow file within two nested subdirectories, since that's where GitHub will look for it. In the filename text box, type .github then add /. This will allow us to continue adding directories or a filename as needed.
  4. Add workflows, and / again.
  5. Add main.yml.
  6. You should end up with ci-example / .github / workflows / main.yml in main shown in the file field.
  7. Select anywhere in the Edit new file window to start creating the file.

Note that GitHub Actions expects workflows to be contained within the .github/workflows directory.
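
If you prefer to work locally rather than in the GitHub web interface, the same file could instead be created and pushed from the shell, along these lines (a sketch; the editor is just an example, and the rest of this episode assumes the web interface):

BASH

cd ci-example
mkdir -p .github/workflows        # GitHub looks for workflows in this directory
nano .github/workflows/main.yml   # create and edit the workflow file
git add .github/workflows/main.yml
git commit -m "Initial workflow to run tests on push"
git push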

Let’s build up this workflow now.

Specify Workflow Name and When it Runs

So first let’s specify a name for our workflow that will appear under GitHub Actions build reports, and add the conditions that will trigger the workflow to run:

YAML

name: run-unit-tests

on: push

So here our workflow will run when changes are pushed to the repository. There are other events we could specify instead (or as well), but this one is the most common.
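
For example, pull requests could trigger the workflow too, or runs could be restricted to particular branches. A sketch of such a trigger section:

YAML

on:
  push:
    branches: [main]      # only run on pushes to the main branch
  pull_request:
    branches: [main]      # also run for pull requests targeting main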

Specify Structure to Contain Steps to Run on which Platform

A GitHub Actions workflow is described as a set of jobs (such as building our code, or running some tests), and each job contains a sequence of steps, each of which represents a specific action (such as running a command, or obtaining code from a repository).

Let’s define the start of a workflow job we’ll name build-and-test:

YAML

jobs:

  build-and-test:

    runs-on: ubuntu-latest

We only have one job in this workflow, but we could have many. We also specify the operating system on which we want this job to run: in this case, only the latest version of Ubuntu Linux, but we could supply others too (such as Windows or Mac OS), as we'll see later.

When the workflow is triggered, our job will run within a runner, which you can think of as a freshly installed instance of a machine running the operating system we indicate (in this case Ubuntu).

Specify the Steps to Run


Let’s now supply the concrete things we want to do in our workflow. We can think of this as the things we need to set up and run on a fresh machine. So within our workflow, we’ll need to:

  • Check out our code repository
  • Install Python
  • Install our Python dependencies (which is just pytest in this case)
  • Run pytest over our set of tests

We can define these as follows:

YAML

    steps:

    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Set up Python 3.11
      uses: actions/setup-python@v5
      with:
        python-version: "3.11"

We first use some standard GitHub Actions (indicated by uses: actions/), which are small reusable tools that each perform a specific task. In this case, we use:

  • checkout - to check out the repository into our runner
  • setup-python - to set up a specific version of Python

Note that the name entries are descriptive text and can be anything, but it’s good to make them meaningful since they are what will appear in our build reports as we’ll see later.

YAML

    - name: Install Python dependencies
      run: |
        python3 -m pip install --upgrade pip
        pip3 install -r requirements.txt

    - name: Test with pytest
      run: |
        python -m pytest -v tests/test_factorial.py

Here we use two run steps to run specific shell commands: one to install our Python dependencies, and one to run pytest over our tests, using -v to request verbose reporting.
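
Putting all of these pieces together, the complete .github/workflows/main.yml should look something like this:

YAML

name: run-unit-tests

on: push

jobs:

  build-and-test:

    runs-on: ubuntu-latest

    steps:

    - name: Checkout repository
      uses: actions/checkout@v4

    - name: Set up Python 3.11
      uses: actions/setup-python@v5
      with:
        python-version: "3.11"

    - name: Install Python dependencies
      run: |
        python3 -m pip install --upgrade pip
        pip3 install -r requirements.txt

    - name: Test with pytest
      run: |
        python -m pytest -v tests/test_factorial.py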

Callout

What about other Actions?

Our workflow here uses standard GitHub Actions (indicated by actions/*). Beyond the standard set of actions, others are available via the GitHub Marketplace. It contains numerous third-party actions (as well as apps) that you can use with GitHub for tasks across many programming languages, particularly for setting up environments for running tests, code analysis and other tools, setting up and using infrastructure (for things like Docker or Amazon's AWS cloud), or even managing repository issues. You can even contribute your own.
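
As one illustration (a hypothetical addition to our workflow, not something needed for this lesson), a third-party action from the Marketplace is used in exactly the same way as the standard ones, via uses: - here uploading test coverage results to the Codecov service. Check the action's Marketplace page for its current version and configuration options:

YAML

    - name: Upload coverage report to Codecov
      uses: codecov/codecov-action@v4   # third-party action from the GitHub Marketplace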

Adding our Workflow to our Repository


Once we've finished adding the workflow to the file, we commit it to our repository:

  1. In the top right of the editing screen select Commit changes....
  2. Add in a commit message, e.g. “Initial workflow to run tests on push”.
  3. Select Commit changes.

Committing the change will now trigger this new workflow to run, since running on every push is exactly what the workflow is designed to do.

Key Points
  • GitHub Actions workflows are written in YAML and stored in the .github/workflows directory of a repository.
  • A workflow specifies when it runs (e.g. on: push) and one or more jobs, each made up of steps that either use an existing action or run shell commands.
  • Standard actions such as actions/checkout and actions/setup-python handle common setup tasks; many more are available on the GitHub Marketplace.

Content from Tracking a Running Workflow


Last updated on 2025-10-28

Estimated time: 10 minutes

Overview

Questions

  • How do we know whether a workflow has run, and whether it succeeded?
  • Where can we find the detailed output of a workflow run?

Objectives

  • Locate the status of workflow runs from a repository's main page and its Actions tab.
  • Inspect the log of a workflow run to see the output of each step.
  • Trigger a workflow run by pushing a change to the repository.

Checking a Running Workflow


We've committed our workflow, so how do we know it's actually running? Since the workflow is triggered on each git push, if we go back to our main repository page, we should see an orange circle next to the most recent commit displayed just above the directory contents.

When the workflow is complete, this will change to either a green tick for success, or a red cross if the workflow encountered an error. You can also see the success or failure of the workflow runs triggered by past commits by selecting Commits, to the right of this most recent commit display.

For more detail, we can check the progress of a running workflow by selecting Actions in the top navigation bar (e.g. https://github.com/steve-crouch/ci-example/actions). We can see here that a new run has started, titled with our commit message. This page also shows a historical log of any previous workflow runs.

We can also view a complete log of the output from a workflow run by selecting the first (and only) entry at the top of the list. This displays a list of our jobs (in this case only build-and-test). If we select build-and-test, we'll see an in-progress log of our workflow run, divided into collapsed steps that we can expand to view further detail. Each step is labelled with the name we gave it in the workflow. Note that the workflow may still be running at this point, so not all steps may be complete yet.

If we drill down by selecting the Test with pytest entry, we’ll get a breakdown of the thing we’re really interested in:

OUTPUT

Run python -m pytest -v tests/test_factorial.py
============================= test session starts ==============================
platform linux -- Python 3.11.12, pytest-7.2.0, pluggy-1.0.0 -- /opt/hostedtoolcache/Python/3.11.12/x64/bin/python
cachedir: .pytest_cache
rootdir: /home/runner/work/ci-example2/ci-example2
collecting ... collected 3 items

tests/test_factorial.py::test_3 PASSED                                   [ 33%]
tests/test_factorial.py::test_5 PASSED                                   [ 66%]
tests/test_factorial.py::test_negative PASSED                            [100%]

============================== 3 passed in 0.01s ===============================

This shows us that our tests were successful!

Triggering our Workflow with Code Changes


Now if we make a change to our code, our workflow will be triggered and these tests will run, so let’s make a change.

In GitHub (or on the command line if you prefer), edit the source code file mymath/factorial.py and add an additional (for example, blank) line before the return statement. Then save the file (if editing locally), and commit the change.

If we return to the GitHub Actions workflow list and select the most recent workflow run, we should see the workflow execute successfully as before - so we know our change hasn’t broken anything.

Summary


Our workflow will now be triggered every time a change to our code is pushed to our GitHub repository, which means that our code is now always being checked against our tests - although we must remember to check the workflow results for this to have value. We also need to be sure that our tests sufficiently verify the behaviour of our code as it evolves, so we should update our tests as necessary and add new tests as required to verify new functionality.

Key Points
  • The status of the most recent workflow run is shown next to the latest commit on the repository's main page, and a full history is available under the Actions tab.
  • Selecting a run, then a job, then a step shows the detailed log output, including the results of our tests.
  • Every push to the repository triggers the workflow, but we still need to check the results and keep our tests up to date for this to have value.

Content from Build Matrices


Last updated on 2025-10-28

Estimated time: 10 minutes

Overview

Questions

  • How can we run our tests across multiple operating systems and Python versions?
  • What happens when one job in a build matrix fails?

Objectives

  • Use a build matrix to run a workflow's jobs across combinations of operating systems and Python versions.
  • Control how GitHub Actions handles failures within a build matrix using the fail-fast property.

Running Workflows over Multiple Platforms


So far, every time our workflow is triggered, it runs on a single operating system, Ubuntu. From an automation perspective this is already helpful: although running our unit tests is a quick process, by automating it the cumulative saving in time becomes considerable. However, what if we wanted to test our code across different versions of Python installed on different platforms, such as Windows and Mac OS? Let's look at a feature called build matrices which allows us to do this, and really shows the value of using CI to test code at scale.

Suppose the intended users of our software use either Ubuntu, Mac OS, or Windows, and have Python versions 3.10 through 3.12 installed, and we want to support all of these. Assuming we have a suitable test suite, it would take a considerable amount of time to set up testing platforms to run our tests across all these platform combinations. Fortunately, CI can do the hard work for us very easily.

First, let’s update our workflow to specify which platforms and Python versions we wish to run, by adding/changing the following where runs-on is defined:

YAML

    strategy:
      matrix:
        os: ["ubuntu-latest", "macos-latest", "windows-latest"]
        python-version: ["3.10", "3.11", "3.12"]

    runs-on: ${{ matrix.os }}

Here we define a build matrix that specifies each of the os and python-version values we want to test, such that new jobs will be created to run our tests for every combination of these two variables. So, we should expect 9 jobs to run in total.

We also change runs-on to refer to the os component of our matrix, using the ${{ }} syntax to reference these values.

Similarly, we need to update our Python setup section to make use of the python-version component of our build matrix:

YAML

    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v5
      with:
        python-version: ${{ matrix.python-version }}

Once we’ve saved our workflow, commit the changes to the repository as before.

If we view the most recent GitHub Actions workflow run, we should see that a new job has been created for each of the 9 permutations.

Note that the jobs run in parallel (up to the limit allowed by our account), which potentially saves us a lot of time waiting for test results. Overall, this approach allows us to massively scale our automated testing across the platforms we wish to support.

Failed CI Builds


A CI build can fail when, for example, a Python package we depend on no longer supports a particular version of Python indicated in the GitHub Actions build matrix. In this case, the solution is either to upgrade the Python version in the build matrix (when possible), or to pin the package to an older version (rather than using the latest one, as we have been doing in this course).
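
For example, if the latest pytest release no longer supported one of the Python versions in our matrix, we could pin it to an older version when installing dependencies (the constraint below is purely illustrative; the same constraint could equally go into requirements.txt so that the CI workflow picks it up too):

BASH

pip install "pytest<8.0"   # illustrative version constraint - choose one that suits your Python versions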

Also note that, by default, if one job in the matrix fails for any reason, GitHub Actions will cancel the remaining jobs. From GitHub's documentation:

GitHub will cancel all in-progress and queued jobs in the matrix if any job in the matrix fails. This behaviour can be controlled by changing the value of the fail-fast property in the strategy section:

YAML

...
   strategy:
     fail-fast: false
     matrix:
...

Setting fail-fast to false ensures that all matrix jobs run to completion regardless of any failures, which is useful because we can then identify and fix all failures at once, rather than having to fix each one in turn.

Key Points
  • A build matrix lets a single job definition run across every combination of the operating systems and Python versions we specify.
  • Matrix values are referenced in the workflow using the ${{ matrix.<name> }} syntax, e.g. for runs-on and python-version.
  • By default, a failure in one matrix job cancels the others; setting fail-fast: false runs all jobs to completion so every failure can be identified at once.