Summary and Setup
Welcome
This lesson teaches the fundamentals of Natural Language Processing (NLP) in Python. It will equip you with the foundational skills and knowledge needed to carry out text-based research projects. The lesson is designed with researchers in the Humanities and Social Sciences in mind, but is also applicable to other fields of research.
On the first day we will dive into text preprocessing and word embeddings while exploring semantic shifts in various words over multiple decades. The second day begins with an introduction to transformers, and we will work on classification and named entity recognition with the BERT model. In the afternoon, we will cover large language models, and you will learn how to build your own agents.
Prerequisites
Before joining this course, participants should have:
- Basic Python programming skills
Software Setup
Installing Python
Python is a popular language for scientific computing, and a frequent choice for machine learning as well. To install Python, follow the Beginner’s Guide or head straight to the download page.
Please set up your Python environment at least a day in advance of the workshop. If you encounter problems with the installation procedure, ask your workshop organizers via e-mail for assistance so you are ready to go as soon as the workshop begins.
Installing the required packages
Pip is the package management system built into Python. Pip should be available on your system once you have installed Python successfully. Please note that installing the packages can take some time, in particular on Windows.
Open a terminal (Mac/Linux) or Command Prompt (Windows) and run the following commands.
- Create a virtual environment called nlp_workshop:
Mac/Linux:
python3 -m venv nlp_workshop
Windows:
py -m venv nlp_workshop
- Activate the newly created virtual environment:
Mac/Linux:
source nlp_workshop/bin/activate
Windows:
nlp_workshop\Scripts\activate
Remember that you need to activate your environment every time you restart your terminal!
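If you want to confirm that the environment is active, a quick sanity check is to ask Python where its interpreter lives; with the environment activated, the path should point inside the nlp_workshop folder (the exact path will differ per machine):

```python
import sys

# With the virtual environment active, sys.prefix points inside the
# nlp_workshop folder; otherwise it points to the base Python installation.
print(sys.prefix)
```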
- Install the required packages:
Mac/Linux:
python3 -m pip install jupyter torch transformers scikit-learn spacy gensim langgraph langchain-ollama langchain-text-splitters langchain-nomic nomic[local] seqeval datasets wordcloud
Windows:
py -m pip install jupyter torch transformers scikit-learn spacy gensim langgraph langchain-ollama langchain-text-splitters langchain-nomic seqeval datasets wordcloud
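After installation, you can check that the main packages are importable with a short script like the one below. Note that some import names differ from the pip package names (scikit-learn imports as sklearn); the list here is a subset chosen for illustration:

```python
import importlib.util

# Import names can differ from pip names: scikit-learn installs as "sklearn"
packages = ["torch", "transformers", "sklearn", "spacy", "gensim", "datasets"]
for name in packages:
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'ok' if found else 'MISSING'}")
```

If anything is reported as MISSING, re-run the pip command above inside the activated environment.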
Jupyter Lab
We will teach using Python in Jupyter Lab, a programming environment that runs in a web browser. Jupyter Lab is compatible with Firefox, Chrome, Safari and Chromium-based browsers. Note that Internet Explorer and Edge are not supported. See the Jupyter Lab documentation for an up-to-date list of supported browsers.
To start Jupyter Lab, open a terminal (Mac/Linux) or Command Prompt (Windows) and type the command:
jupyter lab
Ollama
We will use Ollama to run large language models. It can be downloaded from the Ollama website.
Next, download the model that we will be using from a terminal (Mac/Linux) or Command Prompt (Windows) by typing the command:
ollama pull llama3.1:8b
Data Sets
Delpher newspapers
Download the front page of the Algemeen Dagblad from July 21, 1969 as a txt file from Delpher. To do so, click on the link and navigate to the right-hand side of the web page. There you’ll find an icon with an arrow pointing down:

Click on this icon and select txt among the download options. Save the file as ad.txt in the folder that you will be using for the workshop.
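Once the file is saved, you can take a quick first look at the text in Python. The sketch below uses an inline sample string so it runs on its own; in the workshop you would read the downloaded ad.txt instead:

```python
from collections import Counter
import re

# In the workshop, replace this sample with:
#   text = open("ad.txt", encoding="utf-8").read()
text = "De Eagle is geland. De maan is bereikt."

# A very rough tokenization: lowercase, then grab word characters
tokens = re.findall(r"\w+", text.lower())
print(Counter(tokens).most_common(3))
```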
Similarly, download the following pages from Amigoe di Curacao : weekblad voor de Curacaosche eilanden as txt files
Word2Vec
Download Word2Vec models trained on data from six national Dutch newspapers spanning the period from 1950 to 1989 (Wevers, M., 2019). These models are available on Zenodo.
spaCy Dutch
Download the trained pipelines for Dutch from spaCy. To do so, open a terminal (Mac/Linux) or Command Prompt (Windows) and type the command:
python -m spacy download nl_core_news_sm