The course part in this programme consists of:

  • Compulsory courses (16 EC):
    two courses that every student in the programme takes, plus two mini-courses
  • Primary electives (30 EC):
    four courses (out of a set of 16 courses) 
  • Secondary electives (30 EC):
    four courses to give your programme a personal flavour, to be chosen from a broad pre-defined set of secondary elective courses (among others, short research projects or courses from other, related programmes such as computer science, neuroscience and cognition, linguistics, and philosophy).

Compulsory courses (16 EC)

Methods in AI research (compulsory)

Artificial Intelligence is a fast-paced and challenging field that is making visible inroads into our everyday life.
AI in Utrecht offers a unique interdisciplinary approach, integrating the areas of computer science and agent systems, cognition and psychology, logic and philosophy, and linguistics.
Because of this interdisciplinary character, the variety of techniques and methods used is considerable, ranging from theoretical to empirical, and from formal mathematical to more informal philosophical.
In this course, we will introduce the various perspectives on AI in Utrecht and the methods associated with them.
We will look at the basics of machine learning, logic and symbolic reasoning, cognitive science and computational linguistics, and discuss the part they play in modern AI systems. We will further discuss important methods commonly used in AI research: knowledge modelling, system engineering, and empirical evaluation of machine learning and human-computer interaction.
We further practice general academic skills such as reviewing literature, working in teams and scientific writing.
The linchpin of the course is a central lab project in which you will develop, describe, test, and evaluate a dialogue system (sometimes also referred to as a “chatbot”).
In this way, the theory from the lectures forms the basis of a real AI application that you will evaluate with users.
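
As a first impression of how such a dialogue system works, here is a minimal keyword-matching loop in plain Python. The rules and replies are invented for illustration; the actual lab project defines its own dialogue states and evaluation criteria.

```python
# A minimal keyword-based dialogue system, in the spirit of the lab project.
# The keywords and replies below are illustrative assumptions, not the
# actual assignment specification.

RULES = {
    "hello": "Hi! What kind of restaurant are you looking for?",
    "italian": "I found an Italian restaurant in the centre. Interested?",
    "bye": "Goodbye!",
}

def respond(utterance: str) -> str:
    """Return the reply of the first rule whose keyword occurs in the utterance."""
    text = utterance.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "Sorry, I did not understand that."

print(respond("Hello there"))  # the greeting rule fires
```

A real system would replace the keyword table with intent classification learned from data, which is exactly where the lectures' machine learning material comes in.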

Course form
Lectures and lab sessions.

See the Blackboard pages of the course.

Philosophy of A.I. (compulsory)

This course will make students familiar with fundamental issues in the philosophy of AI, and will introduce them to several current discussions in the field. Students will practice their argumentation and presentation skills, both in class discussions and in writing.
The course is split into three parts. The first part is a quick overview of the fundamental issues and core notions in the philosophy of AI. It addresses topics such as the Turing Test, the Chinese Room Argument, machine intelligence, machine consciousness, weak and strong AI, and the Symbol System Hypothesis. To establish a shared background for all students, the material of this part will be assessed with an entrance test as early as week 3.
In the second part of the course, there will be an in-depth discussion of several current topics in the field, for example ethics and responsibility in AI, transhumanism, or the relation between AI and data science. Each topic has a lecture and a seminar with class discussions and student presentations. Students prepare for those discussions by posting a thesis, with one or more supporting arguments, about the required reading. In the third part of the course, students will write a philosophical paper and provide feedback on their fellow students' draft papers. More information is available on Blackboard.

This course is for students of Artificial Intelligence, History and Philosophy of Science, and the RMA Philosophy. Students of other MA programmes should contact the course coordinator.
The entrance requirements for exchange students will be checked by the International Office and the programme coordinator, so you do not need to contact the programme coordinator yourself.

Introducing Natural Sciences (compulsory)

There are two morning sessions in which several speakers introduce the student to the education system of the graduate school, its rules and curricula, general and practical information about personnel and administration, specific information about the programme itself, the programme board's expectations of its students, honours education, specific profiles across disciplines, and the teaching profession.
Knowing what kind of skills and attitudes the labour market is looking for is considered important. Workshops will train students to become more aware of their own strengths and weaknesses, or introduce them to the work and life of PhD students.
Students will have ample time to get to know each other and their programme board.
Lunches, drinks and a concluding dinner will be organised.

Dilemmas of the scientist (compulsory)

The course Dilemmas of the Scientist consists of two workshops: one in your first year (course code FI-MHPSDL1) and one in your second year (course code FI-MHPSDIL). Both are mandatory for all students of the Graduate School of Natural Sciences. The workshops have separate course codes because the course spans two academic years. The 0.5 EC attached to the course is credited once you have completed FI-MHPSDIL. Please note: the talk about research integrity during the master introduction days is *not* part of the course.

This workshop (FI-MHPSDIL) is the second-year workshop. It is intended for second-year master students who have already completed FI-MHPSDL1. If you have not yet completed FI-MHPSDL1, you should do that first. If that leads to scheduling conflicts, please contact the course coordinator.

The workshop is offered in both semester 1 and semester 2. When you should take the workshop depends on when you started your master programme. If you started your master in (or before) February 2019, take the workshop in semester 1. If you started your master in September 2019, take the workshop in semester 2.

During this workshop, we will discuss dilemmas of integrity that you yourself have encountered during your studies.

Primary electives (30 EC)

Intelligent agents

This course is about the theory of so-called intelligent agents, pieces of software that perceive, reason, and act while displaying some degree of autonomy.
The agent concept calls for an integration of several topics in artificial intelligence, such as knowledge representation and reasoning (in particular reasoning about action and change) and planning. Agent technology, as the field is generally called, has great potential for applications, ranging from intelligent personal assistants to e-commerce and robotics.
The course is devoted mainly to the philosophical and theoretical (mostly logical) foundations of the area of intelligent agents, both focusing on single agents and on multi-agent systems.

Please see Blackboard for further details:

Overview in brief:

  • introduction "What are intelligent agents?"
  • agent architectures
  • knowledge representation, ontologies, Web Ontology Language (OWL)
  • agent communication
  • goals
  • trust and privacy

Course form
Lectures, presentations, project

Textbooks and a collection of articles.

Machine learning for human vision and language

Machine learning with deep convolutional neural networks (deep learning) is being applied increasingly broadly in computer science, technology and scientific research. This method allows computer systems to perform tasks that have previously been impossible or inaccurate for computers, but typically straightforward for humans. Tasks like visual object identification and natural language processing have traditionally been investigated by cognitive scientists and linguists, but recent applications of deep learning to these tasks also position them at the center of current artificial intelligence developments. Therefore, it is important for AI students and researchers to understand the links between cognitive science and AI.

In this course, you will learn the principles behind deep learning, an approach inspired by the structure of the brain. You will learn how these principles are implemented in the brain, focusing on the two aspects of visual processing and language (semantic or syntactic) processing. You will build your own deep learning systems for the interpretation of natural images and language, using modern high-level neural network APIs that make implementation of these systems accessible and efficient.
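
To make the core computation behind deep learning concrete, the sketch below runs one forward pass through a tiny two-layer network in plain Python. The weights are made-up numbers; in the course you would learn weights from data and use a high-level neural network API rather than hand-coded loops.

```python
# One forward pass through a tiny fully connected network with a ReLU
# hidden layer. Weights and inputs are invented for illustration only.

def relu(xs):
    return [max(0.0, v) for v in xs]

def dense(inputs, weights, biases):
    """One fully connected layer: output_j = sum_i inputs[i]*W[i][j] + b[j]."""
    return [sum(inputs[i] * weights[i][j] for i in range(len(inputs))) + biases[j]
            for j in range(len(biases))]

x = [1.0, 2.0]                                               # input features
h = relu(dense(x, [[0.5, -1.0], [0.25, 0.75]], [0.0, 0.0]))  # hidden layer
y = dense(h, [[1.0], [1.0]], [0.1])                          # output layer
print(y)  # → [1.6]
```

Stacking many such layers, with weights adjusted by gradient descent, is all that "deep" learning adds to this picture.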

The course goals will be examined in the following ways:
- Students will attend lectures introducing the approach taken in deep learning systems, comparing this to how deep learning is implemented in biological brains, and introducing the main applications of deep learning to cognitive science and linguistics. Their understanding of this content will be assessed in a final exam.
- Students will participate in discussions and reviews of relevant literature, which will be graded.
- Students will work through lab practical assignments on visual processing and on language processing. The resulting reports will be graded.

Computational argumentation

This course gives an introduction to the computational study of argumentation in AI, a currently popular subfield of symbolic AI.
The course especially focuses on formal models of argumentation and their application in areas like commonsense reasoning, legal reasoning and multi-agent interaction.

The computational study of argumentation concerns two aspects: reasoning and dialogue. Argumentation as a form of reasoning makes explicit the reasons for the conclusions that are drawn and how conflicts between reasons are resolved.
Systems for argumentation-based inference were originally developed in the field of nonmonotonic logic, which formalizes qualitative reasoning with incomplete, uncertain or inconsistent information.
Argument-based systems have been very successful as nonmonotonic logics, since they are based on very natural concepts, such as argument, counterargument, rebuttal and defeat. In this course the following formalisms will be discussed:

  • Default logic (a still influential early nonmonotonic logic)
  • The theory of abstract argumentation frameworks (the generally accepted formal foundation of the field)
  • The theory of structured argumentation frameworks, with a special focus on the ASPIC+ approach.
  • Formal accounts of change operations on argumentation frameworks
  • Formal models of legal case-based reasoning
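
To give a feel for abstract argumentation frameworks, the sketch below computes the grounded extension of a small framework as the least fixed point of the characteristic function. This is the standard construction from the literature; the example framework itself is ours, not taken from the course reader.

```python
# Grounded extension of a finite abstract argumentation framework,
# computed as the least fixed point of the characteristic function:
# F(S) = {a : every attacker of a is attacked by some member of S}.

def grounded(arguments, attacks):
    """attacks is a set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        new = {a for a in arguments
               if all(any((d, b) in attacks for d in s) for b in attackers[a])}
        if new == s:
            return s
        s = new

# a attacks b, b attacks c: a is unattacked, so a is in, and a defends c.
print(grounded({"a", "b", "c"}, {("a", "b"), ("b", "c")}))  # → {'a', 'c'}
```

The same fixed-point idea underlies the other semantics of abstract argumentation, which differ in how credulously they accept arguments.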

Argumentation as a form of dialogue concerns the rational resolution of conflicts of opinion by verbal means. Intelligent agents may disagree, for instance, about the pros and cons of alternative proposals, or about the factual basis of such proposals.
Dialogue systems for argumentation formally define protocols for argumentation dialogues and thus enable a formal study of the dynamics of argumentative agent interaction, including issues of strategic choice. In this course two examples of such dialogue systems will be discussed.

This course replaces INFOCR Commonsense reasoning and argumentation. Only one of the two courses can be part of your graduation programme.

Course form
Interactive lectures, self-study with exercises.

A reader (freely available online), online articles and educational software tools.

Data mining

This course is aimed at students of the Computing Science (COSC) master program.

Topics covered include (content can vary somewhat from year to year):

  • Classification Tree Algorithms, Bagging and Random Forests
  • Graphical Models (including Bayesian Networks)
  • Frequent Pattern Mining
  • Text Mining
  • Social Network Mining
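
As a taste of frequent pattern mining, the sketch below counts frequent item pairs, the first step of an Apriori-style algorithm. The transactions and support threshold are invented for illustration.

```python
# Minimal frequent-pair mining: count how often each pair of items occurs
# together in a transaction, and keep pairs meeting the minimum support.
from itertools import combinations

def frequent_pairs(transactions, minsup):
    counts = {}
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] = counts.get(pair, 0) + 1
    return {pair for pair, c in counts.items() if c >= minsup}

baskets = [{"beer", "chips"}, {"beer", "chips", "salsa"}, {"chips", "salsa"}]
print(frequent_pairs(baskets, minsup=2))
```

Full Apriori extends this idea to larger itemsets, pruning candidates whose subsets are already infrequent.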

Course form
Lectures and computer lab.

Selected book chapters, articles, and lecture notes.

Logic and Language

This course covers advanced methods and ideas in the logical analysis of language, with an emphasis on type-logical methods for the analysis of natural language syntax and semantics. The course has a 'capita selecta' format, focusing on various aspects of the connection between language and reasoning.
The current edition consists of three parts. In the first part, we introduce intuitionistic logic and its substructural variants, emphasizing their equivalence to computational models, in line with the Curry-Howard isomorphism between logics and functional programming languages. Among the inhabitants of this substructural territory, we find categorial grammar logics: deductive systems that allow us to argue about natural language syntax in a strictly formal manner. During the second part, we extend these logics with structural control operators, permitting explicit control over restricted forms of reordering and restructuring. Further, we see how the proofs-as-programs interpretation can provide a natural bridge between syntactic structure and meaning composition. Finally, in the third part of the course, we switch to an applied perspective, where we investigate the applicability of modern machine-learning algorithms to extracting and building numerical representations of proofs from natural language sentences.

N.B. You can enrol in this course via Osiris during the 'wijzigingsdagen' (change days) of the Humanities faculty: 25 and 26 October 2021.

Advanced machine learning

This course treats two advanced topics in machine learning: causal inference (the study of cause-effect relations) and reinforcement learning (learning to interact with an environment).

Modern machine learning methods have achieved spectacular results on various tasks. Yet there are pitfalls and limitations that cannot be overcome simply by increasing the amounts of data and computing power. For example, standard methods assume that the data are drawn from a single, unchanging probability distribution. The two main topics that we cover in this course both deal with situations where that is not the case.

The first topic, causal inference, is the subfield of machine learning that studies causes and effects: if we make a change to one random variable in a system, for which other variables does the distribution change? An understanding of these cause-and-effect relations allows us to predict the results of a change in the environment. We will also look at the problem of learning these relations from data.

The second topic, reinforcement learning, is about the design of agents that can learn to interact with an unknown environment. Reinforcement learning methods can build on recent advances in supervised learning (such as deep learning), but bring with them a unique set of challenges that we will cover in this course.
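
The reinforcement learning setting can be sketched with tabular Q-learning on a toy environment: an agent repeatedly acts, observes a reward, and updates its value estimates. The corridor environment and all parameters below are invented for illustration, not course material.

```python
# Tabular Q-learning on a tiny deterministic corridor (states 0, 1, 2;
# action 1 moves right, action 0 moves left; reward 1 on reaching state 2).
import random

def step(s, a):
    s2 = min(s + 1, 2) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 2 else 0.0), s2 == 2  # next state, reward, done

random.seed(0)
q = {(s, a): 0.0 for s in range(3) for a in (0, 1)}
alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(500):                      # episodes
    s = 0
    for _ in range(50):                   # step limit per episode
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(q[(s2, 0)], q[(s2, 1)])
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2
        if done:
            break

print(q[(1, 1)] > q[(1, 0)])  # moving towards the reward earns the higher value
```

Note that the data here are generated by the agent's own behaviour, so they are not drawn from a fixed distribution, which is exactly the situation the course addresses.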

The following knowledge is assumed in this course:

  • solid proficiency in mathematics, in particular probability theory (e.g. the ability to understand and manipulate formulas involving conditional probabilities and expectations), linear algebra, and basic calculus
  • programming skill in Python
  • understanding of basic machine learning theory and methods, for example from the bachelor course Machine Learning (KI3V15001)

Course form
Lectures; tutorials/practical sessions

  • Judea Pearl, Madelyn Glymour, Nicholas P. Jewell. Causal Inference in Statistics: A Primer. Wiley, 2016.
  • Richard S. Sutton, Andrew G. Barto. Reinforcement Learning: An Introduction (second edition). MIT Press, 2018. (PDF available from the authors' website)
  • additional material that will be made available online

Cognitive Modeling

Formal models of human behaviour and cognition that are implemented as computer simulations - cognitive models - play a crucial role in science and industry. In science, cognitive models formalize psychological theories.
This formalization allows one to predict human behaviour in novel settings and to tease apart the parameters that are essential for intelligent behaviour.
Cognitive models are used to study many domains, including learning, decision making, language use, multitasking, and perception and action.
The models take many forms, including dynamic equation models, neural networks, symbolic models, and Bayesian networks. In industry, cognitive models predict human behaviour in intelligent 'user models'.
These user models are used, for example, for human-like game opponents and for intelligent tutoring systems that adaptively change the difficulty of a game or training program to a model of the human's capacities. Similarly, user models are used in the design and evaluation of interfaces: what mistakes are humans likely to make in a task, what information might they overlook on an interface, and what are the best points to interrupt a user (e.g., with an e-mail alert) so that the interruption does not overload them?
To be able to develop, implement, and evaluate cognitive models and user models, you first need to know which techniques and methods are available and what are appropriate (scientific or practical) questions to test with a model.
Moreover, you need practical experience in implementing (components of) such models. In this course you will get an overview of various modelling techniques that are used worldwide and also by researchers in Utrecht (esp. in the department of psychology and the department of linguistics).
You will learn their characteristics, strengths and weaknesses, and their theoretical and practical importance. Moreover, you will practice implementing (components of) such models during lab sessions.

Pattern recognition

In this course we study statistical pattern recognition and machine learning.

The subjects covered are:

  • general principles of data analysis: overfitting, the bias-variance trade-off, model selection, regularization, the curse of dimensionality
  • linear statistical models for regression and classification
  • clustering and unsupervised learning
  • support vector machines
  • neural networks and deep learning
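
The simplest of the linear statistical models above can be written down in a few lines: ordinary least squares for a one-dimensional linear model y = a·x + b. The data points are invented so that the fit is exact.

```python
# Ordinary least squares for a one-dimensional linear model:
# slope = covariance(x, y) / variance(x), intercept from the means.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)  # → 2.0 1.0
```

Richer model classes (higher-order features, SVMs, neural networks) trade this closed-form simplicity for flexibility, which is where the bias-variance trade-off enters.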

Knowledge of elementary probability theory, statistics, multivariable calculus and linear algebra is presupposed.

Logic and Computation

Students will learn how to answer one or more of the following research questions by means of an actor-based methodology in which each question will be addressed from multiple perspectives:

  • What is a program?
  • What is a computer?
  • What are the practical implications of undecidability?
  • What is the distinction between a stored-program computer and a universal Turing machine?
  • What is the difference between a model (of computation) and a physical computer?

This is a reading & writing course. Attendance is obligatory. Homework will already be handed out during the first week of class, with a firm deadline in the second week. Late arrivals in class will only be tolerated once; further late arrivals can lead to a deduction of the student's final grade. The aim of the course on proofs as programs is to gain an understanding of type theory and its role within logic, linguistics, and computer science, and to get acquainted with the Curry-Howard correspondence, relating types to propositions and programs to proofs. More information on Blackboard.

This course is for students in Artificial Intelligence, as well as students in History and Philosophy of Science and the RMA Philosophy. Students of other MA programmes, please contact the course coordinator. Students of History and Philosophy of Science and Artificial Intelligence experiencing problems with enrolment, please contact the Student Desk Humanities.

Multi-agent systems

This course focuses on multi-agent issues and will consist of lecture, seminar and lab sessions.
The lectures will cover the following topics:

  • Game theory
  • Auctions
  • Communication
  • Social choice
  • Mechanism Design
  • Normative Multi-Agent Systems

The seminar sessions consist of student presentations and will cover other multi-agent system issues such as:

  • Logics for Multi-Agent Systems
  • Multi-Agent Organisations and Electronic Institutions
  • Normative Multi-Agent Systems
  • Argumentation and Dialogues in Multi-Agent Systems
  • Multi-Agent Negotiation
  • Communication and Coordination in Multi-Agent Systems
  • Development of Multi-Agent Systems

Each student is expected to present some papers on one of the above topics.
In the lab sessions the students will develop multi-agent systems on different platforms, such as 2APL and Jade.

Experimentation in psychology, linguistics, and AI

Both science and industry are interested in creating precise formal models of human behaviour and cognition. To help build, test and optimise such models, one needs to create and run experiments. Students participating in this course will learn (I) how to design experiments given an existing model, (II) how to implement experiments using various tools and, finally, (III) how to extract data from the recorded responses for analysis purposes.

Most theoretical claims in linguistics and psychology are made by positing a formal model. The aim of such models is to make precise predictions. Moreover, the predictions of a model need to be tested with formal experiments. The results of the experiment may or may not lead to changes in the model, which in turn lead to a new set of testable predictions. Essential in the modelling-experimenting cycle is careful experimental design. The course covers the practical and theoretical considerations for experimental research, from posing the research question to interpreting and reporting experimental results.

In industry, experiments are also used frequently: for example, to assess how people use interfaces (e.g., where do they look or click, or how does particular text influence their subsequent choices?), to test what the best design of a product is, or to test the appropriateness of a user model (e.g., do people learn what the model predicts them to learn, do they have a more immersive experience when a model guides adaptation of the software?).

In this course you will get an overview of various experimentation techniques that are used worldwide and also by researchers in Utrecht (esp. in the department of psychology and the department of linguistics). You will learn how to use such techniques for testing specific models, as well as where the limits of these techniques lie. In the practicals you will also gain hands-on experience with the implementation, data manipulation and data analysis steps of experimentation.

The learning goals will be examined in three ways:

1. Students will read and critically reflect on selected articles from the experimental literature. They will prepare a short presentation based on the critical reflection. The presentation will be graded.
2. Students will implement experiments and work with experimental data during practicals. These will be graded.
3. Students will design and implement an experiment on a topic of their own choice and write a note reporting on the experiment. Implementation and report will be graded.

Evolutionary computing

Evolutionary algorithms are population-based, stochastic search algorithms based on the mechanisms of natural evolution. This course covers how to design representations and variation operators for specific problems. Furthermore, convergence behaviour and population sizing are analysed. The course focuses on the combination of evolutionary algorithms with local search heuristics to solve combinatorial optimization problems such as graph bipartitioning, graph coloring, and bin packing.
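
The core mutate-and-select loop can be shown with the simplest evolutionary algorithm, a (1+1) EA on the OneMax benchmark (maximise the number of ones in a bit string). The parameter values below are conventional choices, not taken from the course.

```python
# A (1+1) evolutionary algorithm on OneMax: flip each bit with probability
# 1/n, and keep the offspring if it is at least as good as the parent.
import random

def one_plus_one_ea(n_bits=20, generations=2000, seed=1):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(generations):
        child = [b ^ (rng.random() < 1.0 / n_bits) for b in parent]
        if sum(child) >= sum(parent):   # elitist selection
            parent = child
    return parent

best = one_plus_one_ea()
print(sum(best))  # typically reaches (or comes very close to) the optimum of 20
```

Population-based variants and hybrids with local search, as covered in the course, replace the single parent with a population and add problem-specific improvement steps.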

Social computing

Social computing can be broadly defined as 'computational facilitation of social studies and human social dynamics as well as the design and use of ICT technologies that consider social context' (Wang et al., 2007).
Watts argues that 'if handled appropriately, data about Internet-based communication and interactivity could revolutionize our understanding of collective human behaviour' (Watts, 2007).
If you would like to apply computational approaches to social phenomena and problems, with the aim of bringing a broader perspective to social problems, this course is your first step.
The course provides you with knowledge of various methodologies used within the sphere of social computing.

Course format
Online lectures, discussion sessions, team work.
Students are expected to attend each lecture.

The course literature consists of assigned papers, all of which will be made available in Microsoft Teams. Every lecture has two to four assigned readings, as well as one discussion paper.
Students are expected to have read the papers before the class.
See the course schedule for the list of assigned readings for each class. Some readings are marked as "additional readings" and exempted from the exam, but are made available for students who would like to study the particular subject in greater depth.
Course information, course documents and assignments for in-class discussion will be posted on Microsoft Teams prior to the class meeting. Lecture slides will be made available after each lecture.
The online lectures as well as student presentations will be recorded and made available via Teams after the lecture.

Multi-agent learning

This seminar focuses on forms of machine learning that typically occur in multi-agent systems. Topics include learning and teaching, fictitious play, rational learning, no-regret learning, targeted learning, multi-agent reinforcement learning and evolutionary learning.
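
One of the topics above, fictitious play, is easy to sketch: each player best-responds to the opponent's empirical action frequencies. Below both players run it on matching pennies, a standard textbook example (the initial beliefs are arbitrary choices of ours).

```python
# Fictitious play on matching pennies: action 0 is "heads", action 1 "tails".
# The row player wants to match the column player; the column player wants
# to mismatch. Each best-responds to the other's empirical counts.

def best_response_row(col_counts):
    h, t = col_counts
    return 0 if h >= t else 1     # match the more frequent column action

def best_response_col(row_counts):
    h, t = row_counts
    return 1 if h >= t else 0     # mismatch the more frequent row action

row_counts, col_counts = [1, 0], [0, 1]   # arbitrary initial beliefs
for _ in range(10000):
    r = best_response_row(col_counts)
    c = best_response_col(row_counts)
    row_counts[r] += 1
    col_counts[c] += 1

freq = row_counts[0] / sum(row_counts)
print(freq)  # empirical frequency of "heads" approaches the equilibrium 0.5
```

The players cycle through pure actions, yet their empirical frequencies converge to the mixed Nash equilibrium, the classical convergence result for fictitious play in zero-sum games.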

Natural language processing

This course is an advanced introduction to the study of language from a computational perspective, and to the fields of computational linguistics (CL) and natural language processing (NLP). It synthesizes research from linguistics and computer science and covers formal models for representing and analyzing words, sentences and documents. Students will learn how to analyse sentences algorithmically and how to build interpretable semantic representations, emphasising data-driven and machine learning approaches and algorithms. The course will cover a number of standard models and algorithms (language models, HMMs, chart-based and transition-based syntactic parsing, distributed semantic models, various neural network models) that are used throughout NLP, and applications of these methods in tasks such as machine translation or text summarization.
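
The simplest of the models listed, an n-gram language model, fits in a few lines: a bigram model estimated by maximum likelihood, P(w2 | w1) = count(w1 w2) / count(w1). The toy corpus is ours, not course material.

```python
# A maximum-likelihood bigram language model over a toy corpus.
from collections import Counter

corpus = "the cat sat on the mat".split()
unigrams = Counter(corpus[:-1])            # counts of context words
bigrams = Counter(zip(corpus, corpus[1:])) # counts of adjacent word pairs

def prob(w1, w2):
    """Conditional probability of w2 following w1."""
    return bigrams[(w1, w2)] / unigrams[w1]

print(prob("the", "cat"))  # → 0.5: "the" is followed once by "cat", once by "mat"
```

Real language models add smoothing for unseen bigrams, and the neural models in the course replace the counts with learned distributed representations.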

Logics for safe Artificial Intelligence

This course is about ensuring the safety and reliability of autonomous AI agents and multi-agent systems. In order to guarantee that the behaviour of a system achieves its objectives, we use formal proofs rather than empirical studies, and either formally verify that the system behaves in accordance with the specified objectives, or automatically synthesise provably correct behaviours from specifications of the system objectives.
The formal techniques for doing this include epistemic and temporal logics and their combinations, and constitute the main technical content of the course. The emphasis is on mastering these techniques, their computational aspects, and the use of tools implementing them to verify and synthesise AI agents and multi-agent systems.
The course also prepares students for undertaking research on formal aspects of artificial intelligence, and provides the foundation for undertaking Masters projects on developing safe and reliable AI systems. Lab sessions will introduce students to relevant specification and modelling techniques and to the use of tools such as MCMAS and SynKit for the verification and synthesis of AI agents and multi-agent systems.

Course format
Two lectures a week and a lab/practical session for mastering techniques and tools for the specification, verification and synthesis of AI agents and multi-agent systems.

Entry requirements
An assigned study entrance permit for the master's programme.

Human centered machine learning

The impact of machine learning (ML) systems on our society has been increasing rapidly, ranging from systems that influence the content that we see online (e.g., ranking algorithms, advertising algorithms) to systems that enhance or even replace human decision making (e.g., in hiring processes). However, machine learning systems often perpetuate or even amplify societal biases, biases we are often not even aware of.
What's more, most machine learning systems are not transparent, which hampers their practical uptake and makes it challenging to know when to trust (or not trust) the output of these systems.

The course will cover examples from various areas of AI. Given the expertise of the lecturers, we will also zoom in on specific examples from natural language processing and multimodal affective computing research. Our discussion will also be informed by relevant literature from the social sciences. An interest in these areas is therefore desirable.

Course form
There will be lectures and practical exercises. Students are also expected to discuss and present academic articles. The course also contains a group project.

Study material
Most of the material we will read consists of academic articles. A few candidate readings are:

  • "Why Should I Trust You?" Explaining the Predictions of Any Classifier, Ribeiro et al., KDD 2016
  • A Unified Approach to Interpreting Model Predictions, Lundberg and Lee, NeurIPS 2017
  • Fairness and Machine Learning: Limitations and Opportunities, Solon Barocas, Moritz Hardt, Arvind Narayanan
  • Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Buolamwini and Gebru, Proceedings of Machine Learning Research 2018
  • Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, Bolukbasi et al., NeurIPS 2016
  • The Mythos of Model Interpretability, Lipton, ACM Queue, 2018
  • Beyond Saliency: Understanding Convolutional Neural Networks from Saliency Prediction on Layer-wise Relevance Propagation, Li et al., Image and Vision Computing 2019

A selection of the course material of 2020-2021 can be found here: (slides, syllabus, programming assignments).
Course material for 2021-2022 may change, although this should give a good impression of the type of content and activities.

The course requires familiarity with machine learning (including neural networks) and proficiency in Python. It is recommended that students have completed at least one course on machine learning, such as "Pattern recognition" (INFOMPR) or "Advanced machine learning" (INFOMAML). We expect students to already have experience with developing and evaluating machine learning systems. When in doubt, please contact the course coordinator (dr. Nguyen).

Secondary electives (30 EC)

Program semantics and verification

Most modern software is quite complex. The most widely used approach to verifying it is still testing, which is inherently incomplete and hard to scale up to cover the complexity.
In this course we will discuss a number of advanced validation and verification techniques that go far beyond ad-hoc testing. Exploiting them is an important key towards more reliable complex software.

We will in particular focus on techniques that can be automated, or at least partially automated:

  • Predicate transformation techniques, which you can use to symbolically execute a program and calculate its range of inputs or outputs. This leads to a bounded, but symbolic and fully automated, verification technique.
  • Common automated theorem proving techniques, used as the back-end of symbolic-execution-based verification.
  • Several common ways to define the semantics of programs, from which correctness can be defined and proven.
  • Model checking techniques, which can be used either to fully verify a model of a program, even if the number of possible executions is infinite, or to boundedly verify the program itself.
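
The predicate transformation idea can be sketched for the simplest case: the weakest precondition of an assignment, wp(x := e, P) = P[e/x]. Below, predicates are modelled as Python functions on program states (dictionaries); this executable encoding is our illustration, not the course's formal notation.

```python
# Weakest precondition of an assignment: a state satisfies wp(x := e, P)
# exactly when the state obtained by performing the assignment satisfies P.

def wp_assign(var, expr, post):
    """wp of `var := expr` w.r.t. postcondition `post` (a predicate on states)."""
    def pre(state):
        new_state = dict(state)
        new_state[var] = expr(state)   # perform the assignment symbolically
        return post(new_state)
    return pre

# program: x := x + 1, postcondition: x > 0
post = lambda s: s["x"] > 0
pre = wp_assign("x", lambda s: s["x"] + 1, post)

print(pre({"x": 0}))   # True: after x := x + 1, x = 1 > 0
print(pre({"x": -1}))  # False: after the assignment, x = 0
```

Transformers for sequencing, conditionals and loops compose this basic case, and an automated theorem prover then discharges the resulting verification conditions.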

    Course form
    Lectures, projects.

    Lecture notes, on-line documentation, and papers.

    Technologies for learning

    The list of topics we will research includes but is not limited to:

    • student modelling technologies for representing the knowledge, metacognitive skills and strategies, and affective state of a student working with an adaptive education system
    • technologies for adaptive learning support, such as intelligent tutoring systems and adaptive educational hypermedia
    • technologies for supporting collaborative, group-based and social learning scenarios
    • technologies exploiting big data sets in education for empowering students and teachers, as well as improving the behavior of intelligent educational software
    • modern HCI methods used in education for creating effective learning interfaces, including dialog systems, learning companions, serious games and virtual reality

    This academic field is extremely interdisciplinary. Hence, the background necessary to study and work with these technologies can be very diverse: knowledge of data mining and machine learning, parsing and rewriting, artificial intelligence and HCI are all useful. The course material, as well as the topics for the group project, will be adjusted to the background of the students in order to use the cumulative expertise of the class as much as possible.
    Course form
    Lectures and seminars.

    All papers are listed on Blackboard.

    Probabilistic reasoning

    Probabilistic models can be used for reasoning and decision support under uncertainty:
    Which exercises are most suitable for Bob to improve his calculus skills?
    How long after infection will we detect classical swine fever on this farm?
    What is the risk of Mr Johnson developing a coronary heart disease?
    Should Mrs Peterson be given the loan she requested?
    Will a study-advisor support tool advise you to take this course?

    In complex domains, people have to make judgments and decisions based on uncertain, and often even conflicting, information; a difficult task, even for experts in the domain.
    To support these complex decisions, knowledge-based systems should be able to cope with this type of information. For this reason, models for representing uncertainty and algorithms for manipulating uncertain information are important research subjects within the field of Artificial Intelligence.
    Probability theory is one of the oldest theories dealing with the concept of uncertainty and therefore plays an important role in many decision support systems.

    In this course, we will consider probabilistic models for representing and reasoning under uncertainty.
    More specifically, we will focus on probabilistic graphical models such as Bayesian networks, their underlying theory, and discuss issues and methods related to the construction of such networks for real-life applications.
    In addition we consider methods for Bayesian inference, including the role of Probabilistic Programming.
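    As a small illustration of the kind of reasoning such networks support, exact inference by enumeration in a toy Bayesian network can be written directly. The example uses the classic rain/sprinkler/wet-grass structure; the probabilities are made up:

```python
# A minimal sketch of exact inference by enumeration in a tiny Bayesian
# network (rain/sprinkler/wet-grass; all numbers are illustrative).

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
# Conditional probability table P(wet | rain, sprinkler).
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Joint probability, factorized along the network structure."""
    p = P_rain[rain] * P_sprinkler[sprinkler]
    p_w = P_wet[(rain, sprinkler)]
    return p * (p_w if wet else 1 - p_w)

# Query: P(rain | wet = True), by summing out the sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(f"P(rain | grass wet) = {num / den:.3f}")  # prints 0.740
```

    Real networks make this tractable for many more variables by exploiting the same factorization; enumeration is merely the conceptually simplest inference method covered.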

    Course form
    Lectures, self-assessment exercises.

    1. Syllabus 'Probabilistic Reasoning with Bayesian networks'
    2. Course slides
    3. Selected articles
    All are mandatory and will be made available through the course website.

    Big data

    Big Data is as much a buzz word as an apt description of a real problem: the amount of data generated per day is growing faster than our processing abilities. Hence the need for algorithms and data structures which allow us, e.g., to store, retrieve and analyze vast amounts of widely varied data that streams in at high velocity.

    In this course we will limit ourselves to the data mining aspects of the Big Data problem, more specifically to the problem of classification in a Big Data setting. To make algorithms viable for huge amounts of data, they should have low complexity; in fact, it is easy to think of scenarios where only sublinear algorithms are practical. That is, algorithms that see only a (vanishingly small) part of the data: algorithms that only sample the data.

    We start by studying PAC learning, which gives tight bounds for learning (simple) concepts almost always almost correctly from a sample of the data, both in the clean (no noise) and in the agnostic (allowing noise) case. The concepts we study may appear to allow only for very simple, hence often weak, classifiers. However, the boosting theorem shows that they can represent whatever can be represented by strong classifiers.
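    The flavour of these bounds can be illustrated with the textbook sample-complexity bound for a finite hypothesis class in the realizable ("clean") case; this is standard PAC theory, not material specific to this course:

```python
# A minimal sketch of the PAC sample-complexity bound for a finite
# hypothesis class H in the realizable case:
#   m >= (1/eps) * (ln|H| + ln(1/delta))
# samples suffice to get error <= eps with probability >= 1 - delta.
import math

def pac_sample_size(h_size, eps, delta):
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / eps)

# Learning over 2**20 hypotheses to 1% error at 99% confidence needs only
# a modest sample, independent of how large the data set itself is:
print(pac_sample_size(h_size=2**20, eps=0.01, delta=0.01))  # → 1847
```

    The key point for Big Data is that the bound grows with the (log of the) hypothesis class and the accuracy demands, not with the size of the data set, which is what makes sampling-based, sublinear algorithms possible at all.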

    PAC learning algorithms are based on the assumption that a data set represents only one such concept, which obviously isn’t true for almost any real data set. So, next we turn to frequent pattern mining, geared to mine all concepts from a data set. After introducing basic algorithms to compute frequent patterns, we will look at ways to speed them up by sampling using the theoretical concepts from the PAC learning framework.
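    The idea behind such sampling speed-ups can be sketched as follows; the transaction database below is synthetic and purely illustrative:

```python
# A minimal sketch of support estimation by sampling: the core idea behind
# sample-based speed-ups of frequent pattern mining. Data is synthetic.
import random

random.seed(0)
# A transaction database: each transaction is a set of items.
db = [{"a", "b"} if random.random() < 0.6 else {"a"} for _ in range(100_000)]

def support(transactions, itemset):
    """Fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

exact = support(db, {"a", "b"})
sample = random.sample(db, 1_000)          # look at only 1% of the data
estimate = support(sample, {"a", "b"})
print(exact, estimate)  # the estimate is close to the exact support
```

    PAC-style concentration bounds make this precise: they tell us how large the sample must be so that, with high probability, every frequent pattern in the sample is (almost) as frequent in the full database.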

    Pattern set mining

    Pattern mining is characteristic for data mining. Whereas data analysis is usually concerned with models, i.e., succinct descriptions of all the data, pattern mining is about local phenomena. Patterns describe, or even are, subgroups of the data that for some reason are deemed interesting; a description and a reason that usually involve some of the variables (attributes, features) rather than all of them.

    In the past few decades, the total existence of data mining, pattern mining has proven to be a fruitful research area, with many thousands of papers describing a wide variety of pattern languages, interestingness functions, and even more algorithms to discover them. However, there is a problem with pattern mining: databases tend to exhibit many, very many patterns. It is not uncommon to discover more patterns than one has data. Hardly an ideal situation. Hence the rise of pattern set mining: can we define and find relatively small, good sets of patterns?

    In this course we will start with a brief discussion of pattern mining. After that, we discuss parts of the literature on pattern set mining; only parts, because there is too much to discuss it all. What types of solutions have been proposed? How do they work and, actually, do they work?

    Multimedia retrieval

    Multimedia retrieval (MR) is about the search for and delivery of multimedia documents, such as text, images, video, audio, and 2D/3D shapes.

    This course teaches MR from a bottom-up perspective. After introducing what MR is by means of examples and use-cases, the MR pipeline is presented.
    Next, each of the building blocks of this pipeline is discussed in detail, starting with the most basic one (data representation), and going through the modeling of human perception of media, feature extraction, matching, evaluation, scalability, and presentation issues.
    At the end of the course, students should understand the theory, techniques, and tools that are involved in designing, building, and evaluating every block in the MR pipeline.
    The overall aim is thus for students to be able to design, build, and evaluate end-to-end MR systems for different types of multimedia data.
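    As a small illustration of the feature-extraction and matching blocks of such a pipeline, here is a toy text-retrieval example. The features and distance are deliberately simplistic; they stand in for the media-specific features treated in the course:

```python
# A minimal sketch of an MR pipeline: documents are reduced to feature
# vectors, and queries are answered by nearest-neighbour ranking.
from collections import Counter
import math

def features(text):
    """Toy feature extractor: normalized word-frequency vector."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def distance(f, g):
    """Euclidean distance between sparse feature vectors."""
    keys = set(f) | set(g)
    return math.sqrt(sum((f.get(k, 0) - g.get(k, 0)) ** 2 for k in keys))

collection = {
    "doc1": "red car on the road",
    "doc2": "blue boat on the water",
    "doc3": "fast red car",
}
# Offline stage: extract features for the whole collection once.
index = {name: features(text) for name, text in collection.items()}

def search(query, k=2):
    """Online stage: rank documents by distance to the query features."""
    q = features(query)
    return sorted(index, key=lambda name: distance(q, index[name]))[:k]

print(search("red car"))  # → ['doc3', 'doc1']
```

    Every block here has a counterpart in the course: the feature extractor changes per media type, the distance encodes a model of perceptual similarity, and the index and ranking must be made scalable for large collections.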

    The course covers multimedia retrieval from a multidisciplinary perspective. Aspects taken into account: MR data representation; data (signal, image, shape) processing; understanding and working with high-dimensional data; connections between MR, machine learning, and data visualization; computational scalability and complexity aspects of working with big data collections; and human factors in interactive systems design.

    The course takes a predominantly practical stance: after the theoretical principles of MR are introduced, we focus on how MR is to be practically implemented to be successful.
    Various design and implementation decisions for the MR pipeline building-blocks are discussed, focusing not only on their theoretical merits, but also ease of implementation/parameterization, robustness, and speed.
    Trade-offs between alternative solutions to a given problem are discussed.

    Course form
    Lectures, self-study, presentations, and a project.


    The course has no compulsory textbook, as a significant amount of information is presented in detail in slides, papers, notes, and demos.

    However, the following books are strongly recommended as optional reading material, as they give additional details on the material discussed in the course:

    • H. Eidenberger, "Handbook of Multimedia Information Retrieval", 2012, Atpress, ISBN 9783848222834.
    • L. Da Fontoura Costa, R. Marcondes Cesar Jr, "Shape Analysis and Classification: Theory and Practice", CRC Press
    • A.C. Telea, "Data Visualization - Principles and Practice", 2nd edition, 2014, CRC Press, ISBN 9781466585263

    Visit the course page to find out which chapters from the above books cover which topics of the course.

    Computer vision

    The goal of computer vision is to recognize and understand the world through visual information such as images or videos. This course is about the algorithms and mechanisms used to extract and classify information from images and video. The course combines theory and practice, organized around two themes: multi-view reconstruction and CNN-based image/video classification.

    Crowd simulation

    A huge challenge in computer games and other applications is to simulate large crowds of moving agents in a virtual environment in real-time.
    These agents need to avoid collisions with obstacles and with other characters. Also, it is important that their paths are visually compelling or even realistic (depending on the application).

    In this course, we will study and discuss state-of-the-art research papers on path planning and crowd simulation, and we will analyze how to apply these techniques in applications that need realistic crowds.
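    As a taste of the problem, a single agent's steering can be sketched with a simple social-force-style rule. The rule and all constants below are illustrative only; the course studies far more sophisticated state-of-the-art techniques:

```python
# A minimal sketch of one crowd-simulation ingredient: an agent steering
# toward a goal while being pushed away from nearby agents.
import math

def step(pos, goal, others, speed=1.0, radius=2.0, dt=0.1):
    # Attraction: unit velocity toward the goal, scaled by preferred speed.
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    norm = math.hypot(gx, gy) or 1.0
    vx, vy = speed * gx / norm, speed * gy / norm
    # Repulsion: push away from agents within the avoidance radius,
    # stronger the closer they are.
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < radius:
            vx += (radius - d) * dx / d
            vy += (radius - d) * dy / d
    return pos[0] + vx * dt, pos[1] + vy * dt

pos = step((0.0, 0.0), goal=(10.0, 0.0), others=[(1.0, 0.5)])
print(pos)  # the agent advances toward the goal while veering away
```

    Simulating thousands of such agents in real time, with visually compelling rather than merely collision-free paths, is exactly where the research papers studied in this course come in.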

    Detailed information can be found on the course website Crowd Simulation (INFOMCRWS) 2021.

    Course form
    This course is a seminar with regular mandatory meetings.
    In most of these meetings, you will present and discuss research papers. There are also a few standard lectures, and some sessions in which you will present assignment results. See the course website for a detailed schedule.

    There will be various other assignments next to the presentations and abstracts mentioned before.
    In these assignments, you will study crowd simulation problems in current games, and you will work in small groups on a selected problem related to crowds.

    A selection of research papers on path planning and crowd simulation is available on the course website.
    You will give a presentation on one of them, and write summaries and critical reviews of the other papers. We will discuss these papers during our online meetings.

    Advanced cognitive and social psychology for HCI

    Emerging technologies are progressively affecting the way we relate, connect, learn, and work. In this course you will study psychological processes associated with the use of digital technologies, to help understand how technology affects us, and to enhance interactions between humans and technologies. We will discuss research in relation to the use and design of a range of applications and devices, for instance, cell phones, social media, video games, and the Internet. The course will include topics such as the relation between cognitive processes and emotions, social identity and group behavior, and interpersonal relationships. The course will mainly draw on theories from cognitive and social psychology, and will involve critical analysis and understanding of these theories in light of our digital world.

    Adaptive interactive systems

    This course is about the design and evaluation of interactive systems that automatically adapt to users and their context. It discusses the layered design and evaluation of such systems. It shows how to build models of users, groups and context, and which characteristics may be useful to model (including for example preferences, ability, personality, affect, inter-personal relationships). It shows how adaptation algorithms can be inspired by user studies. It covers standard recommender system techniques such as content-based and collaborative filtering, as well as research topics such as person-to-person recommendation, task-to-person recommendation, and group recommendation. It also discusses explanations for adaptive interactive systems and usability issues (such as transparency, scrutability, trust, effectiveness, efficiency, satisfaction, diversity, serendipity, privacy and ethics). The course content will be presented in the context of various application domains, such as personalized behaviour change interventions, personalized news, and personalized e-commerce.
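    As a small illustration of one of the standard techniques mentioned above, user-based collaborative filtering can be sketched in a few lines. The ratings and the similarity measure are purely illustrative:

```python
# A minimal sketch of user-based collaborative filtering: predict a user's
# rating for an item from the ratings of similar users.
ratings = {
    "ann": {"film1": 5, "film2": 1, "film3": 4},
    "bob": {"film1": 4, "film2": 2},
    "eve": {"film1": 1, "film2": 5, "film3": 2},
}

def similarity(u, v):
    """Agreement on co-rated items (inverse mean absolute difference)."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    mad = sum(abs(ratings[u][i] - ratings[v][i]) for i in common) / len(common)
    return 1.0 / (1.0 + mad)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    peers = [u for u in ratings if u != user and item in ratings[u]]
    weights = [similarity(user, u) for u in peers]
    return sum(w * ratings[u][item] for w, u in zip(weights, peers)) / sum(weights)

print(predict("bob", "film3"))  # closer to ann's 4 than to eve's 2
```

    The research topics in the course start where this sketch stops: modelling richer user characteristics, recommending for groups rather than individuals, and explaining predictions like this one to the user.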

    Meaningful (Linked) Data Interaction

    Seminar Multimedia Discourse Interaction

    Multimedia Discourse Interaction addresses the complexity of interacting with information present in different information carriers, such as language (written or spoken), image, video, music and (scientific) data. The goal is to convey information to a user in an effective way.

    Knowledge of cognitive capabilities and limitations, such as information processing speeds, can be used to inform the design of useful and efficient ways of searching, browsing, studying, analysing and communicating information in a way that is appropriate to a user's task, knowledge and skills. Subsequently, the fragments of relevant information that are selected from multiple sources must be combined for meaningful presentation to the user. Models and theories exist, for example in artificial intelligence, but also in the fields of film theory and computational linguistics, that describe communication structures, such as narratives or arguments. These can be used to inform the process of selecting and assembling specific media fragments or selections of data into a presentation appropriate to an end‐user's information needs.

    Information presentation consists of combining atomic pieces of information into some communication structure that facilitates viewers in understanding the relationship between the pieces. For example, in text, multiple words are strung together according to established structures, namely grammatically correct sentences. Similarly, a media fragment, for example a film shot, represents some atom of meaning. Fragments can be combined together into a communication structure meaningful to the viewer. This is precisely the task that a film director carries out. Individual communication structures, for example that relate different positions of an argument, for specific domains, for example the utility of war, have been modelled in the literature. When these are implemented and used to present video fragments to a human viewer, the video sequence is perceived as conveying a coherent argument and discourse.

    The seminar explores literature from diverse subfields, including artificial intelligence, semantic web, multimedia and document engineering, providing a range of perspectives on the challenges.

    Course form
    This course is set up as a seminar. It challenges the participants to acquire and disseminate knowledge about a complex subject in an interactive way. The moderators make a pre-selection of relevant research papers and web references. Students are expected to supplement these with their own literature search. They are expected to take the lead on proposing, preparing and presenting projects. Participants will work in groups of 2 on a joint project. Group meetings are mandatory.

    Exam Form

    • Attendance of meetings is obligatory
    • Individual: Oral presentations of various topics
    • Group: Report on project that also details the individual contributions

    Natural language generation

    The taught component of the course will consist of four parts:

    I. General Introduction. In the first part of the course you will learn what the different aims of practical and theoretical NLG can be, what the main elements of the standard NLG pipeline are, how NLG systems are built, and how they are evaluated. Template-based and end-to-end systems will be discussed briefly.

    II. Practical systems. You will get acquainted with a range of practical applications of NLG; a few will be discussed in detail: candidate applications are medical decision support, knowledge editing, and robo-journalism. Strengths, weaknesses, and opportunities for the practical deployment of these systems will be discussed. If time allows, we will devote attention to multimodal systems, which produce documents in which pictures or diagrams complement a generated text.

    III. Module in focus: Referring Expressions Generation. We will zoom in on one part of the standard NLG pipeline, which is responsible for the generation of referring expressions (e.g., as when an NLG system says “the city where you work”, or “the area north of the river Rhine”). We will discuss a range of rule-based algorithms, and some that are based on Machine Learning.
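    To give a flavour of the rule-based side, the core loop of an incremental referring-expression algorithm in the spirit of Dale and Reiter can be sketched as follows; the domain and the attribute preference order are made up:

```python
# A minimal sketch of incremental referring-expression generation: add
# attributes, in a fixed preference order, until the target entity is the
# only one matching the description.

domain = {
    "e1": {"type": "city", "country": "NL", "size": "large"},
    "e2": {"type": "city", "country": "NL", "size": "small"},
    "e3": {"type": "river", "country": "NL", "size": "large"},
}
preference = ["type", "size", "country"]  # attributes tried in this order

def refer(target):
    distractors = {e for e in domain if e != target}
    description = {}
    for attr in preference:
        value = domain[target][attr]
        ruled_out = {d for d in distractors if domain[d][attr] != value}
        if ruled_out:                  # only add attributes that help
            description[attr] = value
            distractors -= ruled_out
        if not distractors:            # target is uniquely identified
            return description
    return None  # no distinguishing description exists

print(refer("e1"))  # → {'type': 'city', 'size': 'large'}, "the large city"
```

    The machine-learning approaches discussed in the course replace the hand-written preference order and selection rule with models trained on human-produced referring expressions.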

    IV. Perspectives on NLG. We will discuss what linguists, philosophers, and other theoreticians have to say about human language production, and how this relates to NLG. We may start with a Gricean approach, and continue with the Bayesian-inspired Rational Speech Acts approach. We will ask how accurate and how explanatory existing NLG algorithms are as models of human language production (i.e., human speaking and writing), and what the main open questions for research in this area are.

    The core of the course will be presented in lectures. Additionally, students will be asked to read, present, and discuss some key papers and systems which illustrate the issues listed above.

    Applied Cognitive Psychology Research Toolbox

    In this course students will learn how to apply knowledge of human cognitive, sensory, and motor abilities in day-to-day practice, as an applied cognitive psychologist would. To this end, topics from applied cognitive psychology, such as product ergonomics, decision making, signal detection theory, Fitts' law, and information theory will be discussed. Through lectures, visiting lecturers from professional practice, and assignments, the student learns how psychological knowledge can be applied in everyday practice and how a question from daily practice can be investigated. In addition, the student will learn (computer) skills which allow the student to work as a cognitive psychologist in a company. The guest lectures provide the student with examples and information on applications of cognitive psychology in the occupational field.
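    As a small example of the kind of model discussed, Fitts' law predicts movement time from target distance and width; the constants below are illustrative, as in practice they are fitted to experimental data:

```python
# A minimal sketch of Fitts' law (Shannon formulation): predicted time to
# acquire a target grows with the index of difficulty ID = log2(D/W + 1).
import math

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds; a and b are fitted constants."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Doubling the distance (or halving the width) makes the target harder:
print(movement_time(distance=256, width=32))
print(movement_time(distance=512, width=32))
```

    Models like this let an applied cognitive psychologist reason quantitatively about interface design, e.g. about how button size and placement trade off against selection time.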

    Aspects of academic development

    • Academic level of thinking and acting
    • Translating psychological knowledge to the occupational field
    • Studying, structuring and analyzing information

    Foundations of Sound Patterns

    This course offers an introduction to major theoretical approaches and core methodologies in the areas of phonetics, phonology, and infant sound acquisition.

    Reasoning about Meaning in Linguistic Communication

    Meaning is a slippery, multifaceted concept. This is mainly because, when we communicate by linguistic means, meaning comes about not just via linguistic conventions but also via reasoning processes that are integral to communicative interaction. In this course we look at formal and computational theories of both linguistic meaning and the reasoning that underlies meaningful communication. A key ingredient of any such theory is the semantics/pragmatics distinction. This division between conventional linguistic sources of meaning on the one hand and meanings that are intentional in nature on the other is often a core assumption made in theories of linguistic communication. But it is also a source of intense debate, since many of the hot topics in the study of meaning today are topics that straddle the semantics/pragmatics divide in interesting and largely unexpected ways. Interestingly, the emerging debates rely heavily on empirical and analytical methods that are new to the field, ranging from experimental to computational methods. As a result, the study of meaning in linguistic communication is shifting from an analytical philosophical discipline to a field that overlaps with cognitive science and artificial intelligence.

    A central question raised throughout the course is what analytical tools we need to conduct a science of meaning. The analytical philosophical tradition has it that it suffices to relate meaning to truth-conditions (the circumstances under which a sentence is true), but there are clear drawbacks to such a narrow view. In the course, we look at ways of going beyond the orthodoxy, in particular by asking what role probabilistic computational models could play in a theory of meaning.

    The goal of this course is twofold: (i) to allow the students to understand some of the key empirical and theoretical questions that drive research in this area; (ii) to have the students acquire skills that allow them to conduct their own research in this area and propose novel models of meaning in linguistic communication, be they logical, probabilistic or hybrid in nature.

    Cognitive and computational aspects of word meaning

    Natural language semantics relies on various empirical methods, involving experimental data, machine learning, corpus analysis and linguistic questionnaires. The course presents topics where developing formal and computational semantic models heavily depends on empirical work in lexical and conceptual semantics, common sense reasoning, and computational semantics. Students choose a research problem and study selected articles on that problem. Based on this study, students formulate an empirical hypothesis and test it in the end project.

    Topics in Philosophy of Mind

    This “Topics Seminar” explores in depth issues and texts in the philosophy of mind.

    The topic of 2021-2022 is: Introspection as a Source of Knowledge.

    Ever since Descartes turned his mind’s eye inwards to secure an indubitable foundation for knowledge, philosophers have wrestled with the problem of the reliability of acquiring knowledge via introspection. Descartes was optimistic in this regard and claimed that on the basis of introspection he could claim that the nature of the self was to be a res cogitans. Gassendi in his objection to the Metaphysical Meditations stated that knowing that one thinks is not enough to establish what one is. Gassendi’s objection forces upon us a distinction between 1) knowing our particular thoughts and 2) knowing what kind of thing one’s self is. Hume was optimistic about the first kind of self-knowledge and sceptical about the second. His famous ‘elusiveness of the self’ thesis is: “For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I can never catch myself at any time without a perception, and can never observe anything but the perception. When my perceptions are remov’d for any time, as by sound sleep; so long am I insensible of myself, and may truly be said not to exist.” Kant went on to state that the nature of the self as it is in itself is unknowable.
    In this course we will study the classical origins of the question whether introspection can be a reliable source of knowledge, not just about the self, but also about the deliverances of inner awareness. In doing so, we will rely, of course, on modern commentators like Andrew Brook and Beatrice Longuesse. We then proceed by reading responses to Descartes and Kant in both the continental and analytical tradition. We will read Heidegger’s curiously neglected response to Hume in Being and Time, and Merleau-Ponty’s defence of embodied subjectivity. If time permits, we will read parts of Walter Schulz, Ich und Welt.
    In the analytical tradition, the modern debate about these issues probably started with Wittgenstein’s notorious private language argument. This will lead us to a discussion of the reception of Wittgenstein in the writings of McDowell and Crispin Wright. We will also study Shoemaker’s attack on the perceptual model of introspection, and his famous claim that first-person thoughts are immune to error through misidentification. This brings us to more recent accounts of introspection, in particular the work of Evans on self-identification, Peacocke on the role of consciousness, and Michael Martin on the limits of self-awareness.
    This is very much a research seminar, so suggestions from participants for articles to read are more than welcome.

    Digital Ethics

    As more and more aspects of our lives - including research in the humanities - become digitalized, there is an urgent need for careful reflection on the ethical issues raised by digitalization, informed both by an understanding of central ethical concepts and knowledge of how various technologies are deployed. This course is devoted to understanding the methods, principles, procedures, and institutions that govern the appropriate use of digital technology. Central ethical concepts addressed in the course include privacy, autonomy, nondiscrimination, transparency, responsibility, authenticity, and social justice. Central concepts from digital technology include datafication, algorithms, visualization, and access management.

    The course will make central use of the “Digital Ethics Decision Aid (DEDA)” developed by the Utrecht Data School with the collaboration of the Ethics Institute. Using this tool as a guide, we will examine several pivotal cases that raise fundamental issues regarding the responsible use of digital technology, such as the unintentional discovery of confidential information in medical scans or database searches, or disputed claims to authenticity or ownership related to digital reproduction.

    In addition, the field of ethics is itself subject to transformation, to the extent that a variety of digital methods are increasingly used to assist, automate, or even replace decision-making. Central here are questions regarding the implications of Big Data processing, “smart” searchbots, automated decision supports, and techniques of data visualization for ethical judgments.

    Informed by the lectures, readings, seminar discussions, and hands-on use of the DEDA, students form research teams to work jointly in developing and presenting their own ethical analyses of a concrete case. Building on the experience of a concrete analysis, students then each write a research paper on a digital ethics topic of their own choosing.

    Interested M.A. students without a background in philosophy, ethics, or digital humanities may qualify to take the course; however, they should first contact the course coordinator.
    The entrance requirements for Exchange Students will be checked by International Office and the Programme coordinator. Therefore, you do not have to contact the Programme coordinator yourself.

    Topics in Epistemology and Philosophy of Science

    Topic of 2020-2021:

    Social epistemology is a relatively recent subdiscipline that investigates the epistemic effects of social interactions and social systems. For most of this course we will understand this as complementary and not opposed to more traditional, "individual" epistemology. Social epistemology is a very active field of research that has produced a lot of exciting publications in recent years. Part of its appeal is due to its immediate applicability to pressing societal issues like, for instance, the phenomenon of filter bubbles and echo chambers, the problem of "fake news", or the (apparent) rise of conspiracy theories in political discourse.
    In this course we will first examine different ways to characterize social epistemology itself. As the course progresses, we will focus on some of the central topics in social epistemology: testimony, peer disagreement, the problem of identifying experts, epistemic injustice, group justification, and the epistemology of collective agents.
    The central reading will be Alvin Goldman and Dennis Whitcomb's anthology "Social Epistemology", but we will also discuss recent publications by authors such as Miranda Fricker, Sanford Goldberg, Jennifer Lackey, or Thi Nguyen.
    This course is for students in the RMA Philosophy programme and History & Philosophy of Science; students from other M.A. programmes (such as Applied Ethics) should check with the course coordinator or the RMA Philosophy coordinator before enrolling, to ensure that they have the requisite philosophical background. The entrance requirements for Exchange Students will be checked by the International Office and the Programme coordinator. Therefore, you do not have to contact the Programme coordinator yourself.

    Social and Affective Neuroscience

    Period (from – till): 12 January 2022 - end date tba.
    Dr. David Terburg
    Departement Psychologie
    Faculteit Sociale Wetenschappen
    5 lectures, course coordinator.

    Dr. Peter Bos
    Departement Psychologie
    Faculteit Sociale Wetenschappen
    2 lectures

    Dr. Jack van Honk
    Departement Psychologie
    Faculteit Sociale Wetenschappen
    2 lectures
    Course description
    This course offers comprehensive knowledge of the theoretical and experimental paradigms in the neuroscience of social and emotional behavior, based on the latest developments in these fields. The future of science as a “unity of knowledge” best reflects itself in Social and Affective Neuroscience. The primary aim is to teach students about the state-of-the-art in these multidisciplinary burgeoning fields, which combine neuroscience, psychology, biology, endocrinology, and economics, and to show how this multidisciplinary approach contributes to new knowledge concerning brain functions and social psychopathologies (e.g. social phobia, psychopathy, autism).
    In this course we want to show you what the exciting field of social neuroscience looks like today, not only by giving an overview of the most important work in this field, but also by letting you practice the activities of a social neuroscientist. Therefore, this course offers both theoretical lectures and practical sessions. Each Social & Affective Neuroscience course day starts with a lecture, followed by an activity or assignment in which you become a social neuroscientist yourself.

    Literature/study material used
    Recent Scientific Review Articles on the Neuroscience of Emotion and Emotional Disorders (updated each year).
    You can register for this course via Osiris Student. More information about the registration procedure can be found here on the Studyguide.
    Mandatory for students in Master’s programme
    * CN students are strongly recommended to follow one of these courses:
    Social and Affective Neuroscience and/or Neurocognition of memory and attention

    Optional for students in other GSLS Master’s programme:

    Prerequisite knowledge:
    Relevant BA

    Neurocognition of Memory and Attention

    Period (from – till): 7 February 2022 - 23 May 2022
    Prof. Dr. J.J. Bolhuis, Sociale Wetenschappen / Bètawetenschappen – Psychologische Functieleer (course coordinator),
    Prof. Dr. J.L. Kenemans, Sociale Wetenschappen / Bètawetenschappen – Psychologische Functieleer,
    Prof. Dr. A. Postma, Sociale Wetenschappen – Psychologische Functieleer,
    Prof. N. Ramsey, UMCU.
    Course description
    Topics in Memory and Attention research, especially those concerning the interface of attention and memory (e.g., working memory and the control of selective attention), as well as the interfaces between memory/ attention and other domains (perception, action, emotion). The main emphasis is on underlying neurobiological processes, as revealed in human and animal models.
    The course consists of 15 sessions during the above period, on Monday afternoons from 15:15 to 17:00.

    Literature/study material used:

    L. Kenemans & N. Ramsey (2013). Psychology in the brain: Integrative cognitive neuroscience (293 pages). Palgrave Macmillan.

    Articles: To be announced

    You can register for this course via Osiris Student. More information about the registration procedure can be found here on the Studyguide.
    The maximum number of participants is 40.

    Mandatory for students in own Master’s programme:

    Optional for students in other GSLS Master’s programme:

    Prerequisite knowledge:
    Relevant bachelor, basic neuroscience (as in “Cognitive Neuroscience” by Gazzaniga et al.)

    Philosophy of Neuroscience

    Period (from - till): Period 4: 30 May 2022 - June 2022, exact end date TBA.

    Course description
    This course offers a compact, rigorous, and practical journey through the philosophy of neuroscience, the interdisciplinary study of neuroscience, philosophy, cognition, and mind. Philosophy of neuroscience explores the relevance of neuroscientific studies in the fields of cognition, emotion, consciousness, and philosophy of mind by applying the conceptual rigor and methods of philosophy of science. The teaching starts with the basics of philosophy of science, including the work of Popper, Lakatos, Kuhn, and Feyerabend, and uses a methodological evaluation scheme developed from this work that allows rigorous evaluation of neuroscientific research as science or pseudoscience. Attention will also be paid to the historical roots of neuroscience, starting with Aristotle, as well as to conceptual problems in neuroscience, methodological confusions in neuroscience, dualism, and physicalism. The main aim of the course is to provide a wide-ranging understanding of the significance, strengths, and weaknesses of the fields of neuroscience, which supports critical thinking, creativity, methodological precision, and scientific writing.

    Literature/study material used
    Book Chapters and Articles on Neurophilosophy and Philosophy of Neuro(science).
    You can register for this course via Osiris Student. More information about the registration procedure can be found here on the Studyguide. Max. 25 students per edition.

    Mandatory for students in own Master’s programme:

    Optional for students in other GSLS Master’s programme:

    Basic fMRI Analysis

    Period (from – till): 7 February 2022 - 11 March 2022

    Course coordinator: Dr. Mathijs Raemaekers
    Brain Center Rudolf Magnus
    University Medical Center Utrecht

    Course description
    Functional Magnetic Resonance Imaging (fMRI) is one of the major methods for measuring neural activity in humans, and techniques for processing and analysing the data are under constant development. A basic understanding of analysis techniques is not only relevant for students who plan to work with fMRI data, but also necessary for critical evaluation of the existing literature. The course provides students with hands-on experience in executing the most well-established techniques, performing a full fMRI analysis from individual datasets to group-wise results. Students will learn to perform the necessary steps using the SPM12 software package (Statistical Parametric Mapping, version 12). The course includes:
    -General properties of the MRI/fMRI data formats
    -fMRI preprocessing including:

    1. Correction for subject head motion in the scanner (realignment)
    2. Aligning MRI images of different modalities (coregistration)
    3. Accounting for differences in timing of the different slices in fMRI datasets (Slice timing correction)
    4. Transforming individual brains to standard space to allow for comparisons across subjects (Normalization)

    -Statistical Analysis including:

    1. Detecting brain activity in individual subjects using the General Linear Model
    2. Correcting the statistical results for multiple comparisons
    3. Performing second-level/groupwise statistics using the General Linear Model

    There is a strong focus on practical application: each theoretical background is immediately followed by implementation during combined lecture/workgroup sessions. In addition, students get home assignments to analyse data individually. Students must have a laptop with an installation of MATLAB 2007a or later; MATLAB with a student licence can be obtained from … All other software will be provided during the course. Following the Basic fMRI Analysis course is a prerequisite for following the Advanced fMRI Analysis course.

    Literature/study material used:
    -SPM12 starters Guide
    -SPM12 manual
    -Reader fMRI preprocessing & analysis
    -Lecture slides
    Course material will be provided as PDFs before and during the course.

    You can register for this course via Osiris Student. More information about the registration procedure can be found here on the Studyguide.

    Mandatory for students in Master’s programme:

    Optional for students in other GSLS Master’s programme:

    Prerequisite knowledge:
    Basic Statistical Knowledge

    Requirements engineering

    The course will cover the following topics:

    • The RE process and its activities
    • Standards and tools
    • Agile RE, user stories
    • Requirements elicitation
    • Linguistic aspects of natural language requirements
    • From requirements to architectures
    • Requirements prioritization
    • Maturity assessment
    • (Verification of) formal specifications
    • Release planning
    • Requirements traceability
    • Crowd RE

    All information about the course will be made available through Blackboard before the course starts.

    To qualify for the retake exam, the grade for the original exam must be at least 4.

    Software architecture

    The course on software architecture deals with the concepts and best practices of software architecture. The focus is on theories explaining the structure of software systems and how a system’s elements are meant to interact given the imposed quality requirements. Topics of the course are:

    • Architecture influence cycles and contexts
    • Technical relations, development life cycle, business profile, and the architect’s professional practices
    • Quality attributes: availability, modifiability, performance, security, usability, testability, and interoperability
    • Architecturally significant requirements, and how to determine them
    • Architectural patterns in relation to architectural tactics
    • Architecture in the life cycle, including generate-and-test as a design philosophy; architecture conformance during implementation
    • Architecture and current technologies, such as the cloud, social networks, and mobile devices
    • Architecture competence: what this means both for individuals and organizations

    ICT entrepreneurship

    There is no content available for this course.

    Business intelligence

    This course deals with a collection of computer technologies that support managerial decision making by providing information on both internal and external aspects of operations. These technologies have had a profound impact on corporate strategy, performance, and competitiveness, and are collectively known as business intelligence. During this course the following BI topics will be covered:

    • Business perspective
    • Statistics
    • Data management
    • Data integration
    • Data warehousing
    • Data mining
    • Reporting and online analytic processing (i.e., descriptive analytics)
    • Quantitative analysis and operations research (i.e., predictive analytics)
    • Management communications (written and oral)
    • Systems analysis and design
    • Software development

    Method engineering

    Method engineering is the engineering discipline concerned with designing, constructing, and adapting methods, techniques, and tools for the development of information systems. Just as software engineering is concerned with all aspects of software production, method engineering deals with all engineering activities related to methods, techniques, and tools. Typical topics in the area of method engineering are:

    • Method description and meta-modeling
    • Method fragments, selection and assembly
    • Situational methods
    • Method frameworks, method comparison
    • Incremental methods
    • Knowledge infrastructures, meta-case, and tool support

    Research internship AI

    The AI research internship is a project that can be performed individually or, in some cases, in a small group. The topic of the research internship should be relevant to artificial intelligence and may involve software development, a theoretical investigation, or experimental research. The project should always have an (applied) research component, and the student should always deliver a research report in addition to, for example, software or experimental results. There are two types of projects.

    • External projects are non-thesis internships performed at an external organisation (university, research institute, company). The supervision is mostly done at the external institute, but a UU examiner who determines the final grade is needed as well.
    • Internal projects are performed with the AI staff at Utrecht University. Possible internships are posted on KonJoin, or a student can contact AI staff and/or propose their own concrete project ideas. The project is then supervised and examined directly by a UU staff member.

    Note that because of our focus on facilitating thesis projects, a supervisor/examiner cannot always be guaranteed for research internships.
    Please start looking for an examiner in good time, so that you can follow another course if the research internship turns out not to be feasible.

    There is a 7.5 EC research internship (INFOMRIAI) and a 15 EC research internship (INFOMRIAI1).
    Research internships can start and end at any time, but please take study load and possible university holidays into account.