Courses

The Artificial Intelligence Master's programme consists of the following course parts:
 

  • Compulsory courses (16 EC): 2 courses that every student in the programme takes (Methods in AI research and Philosophy of AI), plus two mini-courses (Introducing natural sciences and Dilemmas of the scientist)
  • Primary electives (30 EC): 4 courses (out of a set of 12 courses) 
  • Secondary electives (30 EC): 4 courses to be freely chosen from, among others:
    - the remaining primary electives options;
    - courses from other related Master's programmes, such as other computer science programmes, Neuroscience and Cognition, Linguistics and Philosophy, etc.;
    - other master's courses from within or outside the UU (subject to approval);
    - a research internship.

Compulsory courses (16 EC)

Methods in AI research (compulsory)

Because of its interdisciplinary character, the variety of techniques used in artificial intelligence is considerable. This course provides an overview in three modules, each focusing on a different aspect of artificial intelligence: techniques from logic and linguistics (module 1), from computer science (module 2), and from cognitive neuroscience (module 3). With reasoning as the general theme, the course shows what forms it can take in the different areas, ranging from idealization (module 1) to computation (module 2) to experiments (module 3).

In the first module, students study the formal aspects of reasoning in artificial intelligence. After an introduction describing the emergence, through the ages, of formal reasoning in philosophy and the sciences, students are introduced to various formal systems and methods of proof, such as natural deduction, sequent calculi, Hilbert-style systems and display calculi. It is shown how different views on reasoning can be captured in these proof systems; as examples, intuitionistic logic, linear logic as used in linguistics, and fuzzy logic are discussed. The type-theoretic Curry-Howard isomorphism that connects proofs to programs is treated, and it is shown how various aspects of reasoning, such as its complexity, can be captured precisely by the structure and size of proofs. The relation to famous open problems in computer science is explained. At the end of this module, students are able to construct proofs in the various proof systems and to translate proofs from one format to another. They understand and can use the logics mentioned above, understand the different views on reasoning underlying them, understand the proofs-as-programs paradigm, and know what a normal or cut-free proof is.

The second module covers various foundational techniques that are used for the development of intelligent systems in general and artificial agents in particular.
We begin by refreshing the student's memory with a crash course in modal logic. We start with the general framework (syntax and Kripke-style semantics) and review applications such as epistemic logic, doxastic logic, temporal logic, dynamic logic and deontic logic. We also treat so-called minimal-model modal logic with neighbourhood semantics, with coalition logic as an application. Students will do exercises with these standard techniques to be properly prepared for the other courses in the curriculum, in particular Intelligent Agents. Following the modal logic part, we focus on multi-agent programming techniques that can be used to implement multi-agent systems. We present a multi-agent programming language, give its operational semantics, and explain how its properties can be analysed by means of modal logic. Students will work through programming exercises to master the use of the language. The final part of this module discusses probabilistic techniques for multi-agent learning, such as conditional expectation, Markov chains, Markov reward chains, decision reward chains, Markov decision processes (MDPs), and stochastic games (multi-player MDPs).

In the third module, students are introduced to current methods in cognitive brain research, with examples taken from vision and language research, using classroom lectures and practicals. This module covers the entire spectrum of skills and techniques available to the cognitive neuroscience community, including neurophysiological research methods (such as fMRI and EEG), psychophysics, experimental design, modeling and basic data analysis. The practicals give students hands-on experience with a number of techniques to create experiments in vision and/or language, acquire data, and analyze the results from these experiments using SPSS and Matlab.
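The Kripke-style semantics reviewed in the second module can be sketched in a few lines of code. This is an illustrative toy with made-up names, not course material: a formula []p holds at a world iff p holds at every accessible world, and <>p iff p holds at some accessible world.

```python
# Minimal sketch of Kripke-style semantics for atomic formulas (illustrative only).
# A model: a set of worlds, an accessibility relation, and a valuation mapping
# each world to the set of atomic propositions true there.

def box_holds(world, relation, valuation, prop):
    """[]p holds at `world` iff p holds at every accessible world."""
    successors = [v for (u, v) in relation if u == world]
    return all(prop in valuation[v] for v in successors)

def diamond_holds(world, relation, valuation, prop):
    """<>p holds at `world` iff p holds at some accessible world."""
    successors = [v for (u, v) in relation if u == world]
    return any(prop in valuation[v] for v in successors)

# Two-world model: w1 sees w2, and p is true only at w2.
R = [("w1", "w2")]
V = {"w1": set(), "w2": {"p"}}
```

Here `box_holds("w1", R, V, "p")` is true, since p holds at the only world w1 can see; extending this evaluator to full modal formulas is a typical exercise.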

Philosophy of AI (compulsory)

This course will make students familiar with fundamental issues in the philosophy of AI, and will introduce them to several current discussions in the field. Students will practice their argumentation and presentation skills, both in class discussions and in writing.
The course is split into three parts. The first part is a quick overview of the fundamental issues and core notions in the philosophy of AI. It addresses topics such as the Turing Test, the Chinese Room Argument, machine intelligence, machine consciousness, weak and strong AI, and the Symbol System Hypothesis. In order to establish a shared background for all students, the material of this part will be assessed with an entrance test as early as week 3.
In the second part of the course, there will be an in-depth discussion of several current topics in the field, for example on ethics and responsibility in AI, transhumanism, or the relation between AI and data science. On each topic, there will be a lecture, and a seminar with class discussions and student presentations. Students prepare for those discussions by posting a thesis with one or more supporting arguments about the required reading. In the third part of the course, students will write a philosophical paper, and will provide feedback on their fellow students' draft papers.

This course is for students of Artificial Intelligence, History and Philosophy of Science, and the RMA Philosophy. Students of other MA programmes should contact the course coordinator.
The entrance requirements for exchange students will be checked by the International Office and the programme coordinator, so exchange students do not have to contact the programme coordinator themselves.

Introducing Natural Sciences

Two morning sessions with several speakers introduce the student to the education system of the graduate school: its rules, its curricula, general and practical information about personnel and administration, specific information about the programme itself, the programme board's expectations of its students, honours education, specific profiles across disciplines, and the teaching profession.
Knowing what kinds of skills and attitudes the labour market is looking for is considered important. Workshops will train students to become more aware of their own strengths and weaknesses, or introduce them to the work and life of PhD students.
Students will have ample time to get to know each other and their programme board.
Lunches, drinks and a concluding dinner will be organised.

Dilemmas of the scientist

This course consists of one workshop, which discusses dilemmas of integrity in the practice of academic research. Students will learn what such dilemmas are and how to deal with them in practice.

Students can only attend this course after they have completed the first workshop.

Primary electives (30 EC)

Logic and Computation

Students will learn how to answer one or more of the following research questions by means of an actor-based methodology, in which each question is addressed from multiple perspectives:

  • What is a program?
  • What is a computer?
  • What are the practical implications of undecidability?
  • What is the distinction between a stored-program computer and a universal Turing machine?
  • What is the difference between a model (of computation) and a physical computer?

This is a reading & writing course. Attendance is obligatory. Homework will be handed out in the first week of class, with a firm deadline in the second week. Late arrivals in class will be tolerated only once; further late arrivals can lead to a deduction of the student's final grade. The aim of the part of the course on proofs as programs is to gain an understanding of type theory and its role within logic, linguistics, and computer science, and to get acquainted with the Curry-Howard correspondence, which relates types to propositions and programs to proofs.
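The Curry-Howard correspondence mentioned above can be given a minimal executable flavour. The sketch below is illustrative, not course material: a program of type A -> A is a proof of the implication A implies A, and function composition proves (B -> C) -> ((A -> B) -> (A -> C)).

```python
# Proofs as programs, sketched with Python type hints (illustrative only).
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proof of A -> A: the identity function inhabits this type.
def identity(x: A) -> A:
    return x

# Proof of (B -> C) -> ((A -> B) -> (A -> C)): function composition,
# written in curried form so the type mirrors the implication.
def compose(g: Callable[[B], C]) -> Callable[[Callable[[A], B]], Callable[[A], C]]:
    def inner(f: Callable[[A], B]) -> Callable[[A], C]:
        return lambda x: g(f(x))
    return inner
```

In a language with a stricter type system the analogy is exact: a type is inhabited precisely when the corresponding proposition is provable.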

This course is for students in Artificial Intelligence, as well as students in History and Philosophy of Science and the RMA Philosophy. Students of other MA programmes should contact the course coordinator. Students of History and Philosophy of Science or Artificial Intelligence who experience problems with enrollment should contact the Student Desk Humanities, studentdesk.hum@uu.nl.

Intelligent agents

This course is about the theory and realisation of intelligent agents: pieces of software that display some degree of autonomy, realised by incorporating "high-level cognitive/mental attitudes" into both the modelling and the implementation of this kind of software. The agent concept calls for an integration of several topics in artificial intelligence, such as knowledge representation and reasoning (in particular reasoning about action and change) and planning. In the course, time is devoted to the philosophical and theoretical (mostly logical) foundations of the area of intelligent agents. Furthermore, ways of realising agents through special architectures and so-called agent-oriented programming languages, in which one can program the "mental states" of agents, are described. This course presents the introductory theory for the agent-oriented courses in the Master's programme.

Machine learning for human vision and language

Machine learning with deep convolutional neural networks (deep learning) is being applied increasingly broadly in computer science, technology and scientific research. This method allows computer systems to perform tasks that were previously impossible or inaccurate for computers, but are typically straightforward for humans. Tasks like visual object identification and natural language processing have traditionally been investigated by cognitive scientists and linguists, but applications of deep learning to these tasks also position them at the center of current artificial intelligence developments. Therefore, it is important for AI students and researchers to understand the links between cognitive science and AI.

In this course, you will learn the principles behind deep learning, an approach inspired by the structure of the brain. You will learn how these principles are implemented in the brain, focusing on the two aspects of visual processing and language (semantic or syntactic) processing. You will build your own deep learning systems for the interpretation of natural images and language, using modern high-level neural network APIs that make implementation of these systems accessible and efficient.

The course goals will be examined in the following ways:
- Students will attend lectures introducing the approach taken in deep learning systems, comparing this to how deep learning is implemented in biological brains, and introducing the main applications of deep learning to cognitive science and linguistics. Their understanding of this content will be assessed in a final exam.
- Students will participate in discussions and reviews of relevant literature, which will be graded.
- Students will work through lab practical assignments on visual processing and on language processing. The resulting reports will be graded.

Logic and Language

This course covers advanced methods and ideas in the logical analysis of language, especially in relation to type-logical grammars, the parsing-as-deduction paradigm, and their combination with formal semantics of natural language. The course has a 'capita selecta' format, focusing on various aspects of the connection between language and reasoning. The 2014-2015 installment studies discourse dynamics from the perspective of continuations and continuation-passing-style interpretations. In the first part of the course, we study the origin of these concepts in computer science (the control operators from programming language theory) and in logic (double-negation embeddings of classical logic into intuitionistic logic). In the second part of the course, we discuss the growing body of literature on natural language semantics that uses continuations to explicitly include the context of evaluation as a parameter in the meaning composition process. Topics include quantifier scope and evaluation order, cross-sentential anaphora, and dynamic logic with exceptions.

Students of Artificial Intelligence: for registration, please contact your programme coordinator during the enrollment period.

Advanced machine learning

Modern machine learning methods have achieved spectacular results on various tasks. Yet there are pitfalls and limitations that can't be overcome simply by increasing the amounts of data and computing power. For example, standard methods assume that the data are drawn from a single, unchanging probability distribution. The two main topics that we cover in this course both deal with situations where that is not the case.

The first topic, causal inference, is the subfield of machine learning that studies causes and effects: if we make a change to one random variable in a system, for which other variables does the distribution change? An understanding of these cause-and-effect relations allows us to predict the results of a change in the environment. We will also look at the problem of learning these relations from data.

Second, reinforcement learning is about the design of agents that can learn to interact with an unknown environment. Reinforcement learning methods can build on recent advances in supervised learning (such as deep learning), which brings with it a unique set of challenges that we cover in this course.
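The sequential decision problems underlying reinforcement learning are usually formalised as Markov decision processes. A minimal value-iteration sketch for a small, fully known MDP is shown below; with a known model this is planning rather than learning, and all names and numbers are illustrative, not the course's material.

```python
# Value iteration on a tiny MDP (illustrative sketch).
# P[s][a] is a list of (probability, next_state, reward) triples.

def value_iteration(P, gamma=0.9, tol=1e-8):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            # Bellman optimality update: best expected return over actions.
            best = max(
                sum(p * (r + gamma * V[s2]) for (p, s2, r) in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Two-state example: from s0, action "go" reaches s1 with reward 1;
# s1 is absorbing with reward 0.
P = {
    "s0": {"go": [(1.0, "s1", 1.0)], "stay": [(1.0, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 0.0)]},
}
```

A reinforcement learner faces the same objective but must estimate these values from sampled transitions instead of reading them off the model.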

Cognitive Modeling

Formal models of human behavior and cognition that are implemented as computer simulations - cognitive models - play a crucial role in science and industry.

In science, cognitive models formalize psychological theories. This formalization allows one to predict human behavior in novel settings and to tease apart the parameters that are essential for intelligent behavior. Cognitive models are used to study many domains, including learning, decision making, language use, multitasking, and perception and action. The models take many forms including dynamic equation models, neural networks, symbolic models, and Bayesian networks.

In industry, cognitive models predict human behavior in intelligent 'user models'. These user models are used for example for human-like game opponents and intelligent tutoring systems that adaptively change the difficulty of a game or training program to a model of the human's capacities. Similarly, user models are used in the design and evaluation of interfaces: what mistakes are humans likely to make in a task, what information might they overlook on an interface, and what are the best points to interrupt a user (e.g., with an e-mail alert) such that this interruption does not overload them?

To be able to develop, implement, and evaluate cognitive models and user models, you first need to know which techniques and methods are available and which (scientific or practical) questions are appropriate to test with a model. Moreover, you need practical experience in implementing (components of) such models.

In this course you will get an overview of various modeling techniques that are used world-wide and also by researchers in Utrecht (esp. in the department of psychology and the department of linguistics). You will learn their characteristics, strengths and weaknesses, and their theoretical and practical importance. Moreover, you will practice with implementing (components of) such models during lab sessions.

Relationship between goals and examination
The learning goals will be examined in three ways:

  1. Students will implement components of cognitive models in computer simulations during computer practicals. These assignments will be graded.
  2. Students will evaluate the scientific literature by orally presenting and critiquing scientific papers that include cognitive models. The presentation and critiquing will be graded.
  3. Students will be tested on their general knowledge of cognitive models in an exam.

Multi-agent systems

This course focuses on multi-agent issues and will consist of lecture, seminar and lab sessions.
The lectures will cover the following topics:

  • Game theory
  • Auctions
  • Communication
  • Social choice
  • Mechanism Design
  • Normative Multi-Agent Systems

The seminar sessions consist of student presentations and will cover other multi-agent system issues such as:

  • Logics for Multi-Agent Systems
  • Multi-Agent Organisations and Electronic Institutions
  • Normative Multi-Agent Systems
  • Argumentation and Dialogues in Multi-Agent Systems
  • Multi-Agent Negotiation
  • Communication and coordination in Multi-Agent Systems
  • Development of Multi-Agent Systems

Each student is expected to present papers on one of the topics listed above.
In the lab sessions the students will develop multi-agent systems on different platforms such as 2APL and Jade.

Experimentation in Psychology and Linguistics

Both science and industry are interested in creating precise formal models of human behaviour and cognition. To help build, test and optimise such models, one needs to create and run experiments. Students participating in this course will learn (I) how to design experiments given an existing model, (II) how to implement experiments using various tools and, finally, (III) how to extract data from the recorded responses for analysis purposes.

Most theoretical claims in linguistics and psychology are made by positing a formal model. The aim of such models is to make precise predictions, and those predictions need to be tested with formal experiments. The results of an experiment may or may not lead to changes in the model, and thus to a new set of testable predictions. Careful experimental design is essential in this modelling-experimenting cycle. The course covers the practical and theoretical considerations for experimental research, from posing the research question to interpreting and reporting experimental results.

In industry, experiments are also used frequently: for example, to assess how people use interfaces (e.g., where do they look or click, and how does particular text influence their subsequent choices?), to test what the best design of a product is, or to test the appropriateness of a user model (e.g., do people learn what the model predicts them to learn, and do they have a more immersive experience when a model guides adaptation of the software?).

In this course you will get an overview of various experimentation techniques that are used world-wide and also by researchers in Utrecht (esp. in the departments of psychology and linguistics). You will learn how to use such techniques for testing specific models, as well as where the limits of these techniques lie. In the practicals you will also gain hands-on experience with the implementation, data manipulation and data analysis steps of experimenting.

The learning goals will be examined in three ways:

  1. Students will read and critically reflect on selected articles from the experimental literature. They will prepare a short presentation based on the critical reflection. The presentation will be graded.
  2. Students will implement experiments and work with experimental data during practicals. These will be graded.
  3. Students will design and implement an experiment on a topic of their own choice and write a note reporting on the experiment. Implementation and report will be graded.

Computational argumentation

In commonsense reasoning, people are often faced with incomplete, uncertain or even inconsistent information. To deal with this, they use reasoning patterns in which it can be rational to accept a conclusion even if its truth is not guaranteed by the available information. This course focuses on logics that systematise rationality criteria for such 'defeasible' reasoning patterns. Logics of this kind are often called 'nonmonotonic logics', since new information may invalidate previously drawn conclusions. This course covers some of the best-known nonmonotonic logics, in particular default logic, circumscription and argumentation systems, as well as formal theories of abductive reasoning. Attention is paid to the use of these formalisms in the specification of dynamic systems and in models of multi-agent interaction.
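The argumentation systems mentioned above can be given a minimal executable flavour. The sketch below computes the grounded extension of a Dung-style abstract argumentation framework by iterating the characteristic function from the empty set; names and the example framework are illustrative, not the course's material.

```python
# Grounded extension of an abstract argumentation framework (illustrative sketch).
# `attacks` is a set of (attacker, target) pairs.

def grounded_extension(arguments, attacks):
    extension = set()
    while True:
        acceptable = set()
        for a in arguments:
            attackers = {b for (b, t) in attacks if t == a}
            # `a` is acceptable if every attacker is itself attacked
            # by some argument already in the extension.
            if all(any((d, b) in attacks for d in extension) for b in attackers):
                acceptable.add(a)
        if acceptable == extension:   # fixed point reached
            return extension
        extension = acceptable

# Example: a attacks b, b attacks c. Then a is unattacked, and a defends c.
result = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
```

For this framework the grounded extension is {a, c}: a stands unattacked, a defeats b, and thereby reinstates c.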

Social computing

There is no content available for this course.

Multi-agent learning

This seminar focuses on forms of machine learning that typically occur in multi-agent systems. Topics include learning and teaching, fictitious play, rational learning, no-regret learning, targeted learning, multi-agent reinforcement learning and evolutionary learning.

Natural language processing

This course is an advanced introduction to the study of language from a computational perspective, and to the fields of computational linguistics (CL) and natural language processing (NLP). It synthesizes research from linguistics and computer science and covers formal models for representing and analyzing words, sentences and documents. Students will learn how to analyse sentences algorithmically and how to build interpretable semantic representations, with an emphasis on data-driven and machine learning approaches and algorithms. The course covers a number of standard models and algorithms (language models, HMMs, chart-based and transition-based syntactic parsing, distributed semantic models, various neural network models) that are used throughout NLP, as well as applications of these methods in tasks such as machine translation and text summarization.
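As a small taste of the language models mentioned above, a bigram model can be sketched in a few lines: estimate P(next word | current word) from counts in a corpus. The corpus and names below are an illustrative toy, not course material.

```python
# Minimal bigram language model from counts (illustrative sketch).
from collections import Counter, defaultdict

def train_bigram(sentences):
    counts = defaultdict(Counter)
    for sentence in sentences:
        # Sentence boundary markers let the model learn starts and ends.
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for w1, w2 in zip(tokens, tokens[1:]):
            counts[w1][w2] += 1
    # Normalise counts into conditional probabilities P(w2 | w1).
    return {
        w1: {w2: c / sum(nexts.values()) for w2, c in nexts.items()}
        for w1, nexts in counts.items()
    }

model = train_bigram(["the cat sat", "the dog sat", "the cat ran"])
```

Here "the" is followed by "cat" in two of its three occurrences, so the model assigns P(cat | the) = 2/3; smoothing for unseen bigrams is the obvious next refinement.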

Secondary electives (30 EC)

Program semantics and verification

There is no content available for this course.

Technologies for learning

In this course you will study advanced software technologies for learning, such as serious games in which you have to develop a sustainable city, simulations such as a virtual company that you have to run while competing against several other virtual companies, and intelligent tutoring systems for learning mathematics, physics, or logic. In particular, you will study the underlying intelligence necessary to determine what a student has learned, decide what a student should do next, give feedback to a student, and so on.
Student learning is supported by applications such as:

  • Serious games
  • Simulations
  • Intelligent Tutoring Systems
  • Exercise Environments
  • Automatic Assessment Systems

These applications use technologies such as:

  • Model tracing: does a student follow a desirable path towards a solution?
  • Static and (sometimes) dynamic analysis: what is the quality of a student solution?
  • Learning analytics: what do students do in a learning application?
  • User modeling: what does a student know?

which build upon:

  • Strategies, parsing and rewriting
  • Bayesian networks
  • Datamining
  • Constraint solving
  • Artificial Intelligence
  • Domain-specific technologies, such as compiler technology for the domain of programming.

Probabilistic reasoning

Human experts have to make judgments and decisions based on uncertain, and often even conflicting, information. To support these complex decisions, knowledge-based systems should be able to cope with this type of information. Probability theory is one of the oldest theories dealing with the concept of uncertainty. In this course, probabilistic models for manipulating uncertain information in knowledge-based systems are considered. More specifically, the theory underlying the framework of probabilistic networks is considered, and the construction of such networks for real-life applications is discussed.
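The kind of manipulation a probabilistic network supports can be shown with a minimal two-node example, Rain -> WetGrass, queried with Bayes' rule. The numbers and names are illustrative, not course material.

```python
# A two-node probabilistic network, Rain -> WetGrass (illustrative sketch).

P_rain = {True: 0.2, False: 0.8}              # prior P(rain)
P_wet_given_rain = {True: 0.9, False: 0.1}    # P(wet | rain), P(wet | no rain)

def p_wet():
    """Marginal P(wet) = sum over rain of P(rain) * P(wet | rain)."""
    return sum(P_rain[r] * P_wet_given_rain[r] for r in (True, False))

def p_rain_given_wet():
    """Diagnostic query via Bayes' rule: P(rain | wet)."""
    return P_wet_given_rain[True] * P_rain[True] / p_wet()
```

With these numbers, P(wet) = 0.2 * 0.9 + 0.8 * 0.1 = 0.26, and observing wet grass raises the probability of rain from 0.2 to 0.18 / 0.26, roughly 0.69; larger networks chain exactly this kind of computation over their graph structure.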

Data mining

If properly processed and analyzed, data can be a valuable source of knowledge. Data mining provides the theory, techniques and tools to extract knowledge from data. Learning models from data can also be an important part of building a decision support system. The computer plays an increasingly important role in data analysis: through the use of computers, computationally expensive data mining methods can be applied that were not even considered in the early days of statistical data analysis. In this course a number of well-known data mining algorithms are covered: the types of problems they are suited for, their computational complexity, and how to interpret and apply the models constructed with them.

Evolutionary computing

Evolutionary algorithms are population-based, stochastic search algorithms based on the mechanisms of natural evolution. This course covers how to design representations and variation operators for specific problems. Furthermore, convergence behavior and population sizing are analysed. The course focuses on the combination of evolutionary algorithms with local search heuristics to solve combinatorial optimization problems like graph bipartitioning, graph coloring, and bin packing.
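The interplay of representation, variation and selection can be sketched with a toy (1+1) evolutionary algorithm on the OneMax problem (maximise the number of 1-bits in a bit string). Everything below is an illustrative sketch, not the course's material.

```python
# (1+1) evolutionary algorithm on OneMax (illustrative sketch).
import random

def fitness(bits):
    return sum(bits)

def mutate(bits, rate):
    # Variation operator: flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in bits]

def one_plus_one_ea(n=20, generations=300):
    parent = [random.randint(0, 1) for _ in range(n)]
    history = [fitness(parent)]
    for _ in range(generations):
        child = mutate(parent, rate=1.0 / n)
        if fitness(child) >= fitness(parent):   # elitist selection
            parent = child
        history.append(fitness(parent))
    return parent, history
```

Because selection is elitist, the recorded fitness never decreases; hybridising such a loop with a local search step is exactly the combination this course studies for combinatorial problems.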

Big data

Big Data is as much a buzz word as an apt description of a real problem: the amount of data generated per day is growing faster than our processing abilities. Hence the need for algorithms and data structures which allow us, e.g., to store, retrieve and analyze vast amounts of widely varied data that streams in at high velocity.

In this course we will limit ourselves to the data mining aspects of the Big Data problem, more specifically to the problem of classification in a Big Data setting. To make algorithms viable for huge amounts of data, they should have low complexity; in fact, it is easy to think of scenarios where only sublinear algorithms are practical. That is, algorithms that see only a (vanishingly small) part of the data: algorithms that only sample the data.

We start by studying PAC learning, where we study tight bounds to learn (simple) concepts almost always almost correctly from a sample of the data; both in the clean (no noise) and in the agnostic (allowing noise) case. The concepts we study may appear to allow only for very simple – hence, often weak – classifiers. However, the boosting theorem shows that they can represent whatever can be represented by strong classifiers.

PAC learning algorithms are based on the assumption that a data set represents only one such concept, which obviously isn’t true for almost any real data set. So, next we turn to frequent pattern mining, geared to mine all concepts from a data set. After introducing basic algorithms to compute frequent patterns, we will look at ways to speed them up by sampling using the theoretical concepts from the PAC learning framework.
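The frequent pattern mining introduced above can be sketched in the Apriori style: grow candidate itemsets level by level and keep those whose support meets a threshold. The code and data below are an illustrative toy, not the course's implementation.

```python
# Apriori-style frequent-itemset mining (illustrative sketch).
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    level = [frozenset([i]) for i in items]
    while level:
        # Count the support of each candidate at this level.
        counts = {c: sum(c <= t for t in transactions) for c in level}
        kept = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(kept)
        # Candidates for the next level: unions of kept sets, one item larger.
        size = len(level[0]) + 1
        level = list({a | b for a, b in combinations(kept, 2)
                      if len(a | b) == size})
    return frequent

data = [frozenset(t) for t in [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "c"}]]
```

On this data with minimum support 2, all singletons and all pairs are frequent but {a, b, c} (support 1) is not; sampling-based speed-ups replace the exact counts with estimates backed by PAC-style guarantees.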

Pattern set mining

Pattern mining is characteristic of data mining. Whereas data analysis is usually concerned with models, i.e., succinct descriptions of all the data, pattern mining is about local phenomena. Patterns describe (or even are) subgroups of the data that for some reason are deemed interesting; the description and the reason usually involve some of the variables (attributes, features) rather than all of them.

In the past few decades, the total existence of data mining, pattern mining has proven to be a fruitful research area, with many thousands of papers describing a wide variety of pattern languages, interestingness functions, and even more algorithms to discover them. However, there is a problem with pattern mining: databases tend to exhibit many, very many patterns. It is not uncommon to discover more patterns than one has data. Hardly an ideal situation. Hence the rise of pattern set mining: can we define and find relatively small, good sets of patterns?

In this course we start with a brief discussion of pattern mining. After that, we discuss parts of the literature on pattern set mining; only parts, because there is too much to discuss it all. What types of solutions have been proposed? How do they work and, indeed, do they work?

Multimedia retrieval

Multimedia retrieval is about the search for and delivery of multimedia documents: images, sound, video, 3D scenes, and combinations of these. This course deals with the technical aspects of multimedia retrieval, such as techniques, algorithms, and data structures for search query formulation, media feature description, matching of descriptions, and indexing.

Pattern recognition

In this course we study statistical pattern recognition and machine learning.

The subjects covered are:

  • General principles of data analysis: overfitting, the bias-variance trade-off, model selection, regularization, the curse of dimensionality
  • Linear statistical models for regression and classification
  • Clustering and unsupervised learning
  • Support vector machines
  • Neural networks and deep learning

Knowledge of elementary probability theory, statistics, multivariable calculus and linear algebra is presupposed.

Computer vision

Just as "seeing" the world with your eyes is important and beneficial, so is computer vision for machines. The goal of computer vision is to make computers work like human visual perception: to recognize and understand the world through visual information such as images or videos. Human visual perception, after millions of years of evolution, is extremely good at understanding and recognizing objects and scenes. To give computers similar abilities (or go beyond them), computer scientists have been developing algorithms that rely on various kinds of visual information, and this course is about those algorithms, particularly their practical side.

Crowd simulation

There is no content available for this course.

Social and Affective Neuroscience

Period (from – till): January 2020 - March 2020
Lecturer(s)
Dr. Estrella Montoya
Department of Psychology
Faculty of Social Sciences
4 lectures, 100% of the preparation and grading work for the exams

Dr. Peter Bos
Department of Psychology
Faculty of Social Sciences
2 lectures

Dr. Jack van Honk
Department of Psychology
Faculty of Social Sciences
2 lectures

Dr. David Terburg
Department of Psychology
Faculty of Social Sciences
1 lecture
Course description
This course offers comprehensive knowledge of the theoretical and experimental paradigms in the neuroscience of social and emotional behavior, based on the latest developments in these fields. The future of science as a "unity of knowledge" is best reflected in Social and Affective Neuroscience. The primary aim is to teach students the state of the art in these burgeoning multidisciplinary fields, which combine neuroscience, psychology, biology, endocrinology, and economics, and to show how this multidisciplinary approach contributes to new knowledge concerning brain functions and social psychopathologies (e.g. social phobia, psychopathy, autism).
In this course we want to show you what the exciting field of social neuroscience looks like today, not only by giving an overview of the most important work in this field but also by letting you practice the activities of a social neuroscientist. The course therefore offers both theoretical lectures and practical sessions. Each Social & Affective Neuroscience course day starts with a lecture, followed by an activity or assignment in which you become a social neuroscientist yourself.

Literature/study material used
Recent Scientific Review Articles on the Neuroscience of Emotion and Emotional Disorders (updated each year).
Registration
You can register for this course via Osiris Student. More information about the registration procedure can be found here on the Studyguide.
Mandatory for students in Master’s programme
* CN students are strongly recommended to follow one of these courses:
Social and Affective Neuroscience and/or Neurocognition of memory and attention

Optional for students in other GSLS Master’s programme:
Yes.

Prerequisite knowledge:
Relevant BA

Neurocognition of Memory and Attention

Period (from – till): 3 February 2020 - 25 May 2020
Faculty
Prof. Dr. J.L. Kenemans, Faculty of Social Sciences / Faculty of Science – Experimental Psychology,
Prof. Dr. A. Postma, Faculty of Social Sciences – Experimental Psychology,
Prof. Dr. J.J. Bolhuis, Faculty of Social Sciences / Faculty of Science – Experimental Psychology,
Prof. N. Ramsey, UMCU.
Course description
Topics in Memory and Attention research, especially those concerning the interface of attention and memory (e.g., working memory and the control of selective attention), as well as the interfaces between memory/ attention and other domains (perception, action, emotion). The main emphasis is on underlying neurobiological processes, as revealed in human and animal models.
The course consists of 15 sessions during the above time period, on Monday afternoons from 15:15 to 17:00.

Literature/study material used:
Books:

L. Kenemans & N. Ramsey (2013). Psychology in the Brain: Integrative Cognitive Neuroscience (293 pages). Palgrave Macmillan.

Articles: To be announced

Registration:
You can register for this course via Osiris Student. More information about the registration procedure can be found here on the Studyguide.
The maximum number of participants is 40.

Mandatory for students in own Master’s programme:
No.

Optional for students in other GSLS Master’s programme:
Yes.

Prerequisite knowledge:
Relevant bachelor, basic neuroscience (as in “Cognitive Neuroscience” by Gazzaniga et al.)

Philosophy of Neuroscience

Period (from - till): June 2020

Course description
This course offers a compact, rigorous and practical journey through the philosophy of neuroscience, the interdisciplinary study of neuroscience, philosophy, cognition and mind. Philosophy of neuroscience explores the relevance of neuroscientific studies in the fields of cognition, emotion, consciousness and philosophy of mind, by applying the conceptual rigor and methods of philosophy of science. The teaching starts with the basics of philosophy of science, including the work of Popper, Lakatos, Kuhn and Feyerabend, and uses a methodological evaluation scheme developed from this work that allows rigorous evaluation of neuroscientific research as science or pseudoscience. Furthermore, there will be attention for the historical roots of neuroscience, starting with Aristotle, as well as for conceptual problems and methodological confusions in neuroscience, dualism and physicalism. The main aim of the course is to provide a wide-ranging understanding of the significance, strengths and weaknesses of the fields of neuroscience, which helps develop critical thinking, creativity, methodological precision and scientific writing.

Literature/study material used
Book Chapters and Articles on Neurophilosophy and Philosophy of Neuro(science).
Registration
You can register for this course via Osiris Student. More information about the registration procedure can be found here on the Studyguide. Max. 40 students.

Mandatory for students in own Master’s programme:
No

Optional for students in other GSLS Master’s programme:
Yes

Applied Cognitive Psychology II

In this course students will learn how to, as an applied cognitive psychologist, apply knowledge of human cognitive, sensory, and motor abilities in day-to-day practice. To this end, topics from applied cognitive psychology, such as product ergonomics, decision making, signal detection theory, Fitts' law, and information theory will be discussed. Through lectures, visiting lecturers from professional practice, and assignments the student learns how psychological knowledge can be applied in everyday practice and how a question from daily practice can be investigated. In addition, the student will learn (computer) skills which allow the student to work as a cognitive psychologist in a company. The guest lectures provide the student with examples and information on applications of cognitive psychology in the occupational field.

Aspects of academic development

  • Academic level of thinking and acting
  • Translating psychological knowledge to the occupational field
  • Studying, structuring and analyzing information

Foundations of Sound Patterns

This course offers an introduction to major theoretical approaches and core methodologies in the areas of phonetics, phonology, and infant sound acquisition.

Reasoning about meaning in linguistic communication

Meaning is a slippery, multifaceted concept. This is mainly because, when we communicate by linguistic means, meaning comes about not just via linguistic conventions but also via reasoning processes that are integral to communicative interaction. In this course we look at formal and computational theories of both linguistic meaning and the reasoning that underlies meaningful communication. A key ingredient of any such theory is the semantics/pragmatics distinction. This division between conventional linguistic sources of meaning on the one hand and meanings that are intentional in nature on the other is often a core assumption made in theories of linguistic communication. But it is also a source of intense debate, since many of the hot topics in the study of meaning today are topics that straddle the semantics/pragmatics divide in interesting and largely unexpected ways. Interestingly, the emerging debates rely heavily on empirical and analytical methods that are new to the field, ranging from experimental to computational methods. As a result, the study of meaning in linguistic communication is shifting from an analytical philosophical discipline to a field that overlaps with cognitive science and artificial intelligence.

A central question raised throughout the course is what analytical tools we need to conduct a science of meaning. The analytical philosophical tradition has it that it suffices to relate meaning to truth-conditions (the circumstances under which a sentence is true), but there are clear drawbacks to such a narrow view. In the course, we look at ways of going beyond the orthodoxy, for instance by asking what role probabilistic, or more in general, computational models could play in a theory of meaning.

The goal of this course is twofold: (i) to allow the students to understand some of the key empirical and theoretical questions that drive research in this area; (ii) to have the students acquire skills that allow them to conduct their own research in this area and propose novel models of meaning in linguistic communication.

Career orientation:
In the course you will work on further developing several general career skills, such as team work, communication, writing and project and time management. 

Cognitive and computational aspects of word meaning

Natural language semantics relies on various empirical methods, involving experimental data, machine learning, corpus analysis and linguistic questionnaires. The course presents topics where developing formal and computational semantic models heavily depends on empirical work in lexical and conceptual semantics, common sense reasoning, and computational semantics. Students choose a research problem and study selected articles on that problem. Based on this study, students formulate an empirical hypothesis and test it in the end project.

Career orientation:
Experimental and computational research; language technology.

Topics in Philosophy of Mind

This “Topics Seminar” explores in depth issues and texts in the philosophy of mind. The topic of 2019-2020 is John McDowell’s Mind and World.
In this course we’ll be reading John McDowell’s seminal book, along with some articles dealing with themes from the book. Philosophers have long struggled to give a satisfactory picture of the place of minds in the world. In this important book McDowell diagnoses why this problem is so persistent for (contemporary) philosophy and points an anti-reductionist way to a cure.

Digital Ethics

As more and more aspects of our lives - including research in the humanities - become digitalized, there is an urgent need for careful reflection on the ethical issues raised by digitalization, informed both by an understanding of central ethical concepts and knowledge of how various technologies are deployed. This course is devoted to understanding the methods, principles, procedures, and institutions that govern the appropriate use of digital technology. Central ethical concepts addressed in the course include privacy, autonomy, nondiscrimination, transparency, responsibility, authenticity, and social justice. Central concepts from digital technology include datafication, algorithms, visualization, and access management.

The course will make central use of the “Digital Ethics Decision Aid (DEDA)” developed by the Utrecht Data School with the collaboration of the Ethics Institute. Using this tool as a guide, we will examine several pivotal cases that raise fundamental issues regarding the responsible use of digital technology, such as the unintentional discovery of confidential information in medical scans or database searches, or disputed claims to authenticity or ownership related to digital reproduction.

In addition, the field of ethics is itself subject to transformation to the extent to which a variety of digital methods are increasingly used to assist, automate, or even replace decision-making. Central here are questions regarding the implications of Big Data processing, “smart” searchbots, automated decision support, and techniques of data visualization for ethical judgments.

Informed by the lectures, readings, seminar discussions, and hands-on use of the DEDA, students form research teams to work jointly in developing and presenting their own ethical analyses of a concrete case. Building on the experience of a concrete analysis, students then each write a research paper on a digital ethics topic of their own choosing.

Interested M.A. students without a background in philosophy, ethics, or digital humanities may qualify to take the course; however, they should first contact the course coordinator: j.h.anderson@uu.nl.
The entrance requirements for Exchange Students will be checked by International Office and the Programme coordinator. Therefore, you do not have to contact the Programme coordinator yourself.

Advanced cognitive and social psychology

Emerging technologies are progressively affecting the way we relate, connect, learn, and work. In this course you will study psychological processes associated with the use of digital technologies, to help understand how technology affects us, and to enhance interactions between humans and technologies. We will discuss research in relation to the use and design of a range of applications and devices, for instance, cell phones, social media, video games, and the Internet. The course will include topics such as the relation between cognitive processes and emotions, social identity and group behavior, and interpersonal relationships. The course will mainly draw on theories from cognitive and social psychology, and will involve critical analysis and understanding of these theories in light of our digital world.

Adaptive interactive systems

This course is about the design and evaluation of interactive systems that automatically adapt to users and their context. It discusses the layered design and evaluation of such systems. It shows how to build models of users, groups and context, and which characteristics may be useful to model (including for example preferences, ability, personality, affect, inter-personal relationships). It shows how adaptation algorithms can be inspired by user studies. It covers standard recommender system techniques such as content-based and collaborative filtering, as well as research topics such as person-to-person recommendation, task-to-person recommendation, and group recommendation. It also discusses explanations for adaptive interactive systems and usability issues (such as transparency, scrutability, trust, effectiveness, efficiency, satisfaction, diversity, serendipity, privacy and ethics). The course content will be presented in the context of various application domains, such as personalized behaviour change interventions, personalized news, and personalized e-commerce.

Multimedia discourse interaction

Seminar Multimedia Discourse Interaction

Multimedia Discourse Interaction addresses the complexity of interacting with information present in different information carriers, such as language (written or spoken), image, video, music and (scientific) data. The goal is to convey information to a user in an effective way.

Knowledge of cognitive capabilities and limitations, such as information processing speeds, can be used to inform the design of useful and efficient ways of searching, browsing, studying, analysing and communicating information in a way that is appropriate to a user's task, knowledge and skills. Subsequently, the fragments of relevant information that are selected from multiple sources must be combined for meaningful presentation to the user. Models and theories exist, for example in artificial intelligence, but also in the fields of film theory and computational linguistics, that describe communication structures, such as narratives or arguments. These can be used to inform the process of selecting and assembling specific media fragments or selections of data into a presentation appropriate to an end‐user's information needs.

Information presentation consists of combining atomic pieces of information into some communication structure that helps viewers understand the relationship between the pieces. For example, in text, multiple words are strung together according to established structures, namely grammatically correct sentences. Similarly, a media fragment, for example a film shot, represents some atom of meaning. Fragments can be combined into a communication structure meaningful to the viewer; this is precisely the task that a film director carries out. Communication structures for specific domains, for example structures relating the different positions in an argument about the utility of war, have been modelled in the literature. When these are implemented and used to present video fragments to a human viewer, the video sequence is perceived as conveying a coherent argument and discourse.

The seminar explores literature from diverse subfields, including artificial intelligence, semantic web, multimedia and document engineering, providing a range of perspectives on the challenges.

Course form
This course is set up as a seminar. It challenges the participants to acquire and disseminate knowledge about a complex subject in an interactive way. The moderators make a pre-selection of relevant research papers and web references. Students are expected to supplement these with their own literature search. They are expected to take the lead on proposing, preparing and presenting projects. Participants will work in groups of 2 on a joint project. Group meetings are mandatory.

Exam Form

  • Attendance of meetings is obligatory
  • Individual: Oral presentations of various topics
  • Group: Report on project that also details the individual contributions

Natural language generation

The taught component of the course will consist of four parts:

I. General Introduction. In the first part of the course you will learn what the different aims of practical and theoretical NLG can be, what the main elements of the standard NLG pipeline are, how NLG systems are built, and how they are evaluated. Template-based and end-to-end systems will be discussed briefly.

II. Practical systems. You will get acquainted with a range of practical applications of NLG; a few will be discussed in detail: candidate applications are medical decision support, knowledge editing, and robo-journalism. Strengths, weaknesses, and opportunities for the practical deployment of these systems will be discussed. If time allows, we will devote attention to multimodal systems, which produce documents in which pictures or diagrams complement a generated text.

III. Module in focus: Referring Expressions Generation. We will zoom in on one part of the standard NLG pipeline, which is responsible for the generation of referring expressions (e.g., when an NLG system says “the city where you work” or “the area north of the river Rhine”). We will discuss a range of rule-based algorithms, and some that are based on Machine Learning.

IV. Perspectives on NLG. We will discuss what linguists, philosophers, and other theoreticians have to say about human language production, and how this relates to NLG. We may start with a Gricean approach, and continue with the Bayesian-inspired Rational Speech Acts approach. We will ask how accurate and how explanatory existing NLG algorithms are as models of human language production (i.e., human speaking and writing), and what the main open questions for research in this area are.

The core of the course will be presented in lectures. Additionally, students will be asked to read, present, and discuss some key papers and systems which illustrate the issues listed above.

ICT advisory

The advisory discipline is an established industry and employs hundreds of thousands of people. Advisory is best described as “creating value for organisations, through the application of knowledge, techniques and assets, to improve business performance. This is achieved through the rendering of objective advice and/or the implementation of business solutions” (Markham & O’Mahoney, 2013). Giving advice is not limited to a particular industry; it can be found in any industry and on many different topics, such as taxes, business strategy, marketing, and ICT. Logically, the focus of this course is on giving ICT advice, but to a variety of industries.
In this course we address ICT advisory from four different perspectives: the Descriptive, Practitioner, Critical, and Career perspectives. These are addressed in the lectures of the course and are based on the book prescribed for this course. Some of these lectures are delivered by the students themselves, as part of learning how to present and how to provide training. Besides the theory, you will practice your consultancy skills in the skills workshops, which cover, for example, presenting, analysing and writing. Each workshop is provided by a different consultancy company based in the Netherlands, drawn from a mix of small, medium and large consultancy organisations. Finally, you will put skills and theory into practice in a project where you advise a real client. In this project you will work in a team of three students; the client you work for is provided by one of the consultancy companies. During the project you will produce a number of intermediate deliverables, and the end deliverables are an advice report and a presentation. The deliverables will be graded and are part of your grade for the course.

Several consultancy companies will be participating in this course by providing guest lectures, skills workshops and projects at their clients. At the same time, you also learn more about the different types of consultancies as we have a nice mix of small, medium and large consultancy companies that participate.

Register for this course using an online form (do NOT register yourself through Osiris)
The course is intended solely for MBI students in the business and/or technical consultancy profile. Furthermore, students should have completed at least half a year of their MBI program before starting with this course. Exceptionally, students from other masters of the ICS department can apply but there is no guarantee they will be accepted since their background should match the characteristics of the projects of the current course edition. Exchange students and students from other programmes are not accepted in the course.

Please use the following online form to apply to this course https://forms.gle/bMiFdmnK74TTBbEu5

Since you will be working for real clients we only expect motivated students to subscribe and therefore we ask you to write a professional motivation letter (max 1 A4 including letter head and signature) that should be addressed to the coordinator, Sergio España. Include a statement that says that you will invest at least the 210 hours that equal the 7,5 ECTS awarded for the course. The letter is uploaded using the online form mentioned above. By submitting the form you will be considered a candidate to take the course. The coordinator, along with other department members, will select a number of students based on (i) the quality of the motivation letter, (ii) the prior MBI courses taken and their grades, (iii) the amount of consultancy projects that have been acquired and how well they match the background of the applicants. You will be informed of the final decision mid June.

Team formation
We form the teams and assign them to the client projects. Given the nature of this course, and to maximise the chances of success, we do not allow you to freely choose your teammates or your project. We need to keep all consultancy companies happy or they will stop working with us, and our way of managing this has yielded excellent results so far. We will inform you of your team members and your project before the course starts.

Non-disclosure agreements
Later in the course you might be asked (by the consultancy or the client company) to sign a Non-Disclosure Agreement (NDA) in which you declare that you will act in the best interest of the client and will not disclose any information you get from the client. In principle, this is fine. But please consult with the course coordinator before signing any NDA, since we will want to assess it first (you are expected to act responsibly, but we also want to protect you from abusive clauses).

Preferred knowledge
It is preferred that students have knowledge of business and ICT as typically covered by the bachelor courses INFOB1ISY, INFOB1OICT and INFOB3SMI.

Data science and society

This is the introductory course for the Applied Data Science profile, the Applied Data Science postgraduate MSc programme, and the Business Informatics (MBI) programme. As such, its primary objective is to inspire and introduce you to the emerging domain of Applied Data Science. The following assignments are among the key parts of the course:

  • Book review: Explore data science and its societal impact
  • Mid-term e-exam on data engineering with Hadoop
  • End-term e-exam on data analytics with Spark

The graded deliverables generate the final course grade as follows:

  1. [A] Book review
  2. [B] Mid-term exam
  3. [C] End-term exam
  4. [D] Optional bonus for extraordinary participation/performance

Grade = [A]*0.10 + [B]*0.40 + [C]*0.50 + [D]

NB: To qualify for the second chance exam, all grading components need to be at least 4.0, and component A needs to have been submitted within the allotted time. The 2nd chance exam is an extensive market survey report assignment.
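As an illustration, the weighting above can be sketched in a few lines of code (the helper function and the sample grades are hypothetical, not part of the course materials):

```python
# Hypothetical sketch of the grading formula above.
# The weights follow the course description; the sample grades are invented.
def final_grade(book_review: float, mid_term: float, end_term: float,
                bonus: float = 0.0) -> float:
    """Combine the graded components [A]-[D] into the final course grade."""
    return book_review * 0.10 + mid_term * 0.40 + end_term * 0.50 + bonus

# Example: an 8.0 book review, 7.0 mid-term, 6.5 end-term, 0.25 bonus
print(round(final_grade(8.0, 7.0, 6.5, 0.25), 2))  # 7.1
```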

Requirements engineering

The course will cover the following topics:

  • The RE process and its activities
  • Standards and tools
  • Agile RE, user stories
  • Requirements elicitation
  • Linguistic aspects of natural language requirements
  • From requirements to architectures
  • Requirements prioritization
  • Maturity assessment
  • (Verification of) formal specifications
  • Release planning
  • Requirements traceability
  • Crowd RE

All information about the course will be made available through Blackboard before the course starts.

To qualify for the retake exam, the grade of the original must be at least 4.

Software architecture

The course on software architecture deals with the concepts and best practices of software architecture. The focus is on theories explaining the structure of software systems and how a system’s elements are meant to interact given the imposed quality requirements. Topics of the course are:

  • Architecture influence cycles and contexts
  • Technical relations, development life cycle, business profile, and the architect’s professional practices
  • Quality attributes: availability, modifiability, performance, security, usability, testability, and interoperability
  • Architecturally significant requirements, and how to determine them
  • Architectural patterns in relation to architectural tactics
  • Architecture in the life cycle, including generate-and-test as a design philosophy; architecture conformance during implementation
  • Architecture and current technologies, such as the cloud, social networks, and mobile devices
  • Architecture competence: what this means both for individuals and organizations

ICT entrepreneurship

A software product is defined as a packaged configuration of software components or a software-based service with auxiliary materials, which is released for and traded in a specific market.
In this course the creation, production and organization of product software will be discussed and elaborated in depth:

  • Requirements management: prioritization for releases, tracing and tracking, scope management
  • Architecture and design: variability, product architectures, internationalization, platforms, localization and customization
  • Development methods: prototyping, realization and maintenance, testing, configuration management, delivery; development teams
  • Knowledge management: web-based knowledge infrastructures
  • Protection of intellectual property: NDA, Software Patents
  • Organization of a product software company: business functions, financing, venture capital, partnering, business plan, product/service trade-off, diversification

This course is explicitly meant for students of Information Science and Computer Science. Pre-arranged or mixed teams are no problem; it is the product idea that matters.

The aim of this course is to create a prototype and business plan for a novel software product. Students can join the course either with a product idea or without. In both cases your participation in the course must be formally approved.

Business intelligence

This course deals with a collection of computer technologies that support managerial decision making by providing information about both internal and external aspects of operations. These technologies have had a profound impact on corporate strategy, performance, and competitiveness, and are collectively known as business intelligence. During this course the following BI topics will be covered:

  • Business perspective
  • Statistics
  • Data management
  • Data integration
  • Data warehousing
  • Data mining
  • Reporting and online analytic processing (i.e., descriptive analytics)
  • Quantitative analysis and operations research (i.e., predictive analytics)
  • Management communications (written and oral)
  • Systems analysis and design
  • Software development

Method engineering

Method engineering is defined as the engineering discipline to design, construct, and adapt methods, techniques and tools for the development of information systems. Just as software engineering is concerned with all aspects of software production, method engineering deals with all engineering activities related to methods, techniques and tools. Typical topics in the area of method engineering are:

  • Method description and meta-modeling
  • Method fragments, selection and assembly
  • Situational methods
  • Method frameworks, method comparison
  • Incremental methods
  • Knowledge infrastructures, meta-case, and tool support

Research internship AI

A research internship is a project performed by a student under the guidance of a supervisor. The topic of the research internship must be directly relevant to artificial intelligence and agreed upon with the supervisor. Projects can involve the development of software, a theoretical investigation, or experimental research (see below for some project examples). Projects can be performed either internally in our department, or externally at other departments of our university, at other universities, or at companies. The project should always be performed under the guidance of, and in agreement with, an internal supervisor. Students can have their own concrete project ideas, or they may be interested in doing a project on a specific topic. In both cases, they can contact a supervisor with expertise in the topic of the project to discuss the details and whether and how it can be performed as a research internship. In some cases, and in agreement with the supervisor, two students can perform a project together.

Project examples:
- Agent-based Traffic Simulation
- A Power-based Spectrum for Group Responsibility
- Implementation of an agent library in Unity

Master thesis project (44 EC)

Besides the course part, the programme consists of a 44 EC research part. In this part the student carries out a research project under the supervision of one of the staff members of the research groups offering the AI programme. The project can be done within Utrecht University or in a research-and-development department of a company or research institute, or at a foreign university. In the past, students have carried out external thesis projects in such companies as KPN, Origin, The Dutch Tax and Customs Office, Vitatron Medical B.V, TNO, NS, ING, STRO, VSTEP, LibRT and Playlogic Game Factory, and at foreign universities or with companies in Australia, Finland, Sweden, Germany, Italy, Spain, the UK, the USA and Switzerland.

When done within Utrecht University, your final thesis project is monitored by a supervisor from the AI programme teaching staff. When the final project is conducted within a company or external institute, you will be guided by both a local supervisor within the company/institute and a supervisor from the AI programme teaching staff.