Courses

The course component of this programme consists of:

  • Compulsory courses (16 EC):
    two courses that every student in the programme takes, plus two mini-courses
  • Primary electives (30 EC):
    four courses (out of a set of 16 courses) 
  • Secondary electives (30 EC):
    four courses to give your programme a personal flavour, to be chosen from a broad pre-defined set of secondary elective courses (among others, short research projects or courses from other, related programmes such as computer science, neuroscience and cognition, linguistics and philosophy).

Compulsory courses (16 EC)

Methods in AI research (compulsory)

Artificial Intelligence is a fast-paced and challenging field that is making visible inroads into our everyday life. AI in Utrecht offers a unique interdisciplinary approach, integrating the areas of computer science and agent systems, cognition and psychology, logic and philosophy, and linguistics. Because of this interdisciplinary character, the variety of techniques and methods used is considerable, ranging from theoretical to empirical, and from formal mathematical to more informal philosophical.
In this course, we will introduce the various perspectives on AI in Utrecht and the methods associated with them. We will look at the basics of machine learning, logic and symbolic reasoning, cognitive science and computational linguistics, and discuss the part they play in modern AI systems. We will further discuss important methods commonly used in AI research: knowledge modelling, system engineering, and empirical evaluation of machine learning and human-computer interaction. We also practice general academic skills such as reviewing literature, working in teams and scientific writing.

The linchpin of the course is a central lab project in which you will develop, describe, test, and evaluate a dialog system (sometimes also referred to as a “chatbot”). In this way, the theory from the lectures forms the basis of a real AI application that you will evaluate with users.

Form
Lectures and lab sessions.

Literature
See Blackboard page of the course.

Philosophy of A.I. (compulsory)

This course will make students familiar with fundamental issues in the philosophy of AI, and will introduce them to several current discussions in the field. Students will practice their argumentation and presentation skills, both in class discussions and in writing.
The course is split into three parts. The first part is a quick overview of the fundamental issues and core notions in the philosophy of AI. It addresses topics such as the Turing Test, the Chinese Room Argument, machine intelligence, machine consciousness, weak and strong AI, and the Symbol System Hypothesis. In order to establish a shared background for all students, the material of this part will be assessed with an entrance test as early as week 3.
In the second part of the course, there will be an in-depth discussion of several current topics in the field, for example on ethics and responsibility in AI, transhumanism, or the relation between AI and data science. On each topic, there will be a lecture, and a seminar with class discussions and student presentations. Students prepare for those discussions by posting a thesis with one or more supporting arguments about the required reading. In the third part of the course, students will write a philosophical paper, and will provide feedback on their fellow students' draft papers.

This course is for students of Artificial Intelligence, History and Philosophy of Science, and the RMA Philosophy. Students of other MA programmes, please contact the Course Coordinator.
The entrance requirements for exchange students will be checked by the International Office and the Programme coordinator. Therefore, you do not have to contact the Programme coordinator yourself.

Introducing Natural Sciences (compulsory)

There are two morning sessions with several speakers introducing students to the education system of the graduate school, its rules, its curricula, general and practical information about personnel and administration, specific information about the programme itself and the programme board's expectations of its students, honours education, specific profiles across disciplines, and the teaching profession.
Knowing what kinds of skills and attitudes the labour market is looking for is considered important. Workshops will train students to enhance awareness of their own strengths and weaknesses or introduce them to the work and life of PhD students.
Students will have ample time to get to know each other and their programme board.
Lunches, drinks and a concluding dinner will be organised.

Dilemmas of the scientist (compulsory)

This course consists of one workshop. It discusses dilemmas of integrity in the practice of doing academic research. Students will learn what such dilemmas are and how they can deal with them in practice.

Students can only attend this course after they have completed the first workshop.

Primary electives (30 EC)

Intelligent agents

This course is about the theory of so-called intelligent agents: pieces of software that display some degree of autonomy, realised by incorporating 'high-level cognitive/mental attitudes' into the modelling of this kind of software. These mental attitudes comprise 'informational' and 'motivational' ones and are often of the so-called BDI kind, dealing with the 'beliefs', 'desires' and 'intentions' of agents.

The agent concept calls for an integration of several topics in artificial intelligence, such as knowledge representation and reasoning (in particular reasoning about action and change) and planning. Agent technology, as the field is generally called, has great potential for applications, ranging from intelligent personal assistants to e-commerce and robotics (where in the latter case the term 'cognitive robotics' is often used). The course is devoted mainly to the philosophical and theoretical (mostly logical) foundations of the area of intelligent agents, focusing both on single agents and on multi-agent systems.

Please see the Blackboard page for details.

Overview in brief:

  • introduction "What are intelligent agents?"
  • agent architectures
  • knowledge representation, ontologies, Web Ontology Language (OWL)
  • agent communication
  • goals
  • trust and privacy

Form
Lectures, presentations, project

Literature
Textbooks and a collection of articles.

Machine learning for human vision and language

Machine learning with deep convolutional neural networks (deep learning) is being applied increasingly broadly in computer science, technology and scientific research. This method allows computer systems to perform tasks that have previously been impossible or inaccurate for computers, but are typically straightforward for humans. Tasks like visual object identification and natural language processing have traditionally been investigated by cognitive scientists and linguists, but recent applications of deep learning to these tasks also position them at the center of current artificial intelligence developments. Therefore, it is important for AI students and researchers to understand the links between cognitive science and AI.

In this course, you will learn the principles behind deep learning, an approach inspired by the structure of the brain. You will learn how these principles are implemented in the brain, focusing on the two aspects of visual processing and language (semantic or syntactic) processing. You will build your own deep learning systems for the interpretation of natural images and language, using modern high-level neural network APIs that make implementation of these systems accessible and efficient.
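
To give a flavour of what such a high-level API looks like, here is a minimal sketch (not part of the course material) of a small image classifier in Keras; the architecture, input shape and class count are illustrative assumptions, not the course's actual assignments.

```python
# Minimal sketch: a small convolutional image classifier in a high-level
# API (Keras). All sizes below are illustrative, not course-prescribed.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. 10 object classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(images, labels, epochs=5)  # images: (n, 28, 28, 1), labels: (n,)
```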

The course goals will be examined in the following ways:
- Students will attend lectures introducing the approach taken in deep learning systems, comparing this to how deep learning is implemented in biological brains, and introducing the main applications of deep learning to cognitive science and linguistics. Their understanding of this content will be assessed in a final exam.
- Students will participate in discussions and reviews of relevant literature, which will be graded.
- Students will work through lab practical assignments on visual processing and on language processing. The resulting reports will be graded.

Computational argumentation

This course replaces Commonsense reasoning and argumentation (INFOCR). Only one of the two courses can be part of your graduation programme.

This course gives an introduction to the computational study of argumentation in AI, a currently popular subfield of symbolic AI. The course especially focuses on formal models of argumentation and their application in areas like commonsense reasoning, legal reasoning and multi-agent interaction.

The computational study of argumentation concerns two aspects: reasoning and dialogue. Argumentation as a form of reasoning makes explicit the reasons for the conclusions that are drawn and how conflicts between reasons are resolved. Systems for argumentation-based inference were originally developed in the field of nonmonotonic logic, which formalises qualitative reasoning with incomplete, uncertain or inconsistent information. Argument-based systems have been very successful as nonmonotonic logics, since they are based on very natural concepts, such as argument, counterargument, rebuttal and defeat. In this course the following formalisms will be discussed:

  • Default logic (a still influential early nonmonotonic logic)
  • The theory of abstract argumentation frameworks (the generally accepted formal foundation of the field; see the sketch after this list)
  • The theory of structured argumentation frameworks, with a special focus on the ASPIC+ approach
  • Formal accounts of change operations on argumentation frameworks
  • Formal models of legal case-based reasoning
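
As a concrete illustration of the abstract argumentation frameworks mentioned above, here is a minimal sketch (not course material) that computes Dung's grounded extension as the least fixed point of the characteristic function F(S) = {a | every attacker of a is attacked by S}; the three-argument framework is an invented toy example.

```python
# Minimal sketch: the grounded extension of an abstract argumentation
# framework, computed by iterating the characteristic function from the
# empty set until a fixed point is reached.

def grounded_extension(arguments, attacks):
    """arguments: set of labels; attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s):
        # a is defended by s if every attacker of a is itself attacked by s
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Toy framework: A attacks B, B attacks C.
# A is unattacked, so A is in; A defeats B, which reinstates C.
print(grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")}))  # {'A', 'C'}
```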

Argumentation as a form of dialogue concerns the rational resolution of conflicts of opinion by verbal means. Intelligent agents may disagree, for instance, about the pros and cons of alternative proposals, or about the factual basis of such proposals. Dialogue systems for argumentation formally define protocols for argumentation dialogues and thus enable a formal study of the dynamics of argumentative agent interaction, including issues of strategic choice. In this course two examples of such dialogue systems will be discussed.

Form

Interactive lectures (14x2 hours) plus self-study with exercises.

Literature

Online reader, online articles and educational software tools

Data mining

This course is aimed at students of the Computing Science (COSC) master's programme. It is required that the student has:

  1. Knowledge of algorithms and data structures, at the level of the bachelor course "Datastructuren".
  2. Successfully completed a serious programming course, such as the bachelor course "Imperatief Programmeren".
    Experience with using packages in R or Python is not sufficient.
  3. Knowledge of probability and statistics, at the level of "Onderzoeksmethoden voor Informatica".
  4. Knowledge of linear algebra (such as treated in the bachelor course "Graphics").

Form

Lectures and Computer Lab.

Literature

Selected book chapters, articles, and lecture notes.

Logic and Language

This course covers advanced methods and ideas in the logical analysis of language, with an emphasis on type-logical methods for the analysis of natural language syntax and semantics. The course has a 'capita selecta' format, focusing on various aspects of the connection between language and reasoning.

The current edition consists of three parts. In the first part, we introduce categorial grammar logics, starting with Lambek's Syntactic Calculus. We show how the syntactic calculus can be extended with control operations that allow for restricted forms of reordering and/or restructuring. We discuss the 'proofs-as-programs' interpretation of syntactic derivations, comparing the set-theoretic models of formal semantics with vector-based modelling as used in present-day NLP. In the second part, we situate type-logical grammars in the wider context of substructural logics. The focus in this part is on proof-theoretic methods (display calculi, decidability) and on soundness/completeness with respect to algebraic and Kripke-style relational semantics. In the final part of the course, we introduce the machine learning perspective on the themes above. We show how from vector-based representations of word meanings one can compute the interpretation of larger phrases in a compositional way.
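
As a tiny illustration of the final part (not course material): in the tensor-based style of compositional distributional semantics, a function word such as an adjective can be modelled as a linear map acting on a noun vector, so the meaning of a phrase is computed from the meanings of its parts. All vectors and dimensions below are invented.

```python
# Minimal sketch: composing a phrase meaning from vector representations.
# An adjective is a matrix (a linear map); a noun is a vector; the phrase
# meaning is the matrix applied to the vector. Toy numbers throughout.
import numpy as np

dog = np.array([0.2, 0.9, 0.1])        # illustrative vector for "dog"
big = np.array([[1.0, 0.1, 0.0],       # illustrative matrix for "big"
                [0.0, 1.2, 0.0],
                [0.0, 0.0, 0.5]])

big_dog = big @ dog                    # compositional meaning of "big dog"
print(big_dog)
```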

N.B. You can enroll in this course during the 'wijzigingsdagen' (change days) of the Humanities faculty: 26 and 27 October 2020.

Advanced machine learning

This course treats two advanced topics in machine learning: causal inference (the study of cause-effect relations), and reinforcement learning (learning to interact with an environment).

Modern machine learning methods have achieved spectacular results on various tasks. Yet there are pitfalls and limitations that can't be overcome simply by increasing the amounts of data and computing power. For example, standard methods assume that the data are drawn from a single, unchanging probability distribution. The two main topics that we cover in this course both deal with situations where that is not the case.

The first topic, causal inference, is the subfield of machine learning that studies causes and effects: if we make a change to one random variable in a system, for which other variables does the distribution change? An understanding of these cause-and-effect relations allows us to predict the results of a change in the environment. We will also look at the problem of learning these relations from data.
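
A minimal simulation sketch (not course material) of the distinction drawn here, in a toy linear model where Z causes both X and Y: conditioning on an observed value of X mixes in the confounder Z, while intervening on X does not. All coefficients are invented.

```python
# Minimal sketch: observing X = 1 vs. intervening do(X = 1) in the toy
# structural causal model  Z -> X,  Z -> Y,  X -> Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)             # X is caused by Z
y = 0.5 * x + 0.7 * z + rng.normal(size=n)   # Y is caused by X and Z

# Observational estimate of E[Y | X ~ 1]: confounded via Z, comes out ~0.84.
obs = y[(x > 0.9) & (x < 1.1)].mean()

# Interventional E[Y | do(X = 1)]: the Z -> X edge is cut, so only the
# direct effect 0.5 remains.
y_do = 0.5 * 1.0 + 0.7 * z + rng.normal(size=n)
print(f"observe X~1: {obs:.2f}   do(X=1): {y_do.mean():.2f}")
```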

Second, reinforcement learning is about the design of agents that can learn to interact with an unknown environment. Reinforcement learning methods can build on recent advances in supervised learning (such as deep learning), but they bring a unique set of challenges that we will cover in this course.
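
For the second topic, here is a minimal sketch (not course material) of tabular Q-learning, the canonical algorithm from the Sutton and Barto textbook listed below, on an invented five-state corridor with a reward at the right end.

```python
# Minimal sketch: tabular Q-learning on a toy corridor of 5 states.
# The agent starts at the left end; only reaching the right end pays 1.
import random

n_states, actions = 5, (-1, +1)               # actions: step left / right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.3             # illustrative hyperparameters

for _ in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < eps:             # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# State values converge towards gamma ** (distance to goal - 1)
print([round(max(Q[(s, a)] for a in actions), 2) for s in range(n_states)])
```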

The following knowledge will be assumed in this course:

  • solid proficiency in mathematics, in particular probability theory (e.g. ability to understand and manipulate formulas involving conditional probabilities and expectations), linear algebra, basic calculus
  • programming skill in Python
  • understanding of basic machine learning theory and methods, for example from the bachelor course Machine Learning (KI3V15001)

Course form
Lectures; tutorials/practical sessions

Literature

  • Judea Pearl, Madelyn Glymour, Nicholas P. Jewell. Causal Inference in Statistics: A Primer. Wiley, 2016.
  • Richard S. Sutton, Andrew G. Barto. Reinforcement Learning: An Introduction (second edition). MIT Press, 2018. (pdf available from authors' website: http://incompleteideas.net/book/the-book-2nd.html)
  • additional material that will be made available online

Cognitive Modeling

Formal models of human behavior and cognition that are implemented as computer simulations - cognitive models - play a crucial role in science and industry. In science, cognitive models formalize psychological theories. This formalization allows one to predict human behavior in novel settings and to tease apart the parameters that are essential for intelligent behavior.
Cognitive models are used to study many domains, including learning, decision making, language use, multitasking, and perception and action. The models take many forms, including dynamic equation models, neural networks, symbolic models, and Bayesian networks. In industry, cognitive models predict human behavior in intelligent 'user models'. These user models are used, for example, for human-like game opponents and for intelligent tutoring systems that adaptively change the difficulty of a game or training program based on a model of the human's capacities. Similarly, user models are used in the design and evaluation of interfaces: what mistakes are humans likely to make in a task, what information might they overlook on an interface, and what are the best points to interrupt a user (e.g., with an e-mail alert) such that this interruption does not overload them?
To be able to develop, implement, and evaluate cognitive models and user models, you first need to know which techniques and methods are available and what are appropriate (scientific or practical) questions to test with a model. Moreover, you need practical experience in implementing (components of) such models. In this course you will get an overview of various modeling techniques that are used world-wide and also by researchers in Utrecht (esp. in the department of psychology and the department of linguistics).
You will learn their characteristics, strengths and weaknesses, and their theoretical and practical importance. Moreover, you will practice with implementing (components of) such models during lab sessions.

Pattern recognition

In this course we study statistical pattern recognition and machine learning.

The subjects covered are:

  • General principles of data analysis: overfitting, the bias-variance trade-off, model selection, regularization, the curse of dimensionality
  • Linear statistical models for regression and classification
  • Clustering and unsupervised learning
  • Support vector machines
  • Neural networks and deep learning

Knowledge of elementary probability theory, statistics, multivariable calculus and linear algebra is presupposed.
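
As a small taste of these subjects (not course material), the sketch below fits a regularized linear classifier and selects the regularization strength by cross-validation, one standard guard against overfitting; it assumes scikit-learn, and the data and parameter grid are invented.

```python
# Minimal sketch: regularized linear classification with model selection
# by 5-fold cross-validation (the bias-variance trade-off in action).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Smaller C = stronger regularization (more bias, less variance).
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      param_grid={"C": [0.01, 0.1, 1, 10]}, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```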

Logic and Computation

Students will learn how to answer one or more of the following research questions by means of an actor-based methodology in which each question will be addressed from multiple perspectives:

  • What is a program?
  • What is a computer?
  • What are the practical implications of undecidability?
  • What is the distinction between a stored-program computer and a universal Turing machine?
  • What is the difference between a model (of computation) and a physical computer?

This is a reading & writing course. Attendance is obligatory. Homework will already be handed out during the first week of class, with a firm deadline in the second week. Late arrivals in class will be tolerated only once; in other cases, they can lead to a deduction from the student's final grade. The aim of the part of the course on proofs as programs is to gain an understanding of type theory and its role within logic, linguistics, and computer science, and to become acquainted with the Curry-Howard correspondence, relating types to propositions and programs to proofs.

This course is for students in Artificial Intelligence, as well as students in History and Philosophy of Science and the RMA Philosophy. Students of other MA programmes, please contact the Course Coordinator. Students of History and Philosophy of Science and Artificial Intelligence experiencing problems with enrollment, please contact the Student Desk Humanities: studentdesk.hum@uu.nl

Multi-agent systems

This course focuses on multi-agent issues and will consist of lecture, seminar and lab sessions.
The lectures will cover the following topics:

  • Game theory
  • Auctions
  • Communication
  • Social choice
  • Mechanism Design
  • Normative Multi-Agent Systems

The seminar sessions consist of student presentations and will cover other multi-agent system issues such as:

  • Logics for Multi-Agent Systems
  • Multi-Agent Organisations and Electronic Institutions
  • Normative Multi-Agent Systems
  • Argumentation and Dialogues in Multi-Agent Systems
  • Multi-Agent Negotiation
  • Communication and coordination in Multi-Agent Systems
  • Development of Multi-Agent Systems

Each student is expected to present some papers on one of the abovementioned topics.
In the lab sessions the students will develop multi-agent systems on different platforms such as 2APL and Jade.

Experimentation in Psychology and Linguistics

Both science and industry are interested in creating precise formal models of human behaviour and cognition. To help build, test and optimise such models, one needs to create and run experiments. Students participating in this course will learn (I) how to design experiments given an existing model, (II) how to implement experiments using various tools and, finally, (III) how to extract data from the recorded responses for analysis purposes.

Most theoretical claims in linguistics and psychology are made by positing a formal model. The aim of such models is to make precise predictions. Moreover, the predictions of a model need to be tested with formal experiments. The results of the experiment may or may not lead to changes in the model and thus lead to a new set of testable predictions. Essential in the modelling-experimenting cycle is careful experimental design. The course covers the practical and theoretical considerations for experimental research, from posing the research question to interpreting and reporting experimental results.

In industry, experiments are also used frequently: for example, to assess how people use interfaces (e.g., where they look or click, or how particular text influences their subsequent choices), to test what the best design of a product is, or to test the appropriateness of a user model (e.g., do people learn what the model predicts them to learn, and do they have a more immersive experience when a model guides adaptation of the software?).

In this course you will get an overview of various experimentation techniques that are used world-wide and also by researchers in Utrecht (esp. in the department of psychology and the department of linguistics). You will learn how to use such techniques for testing specific models, as well as where the limits of these techniques lie. In the practicals you will also gain hands-on experience with the implementation, data manipulation and data analysis steps of experimentation.

The learning goals will be examined in three ways:

  1. Students will read and critically reflect on selected articles from the experimental literature. They will prepare a short presentation based on the critical reflection. The presentation will be graded.
  2. Students will implement experiments and work with experimental data during practicals. These will be graded.
  3. Students will design and implement an experiment on a topic of their own choice and write a note reporting on the experiment. Implementation and report will be graded.

Social computing

There is no content available for this course.

Multi-agent learning

This seminar focuses on forms of machine learning that typically occur in multi-agent systems. Topics include learning and teaching, fictitious play, rational learning, no-regret learning, targeted learning, multi-agent reinforcement learning and evolutionary learning.

Natural language processing

This course is an advanced introduction to the study of language from a computational perspective, and to the fields of computational linguistics (CL) and Natural Language Processing (NLP). It synthesizes research from linguistics and computer science and covers formal models for representing and analyzing words, sentences and documents. Students will learn how to analyse sentences algorithmically, and how to build interpretable semantic representations, emphasising data-driven and machine learning approaches and algorithms. The course will cover a number of standard models and algorithms (language models, HMMs, chart- and transition-based syntactic parsing, distributed semantic models, various neural network models) that are used throughout NLP, and applications of these methods in tasks such as machine translation or text summarization.
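
For illustration (not course material), the simplest of the standard models listed above, an n-gram language model with maximum-likelihood estimates, fits in a few lines; the toy corpus is invented and sentence-boundary effects are ignored.

```python
# Minimal sketch: a bigram language model with maximum-likelihood estimates.
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p(word, prev):
    """P(word | prev) estimated by relative frequency."""
    return bigrams[(prev, word)] / unigrams[prev]

print(p("sat", "cat"))   # 1.0: "cat" is always followed by "sat" here
print(p("cat", "the"))   # 0.25: "the" is followed by "cat" once out of 4 times
```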

Logics for safe AI

This course is about ensuring the safety and reliability of autonomous AI agents and multi-agent systems. In order to guarantee that the behaviour of a system achieves its objectives, we use formal proofs rather than empirical studies, and either formally verify that the system behaves in accordance with the specified objectives, or automatically synthesise provably correct behaviours from specifications of the system objectives.
The formal techniques for doing this include epistemic and temporal logics and their combinations, and constitute the main technical content of the course. The emphasis is on mastering these techniques, their computational aspects, and the use of tools implementing them to verify and synthesise AI agents and multi-agent systems.
The course also prepares students for undertaking research on formal aspects of artificial intelligence, and provides the foundation for undertaking Master's projects on developing safe and reliable AI systems. Lab sessions will introduce students to relevant specification and modelling techniques and the use of tools such as MCMAS and SynKit for the verification and synthesis of AI agents and multi-agent systems.

Course format
Two lectures a week and a lab/practical session for mastering techniques and tools for the specification, verification and synthesis of AI agents and multi-agent systems.

Entry requirements
Assigned study entrance permit for the master

Human centered machine learning

The impact of machine learning (ML) systems on our society has been increasing rapidly, ranging from systems that influence the content that we see online (e.g., ranking algorithms, advertising algorithms) to systems that enhance or even replace human decision making (e.g. in hiring processes). However, machine learning systems often perpetuate or even amplify societal biases—biases we are often not even aware of.
What’s more, most machine learning systems are not transparent, which hampers their practical uptake and makes it challenging to know when to trust (or not trust) the output of these systems.

The course will cover examples from various areas of AI. Given the expertise of the lecturers we will also zoom in on specific examples from natural language processing and multimodal affective computing research. Our discussion will also be informed by relevant literature from the social sciences. An interest in these areas is therefore desirable.

Form
There will be lectures and practical exercises. Students are also expected to discuss and present academic articles. The course also contains a group project.

Literature
Most of the material we will read consists of academic articles. A few candidate readings are:

  • “Why Should I Trust You?” Explaining the Predictions of Any Classifier, Ribeiro et al., KDD 2016
  • A Unified Approach to Interpreting Model Predictions, Lundberg and Lee, NeurIPS 2017
  • Fairness and Machine Learning: Limitations and Opportunities, Solon Barocas, Moritz Hardt, Arvind Narayanan, https://fairmlbook.org/
  • Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, Buolamwini and Gebru, Proceedings of Machine Learning Research 2018
  • Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings, Bolukbasi et al., NeurIPS 2016
  • The Mythos of Model Interpretability, Lipton, ACM Queue, 2018
  • Beyond Saliency: Understanding Convolutional Neural Networks from Saliency Prediction on Layer-wise Relevance Propagation, Li et al., Image and Vision Computing 2019.

Prerequisites
The course requires familiarity with machine learning (including neural networks) and proficiency in Python. It is recommended that students have completed at least one course on machine learning, such as “Pattern recognition” (INFOMPR) or “Advanced machine learning” (INFOMAML). We expect students to already have experience with developing and evaluating machine learning systems. When in doubt, please contact the course coordinator (dr. Nguyen).

Secondary electives (30 EC)

Program semantics and verification

Most modern software is quite complex. The most widely used approach to verifying it is still testing, which is inherently incomplete and hard to scale up to cover this complexity. In this course we will discuss a number of advanced validation and verification techniques that go far beyond ad-hoc testing. Exploiting them is an important key towards more reliable complex software.

We will in particular focus on techniques that can be automated, or at least partially automated:

  • Predicate transformation, which you can use to symbolically execute a program and calculate its range of inputs or outputs. This leads to a bounded but symbolic and fully automated verification technique.
  • Common automated theorem proving techniques, used as the back-end of symbolic-execution-based verification (see the sketch after this list).
  • Several common ways to define the semantics of programs, from which correctness can be defined and proven.
  • Model checking techniques, which can be used either to fully verify a model of a program, even if the number of possible executions is infinite, or to boundedly verify the program itself.
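
As a tiny illustration of the first two items (not course material): the weakest precondition of the assignment x := x + 1 with respect to the postcondition x > 0 is x + 1 > 0, and the resulting verification condition can be discharged by an automated theorem prover. The sketch assumes the Z3 Python bindings (the z3-solver package), one widely used such back-end.

```python
# Minimal sketch: checking the Hoare triple  {x >= 0} x := x + 1 {x > 0}
# by computing the weakest precondition of the assignment and asking a
# prover whether the actual precondition implies it.
from z3 import Int, Implies, prove

x = Int("x")
pre = x >= 0
wp = x + 1 > 0           # wp(x := x + 1, x > 0), by substitution
prove(Implies(pre, wp))  # prints "proved"
```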

Course form
Lectures, projects.

Literature
Lecture notes, on-line documentation, and papers.

Technologies for learning

The list of topics we will research includes but is not limited to:

  • student modelling technologies for representing the knowledge, metacognitive skills and strategies, and affective state of a student working with an adaptive education system
  • technologies for adaptive learning support, such as intelligent tutoring systems and adaptive educational hypermedia
  • technologies for supporting collaborative, group-based and social learning scenarios
  • technologies exploiting big data sets in education for empowering students and teachers, as well as improving the behaviour of intelligent educational software
  • modern HCI methods used in education for creating effective learning interfaces, including dialog systems, learning companions, serious games and virtual reality

This academic field is extremely interdisciplinary. Hence, the background necessary to study and work with these technologies can be very diverse: knowledge of data mining and machine learning, parsing and rewriting, artificial intelligence and HCI are all useful. The course material, as well as the topics for the group project, will be adjusted to the background of the students in order to use the cumulative expertise of the class as much as possible.

Course form
Lectures, reading sessions/paper presentations and discussions, research project, periodic quizzes and exam.

Literature
All papers are listed on Blackboard.

Probabilistic reasoning

Bayesian networks can be used for reasoning and decision support under uncertainty:

  • Which exercises are most suitable for Bob to improve his calculus skills?
  • How long after infection will we detect classical swine fever on this farm?
  • What is the risk of Mr Johnson developing coronary heart disease?
  • Should Mrs Peterson be given the loan she requested?
  • Will a study-advisor support tool advise you to take this course?

In complex domains, people have to make judgments and decisions based on uncertain, and often even conflicting, information; a difficult task, even for experts in the domain. To support these complex decisions, knowledge-based systems should be able to cope with this type of information. For this reason, models for representing uncertainty and algorithms for manipulating uncertain information are important research subjects within the field of Artificial Intelligence. Probability theory is one of the oldest theories dealing with the concept of uncertainty and therefore plays an important role in many decision support systems.

In this course, we will consider probabilistic models for representing and reasoning under uncertainty. More specifically, we will consider the theory underlying the framework of Bayesian networks, their definition and reasoning, and discuss issues and methods related to the construction of such networks for real-life applications.

The course roughly consists of three parts. As a general introduction to (probabilistic) graphical models, the first part of the course deals with independence relations and their representation by means of undirected and directed graphs. The second part introduces the Bayesian network as a compact representation of a probability distribution on a set of statistical variables; in addition, algorithms that allow for efficiently computing probabilities from a Bayesian network are discussed. The third part of the course concerns the construction of Bayesian networks for real-life applications. Topics covered include automated construction of networks from data, handcrafting the network with the help of domain experts, robustness analysis and evaluation.
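
A minimal sketch (not course material) of the idea at the heart of the second part: a Bayesian network as a compact distribution over variables, queried by exact inference. It assumes the pgmpy library; the two-node network and all numbers are invented.

```python
# Minimal sketch: a two-node Bayesian network (Infection -> Fever) and a
# diagnostic query by variable elimination, using pgmpy.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Infection", "Fever")])
model.add_cpds(
    TabularCPD("Infection", 2, [[0.9], [0.1]]),          # P(Infection)
    TabularCPD("Fever", 2, [[0.8, 0.2],                  # P(Fever | Infection)
                            [0.2, 0.8]],
               evidence=["Infection"], evidence_card=[2]),
)

# P(Infection | Fever = 1): reasoning from symptom back to cause
print(VariableElimination(model).query(["Infection"], evidence={"Fever": 1}))
```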

Course form
Lectures (twice a week), self-assessment exercises

Literature - mandatory:
1. Syllabus 'Probabilistic Reasoning with Bayesian networks': paper edition sold by A-Eskwadraat, up-to-date online edition available through the course website
2. Course slides, available online.

Literature - additional
Study manual, also available online.

Evolutionary computing

Evolutionary algorithms are population-based, stochastic search algorithms based on the mechanisms of natural evolution. This course covers how to design representations and variation operators for specific problems. Furthermore, convergence behavior and population sizing are analysed. The course focuses on the combination of evolutionary algorithms with local search heuristics to solve combinatorial optimization problems like graph bipartitioning, graph coloring, and bin packing.

Big data

Big Data is as much a buzzword as an apt description of a real problem: the amount of data generated per day is growing faster than our processing abilities. Hence the need for algorithms and data structures that allow us, e.g., to store, retrieve and analyze vast amounts of widely varied data that streams in at high velocity.

In this course we will limit ourselves to the data mining aspects of the Big Data problem, more specifically to the problem of classification in a Big Data setting. To make algorithms viable for huge amounts of data, they should have low complexity; in fact, it is easy to think of scenarios where only sublinear algorithms are practical. That is, algorithms that see only a (vanishingly small) part of the data: algorithms that only sample the data.

We start by studying PAC learning, where we study tight bounds to learn (simple) concepts almost always almost correctly from a sample of the data; both in the clean (no noise) and in the agnostic (allowing noise) case. The concepts we study may appear to allow only for very simple – hence, often weak – classifiers. However, the boosting theorem shows that they can represent whatever can be represented by strong classifiers.
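
For concreteness (not course material), the classic PAC sample-size bound for a finite hypothesis class in the clean (realizable) case states that m >= (1/eps) * (ln |H| + ln (1/delta)) examples suffice for a consistent learner to have error at most eps with probability at least 1 - delta:

```python
# Minimal sketch: the PAC sample-size bound for a finite hypothesis class
# in the realizable (noise-free) case.
import math

def pac_sample_size(h_size, eps, delta):
    """Examples sufficient to be eps-accurate with probability 1 - delta."""
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / eps)

# e.g. |H| = 2**20 hypotheses, error <= 5%, confidence 99%  ->  370 examples
print(pac_sample_size(2**20, eps=0.05, delta=0.01))
```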

PAC learning algorithms are based on the assumption that a data set represents only one such concept, which obviously isn’t true for almost any real data set. So, next we turn to frequent pattern mining, geared to mine all concepts from a data set. After introducing basic algorithms to compute frequent patterns, we will look at ways to speed them up by sampling using the theoretical concepts from the PAC learning framework.

Pattern set mining

Pattern mining is characteristic for data mining. Whereas data analysis is usually concerned with models – i.e., succinct descriptions of all data – pattern mining is about local phenomena. Patterns describe – or even are – subgroups of the data that for some reason are deemed interesting; a description and a reason that usually involve only some of the variables (attributes, features) rather than all. In the past few decades – the entire existence of data mining – pattern mining has proven to be a fruitful research area, with many thousands of papers describing a wide variety of pattern languages, interestingness functions, and even more algorithms to discover them.

However, there is a problem with pattern mining: databases tend to exhibit many, very many patterns. It is not uncommon that one discovers more patterns than one has data. Hardly an ideal situation. Hence the rise of pattern set mining: can we define and find relatively small, good sets of patterns?

In this course we'll start with a brief discussion of pattern mining. After that we discuss parts of the literature on pattern set mining; only parts, because there is too much to discuss it all. What types of solutions have been proposed? How do they work and, actually, do they work?

Multimedia retrieval

Multimedia retrieval (MR) is about the search for and delivery of multimedia documents, such as text, images, video, audio, and 2D/3D shapes.

This course teaches MR from a bottom-up perspective. After introducing what MR is by means of examples and use cases, the MR pipeline is presented. Next, each of the building blocks of this pipeline is discussed in detail, starting with the most basic one (data representation), going through the modelling of human perception of media, feature extraction, matching, evaluation, scalability, and presentation issues. At the end of the course, students should understand the theory, techniques, and tools that are involved in designing, building, and evaluating every block in the MR pipeline. The overall aim is thus for students to be able to design, build, and evaluate end-to-end MR systems for different types of multimedia data.

The course covers multimedia retrieval from a multidisciplinary perspective. Aspects taken into account: MR data representation; data (signal, image, shape) processing; understanding and working with high-dimensional data; connections between MR, machine learning, and data visualization; computational scalability and complexity aspects of working with big data collections; and human factors in interactive systems design.

The course takes a predominantly practical stance: After the theoretical principles of MR are introduced, we focus on how MR is to be practically implemented to be successful. Various design and implementation decisions for the MR pipeline building-blocks are discussed, focusing not only on their theoretical merits, but also ease of implementation/parameterization, robustness, and speed. Trade-offs between alternative solutions to a given problem are discussed.
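
A minimal sketch (not course material) of two of these building blocks, feature extraction and matching: images are reduced to normalized colour histograms and ranked by Euclidean distance to the query. The feature choice and all sizes are purely illustrative.

```python
# Minimal sketch: feature extraction (colour histogram) and matching
# (Euclidean distance) for an image-retrieval pipeline.
import numpy as np

def feature(image):
    """image: HxWx3 uint8 array -> normalized 64-bin colour histogram."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(4, 4, 4),
                             range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

def rank(query, collection):
    """Return collection indices ordered from best to worst match."""
    q = feature(query)
    return np.argsort([np.linalg.norm(q - feature(img)) for img in collection])

imgs = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8) for _ in range(5)]
print(rank(imgs[0], imgs))   # the query itself ranks first (distance 0)
```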

Finally, as a second-year MSc course, this course has the meta-goal of preparing students for their MSc graduation phase. This is done by teaching and assessing technical/scientific reporting and presentation skills.

Course form
Lectures, self-study, presentations, and a project.

Literature

The course has no compulsory textbook, as a significant amount of information is presented in detail in its slides, papers, notes, and demos (all available online). However, the following books are strongly recommended as optional reading material, as they give additional details on the material discussed in the course:

  • Handbook of Multimedia Information Retrieval (H. Eidenberger; publisher: Atpress; publication date: 2012; index information: ISBN 9783848222834)
  • Shape Analysis and Classification: Theory and Practice (L. Da Fontoura Costa, R. Marcondes Cesar Jr.; publisher: CRC Press; publication date: 2001 (subsequent editions are also fine); index information: ISBN 9780849334931 (for 1st edition))
  • Data Visualization - Principles and Practice (2nd edition) (A. C. Telea; publisher: CRC Press; publication date: 2014; index information: ISBN 9781466585263)

Visit the course page to find out which chapters from the above books cover which topics of the course.

Computer vision

The goal of computer vision is to recognize and understand the world through visual information such as images or videos. This course is about the algorithms and mechanisms used to extract and classify information from images and video. The course combines theory and practice, with two themes: multi-view reconstruction and CNN-based image/video classification.

Crowd simulation

There is no content available for this course.

Advanced cognitive and social psychology

Emerging technologies are progressively affecting the way we relate, connect, learn, and work. In this course you will study psychological processes associated with the use of digital technologies, to help understand how technology affects us, and to enhance interactions between humans and technologies. We will discuss research in relation to the use and design of a range of applications and devices, for instance, cell phones, social media, video games, and the Internet. The course will include topics such as the relation between cognitive processes and emotions, social identity and group behavior, and interpersonal relationships. The course will mainly draw on theories from cognitive and social psychology, and will involve critical analysis and understanding of these theories in light of our digital world.

Adaptive interactive systems

This course is about the design and evaluation of interactive systems that automatically adapt to users and their context. It discusses the layered design and evaluation of such systems. It shows how to build models of users, groups and context, and which characteristics may be useful to model (including for example preferences, ability, personality, affect, inter-personal relationships). It shows how adaptation algorithms can be inspired by user studies. It covers standard recommender system techniques such as content-based and collaborative filtering, as well as research topics such as person-to-person recommendation, task-to-person recommendation, and group recommendation. It also discusses explanations for adaptive interactive systems and usability issues (such as transparency, scrutability, trust, effectiveness, efficiency, satisfaction, diversity, serendipity, privacy and ethics). The course content will be presented in the context of various application domains, such as personalized behaviour change interventions, personalized news, and personalized e-commerce.

Multimedia discourse interaction

Seminar Multimedia Discourse Interaction

Multimedia Discourse Interaction addresses the complexity of interacting with information present in different information carriers, such as language (written or spoken), image, video, music and (scientific) data. The goal is to convey information to a user in an effective way.

Knowledge of cognitive capabilities and limitations, such as information processing speeds, can be used to inform the design of useful and efficient ways of searching, browsing, studying, analysing and communicating information in a way that is appropriate to a user's task, knowledge and skills. Subsequently, the fragments of relevant information that are selected from multiple sources must be combined for meaningful presentation to the user. Models and theories exist, for example in artificial intelligence, but also in the fields of film theory and computational linguistics, that describe communication structures, such as narratives or arguments. These can be used to inform the process of selecting and assembling specific media fragments or selections of data into a presentation appropriate to an end‐user's information needs.

Information presentation consists of combining atomic pieces of information into some communication structure that facilitates viewers in understanding the relationship between the pieces. For example, in text, multiple words are strung together according to established structures, namely grammatically correct sentences. Similarly, a media fragment, for example a film shot, represents some atom of meaning. Fragments can be combined into a communication structure meaningful to the viewer. This is precisely the task that a film director carries out. Individual communication structures for specific domains – for example, structures relating the different positions in an argument about the utility of war – have been modelled in the literature. When these are implemented and used to present video fragments to a human viewer, the video sequence is perceived as conveying a coherent argument and discourse.

The seminar explores literature from diverse subfields, including artificial intelligence, semantic web, multimedia and document engineering, providing a range of perspectives on the challenges.

Course form
This course is set up as a seminar. It challenges the participants to acquire and disseminate knowledge about a complex subject in an interactive way. The moderators make a pre-selection of relevant research papers and web references. Students are expected to supplement these with their own literature search. They are expected to take the lead on proposing, preparing and presenting projects. Participants will work in groups of 2 on a joint project. Group meetings are mandatory.

Exam Form

  • Attendance of meetings is obligatory
  • Individual: Oral presentations of various topics
  • Group: Report on project that also details the individual contributions

Natural language generation

The taught component of the course will consist of four parts:

I. General Introduction. In the first part of the course you will learn what the different aims of practical and theoretical NLG can be, what the main elements of the standard NLG pipeline are, how NLG systems are built, and how they are evaluated. Template-based and end-to-end systems will be discussed briefly.

II. Practical systems. You will get acquainted with a range of practical applications of NLG; a few will be discussed in detail: candidate applications are medical decision support, knowledge editing, and robo-journalism. Strengths, weaknesses, and opportunities for the practical deployment of these systems will be discussed. If time allows, we will devote attention to multimodal systems, which produce documents in which pictures or diagrams complement a generated text.

III. Module in focus: Referring Expressions Generation. We will zoom in on one part of the standard NLG pipeline, which is responsible for the generation of referring expressions (e.g., as when an NLG system says “the city where you work”, or “the area north of the river Rhine”). We will discuss a range of rule-based algorithms, and some that are based on Machine Learning.

IV. Perspectives on NLG. We will discuss what linguists, philosophers, and other theoreticians have to say about human language production, and how this relates to NLG. We may start with a Gricean approach, and continue with the Bayesian-inspired Rational Speech Acts approach. We will ask how accurate and how explanatory existing NLG algorithms are as models of human language production (i.e., human speaking and writing), and what are the main open questions for research in this area.

The core of the course will be presented in lectures. Additionally, students will be asked to read, present, and discuss some key papers and systems which illustrate the issues listed above.

Applied Cognitive Psychology II

In this course students will learn how, as an applied cognitive psychologist, to apply knowledge of human cognitive, sensory, and motor abilities in day-to-day practice. To this end, topics from applied cognitive psychology, such as product ergonomics, decision making, signal detection theory, Fitts' law, and information theory will be discussed. Through lectures, visiting lecturers from professional practice, and assignments, the student learns how psychological knowledge can be applied in everyday practice and how a question from daily practice can be investigated. In addition, the student will learn (computer) skills which allow the student to work as a cognitive psychologist in a company. The guest lectures provide the student with examples and information on applications of cognitive psychology in the occupational field.

Aspects of academic development

  • Academic level of thinking and acting
  • Translating psychological knowledge to the occupational field
  • Studying, structuring and analyzing information

Foundations of Sound Patterns

This course offers an introduction to major theoretical approaches and core methodologies in the areas of phonetics, phonology, and infant sound acquisition.

Reasoning about Meaning in Linguistic Communication

Meaning is a slippery, multifaceted concept. This is mainly because, when we communicate by linguistic means, meaning comes about not just via linguistic conventions but also via reasoning processes that are integral to communicative interaction. In this course we look at formal and computational theories of both linguistic meaning and the reasoning that underlies meaningful communication. A key ingredient of any such theory is the semantics/pragmatics distinction. This division between conventional linguistic sources of meaning on the one hand and meanings that are intentional in nature on the other is often a core assumption made in theories of linguistic communication. But it is also a source of intense debate, since many of the hot topics in the study of meaning today are topics that straddle the semantics/pragmatics divide in interesting and largely unexpected ways. Interestingly, the emerging debates rely heavily on empirical and analytical methods that are new to the field, ranging from experimental to computational methods. As a result, the study of meaning in linguistic communication is shifting from an analytical philosophical discipline to a field that overlaps with cognitive science and artificial intelligence.

A central question raised throughout the course is what analytical tools we need to conduct a science of meaning. The analytical philosophical tradition has it that it suffices to relate meaning to truth-conditions (the circumstances under which a sentence is true), but there are clear drawbacks to such a narrow view. In the course, we look at ways of going beyond the orthodoxy, in particular by asking what role probabilistic, or more generally, computational models could play in a theory of meaning.

The goal of this course is twofold: (i) to allow the students to understand some of the key empirical and theoretical questions that drive research in this area; (ii) to have the students acquire skills that allow them to conduct their own research in this area and propose novel models of meaning in linguistic communication, be they logical, probabilistic or hybrid in nature.

Cognitive and computational aspects of word meaning

Natural language semantics relies on various empirical methods, involving experimental data, machine learning, corpus analysis and linguistic questionnaires. The course presents topics where developing formal and computational semantic models heavily depends on empirical work in lexical and conceptual semantics, common sense reasoning, and computational semantics. Students choose a research problem and study selected articles on that problem. Based on this study, students formulate an empirical hypothesis and test it in the end project.

Topics in Philosophy of Mind

This “Topics Seminar” explores in depth issues and texts in the philosophy of mind. The topic of 2019-2020 is: John McDowell’s Mind and World.
In this course we’ll be reading John McDowell’s seminal book, along with some articles dealing with themes from the book. Philosophers have long struggled to give a satisfactory picture of the place of minds in the world. In this important book McDowell diagnoses why this problem is so persistent for (contemporary) philosophy and points an anti-reductionist way to a cure.

Digital Ethics

As more and more aspects of our lives - including research in the humanities - become digitalized, there is an urgent need for careful reflection on the ethical issues raised by digitalization, informed both by an understanding of central ethical concepts and by knowledge of how various technologies are deployed. This course is devoted to understanding the methods, principles, procedures, and institutions that govern the appropriate use of digital technology. Central ethical concepts addressed in the course include privacy, autonomy, nondiscrimination, transparency, responsibility, authenticity, and social justice. Central concepts from digital technology include datafication, algorithms, visualization, and access management.

The course will make central use of the “Digital Ethics Decision Aid (DEDA)” developed by the Utrecht Data School with the collaboration of the Ethics Institute. Using this tool as a guide, we will examine several pivotal cases that raise fundamental issues regarding the responsible use of digital technology, such as the unintentional discovery of confidential information in medical scans or database searches, or disputed claims to authenticity or ownership related to digital reproduction.

In addition, the field of ethics is itself subject to transformation to the extent that a variety of digital methods are increasingly used to assist, automate, or even replace decision-making. Central here are questions regarding the implications of Big Data processing, “smart” searchbots, automated decision supports, and techniques of data visualization for ethical judgments.

Informed by the lectures, readings, seminar discussions, and hands-on use of the DEDA, students form research teams to work jointly in developing and presenting their own ethical analyses of a concrete case. Building on the experience of a concrete analysis, students then each write a research paper on a digital ethics topic of their own choosing.

Interested M.A. students without a background in philosophy, ethics, or digital humanities may qualify to take the course; however, they should first contact the course coordinator: j.h.anderson@uu.nl.
The entrance requirements for Exchange Students will be checked by International Office and the Programme coordinator. Therefore, you do not have to contact the Programme coordinator yourself.

Topics in Epistemology and Philosophy of Science

Topic of 2020-2021:

Social epistemology is a relatively recent subdiscipline that investigates the epistemic effects of social interactions and social systems. For most of this course we will understand this as complementary and not opposed to more traditional, "individual" epistemology. Social epistemology is a very active field of research that has produced a lot of exciting publications in recent years. Part of its appeal is due to its immediate applicability to pressing societal issues like, for instance, the phenomenon of filter bubbles and echo chambers, the problem of "fake news", or the (apparent) rise of conspiracy theories in political discourse.
In this course we will first examine different ways to characterize social epistemology itself. As the course progresses, we will focus on some of the central topics in social epistemology: testimony, peer disagreement, the problem of identifying experts, epistemic injustice, group justification, and the epistemology of collective agents.
The central reading will be Alvin Goldman and Dennis Whitcomb's anthology "Social Epistemology", but we will also discuss recent publications by authors such as Miranda Fricker, Sanford Goldberg, Jennifer Lackey, and Thi Nguyen.
This course is for students in the RMA Philosophy programme and History & Philosophy of Science; students from other MA programmes (such as Applied Ethics) should check with the course coordinator or the RMA Philosophy coordinator (j.h.anderson@uu.nl) before enrolling, to ensure that they have the requisite philosophical background. The entrance requirements for exchange students will be checked by the International Office and the Programme coordinator. Therefore, you do not have to contact the Programme coordinator yourself.

Social and Affective Neuroscience

Period (from – till): 6 January 2021 - 10 March 2021
Lecturer(s)
Dr. Estrella Montoya
Department of Psychology
Faculty of Social Sciences
4 lectures, 100% of the preparation and marking work for the exams

Dr. Peter Bos
Department of Psychology
Faculty of Social Sciences
2 lectures

Dr. Jack van Honk
Department of Psychology
Faculty of Social Sciences
2 lectures

Dr. David Terburg
Department of Psychology
Faculty of Social Sciences
1 lecture
Course description
This course offers comprehensive knowledge of the theoretical and experimental paradigms in the neuroscience of social and emotional behavior, based on the latest developments in these fields. The future of science as a “unity of knowledge” best reflects itself in Social and Affective Neuroscience. The primary aim is to teach students about the state of the art in these burgeoning multidisciplinary fields, which combine neuroscience, psychology, biology, endocrinology, and economics, and to show how this multidisciplinary approach contributes to new knowledge concerning brain functions and social psychopathologies (e.g. social phobia, psychopathy, autism).
In this course we want to show you what the exciting field of social neuroscience looks like today, not only by giving an overview of the most important work in this field but also by letting you practice the activities of a social neuroscientist. Therefore, this course offers both theoretical lectures and practical sessions. Each Social & Affective Neuroscience course day starts with a lecture and is followed by an activity or assignment in which you become a social neuroscientist yourself.

Literature/study material used
Recent Scientific Review Articles on the Neuroscience of Emotion and Emotional Disorders (updated each year).
Registration
You can register for this course via Osiris Student. More information about the registration procedure can be found in the Study Guide.
Mandatory for students in Master’s programme:
CN students are strongly recommended to follow one of these courses: Social and Affective Neuroscience and/or Neurocognition of Memory and Attention.

Optional for students in other GSLS Master’s programmes:
Yes.

Prerequisite knowledge:
Relevant BA

Neurocognition of Memory and Attention

Period (from – till): 8 February 2021 - 6 June 2021
Faculty
Prof. Dr. J.J. Bolhuis, Faculty of Social Sciences / Faculty of Science – Experimental Psychology (course coordinator),
Prof. Dr. J.L. Kenemans, Faculty of Social Sciences / Faculty of Science – Experimental Psychology,
Prof. Dr. A. Postma, Faculty of Social Sciences – Experimental Psychology,
Prof. N. Ramsey, UMCU.
Course description
This course covers topics in memory and attention research, especially those concerning the interface of attention and memory (e.g., working memory and the control of selective attention), as well as the interfaces between memory/attention and other domains (perception, action, emotion). The main emphasis is on the underlying neurobiological processes, as revealed in human and animal models.
The course consists of 15 sessions during the above period, on Monday afternoons from 15:15 to 17:00.

Literature/study material used:
Books:

L. Kenemans & N. Ramsey (2013). Psychology in the brain: Integrative cognitive neuroscience (293 pages). Palgrave Macmillan.

Articles: To be announced

Registration:
You can register for this course via Osiris Student. More information about the registration procedure can be found in the Study Guide.
The maximum number of participants is 40.

Mandatory for students in own Master’s programme:
No.

Optional for students in other GSLS Master’s programmes:
Yes.

Prerequisite knowledge:
A relevant Bachelor’s degree and basic neuroscience (as in “Cognitive Neuroscience” by Gazzaniga et al.)

Philosophy of Neuroscience

Period (from - till): Period 4 (1 - 30 June 2021)

Course description
This course offers a compact, rigorous and practical journey through the philosophy of neuroscience, the interdisciplinary study of neuroscience, philosophy, cognition and mind. Philosophy of neuroscience explores the relevance of neuroscientific studies in the fields of cognition, emotion, consciousness and philosophy of mind, by applying the conceptual rigor and methods of philosophy of science. The teaching will start with the basics of philosophy of science, including the work of Popper, Lakatos, Kuhn and Feyerabend, and will use a methodological evaluation scheme developed from this work that allows rigorous evaluation of neuroscientific research as science or pseudoscience. Furthermore, there will be attention to the historical roots of neuroscience, starting with Aristotle, as well as to conceptual problems in neuroscience, methodological confusions in neuroscience, dualism and physicalism. The main aim of the course is to provide a wide-ranging understanding of the significance, strengths and weaknesses of the fields of neuroscience, which helps in critical thinking, creativity, methodological precision and scientific writing.

Literature/study material used
Book Chapters and Articles on Neurophilosophy and Philosophy of Neuro(science).
Registration
You can register for this course via Osiris Student. More information about the registration procedure can be found in the Study Guide. Max. 25 students per edition.

Mandatory for students in own Master’s programme:
No

Optional for students in other GSLS Master’s programmes:
Yes

Basic fMRI Analysis

Period (from – till): 8 February 2021 - 12 March 2021

Course coordinator: Dr. Mathijs Raemaekers
E-mail: m.raemaekers-2@umcutrecht.nl
Brain Center Rudolf Magnus
University Medical Center Utrecht

Course description
Functional Magnetic Resonance Imaging (fMRI) is one of the major methods for measuring neural activity in humans, and techniques for processing and analysing the data are under constant development. A basic understanding of analysis techniques is not only relevant for students who are planning to work with fMRI data, but also necessary for critical evaluation of the existing literature. The course provides students with hands-on experience in executing the most well-established techniques and in performing a full fMRI analysis, from individual datasets to groupwise results. Students will learn to perform the necessary steps using the SPM12 software package (Statistical Parametric Mapping, version 12). The course includes:
-General properties of the MRI/fMRI data formats
-fMRI preprocessing (see the first code sketch below), including:

  1. Correction for subject head motion in the scanner (realignment)
  2. Aligning MRI images of different modalities (coregistration)
  3. Accounting for differences in timing of the different slices in fMRI datasets (slice timing correction)
  4. Transforming individual brains to standard space to allow for comparisons across subjects (normalization)

-Statistical Analysis (see the second code sketch below), including:

  1. Detecting brain activity in individual subjects using the General Linear Model
  2. Correcting the statistical results for multiple comparisons
  3. Performing second-level/groupwise statistics using the General Linear Model

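To make the preprocessing steps above concrete, here is a minimal, hypothetical Python sketch using the nipype wrappers around SPM12. Note that nipype is not part of the course materials, running it requires MATLAB and SPM12 to be installed, and all file names and scan parameters below are invented placeholders:

    from nipype.interfaces import spm

    # 1. Realignment: correct for subject head motion within the run.
    realign = spm.Realign(in_files="sub01_func.nii", register_to_mean=True)
    realign.run()

    # 2. Coregistration: align the anatomical scan to the mean functional image.
    coreg = spm.Coregister(target="meansub01_func.nii", source="sub01_anat.nii")
    coreg.run()

    # 3. Slice timing correction: account for per-slice acquisition times
    #    (30 slices, TR = 2 s, ascending order; placeholder values).
    st = spm.SliceTiming(in_files="rsub01_func.nii", num_slices=30,
                         time_repetition=2.0, time_acquisition=2.0 - 2.0 / 30,
                         slice_order=list(range(1, 31)), ref_slice=15)
    st.run()

    # 4. Normalization: warp the individual brain into MNI standard space.
    norm = spm.Normalize12(image_to_align="sub01_anat.nii",
                           apply_to_files=["arsub01_func.nii"])
    norm.run()
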
There is a strong focus on practical application: theoretical background is immediately followed by implementation during combined lecture/workgroup sessions. In addition, students get home assignments to analyse data individually. Students must have a laptop with an installation of MATLAB 2007a or later. MATLAB with a student licence can be obtained from https://students.uu.nl/gratis-software. All other software will be provided during the course. Following the Basic fMRI Analysis course is a prerequisite for following the Advanced fMRI Analysis course.
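
For the statistical analysis steps, here is a similarly hypothetical sketch of first- and second-level General Linear Model analyses, using the Python library nilearn rather than the SPM12 package taught in the course; the file names and block design are invented for illustration:

    import pandas as pd
    from nilearn.glm.first_level import FirstLevelModel
    from nilearn.glm.second_level import SecondLevelModel
    from nilearn.glm import threshold_stats_img

    # Hypothetical block design: three 15-second task blocks.
    events = pd.DataFrame({"onset": [0, 60, 120],
                           "duration": [15, 15, 15],
                           "trial_type": ["task", "task", "task"]})

    # First level: fit the GLM to one subject's 4D fMRI run.
    model = FirstLevelModel(t_r=2.0, hrf_model="glover", smoothing_fwhm=6)
    model = model.fit("sub01_func.nii.gz", events=events)
    z_map = model.compute_contrast("task", output_type="z_score")

    # Correct the statistics for multiple comparisons (FDR here; SPM often uses FWE).
    z_thresholded, threshold = threshold_stats_img(z_map, alpha=0.05,
                                                   height_control="fdr")

    # Second level: a one-sample t-test over per-subject contrast maps.
    maps = ["sub01_task.nii.gz", "sub02_task.nii.gz"]  # placeholders
    design = pd.DataFrame({"intercept": [1] * len(maps)})
    group = SecondLevelModel().fit(maps, design_matrix=design)
    group_z = group.compute_contrast(output_type="z_score")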

Literature/study material used:
-SPM12 starters Guide
-SPM12 manual
-Reader fMRI preprocessing & analysis
-Lecture slides
Course material will be provided as PDFs before and during the course.

Registration:
You can register for this course via Osiris Student. More information about the registration procedure can be found in the Study Guide.

Mandatory for students in Master’s programme:
No

Optional for students in other GSLS Master’s programmes:
Yes.

Prerequisite knowledge:
Basic Statistical Knowledge

Requirements engineering

The course will cover the following topics:

  • The RE process and its activities
  • Standards and tools
  • Agile RE, user stories
  • Requirements elicitation
  • Linguistic aspects of natural language requirements
  • From requirements to architectures
  • Requirements prioritization (see the sketch after this list)
  • Maturity assessment
  • (Verification of) formal specifications
  • Release planning
  • Requirements traceability
  • Crowd RE

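As a flavour of one of these topics, the following toy Python sketch (not course material; all requirement names and numbers are invented) illustrates cost-value requirements prioritization, ranking requirements by their value-to-cost ratio:

    # Invented example requirements with estimated business value and cost.
    requirements = {
        "export to PDF":  {"value": 8, "cost": 3},
        "single sign-on": {"value": 9, "cost": 8},
        "dark mode":      {"value": 4, "cost": 2},
    }

    # Rank by value-to-cost ratio: highest payoff per unit of effort first.
    ranked = sorted(requirements.items(),
                    key=lambda kv: kv[1]["value"] / kv[1]["cost"],
                    reverse=True)
    for name, s in ranked:
        print(f"{name}: value/cost = {s['value'] / s['cost']:.2f}")
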
All information about the course will be made available through Blackboard before the course starts.

To qualify for the retake exam, the grade for the original exam must be at least 4.

Software architecture

The course on software architecture deals with the concepts and best practices of software architecture. The focus is on theories explaining the structure of software systems and how a system’s elements are meant to interact, given the imposed quality requirements. Topics of the course are:

  • Architecture influence cycles and contexts
  • Technical relations, development life cycle, business profile, and the architect’s professional practices
  • Quality attributes: availability, modifiability, performance, security, usability, testability, and interoperability
  • Architecturally significant requirements, and how to determine them
  • Architectural patterns in relation to architectural tactics (see the sketch after this list)
  • Architecture in the life cycle, including generate-and-test as a design philosophy; architecture conformance during implementation
  • Architecture and current technologies, such as the cloud, social networks, and mobile devices
  • Architecture competence: what this means both for individuals and organizations
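
To illustrate how an architectural tactic becomes concrete, here is a deliberately simple Python sketch (not course material) of a "retry" tactic for the availability quality attribute; the wrapped operation is hypothetical:

    import time

    def with_retry(operation, attempts=3, delay_seconds=1.0):
        """Run `operation`, retrying on failure: a minimal availability tactic."""
        for attempt in range(1, attempts + 1):
            try:
                return operation()
            except Exception:
                if attempt == attempts:
                    raise          # give up after the last attempt
                time.sleep(delay_seconds)

    # Hypothetical usage: wrap an unreliable call such as a network request.
    # config = with_retry(lambda: fetch_remote_config())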

ICT entrepreneurship

A software product is defined as a packaged configuration of software components or a software-based service with auxiliary materials, which is released for and traded in a specific market.
In this course the creation, production and organization of product software will be discussed and elaborated in depth:

  • Requirements management: prioritization for releases, tracing and tracking, scope management
  • Architecture and design: variability, product architectures, internationalization, platforms, localization and customization
  • Development methods: prototyping, realization and maintenance, testing, configuration management, delivery; development teams
  • Knowledge management: web-based knowledge infrastructures
  • Protection of intellectual property: NDA, Software Patents
  • Organization of a product software company: business functions, financing, venture capital, partnering, business plan, product/service trade-off, diversification

This course is explicitly meant for students of Information Science and Computer Science. Pre-arranged or mixed teams are no problem; it is the product idea that matters.

The aim of this course is to create a prototype and business plan for a novel software product. Students can join the course either with or without a product idea. In both cases, your participation in the course must be formally approved.

Business intelligence

This course deals with a collection of computer technologies that support managerial decision making by providing information on both internal and external aspects of operations. These technologies have had a profound impact on corporate strategy, performance, and competitiveness, and are collectively known as business intelligence (BI). During this course the following BI topics will be covered:

  • Business perspective
  • Statistics
  • Data management
  • Data integration
  • Data warehousing
  • Data mining
  • Reporting and online analytic processing (i.e., descriptive analytics; see the sketch after this list)
  • Quantitative analysis and operations research (i.e., predictive analytics)
  • Management communications (written and oral)
  • Systems analysis and design
  • Software development
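
As a small taste of descriptive analytics, the following Python sketch (not course material; the data are invented) performs an OLAP-style roll-up of revenue by region and quarter using pandas:

    import pandas as pd

    sales = pd.DataFrame({
        "region":  ["North", "North", "South", "South"],
        "quarter": ["Q1", "Q2", "Q1", "Q2"],
        "revenue": [120, 150, 90, 110],
    })

    # Pivot: regions as rows, quarters as columns, summed revenue in the cells;
    # margins=True adds "All" totals, like an OLAP roll-up.
    report = sales.pivot_table(index="region", columns="quarter",
                               values="revenue", aggfunc="sum", margins=True)
    print(report)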

Method engineering

Method engineering is defined as the engineering discipline to design, construct, and adapt methods, techniques and tools for the development of information systems. Just as software engineering is concerned with all aspects of software production, method engineering deals with all engineering activities related to methods, techniques and tools. Typical topics in the area of method engineering are:

  • Method description and meta-modeling
  • Method fragments, selection and assembly (see the sketch after this list)
  • Situational methods
  • Method frameworks, method comparison
  • Incremental methods
  • Knowledge infrastructures, meta-case, and tool support
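
By way of illustration (not course material), here is a toy Python sketch of method fragments being selected and assembled into a situational method; all fragment names are invented:

    from dataclasses import dataclass

    @dataclass
    class MethodFragment:
        name: str
        activity: str      # e.g. "elicitation", "specification"
        deliverable: str   # the work product the fragment produces

    # A small repository of reusable fragments (invented examples).
    repository = [
        MethodFragment("interviews", "elicitation", "interview notes"),
        MethodFragment("user stories", "specification", "story backlog"),
        MethodFragment("sprint review", "evaluation", "review report"),
    ]

    # Situational assembly: select the fragments this project needs.
    needed = {"elicitation", "specification"}
    situational_method = [f for f in repository if f.activity in needed]
    print([f.name for f in situational_method])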

Research internship AI

A research internship is a project performed by a student under the guidance of a supervisor. The topic of the research internship should be directly relevant to artificial intelligence and agreed with the supervisor. Projects can involve the development of software, a theoretical investigation, or experimental research (see below for some project examples). They can be performed either internally in our department, or externally at other departments of our university, at other universities, or at companies. The project should always be performed under the guidance of, and in agreement with, an internal supervisor. Students may have their own concrete project ideas, or they may be interested in doing a project on a specific topic. In both cases, they can contact a supervisor with expertise in the topic of the project to discuss the details and whether and how it can be performed as a research internship. In some cases, and in agreement with the supervisor, two students can perform a project together.
Project examples (a toy illustration follows the list):
- Agent-based Traffic Simulation
- A Power-based Spectrum for Group Responsibility
- Implementation of an agent library in Unity
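
To give a flavour of the first example, here is a toy, self-contained Python sketch (not an actual internship project) of an agent-based traffic simulation on a circular road, where each car agent accelerates but brakes to avoid the car ahead:

    import random

    ROAD_LENGTH = 50   # cells on the ring road
    NUM_CARS = 10
    MAX_SPEED = 3      # cells per time step

    # Each car agent has a position (cell index) and a speed.
    positions = sorted(random.sample(range(ROAD_LENGTH), NUM_CARS))
    speeds = [0] * NUM_CARS

    def step(positions, speeds):
        """One round: every car accelerates, brakes for the car ahead, moves."""
        n = len(positions)
        new_speeds = []
        for i in range(n):
            # Distance to the next car, wrapping around the ring road.
            gap = (positions[(i + 1) % n] - positions[i]) % ROAD_LENGTH
            new_speeds.append(min(speeds[i] + 1, MAX_SPEED, gap - 1))
        new_positions = [(p + s) % ROAD_LENGTH
                         for p, s in zip(positions, new_speeds)]
        return new_positions, new_speeds

    for _ in range(100):
        positions, speeds = step(positions, speeds)
    print("average speed:", sum(speeds) / NUM_CARS)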