Compulsory courses (32 EC)

Advanced research methods (compulsory)

Research is curiosity-driven: it is about studying phenomena and creating novel inventions. The Advanced Research Methods course provides an initial research experience in the framework of the Master in Business Informatics programme. It teaches the research methods prevalent in the field of information science (quantitative, qualitative, and design science). Based on the knowledge gained in this course and related courses, students can make a well-founded choice for their graduation research and professional career.

During your studies and professional career, you will come across research in its various manifestations. To learn how to properly conduct a research project, you need to read about it and practice by means of exercises and assignments. In the Advanced Research Methods course you will follow systematic protocols for designing and executing a research project. In the context of this course, students will design a comparative experiment to validate an artefact in context. The course consists of a combination of lectures, lab sessions, coaching sessions with the teachers, one research assignment, one world café session, and one exam.

Method engineering (compulsory)

Method Engineering is defined as the engineering discipline concerned with designing, constructing, and adapting methods, techniques and tools for the development of information systems. Just as software engineering is concerned with all aspects of software production, method engineering deals with all engineering activities related to methods, techniques and tools. Typical topics in the area of method engineering are:

  • Method description and meta-modeling
  • Method fragments, selection and assembly
  • Situational methods
  • Method frameworks, method comparison
  • Incremental methods
  • Knowledge infrastructures, meta-case, and tool support

Data science and society (compulsory)

Applied Data Science

The first course topic we cover is Applied Data Science (ADS), as positioned in (Braschler et al., 2019) and defined in (Spruit & Lytras, 2018) as “the knowledge discovery process in which analytic systems are designed and evaluated to improve the daily practices of domain experts”. As this is the core theme of the course, we cover the need for data scientists (e.g. Davenport & Patil, 2012) and relate this novel topic to the well-known domain of knowledge discovery processes (Chapman et al., 2000). We refer to standardised NIST definitions (Pritzker & May, 2015) to properly ground our ADS perspective.

Data Analytics

Data analytics is the multidisciplinary field that aims to make sense of data and observations from everyday life. Its data-driven approach to problem solving includes various methods and techniques. In this theme we focus on discussing why certain approaches work, what common mistakes are made, and so on, using (Lazer et al., 2014; Broniatowski et al., 2014) as a running example. We will also discuss data analytics tasks from both statistical and machine learning perspectives.

Big Data & Cloud Computing

The original course trigger was the inability of researchers to analyse datasets that were simply too big to process on a laptop. On the one hand, they can use someone else’s bigger computer (e.g. Cloud Computing); on the other hand, they can employ data analysis techniques that are designed to be limitlessly scalable. The prime example of such a technique is MapReduce, which we will discuss both from the original Hadoop perspective (Dean & Ghemawat, 2008) and from the perspective of its successors within the increasingly popular Spark environment (Chambers & Zacharia, 2018). Furthermore, we also note the more philosophical implications of Big Data technologies, using (Ambrose, 2015). How do we know that we know? What are the epistemological implications of Big Data analyses for the theory of knowledge? Would a historical perspective be helpful?

Natural Language Processing

We introduce the field of Natural Language Processing (NLP) as a key technology within data science and artificial intelligence. Applications of NLP are everywhere people communicate, including web search, scientific papers, emails, customer service, language translation, and clinical reports. Recently, deep learning approaches have obtained very high performance across many different NLP tasks; for decades, however, NLP was mostly based on symbolic approaches. Current NLP research aims to meaningfully integrate these two paradigms to better understand human language. We will therefore first introduce you to some classical linguistic theories before moving into more recent neural network-based NLP approaches, based on (Clark et al., 2013). Furthermore, the computational experiment assignment will allow you to experiment in depth with a state-of-the-art approach within this fast-moving field.

Automated Machine Learning

As identified in (Spruit & Jagesar, 2016), one of the major challenges in correctly applying Machine Learning techniques in Applied Data Science projects is the so-called Selection vs Configuration dilemma: it is often quite hard, even for data scientists, to select the best algorithm for a given data analysis task, and even harder to properly configure its (hyper)parameters. One promising solution is Automated Machine Learning (Hutter et al., 2019). AutoML promises to reduce the human effort necessary for applying machine learning, improve the performance of machine learning algorithms, and improve the reproducibility and fairness of scientific studies.

Self-Service Data Science

In the Do-It-Yourself week you will work individually on an NLP computational experiment and experience the course vision of self-service data science. The assignment has many variations in datasets, language models, and techniques.

Societal Impact

You decide which popular Data Science book with societal impact you read and pitch!

Other Trends

In the final lecture we will introduce other interesting data science techniques and developments which we could not cover in the course, but which may be worth investigating in a later course or research project.
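As a minimal illustration of the MapReduce model discussed in the Big Data theme, here is a plain-Python sketch (not Hadoop or Spark code) of the classic word-count example, showing the map, shuffle and reduce phases:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # map: emit a (word, 1) pair for every word in one document
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    # shuffle: group all emitted values by key, as the framework would
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reduce: aggregate the values per key
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big compute", "data streams"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
# counts == {'big': 2, 'data': 2, 'compute': 1, 'streams': 1}
```

In a real cluster, the map and reduce phases run in parallel across machines and the shuffle moves data over the network; the scalability comes from this structure, not from the per-record logic.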

Course form

This Corona edition of our course is structured somewhat differently. We do keep the twice-a-week lecture slots, in MS Teams streaming format. However, these sessions will mostly start with an interactive multiple-choice quiz, which is just for fun and to informally test your current knowledge, followed by a general Q&A session for any remaining questions. These sessions will be recorded, and attendance at the lectures is not mandatory.

Regular lecture materials will be provided as videos that can be viewed at any time. This is why we will have regular quizzes to test, and help you check, whether you have actually watched and read all materials. The workshop sessions will also take place online, in a standard asynchronous discussion-channel format on MS Teams. Our TA and SAs will try to answer any queries as soon as possible in the Technical Support channel.

Throughout the course, you are given a number of individual (mostly quite small) assignments. The answers are to be submitted to the appropriate channel in our DSS 2020 Teams group before the stated deadline (mostly one week after release). There will be no deadline extensions, so be sure to submit on time. These assignments are assessed but not graded: you either PASS or FAIL. If you FAIL 20 percent or more of the total number of assignments, you FAIL the course due to the 'inspanningsverplichting' (course effort) criterion. However, if you PASSed at least 65% of the assignments, you will be given the opportunity to do the REPAIR assignment (a relatively big assignment).

For example, with 16 assignments you will need to PASS 13/16 (~81%). With 11 or 12 PASSes, you qualify for the substantial REPAIR assignment. If you PASS only 10 (~63%) or fewer assignments, you FAIL the course without a second chance.
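For clarity, one reading of the rules above can be encoded as a small sanity check (an illustrative sketch, not an official grading tool):

```python
def dss_outcome(passed, total):
    # One reading of the stated rules: FAILing 20% or more of the
    # assignments fails the course, unless at least 65% were PASSed,
    # in which case the (big) REPAIR assignment is offered.
    failed = total - passed
    if failed / total < 0.20:
        return "PASS"
    if passed / total >= 0.65:
        return "REPAIR"
    return "FAIL"

# reproduces the worked example with 16 assignments:
dss_outcome(13, 16)  # 'PASS'   (3 fails = 18.75% < 20%)
dss_outcome(12, 16)  # 'REPAIR' (75% passed)
dss_outcome(10, 16)  # 'FAIL'   (62.5% < 65%)
```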

To help you complete the assignments, this class is also supported by the DataCamp learning platform for Python, SQL and more, through a combination of short expert videos and hands-on-the-keyboard exercises.


We provide PDFs for most if not all required literature.

Business process management (compulsory)

There is no content available for this course.

Introduction to Business Informatics (compulsory)

Please note that it is necessary to take this course immediately after starting the MBI program.

Lectures and assignments.

The students are expected to know the contents of the Business Informatics curriculum and regulations.

Introducing Natural Sciences (compulsory)

There are two morning sessions with several speakers introducing the student to the education system of the graduate school: its rules, its curricula, general and practical information about personnel and administration, specific information about the programme itself and the programme board's expectations of its students, honours education, specific profiles across disciplines, and the profession of teacher.
Knowing what kind of skills and attitudes the labour market is looking for is considered important. Workshops will train students to become more aware of their own strengths and weaknesses, or introduce them to the work and life of PhD students.
Students will have ample time to get to know each other and their programme board.
Lunches, drinks and a concluding dinner will be organised.

Dilemmas of the scientist (compulsory)

This course consists of one workshop. It addresses dilemmas of integrity in the practice of doing academic research: students will learn what such dilemmas are and how to deal with them in practice.

Students can only attend this course after they have completed the first workshop.

Electives (7.5 EC each)

Software production

Software Production is the research domain that covers product development, corporate entrepreneurship, and societal implications of large scale software development in open market conditions. Requirements formulation is an essential activity during software production, for which User Stories have been adopted widely.
The overall goal of this year's seminar is to develop a MOOC with corresponding book on User Stories. We will integrate current scientific knowledge on User Stories into a well-designed set of learning modules with presentations, assignments, and exams. Industrial materials and case material in various media will be included to boost the student experience.

Course form
The course is run as a seminar. Interactive discussions led by students, PhDs, and staff.
The research project is performed individually, assisted by peers. Students will present their proposals for chapters and clips, as well as their final deliverables.

Exam form
Various presentations, written research reports, and knowledge clips.

Theories in Information Systems

Course form
The course lectures are divided over nine weeks, each with its own theme, e.g. ‘IS and the individual’, ‘IS and society’, and ‘IS artifacts’. Most weeks contain two lectures: the Tuesday lecture, in which both lecturers and students present and discuss the assigned literature, and the Thursday lecture, in which you show how you apply the theories described in the literature to your own research proposal.

A selection of scientific papers that will be announced on Blackboard.

Business intelligence

This course deals with a collection of computer technologies that support managerial decision making by providing information on both internal and external aspects of operations. These technologies have had a profound impact on corporate strategy, performance, and competitiveness, and are collectively known as business intelligence. During this course the following BI topics will be covered:

  • Business perspective
  • Statistics
  • Data management
  • Data integration
  • Data warehousing
  • Data mining
  • Reporting and online analytic processing (i.e., descriptive analytics)
  • Quantitative analysis and operations research (i.e., predictive analytics)
  • Management communications (written and oral)
  • Systems analysis and design
  • Software development

ICT startups

ICT Startups is a relatively new research field: it concerns the definition and study of success in ICT entrepreneurship. Its goal is to support the practitioner field, whether incubators, entrepreneurs, software engineers, software product managers, chief technology officers, or entrepreneurship lecturers, with up-to-date knowledge and practices. These should in turn support entrepreneurs in making better decisions in their daily, and often nightly, work. One of the main aspects of this work concerns the construction and engineering of ICT products and services.

The research project in the course can be about one of the following topics:

  • Startups and new venture creation

Success factors for software-intensive startups
Software startup processes
Disruptive innovation and adoption of startups
Managing startup and growth hacking
Intertwined software product and business model development
App economy
Platform-based business models and value co-creation
API economy

  • Software Development and Product Management

Software engineering management and productivity
Lifecycle perspective
Speeding up time-to-market
Effective business model transformation and improvement
Pricing strategies
Design thinking

  • Software Business Development

Business modeling for software products and services
Economics of software companies
Internationalization of software-intensive companies

  • New ideas and emerging areas

Disruptive trends in software business
Business Analytics, data analytics
The future of software-intensive business
Software business and entrepreneurship education
Game business and gamification in software-intensive business

ICT Startups is a continuation course of ICT Entrepreneurship. It cannot be followed without first successfully finishing ICT Entrepreneurship.

Course form/course entry
The course is highly personalized and can be tailored to the ambitions of the students and the advising lecturer.
A research plan must be created by the student and signed by both the course supervisor and the student before a student is allowed to enter the course.
The research plan should at least list: (1) the project goal, (2) the project deliverables, (3) the project’s envisioned outcome, and (4) an assessment plan from the supervisor.


  • Ries, E. (2011), The Lean Startup: How Constant Innovation Creates Radically Successful Businesses. Kindle Edition, Penguin Books Limited. New York, NY.
  • Osterwalder, A., Pigneur, Y., Bernarda, G. & Smith, A. (2014), Value proposition design: How to create products and services customers want, John Wiley & Sons.
  • Paternoster, N., Giardino, C., Unterkalmsteiner, M., Gorschek, T. & Abrahamsson, P. (2014), Software development in startup companies: A systematic mapping study, Information and Software Technology 56(10), 1200–1218.
  • Jansen, S. & van Cann, R. (2012), Software business start-up memories: Key decisions in success stories, Springer.
  • Dorst, K. (2011), ‘The core of ’design thinking’ and its application’, Design studies 32(6), 521–532.
  • Blank, S. & Dorf, B. (2012), The startup owner’s manual: The step-by-step guide for building a great company, K&S Ranch.
  • Lucassen, G., Dalpiaz, F., van der Werf, J. M. E., & Brinkkemper, S. (2016). Improving agile requirements: the quality user story framework and tool. Requirements Engineering, 21(3), 383-403.
  • Farshidi, S., Jansen, S., de Jong, R. & Brinkkemper, S. (2018), A decision support system for software technology selection, Journal of Decision Systems.

Responsible ICT

Responsible ICT focuses on the social and environmental, positive and negative, impacts of ICT, and introduces ethical reflection on all stages of the ICT lifecycle. Humanity is facing major challenges in ensuring world-wide peace, managing the global exchange of people and goods without health risks, reducing poverty while increasing equity and inclusion, minimizing climate change, and redesigning the socio-economic system so that it contributes to a good life for all within planetary boundaries.

ICT is often included as a key ingredient in the recipes proposed as solutions to these challenges. The course covers theories and skills that allow students to examine in depth the interrelation between ICT, society and the natural environment, to critically assess the roles ICT plays at both the organizational and systemic levels, its capability to be part of the solution, and the trade-offs it entails.

Data mining

This course is aimed at students of the Computing Science (COSC) master program. It is required that the student has:

  1. Knowledge of algorithms and data structures, at the level of the bachelor course "Datastructuren".
  2. Successfully completed a serious programming course, such as the bachelor course "Imperatief Programmeren".
    Experience with using packages in R or Python is not sufficient.
  3. Knowledge of probability and statistics, at the level of "Onderzoeksmethoden voor Informatica".
  4. Knowledge of linear algebra (such as treated in the bachelor course "Graphics").


Lectures and Computer Lab.


Selected book chapters, articles, and lecture notes.

Technologies for learning

The list of topics we will research includes but is not limited to:

  • student modelling technologies for representing knowledge, metacognitive skills and strategies and affective state of a student working with an adaptive education system
  • technologies for adaptive learning support, such as intelligent tutoring systems and adaptive educational hypermedia
  • technologies for supporting collaborative, group-based and social learning scenarios
  • technologies exploiting big data set in education for empowering student and teachers, as well as improving the behaviour of intelligent educational software
  • modern HCI methods used in education for creating effective learning interfaces including dialog systems, learning companions, serious games and virtual reality

This academic field is extremely interdisciplinary. Hence, the background necessary to study and work with these technologies can be very diverse: knowledge of data mining and machine learning, parsing and rewriting, artificial intelligence, and HCI is all useful. The course material, as well as the topics for the group project, will be adjusted to the background of the students in order to use the cumulative expertise of the class as much as possible.
Course form
Lectures, reading sessions/paper presentations and discussions, research project, periodic quizzes and exam.

All papers are listed on Blackboard.

Adaptive interactive systems

This course is about the design and evaluation of interactive systems that automatically adapt to users and their context. It discusses the layered design and evaluation of such systems. It shows how to build models of users, groups and context, and which characteristics may be useful to model (including for example preferences, ability, personality, affect, inter-personal relationships). It shows how adaptation algorithms can be inspired by user studies. It covers standard recommender system techniques such as content-based and collaborative filtering, as well as research topics such as person-to-person recommendation, task-to-person recommendation, and group recommendation. It also discusses explanations for adaptive interactive systems and usability issues (such as transparency, scrutability, trust, effectiveness, efficiency, satisfaction, diversity, serendipity, privacy and ethics). The course content will be presented in the context of various application domains, such as personalized behaviour change interventions, personalized news, and personalized e-commerce.
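As a minimal illustration of the collaborative-filtering idea mentioned above, here is a sketch of user-based filtering with cosine similarity; the rating matrix and its values are invented for illustration:

```python
import numpy as np

# toy user-item rating matrix (0 = unrated); rows are users, columns items
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4]], dtype=float)

def cosine(u, v):
    # cosine similarity between two users' rating vectors
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def predict(R, user, item):
    # user-based CF: similarity-weighted average of the ratings that
    # other users gave to this item
    sims = np.array([cosine(R[user], R[v]) if v != user else 0.0
                     for v in range(R.shape[0])])
    rated = R[:, item] > 0
    weights = sims * rated
    return (weights @ R[:, item]) / weights.sum() if weights.sum() > 0 else 0.0

score = predict(R, user=0, item=2)  # user 0's predicted rating for item 2
```

Real systems add rating normalisation, neighbourhood selection, and matrix factorisation, but the weighted-average core is the same.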

Software ecosystems

Software vendors no longer function as independent units, where all customers are end-users, where there are no suppliers, and where all software is built in-house. Instead, software vendors have become networked: they depend on (communities of) service and software component suppliers, value-added resellers, and pro-active customers who build and share customizations. Software vendors now have to consider their strategic role in the software ecosystem to survive. With this role in mind, software vendors can become more successful by opening up their business, devising new business models, forging long-lasting relationships with partner networks, and overcoming the technical and social challenges that are part of these ecosystems.

A software ecosystem is a set of actors functioning as a unit and interacting with a shared market for software and services, together with the relationships among them. These relationships are frequently underpinned by a common technological platform or market and operate through the exchange of information, resources and artifacts. Several challenges lie in the research area of software ecosystems. To begin with, insightful and scalable modeling techniques for software ecosystems do not currently exist. Furthermore, methods are required that enable software vendors to transform their legacy architectures to accommodate reusability of internal common artifacts and external components and services. Finally, methods are required that support software vendors in choosing survival strategies in software ecosystems.

Software ecosystems introduce many new research challenges on both a technical and a business level. In a traditionally closed market, software vendors now face the challenge of opening up their product interfaces, their knowledge bases, and in some cases even their software. Software vendors must decide how open their products and interfaces are, new business models need to be developed, and new standards for component and service reuse are required. These challenges have been identified but have hardly been picked up by the research community.

In this seminar, topics on SECOs are discussed. These topics can range from consultancy-oriented, for product software companies, to highly technical, for software engineers. The course is largely dependent on student participation. Some example topics are:

  • Virtualized software enterprises
  • Open source software ecosystems
  • Market-specific domain engineering
  • Software ecosystem orchestration
  • Software development communities
  • Software product lines
  • Software product management
  • Publishing APIs
  • API development
  • Formal modeling of business models
  • Architectural implications of reusability
  • Keystone and niche player survival strategy
  • Software ecosystem creation
  • Economic impact of software ecosystems
  • Communities of practice and software reuse
  • Product software and software licensing
  • Software business models
  • Software ecosystem practices and experience
  • Software ecosystem modeling
  • API related topics: design, development, marketing
  • Software ecosystem models
  • A software ecosystem analysis method
  • Strategic advice for software vendors
  • API compatibility over subsequent releases

Data analysis and visualisation

What puts former criminals on the right track? How can we prevent heart disease? Can Twitter predict election outcomes? What does a violent brain look like? How many social classes does 21st century society have? Are hospitals spending too much on health care, or too little? When is a series of spikes in hundreds of website logfiles an operational problem?

Data analysis is the art and science of tackling questions like these by looking at data. Just as cartographers make maps to see what a country looks like, data analysts explore the hidden structures of data by creating informative pictures and summarizing relationships among variables. And just as doctors diagnose sick patients and advise healthy ones on how to stay healthy, data analysts predict important events and variables so we can act on this knowledge. Methods from statistics, machine learning, and data mining play an important part in this process, as well as visualizations that allow the analyst and other humans to better understand what we can conclude from the available facts.

During this course, participants will actively learn how to apply the main statistical methods in data analysis and how to use machine learning algorithms and visualizing techniques. The course has a strongly practical, hands-on focus: rather than focusing on the mathematics and background of the discussed techniques, you will gain hands-on experience in using them on real data during the course and interpreting the results.
This course covers both classical and modern topics in data analysis and visualization:

  1. Exploratory data analysis (EDA);
  2. Supervised machine learning and statistical learning;
  3. Unsupervised learning and data mining techniques;
  4. Visualization (throughout the course).

IMPORTANT ENROLLMENT INFORMATION - please read before registering

This course is part of the GSNS Profile Applied Data Science. If you have chosen to register for the profile, you will receive preference when registering for this course. To register for this course as a profile student and receive placement preference, you must use the special registration form before the deadline in September. Please see here for more information about the procedure. Any questions regarding the ADS profile and registration for the profile should be directed to the ADS profile coordinator, not the course coordinator.

Other interested (non-profile) students, from any Faculty, are also welcome in this course, provided you are a UU master's student who meets the prerequisites. You can enroll via Osiris at the beginning of October; see here for the exact registration dates. Note that for non-profile students, there is no guaranteed placement in the course.

Pattern recognition

In this course we study statistical pattern recognition and machine learning.

The subjects covered are:

  • General principles of data analysis: overfitting, the bias-variance trade-off, model selection, regularization, the curse of dimensionality
  • Linear statistical models for regression and classification
  • Clustering and unsupervised learning
  • Support vector machines
  • Neural networks and deep learning

Knowledge of elementary probability theory, statistics, multivariable calculus and linear algebra is presupposed.
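As a small taste of the linear-model and regularization topics listed above, here is a sketch of the closed-form ridge regression estimate on synthetic data; the data, coefficients, and regularization strength are invented for illustration:

```python
import numpy as np

# synthetic regression data: 50 samples, 3 features, known coefficients
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

def ridge(X, y, lam):
    # closed-form ridge estimate: (X^T X + lam * I)^{-1} X^T y;
    # lam > 0 shrinks the coefficients and stabilises the inversion
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge(X, y, lam=1.0)  # close to the true coefficients, slightly shrunk
```

Increasing `lam` trades variance for bias: the larger it is, the more the estimate is pulled towards zero, which is exactly the bias-variance trade-off the course discusses.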

Enterprise architecture

What kind of business processes are important in our organization, and how can we support these processes using IT? What is the application landscape of our organization, and do we need to update it to improve the speed and flexibility with which we can do business? How can we manage our technical infrastructure to improve access to information for our employees but at the same time minimize security risks? In this course, you will learn the techniques that allow you to answer these and other questions. The core subject of the course is the modelling and analysis of enterprise-wide architectures (i.e. business process architectures, information architectures, application architectures, technical architectures, combinations of architectures, and so on). In addition, we will discuss related topics such as risk management and business process modelling.

Software architecture

The course on software architecture deals with the concepts and best practices of software architecture. The focus is on theories explaining the structure of software systems and how a system’s elements are meant to interact given the imposed quality requirements. Topics of the course are:

  • Architecture influence cycles and contexts
  • Technical relations, development life cycle, business profile, and the architect’s professional practices
  • Quality attributes: availability, modifiability, performance, security, usability, testability, and interoperability
  • Architecturally significant requirements, and how to determine them
  • Architectural patterns in relation to architectural tactics
  • Architecture in the life cycle, including generate-and-test as a design philosophy; architecture conformance during implementation
  • Architecture and current technologies, such as the cloud, social networks, and mobile devices
  • Architecture competence: what this means both for individuals and organizations

Natural language generation

The taught component of the course will consist of four parts:

I. General Introduction. In the first part of the course you will learn what the different aims of practical and theoretical NLG can be, what are the main elements of the standard NLG pipeline, how NLG systems are built, and how they are evaluated. Template-based and end-to-end systems will be discussed briefly.

II. Practical systems. You will get acquainted with a range of practical applications of NLG; a few will be discussed in detail. Candidate applications are medical decision support, knowledge editing, and robo-journalism. Strengths, weaknesses, and opportunities for the practical deployment of these systems will be discussed. If time allows, we will devote attention to multimodal systems, which produce documents in which pictures or diagrams complement a generated text.

III. Module in focus: Referring Expressions Generation. We will zoom in on one part of the standard NLG pipeline, which is responsible for the generation of referring expressions (e.g., as when an NLG system says “the city where you work”, or “the area north of the river Rhine”). We will discuss a range of rule-based algorithms, and some that are based on Machine Learning.
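A toy sketch in the spirit of the classic rule-based REG algorithms mentioned above (e.g. the incremental algorithm): properties are added one at a time, in a fixed preference order, until every distractor is ruled out. The domain, attribute names, and preference order below are invented for illustration:

```python
# toy domain: three entities described by attribute-value pairs
domain = {
    "d1": {"type": "dog", "colour": "black", "size": "small"},
    "d2": {"type": "dog", "colour": "white", "size": "small"},
    "c1": {"type": "cat", "colour": "black", "size": "small"},
}

def refer(target, domain, preference=("type", "colour", "size")):
    # incremental-style REG: add a property only if it rules out at
    # least one remaining distractor; stop once the target is unique
    distractors = {e for e in domain if e != target}
    description = {}
    for attr in preference:
        value = domain[target][attr]
        ruled_out = {e for e in distractors if domain[e][attr] != value}
        if ruled_out:
            description[attr] = value
            distractors -= ruled_out
        if not distractors:
            break
    return description

refer("d1", domain)  # {'type': 'dog', 'colour': 'black'} -> "the black dog"
```

Note how "size" is never selected: it rules out no distractors, which is the algorithm's built-in bias against redundant properties.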

IV. Perspectives on NLG. We will discuss what linguists, philosophers, and other theoreticians have to say about human language production, and how this relates to NLG. We may start with a Gricean approach, and continue with the Bayesian-inspired Rational Speech Acts approach. We will ask how accurate and how explanatory existing NLG algorithms are as models of human language production (i.e., human speaking and writing), and what are the main open questions for research in this area.

The core of the course will be presented in lectures. Additionally, students will be asked to read, present, and discuss some key papers and systems which illustrate the issues listed above.

Big data

Big Data is as much a buzzword as an apt description of a real problem: the amount of data generated per day is growing faster than our processing abilities. Hence the need for algorithms and data structures that allow us, for example, to store, retrieve and analyze vast amounts of widely varied data that streams in at high velocity.

In this course we will limit ourselves to data mining aspects of the Big Data problem, more specifically to the problem of classification in a Big Data setting. To make algorithms viable for huge amounts of data, they should have low complexity; in fact, it is easy to think of scenarios where only sublinear algorithms are practical. That is, algorithms that see only a (vanishingly small) part of the data: algorithms that only sample the data.

We start by studying PAC learning, where we study tight bounds to learn (simple) concepts almost always almost correctly from a sample of the data; both in the clean (no noise) and in the agnostic (allowing noise) case. The concepts we study may appear to allow only for very simple – hence, often weak – classifiers. However, the boosting theorem shows that they can represent whatever can be represented by strong classifiers.
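For a concrete flavour of the bounds mentioned above: in the clean (realizable) case with a finite hypothesis class H, the classic result is that m ≥ (1/ε)(ln|H| + ln(1/δ)) examples suffice for a consistent learner to output, with probability at least 1 − δ, a hypothesis with error at most ε. A small sketch (the numbers are purely illustrative):

```python
import math

def pac_sample_bound(h_size, epsilon, delta):
    # m >= (1/eps) * (ln|H| + ln(1/delta)) examples suffice so that,
    # with probability >= 1 - delta, a consistent learner over a finite
    # hypothesis class H outputs a hypothesis with error <= epsilon
    # (realizable case)
    return math.ceil((math.log(h_size) + math.log(1 / delta)) / epsilon)

m = pac_sample_bound(h_size=2**20, epsilon=0.05, delta=0.01)
# about a million hypotheses, 5% error, 99% confidence -> m == 370
```

The striking part is the logarithmic dependence on |H|: doubling the hypothesis class adds only ln 2 ≈ 0.69 to the numerator, which is why sampling-based learning scales to huge concept spaces.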

PAC learning algorithms are based on the assumption that a data set represents only one such concept, which obviously isn’t true for almost any real data set. So, next we turn to frequent pattern mining, geared to mine all concepts from a data set. After introducing basic algorithms to compute frequent patterns, we will look at ways to speed them up by sampling using the theoretical concepts from the PAC learning framework.
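The level-wise idea behind basic frequent pattern mining algorithms such as Apriori can be sketched as follows (toy transactions, and a simplified candidate-generation step without the full subset-pruning of the real algorithm):

```python
def frequent_itemsets(transactions, min_support):
    # Level-wise search: a k-itemset can only be frequent if its
    # (k-1)-subsets are frequent, so we grow candidates one level at
    # a time and keep only those with enough support.
    n = len(transactions)
    support = lambda s: sum(s <= t for t in transactions) / n
    current = [frozenset([i]) for i in {i for t in transactions for i in t}]
    frequent, k = {}, 1
    while current:
        current = [s for s in current if support(s) >= min_support]
        frequent.update({s: support(s) for s in current})
        k += 1
        # join step: candidate k-sets built from the frequent (k-1)-sets
        current = list({a | b for a in current for b in current
                        if len(a | b) == k})
    return frequent

T = [frozenset("abc"), frozenset("abd"), frozenset("ab"), frozenset("cd")]
freq = frequent_itemsets(T, min_support=0.5)
# {'a','b'} is frequent (support 0.75); {'c','d'} is not (support 0.25)
```

The sampling speed-ups discussed in the course replace the exact `support` computed over all transactions with an estimate computed over a sample, with PAC-style guarantees on the estimation error.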

ICT entrepreneurship

A software product is defined as a packaged configuration of software components or a software-based service with auxiliary materials, which is released for and traded in a specific market.
In this course the creation, production and organization of product software will be discussed and elaborated in depth:

  • Requirements management: prioritization for releases, tracing and tracking, scope management
  • Architecture and design: variability, product architectures, internationalization, platforms, localization and customization
  • Development methods: prototyping, realization and maintenance, testing, configuration management, delivery; development teams
  • Knowledge management: web-based knowledge infrastructures
  • Protection of intellectual property: NDA, Software Patents
  • Organization of a product software company: business functions, financing, venture capital, partnering, business plan, product/service trade-off, diversification

This course is explicitly meant for students of Information Science and Computer Science. Pre-arranged or mixed teams are no problem; it is the product idea that matters.

The aim of this course is to create a prototype and business plan for a novel software product. Students can join the course either with a product idea or without. In both cases your participation in the course must be formally approved.

Requirements engineering

The course will cover the following topics:

  • The RE process and its activities
  • Standards and tools
  • Agile RE, user stories
  • Requirements elicitation
  • Linguistic aspects of natural language requirements
  • From requirements to architectures
  • Requirements prioritization
  • Maturity assessment
  • (Verification of) formal specifications
  • Release planning
  • Requirements traceability
  • Crowd RE
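As a small illustration of the requirements prioritization topic above, here is a sketch of the cumulative voting ("100-dollar test") technique; the stakeholders, requirements, and point allocations are illustrative assumptions, not course material.

```python
# Cumulative voting ("100-dollar test") for requirements prioritization:
# each stakeholder distributes exactly 100 points across the requirements;
# ranking by total points gives a simple, transparent priority order.
def prioritize(votes):
    for stakeholder, alloc in votes.items():
        assert sum(alloc.values()) == 100, f"{stakeholder} must allocate exactly 100 points"
    totals = {}
    for alloc in votes.values():
        for req, points in alloc.items():
            totals[req] = totals.get(req, 0) + points
    return sorted(totals.items(), key=lambda kv: -kv[1])

votes = {
    "customer":  {"single sign-on": 60, "audit log": 10, "dark mode": 30},
    "developer": {"single sign-on": 30, "audit log": 60, "dark mode": 10},
}
print(prioritize(votes))
# → [('single sign-on', 90), ('audit log', 70), ('dark mode', 40)]
```

Forcing the budget to exactly 100 points per stakeholder is what distinguishes this technique from simple scoring: stakeholders cannot rate everything as top priority, so trade-offs are made explicit.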

All information about the course will be made available through Blackboard before the course starts.

To qualify for the retake exam, the grade for the original exam must be at least 4.

Multimedia discourse interaction

Multimedia Discourse Interaction addresses the complexity of interacting with information present in different information carriers, such as language (written or spoken), image, video, music and (scientific) data. The goal is to convey information to a user in an effective way.

Knowledge of cognitive capabilities and limitations, such as information processing speeds, can be used to inform the design of useful and efficient ways of searching, browsing, studying, analysing and communicating information in a way that is appropriate to a user's task, knowledge and skills. Subsequently, the fragments of relevant information that are selected from multiple sources must be combined for meaningful presentation to the user. Models and theories exist, for example in artificial intelligence, but also in the fields of film theory and computational linguistics, that describe communication structures, such as narratives or arguments. These can be used to inform the process of selecting and assembling specific media fragments or selections of data into a presentation appropriate to an end‐user's information needs.

Information presentation consists of combining atomic pieces of information into some communication structure that helps viewers understand the relationships between the pieces. For example, in text, multiple words are strung together according to established structures, namely grammatically correct sentences. Similarly, a media fragment, for example a film shot, represents some atom of meaning. Fragments can be combined into a communication structure meaningful to the viewer; this is precisely the task that a film director carries out. Communication structures for specific domains, for example structures relating the different positions of an argument about the utility of war, have been modelled in the literature. When these are implemented and used to present video fragments to a human viewer, the video sequence is perceived as conveying a coherent argument and discourse.

The seminar explores literature from diverse subfields, including artificial intelligence, semantic web, multimedia and document engineering, providing a range of perspectives on the challenges.

Course Form
This course is set up as a seminar. It challenges the participants to acquire and disseminate knowledge about a complex subject in an interactive way. The moderators make a pre-selection of relevant research papers and web references. Students are expected to supplement these with their own literature search. They are expected to take the lead on proposing, preparing and presenting projects. Participants will work in groups of 2 on a joint project. Group meetings are mandatory.

Exam Form

  • Attendance of meetings is obligatory
  • Individual: Oral presentations of various topics
  • Group: Report on project that also details the individual contributions

Seminar medical informatics

This seminar is about the development, implementation and evaluation of IS/IT in the health care domain, which can be labeled 'medical informatics' but also 'health IT' or 'e-health'. Compared to previous editions, this year's seminar will focus on medical apps and games. This is a relatively new and exciting field that is full of opportunities to explore and evaluate: apps and games that help doctors in their clinical work, managers in governing their hospitals, and patients in coping with their diseases. Three knowledge fields are combined in this course:
(1) Health care: what are the current challenges of health care, what do clinical and organizational processes in health care look like, and how do health care systems, organizations and professionals work?
(2) Mobile health: what types of mobile systems are applied in health care, and what types of apps do doctors, nurses and patients use, or want to use?
(3) Evaluation studies: what are the principles and models to evaluate whether apps and games in health care will work, and how can apps and games be reviewed in different stages and from different perspectives?
These three fields will be addressed and integrated in this course. After this course, you will have gained more knowledge about both the drivers and barriers in medical informatics, and about medical apps and games in particular.

Process mining

There is no content available for this course.

Knowledge management

Knowledge management is about the organization, development, and use of knowledge in such a way that it directly contributes to the competitive edge of a company. In the Knowledge Management course we will study the main themes in the field, such as 'KM models', 'knowledge management strategy', 'communities of practice and knowledge networks', 'knowledge discovery', 'knowledge management systems', and 'intellectual capital'.
For a long time companies relied on the production factors: labor, capital and (raw) material, but today the main production factor is knowledge (P. Drucker). Organizations, such as corporate enterprises, non-profits, educational institutions and governmental agencies, face the continual struggle to transform vast amounts of data, information and content into usable and reusable knowledge. Globalization and technological developments force organizations into a continuous process of change and adaptation.
Alvin Toffler and Peter Drucker already noticed the consequences in the 1980s. They mention the rise of the information-based or knowledge-based organization. This new type of organization mainly consists of so-called 'knowledge workers', who largely depend on knowledge to do their work. Knowledge workers work rather autonomously; hence a different organizational structure is required, typically with fewer management layers. The growing awareness of knowledge as a distinct factor of production and the need for a new management approach have led to a new field of study and practice: knowledge management.
Another driver has been the development of so-called 'knowledge systems'. However, the results of implementing such systems are not always as expected: systems are not always aligned with work practices, people need to know how to trust and interpret the information provided, and providing information or sharing knowledge is not automatically a part of everybody's job routine.

Data intensive systems

There is no content available for this course.