Courses

Compulsory courses (32 EC)

Advanced research methods (compulsory)

Research is curiosity-driven: it is about studying phenomena and creating novel inventions. The advanced research methods course provides an initial research experience within the framework of the Master in Business Informatics program. The research methods prevalent in the field of information science (quantitative, qualitative, and design science) are taught. Based on the knowledge gained in this course and related courses, students can make a well-founded choice for their graduation research and professional career.

During your studies and professional career, you will come across research in its various manifestations. To learn how to properly conduct a research project, you need to read about it and practice by means of exercises and assignments. In the advanced research methods course you will follow systematic protocols for designing and executing a research project. In the context of this course, students design a comparative experiment to validate an artefact in context. The course consists of a combination of lectures, lab sessions, coaching sessions with the teachers, one research assignment, one world café session, and one exam.

Method engineering (compulsory)

Method Engineering is defined as the engineering discipline to design, construct, and adapt methods, techniques, and tools for the development of information systems. Just as software engineering is concerned with all aspects of software production, method engineering deals with all engineering activities related to methods, techniques, and tools. Typical topics in the area of method engineering are:

  • Method description and meta-modeling
  • Method fragments, selection and assembly
  • Situational methods
  • Method frameworks, method comparison
  • Incremental methods
  • Knowledge infrastructures, meta-case, and tool support

Data science and society (compulsory)

Course form
Lectures, tutorials, quizzes, Q&A sessions, assignments.

Literature
We provide PDFs for most, if not all, required literature. Additional reading will be announced.

The required readings include:

  • Igual, L., & Seguí, S. (2017). "Introduction to Data Science: A Python Approach to Concepts, Techniques and Applications". Switzerland: Springer. [url]
  • Chapman, P., Clinton, J., Kerber, R., Khabaza, T., Reinartz, T., Shearer, C., & Wirth, R. (2000). "CRISP-DM 1.0: Step-by-step data mining guide". SPSS Inc, 16. [Sections 1, 2] [url]
  • Hutter, F., Kotthoff, L., & Vanschoren, J. (2019). "Automated Machine Learning - Methods, Systems, Challenges". Springer. [Chapter 1 (required); Chapter 8 (additional)] [url]
  • Clark, A., Fox, C., & Lappin, S. (Eds.). (2013). "The handbook of computational linguistics and natural language processing". John Wiley & Sons. [Chapter 1 (required); Chapters 4, 9 (additional)] [url]

Business process management (compulsory)

There is no content available for this course.

Introduction to Business Informatics (compulsory)

The objective of the course is to guide students through the complete MBI program, resulting in the creation of a personal MBI study plan.
In four weeks' time you will encounter the various types of courses (mandatory, primary elective, and secondary elective) from which to compose your curriculum, as well as some coherent sets of courses.
Boundary conditions will be discussed as well, as will formal constraints (the Education & Exam Regulations).
The course pays some attention to scientific integrity, including fraud and plagiarism.
Near the end, the course looks ahead towards MBI graduation, especially your thesis (project).
Throughout the course, social interaction will be encouraged, such as the integration of students from Utrecht University, other Dutch universities (including Universities of Applied Sciences), and international students.

Course form
Lectures and gatherings.

Literature
The course is not based on literature.

Introducing Natural Sciences (compulsory)

There are two morning sessions with several speakers who introduce students to the education system of the graduate school, its rules and curricula, general and practical information about personnel and administration, specific information about the programme itself, the expectations of the programme board of its students, honours education, specific profiles across disciplines, and the profession of teacher.
Knowing what kinds of skills and attitudes the labour market is looking for is considered important as well. Workshops will train students to enhance awareness of their own strengths and weaknesses or introduce them to the work and life of PhD students.
Students will have ample time to get to know each other and their programme board.
Lunches, drinks and a concluding dinner will be organised.

Dilemmas of the scientist (compulsory)

The course Dilemmas of the Scientist consists of two workshops: one in your first year (course code FI-MHPSDL1) and one in your second year (course code FI-MHPSDIL). Both are mandatory for all students of the Graduate School of Natural Sciences. The workshops have separate course codes because the course spans two academic years. The 0.5 EC attached to the course is credited once you've completed FI-MHPSDIL. Please note: the talk about research integrity during the master introduction days is *not* part of the course.

This workshop (FI-MHPSDIL) is the second-year workshop. It is intended for second-year master students who have already completed FI-MHPSDL1. If you have not yet completed FI-MHPSDL1, you should do that first. If that leads to scheduling conflicts, please contact the course coordinator.

The workshop is offered both in semester 1 and semester 2. When you should take the workshop depends on when you’ve started your master programme. If you’ve started your master in (or before) February 2019, you should take the workshop in semester 1. If you started your master in September 2019, you should take the workshop in semester 2.

During this workshop, we will discuss dilemmas of integrity that you yourself have encountered during your studies.

Electives (7.5 EC each)

Software production

Software Production is the research domain that covers product development, corporate entrepreneurship, and the societal implications of large-scale software development in open market conditions. Requirements formulation is an essential activity during software production, for which User Stories have been widely adopted.
The overall goal of this year's seminar is to develop a MOOC with a corresponding book on User Stories. We will integrate current scientific knowledge on User Stories into a well-designed set of learning modules with presentations, assignments, and exams. Industrial materials and case material in various media will be included to boost the student experience.

Course form
The course is run as a seminar, with interactive discussions led by students, PhDs, and staff.
The research project is performed individually, assisted by peers. Students will present their proposals for chapters and clips, as well as their final deliverables.

Exam form
Various presentations, written research reports, and knowledge clips.

Theories in Information Systems

There is no content available for this course.

ICT startups

ICT Startups is a relatively new research field: it concerns the definition and study of success in ICT entrepreneurship. Its goal is to support the practitioner field, whether that is incubators, entrepreneurs, software engineers, software product managers, chief technology officers, or entrepreneurship lecturers, with up-to-date knowledge and practices. These should in turn support entrepreneurs in better decision making in their daily, and often nightly, work. One of the main aspects of this work concerns the construction and engineering of ICT products and services.

The research project in the course can be about one of the following topics:

  • Startups and new venture creation
      • Success factors for software-intensive startups
      • Software startup processes
      • Disruptive innovation and adoption of startups
      • Managing startup and growth hacking
      • Intertwined software product and business model development
      • App economy
      • Platform-based business models and value co-creation
      • API economy
  • Software Development and Product Management
      • Software engineering management and productivity
      • Lifecycle perspective
      • Speeding up time-to-market
      • Effective business model transformation and improvement
      • Pricing strategies
      • Design thinking
  • Software Business Development
      • Business modeling for software products and services
      • Economics of software companies
      • Internationalization of software-intensive companies
  • New ideas and emerging areas
      • Disruptive trends in software business
      • Business analytics, data analytics
      • The future of software-intensive business
      • Software business and entrepreneurship education
      • Game business and gamification in software-intensive business

Prerequisites
ICT Startups is a continuation course of ICT Entrepreneurship. It cannot be followed without first successfully finishing ICT Entrepreneurship.

Course form/course entry
The course is highly personalized and can be tailored to the ambitions of the students and the advising lecturer.
A research plan must be created by the student and signed by both the course supervisor and the student before a student is allowed to enter the course.
The research plan should at least list: (1) the project goal, (2) the project deliverables, (3) the project’s envisioned outcome, and (4) an assessment plan from the supervisor.

Literature

  • Ries, E. (2011), The Lean Startup: How Constant Innovation Creates Radically Successful Businesses. Kindle Edition, Penguin Books Limited. New York, NY.
  • Osterwalder, A., Pigneur, Y., Bernarda, G. & Smith, A. (2014), Value proposition design: How to create products and services customers want, John Wiley & Sons.
  • Paternoster, N., Giardino, C., Unterkalmsteiner, M., Gorschek, T. & Abrahamsson, P. (2014), Software development in startup companies: A systematic mapping study, Information and Software Technology 56(10), 1200–1218.
  • Jansen, S. & van Cann, R. (2012), Software business start-up memories: Key decisions in success stories, Springer.
  • Dorst, K. (2011), ‘The core of ’design thinking’ and its application’, Design studies 32(6), 521–532.
  • Blank, S. & Dorf, B. (2012), The startup owner’s manual: The step-by-step guide for building a great company, K&S Ranch.
  • Lucassen, G., Dalpiaz, F., van der Werf, J. M. E., & Brinkkemper, S. (2016). Improving agile requirements: the quality user story framework and tool. Requirements Engineering, 21(3), 383-403.
  • Farshidi, S., Jansen, S., de Jong, R., & Brinkkemper, S. (2018), A decision support system for software technology selection, Journal of Decision Systems.

Responsible ICT

Responsible ICT focuses on the social and environmental impacts of ICT, both positive and negative, and introduces ethical reflection on all stages of the ICT lifecycle. Humanity is facing major challenges in ensuring world-wide peace, managing the global exchange of people and goods without health risks, reducing poverty while increasing equity and inclusion, minimizing climate change, and redesigning the socio-economic system so that it contributes to a good life for all within planetary boundaries.

ICT is often included as a key ingredient in recipes proposed as solutions to these challenges. The course covers theories and skills that allow students to delve into the interrelation between ICT, society, and the natural environment, to critically assess the roles ICT plays at both the organizational and systemic levels, its capability to be part of the solution, and the trade-offs it entails.

Technologies for learning

The list of topics we will research includes but is not limited to:

  • student modelling technologies for representing the knowledge, metacognitive skills and strategies, and affective state of a student working with an adaptive education system
  • technologies for adaptive learning support, such as intelligent tutoring systems and adaptive educational hypermedia
  • technologies for supporting collaborative, group-based, and social learning scenarios
  • technologies exploiting big data sets in education for empowering students and teachers, as well as improving the behavior of intelligent educational software
  • modern HCI methods used in education for creating effective learning interfaces, including dialog systems, learning companions, serious games, and virtual reality

This academic field is highly interdisciplinary. Hence, the background necessary to study and work with these technologies can be very diverse: knowledge of data mining and machine learning, parsing and rewriting, artificial intelligence, and HCI are all useful. The course material as well as the topics for the group project will be adjusted to the background of the students in order to use the cumulative expertise of the class as much as possible.

Course form
Lectures and seminars.

Literature
All papers are listed on Blackboard.

Software ecosystems

Software vendors no longer function as independent units, where all customers are end-users, where there are no suppliers, and where all software is built in-house. Instead, software vendors have become networked: they depend on (communities of) service and software component suppliers, value-added resellers, and pro-active customers who build and share customizations. Software vendors now have to consider their strategic role in the software ecosystem to survive. With their role in the software ecosystem in mind, software vendors can become more successful by opening up their business, devising new business models, forging long-lasting relationships with partnership networks, and overcoming the technical and social challenges that are part of these innovations.

A software ecosystem is a set of actors functioning as a unit and interacting with a shared market for software and services, together with the relationships among them. These relationships are frequently underpinned by a common technological platform or market and operate through the exchange of information, resources, and artifacts. Several challenges lie in the research area of software ecosystems. To begin with, insightful and scalable modeling techniques for software ecosystems currently do not exist. Furthermore, methods are required that enable software vendors to transform their legacy architectures to accommodate reusability of internal common artifacts and external components and services. Finally, methods are required that support software vendors in choosing survival strategies in software ecosystems.

Software ecosystems introduce many new research challenges on both a technical and a business level. In a traditionally closed market, software vendors are now facing the challenge of opening up their product interfaces, their knowledge bases, and in some cases even their software. Software vendors must decide how open their products and interfaces are, new business models need to be developed, and new standards for component and service reuse are required. These challenges have been identified but have hardly been picked up by the research community.

In this seminar, topics on software ecosystems (SECOs) are discussed. These topics can range from consultancy-oriented for product software companies to highly technical for software engineers. The course is largely dependent on student participation. Some example topics are:

  • Virtualized software enterprises
  • Open source software ecosystems
  • Market-specific domain engineering
  • Software ecosystem orchestration
  • Software development communities
  • Software product lines
  • Software product management
  • Publishing APIs
  • API development
  • Formal modeling of business models
  • Architectural implications of reusability
  • Keystone and niche player survival strategy
  • Software ecosystem creation
  • Economic impact of software ecosystems
  • Communities of practice and software reuse
  • Product software and software licensing
  • Software business models
  • Software ecosystem practices and experience
  • Software ecosystem modeling
  • API related topics: design, development, marketing
  • Software ecosystem models
  • A software ecosystem analysis method
  • Strategic advice for software vendors
  • API compatibility over subsequent releases

Data analysis and visualisation

What puts former criminals on the right track? How can we prevent heart disease? Can Twitter predict election outcomes? What does a violent brain look like? How many social classes does 21st century society have? Are hospitals spending too much on health care, or too little? When is a series of spikes in hundreds of website logfiles an operational problem?

Data analysis is the art and science of tackling questions like these by looking at data. Just as cartographers make maps to see what a country looks like, data analysts explore the hidden structures of data by creating informative pictures and summarizing relationships among variables. And just as doctors diagnose sick patients and advise healthy ones on how to stay healthy, data analysts predict important events and variables so we can act on this knowledge. Methods from statistics, machine learning, and data mining play an important part in this process, as well as visualizations that allow the analyst and other humans to better understand what we can conclude from the available facts.

During this course, participants will actively learn how to apply the main statistical methods in data analysis and how to use machine learning algorithms and visualizing techniques. The course has a strongly practical, hands-on focus: rather than focusing on the mathematics and background of the discussed techniques, you will gain hands-on experience in using them on real data during the course and interpreting the results.
This course covers both classical and modern topics in data analysis and visualization:

  1. Exploratory data analysis (EDA);
  2. Supervised machine learning and statistical learning;
  3. Unsupervised learning and data mining techniques;
  4. Visualization (throughout the course).
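As an illustration of the hands-on workflow sketched above, here is a minimal Python sketch (not course material) that combines a bit of exploratory analysis, a supervised model, and a simple visualization. It assumes pandas, scikit-learn, and matplotlib; the file name and column names are hypothetical placeholders.

```python
# Minimal sketch: explore a data set, fit a supervised model, visualise the result.
# "patients.csv" and its columns are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("patients.csv")           # hypothetical data set
print(df.describe())                       # exploratory data analysis: simple summaries

X = df[["age", "blood_pressure"]]          # hypothetical predictor columns
y = df["heart_disease"]                    # hypothetical binary outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Visualisation: scatter the two predictors, coloured by the outcome.
plt.scatter(df["age"], df["blood_pressure"], c=y)
plt.xlabel("age")
plt.ylabel("blood pressure")
plt.show()
```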

IMPORTANT ENROLLMENT INFORMATION - please read before registering

This course is part of the GSNS Profile Applied Data Science. If you have chosen to register for the profile, you will receive preference when registering for this course. To register for this course as a profile student and receive placement preference, you must use the special registration form before the deadline in September. Please see here for more information about the procedure: https://students.uu.nl/en/science/academics/applied-data-science. Any questions regarding the ADS profile and registration for the profile should be directed to the ADS profile coordinator, not the course coordinator.

Other interested (non-profile) students, from any faculty, are also welcome in this course, provided you are a UU master's student who meets the prerequisites. You can enroll via Osiris at the beginning of October; before this date, Osiris registration is closed. See here for the exact dates: https://students.uu.nl/en/science/academics/applied-data-science. Note that for non-profile students there is no guaranteed placement in the course.

Digital transformation and architecture

What kind of business processes are important in our organization, and how can we support these processes using IT? What is the application landscape of our organization, and do we need to update it to improve the speed and flexibility with which we can do business? How can we manage our technical infrastructure to improve access to information for our employees while at the same time minimizing security risks?

In this course, you will learn the techniques that allow you to answer these and other questions. The core subject of the course is the modelling and analysis of enterprise-wide architectures (i.e., business process architectures, information architectures, application architectures, technical architectures, combinations of architectures, and so on). In addition, we will discuss related topics such as risk management and business process modelling.

Software architecture

The course on software architecture deals with the concepts and best practices of software architecture. The focus is on theories explaining the structure of software systems and how a system's elements are meant to interact given the imposed quality requirements. Topics of the course are:

  • Architecture influence cycles and contexts
  • Technical relations, development life cycle, business profile, and the architect’s professional practices
  • Quality attributes: availability, modifiability, performance, security, usability, testability, and interoperability
  • Architecturally significant requirements, and how to determine them
  • Architectural patterns in relation to architectural tactics
  • Architecture in the life cycle, including generate-and-test as a design philosophy; architecture conformance during implementation
  • Architecture and current technologies, such as the cloud, social networks, and mobile devices
  • Architecture competence: what this means both for individuals and organizations

Natural language generation

The taught component of the course will consist of four parts:

I. General Introduction. In the first part of the course you will learn what the different aims of practical and theoretical NLG can be, what the main elements of the standard NLG pipeline are, how NLG systems are built, and how they are evaluated. Template-based and end-to-end systems will be discussed briefly.

II. Practical systems. You will get acquainted with a range of practical applications of NLG; a few will be discussed in detail: candidate applications are medical decision support, knowledge editing, and robo-journalism. Strengths, weaknesses, and opportunities for the practical deployment of these systems will be discussed. If time allows, we will devote attention to multimodal systems, which produce documents in which pictures or diagrams complement a generated text.

III. Module in focus: Referring Expressions Generation. We will zoom in on one part of the standard NLG pipeline, which is responsible for the generation of referring expressions (e.g., as when an NLG system says “the city where you work”, or “the area north of the river Rhine”). We will discuss a range of rule-based algorithms, and some that are based on Machine Learning.
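As a toy illustration of what a rule-based approach to this task can look like, the sketch below follows the incremental, attribute-selection style associated with Dale and Reiter: attributes are added in a fixed preference order as long as they rule out at least one distractor. The entities, attributes, and preference order are invented for illustration; this is not necessarily the specific algorithm treated in the course.

```python
# Toy incremental referring-expression-generation sketch (Dale & Reiter style):
# add attributes, in a fixed preference order, as long as they rule out at
# least one distractor. Entities and attribute values are invented.
def generate_description(target, distractors, preference_order, attributes):
    description = {}
    remaining = set(distractors)
    for attr in preference_order:
        value = attributes[target][attr]
        ruled_out = {d for d in remaining if attributes[d][attr] != value}
        if ruled_out:                      # keep only attributes that discriminate
            description[attr] = value
            remaining -= ruled_out
        if not remaining:                  # all distractors excluded: done
            break
    return description

attributes = {
    "e1": {"type": "city", "location": "north of the Rhine"},
    "e2": {"type": "city", "location": "south of the Rhine"},
    "e3": {"type": "village", "location": "north of the Rhine"},
}
print(generate_description("e1", ["e2", "e3"], ["type", "location"], attributes))
# -> {'type': 'city', 'location': 'north of the Rhine'}
```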

IV. Perspectives on NLG. We will discuss what linguists, philosophers, and other theoreticians have to say about human language production, and how this relates to NLG. We may start with a Gricean approach and continue with the Bayesian-inspired Rational Speech Acts approach. We will ask how accurate and how explanatory existing NLG algorithms are as models of human language production (i.e., human speaking and writing), and what the main open questions for research in this area are.

The core of the course will be presented in lectures. Additionally, students will be asked to read, present, and discuss some key papers and systems which illustrate the issues listed above.

Big data

Big Data is as much a buzz word as an apt description of a real problem: the amount of data generated per day is growing faster than our processing abilities. Hence the need for algorithms and data structures which allow us, e.g., to store, retrieve and analyze vast amounts of widely varied data that streams in at high velocity.

In this course we will limit ourselves to the data mining aspects of the Big Data problem, more specifically to the problem of classification in a Big Data setting. To make algorithms viable for huge amounts of data, they should have low complexity; in fact, it is easy to think of scenarios in which only sublinear algorithms are practical. That is, algorithms that see only a (vanishingly small) part of the data: algorithms that only sample the data.

We start with PAC learning, where we study tight bounds on learning (simple) concepts almost always almost correctly from a sample of the data, both in the clean (no noise) and in the agnostic (allowing noise) case. The concepts we study may appear to allow only for very simple – hence, often weak – classifiers. However, the boosting theorem shows that they can represent whatever can be represented by strong classifiers.
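To give a hedged flavour of the kind of bounds involved (these are textbook results, not necessarily the exact formulations used in the course), the standard sample-complexity bounds for a finite hypothesis class H are:

```latex
% Clean (realizable) case: with probability at least 1 - \delta, any hypothesis
% in H that is consistent with the sample has true error at most \epsilon once
m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)

% Agnostic (noisy) case: with probability at least 1 - \delta, every hypothesis's
% empirical error is within \epsilon of its true error (uniform convergence) once
m \;\ge\; \frac{1}{2\epsilon^{2}}\left(\ln|H| + \ln\frac{2}{\delta}\right)
```

Note the jump from a 1/ε to a 1/ε² dependence once noise is allowed, which is exactly why sampling-based (sublinear) approaches need such bounds to be tight.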

PAC learning algorithms are based on the assumption that a data set represents only one such concept, which obviously isn’t true for almost any real data set. So, next we turn to frequent pattern mining, geared to mine all concepts from a data set. After introducing basic algorithms to compute frequent patterns, we will look at ways to speed them up by sampling using the theoretical concepts from the PAC learning framework.

Science Based Entrepreneurship

In Science Based Entrepreneurship, students integrate knowledge from previous modules. Issues such as the entrepreneurial process (idea generation, opportunity recognition, and opportunity evaluation), the identification of market relations and networks, the understanding of different entrepreneurial cultures, and entrepreneurial concepts are discussed. In addition, students apply the acquired knowledge; this includes the development, evaluation, and presentation of a business plan.

Work format:

  • Lectures, tutorials, work groups, presentations

Assessment:

  • group project (40%), final exam (60%)

Requirements engineering

The course will cover the following topics:

  • The RE process and its activities
  • Standards and tools
  • Agile RE, user stories
  • Requirements elicitation
  • Linguistic aspects of natural language requirements
  • From requirements to architectures
  • Requirements prioritization
  • Maturity assessment
  • (Verification of) formal specifications
  • Release planning
  • Requirements traceability
  • Crowd RE

All information about the course will be made available through Blackboard before the course starts.

To qualify for the retake exam, the grade for the original exam must be at least a 4.

Meaningful (Linked) Data Interaction

Seminar Multimedia Discourse Interaction

Multimedia Discourse Interaction addresses the complexity of interacting with information present in different information carriers, such as language (written or spoken), image, video, music and (scientific) data. The goal is to convey information to a user in an effective way.

Knowledge of cognitive capabilities and limitations, such as information processing speeds, can be used to inform the design of useful and efficient ways of searching, browsing, studying, analysing and communicating information in a way that is appropriate to a user's task, knowledge and skills. Subsequently, the fragments of relevant information that are selected from multiple sources must be combined for meaningful presentation to the user. Models and theories exist, for example in artificial intelligence, but also in the fields of film theory and computational linguistics, that describe communication structures, such as narratives or arguments. These can be used to inform the process of selecting and assembling specific media fragments or selections of data into a presentation appropriate to an end‐user's information needs.

Information presentation consists of combining atomic pieces of information into some communication structure that helps viewers understand the relationships between the pieces. For example, in text, multiple words are strung together according to established structures, namely grammatically correct sentences. Similarly, a media fragment, for example a film shot, represents an atom of meaning. Fragments can be combined into a communication structure meaningful to the viewer; this is precisely the task that a film director carries out. Individual communication structures (for example, ones that relate the different positions of an argument) have been modelled in the literature for specific domains (for example, the utility of war). When these are implemented and used to present video fragments to a human viewer, the video sequence is perceived as conveying a coherent argument and discourse.

The seminar explores literature from diverse subfields, including artificial intelligence, semantic web, multimedia and document engineering, providing a range of perspectives on the challenges.

Course form
This course is set up as a seminar. It challenges the participants to acquire and disseminate knowledge about a complex subject in an interactive way. The moderators make a pre-selection of relevant research papers and web references. Students are expected to supplement these with their own literature search. They are expected to take the lead on proposing, preparing and presenting projects. Participants will work in groups of 2 on a joint project. Group meetings are mandatory.

Exam Form

  • Attendance of meetings is obligatory
  • Individual: Oral presentations of various topics
  • Group: Report on project that also details the individual contributions

Process mining

There is no content available for this course.

Knowledge management

Knowledge management is about organizing, developing, and using knowledge in such a way that it directly contributes to the competitive edge of a company. In the Knowledge Management course we will study the main themes in the field, such as 'KM models', 'knowledge management strategy', 'communities of practice and knowledge networks', 'knowledge discovery', 'knowledge management systems', and 'intellectual capital'.
For a long time, companies relied on the production factors labor, capital, and (raw) materials, but today the main production factor is knowledge (P. Drucker). Organizations such as corporate enterprises, non-profits, educational institutions, and governmental agencies face the continual struggle to transform vast amounts of data, information, and content into usable and reusable knowledge. Globalization and technological developments force organizations into a continuous process of change and adaptation.
Alvin Toffler and Peter Drucker already noticed the consequences in the 1980s. They mention the rise of the information-based or knowledge-based organization. This new type of organization mainly consists of so-called 'knowledge workers', who largely depend on knowledge to do their work. Knowledge workers work rather autonomously; hence a different organizational structure is required, typically with fewer management layers. The growing awareness of knowledge as a distinct factor of production and the need for a new management approach have led to a new field of study and practice: knowledge management.
Another driver has been the development of so-called 'knowledge systems'. However, the results of implementing such systems are not always as expected: systems are not always aligned with work practices, people need to know how to trust and interpret the information provided, and providing information or sharing knowledge is not automatically part of everybody's job routine.

Data intensive systems

Nowadays, we are producing data at rates that we have never seen before, creating datasets characterized by extreme Volume, Variety and Velocity.
Unfortunately, traditional data management technologies have proven limited in managing data with these characteristics. This led to the term Big Data as a way to refer to this kind of data and to the new technologies that have been developed to cope with such datasets.
This course is an introduction to Big Data management technologies. It aims at providing an understanding of the fundamental principles upon which Big Data systems have been built, and a good knowledge of the generic features that each such system offers.
The course also covers the use of such tools in data preparation, i.e., all the tasks that data practitioners need to perform before the data is ready for analytics.
Some of the topics touched on in the course include, but are not limited to: advanced SQL and data consistency, Big Data systems (MapReduce, HDFS, Spark), heterogeneous data integration (mappings, data cleaning), data imputation, NoSQL databases (graph databases, column stores), stream processing, Pig Latin, and graph analytics at large scale.
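To make the programming model behind systems such as Hadoop MapReduce and Spark concrete, here is a minimal, hedged PySpark sketch of the classic word count; the input path is a hypothetical placeholder and the example assumes a working Spark installation. It is an illustration, not course material.

```python
# Minimal PySpark word count, illustrating the MapReduce-style programming model.
# The input path is a hypothetical placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount").getOrCreate()
lines = spark.sparkContext.textFile("hdfs:///data/docs/*.txt")  # hypothetical path

counts = (lines.flatMap(lambda line: line.split())   # "map": emit one record per word
               .map(lambda word: (word, 1))          # key-value pairs (word, 1)
               .reduceByKey(lambda a, b: a + b))     # "reduce": sum counts per word

for word, count in counts.take(10):                  # inspect a few results
    print(word, count)
spark.stop()
```

The flatMap/map/reduceByKey chain mirrors the map and reduce phases of the Big Data systems listed above.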

The course is fundamental for modern data science students, since it provides them with the required knowledge of the tools available for achieving their goals.

Course form
In-class lectures.
Attending lectures may not be mandatory, yet students are responsible for all announcements and course material discussed in class; thus, class participation is expected and encouraged.
The lectures consist of presentations of the theories on which Big Data technologies are based and of specific systems and technologies.

Literature
The course will follow chapters from books on the different tools. An indicative list:
- Graph Databases
- Seven Databases in Seven Weeks
- Mining Massive Datasets
- Learning Spark
- Designing Data Intensive Applications