How do babies learn their native language? What makes a text difficult or easy to read? How does our brain process spoken and written language, and what does this tell us about the language system?

To answer these questions, we conduct about fifty experiments each year in the linguistics lab at the Institute for Language Sciences, testing about two thousand adult subjects and about six hundred infants. In addition, the linguistics lab supports fieldwork by our researchers, for example in schools.

We make use of a wide range of experimental techniques. In this video, the manager of our Babylab gives a tour of all the labs at the Institute for Language Sciences.

Research with children

Babylab. Photo: Ed van Rijswijk
Baby eye-tracking lab

In the Babylab, we conduct research on language acquisition in babies and children between 4 and about 36 months of age. For instance, we investigate how children learn language, how they acquire two (or more) languages simultaneously, and what may be possible risk factors for dyslexia.

To do this, we employ a variety of methods in each of our labs:

  • Babylab: head turn preference
  • Baby eye-tracking lab: eye-tracking (general), visual fixation, visual preference
  • Baby EEG lab: EEG research

Do you have a baby or young child, and would you like to participate in our research? Then check out the Babylab website (in Dutch):

Research with adults

Phonetics lab

In addition to research with babies and young children, we also conduct reading, listening, and speech experiments with adult subjects. Our goal is to investigate how people understand and produce spoken and written language, and to do so we use the following methods, among others:

Artificial grammar learning

In artificial grammar learning, participants are presented with an artificial language that has been created according to certain rules: an artificial grammar. A sentence from such an artificial language might be “tep wadim lut”. In this learning phase, participants need only listen. It is followed by a testing phase in which participants hear sentences built from the same artificial words they have just heard; however, these sentences do not always conform to the rules of the artificial language. The participant indicates whether each sentence belongs to the newly learned artificial language or not. This is how we find out which language patterns are easy to learn and which are not.
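The test-phase judgment can be illustrated as a simple rule check. Note that the word categories and the A-B-C word-order rule below are invented for this sketch; they are not the lab's actual materials.

```python
# Illustrative artificial grammar: sentences must be one category-A word,
# then one category-B word, then one category-C word (rule invented here).
A = ["tep", "sot"]      # hypothetical category-A words
B = ["wadim", "kicey"]  # hypothetical category-B words
C = ["lut", "jic"]      # hypothetical category-C words

def is_grammatical(sentence):
    """Check whether a sentence conforms to the (hypothetical) A-B-C rule."""
    words = sentence.split()
    return len(words) == 3 and words[0] in A and words[1] in B and words[2] in C

print(is_grammatical("tep wadim lut"))  # conforms to the rule
print(is_grammatical("lut wadim tep"))  # same words, but violates the rule
```

In a real experiment, of course, it is the participant rather than a rule checker who makes this judgment; comparing their answers to the rule is what reveals how learnable the pattern is.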

Reading research (self-paced reading, eye-tracking)

  • In self-paced reading, participants read a text that appears on screen. The text is broken up into words or phrases, and the participant presses a button to reveal the next piece of text at their own pace. From the speed at which certain language constructions are read, we can deduce which constructions are more difficult for our brains to process, and which are easier.
  • In reading research with eye-tracking, a text appears on the screen in its entirety, and we measure how long participants look at each word or section of text. Eye-tracking also allows us to see if, and how often, a participant looks back in the text, measuring small eye movements down to two milliseconds in duration.
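The timing logic behind self-paced reading can be sketched in a few lines: each button press reveals the next word, so the interval between two consecutive presses is the reading time for the word shown in between. The timestamps below are invented for illustration.

```python
# Sketch of how per-word reading times are derived in self-paced reading.
def reading_times(keypress_times):
    """Differences between consecutive keypress timestamps (in seconds)."""
    return [b - a for a, b in zip(keypress_times, keypress_times[1:])]

# Invented timestamps: the moment (in seconds) of each button press.
presses = [0.00, 0.35, 0.52, 1.10, 1.41]
print(reading_times(presses))  # the third word took longest to read
```

A construction that reliably produces longer intervals across many participants is taken to be harder to process.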

Visual world paradigm (eye-tracking)

In the visual world paradigm, participants are shown pictures while hearing short stories that are related to the pictures. Hearing the spoken text influences what exactly the participant looks at in the picture. With this technique we can learn a lot about how word order (syntax), meaning (semantics), and context (pragmatics) influence language processing.
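A common way to analyse visual world data is to compute, for each picture, the proportion of eye-tracking samples in which the participant was looking at it while hearing the sentence. A minimal sketch, with invented sample data and picture names:

```python
from collections import Counter

def fixation_proportions(samples):
    """Proportion of eye-tracking samples spent on each picture."""
    counts = Counter(samples)
    total = len(samples)
    return {picture: n / total for picture, n in counts.items()}

# Invented data: which picture was fixated at each successive sample.
samples = ["dog", "dog", "cat", "dog", "ball", "dog"]
print(fixation_proportions(samples))
```

Shifts in these proportions over time, relative to the unfolding speech, show how quickly listeners use syntactic, semantic, and pragmatic cues.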


EEG

Electroencephalography (EEG) measures the electrical activity of the brain ("brain waves"). The participant is given a kind of perforated swimming cap, into which electrodes and conductive gel are inserted. We can use this technique to very precisely measure when and how the brain responds to (anomalies in) the meaning or grammatical form of spoken or written language.
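The core of this kind of analysis is averaging: EEG epochs time-locked to the same kind of stimulus are averaged across trials, so that the consistent brain response (the event-related potential, or ERP) stands out from background activity. A minimal sketch with invented voltage values:

```python
def average_erp(epochs):
    """Average several stimulus-locked EEG epochs sample by sample."""
    n = len(epochs)
    return [sum(samples) / n for samples in zip(*epochs)]

# Invented data: voltage (microvolts) at three time points, for three trials.
epochs = [
    [0.1, 0.4, 0.2],  # trial 1
    [0.3, 0.6, 0.0],  # trial 2
    [0.2, 0.5, 0.1],  # trial 3
]
print(average_erp(epochs))
```

Comparing the averaged waveform for anomalous sentences against that for normal sentences shows when, to the millisecond, the brain registers the anomaly.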


fEMG

fEMG stands for "facial electromyography". Electrodes are attached to the face to measure the activity of certain facial muscles, typically those involved in frowning or smiling. With the help of this technique, reactions to spoken text can be measured.

Are you interested in participating? Please sign up for our participant database (page in Dutch):

Sign up as a participant

Information for researchers

Are you a researcher at Utrecht University or at the Landelijke Onderzoekschool Taalwetenschap (LOT) and would you like to make use of our labs? Then check out the website for lab users:

Introductory guide for lab users

Information for students

Would you like to do an internship or thesis research at the lab?

  • For research with infants, you can contact the manager of the Babylab, Desiree Capel. She knows which research projects are running and for which experiments internship positions are available.
  • For research with adults, you need a supervisor affiliated with the Institute for Language Sciences; have your supervisor contact lab manager Iris Mulders.