Speakers and listeners have (unconscious) knowledge of the sound regularities of their native language, including its metrical structure: where do stressed syllables occur in words? In many languages, stress falls near the edges of words: on the initial, final, or prefinal syllable. It has been hypothesized that this "delimitative" property of word stress assists listeners when they parse speech into words. In an initial-stress language, for example, every stressed syllable can safely be taken to mark the beginning of a new word. Experimental evidence for the "Metrical Segmentation Hypothesis" comes mainly from initial-stress languages; metrical segmentation in non-initial-stress languages has hardly been investigated. Hence, it is largely unknown to what extent the metrical knowledge that listeners use when parsing speech is language-specific: do listeners of non-initial-stress languages use different cues, tailored to the metrical structure of their native language? This project aims to answer that question by adopting a cross-linguistic perspective: it investigates a much wider sample of languages than has been studied thus far. Moreover, it adds an acquisition perspective on parsing, investigating the origins of language-specific metrical structure in first and second language acquisition. Combining a cross-linguistic approach with acquisition increases the chances of learning how fundamental properties of human language are rooted in language processing.
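The segmentation strategy described above can be illustrated with a toy sketch. The apostrophe-as-stress-mark convention and the example phrase are invented for illustration; this is not the project's experimental procedure, only a minimal rendering of the "stressed syllable = word onset" heuristic for an initial-stress language.

```python
# Toy illustration of the Metrical Segmentation Hypothesis for an
# initial-stress language: every stressed syllable (marked here with a
# leading apostrophe, an invented convention) opens a new word.

def segment_by_stress(syllables):
    """Group a flat stream of syllables into words, starting a new
    word at every stressed syllable."""
    words = []
    for syl in syllables:
        if syl.startswith("'") or not words:
            words.append([syl])        # stressed: open a new word
        else:
            words[-1].append(syl)      # unstressed: attach to current word
    return ["".join(w).replace("'", "") for w in words]

# "'fol.low the 'yel.low 'road" as a syllable stream:
stream = ["'fol", "low", "the", "'yel", "low", "'road"]
print(segment_by_stress(stream))  # → ['followthe', 'yellow', 'road']
```

Note that the unstressed function word "the" is mis-attached to the preceding word: the heuristic only marks word beginnings, so weak syllables between stresses are ambiguous, which is precisely why additional, possibly language-specific cues are of interest.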
PhD project "Development of emotion recognition in relation to linguistic development". This project investigates the relation between infants’ affective development (recognition of vocal and facial emotion) and linguistic development (in particular, vocabulary growth and language processing ability), as moderated by parent–child interaction. It addresses three questions: Q1: How does parental interaction style influence infants’ recognition of facial emotion? Q2: How does parental interaction style influence infants’ language development? Q3: How do infants’ differential responses to facial and vocal emotion predict language development? The project is conducted at the Utrecht Institute of Linguistics OTS, Utrecht University, the Netherlands. PhD student: Anika van der Klis. Promotor: Rene Kager. Co-promotor: Frans Adriaans.
Postdoc project "Early predictors for phonological development: babbles and birds". Postdoc: Sita ter Haar.
The project aims to facilitate collaboration between the Chinese Academy of Social Sciences (CASS) and the Utrecht Institute of Linguistics (UiL OTS). CASS and UiL OTS have collaborated, and continue to collaborate, on topics such as lexical tone perception by native versus non-native infants and adults, the acquisition of lexical tones, and the cross-linguistic perception and acquisition of stress. The main focus is how prosody perception develops in infancy.
How do we recognize words in speech? How do we store words and represent their sound shapes in the mental lexicon? How do we cope with conflicting information from two languages, native and non-native?
These are important questions about the human language faculty, which remain essentially unanswered in spite of many recent discoveries. This programme aims to clarify how speakers use their knowledge of sound structure (‘phonotactics’) for the segmentation of connected speech and for vocabulary acquisition. We crucially involve second language acquisition, because there the language faculty is challenged by often conflicting knowledge from two languages, native and target. It is known that native-language phonotactic knowledge affects speech processing and the long-term storage of words in the target language. It is also known that bilinguals use phonotactic knowledge of both languages in word recognition. However, no earlier research has characterized the phonotactic knowledge underlying a second language learner’s growing ability to store words and process speech, nor how that knowledge is acquired.
The hypothesis is that phonotactic knowledge which supports speech processing and word learning is represented outside the mental lexicon, by a set of hierarchically ranked constraints. Second language learners start from a copy of their native language constraint set, and acquire target language phonotactic knowledge by adding and re-ranking constraints, using generalized shapes of words in their developing vocabularies and feedback from successful segmentation of continuous speech. The programme features four innovative aspects. First, it models the unconscious phonotactic knowledge underlying speech segmentation by hierarchically ranked constraints. Second, it features a constraint-based model of phonotactic learning, to be developed by testing it against natural language and artificial language learning by humans and machines. Third, it monitors the second language development of phonotactic knowledge from its initial state until its final state (a bilingual state). Finally, it bridges insights and combines methods from three disciplines: psycholinguistics, language acquisition, and learnability theory.