Research

Language in interaction

At the DCCN, I am currently working on a project on alignment, or coordination, between two interlocutors in a dialogue. Using fMRI, I will investigate whether neural representations converge after participants interact with each other.

At the Max Planck Institute for Psycholinguistics in Nijmegen, I worked within the Interactional Foundations of Language project, focusing on turn-taking and speech act processing in interactive settings, using experimental methods, predominantly EEG. One project investigated the neural signatures of processes involved in turn-taking, using an interactive EEG quiz paradigm. Another project employed different behavioral methods to investigate the role of prosodic factors in turn-taking. In a third project, I showed with EEG that the length of the gap before a speaker takes a turn is meaningful to listeners.

Common ground

In a project in Glasgow, funded by a Rubicon grant, I worked on listeners’ use of common ground. In a conversation, listeners need to take into account which knowledge is shared between speaker and listener, i.e., which knowledge is in common ground. How quickly, and under which circumstances, listeners use common ground information is still under debate. I used a novel experimental paradigm with MEG, applying brain imaging methods in this research domain for the first time.

Prosody

Prosody, or the way in which language is spoken, can be an important source of information for understanding language. During my PhD at the Donders Institute in Nijmegen, I studied how different prosodic devices play a role in language understanding, using event-related brain potentials (ERPs). I investigated the effect of prosody on how listeners group the words in a sentence. I found that listeners use a prosodic break (‘pause’) to determine the grouping of words; if no break is present, the grouping often depends on other factors. A pitch accent following a break indicates an even stronger grouping, sometimes even leading listeners to ignore grammatical rules. In addition, I investigated listeners’ processing of pitch accents that indicate a contrast (as in “the blue ball, the RED ball”). It turned out that listeners experience processing problems when an expected accent (like the one on RED) is missing, but not when they encounter a superfluous accent.