Understanding spoken language involves a complex set of time-sensitive neurobiological mechanisms that transform the auditory input into a structured and meaningful interpretation. Little is known about the nature of these computations although they are essential for human language function.
Thus, the goal of our research is to understand the nature of the intermediate processes involved in the transition from early perceptual processing through different representational states to the development of a meaningfully structured utterance, the dynamic spatio-temporal relationship between these processes, and their evolution over time.
To achieve this we combine advanced techniques from neuroimaging, multivariate statistics and computational linguistics to probe directly the dynamic patterns of neural activity across the brain.
We carry out combined electroencephalography and magnetoencephalography (EMEG) imaging to capture the real-time electrophysiological activity of the brain, and use Representational Similarity Analysis (RSA) and related multivariate techniques to probe the different types of neural computation that support these dynamic processes of incremental interpretation. Computational linguistic analyses of language corpora allow us to build quantifiable models of different dimensions of language interpretation, from phonetics and phonology to argument structure and anaphora, and to test for their presence, using RSA, as the utterance unfolds in real time.
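The core RSA computation described above can be sketched as follows. This is a minimal illustration, not the project's actual analysis pipeline: the stimulus features, neural patterns, and variable names are all invented, and real analyses would use many more stimuli and sliding time windows over the EMEG signal.

```python
# Minimal RSA sketch: build a model representational dissimilarity
# matrix (RDM) and a neural RDM over the same stimuli, then correlate
# them. All data here are synthetic placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical feature vectors for 8 stimuli (e.g. phonetic features).
model_features = rng.normal(size=(8, 5))
# Hypothetical neural response patterns for the same 8 stimuli
# (e.g. EMEG sensor data within one time window).
neural_patterns = rng.normal(size=(8, 20))

# RDMs as condensed upper triangles (28 pairwise dissimilarities).
model_rdm = pdist(model_features, metric="correlation")
neural_rdm = pdist(neural_patterns, metric="correlation")

# Second-order analysis: rank-correlate the two RDMs.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}, p={p:.3f}")
```

In a time-resolved version, the neural RDM is recomputed at each time window, yielding a trajectory of model fit as the utterance unfolds.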
We have initially focused on the processing of spoken words, which involves complex processes that enable listeners to transform the auditory input into a meaningful interpretation. This transition occurs on millisecond timescales, with remarkable speed and accuracy and without any awareness of the complex computations involved. Our research has revealed the real-time neural dynamics of these processes by collecting data about listeners' brain activity as they hear spoken words. Using novel statistical models of different aspects of the recognition process, we have been able to locate directly which parts of the brain are involved in accessing the stored form and meaning of words, and how the competition between different word candidates is resolved neurally in real time. This gives us a uniquely differentiated picture of the neural substrate for the first 500 ms of spoken word recognition. This study, published in the Journal of Neuroscience (doi: 10.1523/JNEUROSCI.2858-16.2016), represents a significant advance in understanding the neural processes by which speech activates word meaning, and we are now extending this research to determine the neural mechanisms that support the transformation from speech input to meaning, using neural oscillations and brain connectivity.
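The competition between word candidates described above can be illustrated with a toy cohort model: as each segment of the input arrives, the set of consistent candidates shrinks, and the uncertainty (entropy) over them falls. The lexicon, frequencies, and function name below are invented for illustration and are not the measures used in the published study.

```python
# Toy cohort-competition sketch: candidates consistent with the input
# so far, and the entropy over them. Lexicon and frequency counts are
# invented placeholders.
import math

lexicon = {"captain": 50, "captive": 20, "capital": 40, "cattle": 30}

def cohort_entropy(prefix):
    """Return the cohort matching `prefix` and its entropy in bits."""
    cohort = {w: f for w, f in lexicon.items() if w.startswith(prefix)}
    total = sum(cohort.values())
    probs = [f / total for f in cohort.values()]
    return cohort, -sum(p * math.log2(p) for p in probs)

for prefix in ["c", "ca", "cap", "capt", "capta"]:
    cohort, h = cohort_entropy(prefix)
    print(f"{prefix!r}: {sorted(cohort)} entropy={h:.2f} bits")
```

Measures of this kind, derived per-segment, can be regressed against the millisecond-resolved EMEG signal to track how competition is resolved over time.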
In further research we are investigating the neurocognitive processes which combine the meanings of individual words into larger phrases (e.g. 'yellow banana'). Once again, we combined electroencephalography and MEG (EMEG), enabling good spatiotemporal resolution of the brain signal. The data have been analysed using multivariate statistical methods similar to those used for the single words. Preliminary results suggest that the presence of the context word ('yellow') has unexpectedly strong predictive effects on the lexical and semantic competition related to the second word ('banana'), potentially enabling very early access to the second word's semantic competitors. This paper is now under revision.
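One simple way to picture how a context word reshapes the semantic competitor space is vector composition, sketched below. This is only an illustration under invented assumptions: the three-dimensional vectors, the vocabulary, and additive composition are placeholders, not the representational models used in the study itself.

```python
# Toy sketch of phrase meaning by additive vector composition, and of
# how the composed phrase shifts which words count as close semantic
# competitors. Vectors are invented placeholders.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical semantic vectors.
vectors = {
    "yellow": np.array([0.9, 0.1, 0.0]),
    "banana": np.array([0.2, 0.8, 0.3]),
    "lemon":  np.array([0.7, 0.5, 0.2]),
    "grape":  np.array([0.1, 0.7, 0.6]),
}

# Compose the phrase meaning additively.
phrase = vectors["yellow"] + vectors["banana"]

# Competitors that share the context word's features rank higher.
for w in ("lemon", "grape"):
    print(w, round(cosine(phrase, vectors[w]), 3))
```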
We have also been investigating the computations involved in the incremental integration of words within spoken sentences in two further studies. Both are EMEG experiments. One study was designed to investigate the way that the brain incrementally develops cognitive representations and computations for multiple linguistic aspects during speech comprehension. We are exploring how verb information constrains the integration of a subsequent complement phrase, using state-of-the-art statistical models of verb syntactic subcategorization and semantic selection preferences derived from language corpora. Relating the syntactic and semantic measures to the spatio-temporal dynamics of brain activity is revealing that verb-constrained syntactic computations recruit a left fronto-temporal language network, whereas verb semantic computations elicit activity in a more distributed bilateral semantic network. This suggests that the brain generates incremental syntactic and semantic predictions at the verb in parallel, via pre-activation of a likely syntactic frame. This project is ongoing.
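Corpus-derived subcategorization preferences of the kind used above can be sketched as conditional probabilities of a syntactic frame given a verb, estimated by maximum likelihood from observation counts. The verbs, frame labels, and counts below are invented for illustration; real estimates would come from large parsed corpora.

```python
# Toy sketch of verb subcategorization preferences: P(frame | verb)
# by maximum likelihood over (verb, frame) counts. Counts are invented.
from collections import defaultdict

# Hypothetical (verb, frame) observation counts from a parsed corpus.
counts = {
    ("eat", "NP"): 700, ("eat", "intrans"): 300,
    ("think", "S-comp"): 850, ("think", "NP"): 150,
}

# Total observations per verb.
totals = defaultdict(int)
for (verb, _), n in counts.items():
    totals[verb] += n

# Relative-frequency estimate of P(frame | verb).
subcat = {(v, f): n / totals[v] for (v, f), n in counts.items()}
print(subcat[("eat", "NP")])        # 0.7
print(subcat[("think", "S-comp")])  # 0.85
```

Each verb's distribution over frames then serves as a graded predictor of how strongly it pre-activates a given syntactic frame for the upcoming complement phrase.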
A second study investigates how lexically driven expectations and syntactic complexity are maintained and subsequently integrated when there are discontinuous dependencies in sentences. To address these issues we manipulated the complex long-distance grammatical structure-building that can occur in natural sentences, using embedded relative clauses. Our Representational Similarity Analyses, combined with computational modelling, show how lexical access of the first verb in the sentence is influenced by its likelihood of taking a direct object when it is encountered, and subsequently by the surprisal effect observed on encountering the main verb, which comes later in the sentence. This project is ongoing.
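The surprisal measure referred to above is simply the negative log probability of a word given its context. The sketch below shows the computation with invented conditional probabilities; in the actual study such values would be derived from corpus-based language models.

```python
# Surprisal sketch: surprisal(w) = -log2 P(w | context), in bits.
# The probabilities below are invented placeholders.
import math

# Hypothetical P(main verb | preceding sentence material) for a
# well-predicted versus a poorly-predicted continuation.
p_verb_given_context = {
    "expected_verb": 0.40,
    "unexpected_verb": 0.02,
}

for verb, p in p_verb_given_context.items():
    surprisal = -math.log2(p)
    print(f"{verb}: surprisal = {surprisal:.2f} bits")
```

A low-probability main verb carries high surprisal, and it is this graded quantity, rather than a categorical violation, that is related to the brain response.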
This project goes beyond the state-of-the-art both in its conceptualisation of how the dynamic richness and complexity of spoken language comprehension in the brain can be studied, and in the methods we have developed and employed to quantify complex linguistic information and relate it to dynamic processes in the brain. In particular, we model phonological-lexical, lexical-syntactic, and lexical-semantic properties of our stimuli using novel and detailed probabilistic models, which we have derived from large-scale gating studies and from computational linguistic resources. A major interest is to exploit new recurrent computational models to develop cumulative models quantifying the different kinds of structure that listeners build online as they hear spoken language.
Such an approach goes beyond the state-of-the-art in studies of the cognitive neuroscience of language, which have typically looked at broader contrasts between particular experimental conditions. For example, our use of detailed, multivariate probabilistic models of verb subcategorization information derived from corpus data allows us to investigate syntactic integration effects using naturalistic stimuli, without requiring sentences with syntactic violations or other kinds of manipulations that do not occur in normal speech comprehension. We anticipate that this fine-grained analysis will enable better detection of language deficits and their remediation.
More info: http://cslb.psychol.cam.ac.uk/.