End-of-Year Symposium

The Leipzig Lectures on Language will finish with a virtual two-day symposium on October 20 and 21, 2021.

The symposium aims to find consensus across the different views presented during the online series. Our programme includes:

  • two keynote lectures by William Matchin and Simona Mancini,
  • four hands-on workshops on advanced methodologies relevant to the combinatorics of language, and
  • a poster session and a more informal gathering for participants to exchange ideas on future research.

We hope that this will foster networking and collaborations, leading to new ideas on how to move the field forward.

 

Timetable & Programme

Participants can download a PDF version of the programme here.

 

Hands-on Sessions

The following four hands-on sessions will be held during the symposium. Participation in each workshop is limited to around 30 people. Note also that two sessions always run in parallel, so each participant can attend at most one workshop per day of the event.

  • Workshop 1 (October 20): Jixing Li (New York University Abu Dhabi, UAE) - "Grammatical predictors for fMRI time-courses during naturalistic listening".
  • Workshop 2 (October 20): Stephan Meylan (Massachusetts Institute of Technology, USA) - "Measuring grammatical productivity with Bayesian inferential methods".
  • Workshop 3 (October 21): Cristiano Chesi and Paolo Canal (Scuola Superiore Studi Pavia IUSS, Italy) - "Tracking processing: computational complexity and eyetracking with Minimalist Grammars".
  • Workshop 4 (October 21): Simon W. Townsend and team (University of Zurich, Switzerland) - "Unpacking animal call combinations: An introduction".

Please find the abstracts for the different workshops in the symposium booklet.

 

Keynote Speakers

William Matchin, University of South Carolina (USA)

Keynote 1 (October 20, 4pm UTC)

Grammatical parallelism in aphasia revisited

The study of aphasia has driven our understanding of the neurological organization of language since the 1800s, leading to the development of the classical model of Wernicke, Lichtheim, and Geschwind, in which Broca’s area primarily supports language production. In the 1970s, novel experimental paradigms revealed apparent syntactic comprehension deficits in people with non-fluent Broca’s aphasia and expressive agrammatism. This led to a widespread movement away from the classical model and towards models of language organization in the brain positing a central syntactic function for Broca’s area. I will present data from several studies of syntactic ability, both in comprehension and production, in people with post-stroke aphasia, showing that damage to the frontal lobe and expressive agrammatism are not associated with syntactic comprehension deficits, contrary to the contemporary received view regarding grammatical parallelism. By contrast, damage to the posterior temporal lobe is associated with both syntactic comprehension and production deficits, a grammatical parallelism consistent with Wernicke's original ideas and the theoretical model developed by Matchin & Hickok (2020).

Re-watch William Matchin's plenary lecture on "Grammatical parallelism in aphasia revisited" on YouTube: https://www.youtube.com/watch?v=CVDsSEWTNIA

Simona Mancini, Basque Center on Cognition, Brain, & Language (Spain)

Keynote 2 (October 21, 12pm UTC)

Feature combinatorics

In spite of their structural diversity, human languages share the basic goal of conveying fundamental coordinates about the world, such as the time and temporal organization of an event, or the gender of, roles of, and relations between the individuals involved, to name a few. During comprehension, these properties, or features, are effortlessly extracted from the linguistic input by readers/listeners, who use them to build relations among words and eventually establish the overarching meaning of a sentence. How are these features handled by the comprehension system? Are they differentiated? And if so, when and how? In this talk I will show how distinct types of features, and the relations they are involved in, are processed, providing eye-tracking, electrophysiological and neuroanatomical evidence for common and feature-specific mechanisms at work at distinct interface levels.

Re-watch Simona Mancini's plenary lecture on "Feature combinatorics" on YouTube: https://www.youtube.com/watch?v=GtmsmHA8wKo

 

Poster Session

Due to the virtual nature of the event, we have opted to split the poster session into two individual sessions in order to accommodate time zone differences. The two sessions will take place at different times (one in the [Leipzig] evening and one in the [Leipzig] morning), as indicated in the programme. We hope this will make it possible for our presenters to attend at least one of the two sessions and share their work with other participants.

The final programme for our poster sessions is part of the symposium booklet, which can be downloaded here.

Poster prize winners

We're happy to announce (in no particular order) the three winners of the Leipzig Lectures on Language poster prize:

Julia Cataldo, Universidade Federal do Rio de Janeiro (Brazil)

"Friend or foe: The morphological kinship between words"

Lexical access allows the immediate understanding and production of words online. Despite being a basic linguistic computation, it is the subject of heated theoretical dispute. This study presents empirical research whose results shed light on the way we access transparent and semantically opaque words (as whole words vs. by affix stripping; Taft & Forster, 1970) and on how they are stored in the mind (morphological vs. semantic routes).

Distributed Morphology (DM; Halle & Marantz, 1993) suggests that there are different lexical access routes, originating from psychologically different processes. Here, we are interested in the access of words that bear a morphological relationship to one another and that once also shared a semantic relation, but have lost it from a synchronic perspective. For instance, liquidação (Brazilian Portuguese for sale) derives diachronically from líquido (liquid), but present-day Brazilian speakers seem to ignore this semantic relationship. This very specific type of morphological and semantic relationship between words has never been tested before in this language.

To evaluate the DM predictions, we ran a priming test with a lexical decision judgment (word/non-word). We compared pairs of synchronically semantically unrelated (but morphologically linked) words, like líquido/liquidação (liquid/sale), with pairs that maintain a transparent compositional relationship, like líquido/liquidificar (liquid/liquefy), and with pairs that maintain only a semantic relationship, like líquido/aquoso (liquid/aqueous).
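
For illustration only, the following is a minimal sketch of how priming effects from such a lexical decision task might be quantified, assuming hypothetical reaction-time data; the file name, condition labels, and simple one-sample t-tests are placeholders, not the authors' actual materials or analysis.

    import pandas as pd
    from scipy import stats

    # Hypothetical trial-level reaction times (ms) from a primed lexical decision task.
    # Expected columns: subject, condition, rt. The file name is a placeholder.
    trials = pd.read_csv("priming_rts.csv")

    # Illustrative condition labels mirroring the design described above:
    #   opaque      -> líquido/liquidação   (morphological link, no synchronic semantic link)
    #   transparent -> líquido/liquidificar (morphological and semantic link)
    #   semantic    -> líquido/aquoso       (semantic link only)
    #   unrelated   -> baseline (unrelated prime)

    # Mean RT per subject and condition, then priming effect = unrelated minus related.
    mean_rt = trials.groupby(["subject", "condition"])["rt"].mean().unstack()
    effects = mean_rt[["opaque", "transparent", "semantic"]].rsub(mean_rt["unrelated"], axis=0)

    # One-sample t-tests against zero: a reliable positive effect indicates facilitation.
    for cond in effects.columns:
        t, p = stats.ttest_1samp(effects[cond].dropna(), 0.0)
        print(f"{cond}: mean priming = {effects[cond].mean():.1f} ms, t = {t:.2f}, p = {p:.3f}")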

The results of a first behavioral pilot test confirmed the DM hypothesis, evidencing i) a decompositional course during processing, regardless of semantic opacity; ii) new entries for words like liquidação (sale) in the mental lexicon; and iii) different psychological processes for the morphological and semantic routes: linguistic composition for the former and joint memory for the latter.

Our next step will be to run an EEG test with the same design (Bozic et al., 2007; Morris et al., 2007). We expect to find wider ERP amplitudes for the semantically opaque conditions and different latencies between stimuli of two different sizes (2 and 3 morphological layers) for both the transparent and opaque morphological conditions, but not for the semantic-only one. These findings would confirm our previous conclusions from the pilot test.

Cas Coopmans, Max Planck Institute for Psycholinguistics (Netherlands)

"Effects of structure and meaning on cortical tracking of linguistic units in continuous speech"

Recent studies have shown that the brain ‘tracks’ the syntactic structure of phrases [1], and that such phrase structure tracking is modulated by the compositional content of these phrases [2]. Following up on this literature, the current EEG study examines to what extent cortical tracking of linguistic structure is modulated by the compositionality of that structure. We measured EEG of 38 participants who listened to naturally produced stimuli in five different conditions, which systematically modulated the amount of linguistic information. We compared sentences (+syntax, +lexical meaning, +composition) to idioms (+syntax, +lexical meaning, ~composition), syntactic prose (+syntax, +lexical meaning, ~composition), jabberwocky (+syntax, –lexical meaning), and word lists (–syntax, +lexical meaning), and included backward versions of sentences and word lists as acoustic controls. Based on manual annotations of all speech recordings, we derived the frequency band corresponding to the presentation rate of phrases (1.1-2.1 Hz). Tracking was quantified through Mutual Information (MI), both between the EEG data and the speech envelope in this frequency band, and between the EEG data and abstract annotations of syntactic structure (i.e., bracket count).
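
To make the tracking measure concrete, here is a minimal sketch of a histogram-based mutual information estimate between a band-limited speech envelope and a single EEG channel, assuming hypothetical signals at a shared sampling rate; the 1.1-2.1 Hz band follows the phrase rate given above, while the sampling rate, random placeholder signals, and binning scheme are illustrative assumptions rather than the study's actual pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from sklearn.metrics import mutual_info_score

    def bandpass(x, lo, hi, fs, order=4):
        # Zero-phase band-pass filter between lo and hi Hz.
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def mi_bits(x, y, bins=16):
        # Histogram-based mutual information (in bits) between two 1-D signals.
        x_d = np.digitize(x, np.histogram_bin_edges(x, bins))
        y_d = np.digitize(y, np.histogram_bin_edges(y, bins))
        return mutual_info_score(x_d, y_d) / np.log(2)

    fs = 200                                            # placeholder sampling rate (Hz)
    n = 60 * fs                                         # one minute of hypothetical data
    speech_env = np.abs(hilbert(np.random.randn(n)))    # stand-in for a real speech envelope
    eeg = np.random.randn(n)                            # stand-in for one EEG channel

    # Restrict both signals to the phrase-rate band (1.1-2.1 Hz) before computing MI,
    # analogous to the band-limited tracking measure described in the abstract.
    env_band = bandpass(speech_env, 1.1, 2.1, fs)
    eeg_band = bandpass(eeg, 1.1, 2.1, fs)
    print(f"MI(speech envelope, EEG) in the 1.1-2.1 Hz band: {mi_bits(env_band, eeg_band):.3f} bits")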

We consistently found that MI between speech and EEG was higher for sentences than for jabberwocky, but not higher than for idioms or syntactic prose. This result was also found when MI was computed between the EEG signal and the abstract syntax annotations. Phrase structure tracking was also higher for sentences than for word lists, but as this difference was found for the backward versions of these stimuli as well, it could reflect the difference in their acoustics.

Overall, phrase structure tracking was stronger for sentences than for stimuli that lacked either lexical meaning or syntactic structure, but it was not consistently different from stimuli which had lexical meaning and syntactic structure. These findings suggest that cortical tracking of linguistic structure reflects the generation of lexicalized structure [3,4], whether this structure straightforwardly maps onto semantic meaning or not. This conclusion is in line with neurobiological models of language comprehension which make a functional distinction between syntactic structure building and semantic composition.

[1] Ding et al., 2016 Nat. Neurosci.; [2] Kaufeld et al., 2020 J. Neurosci.; [3] Martin, 2020 JoCN; [4] Meyer et al., 2019 LCN

Shailee Jain, The University of Texas at Austin (United States of America)

"Discovering distinct patterns of semantic integration across cortex using natural language encoding models for fMRI"

Encoding models (EMs) are a powerful tool for modeling language processing in the brain. While previous research has used word-level EMs to discover how semantic concepts are organized across cortex, we have yet to understand how the brain processes compositional meaning. Recent work has shown that language model (LM) based EMs can be used to study phrase-level processing. LMs are artificial neural networks that learn to predict the next word by developing a representation of the preceding phrase. This phrase-level representation can be extracted for each stimulus word to build encoding models. Here we built EMs using a 12-layer transformer LM (GPT) and data from an fMRI experiment with 5 subjects (3 female) listening to 5 hours of naturally spoken narrative English language stimuli. EMs were learned using ridge regression and performance was measured by testing predictions on held-out data. The phrase-level model performed well broadly throughout the cortex, highlighting the importance of context in the brain. The learned weights were then used to find phrases that were predicted to maximally activate each voxel, revealing its phrase-level semantic properties.

This model predicts voxel responses as a function of the words constituting a phrase. However, different brain areas integrate over different amounts of information. To investigate these differences, we next assessed how sensitive each voxel was to the constituent words in a phrase. We found that most voxels were much more sensitive to changes in recent words, but we also found substantial differences across brain areas. Overall, voxels in the right hemisphere integrate over more words, but voxels in prefrontal cortex showed substantial heterogeneity in both hemispheres. Finally, we compared integration across areas that are selective for the same semantic category, such as "places". This revealed significant differences between areas; for example, voxels near the parahippocampal place area had small integration windows, while voxels in retrosplenial cortex had longer windows and were particularly sensitive to prepositional phrases involving places.

These results paint a more nuanced and accurate picture of language selectivity across cortex than previous computational models. Further, examining phrase-level selectivity reveals differences among brain areas in the same semantic network, leading to a better understanding of how these areas work together to extract compositional meaning from natural language.
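
As a concrete illustration of the ridge-regression encoding-model step described above, here is a minimal sketch assuming hypothetical stimulus feature and BOLD response matrices; the array shapes, the simple train/test split, and the use of scikit-learn's RidgeCV are assumptions made for the example, not the authors' pipeline.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split

    # Hypothetical data: one row per fMRI volume (TR).
    # X holds phrase-level language-model features aligned to the TRs (placeholder values);
    # Y holds BOLD responses for a set of voxels (placeholder values).
    rng = np.random.default_rng(0)
    n_trs, n_features, n_voxels = 2000, 768, 500
    X = rng.standard_normal((n_trs, n_features))
    Y = rng.standard_normal((n_trs, n_voxels))

    # Hold out part of the data, fit ridge regression with cross-validated regularization,
    # and score predictions on the held-out set.
    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(0, 4, 10)).fit(X_train, Y_train)
    Y_pred = model.predict(X_test)

    # Per-voxel performance: correlation between predicted and measured held-out responses.
    r = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)])
    print(f"Median held-out correlation across voxels: {np.median(r):.3f}")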

 

Registration

Registration is closed.

The two keynote lectures will be live streamed on our YouTube channel without any need for prior registration.
