Extended Deadline for ProsLang Abstracts

The deadline for submitting abstracts for the Workshop on the Processing of Prosody across Languages and Varieties (ProsLang) has been extended to 23 April 2018. The organisers (Sasha Calhoun, Paul Warren, Olcay Türk, Mengzhu Yan, and Janet Fletcher) invite submissions of one-page abstracts following these guidelines.

This workshop will be held at Victoria University of Wellington (VUW), New Zealand on 29-30 November 2018. It is sponsored by the Australasian Speech Science and Technology Association (ASSTA) and the Association for Laboratory Phonology (LabPhon). It is a satellite of the 17th Speech Science and Technology Conference, University of New South Wales, Sydney, 4-7 December 2018.

The workshop aims to bring together researchers investigating commonalities and differences in the use of prosodic cues in speech processing across different languages, as well as across different varieties of major languages. The organisers are particularly interested in research on: (i) the role of prosody in semantic interpretation, including information structure; and (ii) prosody as an organisational structure for speech production and perception, including multimodal perspectives.

The invited speakers are:

  • Anne Cutler, MARCS, Western Sydney University
  • Bettina Braun, Universität Konstanz
  • Jennifer Cole, Northwestern University
  • Janet Fletcher, University of Melbourne
  • Nicole Gotzner, Leibniz-ZAS Berlin
Topics include, but are not limited to, cross-linguistic and cross-varietal commonalities and differences in:

  • the role of prosody in signalling information structure, particularly in the activation and resolution of contrast and contrastive alternatives
  • the integration of prosody and morphosyntactic cues in speech comprehension, e.g. as cues to information structure
  • the role of prosody in the management and interpretation of discourse
  • prosodic structure as an organisational frame in speech production or perception
  • links between prosodic structure and multimodal speech cues such as gesture