
Tilburg center for Cognition and Communication (TiCC)

We study how people communicate with each other and how computer systems can be taught to communicate with us.


TiCC Colloquium: Jordan Zlatev

What: Distinguishing polysemiotic communication from multimodality: conceptual and empirical issues
Where: WZ 206
When: Wednesday, 30 May 2018, 12:45 - 13:45 hours


Abstract

The notion of "multimodality" is popular these days, but it is at least three-way ambiguous, referring to (a) different sensory modalities/senses, (b) different "semiotic modes" in the tradition of social semiotics (Kress, 2009), and (c) (above all) speech and gestures as more or less integrated "communicative modalities" (Vigliocco, Perniss, & Vinson, 2014). Here, I propose to use the notion of polysemiotic communication for the combination of two or more semiotic systems, which consist of signs with system-specific properties, and their interrelations. I will define this notion in more precise terms, focusing on three general and universal semiotic systems: language, gesture and depiction. These are instantiated in particular sociocultural media (e.g. oil paintings, sand drawings, digital art), which may be either unimodal or multimodal.

Then I will use this cognitive-semiotic framework to present and interpret the results of two empirical studies. In the first, we compared the communicative efficacy of unimodal (silent) pantomime and multimodal pantomime (including vocalizations) in a task where participants had to match such performances to a matrix of transitive events (e.g. BOY KISS WOMAN), represented through pictures (Zlatev, Wacewicz, Zywiczynski, & Van de Weijer, 2017). Against a simple "more is best" prediction, we found that the multimodal condition was not only no more effective, but less so. Using the framework outlined, we can say that such multimodal performances were not a case of polysemiotic communication, as the spontaneous vocalizations used by the actors did not comprise a consistent semiotic system.

In a second study (Louhema, 2018), we let participants either see (through the usual sequence of pictures) or hear (through an audio recording) the well-known "frog story" (Mayer, 1969). Each participant then retold the story to an addressee. These narratives were video-recorded, transcribed and coded for gestures, story structure (Berman & Slobin, 1994), connectives and ideophones. Given the higher degree of iconicity in the semiotic system of depiction compared to that of language (as realized in speech), we hypothesised a higher number of ideophones and iconic gestures in the narratives translated from the pictures-only condition than in those from the speech-only condition. In the latter case, we expected greater narrative coherence, as reflected in a more diverse use of connective devices and a higher number of plot elements. Some, but not all, of these predictions were supported, but in general the results showed that a story given in different unimodal semiotic systems leads to different polysemiotic narratives.

About Jordan Zlatev


