Aim of project:

The aims of this project are twofold. On one level the project addresses a specific research topic, to be elaborated below. On another level the aim is to contribute to a more general research platform/infrastructure for addressing various issues in interaction and communication.
Level one: Our aim is to shed light on the effects of the temporal co-ordination of speech and gestures on an addressee, who watches and listens to somebody who tells a story. It is well established that there is a tight semantic-pragmatic and temporal link between speech and gesture – but it remains unknown how this fine-grained temporal synchronization influences the listener/observer (addressee), and how important it is that the alignment of speech and gestures be precise.

We ask the following:

  1. When (at what time interval of displacement) does temporal misalignment between speech and gesture affect comprehension of a narrative, and what are those effects?
  2. Does temporal misalignment between speech and gesture cause semantic integration difficulties in processing as detectable in electrophysiological measurements such as the N400?
  3. When (at what time interval of displacement) does temporal misalignment affect emotional experiences such as irritation?
  4. Are there individual and group differences in sensitivity to temporal misalignment?

To address these questions, the project will use virtual character techniques to enable fine-grained manipulation of parameters. We are developing digital characters (including so-called virtual humans) on the basis of motion capture data of natural speech and gesture. By means of these, we will be able to displace gesture relative to speech – to occur both “too early” and “too late” – in a controlled fashion.
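The kind of controlled displacement described above can be illustrated with a minimal sketch. The names here (`GestureEvent`, `displace_gestures`) are hypothetical and not the project's actual tooling; the sketch simply assumes gesture strokes are annotated as (onset, offset) intervals in seconds and shifted by a signed offset while the speech track stays fixed:

```python
# Hypothetical sketch of controlled gesture displacement relative to speech.
# Gesture strokes are (onset, offset) intervals in seconds from utterance
# start; a signed shift moves them earlier (negative) or later (positive).
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class GestureEvent:
    label: str      # e.g. "iconic", "beat"
    onset: float    # stroke onset, seconds from utterance start
    offset: float   # stroke offset, seconds from utterance start

def displace_gestures(events: List[GestureEvent], shift_s: float) -> List[GestureEvent]:
    """Shift every gesture stroke by shift_s seconds.

    Negative shift_s makes gestures occur "too early" relative to speech;
    positive shift_s makes them occur "too late". Onsets are clamped at 0
    so no stroke starts before the recording does; stroke duration is kept.
    """
    shifted = []
    for ev in events:
        new_onset = max(0.0, ev.onset + shift_s)
        duration = ev.offset - ev.onset
        shifted.append(replace(ev, onset=new_onset, offset=new_onset + duration))
    return shifted

# Example: delay all gestures by 400 ms relative to the (unchanged) speech.
track = [GestureEvent("iconic", 1.20, 1.85), GestureEvent("beat", 2.50, 2.70)]
late = displace_gestures(track, 0.400)
```

In an actual stimulus pipeline the shifted intervals would drive the virtual character's animation playback, which is what makes displacements feasible that could not be produced with human actors or edited video.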
The temporal alignment of speech and gestures is a pervasive, naturally occurring production phenomenon, and in this sense a fundamental communicative phenomenon. Our project seeks to contribute to the understanding of this phenomenon from the perspective of the addressee. Despite a large literature on how listeners process gesture information, no study has addressed the four specific questions posed above.
An obvious reason is that systematic and controlled displacements of gestures in relation to speech using human beings or video recordings are not feasible. The methodology using virtual characters is crucial to the enterprise.

Level two: The project serves as a potential starting point for a digital platform/infrastructure to study several other interaction and communication phenomena.

Publications

  • Clausen-Bruun, M., Ek, T., & Haake, M. (2013). “Size certainly matters – at least if you are a gesticulating digital character: The impact of gesture amplitude on addressees’ information uptake.” In Proc. of the 13th Int. Conf. on Intelligent Virtual Agents (IVA 2013), LNCS, vol. 8108 (pp. 446-447). Berlin/Heidelberg, Germany: Springer-Verlag.
  • Clausen-Bruun, M., Andersson, T., Ek, T., & Thomasson, J. (2013). “Size certainly matters – at least if you are a gesticulating digital character: The impact of gesture amplitude on information uptake.” Lund University Cognitive Studies No 154.
  • Viktorelius, M., Eneroth, L., & Meyer, D. (2013). “Metaphorical gestures, Simulations and Digital characters.” Lund University Cognitive Studies No 154.

For more information, see http://www.lucs.lu.se/educational-technology/
