This project is completed. For further research on similar topics, see the project Multimodality and timing: A study of audio description.
Jana Holsanova, Richard Andersson, Kenneth Holmqvist
The overriding goal is to investigate how speakers integrate language and pictures in communication. Do pictures contribute to better common ground and to a "meeting of minds"? Do they activate concepts and ease production and perception? Can partners better predict or simulate each other's minds? Can we bridge the psycholinguistic tradition (the visual world paradigm) and the conversation analysis tradition? The project aims both to broaden the current psycholinguistic discussion and to expand the experimental evidence in favor of more naturalistic set-ups.
Another goal is to analyze the interplay between visual information retrieval and the content structuring of the language production flow. While there is a growing body of psycholinguistic experimental research on mappings between language and vision at the word and sentence level, there are almost no studies of how speakers perceive, conceptualize, and spontaneously describe complex visual scenes at higher levels of discourse. We investigate the dynamic process of scene inspection, the process of scene description, and the cognitive processes underlying both. What do we attend to visually when we describe something verbally? How do we construct meaningful units of a scene during scene discovery?
Holsanova, J. (2010): Myter och sanningar om läsning. Om samspelet mellan språk och bild i olika medier [Myths and truths about reading: On the interplay between language and picture in different media]. Norstedts.
Holsanova, J. (2008): Discourse, vision, and cognition. John Benjamins Publishing Company: Amsterdam/Philadelphia.
Holsanova, J. (ed.) (2012): Methodologies for multimodality research. Visual Communication, Vol. 11, Sage.
Holsanova, J. (forthc.) What matters in visual communication – taking the recipient perspective. In Machin, D. (Ed.) Handbook of Visual communication, Mouton – De Gruyter.
Holsanova, J. (2013): Reception of multimodality: Applying eye tracking methodology in multimodal research. In: Carey Jewitt (Ed.), Routledge Handbook of Multimodal Analysis.
Andersson, R. & Diderichsen, P. (2008): Eye movements as an indicator of spoken language processes. In: Gärdenfors, P. & Wallin, A. (eds.), A Smorgasbord of Cognitive Science, 199-214. Bokförlaget Nya Doxa.
Holsanova, J., Johansson, R. & Holmqvist, K. (2008): To tell and to show: the interplay of language and visualisations in communication. In: Gärdenfors, P. & Wallin, A. (eds.), A Smorgasbord of Cognitive Science, 215-229. Nya Doxa.
Peer-reviewed journal papers
Holsanova, J. (2012): New Methods for Studying Visual Communication and Multimodal Integration. Visual Communication, 11 (3), 251–257, Sage.
Boeriis, M. & Holsanova, J. (2012): Tracking visual segmentation. Connecting semiotic and cognitive perspectives. Visual Communication, 11 (3), 259–281, Sage.
Sandgren, O., Ibertsson, T., Andersson, R., Hansson, K. & Sahlén, B. (2011): 'You sometimes get more than you ask for': Responses in referential communication between children and adolescents with cochlear implant and hearing peers. International Journal of Language & Communication Disorders, 46, 375-385. Informa Healthcare.
Sandgren, O., Andersson, R., Hansson, K., van de Weijer, J. & Sahlén, B. (submitted): Timing of gazes in child dialogues: a time-course analysis of requests and back channeling in referential communication.
Andersson, R., Ferreira, F. & Henderson, J. (2011): I See What You're Saying: The integration of complex speech and scenes during language comprehension. Acta Psychologica, 137, 208-216. Elsevier.
Peer-reviewed papers in conference proceedings
Andersson, R., Holsanova, J. & Holmqvist, K. (2011): Optional visual information affects conversation content. In: Artstein, R., Core, M., DeVault, D., Georgila, K., Kaiser, E. & Stent, A. (eds.), SemDial 2011: Proceedings of the 15th Workshop on the Semantics and Pragmatics of Dialogue, 194-195. ICT.