Integrating Visual and Verbal Meaning in Multimodal Text Comprehension: Towards a Model of Intermodal Relations

Title
Integrating Visual and Verbal Meaning in Multimodal Text Comprehension: Towards a Model of Intermodal Relations
Publication Date
2011
Author(s)
Chan, Eveline
(author)
ORCID: https://orcid.org/0000-0002-1096-0158
Email: echan4@une.edu.au
UNE Id: une-id:echan4
Editor(s)
Shoshana Dreyfus, Maree Stenglin and Susan Hood
Type of document
Book Chapter
Language
en
Entity Type
Publication
Publisher
Continuum International Publishing Group
Place of publication
London, United Kingdom
Edition
1
UNE publication id
une:8956
Abstract
The purpose of this chapter is to explore a tentative framework for modelling image-text relations (earlier versions appear in Unsworth 2006, 2008), which describes the extent to which visual and verbal elements contribute to the overall ideational meaning in multimodal texts, and the nature of the relationships among these elements. It is intended that such a model will contribute to a richer understanding of students' reading of multimodal texts, while offering a systematic approach to describing inter-semiotic relations in a way that is both useful and accessible to teachers and test-writers. To test the efficacy of the model, the framework has been applied to the analysis of data from a project investigating multimodal reading comprehension in group literacy tests administered by a state government education authority (Unsworth et al. 2006-2008). The questions explored in this research relate to how image and verbiage interact in the test stimulus materials and how students interpret meanings involving image-text relations.
Citation
Semiotic Margins: Meaning in Multimodalities, pp. 144-167
ISBN
9781441170163
9781441173225
Start page
144
End page
167
