Integrating Visual and Verbal Meaning in Multimodal Text Comprehension: Towards a Model of Intermodal Relations

Author(s)
Chan, Eveline
Publication Date
2011
Abstract
The purpose of this chapter is to explore a tentative framework for modelling image-text relations (earlier versions appear in Unsworth 2006, 2008), which describes the extent to which visual and verbal elements contribute to the overall ideational meaning of multimodal texts and the nature of the relationships among those elements. It is intended that such a model will contribute to a richer understanding of students' reading of multimodal texts, while offering a systematic approach to describing inter-semiotic relations in a way that is both useful and accessible to teachers and test-writers. To test the efficacy of the model, the framework has been applied to the analysis of data from a project investigating multimodal reading comprehension in group literacy tests administered by a state government education authority (Unsworth et al. 2006-2008). The questions explored in this research concern how image and verbiage interact in the test stimulus materials and how students interpret meanings involving image-text relations.
Citation
Semiotic Margins: Meaning in Multimodalities, pp. 144-167
ISBN
9781441170163
9781441173225
Language
en
Publisher
Continuum International Publishing Group
Edition
1
Title
Integrating Visual and Verbal Meaning in Multimodal Text Comprehension: Towards a Model of Intermodal Relations
Type of document
Book Chapter
Entity Type
Publication