Title
Integrating Visual and Verbal Meaning in Multimodal Text Comprehension: Towards a Model of Intermodal Relations

Editor(s)
Shoshana Dreyfus, Maree Stenglin and Susan Hood

Publisher
Continuum International Publishing Group

Abstract
The purpose of this chapter is to explore a tentative framework for modelling image-text relations (earlier versions appear in Unsworth 2006, 2008) that describes both the extent to which visual and verbal elements contribute to the overall ideational meaning in multimodal texts and the nature of the relationships among those elements. It is intended that such a model will contribute to a richer understanding of students' reading of multimodal texts, while offering a systematic approach to describing inter-semiotic relations that is both useful and accessible to teachers and test-writers. To test the efficacy of the model, the framework has been applied to the analysis of data from a project investigating multimodal reading comprehension in group literacy tests administered by a state government education authority (Unsworth et al. 2006-2008). The questions explored in this research concern how image and verbiage interact in the test stimulus materials and how students interpret meanings involving image-text relations.

Citation
Semiotic Margins: Meaning in Multimodalities, pp. 144-167

Start page
144

End page
167