Please use this identifier to cite or link to this item:
https://hdl.handle.net/1959.11/59366
Title: Diverging assessments: What, Why, and Experiences
Contributor(s): Sakzad, Amin (author); Paul, David (author); Sheard, Judith (author); Brankovic, Ljiljana (author); Skerritt, Matthew P (author); Li, Nan (author); Minagar, Sepehr (author); Simon (author); Billingsley, William (author)
Publication Date: 2024
DOI: 10.1145/3626252.3630832
Handle Link: https://hdl.handle.net/1959.11/59366
Abstract: In this experience paper, we introduce the concept of 'diverging assessments', process-based assessments designed so that they become unique for each student while all students see a common skeleton. We present experiences with diverging assessments in the contexts of computer networks, operating systems, ethical hacking, and software development. All the given examples allow the use of generative-AI-based tools, are authentic, and are designed to generate learning opportunities that foster students' meta-cognition. Finally, we reflect upon these experiences in five different courses across four universities, showing how diverging assessments enhance students' learning while respecting academic integrity.
Publication Type: Conference Publication
Source of Publication: Proceedings of the 55th ACM Technical Symposium on Computer Science Education, v.1, pp. 1161-1167
Publisher: Association for Computing Machinery, Inc
Place of Publication: United States of America
Fields of Research (FoR) 2020: 4602 Artificial intelligence
Peer Reviewed: Yes
HERDC Category Description: E1 Refereed Scholarly Conference Publication
Appears in Collections: Journal Article; School of Science and Technology
Items in Research UNE are protected by copyright, with all rights reserved, unless otherwise indicated.