A3
Tracks
Track 1 | Shaping the future of learning
Friday, February 13, 2026 | 3:00 PM - 3:30 PM | Ballroom A
Overview
Tool or collaborator? Emerging evidence about learning with generative AI
(30 min PRES) Jason Lodge
Speaker
Professor Jason Lodge
The University of Queensland
A3 | Tool or collaborator? Emerging evidence about learning with generative AI
3:00 PM - 3:30 PM
Submission / Abstract
The emergence of generative artificial intelligence presents a transformative moment for educational psychology, challenging traditional conceptualisations of learning tools and requiring new ways of understanding the role of technology in learning. While the initial discourse focused on academic integrity concerns, emerging evidence suggests a more nuanced reality: students engage with AI through sophisticated patterns of co-regulation that fundamentally alter the learning process.
Theoretical Framework and Research Evidence
Drawing on Lodge, de Barba, and Broadbent's (2023) network of co-regulation model, this presentation argues that generative AI operates neither as a simple tool (like calculators) nor as autonomous collaborators, but within complex human-machine learning networks. Our research (Hawkins, Taylor-Griffiths, & Lodge, 2025) suggests four distinct behavioural patterns: feedforward (initial AI requests), feedback (seeking AI assessment), feedback evaluation (critically assessing AI outputs), and deliberate AI avoidance. Critically, students demonstrate varying levels of sophistication; some exhibit an advanced understanding of AI limitations, comparing outputs across sources and requesting specific types of feedback, while others show a concerning dependence, relying on AI for task completion rather than learning enhancement.
Our study also reveals that feedback literacy is a significant predictor of AI-enhanced essay performance (β = .46, p = .017). Students with stronger feedback literacy skills demonstrate a superior ability to evaluate AI-generated feedback, make strategic revision requests ("summarise," "elaborate," "try again"), and maintain a focus on learning rather than on task completion. This variability appears linked to students' metacognitive awareness and self-regulated learning capabilities. The findings support the notion that interacting with generative AI feels more like working with a collaborator than with a tool.
Implications and Future Directions
These findings challenge prevalent educational responses to AI, suggesting that prohibition or simple regulation overlooks the transformative potential of human-AI co-regulation. Instead, educational interventions should focus on developing students' self-regulated learning capabilities, metacognitive awareness, and feedback literacy skills. The evidence suggests that students with stronger self-regulation skills naturally develop more productive AI relationships, maintaining agency while leveraging AI capabilities for learning enhancement.
This research aligns with the conference theme of "shaping the future of learning" by proposing that AI integration requires fundamental reconceptualisation of learning processes rather than technological overlay on existing practices. Our research suggests that generative AI functions as a component in complex networks of co-regulated learning. A deeper understanding of these dynamics is critical for the future of learning in education and beyond.
Learning outcomes
By the end of this session, participants will be able to:
1. Analyse Student AI Interactions Through the Co-Regulation Framework
Participants will be able to distinguish between traditional tool-based interactions and co-regulatory learning networks when observing student-AI interactions. They will identify the four distinct behavioural patterns (feedforward, feedback, feedback evaluation, and AI avoidance) in student AI use.
2. Evaluate the Role of Feedback Literacy in AI-Enhanced Learning
Participants will recognise the key indicators of strong feedback literacy (ability to evaluate AI outputs, make strategic revision requests, maintain learning vs. task-completion focus) and understand how metacognitive awareness influences productive human-AI learning partnerships.
3. Design Educational Interventions That Optimise Human-AI Co-Regulation
Participants will be able to develop evidence-based strategies that focus on enhancing students' self-regulated learning capabilities, metacognitive awareness, and feedback literacy skills rather than implementing restrictive AI policies.
Professor Jason Lodge, MAPS, PFHEA, is the Director of the Learning, Instruction, and Technology Lab and Professor of Educational Psychology in the School of Education at The University of Queensland. Jason explores the cognitive, metacognitive, and emotional aspects of learning, particularly with digital technologies, including artificial intelligence. He also serves as an expert advisor to the Australian Government and the OECD on the use of technology in education.
