When students are given space to think, question, and reflect, real learning happens. This page highlights what students said, felt, and discovered while engaging with AI-aware tasks throughout the CES527 course.
Rather than focusing on grades or final answers, this experience asked students to document their process, their confusion, and their growth. The result was a more honest, ethical, and engaged relationship with both engineering content and technology.
Reflections from Students
Below are some authentic student voices, collected from the Google Forms and Padlet used throughout the semester:
“At first, I thought AI could just give me the answer. But I realised it wasn’t always correct — and that was the turning point.”
“The diagram from AI looked right, but it didn’t match the real equilibrium. When I checked with my class notes, I understood where it went wrong — and I learned more from that mistake than from a perfect answer.”
“This is the first time I was asked to reflect on how I used a tool, not just what I got from it. It felt different — like I was responsible for what I submitted.”
Other students described how the weekly prompts pushed them to think more deeply, or even to challenge the AI's output. Many said they began noticing patterns in how the AI responded and learned to ask better questions to get more accurate results.
Impact Across Three Cohorts
This AI-aware approach was implemented over three consecutive cohorts:
- Semester 20242 – initial pilot: students were hesitant but curious
- Semester 20244 – refined version: stronger reflections and growing confidence
- Semester 20252 – scaled version: most students could critique AI output and articulate their reasoning independently
By Semester 20252, over 80% of students submitted responses showing:
- Conceptual accuracy
- Clear self-correction after interacting with AI
- Metacognitive awareness (“I thought X, but then realised Y…”)
- Ethical consideration in quoting or referencing AI responses
What Changed for the Students?
By the end of the course, students:
- Moved from passive AI users to critical evaluators
- Began to reflect on their own thinking more than the AI output
- Understood that “learning with AI” requires judgment, not just curiosity
- Felt more confident explaining their reasoning, even when it differed from the AI's
This was the true impact: not just better answers, but better thinkers.