By Dr Sue | Published on May 23, 2025

The rise of tools like Google Gemini and ChatGPT has redefined how research is initiated, planned, and even written. With just a few keywords — AI-aware, alternative assessment, engineering education — these systems can now generate full research plans, executive summaries, and even literature outlines within minutes.
This new landscape raises a critical question for supervisors, examiners, and postgraduate researchers:
When AI can write for you, what then is the true mark of scholarly ownership?
1. The Speed of AI, the Slowness of Thought
AI dramatically accelerates the planning phase of research. Armed with the AI's structured, formal outputs, students can jump straight to writing without ever wrestling with ambiguity, a crucial part of real inquiry. The danger is this: in bypassing the struggle, they may also bypass understanding.
2. The Illusion of Mastery
A well-written report does not always reflect deep comprehension. Supervisors and examiners must now probe deeper: Why did you choose this method? How do you justify this framework?
3. Supervisors as Thinking Partners
Supervisors must now facilitate intellectual dialogues, not just editorial reviews. Their role is to help students think, reflect, and own their ideas — not merely refine AI-generated texts.
4. Assessment Must Evolve
Because a polished document no longer guarantees mastery, assessment criteria should also capture:
- Metacognitive depth – Does the student reflect on their own process?
- AI literacy – Are they using AI responsibly and transparently?
- Intellectual ownership – Are the ideas truly theirs?
5. Moving Forward: AI-Aware Supervision
Rather than banning AI, we must embed it ethically in the research process. Encourage students to:
- Log and reflect on AI usage.
- Compare AI-generated insights with the published literature.
- Defend their decisions in reflective vivas or journals.
Final Thought
The challenge is not that students use AI — the challenge is ensuring they don’t stop thinking when they do. In this AI-rich academic future, the true measure of scholarship lies in the human behind the prompt.