Project members
Claudia Kaiser, Cornelia Wiedenhofer, Nadine Buchmann (Humanities Division).
Project summary
Developing and testing an AI-driven feedback tool to provide real-time, corrective, and contextualised feedback for student translations, enhancing active learning and self-reflection, with the potential for expansion to multiple languages after initial testing with English-German translations.
View final project report (PDF)
AI in Teaching and Learning at Oxford Knowledge Exchange Forum, 9 July 2025
Findings from projects supported by the AI Teaching and Learning Exploratory Fund in 2024–25 were presented at the AI in Teaching and Learning at Oxford Knowledge Exchange Forum at Saïd Business School on Wednesday, 9 July 2025.
Project team members each presented a lightning talk to all event participants, and hosted a series of small group discussions.
Follow the links below to view the lightning talk recording and presentation slides for this project.
View lightning talk recording (Panopto) - TO FOLLOW
View presentation slides (PDF)
Project case study
How did you use the AI tool in your teaching or teaching administrative work?
We piloted the AI feedback tool with Year 1 and 2 undergraduate students in our prose translation courses, taught in colleges, to support English-to-German translation tasks both in class and for independent homework.
What was your rationale for using the tool in this way? What were the benefits, both for you and your students' learning?
The rationale was to enhance the traditional feedback loop by providing immediate, tutor-style input that fosters active learning. For students, this meant more time spent revising and greater engagement with their work before submission. For tutors, the tool served as a useful primer - highlighting some issues. However, it also added to the workload, as AI-generated feedback required close scrutiny to ensure accuracy and pedagogical alignment.
Were there any challenges/limitations?
One of the main technical challenges was that the initial version of the tool failed to deliver feedback in the desired format. The switch to a different language model improved performance, but we abandoned the idea of mimicking our tutor feedback conventions, such as colour coding to highlight different types of errors and mistakes.
Pedagogically, it was challenging to position the tool not merely as a correction engine but as a partner in the learning process. Many students saw it primarily as a quick-fix tool, useful for efficiency but not conducive to deeper engagement. From a teaching perspective, we had hoped students would engage more critically with vague explanations and take the initiative in diagnosing and correcting issues themselves. This limited uptake may be linked to the current format of the tool, which does not yet support an interactive chat feature.
What did you learn in the process? (is there anything you would like to keep doing or that you might do differently next time?)
A key insight from this project was that AI tools in teaching are most effective as augmentative aids rather than fully autonomous systems. While it was challenging to position the tool as more than a correction engine, it nonetheless proved valuable as a prompt for student-led revision and in-class discussion. Its pedagogical strength does not lie in replacing tutor feedback, but in supporting reflection and engagement when guided appropriately.
How were your actions received by others (eg students, colleagues)? If you conducted an evaluation, what was the feedback you received?
Students were curious about the tool, responded positively to the concept, and were engaged during trial sessions. They used it both in tutor-led classroom activities and for independent homework tasks, and they saw value in it, especially in its immediacy. They particularly appreciated the flexibility to revise work while it was still fresh and reported using the AI feedback to improve their drafts. In post-use questionnaires, they offered constructive suggestions for improving the usability of the tool. All students agreed that the tool helped them improve their translations. Responses were more mixed regarding whether the tool increased their confidence or motivated them to spend more time revising.
Teaching colleagues were open and expressed interest in exploring the tool with their cohorts.
How would you like to build on or develop the work you have done?
We’d like to trial the tool with new cohorts and with other colleagues teaching translation to gather more data on user experience and impact on learning, and to improve the usability of the feedback tool. In the long term, we hope to develop interactive features and perhaps follow-up task suggestions.