
AI for feedback: what to keep in mind when developing your own tool

A lack of communication skills often leaves UK graduates feeling “naked in the workplace”. To address this, we developed an artificial intelligence (AI) tool that provides feedback on students’ spoken communication, in addition to academic and reflective writing.
Students can upload an audio recording to receive feedback; the excerpt below is an example response:
“Your reflection starts with a clear context. You then transition into X, however, you miss a crucial step in explaining the implications of this process… To strengthen this section, consider adding a sentence or two to discuss how you felt about Y and how it impacted your experience… and consider adding a specific example or two of how you think you’ll apply the insights you gained from this experience to future projects or challenges.”
In a previous article, we explored how universities could navigate the shifting relationship between students, educators and AI technologies. Here, we explain how educators can design AI applications that align with their pedagogical values and institutional contexts, along with what to watch out for when doing so.
We recognise that many view AI-generated diagnostic and summative feedback as controversial, so we focus only on formative feedback for the time being.
1. Put ethics first
Responsible development starts with securing research ethics approval and ensuring we understand and apply ethical guidance. Our guiding framework was the European Commission’s AI ethics guidelines, which emphasise human agency, fairness, transparency, privacy and societal benefit. This meant avoiding automated grading and ensuring students understood how their data was handled.
2. Involve students throughout
We used design thinking, a structured, human-centred approach to problem-solving, to better understand students. Their core frustration was a lack of timely, high-quality formative feedback on demand. So, we co-designed the tool over four iterative cycles, incorporating students’ ideas and preferences along the way. Sketches and low-fidelity prototypes enabled rapid feedback. This student-centred approach helped us build a tool that students viewed as useful, as well as innovative.
3. Prioritise transparency
Transparency is about more than compliance. It highlights what responsible AI practice should look like and reinforces trust. Students and colleagues need to see how AI works, not just that it works.
We made disclaimers and privacy notices, approved by our in-house legal and compliance services, clearly visible. They explained that the tool collected no personal data beyond the voice recording itself.
4. Expect resistance – and understand it
Surprisingly, the strongest resistance came not from students but from decision-makers within the university, despite the ethical safeguards and demonstrated student benefits. Prospect theory, developed by Daniel Kahneman and Amos Tversky, explains that people tend to weigh potential losses more heavily than possible gains. It suggests we should not take resistance personally, but rather recognise that, in institutional thinking, potential risks, however unlikely, can outweigh likely benefits.
5. Adapt continually
As generative AI evolves, student expectations change. Ongoing iterations are essential, not only for the interface and functionality, but also for the framing, purpose and scope of the tool.
6. Use open-source code
The code underpinning our work relies on “big tech”, namely Whisper, OpenAI’s machine learning model for speech recognition and transcription, which we use to interpret audio files. We also use Llama, a large language model developed by Meta AI, for pre-prompting (providing foundational instructions to an AI model). Both are open source, and we copied the code into our tool’s directory so that everything runs locally. As a result, the oral or written files students submit, and the feedback files generated, remain private: submissions are processed only by the locally hosted code, and neither tech firm has oversight of, or access to, the data.
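To make this concrete, here is a minimal sketch of what such a local pipeline might look like, assuming the open-source openai-whisper and llama-cpp-python packages and a Llama model file already downloaded to disk. The file names and the prompt are illustrative only, not our production code.

```python
# A minimal local-pipeline sketch: transcribe audio with Whisper, then pre-prompt
# a locally hosted Llama model to generate formative (never summative) feedback.
# File names, model choice and prompt wording are illustrative assumptions.
import whisper                  # open-source speech-to-text, runs locally
from llama_cpp import Llama     # local inference for a downloaded Llama model

# 1. Transcribe the student's audio submission on our own infrastructure.
stt_model = whisper.load_model("base")
transcript = stt_model.transcribe("student_reflection.mp3")["text"]

# 2. Pre-prompt the local Llama model, then pass in the transcript.
llm = Llama(model_path="llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)
response = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": ("You give formative feedback on reflective spoken or "
                     "written work. Comment on context, implications and "
                     "future application. Do not award grades or marks.")},
        {"role": "user", "content": transcript},
    ],
)
feedback = response["choices"][0]["message"]["content"]
print(feedback)  # nothing leaves the local environment at any step
```

Because both models run inside the tool’s own environment, the privacy guarantee is a property of the architecture rather than of a vendor’s terms of service.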
Building your own AI tool is not just a technical project. It is a design, ethics and change management challenge. But it is also a journey to understanding how AI works, and an opportunity to collaborate between researchers from different departments. Furthermore, building a tool allows institutions to reclaim agency in an AI landscape increasingly dominated by external solutions. So, what’s next for us? One possibility is enabling students to contribute video submissions, reflecting the growing importance of video generation.
To request access to the feedback tool, please email Isabel.fischer@wbs.ac.uk
Isabel Fischer is a professor of digital innovation, and Sean Enderby is a senior research software engineer at Warwick Business School. Ross Hunter is a postdoctoral researcher in the department of physics at the University of Warwick.