Conversations with bots: teaching students how – and when – to use GenAI for academic writing

A four-step process teaches students how to use GenAI tools to brainstorm ideas, understand and act on feedback, and edit their essays in line with assessment rubrics
12 Feb 2026
[Image: a robot speaking to a student. Credit: iStock/demaerre]



Promoting the critical and ethical use of generative artificial intelligence (GenAI) in undergraduate academic writing can be challenging. Novice writers may rely heavily on digital tools and may not always take a critical stance towards them. But there are times when GenAI can be valuable, such as for idea generation and feedback, particularly for students learning English as a second language. So how can we teach students how – and when – to use these tools?

In this resource, we share a four-step strategy for improving GenAI literacy in an academic writing course. We used an in-house chatbot developed by our university, but the same techniques work with commercial chatbots, such as ChatGPT. 

1. Purpose

We wanted to teach students to use GenAI at different stages of the writing process, not simply to write the entire essay.

To illustrate this, we gave students an AI-generated essay and asked them to evaluate the quality of the writing. Students identified a range of positive features, along with several errors. We then showed them the prompt we had used, which contained only the essay title and required word count. Students realised that a single prompt does not necessarily yield a strong piece of writing.

2. Idea generation

Next, students used a chatbot to generate a list of arguments and counter-arguments to use in their essays. We asked them to brainstorm independently with the chatbot and then share the prompts with their peers. By reviewing each other’s prompts, students identified flaws and strengths. They realised, for example, that they needed to clarify the chatbot’s role and to provide additional details about the task and context, and that more detailed prompts led to more effective ideas. 

For instance, “Act as a writing tutor. I’m writing an academic essay for a first-year English for academic purposes (EAP) university module. Provide five possible ideas for arguments and counter-arguments for: [essay title]” yielded more effective results than “Can you give me ideas to answer: [essay title]”.
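
This contrast is straightforward to demonstrate in script form. Below is a minimal sketch, assuming the openai Python package and an OpenAI-compatible chatbot endpoint; we used our university's in-house tool, so the client, the model name and the bracketed essay title are illustrative placeholders rather than our actual set-up.

    # A minimal sketch comparing a vague and a detailed prompt; the model
    # name and essay title are illustrative, not our in-house chatbot.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    essay_title = "[essay title]"  # substitute the actual essay title

    prompts = {
        "vague": f"Can you give me ideas to answer: {essay_title}",
        "detailed": (
            "Act as a writing tutor. I'm writing an academic essay for a "
            "first-year English for academic purposes (EAP) university "
            "module. Provide five possible ideas for arguments and "
            f"counter-arguments for: {essay_title}"
        ),
    }

    for label, prompt in prompts.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} prompt ---")
        print(response.choices[0].message.content)

Running both prompts side by side lets students see, in the same session, how much the extra role and context information changes the output.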

3. Engaging with feedback

Next, we gave students feedback on their draft essays in the form of a checklist and comments made in line with the assessment rubrics. We asked students to use GenAI to clarify the requirements of the rubrics and checklist, unpack our feedback and suggest possible responses.
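
For readers who want to see this step as a script, here is a hedged sketch of a feedback-unpacking prompt, again assuming an OpenAI-compatible API; the bracketed placeholders stand in for the real rubric criterion and teacher comment, and the model name is illustrative.

    # A sketch of a feedback-unpacking prompt; the placeholders and model
    # name are invented for illustration, not our actual materials.
    from openai import OpenAI

    client = OpenAI()

    rubric_criterion = "[one criterion from the assessment rubric]"
    teacher_comment = "[the teacher's comment on the draft]"

    prompt = (
        "Act as a writing tutor. Here is one criterion from my assessment "
        f"rubric: {rubric_criterion}\n"
        f"Here is my teacher's comment on my draft: {teacher_comment}\n"
        "Explain in plain language what the criterion requires, what the "
        "comment asks me to change, and suggest two possible ways I could "
        "respond in my next draft."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)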

We then asked students to review each other’s work using GenAI and to compare these results with the teacher’s feedback. Students saw that each source of feedback gave valuable suggestions, and synthesising these helped develop feedback literacy.

Through guided practice, we teach students that GenAI is not a replacement for peer and teacher feedback but is complementary to them. In formative assessments, teachers can incorporate all three modes to improve academic outcomes.

4. Proofreading

While students can use tools such as Microsoft Copilot or Grammarly to proofread essays, the targeted and iterative nature of chatbot interactions can lead to more specific and detailed feedback.

We asked students to compare Grammarly and GenAI feedback to understand the value of each tool. They realised that GenAI could provide more detailed feedback in line with assessment rubrics or journal style guidelines. 

Again, careful prompting was key here. Students needed to ask the chatbot to highlight or identify the changes it made. It was also generally more effective to ask for feedback on aspects of their writing, such as grammar, punctuation, academic style or referencing, rather than to ask the tool to proofread everything at once. Students also benefited from asking for suggestions based on the feedback they received or common issues they had experienced. For example, one student who had received peer feedback about academic style prompted the chatbot to offer suggestions on this aspect of her writing. 
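
To make this aspect-by-aspect pattern concrete, the sketch below sends one targeted request per aspect while retaining the conversation history, which is what makes the exchange iterative rather than one-shot. As before, the API, the model name and the bracketed draft are assumptions for illustration only.

    # A sketch of aspect-by-aspect proofreading; keeping the message
    # history makes the exchange iterative rather than one-shot.
    from openai import OpenAI

    client = OpenAI()

    draft = "[student's draft paragraph]"  # substitute the real text
    aspects = ["grammar", "punctuation", "academic style", "referencing"]

    # Opening message sets the chatbot's role and supplies the draft once.
    messages = [{
        "role": "user",
        "content": f"Act as a proofreader for an academic essay. "
                   f"Here is my draft:\n{draft}",
    }]

    for aspect in aspects:
        # One focused request per aspect, not "proofread everything".
        messages.append({
            "role": "user",
            "content": (
                f"Give feedback only on {aspect}. Quote each passage you "
                "would change and explain why, so every suggested change "
                "is visible."
            ),
        })
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})  # keep history
        print(f"--- {aspect} ---")
        print(reply)

Asking the chatbot to quote each passage it would change mirrors the classroom advice above: students can see exactly what was altered instead of accepting a silently rewritten text.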

Students need clear guidance on how to use GenAI for academic writing. Ours now know that a single prompt does not necessarily lead to a polished essay, that it can nevertheless produce effective results at the idea-generation stage, and that more iterative prompting gets the best results in the proofreading and feedback stages. Institutional GenAI policies and guidance are valuable, but the key to building AI literacy is to allow students to practise using and experimenting with GenAI in class.

Joseph Tinsley is an educational developer; Huimin He is a language lecturer, both at Xi’an Jiaotong-Liverpool University.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
