
How AI is quietly distorting academic enquiry – and what to do about it
For years, universities taught students how to search: how to evaluate sources, compare perspectives and navigate the imperfect signals of search engine optimisation (SEO). With the advent of paid placement, that system was never neutral, but it still required the ability to scrutinise sources. Students had to weigh, judge and decide.
Now, more and more, AI-generated summaries – embedded in platforms such as Google and tools like ChatGPT – are replacing traditional searches. Instead of sifting through competing links, students receive a single, coherent response: conversational, confident and ready to refine with a follow-up prompt. It is efficient. It is intuitive. It removes the friction of sorting through information. In doing so, it risks removing something else: the habit of critical enquiry itself.
Not wrong – but skewed
Much of the public debate around AI has focused on misinformation: deepfakes, hallucinations, outright falsehoods. But a more subtle issue is emerging – it is not that AI gets things wrong; it is that it gets things selectively right. AI-generated answers are built on vast amounts of scraped content – often drawing heavily from sources such as Wikipedia, Reddit, YouTube and online reviews. These inputs are uneven, opinionated and shaped by participation and user preferences rather than expertise. When they are synthesised into a single response, what emerges is not a balanced view but a filtered one.
This is something subtler, and in many ways more insidious, than misinformation in the conventional sense. Outputs appear authoritative while quietly privileging certain perspectives over others. Some call this the “white noise” of misinformation; a student encountering such an answer is not misled by outright falsehood but rather guided by emphasis.
The force-multiplier effect
What makes this dynamic powerful – and worrisome – is AI’s tendency to amplify the distinctive or anomalous.
Imagine a student researching a university, a public policy issue or even a scientific debate. Among thousands of consistent, unremarkable data points sits one outlier: a striking claim, perhaps, a highly negative review or an unusual interpretation.
In a traditional search environment, that outlier might be buried. In an AI-generated summary, it may be elevated – precisely because it is unusual. AI systems are designed to identify patterns and, in doing so, surface what stands out. The result is a force-multiplier effect that amplifies distinctive narratives rather than dominant ones. In a research context, this means that a marginal or atypical perspective can be presented as representative, shaping a student’s understanding from the outset.
This has profound implications; once that framing is established, it shapes the entire project.
From search engines to answer engines
At the same time, the economics of information are shifting.
Search engines were already shaped by commercial pressures – paid placements, for example, influence visibility. That logic is now moving into AI-generated responses. As advertising becomes embedded in these systems, the boundary between information and promotion risks becoming increasingly blurred. For students and researchers, this raises an uncomfortable possibility: the most persuasive answer might not be the most accurate but the most optimised.
Deskilling: the deeper risk is not just writing less but thinking less
Much of the anxiety around AI in education has focused on writing. Will students still learn to structure arguments, use evidence or master language? These concerns matter. But the greater risk is cognitive. If students rely on AI-generated summaries as starting points, they could bypass the intellectual work that defines critical thinking in higher education: weighing competing claims, identifying gaps and grappling with uncertainty.
Compounding this is AI’s tendency towards confirmation. AI is designed to optimise user experience, so these systems often reflect back what users are likely to agree with, subtly narrowing the range of perspectives they encounter. In disciplines built on contestation and critique, this is not a minor technological flaw; it is a fundamental contradiction that undermines the analytical process of learning.
What universities must do now
We already know that our response cannot be to ban AI; that would be shortsighted and self-defeating, because students need to know how to use AI effectively and it is already embedded in the digital research process itself. Nor is it enough simply to teach students to write better prompts; that is a technical fix for what is, at heart, a human task of intellectual processing.
Instead, universities must treat AI as an object of critical study in its own right.
Here’s what I recommend:
- Make the system visible. Students need to understand how AI generates answers: what data it draws from, how it prioritises information and why certain outputs appear more authoritative than others. Understanding what’s inside the “black box” must become part of the curriculum.
- Shift from AI skills to AI literacy. Using AI effectively is not the same as understanding it. Students should be trained to interrogate outputs: what perspectives are missing? What kinds of sources are being privileged? How might an answer change with a different framing? Assignments should reward the process of questioning answers, not just producing them.
- Confront bias in its everyday form. Bias in AI is often discussed in extreme terms, but its most pervasive form is preference; what a user likes, believes or expects shapes what the system returns. Without awareness of this dynamic, students risk mistaking personalised outputs for objective truth, so recognising that influence is a critical academic skill. Bias awareness demands a very human process of self-reflection: what are my preferences? How might someone on the opposing side of this issue view these answers? How is AI framing this issue to satisfy my preconceived viewpoints?
Reclaiming enquiry
AI will become faster, smoother and more convincing – and that is precisely the challenge. The more seamless the response, the easier it is to accept without question. And the more that happens, the further we move from the core purpose of higher education: not simply to deliver answers but to cultivate the capacity to question them.
Because in the end, the risk is not that AI will think for us. It is that we slowly forget how to think without it.
Cayce Myers is professor of public relations and director of graduate studies in the School of Communication at Virginia Tech.

