Why AI literacy must come before policy

When developing rules and guidelines around the uses of artificial intelligence, the first question to ask is whether the university policymakers and staff responsible for implementing them truly understand how learners can meet the expectations they set.

11 Sep 2025

Building blocks for AI policy. Image credit: tsingha25/iStock.

Created in partnership with the University of Canterbury.

With the increasing use of artificial intelligence, schools and universities have focused on creating policies aimed at reducing the perceived risks that it poses to learning. These policies tend to set expectations for students, often emphasising their responsibility to maintain the authenticity and integrity of their work, as Jess Luo of the University of Hong Kong has written.

However, what is not always clear is whether students understand these policies and have the skills to follow them. While policies aim to ensure the validity of our credentialing processes, they are often created to protect the institution rather than empower students or educators. When developing policies around AI, we should ask whether the policymakers and staff responsible for implementing them truly understand how learners can meet the expectations they set. What does responsible AI use look like? 

Literacy first, guidelines second, policy third

For students to respond appropriately to policies, they need supportive guidelines that put those policies into practice. Further, to apply these guidelines, they need a level of AI literacy that gives them the knowledge, skills and understanding required for responsible use of AI. Therefore, if we want AI to enhance education rather than undermine it, we must build literacy first, then create supportive guidelines. Good policy can then follow.

This means:

  1. training policymakers, educators and students in the fundamental literacies of AI
  2. co-designing guidelines that are principles-based and adaptable, not static rulebooks
  3. creating policy that can be enacted, practised, assessed and integrated into the curriculum.

Supporting AI literacy for responsible use

AI literacy must become the foundation for any institutional response. While AI literacy is needed across the university, it is most important for those who guide the space: senior leaders and educators. For educators, AI literacy means understanding what AI is, how it works and how to use it effectively, so that they can support their students in responsible use.

To begin addressing this critical gap, we have started actively implementing AI literacy across multiple levels, through educator development, curriculum design and student engagement. At the centre of this work, we have used our scaffolded AI literacy (SAIL) framework, developed through a Delphi study with experts in AI and curriculum.

To support the dissemination and application of this framework, we developed a series of short courses for educators, designed to support their understanding and application of AI in teaching and learning. The AI Essentials for Educators course introduces core concepts aligned with the three domains of the SAIL framework: AI concepts, cognitive and applied skills, and AI digital citizenship. This includes building foundational knowledge of how AI works, recognising its limitations, exploring the practicalities of classroom use, and reflecting on the ethical implications and risks. The course covers both the potential of AI to enhance learning and its pitfalls, exploring over-reliance, misinformation and ethical grey zones, as well as how educators can model responsible, adaptive use for their students. A follow-up course addresses the challenging area of AI’s impact on authentic assessment.

We have also drawn on the SAIL framework to develop explicit, practice-based examples that educators can use to integrate AI literacy into their teaching. These include tools and resources to help students better understand how AI works and where it falls short. The emphasis is on embedding literacy within learning itself, so that AI literacy develops continually and in context.

Additionally, we have engaged directly with students in the co-creation of AI guidelines. In doing so, we embed AI literacy through the organic development of guidelines, rather than merely demanding it through policy.

These interconnected efforts mark a shift from reactive rule-making towards a more proactive, informed and inclusive approach that treats AI literacy as both a precondition for policy implementation and a core capability for teaching and learning in the age of AI.

AI literacy as a cornerstone

These approaches reflect the undeniable influence of AI. AI literacy is no longer a “nice to have” or something that can be delivered as a one-off activity or workshop. Rather, for educators and their institutions, it must be the foundation for the responsible use of AI in teaching, learning, assessment and, yes, policy. Without it, our policies will miss the mark and hinder rather than help students to become AI-literate citizens.

It’s time to make AI literacy the cornerstone, not the afterthought, of how education responds to this new reality.

Kathryn MacCallum is associate professor of digital education futures within the School of Educational Studies and Leadership at the University of Canterbury, New Zealand. David Parsons is research director at academyEX and adjunct professor at the University of Canterbury.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
