Schools don’t build innovation on hope; they build it on stable ground. Artificial intelligence has arrived with enormous promise, a dose of anxiety, and the uncomfortable reality that most schools are not yet structurally prepared for what it makes possible. The concept of an AI Safety Capsule offers a way forward. It is not a product or a policy document. It is a way of thinking about governance, strategy, risk, and culture so a school can innovate confidently without drifting into hesitation or chaos.
What Is the AI Safety Capsule Framework?
At its core, the AI Safety Capsule creates clarity for everyone: about what AI is for, how it will and won’t be used, and what conditions must be in place before it becomes routine. It sets the norms, boundaries, and expectations that allow teachers to experiment safely and leaders to make decisions with purpose rather than fear. It becomes the ground that stays still beneath everyone’s feet while the technology races forward.
Why International Schools Need a Foundation for AI Now
The need for such a foundation has become unmistakable. Students already use AI every day: to draft study notes, plan assignments, explore new ideas, and sometimes to take shortcuts. Teachers are using it too – over 90% of teachers in the UAE, Singapore, New Zealand, and Australia, according to a recent OECD survey – some quietly, some enthusiastically, and most without clear direction even on the basics of what is and isn’t appropriate. Without a shared approach, schools end up with conflicting messages, fragmented practice, and a messy grey zone where innovation and risk coexist without clear supervision. Establishing a Safety Capsule brings order to this complexity and helps schools launch into better conversations and sustainable practices.
Without a shared approach, innovation and risk coexist without supervision.
Five Barriers to Implementing AI in Schools
Five particular points of friction complicate this work for school leaders. Each is predictable, but each is also solvable with deliberate structure.

Barrier 1: The Illusion of Time
The first is the absence of urgency. Many staff believe AI is something to “look into later,” even as global industries, universities, and students move far ahead. A principal may notice that only a few early adopters are experimenting while most teachers see AI as an optional extra. Instead of trying to spark urgency through fear or hype, a better approach is to centre the conversation on sensible responsibility. A one-page AI positioning statement that is simple, public, and values-driven creates immediate clarity and signals that the school is not waiting for the world to move first. (As I like to say, if you can’t create a Canva poster of your message, it’s not clear enough yet.) It becomes an anchor that everyone can return to, and it gives permission for thoughtful experimentation to begin.
Barrier 2: Conflicting Leadership Messages
A second point of friction is the fragmented, conflicting set of expectations that creeps in when different leaders interpret AI differently. A deputy encourages creative exploration; the ICT manager warns of data security; a faculty head bans the use of AI in essays; everyone on staff quietly uses it for planning. Each may be acting sensibly, but in isolation their decisions create confusion. Leaders can often resolve this through a short alignment process: an internal workshop that clarifies key risks, identifies non-negotiable boundaries, and sets expectations for safe experimentation. When the output is turned into a brief, repeatable message delivered consistently across parent meetings, staff gatherings, and student assemblies, coherence starts to take shape. Schools sometimes find this hardest to do internally because alignment requires honest debate, so external facilitation can accelerate clarity and reduce tension.
Barrier 3: Risk Without Structure
The third point of friction lies in navigating risk. Leaders carry concerns about data exposure, student privacy, academic integrity, and the unpredictable behaviour of AI tools. A school might trial a new literacy platform only to discover that data consent was incomplete or the hosting location unclear. This undermines trust and slows momentum. The Safety Capsule reframes risk from something to fear into something to map and manage. A quick sweep across teaching, operations, data governance, wellbeing, and parent communication can surface the school’s top few risks. Each can then be paired with a mitigation that is implementable within the term. What matters is not perfection but transparency. Leaders who put these basics in place become better equipped to handle more complex AI decisions down the track.
Barrier 4: Professional Development Without a Pathway
The fourth point of friction is the lack of a clear pathway for teacher capability. Most teachers are not resistant to AI; they are simply unsure where to begin. Without a shared sense of what “capability” looks like, professional development becomes disjointed: a big workshop at the start of term, a tool demo halfway through, the occasional enthusiastic email. Meanwhile, leaders have no visibility of actual practice. Establishing a progression of professional growth, beginning with essential skills in “AI First Aid” and developing into ethical, pedagogical, operational, and adaptive uses, gives teachers a map rather than a menu. A simple baseline activity that every staff member can complete in twenty minutes provides an anchor point for growth. When a school adopts a capability ladder, teachers start experimenting more confidently and leaders get the visibility they need to support momentum.
Barrier 5: Failure to Scale
The final point of friction is the difficulty of scaling beyond early adopters. Schools often run promising pilots led by a handful of passionate teachers. Their case studies are shared at a staff meeting, celebrated, and then quietly shelved. Six months later, only that same handful of teachers is still using AI well. Scaling requires rhythm, not fanfare. Set out a small but deliberate “scale map”, or roadmap, to visually articulate your strategy: one whole-school initiative, one faculty-specific initiative, and one student-facing initiative. This creates a manageable structure for term-by-term progress. With light reporting and regular review, early successes can be extended without relying on individual champions. Momentum becomes systemic rather than personal.
The AI Safety Capsule is not a policy exercise but a leadership one.
AI Governance in Schools: A Leadership Responsibility
Taken together, these friction points demonstrate why the AI Safety Capsule is not a policy exercise but a leadership one. It equips schools to move safely and strategically, grounded in clarity rather than guesswork. It establishes guardrails that protect people, organisational reputations and values, while also opening space for genuine innovation. Most importantly, it gives teachers and students the reassurance that AI is part of a purposeful, well-governed strategy rather than a chaotic experiment.
Schools that build this foundation discover that confidence grows quickly. Once the capsule is in place, everything becomes easier: designing expectations for students, engaging parents, selecting tools, supporting teachers, and planning for the future. Leaders gain a language for discussing AI that is neither defensive nor reckless. They learn to make decisions that keep the school human-centred even as new automation and intelligence tools emerge.
By Matt Esterman

Matt Esterman, an Edruptor of 2024, has more than 20 years’ experience working in schools and beyond as a leading voice in the thoughtful adoption of technology. He is a trained History teacher with two master’s degrees who has made a significant contribution to professional learning in Australia and overseas. He has been recognised with several awards, most recently as a Commonwealth Bank Teaching Fellow, awarded by Australian Schools Plus. Matt founded The Next Word, a consultancy that seeks to leverage AI and other technologies to help shape a better future. He works with schools, universities, and other organisations to increase awareness and capability in using AI. He co-authored “The Next Word: AI & Teachers” with Dr Nick Jackson, which launched in 2024, and “The Next Word: AI & Learners” with Nick and Amy Wallace, published by Amba Press in 2025. He is a regular speaker and workshop facilitator across Australia and internationally with educational and corporate organisations. Matt has been appointed an Adjunct Fellow in the School of Education at Western Sydney University and is a member of the HP Futures 2025: AI & Leadership Council.