Let’s be honest – with everything on your plate as a school leader, another regulatory framework probably isn’t top of your wish list. But the EU AI Act is here, and if your staff are using AI tools (which, let’s face it, they are, whether you know it or not), you’ll need to understand what it means for you.
The “Quick Guide” to AI Risk Levels
The EU has created four tiers of AI risk, and understanding where different tools fall will determine your compliance requirements:
- Prohibited AI: These are completely banned. Think emotion detection systems monitoring students, real-time facial recognition for discipline, or tools that manipulate student behaviour. If you’ve somehow implemented any of these, they need to go immediately.
- High-Risk AI: This is where your attention is most needed. Any AI making significant decisions about students’ educational journeys – automated grading systems, admissions algorithms, or tools that determine educational pathways. These require robust oversight, explainability, and proper appeals processes.
- Limited-Risk AI: Examples include chatbots and content generators. These need proper disclosure, so everyone knows when they’re interacting with AI rather than a human. There are also some obligations around the types of content that are generated, especially deepfakes or text written to manipulate (think mis- and disinformation).
- Minimal-Risk AI: Basic tools like AI-powered spell-checkers and scheduling assistants. These systems face no obligations under the AI Act due to their minimal risk to citizens’ rights and safety.
AI Literacy: A Legal Requirement from February 2025
From February 2025 (yes, now), schools must ensure staff have sufficient AI literacy training. The Act takes a proportionate approach, recognising that different staff will need different levels of training based on how they use AI.
For those working with high-risk systems, deeper training on oversight, bias detection, and intervention will be necessary. For staff using AI primarily for administrative tasks or lesson planning, basic awareness training may suffice. This sensible, scaled approach means you can focus resources where they’re most needed without overwhelming everyone with technical details irrelevant to their roles.
What matters is that staff understand areas like:
- How AI generates responses based on patterns rather than understanding
- The importance of reviewing AI-generated content before use
- When human judgment should override AI suggestions
- How to identify and address potential bias
- Basic data protection considerations when using AI tools
Similar to how we approach broader digital skills, AI literacy builds on existing knowledge but adds specific awareness of how these systems work, their limitations, and how to use them responsibly.
Compliance: The Importance of Early Preparation
I strongly recommend appointing an AI compliance lead by August 2025, giving you a full year to prepare before the main requirements come into effect in August 2026.
This proactive approach is particularly important considering the potential penalties, which can reach up to €35 million or 7% of global annual turnover for serious violations. Any regulatory action brings unwanted scrutiny and challenges that extend beyond financial consequences.
A Framework for Implementation: Assess, Review, Comply
To help schools navigate this new landscape, consider the following structured approach:
1. Assess
Take stock of your current AI use across the school. You cannot provide training or have oversight on tools you don’t know are in use.
For example:
- Conduct an AI inventory across all departments
- Survey staff to understand which tools they’re using
- Check whether tools are processing student or staff data
2. Review
Examine how these tools are performing and what risks they might pose.
For instance:
- Test AI systems for potential bias or fairness issues
- Verify whether AI-generated decisions can be explained
- Check if human oversight is properly implemented
- Ensure data privacy and security measures are appropriate
- Determine which risk category each tool falls into
3. Comply
Take practical steps to meet regulatory requirements based on your findings. Examples include:
- Develop a comprehensive AI Use Policy
- Create an AI Register documenting all tools and their risk levels
- Implement stronger oversight for high-risk systems
- Establish clear appeals processes for AI-generated decisions
- Deliver appropriate AI literacy training to all staff
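For schools with technical staff who want to move beyond a spreadsheet, the AI Register from the Comply step can be sketched in a few lines of code. This is a minimal illustration only – the tool names, risk labels, and review rule below are hypothetical examples, not assessments of real products or a statement of what the Act requires:

```python
import csv

# Illustrative AI Register. Entries and risk labels are examples only;
# each school must classify its own tools against the EU AI Act tiers.
register = [
    {"tool": "Automated grading system", "risk": "high",
     "processes_student_data": True, "human_oversight": True},
    {"tool": "Classroom chatbot", "risk": "limited",
     "processes_student_data": True, "human_oversight": True},
    {"tool": "AI spell-checker", "risk": "minimal",
     "processes_student_data": False, "human_oversight": False},
]

def needs_priority_review(entry):
    """Flag prohibited/high-risk tools, plus any tool that touches
    student data without documented human oversight (an illustrative
    internal rule, not a requirement from the Act itself)."""
    return entry["risk"] in ("prohibited", "high") or (
        entry["processes_student_data"] and not entry["human_oversight"]
    )

priority = [e["tool"] for e in register if needs_priority_review(e)]
print("Priority review:", priority)

# Export the register as a CSV for governors, DPOs, and inspectors.
with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=register[0].keys())
    writer.writeheader()
    writer.writerows(register)
```

Even if nobody at your school writes code, the structure is the point: one row per tool, its risk category, whether it processes personal data, and whether a human reviews its outputs.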
Clear Next Steps for School Leaders
- Map your AI landscape: Identify all AI tools currently being used across your school.
- Classify by risk level: Determine which category each tool falls into under the EU AI Act.
- Prioritise high-risk systems: Focus compliance efforts on tools that impact student opportunities and outcomes.
- Schedule staff training: Plan AI literacy training to match the AI use of your school.
- Appoint an AI compliance lead: Designate someone to coordinate your approach by August 2025.
- Develop necessary documentation: Create policies, registers, and oversight mechanisms.
- Establish review cycles: Set up regular checks to ensure ongoing compliance.
“With a structured, step-by-step approach, we can turn regulatory challenges into opportunities for better, more thoughtful AI integration in education.”
The Path Forward: Action, Not Alarm
You’re not alone in this journey. Schools across Europe are navigating the same waters, and resources are emerging to support your compliance efforts. The key is to start now, while there’s still time to thoughtfully implement these changes rather than rushing to meet deadlines.
The schools that will thrive in this new landscape aren’t necessarily those with the most advanced AI tools – they’re the ones with leaders who recognised early that proactive governance beats reactive compliance every time.
Your students and staff are counting on you to get this right. The clock is ticking, but with clear steps and a structured approach, you’ve got this. The time to act is now.
By Matthew Wemyss
Matthew Wemyss is a Teacher and School Leader at Cambridge School of Bucharest. You can connect with Matthew on LinkedIn or through his website. He was named an ISC Research Edruptor of 2024.