Building consensus for a meaningful AI policy

Feb 22, 2024

Since ChatGPT became publicly available in November 2022, schools have been reeling from the effects of artificial intelligence. In this article, Chris Ferrara discusses ways schools can build a consensus for a meaningful AI policy.


By Chris Ferrara

Since ChatGPT became publicly available in November 2022, schools have been reeling from the effects of artificial intelligence. By January 2023, AI was passing the bar exam and getting Bs at the Wharton School of Business. It wasn’t long before AI tools filtered into the student consciousness to fulfil technology’s greatest promise: a free homework machine. How we deal with AI in school will represent the most substantial shift in education since the introduction of the computer itself. Every school leader needs to understand what AI is, what it is not and how to lead their faculty and stakeholders in a deliberate and meaningful direction based on their school’s unique values and challenges. At Menaul School, our interventions and training helped our faculty to get ahead of AI challenges, to the benefit of our students.

To begin with, ‘artificial intelligence’ is a misnomer; a better term for these tools is ‘large language models’, or LLMs. There is no comprehension taking place within the circuits of ChatGPT. Instead, LLMs ingest huge sets of data, most of it culled from the internet and libraries of text, then use predictive algorithms to determine what the next word should be. It’s essentially a scaled-up version of the predictive text feature on your smartphone that knows when you type ‘I’ll call you back’ that the next word is probably ‘later’. Trained on billions of language samples and leveraging neural networks that distinguish better from worse responses, these models generate incredibly complex and superficially impressive outputs.
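The prediction idea above can be sketched with a toy bigram model. This is a hypothetical, radically simplified stand-in for a real LLM, which uses neural networks over billions of samples rather than raw word counts, but the core move is the same: given some context, output the most likely next word.

```python
from collections import Counter, defaultdict

# Tiny corpus echoing the smartphone example from the text.
corpus = (
    "i'll call you back later . "
    "i'll call you back soon . "
    "i'll call you back later . "
    "call me back later ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("back"))  # 'later' follows 'back' most often here
```

The model here has no idea what ‘back’ or ‘later’ mean; it is simply replaying frequencies from its training data, which is why the article’s point about the absence of comprehension matters.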

As a school leader, it’s vital to keep your finger on the pulse of what is going on in the world of teachers. There are a lot more of them than there are of us, and they’re incredibly smart. Reddit’s /r/Teachers and /r/Professors introduced me to AI in November 2022. I remember the conversations about what would happen when students learned how to use these tools. What would it mean to teach standard essays? What would it mean for research assignments? The debates there pointed me towards meaningful resources to educate myself on how AI worked and, more importantly, how students were beginning to use it in the classroom.

In January 2023, I delivered the first training on AI to our faculty. It’s hard to imagine today, but most of them had not heard of it and were flabbergasted as I fed an AP English Literature prompt into ChatGPT and it spat out a fully developed essay in a few seconds. Some of our teachers foresaw the doom of education. Math teachers shrugged and told us students were already using apps like Photomath to take pictures of equations and lay out step-by-step solutions – welcome to their world.

Looking at the actual output of an AP Lit essay, I helped the faculty discover hallucinations. Asking ChatGPT to analyse the symbolism of purple cars in E M Forster’s A Passage to India returned an impressive-looking literary analysis that ignored the fact that there are no purple cars in the book. As predictive language models, AI tools don’t generally fact-check; their focus is on generating comprehensible output. Faculty learned that the tools were not omniscient, though newer versions have already become far more ‘fact-aware’.

At Menaul School, our goal was to develop a school-wide response supported by policy and with the buy-in of faculty. Understanding the tools, how they worked, how students would use them and how they would continue developing was crucial to crafting that response. The responses of our faculty mirrored society at large. Techno-futurists posited that basic writing skills would become redundant, with ChatGPT serving as a fancy calculator for English: skip the arithmetic of thesis + sentences = essay and go straight to engagement with ideas. Others said we’d need to focus education on developing unique, identifiable voices in writing, or on technical solutions like reviewing Google documents to watch as students typed their responses in real time. Plenty thought that banning the technology was necessary – notwithstanding the logistics, or our ability to enforce a ban outside of school.

The conversation was crucial: teachers needed to process through discussion and find counterexamples to work out how this would affect their lives. I asked them to bring samples of their own assignments, run them through the tools and discuss the output. Most were unhappy with the results – the writing was good, and they didn’t know how they could tell it from student work, except where it was too good.

The first AI detector, GPTZero, had just come out, and we explored that technology. We learned that AI detectors look for specific statistical patterns and that there is no secret signal that a piece of writing is AI-generated. A clever person can adjust their prompts to avoid the patterns, and, counterintuitively, the more technically proficient a student’s writing is, the more likely it is to be falsely flagged as AI-generated. We couldn’t base our response on a technology with a 30–60% failure rate.
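One commonly cited example of the ‘specific patterns’ detectors look for is ‘burstiness’: human writing tends to mix long and short sentences, while model output is often more uniform. The snippet below is a hypothetical toy sketch of that single heuristic, not how GPTZero actually works, and it misfires easily, which is part of why detection is so unreliable.

```python
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths, in words."""
    sentences = [s for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

human = ("I ran. The storm came out of nowhere and soaked "
         "everything I owned. Brutal.")
uniform = ("The essay explores key themes. The author develops "
           "them carefully. The conclusion restates the thesis.")

# Lower deviation suggests more uniform, machine-like prose -- but a
# careful human writer can score 'machine-like' too, hence false flags.
print(burstiness(human) > burstiness(uniform))
```

A student who writes in disciplined, even sentences would score ‘machine-like’ on this measure, which illustrates the false-positive problem the article describes.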

By helping the faculty to work through these issues, to see the positives and negatives to each option through experimentation with the tools and respectful discussion, we realised that we would need to wholly transform how we evaluated student work in our classrooms.

Having broad faculty agreement on what changes we needed to make was crucial to developing meaningful policy. Without faculty buy-in and enforcement, policy lives in handbooks. When everyone agrees on the expectations and standards, then policy supports meaningful change. From our common understanding of the changes we’d need to make, we adjusted our academic honesty policy, discipline ladder, orientation programme and advisory curriculum. We also had to communicate with marketing and enrolment to explain the changes in detail. There was an opportunity to use curricular changes to drive positive word-of-mouth about our school in the community, but only if those teams knew about the changes.

“Without faculty buy-in and enforcement, policy lives in handbooks.”

In the end, Menaul School arrived at a place we never would have expected. As a one-to-one device school, we determined we would need to move all significant assessments to handwritten, in-class assignments. With the tools for detection flawed, and the temptation to use AI too great, we agreed that the only way to faithfully assess student skills and knowledge was by going old school. While this conclusion is not for everyone, we have watched as our friends in other schools still struggle with a lack of policy and unclear expectations. Our teachers felt prepared, educated and empowered to make a decision, and that meant that our policies have worked. When teachers deviated from the policies, they received AI-generated work, and the response from other faculty was, ‘What did you expect? You let them take it home. Why not follow the policy?’ Why not follow the policy indeed.

Five tips for building consensus:

  • Educate on AI: school leaders should start by educating themselves and their faculty about what AI tools are, including their capabilities and limitations.
  • Stay informed and share knowledge: leaders need to keep abreast of ongoing discussions and debates about AI in education, how AI is being used in classrooms, and its implications for teaching and learning.
  • Hands-on training and experimentation: conduct training sessions for faculty to demonstrate what AI tools can and can’t do. Focus on practical examples from the faculty’s own curriculum.
  • Collaborative policy development: work with faculty to develop a school-wide AI policy that reflects the school’s values and addresses specific challenges. This involves open discussions and considering diverse viewpoints from different disciplines to ensure broad agreement and buy-in.
  • Transform evaluation methods and communicate policies: revisit and adapt evaluation methods to address the challenges posed by AI. It’s also important to communicate these policy changes effectively within the broader school community, ensuring that all stakeholders understand and support the new approaches.


Chris Ferrara

Chris Ferrara is the Assistant Head of School at Menaul School. You can connect with Chris via email.


This article was published in International School Leader Magazine
