AAIE CONNECT           

AI Leadership Roadmap Series with CIS and 9ine

Navigating the AI Frontier: A Leadership Roadmap for Responsible Integration

 

AAIE, CIS and 9ine are jointly hosting this three-part executive webinar series designed specifically for Heads of School and Senior Leadership Teams.


 


Navigating the AI Frontier: A Leadership Roadmap for Responsible Integration is a three-part executive webinar series jointly hosted by CIS, AAIE and 9ine, designed for Heads of School and Senior Leadership Teams.

As AI becomes part of everyday school life, this series supports leaders to move beyond uncertainty and hype, and towards clear, confident, values-led decision-making. Rather than focusing on tools or prompts, the sessions center on leadership responsibility - helping schools build trust, protect students, and lead AI integration with clarity and purpose.

Across three focused 35-minute sessions, participants will explore the legal, ethical, and operational responsibilities that underpin safe and trusted AI use in schools.

Grounded in established safeguarding cultures and data protection frameworks, the series provides a practical, leadership-led approach to governance, policy, training, and vendor decision-making. Leaders will leave better equipped to meet rising parental expectations, strengthen accountability across teams, and integrate AI in ways that are transparent, sustainable, and firmly aligned with school values.

 


 

Our Facilitators 

Mark Orchinson serves as the CEO of 9ine Consulting, a leading provider of technology, cyber, and privacy services to schools. With over two decades of experience, Mark is a distinguished thought leader in technology, cyber, and privacy within the education sector. 

Julia Pidduck - Senior Data Privacy Consultant. Julia works with schools internationally to design and strengthen their privacy programmes, supporting compliance, safeguarding, and the responsible use of digital technologies. Her work also focuses on helping schools develop and oversee responsible AI use for both students and staff.

Katie Snelling - Data Privacy Consultant. Katie works with schools around the world to support compliant and responsible use of personal data. Her work frequently involves advising schools on privacy risk, regulatory obligations, and best practice in relation to EdTech and AI.


 

Why Attend?


● Meet Rising Parental Expectations: Parents now have an implied expectation that schools are leading on AI safety; this series prepares you to answer their questions about ethical use and residual risk management.
● Mitigate Safeguarding Risks: AI increases the scope of online risks of harm; learn how privacy, transparency, and persuasive and age-appropriate design factor into managing them.
● Governance Frameworks: Understand which specific AI-related responsibilities need allocating across Academic, Safeguarding, Technology and Digital leaders.
● Staff and Student Readiness: Identify the specific training needs required to ensure staff can assess AI limitations and students develop the digital literacy to assess AI-generated content.


 

Dates & Times

 

Session 1

Leading AI Responsibly
Strategy, Governance, and Meeting Parental Expectations

  • Tuesday, March 24, 2026, at 09:00 AM EST

Session 2

AI in Practice
Policy, Training, and Operational Readiness

  • Tuesday, April 14, 2026, at 08:00 AM EST

Session 3 

Managing AI Vendors
Risk, Safeguards, and Decision-Making

  • Tuesday, April 21, 2026, at 08:00 AM EST

 

Sessions

 

TOPIC ONE

Leading AI Responsibly
Strategy, Governance, and Meeting Parental Expectations

Tuesday, March 24, 2026, at 09:00 AM EST

This session supports school leaders in establishing a clear strategic approach to AI. It balances the benefits of the technology with the reality that parents expect schools to lead the way in safe, legal, and ethical integration.

The Opportunity vs. The "Implied Expectation"
  • Balancing AI benefits with the reality that parents have an implied expectation for school-led engagement.
  • Addressing the need for schools to explain how they use AI in a safe and responsible way.
Governance & Accountability
  • Defining leadership responsibility and clear reporting routes to boards and committees.
  • Establishing an AI committee to evidence how the school manages privacy and safeguarding risks of harm.
Risk Alignment & Safeguarding Culture
  • Aligning AI use with the school’s mission, values, and existing safeguarding culture.
  • Understanding the "Escalation Path": How a lack of transparency can lead from a simple complaint to a Subject Access Request (SAR) or legal leverage.
Capacity & Resourcing
  • Prioritizing capacity from current staff and ensuring accountability is sustainable and clear.

 

Audience: Heads of Schools, K-12 Senior Leaders

 


 

TOPIC TWO

AI in Practice
Policy, Training, and Operational Readiness

Tuesday, April 14, 2026, at 08:00 AM EST

Translating high-level strategy into daily practice. This session explores the policies and training required to ensure staff and students use AI consistently and safely, with a focus on the transparency parents now demand.

AI Policy & Acceptable Use
  • Providing clarity for staff and students while ensuring the school is equipped to respond to AI-generated correspondence.
  • Aligning policy with Online Safety requirements and filtering/monitoring standards.
Privacy by Design & Data Rights
  • Embedding "Age-Appropriate Design": Putting the best interests of the child first and avoiding "one-size-fits-all" approaches.
  • Ensuring meaningful human oversight for automated decisions and protecting student intellectual property.
Specialist Training Requirements
  • Ensuring staff are trained to recognize the limitations and risks of AI tools.
  • Curriculum integration: Teaching students to critically assess AI-generated content and keep themselves safe online.

 

Audience: Heads of Schools, K-12 Senior Leaders

 


 

TOPIC THREE

Managing AI Vendors
Risk, Safeguards, and Decision-Making

Tuesday, April 21, 2026, at 08:00 AM EST

How to manage third-party vendors responsibly. This session covers the importance of structured vetting to maintain oversight and reduce the risk of harmful AI integration.

Vendor Vetting
  • Evaluating vendors against product safety expectations: harmful content prevention and transparency.
  • Verifying that vendors do not store student-created intellectual property for commercial purposes.
Strategic Vetting Procedures
  • Approaches to adoption: Controlled (e.g., Gemini/Copilot) vs. Flexible vs. Phased Pilots.
  • Identifying "Dependency Risk": Vendors with persuasive designs that raise overuse and addiction concerns.
Controller vs. Processor Relationships
  • Understanding the risk when vendors act as both processor and controller (found in 30% of tools).

 

Audience: Heads of Schools, K-12 Senior Leaders

 


 

Join us 

*By signing up for this series, you agree that your personal data will be handled in the ways set out in this privacy notice.


 

Discover AAIE

 

 Our Events

Our annual event where School Leaders in international education come together to learn from one another.

LEARN MORE

 

 Our Community

Our unique global network focused on supporting international school leaders and their senior leadership teams.

LEARN MORE

 

AAIE Calendar

Don’t miss a moment—see what's ahead in the AAIE Community. From online conversations to our annual conference, explore our upcoming events.

LEARN MORE