Akoranga Training Institute

The situation

Akoranga Training Institute is a large private training establishment delivering programmes across trades, business, IT, and health to 5,500 ākonga. With 800 staff spread across multiple campuses and programme areas, the organisation has built a strong reputation for applied, industry-connected learning.

Over the past 18 months, AI tools have become part of daily life for many ākonga. Some are using generative AI to support research, draft assignments, and prepare for assessments. Others are using it in ways that raise genuine questions about assessment integrity — not because they intend to cheat, but because nobody has clearly defined what acceptable use looks like.

Staff are navigating the same uncertainty. Some have embraced AI in their teaching practice, experimenting with content generation, adaptive resources, and new approaches to feedback. Others are cautious, unsure where the boundaries are or whether they have permission to experiment. A significant number feel underprepared — they can see AI is changing how ākonga learn and work, but they lack the confidence or practical skills to respond effectively.

Departments have started developing their own informal positions. The IT programme has drafted internal guidance. Health has taken a conservative stance, restricting AI use in clinical assessments. Business programmes have largely left it to individual tutors. The result is a patchwork of inconsistent guidance that varies not just between departments, but between individual staff within the same programme.

The Academic Board recognises the situation needs a coordinated response. There is broad agreement that AI use cannot be ignored, and that a blanket ban would be neither practical nor aligned with the institute’s commitment to preparing ākonga for the workplace. What’s missing is a shared framework — something that gives departments consistent guidance while remaining flexible enough to accommodate the genuine differences between programme areas.

The specific challenges

  • Assessment integrity with no shared framework. Ākonga are submitting work that incorporates AI-generated content, and there is no consistent standard for what constitutes acceptable use. Staff are making individual judgements with no institutional backing, and the risk of inconsistent or unfair assessment decisions is growing.
  • Policy vacuum across departments. There is no organisation-wide AI use policy. Departments are developing ad hoc positions independently, creating inconsistency for ākonga who study across programme areas and for staff who teach into multiple programmes.
  • Staff confidence gaps. A significant proportion of staff lack practical confidence with AI tools. This affects their ability to guide ākonga, redesign assessments, and engage meaningfully in policy discussions. The gap is widest among staff who have had the least exposure to AI in their professional practice.
  • Ākonga expectations outpacing institutional readiness. Many ākonga are already using AI tools in their workplaces and expect to use them in their learning. The institute’s current position — neither explicitly permitting nor prohibiting AI use — is creating confusion and eroding trust in assessment processes.
  • Sector norms still emerging. Vocational education in Aotearoa New Zealand does not yet have established sector-wide norms for AI use in teaching and assessment. Akoranga is navigating this without a clear external benchmark, which makes internal consensus even more important.

Where we’d start

The practical starting point is a co-designed AI use policy — developed with staff, ākonga, and the Academic Board — that gives the organisation a shared framework before tackling assessment redesign.

This is the right entry point because the current challenges all stem from the same root: there is no agreed position on what acceptable AI use looks like at Akoranga. Without that shared framework, assessment redesign efforts would lack a foundation, staff professional development would have no clear direction, and departments would continue developing isolated responses.

A co-designed policy also sends a clear signal. It tells staff and ākonga that the organisation is taking a considered, inclusive approach — not imposing rules from the top, but building a framework that reflects the realities of teaching and learning across different programme areas.

How we’d work with Akoranga

Discovery (weeks 1–3)

We’d start by understanding the specific challenges Akoranga is facing — not from a single perspective, but from across the people who live with the consequences every day.

This would include:

  • Stakeholder interviews with academic leadership, programme managers, frontline teaching staff, and student representatives — structured to surface the practical realities, not just the policy aspirations
  • Current-state mapping across departments to understand where AI is already being used, what informal guidance exists, and where the most pressing gaps sit
  • Assessment practice review for a representative sample of programmes, identifying where AI use intersects with assessment design and integrity
  • Compliance and regulatory scan to map the current landscape of relevant legislation, NZQA expectations, and any sector-specific guidance that should inform the policy

The goal is to build a clear, evidence-based picture of Akoranga’s situation — the challenges, the opportunities, and the constraints — so that everything we design is grounded in reality.

Duration: 2–3 weeks, depending on campus logistics and stakeholder availability.

Design (weeks 3–6)

We’d design around Akoranga’s specific situation, drawing on what we’ve learned in discovery, relevant evidence and best practice, and the priorities identified by stakeholders.

This phase is collaborative. The people who will live with the policy are part of the process that creates it.

  • Policy co-design workshops with a representative working group (academic leadership, programme managers, teaching staff, ākonga representatives, Academic Board members). These workshops would work through the key decisions: scope of acceptable use, assessment-specific guidelines, staff expectations, and ākonga responsibilities
  • Assessment redesign pilots for 2–3 selected programmes. Working with programme teams to redesign assessment approaches that account for AI use — maintaining integrity while reflecting the tools ākonga will encounter in the workplace
  • Staff professional development design — a practical capability programme tailored to Akoranga’s context, structured so departments can participate without disrupting teaching schedules

Design is where the co-design principle matters most. A policy imposed from outside the organisation will sit in a drawer. A policy shaped by the people who teach and learn at Akoranga will be understood, owned, and applied.

Duration: 3–4 weeks, with workshops scheduled to accommodate teaching commitments.

Delivery (weeks 6–10)

We’d deliver training, workshops, and materials with a practical focus — everything designed so staff and ākonga can apply what they learn immediately.

  • Staff capability workshops — hands-on sessions grounded in participants’ actual roles and programme areas. These build practical AI confidence: how to use AI tools effectively, how to guide ākonga, how to identify AI-generated content, and how to apply the new policy in assessment decisions
  • Assessment redesign implementation support for the 2–3 pilot programmes, working alongside programme teams as they put redesigned assessments into practice
  • Ākonga-facing resources — clear, accessible guidance on the new AI use policy, what it means in practice, and where to go with questions
  • Academic Board briefing — a structured session presenting the final policy framework, the evidence base, and the recommended implementation pathway

All materials are handed over to the organisation. Workshop content, policy documents, assessment templates, and guidance resources are designed for Akoranga to use, adapt, and build on independently.

Duration: 4–5 weeks, with delivery scheduled across campuses.

Review (weeks 12–20)

We’d check in post-delivery to ensure value is realised, provide ongoing support, and capture lessons for continuous improvement.

  • 30-day check-in with academic leadership and the working group to review early implementation, surface any challenges, and adjust guidance where needed
  • 60-day check-in to assess how the policy is being applied across departments, gather staff and ākonga feedback, and identify opportunities for further support
  • Pilot programme review — structured debrief with the 2–3 pilot programme teams to evaluate assessment redesign outcomes and recommend next steps for broader rollout
  • Continuous improvement recommendations — a concise report summarising what’s working, what needs attention, and a recommended roadmap for the next 6–12 months

Duration: check-ins at 30 and 60 days post-delivery, with email and async support throughout.

What the organisation would gain

  • Organisation-wide AI use policy — co-designed, endorsed by the Academic Board, and ready for implementation across all programme areas
  • Assessment integrity framework — clear guidelines for AI use in assessment, with programme-specific guidance for the pilot areas
  • Redesigned assessments for 2–3 pilot programmes — practical examples that other programme teams can learn from and adapt
  • Staff professional development programme — delivered capability workshops with all materials, templates, and resources retained by the organisation
  • Ākonga guidance resources — clear, accessible materials explaining the policy and what it means in practice
  • Implementation roadmap — a prioritised plan for rolling the framework out across remaining programme areas over the following 6–12 months
  • Confidence to act — a shared framework that gives departments consistent guidance, staff the practical skills to engage with AI, and ākonga clear expectations. The organisation moves from uncertainty to a position it owns and can build on independently

This scenario is a composite example based on sector research and common organisational challenges. We maintain strict confidentiality for all client work.

Facing similar challenges?

Book a free discovery call