The AI Gap: From Experiments to Everyday Practice
Format: Practical guide
Audience: Senior leaders, managers, and team leads
Why This Matters for Your Organisation
Many organisations have started their AI journey. Licences are active, a few pilots have run, and some people are using AI regularly. But for the majority of staff, AI remains something they tried once and quietly moved on from.
This pattern is remarkably consistent across sectors. Tools are deployed, interest spikes, and then usage drops back to a handful of enthusiasts while everyone else returns to familiar ways of working. From a distance it can look like adoption. On the ground, AI is not yet part of normal work.
The gap between “we have AI tools” and “AI is part of how we work” is not a technology problem. It is a management and capability challenge. And it sits squarely in the territory that senior leaders already know how to address: process design, skill building, and accountability.
This guide sets out a practical framework for closing that gap, drawn from common patterns across organisations in education, health, professional services, and community sectors.
The Adoption Illusion
It is easy to mistake activity for progress. Typical AI rollouts focus on procurement, introductory sessions, early pilots, and internal showcase events. All of this is necessary. None of it is sufficient.
What is usually missing:
- Clear guidance on which tasks should involve AI
- Agreed standards for what good output looks like
- Practical coaching on how to break down work, assemble context, and review quality
- Explicit changes to workflows and procedures that embed AI into standard practice
Without these, most staff conclude that AI is “interesting but not for my work” (Davenport & Mittal, 2023).
The shadow AI challenge
When formal guidance is thin, two things happen. Some staff stop using AI altogether, and the organisation pays for licences with minimal return. Others adopt consumer tools with no data controls, developing ad-hoc methods with no quality framework and no way to share what they have learnt.
Both outcomes carry cost. The first wastes investment. The second introduces unmanaged risk and means hard-won lessons stay locked in individual heads. Research consistently shows that organisations without clear AI use policies see higher rates of ungoverned tool use, particularly among staff who feel unsupported by formal channels (Benbya et al., 2024).
Three Levels of AI Capability
A simple mental model helps clarify where effort should go.
Level 1: Awareness and Prompts
This is where most training investment lands. Staff learn what AI is, how to log in, and how to ask simple questions. It is the equivalent of teaching someone how to open a word processor. It does not make them a competent writer.
Necessary, but not sufficient.
Level 2: Practical, Judgement-Led Use
This is the critical middle layer, and it is the one organisations consistently under-invest in.
At this level, staff can:
- Assemble the right context so the AI has enough information to be useful
- Judge when output is fit for purpose and when it needs further work
- Break complex tasks into manageable steps
- Give clear feedback to improve drafts
- Integrate AI into existing workflows so it becomes a normal part of how work is done
- Recognise when AI should not be used at all
Level 3: Technical Integration and Automation
This is the domain of IT, data teams, and developers: connecting AI to internal systems, building custom workflows, managing data pipelines. Important for scale, but it cannot compensate for weak capability at Level 2. If staff cannot use AI well in their day-to-day work, automation simply scales poor practice.
Where the gap sits
Most organisations over-invest in Levels 1 and 3, and under-invest in Level 2. Training budgets go to introductory workshops and technical implementation. The middle layer, where managers and knowledge workers make judgement calls about task design, quality, and workflow, gets minimal structured support.
That is the gap this guide addresses.
Six Skills That Make or Break AI in Daily Work
1. Assembling the right context
Knowing how to give the tool enough information to produce useful work. This includes selecting relevant documents and data, being clear on audience and purpose, providing examples of past work that meet the required standard, and recognising when context is too broad or too narrow.
When this skill is missing, output is vague, off-target, or requires so much editing that it would have been faster to start from scratch.
2. Judging quality
Deciding when AI output is fit for purpose, when it needs editing, and when it should be discarded. This includes basic fact-checking, awareness that AI can mix accurate and inaccurate content in the same answer, checking for tone and appropriateness, and clear sign-off rules for high-risk areas.
AI does not know when it is wrong. It presents plausible-sounding content with equal confidence whether it is accurate or fabricated. The human in the loop must know how to check (Marcus & Davis, 2019).
3. Breaking work into manageable tasks
Instead of asking AI to do a whole job end-to-end, breaking complex work into steps. For a strategy paper: research, outline, section drafts, and refinement as separate tasks. For a client response: extracting key points, proposing options, and preparing the final draft in stages.
AI works best on well-defined, bounded tasks. Breaking work into steps makes outcomes more reliable and easier to review.
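For teams that want to codify this decomposition, the minimal sketch below shows one way the steps could be written down. Everything in it is illustrative: ask_ai is a stand-in for whichever approved tool or API your organisation uses, and the prompts and task breakdown are assumptions, not a prescribed method.

def ask_ai(prompt: str) -> str:
    # Placeholder: replace with a call to your organisation's approved AI tool or API.
    return "(placeholder response from your chosen tool)"

def draft_strategy_paper(background_notes: str, audience: str) -> str:
    # Step 1: a bounded research summary, reviewed by a person before moving on.
    themes = ask_ai(
        f"Summarise the key themes in these notes for a {audience} audience:\n{background_notes}"
    )
    # Step 2: propose a structure; a person checks it against the brief.
    outline = ask_ai(f"Propose a section outline for a strategy paper covering:\n{themes}")
    # Step 3: draft one section at a time so each piece stays easy to review.
    sections = [
        ask_ai(f"Draft the section '{heading}' using these themes:\n{themes}")
        for heading in outline.splitlines()
        if heading.strip()
    ]
    # Assembly and final refinement remain a human responsibility.
    return "\n\n".join(sections)

The point of the sketch is not the code itself but the shape of the work: each step is bounded, produces something a person can review, and hands over clearly to the next step.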
4. Improving drafts through feedback
Treating the first output as a starting point and knowing how to give clear instructions to improve it. This means focusing feedback on specific issues (clarity, tone, structure, missing content), giving examples of what “better” looks like, working in short cycles, and knowing when to stop refining and finish manually.
The first output is rarely ready to use. The skill is in moving from a rough draft to a usable version in a small number of focused cycles.
5. Embedding AI into existing workflows
Making AI part of standard practice, not a separate experiment. This looks like procedures that refer explicitly to AI steps, templates and checklists that assume AI support, team norms such as “we always use AI for the first pass on this type of task”, and clear handover points between AI-assisted work and human review.
If AI is not part of the process, it will not be used consistently. And if it is not used consistently, you cannot build reliable practice, measure value, or share what works (Autor, 2024).
6. Knowing where AI should not be used
Clear boundaries on tasks where AI is inappropriate, too risky, or legally restricted. This includes tasks where legal, clinical, or financial risk is too high, decisions that require specific governance, data that must not be shared with external systems, and situations where transparency and audit trails are required.
Clear guardrails build confidence. When staff know where the boundaries are, they are more willing to experiment within them.
Two Patterns of Human-AI Collaboration
In practice, there are two broad ways that people and AI work together. Both are valid. The choice depends on the type of work, the level of risk, and how much creative exploration is needed.
Pattern A: Clear Division of Tasks
People decide the goal and the constraints. AI carries out specific, well-defined tasks such as drafting, summarising, comparing options, or structuring content. People review, adjust, and approve the final version. There is a clear handover between AI work and human work.
Works well for: Board papers, executive summaries, policy documents, formal client communications, and any work with legal, regulatory, or reputational implications.
These tasks have high stakes and require a clear line of accountability. The division of labour makes it obvious where human judgement and sign-off sit.
Pattern B: Continuous Back-and-Forth
People and AI go back and forth in short cycles. The tool is used to explore options, test variations, refine ideas, and challenge thinking. The human lead still decides when the work is complete. The boundary between AI work and human work is fluid.
Works well for: Early-stage concept work, internal communications, service design, process improvement, coding, data analysis, and creative tasks where iteration is part of the process.
These tasks benefit from rapid iteration and experimentation. The back-and-forth helps surface ideas and refine thinking faster than working alone.
Making the patterns explicit
The key step is helping teams decide which pattern applies to which type of work, and then making that visible in procedures, templates, and team norms. For example:
- “Board papers use Pattern A: AI drafts sections based on provided material, manager reviews and edits, executive approves.”
- “Service design workshops use Pattern B: facilitator and AI iterate on options in real time, team decides final direction.”
When the pattern is clear, staff know what is expected. When it is unclear, they waste time guessing or avoid using AI altogether.
What Senior Leaders Need to Do
Moving from experiments to everyday practice is a leadership challenge. Four responsibilities sit with senior leaders.
1. Set narrative and direction
Make it clear that AI is part of normal work, not a side project. Talk about AI in the same way you talk about other core capabilities. Link AI use to organisational goals: better service, faster delivery, more capacity for high-value work. Make it safe to experiment within clear boundaries.
The narrative shapes behaviour. If AI is positioned as “innovation” or “digital transformation”, it stays separate. If it is positioned as “how we do our work”, it becomes embedded (Edmondson, 2019).
2. Design the middle layer of capability
Ensure staff have structured support to build practical, judgement-led AI skills. Agree on the core skills staff need (the six skills above). Provide time and coaching for practice on real work. Capture and share examples of good practice. Make sure managers know how to coach task design and quality review.
This is where the value sits. Level 1 awareness is straightforward. Level 3 technical work is specialist. Level 2 is where most of your people operate, and it is where adoption succeeds or stalls.
3. Put in place light, memorable guardrails
Give staff clear, practical boundaries on where AI can and cannot be used. A short, plain-language policy with concrete examples. Simple checklists for high-risk tasks. Clear data rules. Escalation paths when staff are unsure.
Light beats comprehensive. A two-page guide that staff actually remember is better than a 20-page policy they have never read (Hagendorff, 2020).
4. Make learning visible and reusable
Prevent hard-won lessons from staying trapped in individual heads. Capture workflows and examples from successful pilots. Build a lightweight internal library of patterns: task type, steps, examples, risks. Review what is working on a regular cadence. Standardise the best approaches and embed them in procedures.
Organisations that scale AI well do so by turning individual experiments into shared practice.
Reflection Questions for Your Leadership Team
Use these as a quick health check on where your organisation sits.
Strategy and narrative
- Do you talk about AI as a normal part of how you do work, or as a one-off project?
- Are you clear, in plain language, about what you want AI to achieve for the organisation?
- Have you made it safe to experiment within clear boundaries?
Skills and practice
- Have you agreed on the core skills staff need to use AI well?
- Do managers know how to coach their teams on task design and quality checks?
- Are there regular opportunities for staff to practise on their real work, not just in demos?
Governance and risk
- Are acceptable and unacceptable use cases clearly documented and widely understood?
- Do you regularly review where AI is working well and where it is introducing risk?
- Are you confident about how sensitive data is handled when using AI?
Ways of working
- For key processes, have you decided how AI should support the work?
- Are there a few clear examples where AI is already part of the standard workflow?
- Are you capturing and sharing those examples across teams?
Talent and development
- As routine tasks shift, are you designing new ways for junior staff to build experience and judgement?
- Are you monitoring how AI is changing roles, expectations, and development pathways?
Getting Started: A Practical Sequence
If the questions above surface gaps, the next step is to pick one process and one team to start with.
Step 1: Choose a contained starting point
Select a specific type of work where AI could make a visible difference. Good candidates are tasks that are repeated frequently, take significant time, and have a clear quality standard. Avoid starting with your highest-risk or most complex process.
Step 2: Map the current workflow
Document how the work is done today, step by step. Identify where AI could contribute (drafting, summarising, reviewing, structuring) and where human judgement is essential (sign-off, sensitive decisions, quality assurance).
Step 3: Design the AI-assisted version
Decide which collaboration pattern applies (Pattern A or B). Write out the revised workflow with AI steps included. Be specific about what the AI does, what the person does, and where handovers happen.
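Some teams find it helpful to write the revised workflow down in a structured, reviewable form. The sketch below is purely illustrative: the task names, owners, and handover notes are assumptions chosen to show the level of specificity intended, not a required format or tool.

workflow = {
    "task_type": "monthly service report",
    "pattern": "A",  # clear division of tasks (Pattern A above)
    "steps": [
        {"step": "Collate source data", "owner": "person"},
        {"step": "Draft narrative from the data", "owner": "AI",
         "handover": "analyst checks figures and flags anything unclear"},
        {"step": "Edit for tone and accuracy", "owner": "person"},
        {"step": "Summarise for the executive cover note", "owner": "AI",
         "handover": "manager reviews and signs off before circulation"},
    ],
}

# A simple discipline some teams adopt: every AI step must name a human handover.
for step in workflow["steps"]:
    if step["owner"] == "AI":
        assert "handover" in step, f"AI step '{step['step']}' has no named human review"

Written out like this, it is obvious what the AI does, what the person does, and where review and sign-off sit, which is exactly the clarity this step is meant to produce.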
Step 4: Build skills through practice
Give the team time to practise on real work, with coaching support. Focus on the six skills: assembling context, judging quality, breaking tasks down, improving drafts, embedding in workflow, and knowing boundaries.
Step 5: Capture and share what works
Document the workflow, including examples of good and poor AI output. Share it as a reusable pattern that other teams can adapt. Review and refine on a regular cadence.
Step 6: Expand deliberately
Once one workflow is embedded, choose the next. Each new workflow builds on the skills and confidence developed in the last. The pace should be steady, not rushed.
The Evidence Base
The challenge of moving from AI experimentation to sustained organisational practice is well documented across management and technology research.
Brynjolfsson and colleagues' work on the productivity J-curve of new technologies highlights that value from AI comes not from the technology itself but from the complementary organisational changes: redesigned workflows, new skills, and updated management practices (Brynjolfsson, Rock, & Syverson, 2021). This finding is consistent across multiple waves of technology adoption.
Davenport and Mittal’s research on AI-driven organisations reinforces the finding that most AI initiatives stall not because of technical limitations but because organisations fail to invest in the human and process changes required to make AI part of everyday work (Davenport & Mittal, 2023).
Autor’s analysis of the impact of AI on work emphasises that the most valuable human contributions in an AI-augmented workplace are judgement, contextual understanding, and the ability to frame problems well. These are the skills that sit at Level 2 of the capability model described in this guide (Autor, 2024).
Edmondson’s research on psychological safety in organisations demonstrates that experimentation and learning from failure require an environment where staff feel safe to try new approaches without fear of blame. This is directly relevant to AI adoption, where early attempts are often imperfect and learning requires iteration (Edmondson, 2019).
Research on AI governance and ethics consistently finds that clear, practical guidelines outperform comprehensive but complex policy documents. Staff are more likely to follow short, memorable guidance than lengthy frameworks they have not read (Hagendorff, 2020).
References
Autor, D. (2024). Applying AI to rebuild middle class jobs. NBER Working Paper Series. National Bureau of Economic Research.
Benbya, H., Pachidi, S., & Jarvenpaa, S. (2024). Governing AI in organisations: emerging practices and challenges. MIS Quarterly Executive, 23(1), 1-18.
Brynjolfsson, E., Rock, D., & Syverson, C. (2021). The productivity J-curve: how intangibles complement general purpose technologies. American Economic Journal: Macroeconomics, 13(1), 333-372.
Davenport, T. H., & Mittal, N. (2023). All-in on AI: How Smart Companies Win Big with Artificial Intelligence. Harvard Business Review Press.
Edmondson, A. C. (2019). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley.
Hagendorff, T. (2020). The ethics of AI ethics: an evaluation of guidelines. Minds and Machines, 30, 99-120.
Marcus, G., & Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books.
Recognise these patterns in your organisation?
Closing the gap between AI experiments and everyday practice is one of the most common challenges we work on with clients. If you’re looking for practical support to build real capability in your team, tell us what you’re working on and we’ll help you work out the next step.
