Independent Practitioner: Alex Chen
Sector: Solo consultant (L&D and organisational design)
Scale: 1 practitioner | Portfolio of project-based clients
Service context: One-to-One Support
The situation
Alex Chen runs a solo consultancy specialising in learning and development (L&D) and organisational design. Over the past eight years, Alex has built a strong reputation with a portfolio of project-based clients across education, professional services, and the not-for-profit sector. The work is varied, rewarding, and in demand.
But running a solo practice means wearing every hat. Alex is the strategist, the facilitator, the project manager, the bookkeeper, and the business development lead — often in the same week. The admin load that comes with managing proposals, client communications, invoicing, scheduling, and follow-ups has grown steadily alongside the client base.
Alex has tried using AI tools to ease the pressure. The results have been inconsistent: some outputs are genuinely useful, others miss the mark entirely and take longer to fix than starting from scratch. Without a reliable method, each new task feels like a fresh experiment. The time saved on one task gets absorbed by the time spent troubleshooting another.
There’s also a growing concern about what’s safe to share with AI tools. Client work involves sensitive organisational data — restructure plans, engagement survey findings, draft strategies — and Alex isn’t confident about where the boundaries sit. The result is a cautious, ad hoc approach: using AI for some tasks, avoiding it entirely for others, and never quite trusting the output enough to stop checking everything manually.
This is a common pattern among independent practitioners. The potential of AI is clear, but without a structured approach and someone to refine it with, the gap between potential and practical value stays frustratingly wide.
The specific challenges
- Non-billable admin consuming 8–10 hours per week. Proposal drafting, client follow-ups, scheduling, invoicing, and content formatting are eating into time that could be spent on billable project work or business development.
- Inconsistent AI results with no reliable method. Alex uses AI tools regularly but without a tested approach. Outputs vary significantly in quality, and there’s no systematic way to refine prompts or reuse what works.
- Uncertainty about what’s safe to share with AI tools. Client work involves confidential organisational data. Without clear personal guidelines for what can and can’t be shared, Alex defaults to caution — which limits the practical value AI could deliver.
- No sounding board or structured support. Solo practitioners don’t have a team to test ideas with. AI experimentation happens in isolation, which means mistakes get repeated and useful techniques don’t get captured or refined.
Where we’d start
Client communication efficiency — specifically, the recurring tasks that consume the most non-billable time each week. For most solo practitioners, this means proposal drafting, client follow-up emails, session summaries, and scheduling coordination.
This is the right starting point because it’s contained, low-risk, and delivers visible time savings quickly. By building personal templates and a tested prompt library around these specific tasks, Alex would have a reliable system for the work that currently absorbs the most non-billable hours — and a clear method to extend to other tasks over time.
How we’d work with Alex
Discovery (week 1)
We’d start with a focused 1:1 session to understand how the practice actually operates — not in theory, but in the detail of a typical working week. We’d map where time goes, which tasks are the most repetitive, where AI has helped and where it hasn’t, and what the concerns are about sharing client data with AI tools.
This isn’t about confirming assumptions. It’s about understanding Alex’s specific situation so that everything we build afterwards is grounded in how the practice actually works.
Duration: 1 session (90 minutes) plus a short follow-up to confirm priorities.
Design (weeks 1–2)
Based on what we learn in discovery, we’d design a personalised starter kit built around Alex’s highest-value recurring tasks. This would include:
- A set of tested prompt templates for the most time-consuming admin tasks (proposal drafts, client follow-ups, session summaries)
- A personal prompt library structure — a simple, reusable system to capture what works and refine it over time
- Clear personal guidelines for what’s safe to share with AI tools, tailored to the kind of client data Alex handles
Design is collaborative. We’d refine the templates together, testing them against real tasks until they deliver consistent, usable results.
Duration: 1–2 weeks, working asynchronously with a short check-in session.
Delivery (week 3)
Delivery is a hands-on 1:1 coaching session where we walk through the starter kit together, apply it to live tasks, and refine anything that doesn’t fit. Alex would leave this session with a working system, not a set of instructions to figure out later.
We’d also cover:
- How to evaluate AI outputs and refine them, so Alex can iterate independently
- How to extend the method to new tasks as confidence grows
- Practical techniques for managing client confidentiality when using AI tools
All materials are handed over and can be adapted as needed.
Duration: 1 session (2–3 hours), with the starter kit shared in advance for review.
Review (post-delivery)
We’d check in at 30 days to see how the system is working in practice. This is where we troubleshoot anything that isn’t landing, refine templates based on real use, and identify the next set of tasks to bring into the system.
If ongoing support is needed beyond the initial engagement, we can move to a flexible mentorship arrangement — scheduled check-ins at a cadence that works, focused on whatever’s most pressing at the time.
Duration: 30-day check-in (1 session), with optional ongoing support.
What Alex would gain
- Tested prompt templates for the most time-consuming recurring tasks — ready to use immediately
- A personal prompt library with a simple structure for capturing, refining, and reusing what works
- Clear personal guidelines for managing client confidentiality when using AI tools
- A reliable method for evaluating AI outputs and improving them — removing the need to start from scratch every time
- Practical confidence to extend the system to new tasks independently, without needing external support for every new use case
- Time recovered — a realistic reduction in the 8–10 hours per week currently spent on non-billable admin, starting from week one
This scenario is a composite example based on sector research and common organisational challenges. We maintain strict confidentiality for all client work.
Facing similar challenges?
Tell us about your situation and we’ll help you work out where to start.
