Innovation in learning product development
Introducing Project Shybird
Matt Hamnett, Founder & CEO

“We’ve done brilliant and important work here guys… and I have a strong sense we could have done it in a week, not a year, if we’d better used tech.”
In hindsight, my 2011 project closure remarks could have indexed more strongly for congrats. We’d just finished an award-winning project to create apprenticeship frameworks for audit, tax, and consulting — spending days on end workshopping and surveying practitioners to capture capability requirements and define assessment methods. I knew there was a better way.
Fast forward fourteen years to 2025. The Shy Bird bar, Kendall Square, Cambridge, MA. “We’re going to do it, and it is going to have a huge impact.” Indexing more strongly for inspiration, we’d just decided to build the solution we’d been ruminating on for years — including, most recently, with genius colleagues from MIT’s Media Lab and Harvard’s Graduate School of Education.
Through ‘Project Shybird,’ we’re building an AI-powered solution which radically improves and accelerates the development of qualifications and programme specifications — and helps educators to build specification-aligned and personalised programmes and content.
What are we solving for…?
Through Project Shybird, we’ll solve some of the most difficult and deep-rooted issues in the education system — in the UK and around the world. Issues which, you might argue, cannot be solved without AI, because the cost, complexity, and/or scale of change required would otherwise be too great.
First, we’ll help customers — awarding organisations, universities, others — create specifications which are truly, madly, deeply rooted in labour market insights. AI enables us to do far more, and far better, than traditional processes ever could. This really matters, because products which fail to capture the substance, nuance, and emerging future requirements of the labour market they serve undermine productivity, growth, and life chances.
Second, we’ll enable customers to bring new products to market in days, not years — partly to generate efficiencies for them, but mostly so the capabilities their products define reach the labour market while they’re still needed and relevant. Products which take 18 months to develop risk being out of date before they’re taught — let alone before participants join the workforce.
Third, we’ll make it easier for institutions and teachers to embrace new programmes. We’ve seen how hard it can be for the system to respond to new products like apprenticeship standards and T Levels. And we know that teachers work under intolerable, unsustainable workload pressure — impacting wellbeing, retention, and, inevitably, the student experience.
Fourth, we’ll help educators personalise programmes to meet the diverse needs of students — including the growing prevalence of additional learning and support needs. Workload pressure works directly against personalisation: teachers need time, and tools, to create truly personalised learning journeys — and the improved outcomes they bring.
And world peace…? You’re right to think these are four knotty, generational problems. They speak to the length and breadth of the value chain. Our ambition is big, and bold, for sure. Maybe that’s why it has taken me fourteen years, a revelatory experience at MIT, and a cluster of collaborators that makes Oasis look like Take That, to work out how to do it.
But we have worked out how to do it.
What does it do…?
The TLDR is that our solution will do two things:
- Create labour-market-aligned qualification and learning programme specifications in days, not months — conducting a breadth and depth of research unimaginable without AI (even if you let it take months), defining assessment strategies, and automating artefact production.
- Leverage the structured, synthesised insights gleaned through that process to create an unprecedented agentic tool which teachers can use to create resources and content — including, not least, personalising their programmes to individual learners’ needs.
The ‘how’ is the secret sauce we’ll save for a patent filing. The first time our proof-of-concept generated sixty pages of unit, sub-unit, and indicative content which our experts said they’d happily submit for regulatory approval… was a moment.
To give a sense of the scale and robustness of research that goes into that definition, we:
- Analyse thousands of job adverts in the relevant geographic area — nation and/or region — to glean, clean, synthesise, and prioritise the relevant knowledge, skill, and behaviour (KSB) requirements (a minimal sketch of this step follows the list).
- Do the same with job profiles, so that we’re tapping into both what employers say to the market when they recruit and what practitioners say within their organisations.
- Identify nations and regions where the relevant subject or occupation is more established and mine job ads and profiles there — to pick out emerging and likely future requirements.
- Create and consult a representative, synthetic panel of practitioners in the relevant subject or occupational area to help us test, refine, and prioritise KSBs after each research iteration.
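To make the shape of that advert-mining loop concrete, here is a minimal sketch in Python. Everything in it (the KSB dataclass, the toy keyword lexicon, the function names) is an illustrative assumption, not Project Shybird's actual code; in the real tool the extraction step would be model-driven, and the corpus would run to thousands of adverts, not three.

```python
# Illustrative only: a toy version of the advert-mining loop described
# above. All names and the keyword lexicon are assumptions for the sketch.
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class KSB:
    kind: str  # "knowledge" | "skill" | "behaviour"
    text: str  # normalised requirement statement

# Toy lexicon standing in for the model-backed 'glean' and 'clean' steps.
LEXICON = {
    "risk assessment": KSB("knowledge", "Principles of risk assessment"),
    "double-entry": KSB("skill", "Apply double-entry bookkeeping"),
    "client confidentiality": KSB("behaviour", "Maintain client confidentiality"),
}

def extract_ksbs(advert: str) -> set[KSB]:
    """Map raw advert text to normalised KSBs. In production this step
    would be a model call, not keyword matching."""
    text = advert.lower()
    return {ksb for phrase, ksb in LEXICON.items() if phrase in text}

def prioritise(adverts: list[str], min_support: int = 2) -> list[tuple[KSB, int]]:
    """Synthesise and prioritise: rank KSBs by how many adverts demand
    them, dropping any without enough corroboration."""
    counts = Counter(k for ad in adverts for k in extract_ksbs(ad))
    return [(k, n) for k, n in counts.most_common() if n >= min_support]

if __name__ == "__main__":
    ads = [
        "Audit trainee: risk assessment and client confidentiality essential.",
        "Bookkeeper: double-entry experience; client confidentiality a must.",
        "Junior auditor: strong grasp of risk assessment frameworks.",
    ]
    for ksb, n in prioritise(ads):
        print(f"{n}x {ksb.kind}: {ksb.text}")
```

Run it and the two KSBs that clear the support threshold are printed, while double-entry bookkeeping is dropped for lack of corroboration; the real pipeline applies the same logic of evidence and prioritisation at far greater scale.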
We’ll also use sophisticated AI tooling to monitor policy, regulatory, and other market signals, and distil their KSB implications — so if the Government announces a change to, say, construction site safety requirements, our tool will translate that into whatever new or adapted KSB should be reflected in — or removed from — the qualification specification.
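As a hedged illustration of that signal-monitoring step, the sketch below shows one way a distilled policy signal could become a proposed edit to a specification's KSB list. The Signal and Proposal shapes, and the single hard-coded rule, are assumptions made for the example; the production review would be model-backed and run against the live specification.

```python
# Sketch of the signal-monitoring step: turning a distilled policy
# signal into a proposed edit to a specification's KSB list.
from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # e.g. "HSE", "Ofqual", "DfE"
    summary: str  # distilled description of the change

@dataclass
class Proposal:
    action: str   # "add" | "amend" | "remove"
    ksb_text: str
    rationale: str

def ksb_implications(signal: Signal, current_ksbs: list[str]) -> list[Proposal]:
    """One hard-coded rule showing the intended shape of the output:
    a reviewed, traceable proposal rather than a silent rewrite."""
    proposals: list[Proposal] = []
    if "site safety" in signal.summary.lower():
        target = "Apply current construction site safety requirements"
        action = "amend" if target in current_ksbs else "add"
        proposals.append(Proposal(action, target,
                                  f"{signal.source}: {signal.summary}"))
    return proposals

# Example: a new safety rule arrives; the specification gains a KSB.
sig = Signal("HSE", "Updated construction site safety requirements from April.")
print(ksb_implications(sig, current_ksbs=[]))
```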
We’ll apply the same level of research and academic discipline to the definition of assessment strategies — combining established best practice, our customers’ organisational preferences, and whatever product-type prescription has been specified by Government to determine the most appropriate assessment methods for each qualification.
Making systematically robust, consistent, and precisely described decisions on assessment methods will help to drive the validity of outcomes — and provide institutions and teachers with clear, consistent advice on what is expected of them. Too often, the mere fact that a different person wrote a different qualification, in a slightly different way, drives needless confusion and inefficiency.
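To show what an explicit, repeatable decision rule looks like in practice, here is a toy sketch of the method-selection step: candidate methods are filtered by what the product-type prescription permits, then ranked by the customer's stated preferences. The method names and the rule itself are assumptions for the sketch, not the real strategy logic.

```python
# Toy illustration of the assessment-strategy step: candidates filtered
# by prescription, then ranked by customer preference. All names are
# assumptions for the sketch.
CANDIDATE_METHODS = {"exam", "project", "observation", "professional discussion"}

def select_methods(permitted: set[str],
                   preference_order: list[str],
                   needed: int = 2) -> list[str]:
    """Keep only methods the prescription allows, then take the
    customer's top preferences: an explicit, repeatable rule rather
    than one author's judgement."""
    allowed = [m for m in preference_order
               if m in permitted and m in CANDIDATE_METHODS]
    return allowed[:needed]

# Example: the prescription permits three methods; the awarding
# organisation prefers observation-led assessment.
print(select_methods(
    permitted={"exam", "observation", "professional discussion"},
    preference_order=["observation", "professional discussion", "exam", "project"],
))
# -> ['observation', 'professional discussion']
```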
Two other things to be very clear about. First, we’re building a tool that will enable customers to differentiate from one another — even if you’re all using our tool. We’ll enable you to do so wildly more quickly, and with an elevated common denominator for quality.
Second, one of the most important ways we do that is by building a tool for product development practitioners, not one that replaces them. They’re not just ‘in the loop’ of the AI; they’re in the driving seat of a tool powered by AI. We’ll simply enable them to focus their time and insight on the aspects of the process which genuinely require them — not those that don’t.
When can we use it…?
We’re excited about the coming wave of product development work in the UK. V Levels, corollary level two qualifications, apprenticeship assessments, and apprenticeship units will require a huge development effort from UK awarding organisations. We’ll release component modules of our solution, on a tech-enabled consulting basis, to support that work.
We’ll be building and beta testing the second part of our solution — the bit that will help teachers translate specifications into programmes, resources, and learning experiences — in the second half of 2026. We’re determined to put it at the disposal of teachers embracing new V Levels ahead of first teach in autumn 2027 — because we know how important a moment that is.
If you’re planning product development work for 2026 and ’27, let’s talk now.
MH