“To teach is to learn twice.” — Joseph Joubert
Introduction
This autumn, I had the privilege of leading the 6-week engineering portion of the Kainos AI-First Technologist Academy in Belfast.
The mission was ambitious: prepare early-career engineers to work in an industry where AI is a fundamental collaborator in the development process.
What started as a teaching assignment became a profound journey.
This post captures what worked, what didn’t, and what surprised me.
Why AI-First?
Software engineering is being reshaped in real time. A new generation of developers is entering the field with a fundamentally different expectation: AI isn’t an add-on, it’s already part of their lives. Rather than retrofitting AI tools onto established practices, we posed a more interesting question: What happens when we start with AI as a first-class collaborator from day one?
Finding the Right AI Workflow
I deliberately experimented with several distinct workflows to help apprentices understand the nuances:
1. Naive “Vibe Coding”
Early on, apprentices described outcomes loosely (“Add a timer…” / “build a dashboard…”) and let the model generate full implementations.
Result: Rapid scaffolding, but hidden assumptions everywhere — missing error branches, leaky abstractions, inconsistent naming conventions. Without a comparison framework, juniors understandably accepted outputs wholesale.
2. High-Level Planning with AI
We shifted to pre-implementation conversations where the model outlined steps, components, and modules. This approach had worked well for me as an experienced engineer - I could be specific about low-level details, and I could steer and prune because I recognized multiple viable paths and had seen over the years what did and didn’t work long-term.
The Problem: For juniors, this didn’t work well. When you can’t yet recognize competing patterns or what good looks like, the model’s first plan becomes “the way”, crowding out exploration. The scope of a single chat was also often too big, and LLMs are more than happy to implement more than you asked for. This deadly combination led to a term coined during the academy: the “30-minuter”. The context window fills with too much information (including failed attempts to solve the problem), performance drops, and after about half an hour the session is a mess.
3. Spec-Driven Development
While there are some well-defined existing frameworks for this approach (including Spec Kit and BMAD Method), I decided to go with a simple custom solution to teach the idea - not a particular tool.
We introduced a hard boundary: no implementation until a human-coauthored specification captured inputs, outputs, constraints, and edge cases.
The prompts explicitly required multiple options to be listed in the output markdown file so that the engineer could pick the appropriate one.
In subsequent steps, each spec was broken down into smaller task files.
This shifted agency back to the learner. The conversation changed from “Is the model’s plan good?” to “Does the spec fully encode the problem space?”.
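To make this concrete, here is a minimal sketch of the shape a spec file took. The feature, constraints, and options below are illustrative, not taken from a real team’s project:

```markdown
# Spec: CSV Export for Reports

## Inputs
- Report ID (UUID), date range (ISO 8601), optional column filter

## Outputs
- UTF-8 CSV file with a header row, delivered to the browser

## Constraints
- Max 50,000 rows per export; must not block the UI thread

## Edge Cases
- Empty result set; commas/newlines inside cell values; timezone boundaries

## Options (engineer picks one before any code is written)
1. Generate server-side and stream the response - scales well, more backend work
2. Build client-side from the existing JSON API - simpler, memory-bound
```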
Impact:
- AI output quality improved: fewer invented details or bonus features, tighter interfaces, and just enough built for a given stage.
- The specs became a forcing function for comparative thinking (“Should we stream this? Batch it? Cache it?”) before any code existed.
Tooling
Throughout the academy we used VSCode with GitHub Copilot.
Instruction Files: A Shared Language Between Humans and AI
We standardized on .github/instructions/*.instructions.md documents, using multiple files dedicated to various aspects of each solution. This pattern proved transformative for several reasons:
What They Accomplished:
- Provided AI with consistent operational constraints - Coding standards, architecture patterns, and project-specific conventions were encoded once and referenced automatically.
- Reduced prompt verbosity - Instead of repeating “use TypeScript strict mode, follow functional patterns, add JSDoc comments” in every prompt, teams established reusable conventions.
- Taught instruction hygiene - Apprentices learned that clear, structured guidance scales across teams and projects.
LLMs often got things slightly wrong on the first attempt, but with feedback they could correct course. I encouraged students to update instruction files immediately when they discovered gaps or errors. This built a habit: when AI produces unexpected output, iterate on your instructions rather than accepting lower-quality results.
Over time, apprentices refined their instruction sets, and the quality of the output improved with them. The files became living documentation that both humans and AI could reference reliably.
One common gotcha: some teams let AI create instruction files without verifying the location or naming conventions. Files ended up in the wrong directories, misnamed, or missing frontmatter, causing them to be ignored.
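For reference, here is a minimal sketch of the shape we aimed for: a file under .github/instructions/ with an applyTo glob in the frontmatter, which is how Copilot scopes an instruction file to matching paths. The glob and conventions shown are illustrative:

```markdown
---
applyTo: "src/**/*.ts"
---

# TypeScript Conventions

- Use TypeScript strict mode; no `any` without a justifying comment.
- Follow functional patterns; avoid classes unless state genuinely demands it.
- Every exported function gets a JSDoc comment.
```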
Context7 MCP: Grounded Library Usage
We integrated Context7 to pull up-to-date documentation directly into our workflow.
This proved particularly valuable with libraries like DaisyUI and BetterAuth.
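For anyone wanting to reproduce the setup, here is a sketch of the workspace-level configuration involved, assuming VS Code’s mcp.json format and the npx-launched Context7 server package as documented at the time:

```jsonc
// .vscode/mcp.json - registers the Context7 MCP server for this workspace
{
  "servers": {
    "context7": {
      // Launches the Context7 server on demand via npx
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```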
Impact:
- Dramatically reduced hallucinated or outdated code.
- Shortened feedback cycles - first drafts were often usable.
- Enabled confident exploration of unfamiliar libraries.
The Context Window Challenge:
Context7’s documentation retrieval can consume significant tokens. While this grounding was invaluable, it highlighted a practical constraint: the context window fills quickly when multiple library docs are loaded alongside conversation history and code.
At the time, GitHub Copilot didn’t support subagent delegation.
If it had, we could have spawned focused subagents to:
- Research specific library APIs independently
- Return only the relevant patterns needed
- Keep the main conversation context lean
This would have let us maintain richer documentation context without overwhelming the primary chat session — a workflow that’s now becoming possible with newer agent capabilities.
Premium Request Usage
After a couple of days, I noticed some students were hitting the premium request limit remarkably fast - a sign that LLMs were being leaned on a bit too much.
I’ve been guilty of this myself - I can recall delegating some really trivial tasks to LLMs while drunk on my newly discovered AI superpowers earlier this year. Tasks I could have easily done faster without LLMs…
We embedded rules throughout the program:
- Ask first: “Is this actually worth an AI call, or can it be done quickly by hand?”
- Prefer IDE capabilities for renaming things, moving files, etc.
- Use free models for trivial tasks - save premium models for more complex work.
The Final Boss: Merge Conflicts
Despite all our AI tooling advances, git conflicts remained the persistent challenge.
Building habits such as switching back to the main branch and checking that we had the latest version locally took a while.
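The habit itself boils down to a couple of commands before starting any new piece of work (branch name illustrative):

```bash
# Sync the local main branch with the remote before branching off
git switch main
git pull origin main

# Start new work from the fresh state
git switch -c feature/new-task
```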
No AI Days: Rediscovering the Fundamentals
On a couple of occasions, we intentionally unplugged from AI entirely. Students worked on small side projects with a hard constraint: no AI assistance allowed.
To my genuine delight, students enjoyed writing code by hand.
The slower pace forced them to think through each line deliberately. They debugged without delegating to a model. They read documentation manually. They made mistakes and fixed them themselves.
Several apprentices mentioned that these sessions helped them understand the AI-generated code from previous weeks far better. When you’ve struggled to implement something manually, seeing how an AI approaches the same problem becomes a learning opportunity rather than a black box to accept uncritically.
What Surprised Me
- Genuine AI-native behavior: Apprentices were simply not impressed with AI creating working code. I remember having those “holy smokes!” moments a couple of times over the last 6 months, but for them… it is just a normal thing.
- Quick learning: Prompting, creating instruction files, creating pull requests, running the applications, spec-first development, etc. became natural very fast!
- Presentation skills: Despite their young age, the students are great at communicating and presenting. While they might not have had all the answers to some tricky questions during the demos, they answered with confidence :)
- Cultural immersion: I picked up a wee bit of Belfast slang (though AI didn’t help there ;))
Key Takeaways
These principles were reinforced repeatedly throughout the program:
- Specs + task files = leverage: They eliminate ambiguity early in the development process.
- Instruction files elevate both human and AI collaboration quality: Shared conventions reduce cognitive overhead.
- Context7 MCP grounding sharply improves output fidelity: authoritative, up-to-date documentation leads to better results.
- Small, focused PRs are a must: Code review discipline remains essential.
Closing Reflection
I arrived in Belfast to teach and left having learned a ton myself. AI-native engineers aren’t shortcut-seekers — they’re system interrogators. When provided with structured artifacts (specifications, instruction files) and good practices (operational discipline, code review, design patterns), they can elevate both velocity and quality without surrendering rigor.
The enduring lesson: AI-first doesn’t have to mean AI-dependent.
It means designing environments where human judgment is augmented, not displaced — and where curiosity, skepticism, and craftsmanship still define great engineering.
That said, I can’t claim to know exactly where this AI-accelerated world is heading…
The landscape shifts weekly, and what works today might be outdated tomorrow. There’s a genuine uncertainty about which skills will remain differentiators, how fast AI capabilities will evolve, and what new challenges will emerge as these tools become even more deeply integrated.
But here’s what I do know: witnessing the passion, enthusiasm, and genuine curiosity these apprentices brought to their learning gives me real hope.
If the future of software engineering is shaped by people who question outputs, demand rigor, and bring this kind of energy to problem-solving, then perhaps the uncertainty ahead is less daunting than it feels.
The fundamentals of good thinking, clear communication, and disciplined craftsmanship aren’t going anywhere.