Case Study

Coaching a Predominantly Junior Team Through a Greenfield Aspire Build

How patterns left behind in code, code review and Claude Code use kept the in-house team productive after I left, with no rewrite of what was shipped

The challenge

The team I joined was around nine developers spread across three workstreams. Roughly two-thirds of them were junior, including bootcamp graduates and developers who were new to the languages they were working in. A senior teammate from the partner organisation, who had been the obvious source of domain depth, left shortly after I started. A junior teammate later went off sick for an extended period mid-project. The remaining team was capable and willing, but the bench depth was thin.

The other complication was the surface area the team had to cover: a greenfield project, three unfamiliar third-party platforms, two new languages or frameworks for parts of the team, and .NET Aspire, which nobody on the wider team had used in earnest. Partway through the engagement the client adopted Claude Code internally as well. The inspection-driven programme meant there was no luxury of dedicated training time before delivery started. Mentoring had to happen alongside the build, not before it.

The risk this set up was the standard one for a senior-led greenfield. If I built the architecture in front of the team without bringing them with me, the patterns would not survive my departure. The team would either reach for what they knew and quietly drift the codebase off the architecture, or they would freeze on changes that touched anything I had built. Neither outcome was acceptable for a platform the in-house team would own afterwards.

The results

Key results

  • Predominantly junior team (about two-thirds), including bootcamp graduates, brought up to speed on Vue.js, C#, .NET Aspire and Claude Code
  • Architecture, integration and component patterns left behind that the in-house team continued to extend after the engagement
  • Mentoring run alongside delivery, not as a separate workstream, against a fixed inspection-driven programme
  • Code reviews used as the primary teaching channel rather than separate training sessions
  • Coaching extended to Claude Code patterns when the client adopted it internally mid-engagement
  • The team continued to deliver against the original architecture without me, with no rewrite of what was shipped

The clearest outcome is what the team did after I left: they kept building on the architecture I had set up, against the same patterns, without a rewrite of what had been shipped. The component library kept growing in the same shape. The progress engine kept absorbing new feeds against the same boundaries. The Aspire app host stayed as the local development orchestrator. None of that was guaranteed at the start.

The mid-engagement disruptions (a senior teammate leaving, a junior teammate off for an extended period) tested the same patterns from a different angle. When I had to consolidate a chain of orphaned pull requests, left behind when the junior teammate's absence ran too long for them to return to the work, the job was straightforward precisely because the patterns the team had been writing against were consistent. There was no archaeology in the diffs.

The Claude Code coaching also held up. The team continued using it after I left, and from what I have seen subsequently they continued to use it in roughly the way I had taught: as a verifiable draft generator, not as an authority. That is the part of AI-augmented development that fails most often when teams pick it up under their own steam, and the part I am most pleased to have got right with this team.

The lesson I take from this engagement is that on a junior-heavy team, the load-bearing thing a senior architect leaves behind is not the architecture itself but the team's ability to reason about it. Patterns help. Code review helps more. Verbalising decisions in front of the team helps the most.

The solution

My main mentoring channel was code review, not separate training sessions. There was no time for a parallel curriculum, and the work itself was a more honest teacher than any slide deck. Every pull request became a chance to surface the reasoning behind a pattern: not just "this is how we do it" but "this is why we do it, and here is what would go wrong if we did not". The reviews were deliberately slower and more written than fast-moving teams usually run them. That cost was paid back many times over in the rate at which the juniors absorbed the patterns.

I also led the build in a way that made the patterns easy to follow. The Vue.js component library had clear, named patterns for the common shapes the team would need. The C# back end was structured so that a new endpoint had an obvious template to follow. The progress engine had explicit boundaries between integration code, calculation code, and persistence code, with each separated cleanly enough that a junior developer could work in one without having to understand all three. The patterns earned their keep by being copyable rather than by being clever.
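To make that concrete, here is a hypothetical sketch of the boundary shape in C#. None of these names come from the actual codebase; they are invented to show how a new feed could be added by implementing one interface without touching calculation or persistence.

```csharp
// Hypothetical names throughout; invented for illustration, not the client's code.

// Integration boundary: one implementation per third-party feed.
public interface ILearnerProgressSource
{
    Task<IReadOnlyList<ProgressRecord>> FetchAsync(CancellationToken ct);
}

// Calculation boundary: pure logic, no I/O.
public interface IProgressCalculator
{
    IReadOnlyList<ProgressSummary> Calculate(IReadOnlyList<ProgressRecord> records);
}

// Persistence boundary: storage details live here and nowhere else.
public interface IProgressStore
{
    Task SaveAsync(IReadOnlyList<ProgressSummary> summaries, CancellationToken ct);
}

// The engine composes the three; a junior can work inside any one boundary
// without needing to understand the other two.
public sealed class ProgressIngestionJob(
    ILearnerProgressSource source,
    IProgressCalculator calculator,
    IProgressStore store)
{
    public async Task RunAsync(CancellationToken ct)
    {
        var records = await source.FetchAsync(ct);
        var summaries = calculator.Calculate(records);
        await store.SaveAsync(summaries, ct);
    }
}

public sealed record ProgressRecord(string LearnerId, string Unit, decimal Percent);
public sealed record ProgressSummary(string LearnerId, decimal OverallPercent);
```

Adding a feed then means writing one new ILearnerProgressSource and registering it; the calculation and persistence code does not change.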

Aspire mentoring went through the IDE rather than through documentation. Because the app host was a typed C# project rather than a YAML file, I could walk people through configuration changes the same way I walked them through any other code change. That made Aspire an easier first encounter for the juniors than docker-compose would have been for the same group.
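For readers who have not met Aspire, a minimal app host looks something like the sketch below. The resource and project names are hypothetical, and the AddNpmApp call assumes the Aspire NodeJS hosting package; the point is only that the whole local topology is ordinary, navigable C#.

```csharp
// AppHost/Program.cs: a minimal .NET Aspire app host (hypothetical names).
var builder = DistributedApplication.CreateBuilder(args);

// A backing database for local development.
var db = builder.AddPostgres("postgres")
                .AddDatabase("platformdb");

// The C# back end. Projects.Api is generated from the app host's
// project reference to the API project.
var api = builder.AddProject<Projects.Api>("api")
                 .WithReference(db);

// The Vue.js front end, run via npm (Aspire.Hosting.NodeJs package assumed).
builder.AddNpmApp("frontend", "../frontend")
       .WithReference(api);

builder.Build().Run();
```

Because every line is typed, "go to definition", rename and compiler errors all work on the orchestration itself, which is what made it reviewable in the same channel as everything else.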

When the client adopted Claude Code internally mid-engagement, I extended the same pattern to AI-augmented development. The principle I taught was simple and load-bearing: verify, do not trust. Every Claude Code output is a draft. The team's job is to read it, push back on it, and feed corrections back. I demonstrated this with the actual outputs from my own discovery work, including the ones that had been wrong. Showing the team where AI had been wrong on this exact codebase was more useful than any general guidance about AI's limits.

In parallel I made sure the architectural decisions were made in front of the team rather than handed down. Trade-offs were verbalised. Alternatives I had considered and rejected were explained, including why. That meant when the team had to make their own architectural calls after I left, they had a worked example of what an architectural argument actually looks like, not just a finished position.

Technical deep dive

Code review as the primary teaching channel

Reviews on this engagement were slower and more written than I would normally run them. Every non-trivial change attracted a comment that explained the reasoning, not just the verdict. "Move this to the progress engine" became "Move this to the progress engine because the BFF should not know which source system this field comes from, and here is what breaks if it does." That style of review is expensive in the moment. It is also the cheapest way I know to transfer architectural intent to a team that has not seen the patterns before.
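As a hypothetical illustration of what that comment asks for (OneFileClient, IProgressReader and the route are invented, not the real code), the before and after look roughly like this in minimal-API terms:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddScoped<IProgressReader, ProgressEngineReader>();
var app = builder.Build();

// Before (the shape the review pushed back on): the BFF knows the field
// comes from OneFile, so a source-system change now breaks the BFF.
//
// app.MapGet("/learners/{id}/progress", async (string id, OneFileClient oneFile) =>
//     new { Percent = (await oneFile.GetLearnerAsync(id)).PortfolioCompletion });

// After: the BFF asks the progress engine, which owns the source-system mapping.
app.MapGet("/learners/{id}/progress", async (string id, IProgressReader progress) =>
{
    var summary = await progress.GetSummaryAsync(id);
    return new { Percent = summary.OverallPercent };
});

app.Run();

public interface IProgressReader
{
    Task<ProgressSummary> GetSummaryAsync(string learnerId);
}

// Stub implementation so the sketch is self-contained.
public sealed class ProgressEngineReader : IProgressReader
{
    public Task<ProgressSummary> GetSummaryAsync(string learnerId) =>
        Task.FromResult(new ProgressSummary(learnerId, 0m));
}

public sealed record ProgressSummary(string LearnerId, decimal OverallPercent);
```

The review comment's "here is what breaks" clause falls straight out of the before version: any change to OneFile's schema has to be chased through the BFF as well as the engine.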

The shift I encouraged among more experienced reviewers was to ask questions rather than issue corrections. A junior developer who has been asked "what would you expect this to do if the OneFile feed were a day late" learns something different to one who has been told "you need to handle stale data here". Both questions and corrections have their place, but the question gets the pattern into the developer's head; the correction only fixes that one PR.

Patterns first, principles second

I led with concrete patterns and only justified them with principles when asked. Telling a bootcamp graduate to "follow the BFF pattern" with a specific worked example in front of them is more useful than telling them to "respect separation of concerns" in the abstract. Once they had written a few endpoints against the pattern they could reason about what the pattern was doing for them. At that point the principle was something they had derived rather than been handed.

This is the inversion of how a lot of senior engineers default to teaching. Principles first, then examples, is a comfortable ordering for the senior. Examples first, then principles, is a more useful ordering for the junior.

Claude Code coaching: verify, do not trust

The single rule I gave the team for Claude Code use was "verify, do not trust". Every output is a draft. Every output is read, pushed back on, and corrected before it goes anywhere near a build. I demonstrated this with my own discovery work, including the cases where Claude Code had been wrong: the over-produced component variations, the integration patterns that did not match the third-party platforms, the points where Campus and Beehive boundaries had to be hand-fed. Watching AI get those things wrong on their own codebase did more for the team's instincts than any general statement about AI's limits could have.

The other rule was: every Claude Code session has a brief and a verifier. The brief is what you asked for. The verifier is the person, or the test, that will tell you whether the output is right. If you cannot describe the verifier before you start, do not start.

What I would do differently

If I had the engagement again I would invest in a written "patterns" document earlier, in addition to the code review channel. Code review transferred intent to the people who were doing the reviews. A patterns document would have transferred it to the people who were not on a given PR. The cost of writing one is small; the cost of not having one shows up later, when a junior developer is reaching for a pattern they have not seen before and there is no senior in the room to ask.

I would also start the Claude Code coaching from week one rather than waiting for the client to ask. The "verify, do not trust" rule is more easily taught while the team is still forming its instincts than after they have started developing AI habits on their own.
