From Kickoff to Working Proof of Concept in One Week with Claude Code
How AI-augmented discovery took a junior-heavy team from three unfamiliar third-party platforms to a runnable Vue.js front end and matching C# back end inside the first week
The challenge
A UK education and training provider needed a unified learner dashboard for the apprenticeship side of the business, ahead of an Ofsted inspection window. The new platform had to consolidate data from three systems I had never worked with before: an internal LMS (Campus, on PostgreSQL), a legacy Drupal-based VLE (Beehive), and an external e-portfolio SaaS (OneFile, drip-fed via overnight SFTP exports).
The wider team was around nine developers, with about two-thirds of them junior, including bootcamp graduates and developers new to the languages they were working in. Discovery was scoped at two to three weeks, and the team needed something concrete to react to as soon as possible. A traditional discovery, with weeks of read-only research before anything ran on a developer's laptop, was not going to fit the inspection deadline driving the wider programme.
The results
Key results
- Runnable Vue.js front end and matching C# back end in front of the team by end of week one
- Three unfamiliar third-party platforms (Campus, Beehive, OneFile) reverse-engineered and mapped during a two-to-three-week discovery phase
- Component library derived from design screenshots presented at the kickoff, before any production code was written
- Verifiable data mappings produced for the senior domain expert to confirm before build started
- The proof of concept evolved directly into the MVP that shipped at the end of the engagement
- No throwaway prototype: zero rebuild between PoC and production
By the end of week one the team had a runnable Vue.js front end and a matching C# back end in front of them. The dashboard could be navigated end to end on mock data, then re-pointed at the real back end. The senior developer from the partner organisation had a verified set of data mappings to work from. The junior members of the team had a concrete codebase to react to, learn from, and contribute to, instead of a stack of design documents.
The proof of concept was not a throwaway. The team built directly on it for the next three months, and it became the MVP that shipped at the end of the engagement, roughly two weeks after the Ofsted inspection itself. The build state at the inspection date was enough for the client to evidence to inspectors that the apprenticeship visibility gap was being actively addressed. There was no rebuild between PoC and production. The component patterns and integration patterns established in week one were the same ones the in-house team continued to extend after I left.
The lesson I took into subsequent engagements is straightforward: in a greenfield context with unfamiliar third-party systems and a stretched team, AI-augmented discovery shortens the gap between "we have a brief" and "we have something running" from weeks to days, but only if every output is treated as a draft to be verified rather than an answer to be trusted.
The solution
I treated discovery as a Claude Code task from day one. The kickoff meeting itself was recorded and transcribed, and the transcript was fed straight into a Claude Code project alongside the design screenshots presented during the meeting and the available documentation for the three third-party platforms.
From there I ran parallel research agents across the three platforms to surface their data models, available APIs, and known limitations. For Campus, where the underlying database was accessible, I had Claude help reverse-engineer the relevant API endpoints and produce the data mappings the dashboard would need. For OneFile, where the API surface was patchy, the same approach exposed the gaps that would later force the integration onto the SFTP feed.
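To give a flavour of what a verifiable data mapping meant in practice, the sketch below shows the shape each entry took. The table and field names are hypothetical stand-ins (the real Campus schema belongs to the client), but the structure is the point: every entry ties one source column to one dashboard field, so the domain expert could confirm or reject mappings individually.

```typescript
// Hypothetical sketch of a verifiable data mapping entry. The real
// Campus table and dashboard field names are the client's; these are
// illustrative stand-ins.
interface FieldMapping {
  source: string;         // Campus PostgreSQL table.column
  dashboardField: string; // the field the dashboard consumes
  transform?: string;     // any conversion applied in between
  verified: boolean;      // flipped once the domain expert signs off
}

const campusMappings: FieldMapping[] = [
  {
    source: "learner_enrolments.expected_end_date",
    dashboardField: "learner.plannedCompletionDate",
    verified: false,
  },
  {
    source: "learner_enrolments.otj_minutes_logged",
    dashboardField: "learner.offTheJobHours",
    transform: "minutes to hours, rounded to one decimal place",
    verified: false,
  },
];
```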
In parallel I used the design screenshots from the kickoff to derive a Vue.js component library, then asked Claude Code to scaffold a runnable proof of concept against it. The PoC began with a mock service layer so the front end could be navigated end to end before any back end existed. Once the senior domain expert from the partner organisation had verified the data mappings, I extended the proof of concept with a C# back end that took over the mock data and served it through real endpoints.
Throughout, the AI work was one side of a loop. The team and I treated every Claude Code output as something to be verified, not accepted. Data mappings were confirmed against real platform behaviour. The component library was reconciled with the official designs. The proof of concept was pulled apart in code review. Nothing went into the build that had not been seen by a person who could push back on it.
Technical deep dive
The Claude Code workspace
The discovery workspace held five things:
- The kickoff meeting transcript and any subsequent meeting transcripts.
- Screenshots of the design system presented at kickoff.
- Documentation and (where accessible) source code for the three third-party platforms.
- Notes captured during the meetings themselves, often during breaks.
- Outputs from the research agents as they completed.
Discovery work was driven through Claude Code agents rather than in a single chat, so each stream of work (one per third-party platform, one for the design system, one for the data model) had its own scratch space and could be revisited independently.
From design screenshots to component library
For each screenshot I asked Claude Code to identify the component candidates, group repeated patterns, and produce a Vue.js component breakdown plus the data shape each component needed. That output became the seed of the component library and, crucially, fed straight into the data mapping work, because each component now had a concrete data contract.
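As an illustration, one card from that breakdown might look like the sketch below. The component and field names are hypothetical, but the principle is the real one: the props interface doubles as a data contract that the service layer, and later the back end, had to satisfy.

```typescript
// Hypothetical data contract for a single dashboard card, derived
// from a design screenshot. The props interface doubles as the
// contract the service layer must satisfy.
export interface LearnerProgressCardProps {
  learnerName: string;
  programmeTitle: string;
  percentComplete: number;       // 0-100
  plannedCompletionDate: string; // ISO 8601 date
  atRisk: boolean;               // drives the card's warning state
}
```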
From data contracts to PoC
Once the data contracts existed, I asked Claude Code to scaffold a Vue.js proof of concept fed by a mock service layer matching the contracts. The mock layer was deliberately stupid: it returned canned responses for every endpoint the front end would ever call. That meant the entire dashboard could be clicked through end to end before the back end existed.
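A minimal sketch of that mock layer, reusing the hypothetical contract above: one interface the front end codes against, one implementation that returns canned data.

```typescript
// Sketch of the mock service layer (hypothetical names). The front
// end depends only on DashboardService, so the mock can later be
// swapped out without touching any component.
import type { LearnerProgressCardProps } from "./contracts";

export interface DashboardService {
  getLearnerProgress(learnerId: string): Promise<LearnerProgressCardProps>;
}

export class MockDashboardService implements DashboardService {
  async getLearnerProgress(learnerId: string): Promise<LearnerProgressCardProps> {
    // Canned response: enough to click through every screen.
    return {
      learnerName: `Sample Learner ${learnerId}`,
      programmeTitle: "Level 3 Business Administrator",
      percentComplete: 62,
      plannedCompletionDate: "2025-07-31",
      atRisk: false,
    };
  }
}
```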
The C# back end was then built to satisfy the same contracts. The mock service layer was retired in stages as real endpoints came online, so the front end never had to be rewritten. This is also why the PoC-to-MVP transition did not require a rebuild: there was nothing to throw away, only mocks to replace.
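Retiring the mocks then reduces to swapping implementations behind the same interface, endpoint by endpoint, along these lines (the route and toggle are hypothetical):

```typescript
// Sketch of staged mock retirement. The real HTTP client implements
// the same DashboardService interface, so the front end never changes;
// a factory decides which implementation serves each call.
import { MockDashboardService, type DashboardService } from "./mockService";
import type { LearnerProgressCardProps } from "./contracts";

export class HttpDashboardService implements DashboardService {
  constructor(private baseUrl = "") {}

  async getLearnerProgress(learnerId: string): Promise<LearnerProgressCardProps> {
    // Hypothetical route on the C# back end.
    const res = await fetch(`${this.baseUrl}/api/learners/${learnerId}/progress`);
    if (!res.ok) throw new Error(`Dashboard API returned ${res.status}`);
    return res.json();
  }
}

export function createDashboardService(realBackendReady: boolean): DashboardService {
  return realBackendReady ? new HttpDashboardService() : new MockDashboardService();
}
```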
What did not work
Not every output was useful. Claude Code produced more component variations than were needed and sometimes proposed integration patterns that did not match the realities of the third-party platforms. In particular, it could not reliably tell where Campus ended and Beehive began without being shown, because their boundaries were a product of organisational history rather than anything documented. The remedy was always the same: feed the correction back, narrow the brief, and re-run.
Ready to achieve similar results?
Let's discuss how we can help your organisation get there.
Book a strategy call