Case Study

Unifying Three Source Systems Into a Single Learner View

How a backend-for-frontend, a progress engine, and a clean front-end contract turned an internal LMS, a legacy VLE and an external e-portfolio into one coherent dashboard


The challenge

The starting point for tutors and supervisors was three browser tabs and a working memory of which system held which fact. Campus, the internal LMS, owned learner records, course components and e-learning completion. Beehive, a legacy Drupal-based VLE that the wider organisation was already moving away from, owned the actual e-learning module experience. OneFile, an external e-portfolio SaaS, owned assessments, off-the-job hours and reviews. Each had its own login, its own concept of a learner identifier, and its own update cadence.

The result was the experience the new dashboard had been commissioned to fix: tutors flipping between systems to answer routine questions about a single learner, supervisors with no consolidated view of the people they were responsible for, and an Ofsted inspection on the horizon that needed all of this to look like one product rather than three.

Consolidation was not just a UI question. The three systems disagreed on basic things. Campus could be queried in real time. OneFile only shipped overnight CSV exports onto an SFTP server. Beehive owned content that the dashboard could deep link into but should not try to reproduce. None of those constraints were going away, so the architecture had to absorb them rather than wish them away.

The results

Key results

  • Single learner dashboard sourced from three independent systems with separate identity providers and data freshness models
  • BFF and progress engine layer kept the front end ignorant of source-system specifics
  • Mixed data freshness handled at the progress engine, not in the UI: real-time reads for the LMS, overnight CSV ingest for the e-portfolio, deep links into the legacy VLE
  • SAML single sign-on from the dashboard to the legacy VLE for deep linking into specific course pages
  • Auth0 for primary dashboard authentication, with identity bridging to the underlying systems
  • Component patterns and integration patterns left in place for the in-house team to extend after the engagement

The dashboard shipped as a single experience for both learners and supervisors at the end of the engagement, roughly two weeks after the Ofsted inspection itself. The build state at the inspection date was enough for the client to evidence to inspectors that the apprenticeship visibility gap was being actively addressed. Tutors and supervisors stopped flipping between systems for the questions the dashboard now answered. Learners had one place to see their own progress, and a deep link straight into the relevant course content in Beehive when they wanted to actually engage with it.

The architectural separation paid off as the engagement progressed. When the original OneFile ingest plan turned out to be unviable about two weeks before contract end (covered in a separate case study), the change was contained inside the progress engine. The BFF's contract did not move, and the front end was unaware anything had happened. That is the test I look for when I am willing to call a layered architecture a success: a late, awkward change in one corner of the system stays in that corner.

The patterns left behind (the BFF contract shape, the progress engine boundary, and the freshness-aware data model) were the same patterns the in-house team continued to extend after I left. There was no rewrite between the version I shipped and the version they kept evolving.

The solution

I designed the dashboard as a strict three-layer separation: a Vue.js front end that knew nothing about source systems, a C# backend-for-frontend that owned the dashboard's data contract, and a progress engine behind that which did the actual consolidation work.

The BFF was deliberately thin. Its job was to expose the data shape the dashboard needed and nothing else. It did not know that some of its data came from Campus, some from a nightly OneFile ingest, and some only as a deep link into Beehive. That separation gave the front end a single, stable contract to build against, and let me move the seams behind it without breaking the UI.
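A minimal sketch of what "deliberately thin" means in practice: the BFF does little more than map the progress engine's already-consolidated record into the dashboard's own DTO. All field and type names here are illustrative assumptions, not the real contract.

```typescript
// What the progress engine hands to the BFF: already consolidated, and
// source-agnostic from the BFF's point of view. (Hypothetical shape.)
interface EngineLearnerRecord {
  learnerId: string;
  displayName: string;
  overallProgressPct: number;
  offTheJobHours: number;
  lastUpdated: string; // ISO timestamp: the record's freshness floor
}

// What the front end binds to. No source-system fields leak through.
interface LearnerOverviewDto {
  id: string;
  name: string;
  progressPct: number;
  offTheJobHours: number;
  dataAsOf: string;
}

// The BFF's whole job for this endpoint: reshape, nothing else.
function toOverviewDto(record: EngineLearnerRecord): LearnerOverviewDto {
  return {
    id: record.learnerId,
    name: record.displayName,
    progressPct: record.overallProgressPct,
    offTheJobHours: record.offTheJobHours,
    dataAsOf: record.lastUpdated,
  };
}
```

Because the mapping is this mechanical, moving a seam behind the engine never forces a change to `LearnerOverviewDto`.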

The progress engine was where the disagreements between source systems were resolved. It owned a consolidated learner record built from three feeds: a real-time read against Campus (PostgreSQL), an overnight ingest of OneFile CSV exports from SFTP into Azure Blob and then into the engine's own store, and references to Beehive content that the dashboard would deep link into rather than reproduce. The engine surfaced freshness metadata alongside the data itself, so consumers (including the BFF and any future report) could be honest about what they were looking at rather than pretending everything was up to the second.
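One way to model "freshness metadata alongside the data itself": each field carries its own source and timestamp, and a response's freshness floor is simply the oldest timestamp it contains. The type and source names below are assumptions for illustration.

```typescript
// Hypothetical source tags for the three integration shapes.
type Source = "campus-realtime" | "onefile-batch" | "beehive-link";

// A value that knows where it came from and when it was last true.
interface FreshField<T> {
  value: T;
  source: Source;
  asOf: string; // ISO 8601 UTC timestamp
}

// The freshness floor of a response: the oldest asOf among its fields.
// ISO 8601 UTC strings compare correctly as plain strings.
function freshnessFloor(fields: FreshField<unknown>[]): string {
  return fields
    .map((f) => f.asOf)
    .reduce((oldest, t) => (t < oldest ? t : oldest));
}
```

A tile mixing a real-time Campus read with last night's OneFile ingest then reports the overnight timestamp as its floor, rather than implying everything is current.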

Identity was bridged at the edges. Auth0 owned dashboard authentication. SAML single sign-on linked through to Beehive, so a learner clicking into a specific course landed inside Beehive without a second login. OneFile and Campus identifiers were resolved at ingest time and never leaked into the front end.

The combined effect was that learners and supervisors saw one coherent product. Underneath, three distinct integration shapes were running in parallel, each appropriate to the source system it talked to, with the differences contained at the progress engine boundary rather than smeared across the codebase.

Technical deep dive

The BFF contract

The dashboard's contract was modelled around the questions a learner or supervisor was actually asking, not around the source systems' schemas. A "learner overview" endpoint returned everything a supervisor needed to see for one learner across all three systems, in one shape, at one freshness floor. A "course progress" endpoint returned the dashboard's view of a course, with deep link references where Beehive owned the underlying content.
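The "course progress" shape might be sketched like this: the dashboard owns the progress numbers, while content that Beehive owns appears only as an optional deep link reference. Field names are illustrative, not the shipped contract.

```typescript
// One component of a course, as the dashboard sees it.
interface CourseComponentProgress {
  componentId: string;
  title: string;
  completed: boolean;
  // Present only where Beehive owns the underlying content; the dashboard
  // links out rather than reproducing it.
  deepLinkUrl?: string;
}

interface CourseProgressResponse {
  courseId: string;
  components: CourseComponentProgress[];
  completedCount: number;
}

// Assemble the response the endpoint would return.
function buildCourseProgress(
  courseId: string,
  components: CourseComponentProgress[]
): CourseProgressResponse {
  return {
    courseId,
    components,
    completedCount: components.filter((c) => c.completed).length,
  };
}
```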

Source-system schemas never reached the front end. That was a deliberate constraint, not an accident. Every time a feature added a new field to the BFF response, the question "where does this come from, and how fresh is it" was answered once, at the BFF or progress engine boundary, and then forgotten by the UI.

Progress engine responsibilities

The progress engine owned three things the BFF deliberately did not:

  1. The consolidated learner record across Campus, OneFile and Beehive references.
  2. The data freshness model: which fields are real-time, which are batch-ingested, and how stale a given response is allowed to be.
  3. The progress calculations themselves, which depended on period-based benchmarks, off-the-job hours, e-portfolio assessments and breaks. Those calculations had been validated by the partner organisation in a separate Power BI proof of concept, and the engine's job was to honour the same logic against the consolidated data.
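A deliberately simplified sketch of a period-based benchmark: expected progress as the share of the programme elapsed so far, with break weeks excluded from both sides. The real engine's logic was validated against the partner's Power BI proof of concept; this only illustrates the kind of calculation involved, and the formula here is an assumption.

```typescript
// Expected progress (%) given elapsed time, breaks, and programme length.
// Illustrative only: the production benchmark logic was more involved.
function expectedProgressPct(
  elapsedWeeks: number,
  breakWeeks: number,
  totalProgrammeWeeks: number
): number {
  const activeElapsed = Math.max(0, elapsedWeeks - breakWeeks);
  const activeTotal = totalProgrammeWeeks - breakWeeks;
  if (activeTotal <= 0) return 0;
  return Math.min(100, (activeElapsed / activeTotal) * 100);
}
```

The point of centralising this in the engine is that the same numbers reach every consumer, so a supervisor's view and a learner's view can never disagree about whether someone is behind benchmark.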

Mixed freshness, honestly handled

The most uncomfortable architectural question was what to do when a real-time field and a 24-hour-stale field appeared next to each other in the same UI tile. The answer was to surface the freshness floor at the API boundary and let the UI present it honestly: a "last updated" stamp at the tile level, a soft warning when the freshness floor fell outside expectations, and no attempt to pretend that the OneFile feed was anything other than overnight.
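The UI-facing decision can be sketched as a single check: given a tile's freshness floor and the expected cadence of its slowest source, decide whether the soft warning shows. The threshold and function name are assumptions for illustration.

```typescript
// Should a tile show a soft staleness warning? A tile fed by an overnight
// batch might set expectedMaxAgeHours to, say, 26 to allow ingest slack.
function isStale(asOf: Date, now: Date, expectedMaxAgeHours: number): boolean {
  const ageHours = (now.getTime() - asOf.getTime()) / 3_600_000;
  return ageHours > expectedMaxAgeHours;
}
```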

This is where I think a lot of consolidation projects quietly go wrong. The temptation is to hide the messy freshness model behind a clean UI, and then leave the team debugging why a learner "completed" a module yesterday but it has not yet appeared on the dashboard. Putting the freshness model in the data contract took five minutes to design and saved a category of support questions that would otherwise have lived forever.

Identity bridging

Auth0 owned the dashboard authentication. SAML single sign-on connected the dashboard to Beehive. OneFile and Campus identifiers were mapped to the dashboard's learner identity at ingest time, in the progress engine, so the front end and the BFF only ever spoke the dashboard's identity model. The three source systems' identities never escaped the consolidation layer.
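A minimal sketch of the bridging idea: source-system identifiers are resolved to the dashboard's own learner identity once, at ingest time, and only the dashboard identity travels upward. The map shape and names are hypothetical.

```typescript
// Lookup tables built at ingest time, inside the progress engine.
interface IdentityMap {
  byCampusId: Map<string, string>;  // Campus id  -> dashboard learner id
  byOneFileId: Map<string, string>; // OneFile id -> dashboard learner id
}

// Resolve a source-system identifier to the dashboard's identity model.
// Everything above this function only ever sees the dashboard id.
function resolveDashboardId(
  map: IdentityMap,
  source: "campus" | "onefile",
  sourceId: string
): string | undefined {
  return source === "campus"
    ? map.byCampusId.get(sourceId)
    : map.byOneFileId.get(sourceId);
}
```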

What I would change next time

The split between BFF and progress engine was right for this team and this engagement, but it is a pattern that earns its keep with size. For a smaller team or a smaller integration surface, I would consider collapsing the BFF into the progress engine and letting the front end talk to a single backend service. The principle (one consolidated, freshness-aware contract) is the load-bearing part. The two-process split is an implementation detail driven by team shape and operational concerns, not by the architecture being correct.

