Red flags in technical due diligence: what experienced assessors look for

Every technical due diligence assessment tells a story. Not the story the vendor presents in their slide deck - the real one, buried in the codebase, the deployment pipeline, and the way the team talks about their own system.

I have conducted technical due diligence assessments across industries for over two decades - from global humanitarian organisations to property transaction platforms, from insurance systems to startup marketplaces. The technical due diligence red flags I describe here are not theoretical. They are patterns I have seen repeatedly, and they are the ones that cost real money when missed.

If you are a CTO preparing for an assessment, or an investor wondering what to look for, this is the field guide I wish someone had handed me early in my career.

Architecture anti-patterns: the quiet killers

Architecture problems are the most expensive red flags because they compound. A poor architectural decision made three years ago is now baked into every feature built on top of it.

The monolith with no boundaries

Not every monolith is a problem. A well-structured monolith with clear module boundaries, separated concerns, and a coherent domain model can be perfectly serviceable. The red flag is the monolith with no internal structure - where everything depends on everything else.

I once assessed a codebase for a global humanitarian organisation where the entire application lived in a single C# project. No separation of concerns. No domain boundaries. No layering. The vendor had proposed a six-figure modernisation programme, but 60% of the cost was for addressing technical debt the vendor themselves had created. Without the assessment, the client would have paid to fix problems that should never have existed.

What I look for:

  • Circular dependencies between modules or namespaces. If module A depends on module B which depends on module A, there are no real boundaries - a detection sketch follows this list
  • God classes - classes with hundreds or thousands of lines that handle multiple unrelated concerns
  • Shared mutable state between components that should be independent
  • No clear domain model - business logic scattered across controllers, services, and data access layers
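
Several of these can be checked mechanically before anyone reads a line of business logic. As a rough illustration of the circular dependency check, here is a minimal sketch in Python - the dependency map is a stand-in, since how you extract it (from project references, import statements, or a static analysis tool) depends on the stack:

```python
# Minimal sketch: detect circular dependencies in a module dependency map.
# The map below is illustrative - in a real assessment it would be extracted
# from project references, import statements, or a static analysis tool.
from typing import Dict, List, Set

deps: Dict[str, List[str]] = {
    "Orders":    ["Billing", "Catalogue"],
    "Billing":   ["Customers"],
    "Customers": ["Orders"],   # closes the cycle Orders -> Billing -> Customers -> Orders
    "Catalogue": [],
}

def find_cycles(graph: Dict[str, List[str]]) -> List[List[str]]:
    cycles: List[List[str]] = []
    explored: Set[str] = set()

    def visit(node: str, path: List[str]) -> None:
        if node in path:  # back to a module already on this path: a cycle
            cycles.append(path[path.index(node):] + [node])
            return
        if node in explored:
            return
        explored.add(node)
        for dep in graph.get(node, []):
            visit(dep, path + [node])

    for start in graph:
        visit(start, [])
    return cycles

for cycle in find_cycles(deps):
    print(" -> ".join(cycle))  # Orders -> Billing -> Customers -> Orders
```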

The distributed monolith

This is arguably worse than a traditional monolith. The team has adopted microservices (or claims to have), but the services cannot be deployed independently. A change in one service requires coordinated deployments across three or four others.

The distributed monolith gives you all the operational complexity of microservices - network latency, distributed debugging, deployment orchestration - with none of the benefits. You have taken a problem that was at least simple to reason about and made it harder to understand without gaining independent deployability.

Red flags here include:

  • Shared databases between services. If two services write to the same tables, they are not independent services
  • Synchronous call chains where Service A calls B calls C calls D before returning a response to the user - quantified in the sketch after this list
  • Coordinated deployments - if the release process requires deploying services in a specific order, you have coupling
  • Shared libraries containing business logic, not just utilities. A shared NuGet package for logging is fine. A shared package containing domain models that multiple services depend on is a coupling mechanism
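
The synchronous call chain deserves particular attention because the arithmetic is unforgiving: every hop multiplies failure probability and adds latency. A back-of-the-envelope sketch, with illustrative figures rather than numbers from any real assessment:

```python
# Back-of-the-envelope: what a synchronous chain A -> B -> C -> D does to the
# user-facing request. All figures are illustrative assumptions.
services = {
    "A": {"availability": 0.999, "p95_latency_ms": 40},
    "B": {"availability": 0.999, "p95_latency_ms": 60},
    "C": {"availability": 0.995, "p95_latency_ms": 80},
    "D": {"availability": 0.999, "p95_latency_ms": 50},
}

chain_availability = 1.0
chain_latency_ms = 0.0
for s in services.values():
    chain_availability *= s["availability"]  # every hop must succeed
    chain_latency_ms += s["p95_latency_ms"]  # crude upper bound: p95s do not strictly add

print(f"Chain availability: {chain_availability:.4f}")  # ~0.9920, versus 0.999 for one service
print(f"Chain latency, rough worst case: {chain_latency_ms:.0f} ms")
```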

Technology stack age and support status

I check whether core dependencies are still actively maintained. A framework two major versions behind is a concern. A framework that has reached end of life is a material risk - no security patches, no community support, and a shrinking pool of developers who want to work with it.

This is not about chasing the latest trends. It is about whether the technology choices still have a viable future and whether the team can hire for them.
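
Much of this check can be automated. As a minimal sketch for a Python stack - the package list and the two-year threshold are assumptions, and other ecosystems have equivalent registry APIs - this flags dependencies whose most recent release is older than a chosen cut-off:

```python
# Minimal staleness check: flag dependencies whose latest release on PyPI is
# older than a chosen threshold. Package names and threshold are illustrative.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

PACKAGES = ["requests", "flask", "some-abandoned-lib"]  # in practice, parse requirements.txt
THRESHOLD = timedelta(days=730)                         # two years without a release warrants a conversation

def latest_release_date(package: str) -> datetime | None:
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except Exception:
        return None  # not on PyPI, a typo, or a network issue - investigate manually
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    return max(uploads) if uploads else None

now = datetime.now(timezone.utc)
for pkg in PACKAGES:
    latest = latest_release_date(pkg)
    if latest is None:
        print(f"{pkg}: could not determine release history")
    elif now - latest > THRESHOLD:
        print(f"{pkg}: last release {latest.date()} - review support status")
```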

Testing gaps: coverage is not confidence

High test coverage is one of the most misleading metrics in software development. I have seen codebases with 80%+ coverage that gave the team almost no confidence in making changes. The tests were there, but they were testing the wrong things.

Tests that test the framework, not the business logic

The most common pattern: hundreds of unit tests that verify the ORM maps fields correctly, that controllers return the right HTTP status codes, or that dependency injection resolves correctly. These tests add to the coverage number but tell you nothing about whether the application actually works correctly for users.

What I want to see instead:

  • Tests around business rules - the calculations, the state transitions, the validation logic that represents actual business value (an example follows this list)
  • Integration tests for critical paths - the checkout flow, the authentication pipeline, the data import process
  • Tests that would catch a regression a real user would notice
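
To make the first of those concrete, this is the shape of test I want to find. The pricing rule here is invented for illustration, but note that the tests pin down a business decision - the threshold, the rounding - rather than the framework:

```python
# Illustrative only: the pricing rule is invented, but the shape is the point -
# these tests protect a business rule, not the framework.
from decimal import Decimal

def order_total(subtotal: Decimal, is_trade_customer: bool) -> Decimal:
    """Trade customers get 10% off orders of 500 or more (illustrative rule)."""
    if is_trade_customer and subtotal >= Decimal("500"):
        subtotal *= Decimal("0.90")
    return subtotal.quantize(Decimal("0.01"))

def test_trade_discount_applies_at_threshold():
    assert order_total(Decimal("500.00"), is_trade_customer=True) == Decimal("450.00")

def test_trade_discount_not_applied_below_threshold():
    assert order_total(Decimal("499.99"), is_trade_customer=True) == Decimal("499.99")

def test_retail_customers_are_never_discounted():
    assert order_total(Decimal("1000.00"), is_trade_customer=False) == Decimal("1000.00")
```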

No tests at all in critical areas

Sometimes the testing gap is simpler: the most important parts of the system have no tests. The payment processing module has zero coverage. The authorisation logic - the code that determines who can see what - is untested. The data migration scripts that run in production are tested manually, if at all.

I pay particular attention to:

  • Authentication and authorisation code - this is where security vulnerabilities hide
  • Financial calculations - rounding errors, currency conversion, tax logic
  • State machines - any code that manages lifecycle transitions (order status, user onboarding, approval workflows) - see the sketch after this list
  • Data transformation and ETL - the code that moves data between systems
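
For state machines in particular, the valuable tests are the ones that pin down which transitions are forbidden. A minimal sketch with a generic, invented order lifecycle:

```python
# Generic, invented lifecycle - the point is that illegal transitions are
# rejected explicitly and the tests say so.
ALLOWED_TRANSITIONS = {
    "draft":     {"submitted"},
    "submitted": {"approved", "rejected"},
    "approved":  {"fulfilled"},
    "rejected":  set(),
    "fulfilled": set(),
}

def transition(current: str, target: str) -> str:
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current} -> {target}")
    return target

def test_happy_path_reaches_fulfilled():
    state = "draft"
    for nxt in ("submitted", "approved", "fulfilled"):
        state = transition(state, nxt)
    assert state == "fulfilled"

def test_a_rejected_order_cannot_be_fulfilled():
    try:
        transition("rejected", "fulfilled")
        raise AssertionError("expected the transition to be refused")
    except ValueError:
        pass
```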

No CI pipeline running tests automatically

If tests exist but do not run automatically on every commit, they decay rapidly. Developers stop trusting them, stop maintaining them, and eventually stop writing them. A test suite that only runs when someone remembers to run it is a test suite that lies to you.

Deployment risk: how fragile is the path to production?

The deployment pipeline tells you more about a team's maturity than almost any other indicator. A team that can deploy confidently and frequently has solved most of the hard problems. A team that dreads deployment day has not.

Manual deployment processes

If deploying to production involves someone following a checklist, running scripts manually, or copying files via FTP, that is a significant red flag. Manual deployments are error-prone, unrepeatable, and create a single point of failure around whoever holds the deployment knowledge.

I have seen organisations where a single developer was the only person who could deploy to production. When that person was on holiday, the business could not ship fixes. When they left, the knowledge walked out the door with them.

No rollback strategy

Every deployment should have a documented, tested rollback plan. "We'll fix forward" is not a rollback strategy - it is a hope. When I ask "what happens if this deployment breaks production?" and the answer involves uncomfortable silence, that is a problem.

What a mature deployment looks like:

  • Automated CI/CD pipeline - code is built, tested, and deployed through automation
  • Multiple environments - at minimum, development, staging, and production
  • Rollback capability - the ability to revert to the previous version within minutes
  • Feature flags - the ability to disable new features without a full deployment (a minimal example follows this list)
  • Deployment frequency - teams that deploy weekly or daily have less risk per deployment than teams that deploy quarterly
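
Feature flags do not require a dedicated platform to get started. Even a configuration-driven toggle, read at request time, gives you the ability to switch a risky code path off without redeploying. A deliberately minimal sketch - the flag name, the environment-variable convention, and the pricing stubs are all illustrative:

```python
# Deliberately minimal feature flag: the toggle is read from configuration at
# call time, so flipping it does not require a deployment.
import os

def flag_enabled(name: str) -> bool:
    # e.g. FEATURE_NEW_PRICING=true injected via environment or config
    return os.environ.get(f"FEATURE_{name.upper()}", "false").lower() == "true"

def legacy_pricing(subtotal: float) -> float:
    return round(subtotal * 1.20, 2)        # stand-in for the existing path

def new_pricing_engine(subtotal: float) -> float:
    return round(subtotal * 1.20 - 5.0, 2)  # stand-in for the risky new path

def calculate_price(subtotal: float) -> float:
    if flag_enabled("new_pricing"):
        return new_pricing_engine(subtotal)
    return legacy_pricing(subtotal)

print(calculate_price(100.0))  # 120.0 unless FEATURE_NEW_PRICING=true
```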

Single-person deployment dependency

This overlaps with the bus factor discussion below, but it is worth calling out specifically. If only one person knows how to deploy, you have a critical operational risk that transcends code quality. I wrote about this pattern in detail in my post on the Hero Developer anti-pattern - the developer who saves every project is often the biggest risk to the organisation.

Documentation absence: the silent indicator

Missing documentation is not just an inconvenience. It is a leading indicator of deeper problems - poor knowledge sharing, high bus factor risk, and a team that is moving too fast to be sustainable.

What should exist but often does not

  • Architecture decision records (ADRs) - why was this database chosen? Why was this framework selected? Without these, every new team member has to reverse-engineer intent from code
  • Runbooks for common operations - how do you restore from backup? How do you rotate credentials? How do you investigate a production outage?
  • API documentation - not just auto-generated Swagger, but meaningful descriptions of endpoints, expected behaviour, and error handling
  • Onboarding documentation - how long does it take a new developer to make their first meaningful contribution? If the answer is "weeks" or "months", documentation is part of the problem

The "it's all in Confluence" problem

Sometimes documentation nominally exists but is so outdated it is actively misleading. A wiki full of architectural diagrams from three years ago, describing a system that has since been substantially rewritten, is worse than no documentation at all. It gives a false sense of understanding.

I check the last-modified dates on documentation. If the architecture documentation has not been touched in 18 months but the codebase has had hundreds of commits, the documentation does not reflect reality.
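
That comparison takes minutes with Git. A rough sketch - the paths are assumptions, so point them at wherever the documentation and the code actually live:

```python
# Rough staleness check: when were the docs last touched, and how many code
# commits have landed since? Paths are assumptions - adapt to the repository.
import subprocess

DOCS_PATH = "docs/"
CODE_PATH = "src/"

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout.strip()

docs_date = git("log", "-1", "--format=%cs", "--", DOCS_PATH)  # date of last commit touching docs
if not docs_date:
    raise SystemExit(f"No commits found under {DOCS_PATH}")

code_commits = git("rev-list", "--count", f"--since={docs_date}", "HEAD", "--", CODE_PATH)
print(f"Documentation last touched: {docs_date}")
print(f"Code commits since then:    {code_commits}")
```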

Bus factor: the risk that walks out the door

The bus factor is the number of team members who would need to be unavailable before a project stalls. Research across 133 popular open source projects found that 65% have a bus factor of two or fewer. Commercial codebases are often worse.

How I assess bus factor

I do not just count heads. I look at:

  • Git blame distribution - is 80% of the code written by one person? Are there entire modules that only one developer has ever touched? A sketch for automating this check follows the list
  • Code review patterns - does the same person approve every pull request? Are reviews rubber-stamped or substantive?
  • Knowledge concentration - when I ask questions about different parts of the system, does the team keep deferring to the same person?
  • Documentation as mitigation - even with knowledge concentration, good documentation and well-structured code reduce the bus factor risk
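
The git history analysis is straightforward to automate. The sketch below uses commit counts as a crude proxy for blame-level ownership - the directory names are placeholders:

```python
# Crude authorship-concentration check: what share of the commits touching a
# path come from the single most active author? Paths are placeholders.
import subprocess
from collections import Counter

PATHS = ["src/billing", "src/auth", "src/reporting"]

def top_author_share(path: str) -> tuple[str, float, int]:
    authors = subprocess.run(
        ["git", "log", "--format=%an", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    counts = Counter(a for a in authors if a)
    if not counts:
        return "-", 0.0, 0
    author, top = counts.most_common(1)[0]
    return author, top / sum(counts.values()), sum(counts.values())

for path in PATHS:
    author, share, total = top_author_share(path)
    print(f"{path}: {share:.0%} of {total} commits by {author}")
```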

Why this matters for due diligence

Investors discount valuations for high key-person dependency, and rightly so. If your lead architect leaves and the team cannot maintain the system without them, you do not have a technology asset - you have a dependency on an individual. That is a fundamentally different risk profile, and it affects everything from hiring plans to insurance costs to acquisition multiples.

Security posture: beyond the checkbox

Security in due diligence is not about whether the team has a security policy document. It is about whether security is embedded in how they build software.

Red flags I consistently find

  • Secrets in source control - API keys, database connection strings, or credentials committed to Git history. Even if they have been "removed", Git remembers everything (see the scanning sketch after this list)
  • No dependency vulnerability scanning - the application depends on hundreds of open source packages, and nobody is monitoring whether those packages have known vulnerabilities
  • Authentication rolled from scratch - unless there is a very specific reason, building custom authentication instead of using established identity providers (Auth0, Azure AD, Cognito) is almost always a mistake
  • No penetration testing - particularly concerning for applications that handle financial data, personal information, or health records
  • SQL injection possibilities - string concatenation in database queries is still depressingly common, even in codebases that look otherwise modern
  • Overly permissive access controls - every developer having production database access, or service accounts with admin-level permissions across the entire infrastructure
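
Secrets are the cheapest of those to check for. Dedicated scanners such as gitleaks or truffleHog do this properly across the full Git history, but even a crude pattern scan over the working tree surfaces the worst offenders. A rough sketch with deliberately simplified patterns:

```python
# Crude secret scan over tracked files at HEAD. The patterns are deliberately
# simplified - use a dedicated scanner for real work, and scan history too.
import re
import subprocess

PATTERNS = {
    "AWS access key":     re.compile(r"AKIA[0-9A-Z]{16}"),
    "Connection string":  re.compile(r"Password\s*=\s*[^;\s]+", re.IGNORECASE),
    "Private key header": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

tracked = subprocess.run(
    ["git", "ls-files"], capture_output=True, text=True, check=True
).stdout.splitlines()

for path in tracked:
    try:
        with open(path, encoding="utf-8", errors="ignore") as fh:
            text = fh.read()
    except OSError:
        continue
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            print(f"{path}:{line_no}: possible {label}")
```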

Compliance as a proxy

For regulated industries, I check compliance posture - GDPR, SOC 2, ISO 27001 - not because the certificates themselves matter most, but because the discipline required to achieve and maintain them tends to correlate with overall engineering maturity. A team that has been through a SOC 2 audit has been forced to think about access controls, incident response, and change management in ways that teams without external accountability often have not.

Technical debt patterns: reading between the lines

Every codebase has technical debt. The question is whether the team knows where it is, how much there is, and whether they are managing it deliberately.

Deliberate versus accidental debt

Deliberate technical debt - "we know this is not ideal, but we are shipping it now and have a ticket to address it" - is a sign of maturity. The team is making conscious trade-offs and tracking them.

Accidental technical debt - where the team does not realise the debt exists - is the dangerous kind. It accumulates silently and surfaces as unexpected costs during any future change.

Patterns that indicate unmanaged debt

  • Inconsistent patterns across the codebase - three different approaches to error handling, two different ORMs, four different logging strategies. This suggests decisions are made locally without architectural governance
  • "Don't touch that" code - modules or files that the team actively avoids changing because nobody fully understands them. These are time bombs waiting for the day a change becomes unavoidable
  • Copy-paste duplication - large blocks of similar code duplicated across the codebase. I found this pattern at scale in a law firm's legacy system where classes were 4,000 lines long with pervasive duplication. A reflection-based framework reduced them to 400 lines - a 90% reduction (a crude detection sketch follows this list)
  • Outdated dependencies - not just one or two major versions behind, but dependencies that have been abandoned entirely. If your application depends on a library whose last commit was three years ago, you are accumulating risk with every passing month
  • No technical debt tracking - if the team cannot point to a backlog of known debt items with rough estimates, they are not managing it
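
Copy-paste duplication in particular is easy to measure crudely: hash a sliding window of normalised lines and report any window that appears in more than one place. A minimal sketch - the window size and the file glob are assumptions, and dedicated clone-detection tools do this far better:

```python
# Crude copy-paste detector: hash each window of normalised, non-blank lines
# and report windows that occur more than once. Window size and glob are
# assumptions; reported positions are approximate (blank lines are stripped).
import hashlib
from collections import defaultdict
from pathlib import Path

WINDOW = 8                       # consecutive matching lines to count as duplication
FILES = Path(".").rglob("*.py")  # adapt the glob to the codebase's language

locations = defaultdict(list)
for path in FILES:
    lines = [ln.strip() for ln in path.read_text(errors="ignore").splitlines() if ln.strip()]
    for i in range(len(lines) - WINDOW + 1):
        digest = hashlib.sha1("\n".join(lines[i:i + WINDOW]).encode()).hexdigest()
        locations[digest].append(f"{path} (block {i})")

for places in locations.values():
    if len(places) > 1:
        print(f"Duplicated {WINDOW}-line block in: {', '.join(places)}")
```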

Putting it all together

No codebase is perfect. The goal of technical due diligence is not to find perfection but to understand what you are buying, inheriting, or investing in - and what it will cost to bring it to where it needs to be.

The red flags above are not equally weighted. A distributed monolith with no tests and a bus factor of one is a different proposition from a well-structured monolith with some documentation gaps and an ageing CI pipeline. Context matters. Industry matters. The team's trajectory matters.

What separates a good assessment from a checkbox exercise is the ability to read these signals in combination and translate them into business risk. A CTO reading this list should be able to walk through their own codebase and honestly evaluate where the vulnerabilities lie.

If you want a structured approach to conducting your own assessment, I have published a 96-item Technical Due Diligence Checklist that covers architecture, code quality, testing, security, infrastructure, team processes, and data management. It is the same framework I use in professional assessments, and it is free to download.


Frequently asked questions

How long does a technical due diligence assessment typically take?

A thorough assessment takes 3-5 days of active work, spread across 2-4 weeks to allow for scheduling code access, team interviews, and documentation review. The elapsed time depends on how quickly the target organisation can provide access and make people available. For smaller codebases or focused red-flag checks, I can deliver a preliminary assessment in as little as one week.

What is the difference between a technical due diligence and a code review?

A code review examines code quality at the file or pull-request level. Technical due diligence is broader - it assesses architecture, infrastructure, security, team capability, processes, and technical debt holistically. Code quality is one input, but the assessment also covers deployment maturity, bus factor, scalability under growth, and alignment between the technology and business objectives. Think of it as the difference between inspecting a single room and surveying the entire building.

Can a CTO conduct their own technical due diligence?

You can, and the Technical Due Diligence Checklist is designed to support exactly that. However, for high-stakes decisions - acquisitions, significant investments, partnership agreements - an independent external assessment brings objectivity that internal teams cannot provide. Internal assessors have blind spots, political considerations, and familiarity bias. An external assessor has no incentive to minimise or exaggerate findings.

What happens when a due diligence assessment uncovers serious problems?

It depends on context. In an M&A scenario, findings often lead to price renegotiation, escrow holdbacks, or specific remediation requirements written into the purchase agreement. In an investment context, they may trigger additional due diligence, adjusted terms, or a decision not to proceed. In a health check context, they become the foundation of an improvement roadmap. Discovering problems before committing capital is always better than discovering them after.

Which red flags are the most expensive to fix?

Architecture problems are consistently the most expensive because they affect everything built on top of them. A distributed monolith or a system with no clear domain boundaries cannot be fixed incrementally - it requires significant rearchitecting that touches every part of the application. By contrast, testing gaps, documentation absence, and deployment pipeline improvements can be addressed progressively alongside normal development work.

Tags: technical-due-diligence, red-flags, code-review, cto, architecture, technical-debt, software-assessment
