Real-Time AI Interview Assistants: What Every Hiring Manager Needs to Know

There is now software that listens to interview questions, generates answers using AI, and displays them invisibly on the candidate's screen. Your screen sharing software cannot detect it. Here is what to do about it.


A bad senior technical hire costs at least £30,000 before you start recruiting the replacement. That figure accounts for recruitment fees, onboarding time, the months before you acknowledge the problem, and the institutional cost of a team operating around someone who cannot do the job.

There is now a category of software specifically designed to make that hire more likely.

Real-time AI interview assistants have been available in various forms for a couple of years, but the current generation is qualitatively different. These tools listen to interview questions as they are asked, generate contextually relevant answers using large language models, and display those answers on a screen overlay that is invisible to your screen sharing software. The candidate reads while appearing to think.

One of the better-known tools in this space describes its invisibility as a design goal rather than a side effect. It operates at the system level and is, by its own description, undetectable by proctoring software, screen-sharing tools, and recording software. It supports Zoom, Google Meet, Microsoft Teams, and most other video conferencing platforms. It handles both behavioural questions and technical coding assessments. It allows candidates to upload their CV so answers are personalised to their stated experience.

This is not a niche product. It is one of several competing tools in a growing market, and the people building them are not hiding what they do.

Why detection is the wrong strategy

The instinctive response is to look for technical countermeasures. If the software is detectable, detect it. This will not work. The tools are built specifically to defeat detection, and the incentive structures are asymmetric. Tool developers are motivated to stay ahead. Proctoring platforms are motivated to detect broadly rather than specifically. Some of these tools run automated checks against major proctoring platforms every few hours and publish the results publicly.

Relying on technical detection is an arms race you will lose. The better strategy is to change the interview format so that genuine capability is what gets someone through it, regardless of what tools they have available.

The format is the vulnerability

The standard remote technical interview is a sequence of discrete questions with discrete answers. Tell me about a time you dealt with a difficult stakeholder. Walk me through how you would architect this system. What is your approach to technical debt?

These questions work when the constraint is the candidate's knowledge and experience. They do not work when the constraint is the quality of the language model providing their answers in real time. AI-generated responses to discrete questions are often plausible, structured, and superficially confident. They sound like someone who has done the work. The problem is that they are generated from the question alone, without the underlying knowledge that would allow the candidate to go further. That is the gap to exploit.

What still works

Run a conversation, not a Q and A

AI-assisted responses handle discrete questions well and conversational continuity badly. When you ask a follow-up that connects something the candidate said ten minutes earlier to a new question, you are testing something the overlay cannot easily provide: a coherent, consistent thread of thought across the whole conversation.

Treat the interview as a genuine dialogue. Ask a question, then follow it with something that requires the candidate to build on what they just said rather than answer a new question independently. Interrupt with a clarifying point mid-answer. Reference an earlier answer when introducing a new topic. A candidate using an AI overlay has to find their place and re-orient. A candidate with genuine knowledge just continues thinking.

Go three levels deep on one story

AI-generated answers are plausible at the surface and hollow underneath. A question like "Tell me about a technical decision that went wrong" can be answered convincingly by a language model drawing on general patterns. A follow-up like "What did the engineering lead say when you brought it to them?" cannot, because it requires a specific memory that the model does not have.

Choose one story or project from the candidate's CV and go deep rather than broad. Ask what happened. Then ask what happened next. Then ask what they would do differently, and why their view on that has changed. By the third level of follow-up, you are either talking to someone who was there, or you are watching someone struggle to fabricate coherence in real time.

Watch for the signals

A slight but consistent lag between the end of your question and the start of the candidate's answer. Responses that arrive in bullet-point structure when a natural conversational answer would not take that form. Eye movement that tracks horizontally, consistent with reading, rather than with active recall. None of these individually proves anything, but a cluster of them across the call is worth noting.

Weight later stages of the process appropriately

For senior technical roles especially, the screening call and structured interview should be shortlisting tools, not assessment tools. The real assessment should happen in formats that cannot be assisted by a real-time overlay.

A paid half-day working session gives you direct evidence of how the candidate thinks under pressure, collaborates with your team, and approaches an unfamiliar problem. A short technical memo or architecture review gives you evidence of written thinking that the candidate cannot disavow. References from people who directly saw their work give you a signal that cannot be generated by a language model. These formats have always been more reliable than interview performance. The difference now is that they have become more necessary.

The broader point

AI tools are changing what interview performance measures. They are not changing what candidates can actually do once they are in the role. A candidate who cannot do the job but interviews well with AI assistance will still be unable to do the job on day one.

Review your current interview process for senior technical hires. Identify the stages where discrete Q and A is doing most of the work. Replace those with formats that require genuine knowledge and sustained reasoning, and weight the process accordingly.

If you are at the stage of deciding whether you need a full-time CTO or a fractional one, the same logic applies. My CTO Hiring vs Fractional comparison guide sets out the full picture including costs, timelines, and the factors that should drive the decision. And if you want to think through your hiring and evaluation process for senior technical roles more broadly, this is one of the areas I cover in my Fractional CTO engagements.

Tags: AI, Technical Hiring, Remote Interviews, AI Tools, Technology Leadership, Fractional CTO, Hiring
Michael Card

About the author

Experienced Fractional Chief Technology Officer (CTO), Architect, and .NET developer with a strong background in leading technical strategy and building scalable applications across diverse industries.
