How Technical Interviews Are Evolving with AI

If you are preparing for software engineering interviews today, you might assume the playbook looks the same as it always has: a phone screen, some coding rounds, maybe a system design, and a behavioral loop at the end.

And that assumption would be mostly right. The overall structure of interviews has not disappeared. What has changed is the fabric of the loop.

AI is threading its way into technical assessments, design challenges, and even behavioral screens. And while some companies are conservative, sticking to algorithms and strict rules, others are experimenting aggressively.

Here are the most important shifts to understand if you are actively interviewing today.


Coding Interviews are no longer “No Tools Allowed”

Coding interviews are where AI has made the most dramatic entrance. Data structures and algorithms still matter, but the focus is shifting from manual implementation to demonstrating how effectively you can collaborate with AI tools to solve problems. We are starting to see companies allowing (even encouraging) candidates to use the same tools they would at work.

As an example, at Rippling, coding rounds explicitly state that candidates can use AI tooling, including autocomplete with GitHub Copilot and ChatGPT. The prompts are scaled-down versions of actual problems Rippling engineers face, such as writing a function for an expense tracker app that returns a list of flagged expenses.
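Rippling has not published the exact prompt, but a scaled-down version of that kind of problem might look like the sketch below. The flagging rules (a per-category spending limit and a missing-receipt check) are invented here purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical flagging rules for illustration only; the real Rippling
# prompt and its criteria are not public.
CATEGORY_LIMITS = {"meals": 75.00, "travel": 500.00, "office": 200.00}

@dataclass
class Expense:
    id: str
    category: str
    amount: float
    has_receipt: bool

def flag_expenses(expenses: list[Expense]) -> list[Expense]:
    """Return expenses that exceed their category limit or lack a receipt."""
    flagged = []
    for expense in expenses:
        over_limit = expense.amount > CATEGORY_LIMITS.get(expense.category, 0.0)
        if over_limit or not expense.has_receipt:
            flagged.append(expense)
    return flagged

if __name__ == "__main__":
    sample = [
        Expense("e1", "meals", 42.50, True),
        Expense("e2", "travel", 820.00, True),    # over the travel limit
        Expense("e3", "office", 60.00, False),    # missing receipt
    ]
    print([e.id for e in flag_expenses(sample)])  # ['e2', 'e3']
```

The loop itself is not the point of a format like this; what matters is whether the candidate can explain the rules they chose and adapt when the interviewer changes the requirements.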

Success in this new format requires both a strong grasp of data structures and the judgment to use AI effectively. Meta is experimenting with a similar format. The company is piloting a new coding interview where candidates have access to an AI assistant throughout the session. The emphasis is less on memorizing algorithms and more on showing how you can collaborate with an AI partner.

One Formation mentor described his approach: “I tell candidates I don’t care if they code a solution at all. Just show me your thoughts. If a person passes this using AI to ‘cheat,’ perfect. We can use AI at work.” In this kind of interview, success isn’t about avoiding AI or blindly trusting it; it’s about directing it. A strong candidate might let the AI draft the skeleton of a binary search tree or a helper function, but they stay in the driver’s seat: guiding the prompts, catching mistakes, and explaining tradeoffs aloud.
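To make that concrete, here is the kind of AI-drafted skeleton a candidate might accept, with comments marking where a strong candidate would slow down, verify behavior, and narrate tradeoffs aloud. It is a generic illustration, not a transcript of any specific interview.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

class BinarySearchTree:
    def __init__(self):
        self.root = None

    def insert(self, value):
        # An AI assistant can draft this loop in seconds; the candidate should
        # still call out the duplicate policy (here: duplicates go right) and
        # confirm it matches what the interviewer expects.
        if self.root is None:
            self.root = Node(value)
            return
        current = self.root
        while True:
            if value < current.value:
                if current.left is None:
                    current.left = Node(value)
                    return
                current = current.left
            else:
                if current.right is None:
                    current.right = Node(value)
                    return
                current = current.right

    def contains(self, value):
        # Worth narrating aloud: the iterative search uses O(1) extra space and
        # avoids recursion-depth limits on badly skewed trees.
        current = self.root
        while current is not None:
            if value == current.value:
                return True
            current = current.left if value < current.value else current.right
        return False
```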

And the trend is spreading. Fellows at Formation are reporting that about 30 percent of their loops allowed or encouraged AI in technical rounds, as long as prompts were specific enough. The bottom line is simple: in some interviews, the question is no longer whether you can solve the problem unaided, but whether you can solve it with AI at your side.

Live Interviews are entering the Vibe Coding era

Perhaps the most striking transformation is happening in live coding interviews: companies that leaned into practical skill tests by having candidates build real apps from scratch are now opening the gates to AI-assisted development, redefining what it means to demonstrate coding competency in real time.

At one seed-stage iPaaS startup (name withheld), candidates are expected to show the prompts they’re using in real time. One Fellow explained that they deliberately used ChatGPT to draft Prisma queries, demonstrating that they understood the concepts but deferred to AI to help with the syntax. An early-stage GovTech startup tasked candidates with building a todo list app using Claude. In another instance, a prominent startup accelerator asked candidates to design and implement an API for a news site while screen-sharing in Cursor.

All these examples point to the same shift: live coding is no longer just about raw implementation. It is about demonstrating AI fluency while keeping control of the work. The expectation is that candidates will use AI tools, and the evaluation focuses on strong prompts, careful validation, and course corrections. The key is not whether the AI generates working code on the first try, but how well the candidate adapts when it stumbles.
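What “careful validation” looks like varies, but one lightweight habit is writing a few quick checks against AI-generated code before building on top of it. The helper below is hypothetical (a slug generator of the sort that might appear in the news-site API exercise), used only to show the habit.

```python
# Suppose the assistant drafted this helper for turning article titles into URL slugs.
def slugify(title: str) -> str:
    cleaned = "".join(ch if ch.isalnum() or ch == " " else "" for ch in title)
    return "-".join(cleaned.lower().split())

# A candidate who stays in control probes edge cases out loud rather than
# assuming the draft is correct.
assert slugify("Breaking News: AI in Interviews!") == "breaking-news-ai-in-interviews"
assert slugify("   extra   spaces   ") == "extra-spaces"
assert slugify("") == ""  # empty input should not crash
print("slugify checks passed")
```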

Debugging AI Output is becoming a new testing ground

As AI shifts the engineering role from writing code from scratch to validating AI-generated output, some interviews are adapting accordingly. Debugging in AI-heavy environments is emerging as a new testing ground.

One Series B legal-tech company hands candidates a full-stack AI chat app riddled with bugs: a ChatGPT API key error here, incorrect data parsing there. The task isn't to build, but to fix. For candidates, this format feels much closer to real engineering work. Debugging forces them to use their CS fundamentals to trace logic and diagnose errors, while still showing how they might leverage AI tools to accelerate the debugging process.
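The actual bugs in that exercise are specific to the company's codebase, but the two categories it mentions (API key handling and response parsing) are common failure points in AI chat apps. A hypothetical before-and-after, with a placeholder endpoint, just to show the flavor:

```python
import json
import os

import requests  # assumed dependency; any HTTP client would do

API_URL = "https://api.example.com/v1/chat"  # placeholder endpoint for illustration

def ask_model(prompt: str) -> str:
    # Bug category 1 (API key error): imagine the buggy version hard-coded a
    # stale key. The fix reads it from the environment and fails loudly.
    api_key = os.environ.get("CHAT_API_KEY")
    if not api_key:
        raise RuntimeError("CHAT_API_KEY is not set")

    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
        timeout=10,
    )
    response.raise_for_status()

    # Bug category 2 (incorrect data parsing): imagine the buggy version indexed
    # response.text as if it were already a dict. The fix parses the JSON and
    # checks its shape (the shape itself is invented for this sketch).
    payload = response.json()
    try:
        return payload["choices"][0]["text"]
    except (KeyError, IndexError) as exc:
        raise ValueError(f"Unexpected response shape: {json.dumps(payload)[:200]}") from exc
```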

Take-Homes are evolving into "open book" tests with unrestricted AI usage

Take-home assessments are also changing. Instead of banning AI, many companies are designing tasks where its use is expected.

A US-based virtual assistant SaaS company, for instance, asks candidates to answer a follow-up question when submitting their take-home: “How did you use AI in the challenge?” We saw one seed-stage AI survey company take it a step further. Its assignments are designed to be completed with unrestricted access to the internet, IDEs, and AI tools. Candidates are encouraged to use Cursor with Claude 4 Sonnet, though any tool is acceptable. The company then follows up in behavioral interviews with a question about how the candidate used AI during the assessment. In another case, a pre-seed Web3 startup offered a drag-and-drop calendar project where AI was allowed. One candidate tried using Cursor to generate the solution but found it overly complicated. They eventually rolled back and scaffolded their own base setup before asking Cursor for more targeted help.

The lesson here was that AI use is most effective when combined with independent judgment. Together, these examples show how take-homes are evolving into “open book” exams. The test is not whether you used AI, but how responsibly and effectively you used it.

System Design is including AI as Part of the Blueprint

While coding rounds are changing how candidates use AI, system design interviews are shifting to reflect the kinds of AI products companies now build.

At one Series A AI content platform startup, candidates were asked to design a system that takes user input and uses a third-party service to generate images with an LLM. At Apple, candidates worked through a design challenge around an AI video generation application. A global fintech company asked for the design of a chatbot, complete with requirements like handling concurrent users, managing session history, and integrating a response model.

If that last prompt sounds familiar, it should. Aside from the “response model,” it reads almost exactly like a traditional system design exercise. The fundamentals haven’t changed: interviewers still want to see clear problem decomposition, thoughtful tradeoff analysis, and clean architectural reasoning. What’s new is the context, not the core skill. You’re still designing a scalable, reliable system; it just happens to include a call to an LLM instead of a payments API or recommendation engine.

When designing systems involving AI, interviewers are not testing whether you’ve built an AI product before, but rather whether you can apply sound system design reasoning to a system that happens to use one. Candidates who understand how an app actually interacts with an LLM, whether through APIs, Model Context Protocol (MCP) interfaces, or managed services, can better anticipate challenges around latency, cost, safety, and prompt handling at scale.
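At the implementation level, that LLM dependency is usually just another network call with a latency budget and a per-request cost, which is why the familiar design levers (timeouts, caching, retries, queueing) still apply. A minimal sketch, with a placeholder endpoint and invented field names:

```python
import hashlib
import os

import requests  # assumed dependency; any HTTP client would do

LLM_URL = "https://llm.example.com/v1/generate"  # placeholder, not a real service
_cache: dict[str, str] = {}  # stand-in for Redis or another shared cache

def generate(prompt: str, timeout_s: float = 5.0) -> str:
    """Call the model with a timeout and cache identical prompts.

    In a design discussion, this is where latency, cost, and failure handling
    get debated: what to do on timeout, whether to queue and retry, and how
    much session history to trim before it reaches the model.
    """
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:
        return _cache[key]  # a cache hit saves both latency and per-token cost

    response = requests.post(
        LLM_URL,
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={"prompt": prompt, "max_tokens": 256},
        timeout=timeout_s,
    )
    response.raise_for_status()
    text = response.json()["text"]  # invented response field for this sketch
    _cache[key] = text
    return text
```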

Here's why this matters: system design remains very difficult to fake, even with AI assistance. These interviews still ruthlessly expose the gap between those who truly understand distributed architecture and those merely reciting what they've memorized. The addition of AI components has only sharpened this distinction.

Behavioral Interviews are expecting transparency around AI fluency

Behavioral interviews have expanded to include AI habits and experience. Many companies cover the basics: “Do you use AI tools, and how do you use them?” “Have you used AI in your own development?” (DoorDash, various startups). We're also seeing other companies go deeper. The Washington Post and Cisco are asking about prior AI project experience, probing for details such as which specific tools candidates used, say Copilot versus Claude Code. At one early-stage health-tech startup, interviewers drill into architectural details: “Describe the GenAI features you worked on. How were they evaluated? How have you helped lead your team toward AI adoption?”

These types of open-ended questions can be challenging to answer effectively under time pressure. Interviewers want to hear how candidates think about AI as both an accelerator and a core skill. They look for judgment, curiosity, and awareness of risks like bias or data misuse. While we're still in the early phases of AI adoption, interviewers value authentic, thoughtful experimentation over lengthy experience, testing how you've adapted your development flows to an AI-shaped workplace. Being able to speak to genuine, reflective experience with AI tools, however limited, has become as essential as discussing your debugging process or code review practices.


Conservative vs Experimental: Two Playbooks

There is a clear divide between large firms and startups.

Large firms remain conservative. They continue to rely heavily on DSA interviews. They use strict guardrails to prevent cheating, from online platforms that block copy-paste to live monitoring. And they are slow to allow AI tools during interviews.

Startups, by contrast, are experimenting. Some are dropping LeetCode-style screens altogether. Others are adding AI-specific loops, encouraging candidates to use tools like Cursor or Claude, or asking system design questions that are directly tied to AI products.

Formation has seen candidates encounter both worlds within the same week. One Fellow sat through a classic dynamic programming interview at a large firm. Another was asked to design an LLM chatbot while openly using AI tools at a startup.

The message is clear: adaptability is now a skill in itself.

The Bottom Line

The overall interview loop has not been replaced. There is still a phone screen, technical rounds, system design, and behavioral questions. What has changed is the texture within each stage.

Coding interviews increasingly test how candidates collaborate with AI. System design now assumes AI as part of the blueprint. Behavioral interviews probe daily AI use and fluency. And the playbook diverges sharply between conservative and experimental companies.

For candidates, the challenge is to prepare for both extremes. In one loop, you may be asked to solve algorithms unaided. In another, you may be expected to debug a chatbot while screen-sharing in Cursor. What matters is not only your technical skills but your adaptability, your authenticity, and your fluency with AI.

We will explore more about what interviewers are really evaluating, and what candidates should do differently, in future posts. For now, remember this: LeetCode alone will not save you. Interviews are evolving, and fluency and authenticity are the new currencies.