How to Prove Your Skills Are Real When You Used AI to Help Build It

TL;DR

  • Hiring managers assume AI was used. That assumption doesn't disqualify you. Not understanding your own code does.
  • Interviewers test this with specific questions: walk me through a decision, explain this tradeoff, modify this code live.
  • Using AI as a tool and using AI as the builder are different things. Interviewers can usually tell which happened.
  • Transparency about AI use is better than pretending you didn't use it. Pair it with genuine understanding.
  • The standard to hold yourself to: can you explain every decision in your project without looking anything up?

The assumption has shifted. Hiring managers reviewing portfolio projects in 2026 are not asking "did this person use AI?" They're asking "does this person understand what they built?"

That's a more useful question, and it's one you can actually prepare for. Pretending AI didn't exist while you were building your projects isn't credible and doesn't help you. Knowing your work cold does.

This article covers how interviewers probe for real understanding, how to make sure you have it, the difference between using AI as a tool and using it as the primary builder, and how to talk about AI use in a way that's honest without being self-undermining.

How Interviewers Test for Real Understanding

The techniques aren't subtle. Interviewers who want to know whether you understand your own code ask things like:

"Walk me through the architecture of this project. Why did you structure it this way?"

"I see you're using a background job processor here. What problem does that solve? What would happen if you didn't use it?"

"What would you do differently if you were building this from scratch today?"

"Can you add a feature to this live? Here's the spec."

These questions don't care how you built the project. They care whether you understand it. The engineer who built something with significant AI assistance but genuinely understands every decision will answer these questions well. The engineer who used AI to generate large sections they never fully read will not.

The "modify it live" version is particularly revealing. If you have to look at your own repository to remember how your code is structured, that's a signal. If you can describe what change you'd make, where you'd make it, and why, before touching a keyboard, that's a different signal.

The interviewers doing this well are not trying to catch you. They're trying to understand what you can actually do. Give them real information about that.

The Difference Between Tool Use and Builder Use

There's a meaningful distinction between using AI as a tool in your workflow and using AI as the primary builder of what you're submitting.

Tool use looks like: you're implementing authentication for a portfolio project. You know roughly how JWT tokens work. You use Claude to quickly look up the exact implementation pattern for your framework. You read the response, understand it, write the code yourself. You could have gotten to the same place with documentation; the AI just got you there faster.

Builder use looks like: you don't know how authentication works, you describe what you want, you get back code, you paste it in without fully reading it, and you move on. You now have authentication in your project, but you don't understand how it works.

Both of these are "using AI." Only the first one leaves you with genuine knowledge of your own project.

The practical test: for each significant component of your portfolio project, can you explain the following without looking anything up?

  • What the component does
  • Why you structured it the way you did
  • What alternatives you considered and why you didn't choose them
  • What would break if you removed it or changed it significantly

If you can answer these questions for every meaningful piece of your project, it doesn't matter how much AI assistance was involved in building it. You understand it. If you can't answer them, you have gaps to close before you're in good shape for interviews.

How to Go Back and Understand What You Built

If you've built something with heavy AI assistance and there are parts you don't fully understand, the path forward is straightforward.

Go through the codebase section by section. For each part you don't fully follow, use AI to explain it. Ask for line-by-line explanations. Ask why specific approaches were chosen over alternatives. Ask what the edge cases are. Ask what would happen under different conditions.

Then close the AI and explain that section to yourself out loud. If you can do it, you own that knowledge now. If you can't, go back and ask more questions.

This process takes time, but it's genuinely faster than most people expect. You already have the code in front of you. You're not building from scratch. You're filling in the mental model to match what exists. A solid portfolio project can usually be understood at the required depth in a few hours of focused review.

Using AI to learn coding covers the specific techniques for this kind of review work in detail: how to ask AI for explanations that build understanding rather than just handing you answers. The same techniques that work for learning new concepts work for understanding code you already have.

How to Talk About AI Use in Interviews

The question sometimes comes up directly: "Did you use AI tools to help build this?" The wrong answers are pretending you didn't when you did, and being defensive about it.

The right answer is some version of: "Yes, I used Claude to help with [specific parts]. I made sure to understand everything that went into the project, so I can walk through any of it in detail."

That response is honest, specific, and immediately redirects to what matters: your understanding. It also signals that you're comfortable talking about your tools and your process, which is how real engineers talk.

Hiring managers are not looking for candidates who claim to have written every line without any assistance. They're looking for engineers who are self-aware about how they work, honest about their process, and capable of producing good work. Being transparent about AI use while demonstrating real understanding is a stronger position than either extreme.

If AI use doesn't come up directly, you still want to be prepared for the substantive questions about your work. The goal is to walk into any interview conversation able to discuss your project with the fluency of someone who built it from scratch and knows it well. How you built it is less important than how well you know it now.

What Makes a Portfolio Project Defensible

The How to Pick a Portfolio Project article covers what makes a good project choice. The defensibility question is a layer on top of that: given the project you've built, can you defend it in a technical conversation?

Defensible projects have a few qualities:

Clear problem. You can articulate the specific problem the project solves and why that problem is worth solving. You made decisions about scope. You can explain what you left out.

Deliberate technical decisions. You chose a technology stack. You structured the data model in a specific way. You implemented certain features and not others. For each of these, you have a reason. The reason doesn't have to be sophisticated, but it has to exist and you have to know what it is.

Visible engineering judgment. The code shows that you thought about things like error handling, edge cases, and clean structure. Not perfectly, but genuinely. You can point to specific places where you made a deliberate trade-off rather than just doing whatever was easiest.

An honest read on the weaknesses. You know what's not production-ready about your project. You can describe what you'd add or fix with more time. This shows that your understanding extends to what you didn't build, not just what you did.

Projects that meet these criteria are defensible regardless of how much AI assistance was involved in building them. Projects that don't meet them aren't defensible regardless of how much was hand-written.

The GitHub Repository as Evidence

Your repository is part of the evidence that interviewers review. Commit history, code comments, and README documentation all contribute to the picture they're forming.

A repository whose entire history is a single massive "initial commit" raises questions. A repository where commits reflect a realistic building process, with early versions, revisions, and incremental progress, looks like the work of someone who was actually present for the construction.

This doesn't mean you need to manufacture a fake commit history. It means you should commit regularly as you work, even when using AI assistance. Committing after each meaningful addition creates an honest record of the work progressing over time.
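As a minimal illustration of that habit in practice (the repo, filenames, and commit messages here are made up for the example):

```shell
# Create a throwaway repo to demonstrate incremental commits.
mkdir demo-project && cd demo-project
git init -q
git config user.email "you@example.com"   # local identity for the demo only
git config user.name "You"

# First meaningful addition: commit it on its own.
echo "def login(): pass" > auth.py
git add auth.py
git commit -q -m "Add auth skeleton"

# Next addition, even if AI-assisted: another commit.
echo "def login(): return issue_token()" > auth.py
git add auth.py
git commit -q -m "Issue a token on login"

git log --oneline   # two commits: an honest record of progress
```

Each commit marks a point where you can say what changed and why, which is the same property interviewers are probing for in conversation.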

The GitHub That Gets You Hired article covers this in detail. The core principle applies here: your GitHub is being read as a signal about your work habits and your level of engagement with what you built. A repository that looks like genuine ongoing work is more credible than one that looks like it appeared all at once.

The Transparency Advantage

There's a strategic advantage to being genuinely transparent about AI use combined with genuine understanding, beyond just the ethical dimension.

Most candidates in the current market fall into one of two camps: those who pretend they didn't use AI (which is often not credible) and those who used AI heavily and can't explain their work (which disqualifies them for any serious role). The candidate who used AI thoughtfully, knows their work cold, and can talk about their process honestly is occupying a different position from both.

That candidate looks like someone who is already working the way good engineers actually work in 2026. They use the available tools. They take responsibility for understanding the output. They can explain their decisions. That's the job.

The framing that serves you best: transparency plus genuine understanding. Not "I barely used AI" and not "AI wrote it, here it is." Instead: "I used AI as a tool in this process, I understand everything in this codebase, and I'm ready to talk through any of it in detail."

That combination is harder to fake and harder to argue with.

For more on how AI tools fit into the junior engineer job search, AI coding tools for junior engineers covers the full picture of how to use these tools in a way that builds your career rather than undermining it.

If you want structured support building a portfolio that can withstand real technical scrutiny, here's how the Globally Scoped program works.
