What AI Can't Do Yet: Where Junior Engineers Still Have an Edge
TL;DR
- AI is genuinely making experienced engineers more productive. It hasn't replaced the judgment that comes from experience.
- Specific things AI can't do well: navigate ambiguous requirements without a human to clarify, communicate with non-technical stakeholders, understand organizational context, make scope trade-off calls, and show up as a reliable communicating team member.
- The junior engineers most at risk are not the ones who use AI; they're the ones who use it as a crutch and never build the underlying judgment.
- Using AI to accelerate genuine learning produces engineers who are stronger for it. Using AI to avoid learning produces engineers who are fragile when the assistance is removed.
- The gap AI creates is a skills gap, not a jobs gap. Close the skills gap.
The "AI will replace junior developers" narrative has been circulating long enough that it's worth looking at it directly rather than dismissing it or accepting it uncritically.
The honest position: AI tools are making developers at all levels more productive, the baseline for what "ready to contribute" means has shifted upward, and some junior engineers are going to struggle because of it. None of that is the same as saying junior engineering jobs will disappear. It means the path to landing them has gotten more demanding.
What gets lost in the panic version of this story: there is a specific and meaningful list of things AI cannot do well. Engineers who understand that list and deliberately develop capability in those areas are in a better position, not a worse one, than engineers who ignore AI entirely.
The Specific Limits of AI in Engineering Work
Navigating Ambiguous Requirements
AI tools are good at executing against clear specifications. Give a good model a well-scoped prompt, and you'll get working code. The problem is that most real engineering work doesn't start with a clear specification.
It starts with a business stakeholder who says something like "we need to make it easier for users to manage their subscriptions" and then you have to figure out what that means. What does "manage" mean? Which users? What are the edge cases? What constraints exist in the current architecture? What's the priority relative to everything else in the backlog?
AI can help you draft clarifying questions. It can help you think through edge cases once you've identified them. It cannot replace the human work of asking those questions in a meeting, reading the room when the stakeholder isn't sure what they want, and iteratively narrowing down to a specification that's actually buildable.
This is a skill that junior engineers are expected to develop. At a first job, you'll have senior engineers and product managers doing a lot of this work. But your ability to participate in it, ask good questions, and communicate when requirements are unclear directly affects how fast you grow.
Communicating with Non-Technical Stakeholders
AI can generate text. It cannot navigate a meeting with a product manager who is frustrated that a feature is taking longer than expected, or explain to a customer success person why the bug they're seeing isn't actually reproducible, or convince a skeptical executive that the technical debt you're asking to address is worth the investment.
These conversations require context that AI doesn't have: the relationships between people in the room, the history of past decisions, the political dynamics of the organization, the specific concerns of the person you're talking to. They also require real-time responsiveness. You can't prompt your way through a live conversation.
This might sound like "soft skills," which junior engineers sometimes treat as secondary to technical skill. In practice, the engineers who advance quickly are almost always the ones who can translate between technical and non-technical contexts. That ability is not something AI tools develop for you. It develops through practice.
Understanding Organizational Context
Why does this code exist? Why is the database schema designed this way? Why did the team choose this architecture three years ago when there were better options available?
AI doesn't know the answers to these questions. It wasn't there. It doesn't know that the original lead engineer who made that decision left the company, or that the "temporary workaround" in the codebase has been there for four years because there's a business constraint that makes removing it complicated.
Organizational context is knowledge you accumulate by being present: in code reviews, in architecture discussions, in conversations with senior engineers who remember how things got to where they are. AI tools can help you read and understand the code that exists. They can't tell you why it exists or what the constraints are on changing it.
New engineers who develop the habit of asking "why does this exist" and building a mental map of the history behind the codebase become significantly more effective faster than those who just execute against tickets without that context.
Making Judgment Calls About Scope
"Should we build this feature, and if so, how completely?" This is one of the central questions of engineering work, and AI is not equipped to answer it in any real situation.
Scope calls involve: understanding what users actually need versus what was requested, knowing the existing technical debt and how adding more affects the system's future health, understanding the team's capacity and what's realistic to build and maintain, and making a judgment about what's worth doing now versus later.
AI can surface considerations. It can describe approaches and their trade-offs. It cannot weigh those trade-offs against the specific constraints and context of your team and your product. That judgment is developed through experience and through being present for the consequences of past decisions.
This is one of the reasons senior engineers are more productive with AI tools than junior engineers: they bring the judgment that AI lacks. They know what to ask for and how to evaluate what they get back.
Being a Reliable Team Member
This one is less about a specific capability and more about what teams are actually evaluating when they hire and retain engineers.
A reliable team member ships what they say they'll ship. They communicate early when something is taking longer than expected. They flag problems before they become crises. They do code reviews that are genuinely useful rather than just approvals. They show up to standup with a clear, useful account of where they are. They ask for help before they spin for too long on something.
None of this is about technical skill in the narrow sense. All of it is about judgment, communication, and consistency. AI tools don't develop these qualities for you. In some ways, heavy AI use can work against them: if you're used to getting answers immediately from AI, then developing the patience to work through a problem carefully and the communication habits to flag when you're stuck requires deliberate attention.
The Junior Engineers Who Are Most at Risk
The narrative that all junior engineers are equally threatened by AI tools is wrong. The risk profile varies significantly based on how someone is developing.
Junior engineers who use AI tools as a shortcut to avoid learning are building on a fragile foundation. They can produce code, sometimes impressive-looking code, but they can't explain it, can't modify it intelligently under pressure, and can't develop the judgment that comes from actually wrestling with hard problems. These engineers will struggle in interviews and will struggle in their first jobs.
The article on why CS grads aren't getting hired covers this more broadly: the gap isn't in technical skills narrowly defined, it's in the full range of what it takes to be a functional contributor on a team.
Junior engineers who use AI tools to accelerate genuine learning are in a different position. They're building understanding faster, getting feedback on their reasoning more frequently, and developing judgment about when to trust AI-generated code and when to be skeptical. These engineers are not threatened by AI tools. They're getting stronger because of them.
The dividing line is not "uses AI" versus "doesn't use AI." It's "uses AI in a way that builds judgment" versus "uses AI in a way that bypasses judgment."
What This Means for How to Spend Your Time
The clear implication: your preparation time should not be purely technical in the narrow sense of code output. Some of your time should go toward developing the capabilities AI tools can't provide.
This means practicing communicating about technical work in plain language. Explain what you're building to someone who doesn't code. Explain a bug and its fix without using technical jargon. These exercises feel artificial but they develop a real skill.
It means building things where the requirements are ambiguous and you have to make decisions, not just follow a tutorial. When requirements are unclear, sit with the ambiguity. Ask yourself what you'd need to know before you could build it. Practice defining the scope rather than just executing it.
It means reviewing and contributing to other people's code, not just writing your own. Code review teaches you to read code critically, communicate feedback constructively, and develop the organizational context that comes from seeing how different engineers approach the same problems.
And it means using AI tools in a way that builds judgment rather than bypasses it. The guide on using AI to actually learn coding covers the specific workflow for this. The habits you build now around how you use these tools will compound over the first years of your career.
The Longer Arc
The framing that makes this clearest: AI is changing what the floor is. It's raising the minimum expected output per engineer. But it hasn't changed what the ceiling looks like, and it hasn't changed what makes someone a genuinely good engineer to work with.
The engineers at the top of teams are there because they make good decisions, communicate well, have strong judgment about what's worth building, and are reliably present in all the ways that matter. AI tools are making their code production faster. They're not replacing the judgment.
The junior engineers who will succeed in this environment are the ones who take both parts of this seriously: use AI tools to get faster and build more, and deliberately develop the judgment and communication that AI tools can't provide.
Those two things are not in tension. They're complements. The engineer who does both is more valuable than the engineer who does either one in isolation.
The specific AI tools worth knowing, and how to get the most out of them, are covered in the AI coding tools for junior engineers guide. That's the practical starting point. This article provides the larger context: where those tools fit in the full picture of what makes an engineer hireable and effective.
If you want to work through this in a structured way with specific feedback on your portfolio and interview preparation, here's how the Globally Scoped program works.
Interested in the program?