Prompt Engineering for Developers: How to Get Better Code Out of AI Tools

TL;DR

  • Vague prompts produce vague code. The more context you give, the better your output.
  • Specify your language, framework, constraints, and what the code needs to integrate with.
  • Never accept the first output uncritically. Iterate and ask follow-up questions.
  • Ask AI to explain its choices. If it can't, or the explanation doesn't hold up, that's a signal to dig deeper.
  • Use AI as a code reviewer: ask it to find problems in code you've already written.
  • Prompt engineering isn't a career path. It's a basic skill, like knowing how to use a search engine well.

There's a version of "prompt engineering" that gets covered in LinkedIn posts and YouTube videos as though it's a new profession. That's not what this article is about. This is about something narrower and more useful: how to write prompts that actually produce good code output when you're working with AI tools day to day.

If you've used Copilot, Cursor, or Claude and gotten output that was technically valid but completely missed what you needed, the prompt was almost always part of the problem. Fixing that is a learnable skill.

This connects directly to how junior engineers should be using AI coding tools generally. The tools aren't magic. They respond to what you give them, and most people give them very little.


The Core Problem: Vague Input, Vague Output

Here's what a lot of developers type into an AI tool:

"Write a function to process orders."

And they get back something that technically processes orders. It just doesn't match their data model, doesn't handle the edge cases they care about, uses a library they're not already using, and has error handling that's inconsistent with the rest of their codebase.

Then they spend 20 minutes figuring out what the AI gave them before eventually writing something closer to what they needed anyway.

The prompt was the problem. "Write a function to process orders" contains almost no information. The AI filled in the blanks with reasonable defaults, which may have nothing to do with your actual situation.


Start with Context

Before you ask for anything, give the AI enough background to give you something useful. This means telling it:

What language and version you're using. Python 3.12 and Python 2.7 are different. Ruby 3.3 and Ruby 2.5 are different. Don't make the AI guess.

What framework you're working in. A function that processes orders in vanilla Python looks different from one inside a Django view, a Rails controller, or a Next.js API route. Specify.

What you're building and where this code fits. "I'm building an e-commerce app. I have an Order model with fields: id, user_id, status (pending/shipped/delivered), and items (array of product IDs and quantities). I need a function that..." is a completely different prompt than "write a function to process orders."

What constraints exist. Are you integrating with an existing function? Do you need to maintain a specific interface? Are there performance considerations? Are there third-party services involved? These details change the output dramatically.

Here's a more useful version of the original prompt:

"I'm working in Rails 7 with PostgreSQL. I have an Order model with these columns: id, user_id, status (string, values: pending/processing/shipped/delivered), total_cents (integer). I need a method that transitions an order from 'pending' to 'processing', creates an OrderEvent record with the current timestamp and the user_id who triggered the transition, and raises an InvalidTransitionError if the order isn't in 'pending' status. We're using ActiveRecord and our existing error classes are in app/errors/."

That prompt gives the AI enough to produce something you can actually use.
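With a prompt like that, the output might look something like the following. This is a plain-Ruby sketch of the shape, not the actual Rails code: `Order` and `OrderEvent` are in-memory stand-ins here (the real version would use ActiveRecord and wrap both writes in a transaction), and the names come straight from the prompt above.

```ruby
# Plain-Ruby sketch of the method the prompt above asks for.
# In a real Rails app, Order would be an ActiveRecord model and the
# status update plus event creation would run inside one transaction;
# here we use in-memory stand-ins so the logic is easy to see.

class InvalidTransitionError < StandardError; end

OrderEvent = Struct.new(:order_id, :user_id, :occurred_at)

class Order
  attr_reader :id, :status, :events

  def initialize(id:, status: "pending")
    @id = id
    @status = status
    @events = []
  end

  # Transition pending -> processing, recording who triggered it and when.
  def start_processing!(user_id:)
    unless status == "pending"
      raise InvalidTransitionError,
            "cannot move order #{id} from #{status} to processing"
    end

    @status = "processing"
    @events << OrderEvent.new(id, user_id, Time.now)
    self
  end
end
```

In the Rails version you'd wrap the status change and the OrderEvent creation in a single transaction so they can't get out of sync, which is exactly the kind of detail worth asking the AI about.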


Be Specific About What You Want, Not Just What You Need

There's a difference between describing the problem and describing what you want the AI to produce. Both matter.

If you want a function, say you want a function. If you want a code review, say that. If you want the AI to explain a concept before giving you code, ask for the explanation first.

Some useful frames:

  • "Write a method that..." (you want code)
  • "Review this code and tell me what could go wrong..." (you want critique)
  • "Explain how X works, then show me an example in Ruby..." (you want explanation then example)
  • "I'm getting this error: [error]. Here's the relevant code: [code]. What's causing it?" (you want debugging help)
  • "Suggest three different ways to implement X, and explain the tradeoffs." (you want options)

The AI doesn't know which of these you want unless you tell it. Defaulting to "write me code" when you actually need to understand a concept first is a common source of wasted time.


Iterate. The First Output Is a Draft.

Most people treat the first AI response as final. They paste it in, run it, and either move on if it works or start from scratch if it doesn't. Neither is the right move.

Treat the first output as a draft. Then:

Ask for changes directly. "This looks close, but I need the error handling to throw a specific exception instead of returning nil. Also, the SQL query should use a prepared statement." You don't have to start a new conversation. Build on what's there.

Ask what assumptions it made. "What did you assume about my data model that isn't explicitly in the prompt?" This surfaces hidden decisions you may need to override.

Push on edge cases. "What happens if items is an empty array? What if user_id is nil?" Getting the AI to walk through its own code's failure modes is a fast way to find problems before they bite you.

Ask for the cleaner version. Sometimes the first output works but it's verbose or uses a pattern that doesn't fit your codebase. "Can you simplify this? We prefer early returns over nested conditionals" is a valid follow-up.

Iteration is where most of the value is. The first pass is the easy part.
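Those follow-ups tend to converge on code like this. Here's a hypothetical order-total function, before and after: the first version nests conditionals and silently returns 0 on bad input; the second uses guard clauses that make the edge cases you pushed on (nil user, empty items) explicit. The names are illustrative, not from any real codebase.

```ruby
# Before: nested conditionals, silent fallbacks on edge cases.
def order_total_v1(user_id, items)
  if user_id
    if items && !items.empty?
      items.sum { |item| item[:price_cents] * item[:quantity] }
    else
      0
    end
  else
    0
  end
end

# After: early returns make each edge case explicit -- a missing
# user is an error, an empty cart is a legitimate zero.
def order_total_v2(user_id, items)
  raise ArgumentError, "user_id is required" if user_id.nil?
  return 0 if items.nil? || items.empty?

  items.sum { |item| item[:price_cents] * item[:quantity] }
end
```

Note that the two versions behave differently on a nil user_id, and that's the point: pushing on edge cases forced a decision about whether that case should fail loudly or quietly.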


Ask the AI to Explain Its Choices

This one is underused and probably the most valuable habit you can build.

After you get a response, ask: "Why did you structure it this way?" or "What are the tradeoffs of this approach vs. alternatives?"

Two things happen when you do this:

First, you learn. If the AI explains that it used a transaction here because of a specific race condition concern, and that explanation is correct and relevant, you've just learned something about transactions and race conditions in your specific context. That sticks more than reading a generic tutorial.

Second, you catch problems. Sometimes the AI's explanation reveals that it made an assumption that doesn't apply to your situation, or used a pattern because it's common rather than because it's right for your case. The explanation exposes reasoning that the code alone keeps hidden.

This is especially useful when learning something new. If you're working in a framework you don't know well, asking the AI to explain why it chose a particular approach tells you whether to trust the output or whether to go read the docs first.

Learning through this kind of back-and-forth is one of the legitimate ways to use AI as a learning tool rather than just a code generator.


Giving the AI Your Code to Review

One of the most underused prompt patterns for developers is asking the AI to find problems in code you've already written.

This works well:

"Here's a function I wrote. Find any edge cases I might be missing, potential security issues, and any patterns that would be considered bad practice in Ruby. Don't rewrite it unless I ask you to."

"Review this SQL query for potential injection vulnerabilities and performance problems."

"I wrote this authentication middleware. What could go wrong with it in production?"

The reason this works is that you're in the driver's seat. The AI is reviewing your work rather than replacing it. You get to see what it flags, decide which concerns are real for your context, and make the changes yourself. That process builds judgment in a way that generating code from scratch doesn't.
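As a concrete illustration of what this kind of review surfaces, here's a small made-up discount method with the sorts of problems a "what could go wrong?" prompt typically flags, marked in comments, followed by a version that incorporates the feedback. Both method names are hypothetical.

```ruby
# A deliberately flawed method, as you might submit it for review.
# Comments mark what a "what could go wrong?" prompt typically flags.
def apply_discount(total_cents, percent)
  # Flagged: no validation -- a percent of 150 or -10 passes silently,
  #   producing a negative or inflated total.
  # Flagged: integer division -- (total_cents * percent / 100) truncates,
  #   which may or may not be what billing expects.
  total_cents - (total_cents * percent / 100)
end

# A version after deciding which flags are real for your context:
# here, range validation matters; truncation was deemed acceptable.
def apply_discount_checked(total_cents, percent)
  raise ArgumentError, "percent must be 0..100" unless (0..100).cover?(percent)
  total_cents - (total_cents * percent / 100)
end
```

The important step isn't the fixed version; it's the decision in between, where you judged which flagged concerns actually applied.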

This is also good interview prep. If you get used to asking "what could go wrong here?" about your own code, you'll be much better at answering that question out loud during a technical interview.


Context Windows and Long Conversations

One thing that bites developers who are new to AI tools: long conversations degrade in quality. As a chat gets longer, the AI has more to keep track of, and its focus on earlier details can drift.

If you're working on a complex problem over a long session and the outputs start getting worse or less relevant, start a new conversation. Paste in the relevant context fresh rather than relying on the AI to remember details from 50 messages ago.

For longer projects, some developers keep a "context block" they paste at the start of new sessions. Something like: "I'm working in Rails 7 / PostgreSQL on an e-commerce app. Current task: [task]. Relevant models: [models]. Constraints: [constraints]." That gives each session a clean starting point.


What "Prompt Engineering" Actually Is

There are courses and certifications now for "prompt engineering." Most of them are more elaborate than the skill requires. For developers working with code-focused AI tools, good prompting comes down to three things:

Give enough context. Language, framework, what you're building, where this fits, what you already have.

Be clear about what you want. Code, explanation, review, options, debugging help. Name it.

Iterate. Treat the first response as a starting point, not a final answer. Ask follow-ups. Push on edge cases. Ask for explanations.

That's most of it. The fancy techniques matter at the margins.


Prompts for Specific Situations

Here are a few prompt patterns that work well for common developer situations:

Debugging: "I'm getting [specific error message] in [language/framework]. Here's the code: [code]. Here's what I've already tried: [attempts]. What am I missing?"

Learning a new concept: "I'm trying to understand database indexing. Explain it briefly, then show me an example using PostgreSQL. Then tell me the three most common mistakes developers make with indexes."

Code review: "Review this code for: (1) correctness, (2) security issues, (3) anything that would stand out as a code smell in a professional code review. Don't rewrite it."

Exploring options: "I need to implement rate limiting in a Rails API. Give me three approaches with different tradeoffs, and recommend which one to use for a small-team production app handling ~10k requests/day."

Architecture questions: "I'm building [thing]. Should I use [option A] or [option B]? Here's my context: [context]. What would you recommend and why?"
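To make the "exploring options" pattern concrete: for that rate-limiting prompt, an AI will likely propose some mix of an in-memory counter, a Redis-backed counter, and a middleware gem like Rack::Attack. The simplest of those, a fixed-window in-memory counter, might be sketched like this. This is an illustrative sketch, suitable only for a single-process app:

```ruby
# Minimal fixed-window rate limiter: at most `limit` requests per
# client within each `window`-second bucket. In-memory only, so it
# resets on restart and shares no state across processes --
# fine as a sketch, not for a multi-server deployment.
class RateLimiter
  def initialize(limit:, window: 60)
    @limit = limit
    @window = window
    @counts = Hash.new(0) # {[client_id, window_start] => count}
  end

  def allow?(client_id, now: Time.now.to_i)
    bucket = [client_id, now / @window]
    @counts[bucket] += 1
    @counts[bucket] <= @limit
  end
end
```

A good follow-up prompt here is exactly the tradeoff question: what breaks when this app moves to two servers, and which of the other two approaches fixes that?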


What This Looks Like in Practice

The habit to build is this: before you type anything into an AI tool, spend 30 seconds asking yourself what context is missing from the prompt you're about to write.

What language, framework, and version? What does the existing code look like? What constraints does the output need to respect? What specifically do you want back?

Once you've filled in those gaps, write the prompt. Then, after you get a response, don't just run it. Read it. Ask one follow-up. Push on something.

That's the whole practice. It's not complicated. But it changes the quality of what you get.

For what to do once you've gotten AI-generated code and need to know whether it's actually safe to commit, read how to review AI-generated code before you commit it. And for a broader view of where AI tools fit in your work as an early-career engineer, the AI tools guide for junior engineers covers more ground.

If you want structured support with using AI tools effectively as a junior engineer, here's how the Globally Scoped program works.
