The 5 Algorithmic Patterns That Cover 80% of Coding Interviews
TL;DR
- Two pointers, sliding window, hash map lookups, BFS/DFS, and dynamic programming cover most interview problems.
- Pattern recognition beats problem memorization. Learn the triggers, not just the solutions.
- Each pattern has a signal: a specific shape of problem that tells you which approach to reach for.
- You can get strong at all five in 4-6 weeks of focused practice.
- Knowing the patterns makes take-homes easier too, not just live coding interviews.
If you've spent any time on LeetCode, you've probably felt this: you grind through a problem, read the solution, understand it completely, and then two days later encounter a slightly different version of the same problem and go blank.
That's not a memory problem. It's a pattern recognition problem.
The best approach to coding interview prep isn't memorizing 300 solutions. It's building the ability to look at a problem and recognize its shape. Once you know what category a problem belongs to, the solution approach follows naturally. Before we get into the patterns themselves, one question is worth settling first: how much LeetCode practice is actually enough? That article pairs directly with this one: this one teaches you what to practice; that one helps you figure out when to stop.
Here are the five algorithmic patterns that show up most often in coding interviews, what they look like, and how to recognize when to use them.
Pattern 1: Two Pointers
What It Is
Two pointers is exactly what it sounds like: you maintain two index positions in an array (or two nodes in a linked list) and move them toward each other or in the same direction based on some condition.
The pattern is most useful when you're working with a sorted array and trying to find pairs, triplets, or subarrays that meet a constraint.
The Trigger
Watch for these signals in the problem:
- The input is a sorted array
- You're looking for two elements that sum to a target
- You need to find pairs or triplets that satisfy a condition
- You want to remove duplicates or partition an array in place
When you see "sorted array" and "pairs" in the same sentence, your first instinct should be two pointers.
A Simple Example
Problem: Given a sorted array nums and a target integer, return the indices of two numbers that add up to the target.
Two-pointer solution: place left at index 0 and right at the last index. If nums[left] + nums[right] equals the target, you're done. If the sum is too low, move left one step to the right. If the sum is too high, move right one step to the left. You find the answer in O(n) time without nested loops.
The brute-force approach checks every pair: O(n²). The two-pointer approach cuts that to O(n) by using the sorted order.
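The walk-through above can be sketched in a few lines. This is an illustrative sketch, not the only way to write it; the function name `two_sum_sorted` is just a placeholder.

```python
def two_sum_sorted(nums, target):
    """Return indices of two numbers in sorted nums that add up to target."""
    left, right = 0, len(nums) - 1
    while left < right:
        current = nums[left] + nums[right]
        if current == target:
            return left, right
        if current < target:
            left += 1   # sum too low: advance the left pointer
        else:
            right -= 1  # sum too high: pull the right pointer back
    return None         # no pair adds up to the target

print(two_sum_sorted([2, 7, 11, 15], 9))  # (0, 1)
```

Note that the whole approach depends on the array being sorted; on unsorted input, moving a pointer tells you nothing about the sum.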
Why It's Worth Knowing
The insight behind two pointers generalizes well. You'll use variations of it for the "3Sum" problem, for partitioning arrays, and for cycle detection in linked lists (where you use fast and slow pointers instead of left and right). Once you internalize the idea, a whole class of problems starts to feel familiar.
Pattern 2: Sliding Window
What It Is
Sliding window is for problems where you need to find an optimal subarray or substring within a larger array or string. Instead of checking every possible subarray from scratch, you maintain a "window" that you expand and contract as you move through the input.
Think of it like a frame you slide across the data. You add elements as you expand the right side and remove them as you shrink the left side.
The Trigger
The pattern fits when:
- The problem asks about subarrays or substrings
- You're optimizing something: maximum sum, minimum length, longest substring with a constraint
- The problem mentions a fixed-size window ("subarrays of size k")
The phrase "contiguous subarray" is almost always a sliding window hint.
A Simple Example
Problem: Find the maximum sum of any contiguous subarray of size k.
Sliding window approach: compute the sum of the first k elements. Then slide the window right one step at a time: add the new element on the right, subtract the element that just left on the left. Track the maximum as you go. This runs in O(n) instead of O(n*k).
Variable-size windows work similarly, but you adjust the shrink condition based on a constraint (like "no more than two distinct characters in the window").
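The fixed-size version described above can be sketched like this (a minimal example; `max_subarray_sum` is a hypothetical name):

```python
def max_subarray_sum(nums, k):
    """Maximum sum of any contiguous subarray of size k."""
    window = sum(nums[:k])               # sum of the first window
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide: add the new element, drop the old
        best = max(best, window)
    return best

print(max_subarray_sum([2, 1, 5, 1, 3, 2], 3))  # 9  (the window 5 + 1 + 3)
```

The key move is the constant-time update: each slide adjusts the running sum instead of resumming k elements, which is what brings the cost down from O(n*k) to O(n).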
Why It's Worth Knowing
Sliding window appears constantly in string problems, which are common in interviews across the board. Once you can visualize the window expanding and contracting, you stop staring at substring problems wondering where to start.
Pattern 3: Hash Map and Hash Set Lookups
What It Is
This one is less about a specific algorithm and more about a data structure reflex. When you need to check whether something exists, count how many times you've seen something, or find a complement to a value you're looking at, the answer is almost always a hash map or hash set.
The key property: hash map lookups happen in O(1) average time. That turns an O(n²) brute force into an O(n) solution.
The Trigger
Reach for a hash map or set when:
- You need to check "have I seen this before?"
- You're counting frequencies (how many times does each character appear?)
- You're looking for a pair that meets a condition (two-sum with an unsorted array)
- You need to group things by a key (group anagrams together)
A Simple Example
Problem: Given an unsorted array of integers and a target, find two numbers that sum to the target.
Hash map approach: iterate through the array. For each number n, check whether target - n is already in your map. If it is, you found the pair. If not, add n to the map and keep going.
You solve it in one pass, O(n) time and O(n) space.
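The one-pass approach looks like this in practice (a sketch; variable names are illustrative):

```python
def two_sum(nums, target):
    """Return indices of two numbers summing to target, in one pass."""
    seen = {}                          # value -> index of where we saw it
    for i, n in enumerate(nums):
        if target - n in seen:         # is the complement already stored?
            return seen[target - n], i
        seen[n] = i
    return None

print(two_sum([3, 2, 4], 6))  # (1, 2)
```

Each lookup and insert is O(1) on average, which is exactly where the O(n²)-to-O(n) improvement comes from.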
Why It's Worth Knowing
The hash map pattern is everywhere. It shows up in string problems, array problems, and graph problems. Once you train yourself to ask "could I store something here to avoid repeating work?", you'll find it applies to a wide range of problems that don't obviously look like "hash map problems" at first glance.
Pattern 4: Tree and Graph Traversal (BFS and DFS)
What It Is
BFS (breadth-first search) and DFS (depth-first search) are the two fundamental ways to explore a tree or graph. Nearly every tree and graph problem is some variation on one of them.
DFS dives deep: it picks a branch and follows it all the way before backing up. You implement it with recursion or an explicit stack. BFS explores level by level: all neighbors before going deeper. You implement it with a queue.
The Trigger
Use DFS when:
- You need to explore all paths or all possibilities
- You're checking whether a path exists in a tree or graph
- You're generating permutations, combinations, or subsets (backtracking is DFS with undo steps)
- You need to reach the leaves of a tree
Use BFS when:
- You're finding the shortest path in an unweighted graph
- You're processing nodes level by level (tree level-order traversal)
- You need the closest/nearest answer (fewest steps, minimum moves)
A Simple Example
DFS problem: Given a binary tree, find all root-to-leaf paths. Use recursive DFS: at each node, append the current value to a running path and recurse left and right. When you hit a leaf, add the current path to your results.
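A sketch of the DFS version, assuming a minimal binary-tree node class:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def root_to_leaf_paths(root):
    """Collect every root-to-leaf path via recursive DFS."""
    paths = []

    def dfs(node, path):
        if node is None:
            return
        path = path + [node.val]                      # extend the running path
        if node.left is None and node.right is None:  # leaf: record the path
            paths.append(path)
            return
        dfs(node.left, path)
        dfs(node.right, path)

    dfs(root, [])
    return paths

tree = Node(1, Node(2, Node(4)), Node(3))
print(root_to_leaf_paths(tree))  # [[1, 2, 4], [1, 3]]
```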
BFS problem: Find the shortest path between two nodes in an unweighted graph. Use a queue. Start at the source. Add all neighbors to the queue. Track visited nodes. The first time you reach the destination, that's the shortest path length.
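The BFS version, sketched with an adjacency-list graph (the dict representation here is one common convention, not the only one):

```python
from collections import deque

def shortest_path_length(graph, source, dest):
    """Fewest edges from source to dest in an unweighted graph."""
    queue = deque([(source, 0)])
    visited = {source}
    while queue:
        node, dist = queue.popleft()
        if node == dest:
            return dist                # first arrival is shortest under BFS
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return -1                          # dest unreachable from source

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_path_length(graph, "A", "D"))  # 2
```

Marking nodes visited when they are enqueued, not when they are dequeued, keeps each node in the queue at most once.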
Why It's Worth Knowing
Trees and graphs show up constantly: binary trees, N-ary trees, adjacency lists, grids treated as graphs. The actual structure varies but the traversal logic doesn't. If you're solid on BFS and DFS, you can handle most tree and graph problems by deciding which one fits and then adapting the standard template.
Grids deserve a special mention here. A 2D grid is a graph where each cell connects to its up/down/left/right neighbors. Once you see that, flood fill, island counting, and shortest-path-in-a-maze are all just BFS/DFS in disguise.
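Island counting makes the grid-as-graph idea concrete. A sketch (this version mutates the grid to mark visited cells, which is a common interview shortcut; a separate visited set works too):

```python
def count_islands(grid):
    """Count connected groups of 1s in a grid of 0s and 1s."""
    rows, cols = len(grid), len(grid[0])

    def sink(r, c):
        # DFS: stop at the grid edge or at water
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] == 0:
            return
        grid[r][c] = 0                                # mark visited by "sinking" the land
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            sink(r + dr, c + dc)

    islands = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                islands += 1                          # unvisited land: a new island
                sink(r, c)
    return islands

print(count_islands([[1, 1, 0], [0, 0, 0], [0, 0, 1]]))  # 2
```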
Pattern 5: Dynamic Programming (Simplified)
What It Is
Dynamic programming (DP) gets treated like a final boss in interview prep. It doesn't have to be.
At its core, DP is about solving a problem by breaking it into smaller subproblems, solving each subproblem once, and storing the result so you don't solve it again. The two common approaches are top-down (recursion with memoization) and bottom-up (building a table iteratively).
The Trigger
The clearest signal for DP is this combination:
- The problem asks for a count, maximum, minimum, or boolean (can you do it?)
- The problem involves choices at each step, and those choices affect what's available later
- The same subproblem appears multiple times if you think about it recursively
Classic DP problems include coin change, longest common subsequence, 0/1 knapsack, and climbing stairs. These are archetypes. Many DP problems you'll see in interviews are variations on one of them.
A Simple Example
Problem: You're climbing stairs. Each step you can take 1 or 2 stairs. How many distinct ways are there to reach the top?
This is Fibonacci in disguise. The number of ways to reach stair n equals the ways to reach stair n-1 (then take 1 step) plus the ways to reach stair n-2 (then take 2 steps). Build a table from the base cases: dp[1] = 1, dp[2] = 2. Fill forward from there.
You don't need to see this as "DP" from the start. You can start by writing the naive recursion, notice that you're computing the same subproblems repeatedly, and add memoization. That's a legitimate path to the optimal solution.
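Both routes described above, memoized recursion and a bottom-up table, fit in a few lines (a sketch; function names are illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_memo(n):
    """Top-down: the naive recursion plus memoization."""
    if n <= 2:
        return n
    return climb_memo(n - 1) + climb_memo(n - 2)

def climb_table(n):
    """Bottom-up: build the table forward from the base cases."""
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]  # ways via a 1-step plus ways via a 2-step
    return dp[n]

print(climb_memo(10), climb_table(10))  # 89 89
```

Both run in O(n); the bottom-up version avoids recursion depth limits, which matters for large n.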
The Real Goal with DP
You don't need to master every DP variant before your interview. What you need is:
- The ability to recognize the DP trigger (choices + overlapping subproblems + optimize/count)
- The habit of writing the recurrence relation before you code anything
- A few solved archetypes you can reference when a new problem resembles one you know
Start with these: climbing stairs, coin change, longest increasing subsequence, 0/1 knapsack, and word break. Once you understand the structure of each, you'll start to see echoes of them in unfamiliar problems.
How Algorithmic Patterns Apply Beyond Live Interviews
These patterns show up in take-home challenges too, not just whiteboard sessions. You might get a data processing problem that is essentially a sliding window. An API response that needs grouping is a hash map problem. A dependency resolution question is graph traversal.
The Take-Home Coding Challenge guide covers how to approach those submissions. But knowing the patterns means you're not starting from scratch when a take-home looks unfamiliar. You're recognizing the shape.
Building Pattern Recognition: How to Actually Practice
Pattern recognition is a skill you build through repetition with variation, not through solving hundreds of unique problems. Here's what works:
Group your practice by pattern, not by difficulty. Solve 5-8 two-pointer problems before moving to sliding window. Solve 5-8 sliding window problems before moving to BFS. You're training your brain to associate problem shapes with approaches.
Label problems before you look at the solution. Before reading any hint, write down what pattern you think this is and why. You'll be wrong sometimes. That's the point. The moment you realize you guessed wrong is when the learning actually happens.
Review problems you've already solved. Come back to problems you solved a week ago and try to re-solve them without looking at your previous solution. If you can reconstruct the approach from scratch, you own the pattern. If you can't, you memorized the solution.
Write the pattern, not just the code. After solving a problem, write one sentence: "This is a [pattern] problem because [trigger I noticed]." Doing this deliberately accelerates pattern recognition faster than just solving more problems.
What This Doesn't Cover
Five patterns don't cover everything. You'll encounter problems involving heaps, tries, union-find, monotonic stacks, and more as you go deeper into interview prep. Some companies run interviews that go well beyond these five.
But here's what's true: the majority of entry-level and mid-level coding interviews stay within these patterns. They're the foundation. Getting genuinely solid at these five is more valuable than having surface-level exposure to twenty patterns.
And a point worth making directly: pattern fluency won't substitute for a strong portfolio. The candidates who get hired consistently show two things: technical interview competence AND real projects they can talk about. If you're putting all your time into algorithmic prep without building anything, take a look at The GitHub Profile That Actually Gets You Hired before your next interview cycle.
A Practical 4-Week Plan
If you have a month before interviews start, this is a reasonable structure:
Week 1: Two pointers and sliding window. Solve 6-8 problems from each. Focus on recognizing when to use each and why.
Week 2: Hash map and hash set problems. Solve classic two-sum variations, frequency counting, and grouping problems.
Week 3: Trees and graphs. Start with basic BFS/DFS traversals on trees, then move to graph problems and 2D grid problems.
Week 4: Dynamic programming. Solve the 5 archetypes listed above. Practice writing the recurrence relation on paper before touching the keyboard.
That's roughly 40-50 problems with clear intent, not 200 random problems with no framework. Students who come through our program and focus their practice this way consistently report that interviews feel less like memory tests and more like pattern-matching exercises. The anxiety doesn't disappear, but the problems start to feel familiar.
The goal of coding interview prep isn't to have seen every problem. It's to have built enough pattern fluency that unfamiliar problems feel like variations on familiar ones. Five patterns, practiced deliberately, gets you most of the way there.
For the complementary skills, how to approach a coding problem you've never seen before covers the thinking process step by step. Big O notation for interviews covers how to talk about complexity once you have a solution. And live coding: how to think out loud covers how to communicate your pattern recognition in real time. If you want to dig into how much practice volume makes sense alongside this pattern work, LeetCode: How Much Is Actually Enough covers that directly.
Interested in the program?