
Big O Notation for Interviews: How to Talk About Time and Space Complexity

TL;DR

  • Five complexities cover the vast majority of interview questions: O(1), O(log n), O(n), O(n log n), and O(n²).
  • You need to recognize these in code you're writing, not just memorize definitions.
  • Nested loops usually mean O(n²). Hash map lookups are O(1). Sorting is O(n log n). These are the patterns that repeat.
  • Space complexity gets asked almost as often as time complexity. Know how to account for what your solution stores in memory.
  • The skill isn't just knowing the answer. It's being able to explain your reasoning out loud while you code.

Big O notation is one of those topics that gets either over-explained (full CS theory with limit notation) or brushed past ("just memorize these"). Neither approach actually prepares you for how it comes up in interviews.

In a coding interview, Big O isn't a definition you recite. It's a running commentary on your solution. Interviewers want to hear you say things like "this is O(n) time because we're iterating through the array once" while you're writing the code, not after you've finished. They want to see that you think about efficiency as part of designing a solution, not as an afterthought.

This guide covers the five complexities that show up in interviews most often, how to recognize them in code you're writing, and how to talk about them naturally.

Why Big O Matters in Interviews

Before getting into the complexities themselves, it's worth being concrete about why this matters.

When you write a solution in a coding interview, the interviewer almost always asks you to analyze its time and space complexity. This isn't a gotcha. It's one of the primary things being evaluated. Can you reason about the efficiency of your own code? Can you explain the tradeoffs?

Some interviewers ask explicitly at the end: "What's the time complexity of your solution?" Others expect you to volunteer it as you go. The candidates who proactively explain complexity come across as more technically mature.

There's also a practical reason: if you know your solution is O(n²), you know to look for something better, and you can say so: "This brute force is O(n²). I think we can do better. Let me see if there's a way to reduce that."

The Five Complexities That Actually Matter

O(1): Constant Time

Your algorithm takes the same amount of time regardless of the size of the input.

Examples:

  • Looking up a value in a hash map by key
  • Accessing an element in an array by index
  • Pushing or popping from a stack

The key idea: no matter how large n gets, the operation takes roughly the same time. You're not iterating. You're jumping directly to what you need.

In interview code, hash map lookups are the most common O(1) operation. When you say "I'll use a hash map so I can check membership in constant time," you're making an O(1) choice consciously. Say that out loud.
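As a minimal sketch of that choice (the `is_known` function and the names in the set are made up for illustration), here's what an O(1) membership check looks like: the key is hashed and we jump straight to its bucket, with no iteration over the collection.

```python
# Membership in a set is O(1) on average, no matter how large the set is.
seen = {"alice", "bob", "carol"}

def is_known(name):
    # Hash the key and jump to its bucket: no loop over the elements.
    return name in seen

print(is_known("bob"))   # True
print(is_known("dave"))  # False
```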

O(log n): Logarithmic Time

Your algorithm cuts the problem in half (or some fraction) at each step.

Examples:

  • Binary search on a sorted array
  • Finding an element in a balanced binary search tree

The key idea: you don't look at every element. You eliminate half the possibilities with each step. An array of a million elements takes about 20 steps with binary search. That's the power of logarithmic growth.

In interviews, O(log n) usually comes up when the input is sorted or tree-structured. If you're doing binary search or BST traversal, say "this is O(log n) because we're eliminating half the search space with each comparison."
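A quick sketch of the canonical O(log n) algorithm: binary search over a sorted array. Each comparison discards half of the remaining candidates, which is exactly where the logarithm comes from.

```python
def binary_search(sorted_array, target):
    """Return the index of target in sorted_array, or -1 if absent.

    O(log n): every comparison halves the remaining search space.
    """
    left, right = 0, len(sorted_array) - 1
    while left <= right:
        mid = (left + right) // 2
        if sorted_array[mid] == target:
            return mid
        elif sorted_array[mid] < target:
            left = mid + 1   # target is in the right half
        else:
            right = mid - 1  # target is in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```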

O(n): Linear Time

Your algorithm looks at each element in the input once (or a constant number of times).

Examples:

  • Iterating through an array
  • Finding the maximum value in an unsorted list
  • Building a hash map from an array

The key idea: as the input doubles in size, the work doubles. It's proportional. Linear time is often the best you can do for problems that require reading every element at least once.

In interviews, a single loop over the input is O(n). Saying "this is O(n) because we touch each element once" is correct and sufficient.
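For instance, finding the maximum of an unsorted list (one of the examples above) is a single pass, so it's O(n):

```python
def find_max(values):
    # O(n): one pass, each element examined exactly once.
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

print(find_max([3, 9, 2]))  # 9
```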

O(n log n): Log-Linear Time

Your algorithm is roughly n operations each taking log n time, or log n operations each taking n time.

Examples:

  • Any comparison-based sorting algorithm: merge sort, quicksort (average case), timsort
  • Heap sort
  • Some divide-and-conquer algorithms

The key idea: this is the complexity you get when you sort something. Sorting is so common in interviews that knowing "sorting costs O(n log n)" is worth memorizing as a fact. Many solutions involve sorting first, then doing O(n) work after, which gives O(n log n) overall.

In interviews, when your solution involves sorting the input, say "sorting costs O(n log n), and after that we do a linear scan, so the total is O(n log n)."
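Here's a small illustration of that "sort first, then linear work" shape (the duplicate-detection problem is just a stand-in): the sort dominates, so the whole thing is O(n log n).

```python
def has_duplicate(values):
    # Sort: O(n log n). Adjacent-pair scan: O(n). Total: O(n log n).
    ordered = sorted(values)
    for i in range(len(ordered) - 1):
        if ordered[i] == ordered[i + 1]:
            return True
    return False

print(has_duplicate([3, 1, 3]))  # True
print(has_duplicate([1, 2, 3]))  # False
```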

O(n²): Quadratic Time

Your algorithm has a nested loop where both loops run n times.

Examples:

  • Comparing every pair of elements in an array
  • Bubble sort, insertion sort, selection sort
  • A naive solution that checks all possible substrings

The key idea: if n doubles, the work quadruples. O(n²) solutions can work fine for small inputs but become slow quickly. For n = 1,000, that's a million operations. For n = 100,000, that's 10 billion. Most interview problems that have O(n²) brute force solutions also have better approaches.

In interviews, nested loops are the signal. When you write a loop inside a loop and both iterate over the full input, that's O(n²). Say it: "this nested loop makes this O(n²). I'd like to see if we can reduce that."
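A sketch of the classic O(n²) brute force (the pair-sum problem here is just an example of "compare every pair"): roughly n²/2 comparisons in the worst case.

```python
def has_pair_with_sum(values, target):
    # Nested loops over the same input: ~n*(n-1)/2 comparisons, O(n^2).
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] + values[j] == target:
                return True
    return False

print(has_pair_with_sum([1, 4, 6], 10))  # True
```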

How to Recognize Complexity in Code You're Writing

Pattern recognition is faster than analysis from first principles. Here are the common patterns:

Single loop over input: O(n)

```python
for element in array:
    do_something(element)
```

Nested loops, both over input: O(n²)

```python
for i in range(len(array)):
    for j in range(len(array)):
        compare(array[i], array[j])
```

Loop where you halve the search space each iteration: O(log n)

```python
while left <= right:
    mid = (left + right) // 2
    if target < array[mid]:
        right = mid - 1
    else:
        left = mid + 1
```

Sorting the input: O(n log n) for the sort step

Hash map lookup inside a loop: O(n) total (the loop runs n iterations, and each lookup is O(1))

```python
for element in array:
    if element in seen_set:  # O(1)
        return True
    seen_set.add(element)
```

That last pattern is important. If you replaced the hash set with a list and used if element in seen_list, the lookup becomes O(n), turning the overall algorithm from O(n) to O(n²). This is why the data structure you choose directly affects time complexity.
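To make that concrete, here's a hedged side-by-side sketch (the function names are made up): the same duplicate-finding logic with a set versus a list. The only difference is the data structure, but the complexity changes from O(n) to O(n²).

```python
def first_duplicate_fast(values):
    # O(n) overall: n iterations, each set lookup is O(1) on average.
    seen = set()
    for v in values:
        if v in seen:
            return v
        seen.add(v)
    return None

def first_duplicate_slow(values):
    # O(n^2) overall: `in` on a list scans it, so each lookup is O(n).
    seen = []
    for v in values:
        if v in seen:
            return v
        seen.append(v)
    return None

print(first_duplicate_fast([2, 5, 2]))  # 2
```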

Space Complexity: What It Means and Why It Gets Asked

Space complexity measures how much additional memory your solution uses relative to the input size.

The same five complexity classes apply. An O(1) space solution uses a fixed amount of extra memory regardless of input size (a few variables). An O(n) space solution uses memory proportional to the input size (like building a hash map of all elements).

Two things to know:

The call stack counts. Recursive solutions use stack space proportional to the recursion depth. A recursive DFS on a tree with n nodes uses O(n) space in the worst case (a completely unbalanced tree where every call is nested). This often surprises people.
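A small sketch of why (the minimal `Node` class here is a hypothetical definition for illustration): in a completely unbalanced tree, every recursive call is nested inside the previous one, so the stack depth equals the node count.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def depth(node):
    # Recursive DFS: a stack frame stays alive for every ancestor on the
    # current path, so worst-case space is O(n) on an unbalanced tree.
    if node is None:
        return 0
    return 1 + max(depth(node.left), depth(node.right))

# A degenerate, list-shaped tree: recursion depth equals the node count.
root = Node(1, right=Node(2, right=Node(3)))
print(depth(root))  # 3
```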

In-place algorithms are O(1) space. If you're sorting or transforming an array by modifying it directly, without allocating new data structures that grow with the input, you're using O(1) extra space.
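As a quick illustration, reversing an array with two pointers modifies it in place and allocates only two index variables, so the extra space is O(1):

```python
def reverse_in_place(values):
    # O(1) extra space: two index variables, no new array.
    left, right = 0, len(values) - 1
    while left < right:
        values[left], values[right] = values[right], values[left]
        left += 1
        right -= 1
    return values

print(reverse_in_place([1, 2, 3, 4]))  # [4, 3, 2, 1]
```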

When interviewers ask about space complexity, they're often checking whether you thought about this at all, not whether you've minimized it. Saying "my solution is O(n) space because I'm storing each element in the hash map" is a complete answer. Whether you then try to optimize that is a separate question.

How to Talk About It in an Interview

The goal is to make complexity analysis part of your narration, not a separate step at the end.

While writing your solution:

  • "I'm using a hash map here for O(1) lookups."
  • "This inner loop makes this section O(n) for each outer iteration, so we're looking at O(n²) overall."
  • "I'm going to sort first, which costs O(n log n), but then the rest should be linear."

When you finish:

  • "The overall time complexity is O(n) because we iterate through the array once and each hash map operation is constant time."
  • "Space complexity is O(n) because in the worst case we store every element in the set."

If there's a better solution:

  • "This is O(n²) right now. I think we can improve it. If we sort the input first, we can use two pointers and get it down to O(n log n)."
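As a sketch of what that improvement might look like for a pair-sum problem (an assumed example; the function name is made up), sorting once and walking two pointers inward replaces the nested loops:

```python
def pair_sum_two_pointers(values, target):
    # Sort once: O(n log n). Then one inward two-pointer pass: O(n).
    ordered = sorted(values)
    left, right = 0, len(ordered) - 1
    while left < right:
        total = ordered[left] + ordered[right]
        if total == target:
            return True
        if total < target:
            left += 1   # need a larger sum
        else:
            right -= 1  # need a smaller sum
    return False

print(pair_sum_two_pointers([6, 1, 4], 10))  # True
```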

If you're not certain:

  • "I believe this is O(n log n) because of the sort, but let me trace through to confirm."

Uncertainty stated openly is better than confident wrongness. The approach to unknown coding problems covers this as part of a broader problem-solving framework.

The Questions Interviewers Actually Ask

These come up repeatedly:

"What's the time complexity of your solution?" Answer with the overall complexity and a one-sentence explanation of why. Don't just say "O(n log n)" with nothing after it.

"Can you do better?" This is asking whether there's a more efficient approach. Engage with it honestly. If you think you're already at the theoretical optimum, say why. If you think there might be a better way but you're not sure how, say that.

"What's the space complexity?" Make sure you've accounted for everything: data structures you allocated, the call stack for recursive solutions, any output you're building.

"Why did you use a hash map instead of a list?" This is really a complexity question. The answer is usually "O(1) lookup instead of O(n)."

Building the Pattern Recognition

The best way to get fluent at complexity analysis is to analyze every solution you write during practice, not just check whether it produces the right output.

For any solution you write, ask yourself:

  • How many times does each element get touched?
  • Are there nested loops? What do they iterate over?
  • What data structures am I using, and what are their lookup/insertion/deletion costs?
  • If my solution is recursive, what's the depth of recursion?

Do this consistently and complexity analysis becomes automatic. You'll start recognizing O(n²) before you finish typing the inner loop, which gives you the chance to redesign before you've committed to the wrong approach.

The algorithmic patterns for coding interviews article covers the most common problem types and the complexity characteristics of each: two pointers, sliding window, BFS/DFS. Knowing those patterns means knowing their typical complexity profiles. The how much LeetCode is enough article is useful context for structuring your practice so that complexity analysis becomes part of the habit, not an afterthought.

Big O is one of those topics where the gap between "knowing the theory" and "using it fluently in an interview" is real. Practice explaining it out loud. That's the skill that actually transfers.
