cheatsheet
First, let's talk about the time complexity of common operations, split by
data structure/algorithm. Then, we'll talk about reasonable complexities
given input sizes.
Arrays (dynamic array/list)
Given n = arr.length,
Strings (immutable)
Given n = s.length,
Linked Lists
Hash table/dictionary
Given n = dic.length,
Note: the O(1) operations are constant relative to n. In reality, the hashing
algorithm might be expensive. For example, if your keys are strings, then
it will cost O(m) where m is the length of the string. The operations only
take constant time relative to the size of the hash map.
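To see where that hidden hashing cost lives, here is a minimal Python sketch (the word-counting example is made up for illustration, not taken from the course):

```python
def count_words(words):
    counts = {}
    for word in words:                             # n iterations
        # Each dictionary operation is O(1) relative to the number of entries,
        # but hashing the string key still costs O(m), where m = len(word).
        counts[word] = counts.get(word, 0) + 1
    return counts

print(count_words(["leet", "code", "leet"]))       # {'leet': 2, 'code': 1}
```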
Set
Given n = set.length,
Stack
Given n = stack.length,
Queue
Given n = queue.length,
Binary trees (DFS/BFS)
Given n = the number of nodes in the tree,
Most algorithms will run in O(n⋅k) time, where k is the work done at each
node, usually O(1). This is just a general rule and not always the case. We
are assuming here that BFS is implemented with an efficient queue.
The average case is when the tree is well balanced - each depth is close
to full. The worst case is when the tree is just a straight line.
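As a sketch of what an "efficient queue" looks like in practice, here is a BFS traversal using collections.deque; the TreeNode shape (val/left/right attributes) is an assumption, since it isn't defined in this cheat sheet:

```python
from collections import deque

def bfs(root):
    if not root:
        return []
    order = []
    queue = deque([root])            # deque gives O(1) appends and O(1) pops from the left
    while queue:
        node = queue.popleft()
        order.append(node.val)       # O(1) work per node, so O(n) overall
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return order
```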
Heap/Priority Queue
Binary search
Binary search runs in O(log n) in the worst case, where n is the size of
your initial search space.
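For reference, a minimal sketch of binary search over a sorted list (the function name and test values are illustrative):

```python
def binary_search(arr, target):
    # Each iteration halves the search space, giving O(log n) worst case.
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1                        # target is not present

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```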
Miscellaneous
n <= 10
10 < n <= 20
The expected time complexity likely involves O(2^n). Any higher base or a
factorial will be too slow (3^20 is ~3.5 billion, and 20! is much larger).
An O(2^n) complexity usually implies that given a collection of elements, you are
considering all subsets/subsequences - for each element, there are two
choices: take it or don't take it.
Again, this bound is very small, so most algorithms that are correct will
probably be fast enough. Consider backtracking and recursion.
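To make "take it or don't take it" concrete, here is a minimal backtracking sketch that generates all 2^n subsets (the function name and input are illustrative):

```python
def subsets(nums):
    result = []

    def backtrack(i, current):
        if i == len(nums):
            result.append(current[:])    # record one complete choice sequence
            return
        backtrack(i + 1, current)        # choice 1: don't take nums[i]
        current.append(nums[i])
        backtrack(i + 1, current)        # choice 2: take nums[i]
        current.pop()

    backtrack(0, [])
    return result

print(subsets([1, 2, 3]))                # 8 subsets for n = 3
```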
Consider brute force solutions that involve nested loops. If you come up
with a brute force solution, try analyzing the algorithm to find what steps
are "slow", and try to improve on those steps using tools like hash maps
or heaps.
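As an illustration of that workflow, here is the classic two-sum problem (chosen here only as an example, not named in the text): the brute force version has an O(n) "slow" inner loop, and a hash map replaces it with an O(1) lookup:

```python
def two_sum_brute_force(nums, target):
    # O(n^2): pairs every element with every later element.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):        # the "slow" step: O(n) per element
            if nums[i] + nums[j] == target:
                return [i, j]
    return []

def two_sum_hash_map(nums, target):
    # O(n): the hash map lookup replaces the inner loop.
    seen = {}                                    # value -> index
    for i, num in enumerate(nums):
        if target - num in seen:                 # O(1) lookup
            return [seen[target - num], i]
        seen[num] = i
    return []
```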
Similar to the previous range, you should consider nested loops. The
difference between this range and the previous one is that O(n^2) is
usually the expected/optimal time complexity in this range, and it might
not be possible to improve.
n <= 10^5 is the most common constraint you will see on LeetCode. In this
range, the slowest acceptable common time complexity is O(n⋅log n),
although a linear time approach O(n) is commonly the goal.
In this range, ask yourself if sorting the input or using a heap can be
helpful. If not, then aim for an O(n) algorithm. Nested loops that run
in O(n^2) are unacceptable - you will probably need to make use of a
technique learned in this course to simulate a nested loop's behavior
in O(1) or O(log n), using one of the following (a sliding window example is sketched after the list):
Hash map
A two pointers implementation like sliding window
Monotonic stack
Binary search
Heap
A combination of any of the above
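For example, here is a minimal sliding window sketch (the problem, longest subarray with sum at most k assuming non-negative numbers, is a hypothetical example): each pointer moves at most n times, so it does in O(n) what a restarted nested loop would do in O(n^2):

```python
def longest_subarray_at_most_k(nums, k):
    left = 0
    window_sum = 0
    best = 0
    for right, num in enumerate(nums):
        window_sum += num
        while window_sum > k:            # shrink from the left instead of restarting
            window_sum -= nums[left]
            left += 1
        best = max(best, right - left + 1)
    return best

print(longest_subarray_at_most_k([3, 1, 2, 7, 4, 2, 1, 1, 5], 8))  # 4
```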
1,000,000 < n
With huge inputs, typically in the range of 10^9 or more, the most
common acceptable time complexity will be logarithmic O(log n) or
constant O(1). In these problems, you must either significantly reduce
your search space at each iteration (usually binary search) or use clever
tricks to find information in constant time (like with math or a clever use
of hash maps).
Other time complexities are possible like O(n), but this is very rare and
will usually only be seen in very advanced problems.
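As one sketch of "reducing the search space at each iteration", here is a binary search over the answer that computes the integer square root of a huge n in O(log n) (an illustrative problem, not one from the text):

```python
def integer_sqrt(n):
    # Binary search over the answer: each step halves the candidate range,
    # so even n around 10^18 takes only ~60 iterations.
    left, right = 0, n
    while left <= right:
        mid = (left + right) // 2
        if mid * mid <= n:
            left = mid + 1
        else:
            right = mid - 1
    return right                     # largest value whose square is <= n

print(integer_sqrt(10**18))          # 1000000000
```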
Sorting algorithms
All major programming languages have a built-in method for sorting. It is
usually correct to assume and say sorting costs O(n⋅log n), where n is the
number of elements being sorted. For completeness, here is a chart that
lists many common sorting algorithms and their complexities. The
algorithm implemented by a programming language varies; for example,
Python uses Timsort but in C++, the specific algorithm is not mandated
and varies.
Definition of a stable sort from Wikipedia: "Stable sorting algorithms
maintain the relative order of records with equal keys (i.e. values). That is,
a sorting algorithm is stable if whenever there are two records R and S
with the same key and with R appearing before S in the original list, R will
appear before S in the sorted list."
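For example, Python's built-in sort (Timsort) is stable and runs in O(n⋅log n); this small sketch with made-up records illustrates stability:

```python
records = [("alice", 90), ("bob", 85), ("carol", 90), ("dave", 85)]
by_score = sorted(records, key=lambda r: r[1])   # sort by score only
print(by_score)
# [('bob', 85), ('dave', 85), ('alice', 90), ('carol', 90)]
# Records with equal scores keep their original relative order:
# "bob" stays before "dave", and "alice" stays before "carol".
```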
Note that this flowchart only covers methods taught in LICC, and as such
more advanced algorithms like Dijkstra's are excluded.
Interview stages cheat sheet
The following will be a summary of the "Stages of an interview" article. If
you have a remote interview, you can print this condensed version and
keep it in front of you during the interview.
Stage 1: Introductions
Stage 4: Implementation
Stage 7: Outro