Dynamic Programming Patterns: Complete Reference

Type: Software Reference | Confidence: 0.95 | Sources: 7 | Verified: 2026-02-24 | Freshness: 2026-02-24

TL;DR

Constraints

Quick Reference

| DP Pattern | Time | Space | Classic Problems | Key Insight |
| --- | --- | --- | --- | --- |
| 1D Linear | O(n) or O(n*k) | O(n) or O(1) | Fibonacci, Climbing Stairs, House Robber | Each state depends on a fixed number of previous states |
| 2D Grid | O(m*n) | O(m*n) or O(n) | Unique Paths, Minimum Path Sum, Dungeon Game | State = (row, col); transitions from adjacent cells |
| 0/1 Knapsack | O(n*W) | O(n*W) or O(W) | Subset Sum, Partition Equal Subset, Target Sum | Each item used at most once; iterate items, then capacity in reverse |
| Unbounded Knapsack | O(n*W) | O(W) | Coin Change, Rod Cutting, Integer Break | Items reusable; iterate capacity forward so the same item can be retaken |
| LCS | O(m*n) | O(m*n) or O(n) | Edit Distance, Shortest Common Supersequence | dp[i][j] = match/mismatch on s1[i] vs s2[j] |
| LIS | O(n log n) | O(n) | Patience Sorting, Russian Doll Envelopes | Binary search on tails array for O(n log n) |
| Interval DP | O(n^3) | O(n^2) | Matrix Chain Multiplication, Burst Balloons, Palindrome Partitioning | Merge smaller intervals; enumerate split point k |
| Bitmask DP | O(2^n * n) | O(2^n) | TSP, Assignment Problem, Hamiltonian Path | Bitmask encodes subset; n <= 20 |
| Digit DP | O(d * tight * state) | O(d * tight * state) | Count numbers with digit sum = k, Numbers without consecutive 1s | Process digits left to right with tight constraint |
| Tree DP | O(n) | O(n) | Diameter, Max Path Sum, Independent Set on Tree | Post-order traversal; merge child subtree results |
| State Machine DP | O(n * states) | O(states) | Best Time to Buy/Sell Stock (all variants), String editing with modes | Finite states with explicit transitions per step |
| Profile DP (Broken Profile) | O(n * 2^m) | O(2^m) | Domino/Tromino Tiling, Grid coloring with constraints | Encode column boundary as bitmask; m <= 20 |

Decision Tree

START: Does the problem have overlapping subproblems?
|-- NO --> Not a DP problem. Try greedy, divide & conquer, or brute force.
|-- YES --> Does it have optimal substructure?
    |-- NO --> DP won't yield optimal solution. Try brute force with pruning.
    |-- YES --> Choose DP approach:
        |-- Can you enumerate all states in dependency order?
        |   |-- YES --> Use bottom-up tabulation (iterative, no stack risk)
        |   |-- NO --> Use top-down memoization (recursive, natural for trees/DAGs)
        |
        |-- Identify the pattern:
            |-- Single sequence/array? --> 1D Linear DP or LIS
            |-- Two sequences to compare? --> LCS / Edit Distance (2D)
            |-- Grid traversal? --> 2D Grid DP
            |-- Items with weight/value, capacity limit?
            |   |-- Each item once? --> 0/1 Knapsack
            |   |-- Items reusable? --> Unbounded Knapsack
            |-- Merging ranges or splitting? --> Interval DP
            |-- Small set (n <= 20) with subset tracking? --> Bitmask DP
            |-- Count integers in [L, R] with digit property? --> Digit DP
            |-- Tree structure? --> Tree DP (post-order DFS)
            |-- Multiple modes/states per step? --> State Machine DP
            |-- None of the above? --> Define custom state; check for DAG structure

Step-by-Step Guide

1. Identify DP applicability

Verify the problem has both properties: (a) overlapping subproblems -- solving it recursively would recompute the same subproblem multiple times, and (b) optimal substructure -- the optimal solution to the problem contains optimal solutions to its subproblems. [src1]

Verify: Draw the recursion tree for a small input. If you see repeated nodes, DP applies.

2. Define the state

The state is the minimum set of variables that uniquely describes a subproblem. Common states: dp[i] (position in array), dp[i][j] (two pointers or grid position), dp[i][w] (item index + remaining capacity), dp[mask] (subset bitmask). [src1]

Verify: Can you express the answer to the original problem in terms of your state? e.g., answer = dp[n] or dp[n][W].

3. Write the recurrence relation

Express dp[state] in terms of smaller subproblems. This is the core mathematical insight. Example for 0/1 Knapsack: dp[i][w] = max(dp[i-1][w], dp[i-1][w-wt[i]] + val[i]). [src1]

Verify: Manually trace the recurrence on a 3-4 element input to confirm correctness.
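The 0/1 Knapsack recurrence above translates directly into a memoized function; this is a minimal sketch (the 1-indexed i of the recurrence maps to weights[i-1] in 0-indexed Python):

```python
from functools import lru_cache

def knapsack_top_down(weights, values, capacity):
    @lru_cache(maxsize=None)
    def dp(i, w):
        # Base case: no items considered, or no capacity left
        if i == 0 or w == 0:
            return 0
        # Option 1: skip item i  ->  dp[i-1][w]
        best = dp(i - 1, w)
        # Option 2: take item i if it fits  ->  dp[i-1][w - wt[i]] + val[i]
        if weights[i - 1] <= w:
            best = max(best, dp(i - 1, w - weights[i - 1]) + values[i - 1])
        return best

    return dp(len(weights), capacity)
```

knapsack_top_down([2, 3, 4, 5], [3, 4, 5, 6], 8) returns 10, matching the bottom-up version in the Code Examples section.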

4. Determine the base cases

Identify the smallest subproblems that can be solved directly. Examples: dp[0] = 0 (no items), dp[0][0] = 1 (empty subset sums to 0), dp[leaf] = leaf.val (tree leaf). [src5]

Verify: Base cases should prevent out-of-bounds access and produce correct answers for trivial inputs.

5. Choose implementation approach

Top-down (memoization): Write the recursive solution, add a cache (hash map or array). Natural for tree DP and problems where not all states are visited. Bottom-up (tabulation): Fill the DP table iteratively in topological order. Better for space optimization and avoiding stack overflow. [src6]

Verify: Both approaches should produce identical results on test cases.

6. Optimize space if possible

Check if dp[i] only depends on dp[i-1] (or a fixed window). If so, reduce from O(n*m) to O(m) with a rolling array. For 1D DP depending on dp[i-1] and dp[i-2], reduce to O(1) with two variables. [src4]

Verify: Run optimized version against full-table version on test cases; results must match.
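As a sketch of the two-variable reduction: House Robber's dp[i] depends only on dp[i-1] and dp[i-2], so the full table collapses to O(1) space.

```python
def house_robber(nums):
    # prev2 = best loot up to house i-2, prev1 = best loot up to house i-1
    prev2, prev1 = 0, 0
    for x in nums:
        # Either skip house i (keep prev1) or rob it on top of prev2
        prev2, prev1 = prev1, max(prev1, prev2 + x)
    return prev1
```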

Code Examples

Python: 0/1 Knapsack (Bottom-Up with Space Optimization)

# Input:  weights = [2, 3, 4, 5], values = [3, 4, 5, 6], capacity = 8
# Output: Maximum value achievable = 10

def knapsack_01(weights, values, capacity):
    n = len(weights)
    dp = [0] * (capacity + 1)
    for i in range(n):
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]

Python: Longest Common Subsequence

# Input:  s1 = "abcde", s2 = "ace"
# Output: Length of LCS = 3 ("ace")

def lcs(s1, s2):
    m, n = len(s1), len(s2)
    prev = [0] * (n + 1)
    curr = [0] * (n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i-1] == s2[j-1]:
                curr[j] = prev[j-1] + 1
            else:
                curr[j] = max(prev[j], curr[j-1])
        prev, curr = curr, [0] * (n + 1)
    return prev[n]

Python: Coin Change (Unbounded Knapsack)

# Input:  coins = [1, 5, 10, 25], amount = 30
# Output: Minimum coins needed = 2 (5 + 25)

def coin_change(coins, amount):
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    for coin in coins:
        for a in range(coin, amount + 1):
            dp[a] = min(dp[a], dp[a - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1
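
Python: Longest Increasing Subsequence (O(n log n))

The LIS pattern from the Quick Reference has no example of its own; this is a standard patience-sorting sketch using bisect, per the "binary search on tails array" insight.

```python
# Input:  nums = [10, 9, 2, 5, 3, 7, 101, 18]
# Output: Length of LIS = 4 (e.g., [2, 3, 7, 101])

import bisect

def length_of_lis(nums):
    # tails[k] = smallest possible tail of an increasing subsequence of length k+1
    tails = []
    for x in nums:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)   # x extends the longest subsequence so far
        else:
            tails[i] = x      # x gives a smaller tail for subsequences of length i+1
    return len(tails)
```

bisect_left makes the subsequence strictly increasing; use bisect_right for the non-decreasing variant.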

JavaScript: Edit Distance (Levenshtein)

// Input:  word1 = "horse", word2 = "ros"
// Output: Minimum operations = 3

function editDistance(word1, word2) {
  const m = word1.length, n = word2.length;
  let prev = Array.from({length: n + 1}, (_, j) => j);
  for (let i = 1; i <= m; i++) {
    const curr = [i];
    for (let j = 1; j <= n; j++) {
      if (word1[i-1] === word2[j-1]) {
        curr[j] = prev[j-1];
      } else {
        curr[j] = 1 + Math.min(prev[j-1], prev[j], curr[j-1]);
      }
    }
    prev = curr;
  }
  return prev[n];
}
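
Python: Stock Trading as State Machine DP

To illustrate the State Machine pattern from the Quick Reference, this is a sketch for the unlimited-transactions stock variant, with two explicit states per day (holding a share vs. holding cash).

```python
# Input:  prices = [7, 1, 5, 3, 6, 4]
# Output: Maximum profit = 7 (buy at 1, sell at 5; buy at 3, sell at 6)

def max_profit(prices):
    cash = 0                  # best profit while holding no stock
    hold = float('-inf')      # best profit while holding one share
    for p in prices:
        # Transitions each day: stay, sell (hold -> cash), or buy (cash -> hold)
        cash, hold = max(cash, hold + p), max(hold, cash - p)
    return cash
```

The simultaneous assignment uses the previous day's cash when buying; with unlimited transactions this is safe, since a same-day sell-then-buy nets zero.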

Python: Fibonacci with Memoization vs Tabulation

# Top-down (memoization)
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1: return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation, O(1) space)
def fib_tab(n):
    if n <= 1: return n
    a, b = 0, 1
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b
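
Python: Tree Diameter (Tree DP)

The Tree DP pattern also lacks an example above; this post-order sketch assumes the tree is given as an adjacency list (dict of node -> list of neighbors, undirected).

```python
# Input:  path graph 0-1-2-3
# Output: Diameter (longest path, counted in edges) = 3

def tree_diameter(adj, root=0):
    best = 0

    def dfs(u, parent):
        nonlocal best
        top1 = top2 = 0  # two longest downward paths starting at u
        for v in adj[u]:
            if v == parent:
                continue
            d = dfs(v, u) + 1  # post-order: child results merged at u
            if d > top1:
                top1, top2 = d, top1
            elif d > top2:
                top2 = d
        best = max(best, top1 + top2)  # longest path through u joins its two deepest arms
        return top1

    dfs(root, -1)
    return best
```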

Anti-Patterns

Wrong: Recursion without memoization

# BAD -- exponential O(2^n) time, recomputes same subproblems
def fib(n):
    if n <= 1: return n
    return fib(n - 1) + fib(n - 2)

Correct: Add memoization or use tabulation

# GOOD -- O(n) time with memoization
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n <= 1: return n
    return fib(n - 1) + fib(n - 2)

Wrong: Wrong iteration direction in 0/1 Knapsack

# BAD -- forward iteration allows using same item multiple times
dp = [0] * (capacity + 1)
for i in range(n):
    for w in range(weights[i], capacity + 1):  # WRONG: forward
        dp[w] = max(dp[w], dp[w - weights[i]] + values[i])

Correct: Reverse iteration for 0/1 Knapsack

# GOOD -- reverse iteration ensures each item used at most once
dp = [0] * (capacity + 1)
for i in range(n):
    for w in range(capacity, weights[i] - 1, -1):  # RIGHT: reverse
        dp[w] = max(dp[w], dp[w - weights[i]] + values[i])

Wrong: Using wrong base case for counting problems

# BAD -- dp[0] = 0 means "zero ways to make amount 0"
dp = [0] * (amount + 1)
dp[0] = 0  # WRONG for counting
for coin in coins:
    for a in range(coin, amount + 1):
        dp[a] += dp[a - coin]
# Result: always 0

Correct: Base case dp[0] = 1 for counting problems

# GOOD -- exactly one way to make amount 0: choose nothing
dp = [0] * (amount + 1)
dp[0] = 1
for coin in coins:
    for a in range(coin, amount + 1):
        dp[a] += dp[a - coin]

Wrong: Interval DP with wrong loop order

# BAD -- dp[k+1][j] is not yet computed when the outer loop runs i in ascending order
for i in range(n):
    for j in range(i, n):
        for k in range(i, j):
            dp[i][j] = min(dp[i][j], dp[i][k] + dp[k+1][j] + cost)

Correct: Interval DP iterating by length

# GOOD -- iterate by interval length so smaller intervals computed first
for length in range(2, n + 1):
    for i in range(n - length + 1):
        j = i + length - 1
        for k in range(i, j):
            dp[i][j] = min(dp[i][j], dp[i][k] + dp[k+1][j] + cost)

Common Pitfalls

When to Use / When Not to Use

| Use When | Don't Use When | Use Instead |
| --- | --- | --- |
| Problem has overlapping subproblems AND optimal substructure | Problem has greedy choice property (locally optimal = globally optimal) | Greedy algorithms (Dijkstra, Huffman, interval scheduling) |
| Need optimal value (min/max) or count of solutions | Need to enumerate ALL valid configurations | Backtracking with pruning |
| State space is polynomial (n, n*W, n^2, n^3) | State space is exponential and n > 25 | Branch and bound, approximation algorithms |
| Subproblems form a DAG (no cycles in dependency) | Subproblems have circular dependencies | Iterative deepening, fixed-point computation |
| Problem can be decomposed into independent subproblems | Subproblems are interdependent (changing one affects others) | Simulation, constraint programming |

Important Caveats

Related Units