Sensitivity analysis is a technique for measuring how changes in individual input variables affect a financial model's output, holding all other variables constant. It answers the question "what if this one assumption is wrong?" by systematically varying inputs across a defined range and observing the resulting change in key outputs. The two primary tools are data tables (one-way and two-way grids) and tornado charts (bar charts ranking variables by impact). [src1]
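A two-way data table is straightforward to build outside a spreadsheet as well. A minimal Python sketch, assuming a toy growing-perpetuity valuation (the function name, cash flow, and ranges below are illustrative, not from the source):

```python
# Two-way sensitivity (data) table: NPV across a grid of WACC (rows)
# and growth (columns), all other inputs held constant.

def npv_growing_perpetuity(cash_flow, wacc, growth):
    """Value of a growing perpetuity: CF / (WACC - g)."""
    return cash_flow / (wacc - growth)

base_cf = 100.0                               # hypothetical base-case cash flow
wacc_range = [0.08, 0.09, 0.10, 0.11, 0.12]   # rows
growth_range = [0.01, 0.02, 0.03]             # columns

print("WACC\\g" + "".join(f"{g:>10.0%}" for g in growth_range))
for w in wacc_range:
    cells = "".join(f"{npv_growing_perpetuity(base_cf, w, g):>10.0f}"
                    for g in growth_range)
    print(f"{w:>6.0%}{cells}")
```

The grid makes the interaction visible: NPV rises toward the low-WACC, high-growth corner and falls toward the opposite one.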
START — User needs to test model sensitivity
├── How many variables to test?
│   ├── 1 variable → One-way data table or spider chart
│   ├── 2 variables → Two-way data table (standard DCF)
│   ├── 5-10 ranked by impact → Tornado chart
│   └── Many with correlations → Monte Carlo Simulation
├── Need probability information?
│   ├── YES → Monte Carlo
│   └── NO → Sensitivity Analysis (this unit)
├── Variables change independently or together?
│   ├── Independently → Sensitivity Analysis
│   └── Together as scenarios → Scenario Analysis
└── Output format?
    ├── Table/matrix → Data table
    ├── Ranked bars → Tornado chart
    └── Distribution curve → Monte Carlo
Pitfall: Showing only how NPV improves with higher growth and lower WACC creates false confidence. [src1]
Fix: Vary each input both above and below base case by the same amount. Show both positive and negative bars. [src3]
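This up-and-down discipline is exactly how tornado-chart data is computed. A sketch, assuming a toy DCF model and hypothetical per-variable ranges (none of the numbers come from the source):

```python
# Tornado-chart data: perturb each input to its low and high value,
# holding all others at base case, then rank by total impact span.

base = {"revenue": 1000.0, "margin": 0.20, "wacc": 0.10, "growth": 0.02}

def npv(p):
    # Toy DCF: operating profit valued as a growing perpetuity.
    return p["revenue"] * p["margin"] / (p["wacc"] - p["growth"])

def tornado(base, swings):
    base_npv = npv(base)
    rows = []
    for var, (low, high) in swings.items():
        down = npv({**base, var: low}) - base_npv    # negative bar
        up = npv({**base, var: high}) - base_npv     # positive bar
        rows.append((var, down, up))
    # Widest total span first, as on the chart.
    rows.sort(key=lambda r: abs(r[2] - r[1]), reverse=True)
    return rows

# Each variable gets its own justified low/high range, not a blanket ±%.
swings = {
    "revenue": (900.0, 1100.0),
    "margin": (0.17, 0.23),
    "wacc": (0.09, 0.11),
    "growth": (0.01, 0.03),
}
for var, down, up in tornado(base, swings):
    print(f"{var:>8}: {down:+9.0f} / {up:+9.0f}")
```

Plotting each row as a horizontal bar centered on the base-case NPV gives the familiar tornado shape.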
Pitfall: Applying ±50% to all inputs regardless of actual uncertainty makes stable inputs look as uncertain as speculative ones. [src2]
Fix: Use historical standard deviations for market variables, management confidence for operational variables. Each gets its own justified range. [src3]
Pitfall: Changing revenue, margins, and growth simultaneously and calling it "sensitivity analysis." [src1]
Fix: Change one variable while holding all others at base case. If multiple change together, call it scenario analysis. [src1]
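The distinction is easy to enforce in code. A sketch with an illustrative model and made-up numbers: the sensitivity loop moves exactly one input at a time, while the scenario moves several together under one label:

```python
# One-at-a-time sensitivity vs. a multi-input scenario.

base = {"revenue": 1000.0, "margin": 0.20, "wacc": 0.10}

def value(p):
    # Toy valuation: operating profit as a flat perpetuity.
    return p["revenue"] * p["margin"] / p["wacc"]

# Sensitivity: exactly one input moves (+10% here); all others stay at base.
sensitivity = {var: value({**base, var: base[var] * 1.10}) for var in base}

# Scenario: several inputs move together as one coherent downside story.
downside = value({**base, "revenue": 900.0, "margin": 0.17, "wacc": 0.11})

print(sensitivity)
print(f"downside scenario: {downside:.0f}")
```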
Misconception: The widest bar on a tornado chart is the biggest risk.
Reality: Tornado charts show sensitivity (impact of change), not risk (likelihood of change). A highly sensitive but near-certain variable is low risk. [src3]
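One way to see the distinction: weight each bar's impact by a probability that the variable actually moves. The numbers below are hypothetical and only illustrate the reordering:

```python
# Sensitivity (impact) vs. risk (impact weighted by likelihood of a move).
# All figures are illustrative.

impacts = {"wacc": 600.0, "margin": 750.0, "fx_rate": 300.0}   # bar widths
prob_of_move = {"wacc": 0.9, "margin": 0.1, "fx_rate": 0.8}    # judgment calls

risk = {var: impacts[var] * prob_of_move[var] for var in impacts}
ranked = sorted(risk, key=risk.get, reverse=True)
print(ranked)  # → ['wacc', 'fx_rate', 'margin']
```

The widest bar (margin) drops to last once its near-certainty is priced in.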
Misconception: Two-way data tables show all important interactions.
Reality: They show interaction between exactly two variables. Real models have 10+ uncertain inputs. Monte Carlo captures the full interaction space. [src2]
Misconception: Sensitivity analysis validates a model.
Reality: It tests how outputs change with inputs — not whether the model structure or base assumptions are correct. A flawed model produces consistent but wrong sensitivity results. [src1]
| Concept | Key Difference | When to Use |
|---|---|---|
| Sensitivity Analysis | Varies 1-2 inputs, others constant | Identifying key value drivers |
| Scenario Analysis | Changes multiple correlated inputs | Testing coherent alternative futures |
| Monte Carlo Simulation | Random sampling from distributions | Generating probability distributions |
| Stress Testing | Extreme but plausible shocks | Testing survival under crisis conditions |
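The Monte Carlo row differs from the others in kind: instead of point deltas, it samples every uncertain input at once and returns a distribution. A minimal sketch with illustrative, uncalibrated distributions:

```python
# Monte Carlo: sample all uncertain inputs jointly, report percentiles
# of the output distribution rather than single-point sensitivities.
import random

random.seed(0)  # reproducible draws

def value(revenue, margin, wacc):
    return revenue * margin / wacc

draws = [value(random.gauss(1000.0, 50.0),    # revenue ~ N(1000, 50)
               random.gauss(0.20, 0.02),      # margin  ~ N(0.20, 0.02)
               random.gauss(0.10, 0.005))     # WACC    ~ N(0.10, 0.005)
         for _ in range(10_000)]

draws.sort()
p5, p50, p95 = (draws[int(len(draws) * q)] for q in (0.05, 0.50, 0.95))
print(f"P5={p5:.0f}  median={p50:.0f}  P95={p95:.0f}")
```

A real model would also need to specify correlations between inputs, which is precisely what the independent one-at-a-time methods above cannot capture.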
Fetch this when a user asks about building sensitivity tables, creating tornado charts, what-if analysis, identifying key value drivers, or building data tables for DCF output.