Sensitivity Analysis

Type: Concept | Confidence: 0.92 | Sources: 4 | Verified: 2026-02-28

Definition

Sensitivity analysis is a technique for measuring how changes in individual input variables affect a financial model's output, holding all other variables constant. It answers the question "what if this one assumption is wrong?" by systematically varying inputs across a defined range and observing the resulting change in key outputs. The two primary tools are data tables (one-way and two-way grids) and tornado charts (bar charts ranking variables by impact). [src1]
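The one-at-a-time mechanic can be sketched in a few lines. This is a minimal illustration on a hypothetical NPV model; the cash flows, base-case WACC, and range are invented for the example, not drawn from any source.

```python
# One-way sensitivity sketch: vary the discount rate on a toy NPV model
# while every other input (the cash flows) stays at its base-case value.

def npv(rate, cash_flows):
    """Discount cash flows for years 1..n at a constant rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

cash_flows = [100, 110, 121, 133, 146]  # hypothetical 5-year projection
base_rate = 0.10

# One-way data table: recompute the output at each input value in the range.
for rate in [0.08, 0.09, 0.10, 0.11, 0.12]:
    print(f"WACC {rate:.0%}: NPV = {npv(rate, cash_flows):,.1f}")
```

The same loop over two nested ranges produces a two-way grid, which is the standard DCF data-table layout.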

Key Properties

Constraints

Framework Selection Decision Tree

START — User needs to test model sensitivity
├── How many variables to test?
│   ├── 1 variable → One-way data table or spider chart
│   ├── 2 variables → Two-way data table (standard DCF)
│   ├── 5-10 ranked by impact → Tornado chart
│   └── Many with correlations → Monte Carlo Simulation
├── Need probability information?
│   ├── YES → Monte Carlo
│   └── NO → Sensitivity Analysis (this unit)
├── Variables change independently or together?
│   ├── Independently → Sensitivity Analysis
│   └── Together as scenarios → Scenario Analysis
└── Output format?
    ├── Table/matrix → Data table
    ├── Ranked bars → Tornado chart
    └── Distribution curve → Monte Carlo

Application Checklist

Step 1: Identify key input variables

Step 2: Define realistic ranges

Step 3: Build data tables and tornado charts

Step 4: Identify break-even points and present
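The four steps above can be sketched end to end on a hypothetical two-variable model. The Gordon-growth valuation, the ranges, and the break-even target below are all illustrative assumptions, not from any source.

```python
# Sketch of steps 1-4 on a toy perpetuity-value model driven by
# WACC (r) and terminal growth (g); requires r > g.

def value(r, g, cf1=100.0):
    """Gordon-growth value of a cash flow stream: cf1 / (r - g)."""
    return cf1 / (r - g)

# Step 1-2: key inputs and realistic (assumed) ranges
rates = [0.08, 0.10, 0.12]
growths = [0.01, 0.02, 0.03]

# Step 3: two-way data table (rows = WACC, columns = growth)
print("WACC\\g " + "".join(f"{g:>9.0%}" for g in growths))
for r in rates:
    print(f"{r:6.0%} " + "".join(f"{value(r, g):9.0f}" for g in growths))

# Step 4: break-even -- the WACC at which value falls to a target price,
# found by inverting cf1 / (r - g) = target.
target, g = 1400.0, 0.02
breakeven_r = 100.0 / target + g
print(f"Break-even WACC at g={g:.0%}: {breakeven_r:.2%}")
```

A closed-form break-even exists here only because the toy model is invertible; in a full DCF the break-even is usually found by goal seek or bisection over the data table.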

Anti-Patterns

Wrong: Testing only upside sensitivity

Showing only how NPV improves with higher growth and lower WACC creates false confidence. [src1]

Correct: Testing symmetric ranges

Vary each input both above and below base case by the same amount. Show both positive and negative bars. [src3]

Wrong: Using arbitrary ranges for all variables

Applying ±50% to all inputs regardless of actual uncertainty makes stable inputs look as uncertain as speculative ones. [src2]

Correct: Calibrating ranges to actual uncertainty

Use historical standard deviations for market variables, management confidence for operational variables. Each gets its own justified range. [src3]
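One way to make the per-variable calibration explicit is to store each range with its justification, as in this sketch. Every number and source label below is an assumption invented for illustration.

```python
# Per-variable ranges calibrated to their own uncertainty, not a blanket +/-50%.
ranges = {
    # market variable: range assumed from historical standard deviation
    "wacc":          {"base": 0.10, "low": 0.085, "high": 0.115,
                      "basis": "historical std dev"},
    # operational variable: range assumed from management guidance
    "ebit_margin":   {"base": 0.20, "low": 0.18, "high": 0.23,
                      "basis": "management confidence interval"},
    # speculative variable: deliberately wide to reflect genuine uncertainty
    "exit_multiple": {"base": 8.0, "low": 5.0, "high": 12.0,
                      "basis": "comparable-deal spread"},
}

for var, r in ranges.items():
    spread = (r["high"] - r["low"]) / r["base"]
    print(f"{var:>13}: {r['low']}-{r['high']} "
          f"({spread:.0%} of base, basis: {r['basis']})")
```

Recording the basis alongside each range keeps the tornado chart defensible: a reviewer can challenge the range, not just the result.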

Wrong: Confusing sensitivity analysis with scenario analysis

Changing revenue, margins, and growth simultaneously and calling it "sensitivity analysis." [src1]

Correct: Keeping variables independent

Change one variable while holding all others at base case. If multiple change together, call it scenario analysis. [src1]

Common Misconceptions

Misconception: The widest bar on a tornado chart is the biggest risk.
Reality: Tornado charts show sensitivity (impact of change), not risk (likelihood of change). A highly sensitive but near-certain variable is low risk. [src3]

Misconception: Two-way data tables show all important interactions.
Reality: They show interaction between exactly two variables. Real models have 10+ uncertain inputs. Monte Carlo captures the full interaction space. [src2]

Misconception: Sensitivity analysis validates a model.
Reality: It tests how outputs change with inputs — not whether the model structure or base assumptions are correct. A flawed model produces consistent but wrong sensitivity results. [src1]

Comparison with Similar Concepts

Concept                | Key Difference                      | When to Use
Sensitivity Analysis   | Varies 1-2 inputs, others constant  | Identifying key value drivers
Scenario Analysis      | Changes multiple correlated inputs  | Testing coherent alternative futures
Monte Carlo Simulation | Random sampling from distributions  | Generating probability distributions
Stress Testing         | Extreme but plausible shocks        | Testing survival under crisis conditions

When This Matters

Fetch this when a user asks about building sensitivity tables, creating tornado charts, what-if analysis, identifying key value drivers, or building data tables for DCF output.

Related Units