The "We Can Build It Cheaper" Trap
Definition
The "We Can Build It Cheaper" trap is the systematic cognitive pattern where internal software build cost estimates are 2-5x lower than actual costs, driven by a convergence of the planning fallacy (optimistic inside-view estimation), hidden cost blindness (omitting maintenance, infrastructure, and opportunity costs), and organizational incentive structures that reward underestimation to secure project approval. [src1] Research across IT projects shows a mean cost overrun ratio of 1.8x, with software projects averaging 66% cost overrun and the distribution exhibiting fat-tailed characteristics where extreme overruns are far more common than normal distributions predict. [src2]
Key Properties
- Overrun magnitude: Software projects average 66% cost overrun; large IT projects (>$15M) run 45% over budget; the distribution is fat-tailed, with 5-10x overruns far more frequent than a normal distribution would predict [src2] [src3]
- Root cause taxonomy: Three drivers — cognitive bias (planning fallacy/optimism), information gaps (hidden costs not modeled), and strategic misrepresentation (deliberate underestimation for approval) [src1]
- Hidden cost categories typically omitted: Ongoing maintenance (40-60% of build cost/year), infrastructure/security, testing/QA, documentation, training/onboarding, talent retention risk ($15K-$30K per replacement) [src4]
- Cognitive mechanism: The "inside view" — estimators focus on specific project features and best-case scenarios rather than base rates from similar past projects [src1]
- Selection bias amplifier: Projects with the lowest (most optimistic) estimates are most likely to be approved, systematically filtering for underestimation [src6]
- Scope creep contribution: Scope creep affects 52% of IT projects, compounding costs on top of the initial estimation error [src3]
Constraints
- The 2-5x multiplier is a cross-industry average. Highly regulated industries (healthcare, finance) skew higher; small internal tools may overrun less. [src2]
- Reference class forecasting requires historical data from comparable projects. Many organizations do not systematically collect this data. [src5]
- Some cost categories (opportunity cost, organizational drag) are genuinely difficult to quantify even with full awareness of the bias. [src4]
- The planning fallacy is a cognitive bias, not a moral failing. Debiasing requires structural process changes (external review, reference classes), not just awareness. [src1]
- Experienced engineers produce somewhat better estimates but are still subject to the same directional biases. Expertise reduces variance, not systematic optimism. [src5]
Framework Selection Decision Tree
START — User asking about software cost estimation or build vs buy costs
├── What's the core question?
│ ├── "Why did our build project go over budget?"
│ │ └── The "We Can Build It Cheaper" Trap ← YOU ARE HERE
│ ├── "Should we build or buy this software?"
│ │ └── → Build vs Buy for Enterprise Software
│ ├── "How do we choose between build, buy, and partner?"
│ │ └── → Build vs Buy vs Partner Decision Tree
│ └── "How do we estimate software costs more accurately?"
│ └── The "We Can Build It Cheaper" Trap ← YOU ARE HERE
├── Is the user evaluating a specific estimate right now?
│ ├── YES → Apply the 6 Hidden Cost Categories audit (Application Checklist)
│ └── NO → Provide the bias taxonomy and debiasing techniques
└── Has the project already started and is over budget?
├── YES → Focus on sunk cost fallacy prevention + realistic re-estimation
└── NO → Focus on pre-mortem and reference class forecasting
Application Checklist
Step 1: Audit the estimate for hidden cost categories
- Inputs needed: The current internal build cost estimate, line items included, assumptions
- Output: Gap analysis showing which of the 6 hidden cost categories are missing or underestimated
- Constraint: If the estimate lacks separate line items for maintenance, infrastructure/security, testing/QA, documentation, training/onboarding, and talent retention risk, it is systematically too low. All 6 must be present (a minimal audit sketch follows below). [src4]
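A minimal sketch of the Step 1 gap audit in Python, assuming the estimate's line items have already been mapped to category labels by hand; the category names and the example estimate are illustrative, not drawn from the cited sources.

```python
# The 6 hidden cost categories every build estimate should itemize (Step 1).
REQUIRED_CATEGORIES = {
    "maintenance",
    "infrastructure_security",
    "testing_qa",
    "documentation",
    "training_onboarding",
    "talent_retention_risk",
}

def audit_estimate(line_item_categories: set[str]) -> set[str]:
    """Return the required categories missing from the estimate's line items."""
    return REQUIRED_CATEGORIES - line_item_categories

# Example: an estimate that only budgets build labor, testing, and infrastructure.
estimate = {"engineering_labor", "testing_qa", "infrastructure_security"}
missing = audit_estimate(estimate)
if missing:
    print(f"Estimate is systematically low; missing categories: {sorted(missing)}")
```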
Step 2: Apply the maintenance multiplier
- Inputs needed: Initial development cost estimate
- Output: 5-year total cost of ownership including maintenance
- Constraint: Annual maintenance typically runs 40-60% of the initial development cost. If the estimate budgets maintenance below 30% per year, it is almost certainly underestimated. Multiply the initial estimate by 3-4x for the 5-year TCO (a worked sketch follows below). [src4]
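A minimal sketch of the Step 2 arithmetic, assuming a flat annual maintenance rate over five years; the 50% midpoint rate and the $400K example are illustrative placeholders, not benchmarks from the cited sources.

```python
def five_year_tco(initial_build_cost: float, annual_maintenance_rate: float = 0.5) -> float:
    """5-year TCO: initial build plus five years of ongoing maintenance.

    At 40-60% of build cost per year, this lands at roughly 3-4x the
    initial estimate (1 + 5 * 0.4 = 3.0 up to 1 + 5 * 0.6 = 4.0).
    """
    return initial_build_cost * (1 + 5 * annual_maintenance_rate)

# Example: a $400K build estimate at the 50% midpoint maintenance rate.
print(f"${five_year_tco(400_000):,.0f}")  # -> $1,400,000, i.e. 3.5x the initial estimate
```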
Step 3: Perform reference class forecasting
- Inputs needed: 5-10 comparable completed projects with actual vs estimated costs
- Output: Probability distribution of likely actual cost based on historical base rates
- Constraint: The reference class must consist of genuinely comparable projects (a forecasting sketch follows below). Cherry-picking successes invalidates the exercise. If no reference class is available, apply the industry-average 1.8-2.5x multiplier. [src2] [src5]
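A minimal sketch of reference class forecasting, assuming you have estimated and actual costs for a handful of genuinely comparable completed projects; the sample data below is invented purely for illustration.

```python
import numpy as np

# (estimated_cost, actual_cost) for comparable completed projects -- illustrative only.
reference_class = [
    (300_000, 420_000),
    (500_000, 780_000),
    (250_000, 310_000),
    (800_000, 1_900_000),
    (400_000, 520_000),
]

# Historical overrun ratios (actual / estimated) form the base rate.
overrun_ratios = np.array([actual / est for est, actual in reference_class])

# Apply the historical distribution to the new project's estimate.
new_estimate = 400_000
p50, p80 = np.percentile(overrun_ratios * new_estimate, [50, 80])
print(f"Median forecast: ${p50:,.0f}; 80th percentile: ${p80:,.0f}")
```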
Step 4: Run a pre-mortem exercise
- Inputs needed: The project plan, team composition, stakeholder list
- Output: List of 10-15 specific failure modes with cost impact estimates
- Constraint: The pre-mortem must be conducted by people who are NOT the original estimators and do NOT have a stake in project approval. Self-review reproduces the same biases. [src1]
Step 5: Compare adjusted build cost to buy alternatives
- Inputs needed: Adjusted build TCO (from Steps 1-4), vendor quotes for buy alternatives (5-year TCO)
- Output: Apples-to-apples cost comparison with uncertainty ranges
- Constraint: If the adjusted build cost exceeds 150% of the buy alternative's 5-year TCO, building is almost never justified unless the capability is a core competitive differentiator (a comparison sketch follows below). [src4]
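A minimal sketch of the Step 5 comparison, assuming both inputs are already 5-year TCO figures (the adjusted build TCO from Steps 1-4 and a vendor quote); the 150% threshold restates the constraint above, and the dollar amounts are placeholders.

```python
def build_vs_buy(build_tco_5yr: float, buy_tco_5yr: float,
                 is_core_differentiator: bool = False) -> str:
    """Apply the 150% rule: if adjusted build TCO exceeds 1.5x the buy TCO,
    building is rarely justified unless the capability is a core differentiator."""
    ratio = build_tco_5yr / buy_tco_5yr
    if ratio > 1.5 and not is_core_differentiator:
        return f"Buy (build is {ratio:.1f}x the buy alternative)"
    if ratio > 1.5:
        return f"Build only if the differentiation justifies a {ratio:.1f}x premium"
    return f"Build is cost-competitive ({ratio:.1f}x the buy alternative)"

print(build_vs_buy(build_tco_5yr=1_400_000, buy_tco_5yr=750_000))
```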
Anti-Patterns
Wrong: Estimating build cost as (developers x months x salary)
Teams estimate build cost using only direct engineering labor, missing infrastructure, security, testing, documentation, project management overhead, and the 40-60% annual maintenance tax. The actual 5-year cost is typically 3-5x the initial labor estimate. [src4]
Correct: Using a fully loaded TCO model with all 6 cost categories
Include direct engineering, infrastructure/DevOps, security/compliance, testing/QA, documentation/training, and ongoing maintenance. Add 15-20% contingency for scope changes and a talent retention risk premium. Compare 5-year TCO, not first-year build cost. [src4]
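A minimal sketch contrasting the naive labor-only estimate with a fully loaded 5-year TCO, under assumed per-category figures; every number here is a placeholder, not a benchmark from the cited sources.

```python
# Naive estimate: developers x months x monthly cost (illustrative figures).
devs, months, monthly_cost = 4, 9, 15_000
naive_estimate = devs * months * monthly_cost  # $540,000

# Fully loaded 5-year view with the 6 categories plus a retention premium.
loaded = {
    "engineering_labor": naive_estimate,
    "infrastructure_devops": 90_000,
    "security_compliance": 60_000,
    "testing_qa": 110_000,
    "documentation_training": 50_000,
    "maintenance_5yr": naive_estimate * 0.5 * 5,  # ~50%/yr of build cost for 5 years
    "talent_retention_risk": 45_000,
}
contingency = 0.20  # buffer for scope changes
loaded_tco = sum(loaded.values()) * (1 + contingency)

print(f"Naive estimate:    ${naive_estimate:,.0f}")
print(f"5-year loaded TCO: ${loaded_tco:,.0f} ({loaded_tco / naive_estimate:.1f}x)")
```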
Wrong: Letting the team that proposed building also estimate the cost
The advocating team has a structural incentive to produce low estimates — their preferred project gets approved and they work on interesting technology. This strategic misrepresentation accounts for a significant portion of organizational underestimation. [src1] [src6]
Correct: Using independent estimation with reference class data
Have estimates reviewed or produced by a team without a stake in the build/buy outcome. Anchor in reference class data from comparable completed projects, not in the specific project's optimistic scenario. [src5]
Wrong: Citing a past successful build as proof it will work again
Survivorship bias — the organization remembers the on-time/on-budget project while forgetting the ones that overran. The successful project may have had advantages (smaller scope, stable requirements) that do not transfer. [src3]
Correct: Using the full distribution of past project outcomes
Collect actual vs estimated costs for ALL past projects, not just successes. Plot the distribution and identify median and 80th percentile outcomes. Plan for the median, budget for the 80th percentile. [src2]
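A minimal sketch of the survivorship-bias point, assuming estimated and actual costs can be pulled for all past projects; the overrun ratios below are invented for illustration.

```python
import numpy as np

# Overrun ratios (actual / estimated) for ALL past projects -- illustrative data.
all_projects = np.array([1.0, 1.1, 1.3, 1.5, 1.7, 2.2, 3.5])

# What the organization "remembers": only the on-budget successes.
remembered = all_projects[all_projects <= 1.1]

print(f"Survivor view, median overrun: {np.median(remembered):.2f}x")
print(f"Full distribution, median:     {np.median(all_projects):.2f}x")
print(f"Full distribution, 80th pct:   {np.percentile(all_projects, 80):.2f}x")
# Plan for the median, budget for the 80th percentile.
```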
Common Misconceptions
Misconception: Experienced engineers produce accurate estimates because they have "been through it before."
Reality: Research shows experience reduces variance but not the directional bias. Senior engineers are somewhat better calibrated but still systematically optimistic. The planning fallacy affects experts and novices alike because it stems from the "inside view." [src1] [src5]
Misconception: Agile methodology eliminates estimation risk because you "discover as you go."
Reality: Agile reduces the risk of building the wrong thing but does not reduce the total cost of building the right thing. Iterative development still incurs infrastructure, maintenance, and organizational costs. Agile can increase total cost by enabling continuous scope expansion. [src3]
Misconception: The 2-5x multiplier is exaggerated — most projects are only slightly over budget.
Reality: The distribution of cost overruns follows a power law, not a normal distribution. While the mode is near 0% overrun, the mean for software projects is 66%, and extreme overruns (5-10x) occur far more frequently than a normal distribution would predict. [src2]
Misconception: Adding a 20-30% buffer solves the problem.
Reality: If the base estimate is missing entire cost categories (maintenance, infrastructure, talent retention), a 20-30% buffer on that incomplete estimate still lands far below reality. Empirical data suggest that even complete estimates need 50-100% buffers to reach 80th-percentile coverage. [src2] [src5]
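A small arithmetic sketch of why a buffer on an incomplete estimate still falls short; all figures are invented for illustration.

```python
# An estimate covering only build labor, with maintenance and other categories omitted.
labor_only_estimate = 500_000
buffered = labor_only_estimate * 1.25  # a "safe" 25% buffer

# Reality: the same labor plus 5 years of maintenance at ~50%/yr,
# plus assumed infrastructure/QA/training costs that were never itemized.
realistic_5yr_cost = labor_only_estimate * (1 + 5 * 0.5) + 200_000

print(f"Buffered estimate:     ${buffered:,.0f}")
print(f"Realistic 5-year cost: ${realistic_5yr_cost:,.0f}")
print(f"Still short by {realistic_5yr_cost / buffered:.1f}x")
```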
Comparison with Similar Concepts
| Concept | Key Difference | When to Use |
|---|---|---|
| "We Can Build It Cheaper" Trap | Diagnoses WHY estimates are too low (cognitive + structural causes) | Evaluating whether an internal build estimate is realistic |
| Build vs Buy for Enterprise Software | Full decision framework with cost benchmarks | Making the build/buy/hybrid decision for ERP/CRM/HCM |
| Build vs Buy vs Partner Decision Tree | General framework including partner option | Decision space includes outsourcing or partnering |
| Planning Fallacy (general) | Broad cognitive bias across all project types | Understanding estimation bias beyond software |
When This Matters
Fetch this when a user is evaluating an internal team's claim that building software is cheaper than buying, when a build project has exceeded its budget and the user wants to understand why, when someone needs debiasing techniques for software cost estimation, or when building a business case that requires realistic cost multipliers for internal development.