AI/ML Due Diligence Checklist

Type: Concept Confidence: 0.86 Sources: 6 Verified: 2026-02-28

Definition

AI/ML due diligence is a specialized extension of technology due diligence that evaluates the intelligence layer of a target company — model architecture, training data provenance, third-party AI dependencies, MLOps maturity, inference economics, regulatory compliance, and key person risk. This checklist supplements the standard 8-workstream M&A due diligence framework. [src2] [src5]

Key Properties

Constraints

Framework Selection Decision Tree

START — Evaluating target with AI/ML capabilities
├── How central is AI to value?
│   ├── AI IS the product → Full AI DD ← YOU ARE HERE
│   ├── AI enhances product → Moderate AI DD + standard tech DD
│   ├── AI internal only → Light assessment within standard tech DD
│   └── AI claimed but minimal → Validate claims
├── Builds own models?
│   ├── YES → Full model + training data + IP review
│   └── NO (API wrappers) → Focus on vendor dependency
├── Processes personal data in AI?
│   ├── YES → GDPR/EU AI Act review critical
│   └── NO → Standard IP/licensing review
└── AI talent primary driver?
    ├── YES → Key person assessment, retention packages
    └── NO → Standard HR with AI overlay

Application Checklist

Step 1: Map the AI stack

Step 2: Assess training data provenance

Step 3: Evaluate model performance
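One concrete check for this step is to re-score the model on a buyer-supplied holdout and compare against the claimed benchmark number. The sketch below assumes the target can export prediction logs; the function names and sample data are hypothetical.

```python
# Sketch: measure the gap between claimed benchmark accuracy and
# accuracy on an independent holdout. Data here is synthetic.

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def performance_gap(claimed_accuracy, preds, labels):
    """Positive gap = real-world performance falls short of the claim."""
    return claimed_accuracy - accuracy(preds, labels)
```

A materially positive gap is the distribution-mismatch / data-leakage signal called out under Common Misconceptions.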

Step 4: Assess infrastructure and inference economics
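Inference economics for this step reduce to a per-request gross margin. A back-of-envelope sketch, in which every number and parameter name is an illustrative assumption rather than data from any target:

```python
# Back-of-envelope inference unit economics.
# All prices and throughput figures below are illustrative.

def inference_gross_margin(price_per_1k_requests,
                           gpu_cost_per_hour,
                           requests_per_gpu_hour,
                           api_cost_per_request=0.0):
    """Gross margin fraction on 1,000 inference requests."""
    compute_cost = gpu_cost_per_hour / requests_per_gpu_hour * 1000
    total_cost = compute_cost + api_cost_per_request * 1000
    return (price_per_1k_requests - total_cost) / price_per_1k_requests

# e.g. $5.00 per 1k requests, $2.50/GPU-hour, 2,000 requests/GPU-hour
margin = inference_gross_margin(5.00, 2.50, 2000)
```

A negative result flags the common failure mode where each additional customer loses money at scale.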

Step 5: Regulatory and IP compliance

Anti-Patterns

Wrong: Accepting AI claims at face value

Many companies market themselves as "AI-powered" while deploying minimal actual AI. [src5]

Correct: Demand technical access and independent evaluation

Ask: "If we removed all third-party API calls, what AI capability would remain?" [src1]
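A first-pass answer to that question can come from scanning the codebase for third-party AI SDK imports. A rough sketch, assuming a Python codebase; the vendor module list is an illustrative assumption, not exhaustive:

```python
# Quick scan for third-party AI SDK usage in a target's repo.
# The vendor module list is an assumption; extend per deal.
import re
from pathlib import Path

AI_VENDOR_MODULES = ("openai", "anthropic", "cohere", "google.generativeai")

def scan_ai_dependencies(repo_root):
    """Map each .py file to the AI vendor modules it imports."""
    pattern = re.compile(
        r"^\s*(?:import|from)\s+("
        + "|".join(map(re.escape, AI_VENDOR_MODULES))
        + r")\b",
        re.MULTILINE)
    hits = {}
    for path in Path(repo_root).rglob("*.py"):
        found = pattern.findall(path.read_text(errors="ignore"))
        if found:
            hits[str(path)] = sorted(set(found))
    return hits
```

If nearly every model-serving path shows up in the scan, the honest answer to the question above is "very little".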

Wrong: Ignoring training data rights

Dismissing provenance concerns because "everyone uses the same data" fails when a lawsuit targets the specific acquired company. [src4]

Correct: Categorize training data into risk tiers

Tier 1 (clear rights), Tier 2 (gray area), Tier 3 (high risk). Quantify exposure per tier. [src4]
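Quantifying exposure per tier can be as simple as a weighted replacement cost. A sketch under stated assumptions: the tier weights, dataset names, row counts, and per-row cost are all hypothetical placeholders for deal-specific estimates.

```python
# Sketch of per-tier exposure from the risk tiering above.
# Tier weights and all figures are illustrative assumptions.

TIER_WEIGHTS = {
    "tier1": 0.0,   # clear rights: licensed, first-party, public domain
    "tier2": 0.3,   # gray area: scraped, unclear terms
    "tier3": 1.0,   # high risk: known copyrighted, unlicensed
}

def exposure(datasets, cost_to_replace_per_row):
    """Weighted cost to replace at-risk training data."""
    return sum(rows * TIER_WEIGHTS[tier] * cost_to_replace_per_row
               for name, tier, rows in datasets)

corpus = [("licensed_news", "tier1", 1_000_000),
          ("web_scrape",    "tier2", 5_000_000),
          ("ebook_dump",    "tier3",   200_000)]
replacement_cost = exposure(corpus, cost_to_replace_per_row=0.01)
```

The output is a negotiating figure: either an escrow/indemnity amount or a post-close remediation budget.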

Wrong: Valuing AI talent without retention analysis

Standard HR DD doesn't assess whether critical AI knowledge is transferable or locked in individuals. [src1]

Correct: Conduct knowledge concentration assessment

Map sole-knowledge holders, assess bus factor, design retention packages vesting over 2-4 years. [src5]
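The bus-factor part of that assessment can be approximated from a system-to-owners map. A greedy sketch (all system and person names are invented for illustration):

```python
# Greedy bus-factor approximation from a knowledge-ownership map.
# ownership maps system name -> set of people who can maintain it.
from collections import Counter

def bus_factor(ownership, threshold=0.5):
    """Smallest number of departures stranding >= threshold of systems.

    A system is "stranded" once all of its owners have left.
    Greedy: prefer removing whoever is sole owner of the most systems.
    """
    owners = {s: set(p) for s, p in ownership.items()}
    total = len(owners)
    removed = 0
    while True:
        stranded = sum(1 for p in owners.values() if not p)
        if stranded / total >= threshold:
            return removed
        sole = Counter(next(iter(p))
                       for p in owners.values() if len(p) == 1)
        if sole:
            victim = sole.most_common(1)[0][0]
        else:
            victim = Counter(x for p in owners.values()
                             for x in p).most_common(1)[0][0]
        for p in owners.values():
            p.discard(victim)
        removed += 1
```

A bus factor of 1 or 2 on core model systems is the strongest argument for the 2-4 year vesting packages noted above.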

Common Misconceptions

Misconception: Traditional tech DD covers AI assets.
Reality: AI DD adds training data provenance, model validation, inference economics, EU AI Act, and MLOps — none covered in standard tech DD. [src2]

Misconception: Open-source AI models mean no IP risk.
Reality: Licenses vary widely — some restrict commercial use, and models trained on copyrighted data may transfer liability. [src4]

Misconception: High benchmark accuracy means production-ready.
Reality: Benchmark accuracy often overstates real-world performance due to distribution mismatch and data leakage. [src3]

Comparison with Similar Concepts

| Concept | Key Difference | When to Use |
| --- | --- | --- |
| AI Due Diligence | Specialized AI/ML assessment | Target has material AI capabilities |
| Technology DD | Broader IT assessment | Every tech acquisition |
| Standard DD | Full 8-workstream | Every M&A transaction |
| AI Vendor Assessment | Evaluating a supplier | Procurement, not M&A |

When This Matters

Fetch this when a user asks about evaluating AI capabilities in an acquisition, assessing training data rights during M&A, or understanding EU AI Act implications for transactions.

Related Units