This recipe executes Dimension 3 of the Retail AI Readiness Diagnostic: an assessment of organizational adoption psychology that determines whether the retailer can absorb AI tools regardless of technology quality. It maps informal influence networks, inventories fears, scores each tool on the Technology Acceptance Model (TAM), audits boundary transparency, measures hero dependency, and designs a peer-driven adoption pilot. This is the most frequently underestimated dimension: technology-first approaches fail because the organization rejects the tools, not because the tools are weak. [src1, src5]
Duration: 1 day · Tool: Viva Insights, Slack Admin Export, or interview proxy
Extract communication metadata, build the influence graph, and calculate eigenvector centrality (influence), betweenness centrality (brokers), and degree centrality (volume). Overlay the informal topology on the formal org chart to spot divergence. Classify key actors: champions, blockers, bridges, heroes. [src2, src4]
Verify: Top 10 influencers identified and classified. · If failed: Use interview-based proxy (15-20 people).
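The centrality calculations above can be sketched with the standard library alone; a minimal version, assuming a hypothetical undirected edge list extracted from communication metadata (names and edges are illustrative, not from the recipe):

```python
# Sketch: rank informal influencers from communication metadata.
# Edge list (sender, receiver) is hypothetical sample data.
from collections import defaultdict

edges = [("ana", "ben"), ("ana", "cara"), ("ben", "cara"),
         ("cara", "dev"), ("dev", "eli")]

adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

# Degree centrality: raw communication volume per person.
degree = {n: len(nbrs) for n, nbrs in adj.items()}

# Eigenvector centrality via power iteration: influence weighted
# by the influence of one's contacts.
score = {n: 1.0 for n in adj}
for _ in range(100):
    new = {n: sum(score[m] for m in adj[n]) for n in adj}
    norm = max(new.values())
    score = {n: v / norm for n, v in new.items()}

top = sorted(score, key=score.get, reverse=True)  # most influential first
```

In production you would run this over the full Viva Insights or Slack export; a graph library (e.g. networkx) also provides betweenness centrality, which this sketch omits.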
Duration: 1 day deploy + 5 days collection · Tool: Anonymous survey
Measure 6 fear categories (1-5 intensity): job displacement, skill obsolescence, loss of autonomy, surveillance anxiety, quality concern, status threat. Include open-ended free-text question. [src5]
Verify: Response rate > 60%, all categories scored. · If failed: Extend the window; personal message from the executive sponsor.
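Aggregating the six fear categories into the inventory can be sketched as below; the category names come from the recipe, while the respondent data and the 4-or-5 "high intensity" cutoff are assumptions for illustration:

```python
# Sketch: aggregate anonymous fear-survey responses per category.
from statistics import mean

CATEGORIES = ["job_displacement", "skill_obsolescence", "loss_of_autonomy",
              "surveillance_anxiety", "quality_concern", "status_threat"]

responses = [  # one dict per respondent, 1-5 intensity (sample data)
    {"job_displacement": 4, "skill_obsolescence": 3, "loss_of_autonomy": 2,
     "surveillance_anxiety": 5, "quality_concern": 2, "status_threat": 1},
    {"job_displacement": 5, "skill_obsolescence": 2, "loss_of_autonomy": 3,
     "surveillance_anxiety": 4, "quality_concern": 3, "status_threat": 2},
]

inventory = [
    {"category": c,
     "mean_intensity": round(mean(r[c] for r in responses), 2),
     # share of respondents scoring this fear 4 or 5
     "high_intensity_rate": sum(r[c] >= 4 for r in responses) / len(responses)}
    for c in CATEGORIES
]
```

Free-text answers from the open-ended question are analyzed separately and only quoted in aggregate, per the anonymity rule later in this recipe.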
Duration: 0.5 days · Tool: Structured assessment
Score perceived usefulness (1-5) and perceived ease of use (1-5) per tool. Classify into quadrants: sweet spot, worth the pain, easy but pointless, dead on arrival. [src1]
Verify: Every in-scope tool scored on both dimensions. · If failed: Use demo sessions for undeployed tools.
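The quadrant classification is a simple split on the two 1-5 scores; a sketch, assuming a midpoint threshold of 3 (the recipe does not specify the cut line) and hypothetical tool names:

```python
# Sketch: place each tool into a TAM quadrant from its two 1-5 scores.
# The threshold of 3 is an assumption, not specified by the recipe.

def tam_quadrant(usefulness: int, ease_of_use: int, threshold: int = 3) -> str:
    if usefulness >= threshold and ease_of_use >= threshold:
        return "sweet spot"
    if usefulness >= threshold:
        return "worth the pain"
    if ease_of_use >= threshold:
        return "easy but pointless"
    return "dead on arrival"

# (perceived usefulness, perceived ease of use) — sample data
tools = {"demand_forecaster": (5, 4), "shelf_scanner": (4, 2),
         "chat_assistant": (2, 5), "legacy_recommender": (1, 2)}
quadrants = {name: tam_quadrant(u, e) for name, (u, e) in tools.items()}
```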
Duration: 0.5 days · Tool: UX audit + staff interviews
Evaluate decision visibility, confidence levels, override paths, error correction loops. Score Level 1 (black box) through Level 5 (full transparency).
Verify: Transparency score per tool with examples. · If failed: Score from vendor docs for undeployed tools.
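One way to make the Level 1-5 score reproducible is to award one level per audit criterion satisfied, on top of a Level-1 (black box) baseline; this one-point-per-check rubric is an assumption layered on the recipe's four criteria:

```python
# Sketch: derive a 1-5 boundary-transparency level from four audit checks.
# Check names mirror the recipe's criteria; the scoring rubric is assumed.
CHECKS = ["decision_visibility", "confidence_shown", "override_path",
          "error_correction_loop"]

def transparency_level(audit: dict) -> int:
    # Level 1 = black box, Level 5 = full transparency
    return 1 + sum(bool(audit.get(c)) for c in CHECKS)

audit = {"decision_visibility": True, "confidence_shown": True,
         "override_path": False, "error_correction_loop": False}
```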
Duration: 0.5 days · Tool: ONA data + interview validation
Identify AI knowledge concentration, training dependency, maintenance bottlenecks, decision authority SPOFs. Score Level 1 (single hero) through Level 5 (distributed). [src2]
Verify: Hero dependency score with SPOFs. · If failed: Use interview-based assessment.
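SPOF detection reduces to asking which AI responsibilities have exactly one qualified person; a sketch, assuming a hypothetical topic-to-experts mapping from the ONA data and a linear mapping from SPOF share to the 1-5 level:

```python
# Sketch: flag single points of failure from a topic -> experts mapping.
# Topic and staff names are hypothetical sample data.
coverage = {
    "model_retraining": {"priya"},
    "prompt_maintenance": {"priya"},
    "vendor_escalation": {"priya", "tom"},
    "data_pipeline": {"tom", "lee"},
}

# Any responsibility with exactly one expert is a SPOF.
spofs = sorted(t for t, experts in coverage.items() if len(experts) == 1)

# Map SPOF share to Level 1 (single hero) .. Level 5 (distributed);
# the linear mapping is an assumption.
spof_share = len(spofs) / len(coverage)
level = 5 - round(spof_share * 4)
```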
Duration: 0.5 days · Tool: Pilot design document
Select highest TAM-scored tool, recruit 3-5 champion influencers, design fear mitigations, define success metrics, scope to 1 department for 30-60 days. [src3]
Verify: Pilot design with named champions. · If failed: No champions = Level 1-2 readiness (critical finding).
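Assembling the pilot spec mechanically from the earlier outputs might look like this; tool names, champion names, and the summed TAM score are illustrative assumptions:

```python
# Sketch: assemble the pilot spec from earlier step outputs (sample data).
tam_scores = {"demand_forecaster": 9, "shelf_scanner": 6}  # usefulness + ease
champions = ["ana", "cara", "dev"]  # champion-classified actors from the ONA

# Per the recipe, an empty champion pool is itself the critical finding.
assert 3 <= len(champions) <= 5, "No champions: Level 1-2 readiness"

pilot = {
    "tool": max(tam_scores, key=tam_scores.get),  # highest TAM-scored tool
    "champions": champions,
    "scope": "1 department",
    "duration_days": 45,  # within the recipe's 30-60 day window
}
```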
```json
{
  "output_type": "retail_adoption_psychology_assessment",
  "format": "JSON + narrative report",
  "sections": [
    {"name": "composite_score", "type": "number", "description": "Adoption readiness score 1-5"},
    {"name": "influence_map", "type": "object", "description": "Informal network with classified actors"},
    {"name": "fear_inventory", "type": "array", "description": "Fear categories with frequency/intensity"},
    {"name": "tam_scores", "type": "array", "description": "TAM scores per AI tool"},
    {"name": "boundary_transparency", "type": "array", "description": "Transparency scores per tool"},
    {"name": "hero_dependency", "type": "object", "description": "SPOFs and backup gaps"},
    {"name": "pilot_design", "type": "object", "description": "Peer-driven pilot specification"}
  ]
}
```
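The output spec names a 1-5 composite_score but does not define its formula; a minimal sketch, assuming an unweighted mean of the five sub-dimension scores (the weighting is an assumption, as are the sample values):

```python
# Sketch: composite adoption-readiness score as the mean of the 1-5
# sub-dimension scores; equal weighting is an assumption.
from statistics import mean

sub_scores = {  # sample values from the five assessment steps
    "influence_network": 3,
    "fear_profile": 2,        # inverted: high measured fear -> low score
    "tam": 4,
    "boundary_transparency": 3,
    "hero_dependency": 2,
}
composite = round(mean(sub_scores.values()), 1)
```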
| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Survey response rate | > 50% | > 70% | > 85% |
| Influence map coverage | > 60% | > 80% | > 90% |
| Fear categories measured | 4 of 6 | All 6 | All 6 + open-ended |
| AI tools TAM-scored | 1 tool | All in-scope | All + competitors |
| Champion candidates | 1-2 | 3-5 | 5+ across depts |
| Error | Likely Cause | Recovery Action |
|---|---|---|
| Survey response < 50% | Disengagement | Sponsor message + extended window |
| No champions found | Org-wide resistance | Document Level 1, fear mitigation first |
| Metadata unavailable | Privacy restrictions | Interview-based ONA |
| TAM scoring not possible | Tools not yet deployed | Vendor demos for surrogate scoring |
| Hero dependency unmeasurable | Small/flat team | Interview-based assessment |
30-60% of retail AI pilots fail due to adoption resistance, not technology. [src5]
Level 5 tools in a Level 2 organization will be rejected. [src1]
Staff quietly maintain shadow workarounds that circumvent AI tools they distrust. [src3]
Peer influence is 3-5x more effective than authority-driven adoption. [src3]
Up to 80% of staff report "no concerns" on attributable surveys due to retaliation fear. [src5]
Use survey tools that cannot link responses to individuals. Aggregate results at the department level at minimum.
Use when assessing retail AI adoption readiness as Dimension 3 of the diagnostic, or standalone before any AI deployment. This dimension determines whether the organization absorbs or rejects AI tools.