This recipe establishes a structured workflow for building MVP features using AI coding tools — Claude Code, Cursor, Bolt.new, and Replit Agent. The output is a working, human-reviewed codebase built through iterative AI-assisted development: spec first, generate, test, refine. [src1]
Which tool?
├── Developer with existing codebase AND terminal workflow
│ └── PATH A: Claude Code — agentic, reads full codebase, runs tests
├── Developer who prefers IDE AND visual editing
│ └── PATH B: Cursor — inline edits, chat, agent mode, multi-file
├── Non-technical founder AND simple app
│ └── PATH C: Bolt.new — browser-based, full-stack from prompt
└── Semi-technical AND needs full IDE in browser
└── PATH D: Replit Agent — browser IDE, deployment built in
| Path | Tool | Best For | Cost | Learning Curve | Output Quality |
|---|---|---|---|---|---|
| A | Claude Code | Senior devs, complex refactoring | Usage-based (~$5-20) | Medium | Highest |
| B | Cursor | All developers, daily coding | $20/mo | Low | High |
| C | Bolt.new | Non-technical, prototypes | $0-20/mo | Very low | Medium |
| D | Replit Agent | Semi-technical, browser IDE | $0-25/mo | Low | Medium |
Duration: 15-30 minutes · Tool: Text editor
Write a structured specification before touching any AI tool. A clear spec produces better AI output than clever prompting. Include user stories, acceptance criteria, technical constraints, and file scope. [src7]
Verify: Spec covers happy path, edge cases, and error states. · If failed: A vague spec produces vague AI output. Add concrete examples and measurable acceptance criteria.
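A minimal spec sketch for a single feature; the feature, file paths, and criteria below are illustrative, not prescriptive:

```markdown
## Feature: CSV export for the reports page

User story: As an account admin, I can download the current report as CSV.

Acceptance criteria:
- Happy path: clicking "Export" downloads a CSV matching the visible table.
- Edge case: an empty report downloads a CSV with headers only.
- Error state: a failed export shows a retry toast, never a blank file.

Technical constraints: Next.js 14, no new dependencies, server-side generation.
File scope: app/reports/* only; do not touch auth/ or billing/.
```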
Duration: 5-10 minutes · Tool: AI tool configuration
Path A (Claude Code): Create a CLAUDE.md file at project root with tech stack, architecture rules, testing conventions, and exclusion rules. [src5]
Path B (Cursor): Create a .cursorrules file at project root with framework, style, and pattern conventions. [src2]
Path C (Bolt.new): Write a comprehensive initial prompt with feature list, tech requirements, and priority order.
Verify: AI tool acknowledges context and references project structure. · If failed: Ensure config files are in the project root.
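A minimal CLAUDE.md sketch for Path A. The stack and rules are assumptions to adapt; a .cursorrules file for Path B carries the same kind of content:

```markdown
# CLAUDE.md
Stack: Next.js 14, TypeScript strict mode, Tailwind, Prisma + Postgres.
Architecture: feature folders under app/; shared code lives in lib/ only.
Testing: Vitest; every new module gets a colocated *.test.ts file.
Style: named exports only; validate all external input with zod.
Do not modify: prisma/migrations/, .env*, anything under vendor/.
```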
Duration: 1-4 hours · Tool: Selected AI tool
Follow the Plan-Generate-Test-Refine loop per feature. Be specific in prompts: name target files, frameworks, data sources, and error handling. After each AI response, read the code, test it, and commit working states. [src1] [src4]
Verify: Feature works per acceptance criteria. Tests pass. · If failed: Share exact error messages with AI. Break the request into smaller pieces.
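A prompt in the spirit of the loop above; the file names, hook, and stack are hypothetical examples of the specificity intended:

```text
In app/reports/ExportButton.tsx, add a CSV export button that uses the
existing useReportData() hook as its data source. Follow the component
pattern in app/shared/DownloadButton.tsx. On fetch failure, show the
shared toast with a retry action. Do not add new dependencies or touch
files outside app/reports/.
```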
Duration: 30-60 minutes · Tool: Code editor + testing tools
Run full test suite, TypeScript compilation, linting, and security audit. Review AI-generated code for hardcoded values, missing error handling, and security vulnerabilities. Prompt the AI to write tests for its generated code.
Verify: All tests pass. TypeScript compiles. No lint errors. No critical vulnerabilities. · If failed: Fix issues iteratively with AI tool context.
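One way to run this quality gate for a Node/TypeScript project; script names assume a conventional package.json and should be adjusted to yours:

```shell
npm test -- --coverage         # full test suite with coverage report
npx tsc --noEmit               # type-check without emitting files
npx eslint . --max-warnings 0  # lint, treating warnings as failures
npm audit --audit-level=high   # fail only on high/critical advisories
```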
Duration: 15-30 minutes · Tool: CLI + hosting platform
Build production version, check bundle size, deploy to staging, run Lighthouse audit. Use AI to fix performance issues with specific metric data.
Verify: Production build succeeds. Lighthouse Performance > 80. Staging works. · If failed: Check build logs for errors. Verify environment variables in hosting platform.
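A sketch of the production checks, assuming a Next.js project; the staging URL and output directory are placeholders:

```shell
npm run build                                   # production build
du -sh .next/                                   # rough bundle-size check
npx lighthouse https://staging.example.com \
  --only-categories=performance --quiet         # performance audit
```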
{
  "output_type": "ai_generated_codebase",
  "format": "code repository + deployed URL",
  "columns": [
    {"name": "repository_url", "type": "string", "description": "Git repository with code", "required": true},
    {"name": "staging_url", "type": "string", "description": "Deployed staging environment", "required": true},
    {"name": "features_implemented", "type": "string", "description": "Features built with AI", "required": true},
    {"name": "test_coverage", "type": "number", "description": "Code coverage percentage", "required": false},
    {"name": "ai_tool_used", "type": "string", "description": "Primary AI tool used", "required": true},
    {"name": "total_iterations", "type": "number", "description": "Generate-test-refine cycles", "required": false}
  ],
  "expected_row_count": "1",
  "sort_order": "N/A",
  "deduplication_key": "repository_url"
}
| Quality Metric | Minimum Acceptable | Good | Excellent |
|---|---|---|---|
| Code review pass rate | > 70% accepted as-is | > 85% | > 95% |
| Test coverage on AI code | > 40% | > 70% | > 85% |
| Iterations per feature | < 8 cycles | < 5 cycles | < 3 cycles |
| Build success rate | Builds without errors | Zero TS errors | Zero lint warnings |
| Time savings vs manual | 20% faster | 40% faster | 60% faster |
If below minimum: Improve prompt specificity. Provide more context. Consider switching tools based on task type.
| Error | Likely Cause | Recovery Action |
|---|---|---|
| AI uses deprecated API | Training data cutoff | Provide current API docs as context |
| AI modifies wrong files | Insufficient constraints | Add exclusion rules in CLAUDE.md or .cursorrules |
| Generated code won't compile | Missing imports or types | Share full error output with AI |
| Inconsistent code style | No style guide in context | Add code examples to project rules |
| Rate limit exceeded (429) | Too many API requests | Wait for reset; use batch mode or slow requests |
| Quality degrades mid-session | Context window saturation | Start new session with fresh, focused context |
| Component | Free Tier | Standard ($20-40/mo) | Heavy Use ($50+/mo) |
|---|---|---|---|
| Claude Code (API) | $5 initial credit | ~$10-20/sprint | ~$30-50/sprint |
| Cursor Pro | 50 slow requests/mo | $20/mo | $40/mo |
| Bolt.new | Limited generations | $20/mo | $50/mo |
| Replit | Limited Agent runs | $25/mo | $50/mo |
| Total per sprint | $0-5 | $20-40 | $50-100 |
Vague prompts: "Build me a dashboard" produces generic code needing extensive refactoring. [src1] Instead, include target files, framework, component patterns, data sources, and error-handling requirements.
Skipping review: accepting AI output unread leads to inconsistent styles, security vulnerabilities, and compounding technical debt. [src5] Read generated code like a pull request: check imports, error handling, naming, and architecture alignment.
Big-bang generation: asking for the whole app in a single prompt produces fragile, hard-to-extend code. [src3] Scaffold first, add features one by one, and commit between features for maintainable, revertible code.
Use this recipe when a developer or technical founder needs a structured methodology for building features with AI coding tools. Requires an existing project scaffold. This recipe covers the workflow and prompting methodology — not specific features to build or tech stack selection.