AI-Assisted Code Generation Workflow

Type: Execution Recipe · Confidence: 0.89 · Sources: 7 · Verified: 2026-03-12

Purpose

This recipe establishes a structured workflow for building MVP features using AI coding tools — Claude Code, Cursor, Bolt.new, and Replit Agent. The output is a working, human-reviewed codebase built through iterative AI-assisted development: spec first, generate, test, refine. [src1]

Prerequisites

Constraints

Tool Selection Decision

Which tool?
├── Developer with existing codebase AND terminal workflow
│   └── PATH A: Claude Code — agentic, reads full codebase, runs tests
├── Developer who prefers IDE AND visual editing
│   └── PATH B: Cursor — inline edits, chat, agent mode, multi-file
├── Non-technical founder AND simple app
│   └── PATH C: Bolt.new — browser-based, full-stack from prompt
└── Semi-technical AND needs full IDE in browser
    └── PATH D: Replit Agent — browser IDE, deployment built in

| Path | Tool | Best For | Cost | Learning Curve | Output Quality |
|------|------|----------|------|----------------|----------------|
| A | Claude Code | Senior devs, complex refactoring | Usage-based (~$5-20) | Medium | Highest |
| B | Cursor | All developers, daily coding | $20/mo | Low | High |
| C | Bolt.new | Non-technical, prototypes | $0-20/mo | Very low | Medium |
| D | Replit Agent | Semi-technical, browser IDE | $0-25/mo | Low | Medium |

Execution Flow

Step 1: Write a Spec Before Prompting

Duration: 15-30 minutes · Tool: Text editor

Write a structured specification before touching any AI tool. A clear spec produces better AI output than clever prompting. Include user stories, acceptance criteria, technical constraints, and file scope. [src7]

Verify: Spec covers happy path, edge cases, and error states. · If failed: Add concrete examples; a vague spec produces vague AI output.
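A spec at the right level of detail might follow this skeleton. The feature, file names, and criteria below are purely illustrative, not drawn from the source:

```markdown
## Feature: Password reset (illustrative example)

### User story
As a registered user, I want to reset my password via email
so that I can regain access to my account.

### Acceptance criteria
- Happy path: valid email receives a reset link within 60 seconds
- Edge case: unknown email shows the same confirmation (no user enumeration)
- Error state: expired token shows a clear "link expired" message

### Technical constraints
- Framework: Next.js App Router, TypeScript strict mode
- Tokens expire after 30 minutes and are single-use

### File scope
- Touch only: app/reset/, lib/auth/tokens.ts
- Do not modify: app/login/, middleware.ts
```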

Step 2: Set Up Project Context

Duration: 5-10 minutes · Tool: AI tool configuration

Path A (Claude Code): Create a CLAUDE.md file at project root with tech stack, architecture rules, testing conventions, and exclusion rules. [src5]
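A minimal CLAUDE.md sketch; the stack, rules, and paths below are placeholders for illustration:

```markdown
# CLAUDE.md (illustrative)

## Tech stack
- Next.js 14, TypeScript strict, Tailwind, Prisma/PostgreSQL

## Architecture rules
- Server components by default; "use client" only when interactive
- All data access goes through lib/db/, never inline queries

## Testing conventions
- Vitest; every new module gets a *.test.ts beside it

## Exclusions
- Never modify: prisma/migrations/, .env*, infra/
```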

Path B (Cursor): Create a .cursorrules file at project root with framework, style, and pattern conventions. [src2]
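A .cursorrules sketch in the same spirit; the conventions listed are hypothetical examples, not prescriptions:

```
# .cursorrules (illustrative)
- Framework: React 18 + TypeScript, functional components only
- Style: named exports, no default exports; 2-space indent
- Patterns: fetch data via the project's useApi hook, never raw fetch in components
- Never touch files under src/generated/
```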

Path C (Bolt.new): Write a comprehensive initial prompt with feature list, tech requirements, and priority order.

Verify: AI tool acknowledges context and references project structure. · If failed: Ensure config files are in the project root.

Step 3: Generate Code Iteratively

Duration: 1-4 hours · Tool: Selected AI tool

Follow the Plan-Generate-Test-Refine loop per feature. Be specific in prompts: name target files, frameworks, data sources, and error handling. After each AI response, read the code, test it, and commit working states. [src1] [src4]

Verify: Feature works per acceptance criteria. Tests pass. · If failed: Share exact error messages with AI. Break the request into smaller pieces.
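A prompt at the level of specificity Step 3 calls for might read as follows; the file names, endpoint, and stack are hypothetical:

```
In src/components/InvoiceTable.tsx, add a paginated table of invoices.
- Fetch from GET /api/invoices?page=N using our useApi hook
- Framework: React 18 + TypeScript; follow the patterns in Table.tsx
- Handle loading, empty, and 4xx/5xx error states explicitly
- Do not modify the API route or any file outside src/components/
```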

Step 4: Review and Harden

Duration: 30-60 minutes · Tool: Code editor + testing tools

Run full test suite, TypeScript compilation, linting, and security audit. Review AI-generated code for hardcoded values, missing error handling, and security vulnerabilities. Prompt the AI to write tests for its generated code.

Verify: All tests pass. TypeScript compiles. No lint errors. No critical vulnerabilities. · If failed: Feed each failure back to the AI with full error context and fix iteratively.
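Part of the Step 4 review can be automated. This minimal Python sketch flags likely hardcoded secrets and dev hosts in generated source; the patterns are illustrative and not exhaustive, and it is no substitute for a real security audit:

```python
import re

# Crude patterns for values that should live in env vars, not source.
SUSPECT_PATTERNS = [
    re.compile(r"""(api[_-]?key|secret|password|token)\s*[:=]\s*["'][^"']+["']""", re.I),
    re.compile(r"https?://(localhost|127\.0\.0\.1)[:/]"),  # hardcoded dev hosts
]

def find_hardcoded(source: str) -> list[str]:
    """Return the lines that look like hardcoded config or secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in SUSPECT_PATTERNS):
            hits.append(f"line {lineno}: {line.strip()}")
    return hits
```

Running this over each AI-generated file before accepting a diff catches the most common hardcoded-value slips cheaply.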

Step 5: Optimize and Deploy

Duration: 15-30 minutes · Tool: CLI + hosting platform

Build production version, check bundle size, deploy to staging, run Lighthouse audit. Use AI to fix performance issues with specific metric data.

Verify: Production build succeeds. Lighthouse Performance > 80. Staging works. · If failed: Check build logs for errors. Verify environment variables in hosting platform.
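The Lighthouse gate in Step 5 can be scripted against the report JSON. A Python sketch, assuming Lighthouse's convention of reporting category scores as 0-1 floats under `categories.<name>.score`:

```python
import json

def performance_passes(report_json: str, threshold: int = 80) -> bool:
    """Return True if the Lighthouse performance score clears the threshold.

    Assumes scores are 0-1 floats under categories.performance.score.
    """
    report = json.loads(report_json)
    score = report["categories"]["performance"]["score"] * 100
    return score > threshold

# Trimmed-down example report:
sample = '{"categories": {"performance": {"score": 0.86}}}'
```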

Output Schema

{
  "output_type": "ai_generated_codebase",
  "format": "code repository + deployed URL",
  "columns": [
    {"name": "repository_url", "type": "string", "description": "Git repository with code", "required": true},
    {"name": "staging_url", "type": "string", "description": "Deployed staging environment", "required": true},
    {"name": "features_implemented", "type": "string", "description": "Features built with AI", "required": true},
    {"name": "test_coverage", "type": "number", "description": "Code coverage percentage", "required": false},
    {"name": "ai_tool_used", "type": "string", "description": "Primary AI tool used", "required": true},
    {"name": "total_iterations", "type": "number", "description": "Generate-test-refine cycles", "required": false}
  ],
  "expected_row_count": "1",
  "sort_order": "N/A",
  "deduplication_key": "repository_url"
}
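A quick sanity check that a produced record satisfies the schema's required columns; a minimal sketch that checks field presence only, not types:

```python
# Required fields taken from the output schema above.
REQUIRED = ["repository_url", "staging_url", "features_implemented", "ai_tool_used"]

def missing_required(record: dict) -> list[str]:
    """Return the required output-schema fields absent from the record."""
    return [field for field in REQUIRED if field not in record]
```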

Quality Benchmarks

| Quality Metric | Minimum Acceptable | Good | Excellent |
|----------------|--------------------|------|-----------|
| Code review pass rate | > 70% accepted as-is | > 85% | > 95% |
| Test coverage on AI code | > 40% | > 70% | > 85% |
| Iterations per feature | < 8 cycles | < 5 cycles | < 3 cycles |
| Build success rate | Builds without errors | Zero TS errors | Zero lint warnings |
| Time savings vs manual | 20% faster | 40% faster | 60% faster |

If below minimum: Improve prompt specificity. Provide more context. Consider switching tools based on task type.
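To make the gate mechanical, a sketch that classifies one metric against the benchmark tiers, with thresholds copied from the code-review pass-rate row:

```python
def pass_rate_tier(accepted_pct: float) -> str:
    """Map a code-review pass rate onto the benchmark tiers."""
    if accepted_pct > 95:
        return "excellent"
    if accepted_pct > 85:
        return "good"
    if accepted_pct > 70:
        return "minimum acceptable"
    return "below minimum"
```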

Error Handling

| Error | Likely Cause | Recovery Action |
|-------|--------------|-----------------|
| AI uses deprecated API | Training data cutoff | Provide current API docs as context |
| AI modifies wrong files | Insufficient constraints | Add exclusion rules in CLAUDE.md or .cursorrules |
| Generated code won't compile | Missing imports or types | Share full error output with AI |
| Inconsistent code style | No style guide in context | Add code examples to project rules |
| Rate limit exceeded (429) | Too many API requests | Wait for reset; use batch mode or slow requests |
| Quality degrades mid-session | Context window saturation | Start new session with fresh, focused context |
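For the 429 row, the standard recovery is exponential backoff. A minimal Python sketch, where `call` stands in for whatever API request your tool makes and `RateLimitError` is a placeholder for the client library's 429 exception:

```python
import time

class RateLimitError(Exception):
    """Placeholder for a client library's 429 exception."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `call` on rate limits, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```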

Cost Breakdown

| Component | Free Tier | Standard ($20-40/mo) | Heavy Use ($50+/mo) |
|-----------|-----------|----------------------|----------------------|
| Claude Code (API) | $5 initial credit | ~$10-20/sprint | ~$30-50/sprint |
| Cursor Pro | 50 slow requests/mo | $20/mo | $40/mo |
| Bolt.new | Limited generations | $20/mo | $50/mo |
| Replit | Limited Agent runs | $25/mo | $50/mo |
| Total per sprint | $0-5 | $20-40 | $50-100 |

Anti-Patterns

Wrong: Prompting with vague descriptions

"Build me a dashboard" produces generic code needing extensive refactoring. [src1]

Correct: Provide specific, constrained prompts

Include target files, framework, component patterns, data sources, and error handling requirements.

Wrong: Accepting AI output without reading it

Leads to inconsistent styles, security vulnerabilities, and compounding technical debt. [src5]

Correct: Review every diff before accepting

Read generated code like a pull request. Check imports, error handling, naming, and architecture alignment.

Wrong: Having AI write the entire app in one prompt

Single-prompt generation produces fragile, hard-to-extend code. [src3]

Correct: Build incrementally, one feature at a time

Scaffold first, add features one by one, commit between features for maintainable, revertible code.

When This Matters

Use this recipe when a developer or technical founder needs a structured methodology for building features with AI coding tools. Requires an existing project scaffold. This recipe covers the workflow and prompting methodology — not specific features to build or tech stack selection.

Related Units