
AI Coding Trends & Patterns

A collection of emerging patterns, techniques, and methodologies in AI-assisted software development. These approaches represent evolving best practices from the community.

The Ralph Wiggum technique runs a coding agent in a continuous loop, iterating on its own output until tests pass and the code compiles. It uses “stop hooks” to prevent premature exit, forcing the AI to refine its work through multiple passes instead of attempting perfection on the first try.

→ Read the full Ralph Wiggum guide

Key characteristics:

  • Deterministically bad failures (predictable and informative)
  • Automatic retry logic
  • Loop continues until completion criteria met
  • Success depends on good prompt engineering

Use cases:

  • Refactoring loops (duplicate code detection and cleanup)
  • Linting loops (incremental error fixing)
  • Entropy reduction (code smell removal)
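The loop can be sketched as a driver that re-invokes the agent until the completion criteria hold. This is a minimal illustration, not any specific tool's API: `run_agent` and `tests_pass` are placeholders for your agent invocation and test command.

```python
def ralph_loop(run_agent, tests_pass, max_iters=20):
    """Re-run the agent until completion criteria are met.

    run_agent and tests_pass stand in for your agent CLI and test
    runner; the 'stop hook' here is simply refusing to exit until
    tests_pass() returns True.
    """
    for attempt in range(1, max_iters + 1):
        run_agent()  # agent edits the codebase in place
        if tests_pass():
            return attempt  # number of passes the agent needed
    raise RuntimeError("completion criteria never met; revisit the prompt")
```

The iteration cap is the safety valve: a poorly engineered prompt shows up as a loop that never converges rather than one that silently exits early.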


Spec-Driven Development is a methodology that treats specifications as executable, living artifacts that directly drive AI agent implementation. Instead of jumping straight to code, you define intent in a specification that becomes the source of truth—preventing the “vibe coding” trap where agents build something that compiles but doesn’t match what you actually wanted.

→ Read the full Spec-Driven Development guide

Key characteristics:

  • Specifications defined upfront as living documents
  • Phased workflow: Constitution → Specify → Plan → Tasks → Implement
  • Multi-variant exploration from same spec
  • Works with GitHub Copilot, Claude Code, Gemini CLI, Cursor, and more

Use cases:

  • Greenfield development with clear intent
  • Feature work in complex existing codebases
  • Legacy modernization
  • High-stakes features (payments, healthcare, safety-critical)
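As a rough sketch of the spec-as-source-of-truth idea, a spec can be a structured artifact whose tasks and acceptance criteria gate implementation. The field names and readiness rule below are illustrative assumptions, not the format of any particular spec tool.

```python
from dataclasses import dataclass, field

@dataclass
class Spec:
    """A living specification: intent plus testable acceptance criteria."""
    intent: str
    acceptance_criteria: list[str]
    tasks: list[str] = field(default_factory=list)

    def ready_to_implement(self) -> bool:
        # Agents don't start until intent, criteria, and a task
        # breakdown all exist and trace back to the spec.
        return bool(self.intent) and bool(self.acceptance_criteria) and bool(self.tasks)

spec = Spec(
    intent="Users can reset their password via an emailed link",
    acceptance_criteria=[
        "reset link expires after one hour",
        "old password stops working after a successful reset",
    ],
)
spec.tasks = ["add reset-token model", "send reset email", "add reset form"]
```

Because the spec is data rather than a frozen document, multiple implementation variants can be generated from the same instance and graded against the same acceptance criteria.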


RPI (Research, Plan, Implement) is a three-phase framework for transforming chaotic AI interactions into predictable, high-quality software delivery. Instead of jumping straight to code generation, RPI breaks work into focused phases with built-in validation: research what exists, plan the change systematically, then execute mechanically.

→ Read the full RPI guide

The three phases:

  1. Research: Document what exists today—no opinions, no suggestions, just facts.
  2. Plan: Design the change with atomic tasks, success criteria, and validation checkpoints.
  3. Implement: Execute mechanically, verify after each phase, and update progress tracking.

Key principle: Planning without research leads to bad assumptions. RPI uses FAR (Factual, Actionable, Relevant) and FACTS (Feasible, Atomic, Clear, Testable, Scoped) validation scales to ensure readiness before proceeding.
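A FACTS-style readiness gate can be sketched as below. Representing each criterion as a boolean flag is an assumption for illustration; real RPI checklists carry more nuance.

```python
# The five FACTS criteria a plan task must satisfy before implementation.
FACTS = ("feasible", "atomic", "clear", "testable", "scoped")

def facts_ready(task: dict) -> bool:
    """A single task may proceed only if every FACTS criterion holds."""
    return all(task.get(criterion, False) for criterion in FACTS)

def plan_ready(tasks: list[dict]) -> bool:
    """The whole plan is ready only when every task passes the gate."""
    return all(facts_ready(task) for task in tasks)
```

The point of the gate is ordering: a plan that fails it sends you back to research or planning, never forward to implementation.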


Outcome Engineering (O16g) is a manifesto for reorienting development around outcomes rather than code. It argues that with AI agents removing the constraint of human bandwidth, we should manage to cost (tokens) instead of capacity (engineer-hours), measure success by verified impact rather than lines written, and treat code as the mechanism for delivering ideas rather than the end goal itself.

→ Read the full Outcome Engineering guide

Core reframing:

  • Creation, not code — Focus on what you’re building, not how you’re typing it.
  • Cost, not time — If the outcome is worth the tokens, it gets built.
  • Certainty, not vibes — The only truth is the rate of positive change delivered to the customer.
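The “cost, not time” reframing can be illustrated with a toy budget check. The price and impact figures here are invented for illustration only.

```python
def worth_building(expected_impact_usd: float,
                   estimated_tokens: int,
                   usd_per_million_tokens: float = 15.0) -> bool:
    """Approve work when expected verified impact exceeds its token cost.

    The default price is a made-up placeholder; plug in your model's
    actual rate and your own impact estimate.
    """
    token_cost = estimated_tokens / 1_000_000 * usd_per_million_tokens
    return expected_impact_usd > token_cost
```

Under this framing an idea is never rejected for lack of engineer-hours, only when its expected impact fails to cover its token budget.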

The 16 principles include:

  • “The Backlog is Dead” — Never reject an idea for lack of time, only for lack of budget.
  • “Code the Constitution” — Encode laws and intent into the environment where agents can use them.
  • “Verified Reality is the Only Truth” — Grade agents on verified outcomes, not lines written.
  • “Failures are Artifacts” — Debug the decision, not just the code.


OpenClaw is an open-source AI agent runtime that connects language models to your existing tools and services. Instead of AI living in a browser tab, it runs locally (or on your VPS) and integrates with messaging apps, calendars, email, shell, browser, and more—giving agents persistent context about your workflow.

→ Read the full OpenClaw guide

Key characteristics:

  • Runs locally or self-hosted (your data stays yours)
  • Connects to messaging (Telegram, Discord, Signal, Slack), calendars, email, and more
  • Persistent memory across sessions via workspace files
  • Sub-agent spawning for parallel background tasks
  • Skills system for extending capabilities

Use cases:

  • Personal AI assistant with access to your actual tools
  • Automated workflows (inbox triage, calendar management, code review)
  • Proactive monitoring and scheduled tasks
  • Background research and task execution


Note: OpenClaw was originally called “ClawdBot”, then “MoltBot”, before landing on “OpenClaw”.

In this pattern, you break complex tasks into small, manageable chunks with a feedback loop between iterations, rather than requesting monolithic code blocks.

Benefits:

  • Easier to debug and validate
  • Better context management
  • More control over direction
  • Reduced cognitive load

Example approach:

  1. “First, update the type definitions”
  2. Review and approve
  3. “Now update the implementation to match”
  4. Review and approve
  5. “Finally, add tests”
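The steps above can be sketched as a driver with a review gate between chunks. `generate` and `review` are placeholders for your model call and your approval step.

```python
def iterate_in_chunks(prompts, generate, review, max_retries=3):
    """Send one small prompt at a time; proceed only after review approves."""
    approved = []
    for prompt in prompts:
        draft = generate(prompt)
        retries = 0
        while not review(draft):
            if retries >= max_retries:
                raise RuntimeError(f"chunk never approved: {prompt!r}")
            draft = generate(prompt)  # re-request the same chunk
            retries += 1
        approved.append(draft)
    return approved
```

Each chunk is validated before the next begins, so a wrong turn costs one chunk of rework instead of a whole monolithic response.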


This is the practice of frontloading all relevant context (codebase architecture, API docs, constraints, invariants) into prompts before coding.

What to include:

  • Architecture overview
  • API documentation
  • Constraints and requirements
  • Existing patterns and conventions
  • Known gotchas or edge cases

Benefit: Reduces hallucinations and improves first-attempt accuracy.
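Assembling a primed prompt can be as simple as concatenating labeled sections before the task. The section names follow the checklist above; the layout itself is an assumption, not a required format.

```python
def build_primed_prompt(task, **context_sections):
    """Frontload labeled context sections before the task itself."""
    parts = [
        f"{title.replace('_', ' ').title()}:\n{body}"
        for title, body in context_sections.items()
    ]
    parts.append(f"Task:\n{task}")  # the task always comes last
    return "\n\n".join(parts)

prompt = build_primed_prompt(
    "Add pagination to the /users endpoint",
    architecture_overview="Flask app, service layer, SQLAlchemy models",
    constraints="No breaking changes to existing query params",
)
```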


This pattern asks the AI to explain its reasoning step by step before providing code, similar to requiring a design doc before implementation.

Example prompt structure:

Before writing code, explain:
1. What problem you're solving
2. Your approach and why
3. Key design decisions
4. Potential trade-offs
Then provide the implementation.

Benefits:

  • Catches logical errors early
  • Makes reasoning auditable
  • Helps humans understand approach
  • Often improves code quality
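The prompt structure above can be wrapped into a reusable helper; this is a sketch, and the preamble text simply mirrors the example.

```python
# Reasoning requirement prepended to every coding task.
REASONING_PREAMBLE = """Before writing code, explain:
1. What problem you're solving
2. Your approach and why
3. Key design decisions
4. Potential trade-offs
Then provide the implementation."""

def reasoning_first(task: str) -> str:
    """Prepend the reasoning requirement to any coding task."""
    return f"{REASONING_PREAMBLE}\n\nTask: {task}"
```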


In this style of AI-assisted development, often called “vibe coding”, developers describe what they want in natural language and iterate with the AI.

Characteristics:

  • Natural language specifications
  • Rapid iteration
  • Learn by doing
  • Less upfront planning

When it works:

  • Prototyping and exploration
  • Well-understood domains
  • Individual developer projects

Risks:

  • Accumulated technical debt
  • Unclear requirements
  • Harder to maintain long-term


This is a systematic approach to defining clear success criteria for AI-generated code: establish performance thresholds up front, set validation checkpoints, and track progress against those goals across iterations.

Components:

  • Clear success criteria
  • Performance thresholds
  • Validation checkpoints
  • Tracking across iterations

Benefits:

  • Measurable progress
  • Objective quality gates
  • Easier debugging
  • Better documentation
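One way to make these components concrete is a small criteria table checked on every iteration. The threshold semantics and metric names below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, value: float) -> bool:
        if self.higher_is_better:
            return value >= self.threshold
        return value <= self.threshold

def validate(criteria, measurements):
    """Return a per-criterion pass/fail map for one iteration."""
    return {c.name: c.passes(measurements[c.name]) for c in criteria}

criteria = [
    Criterion("test_coverage", 0.80),
    Criterion("p95_latency_ms", 200.0, higher_is_better=False),
]
```

Storing one such report per iteration gives you the measurable progress and objective quality gates listed above.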

When evaluating these patterns, consider:

  • Team maturity: Some patterns require more AI experience.
  • Project phase: Different patterns suit exploration vs. production.
  • Code criticality: Safety-critical code needs more rigorous approaches.
  • Team size: Collaborative work may need more structured patterns.

This is a living document. Patterns will evolve as the community learns what works.