AI Guardrails

LaunchFast includes a CLAUDE.md file that constrains AI coding assistants so that the changes they make to the codebase preserve its correctness.

What CLAUDE.md does

When an AI assistant (such as Claude, Cursor, or GitHub Copilot) starts working in the codebase, it reads CLAUDE.md first. The file encodes four kinds of rules, sketched after this list:

  • Architectural boundaries that must not be violated
  • Security invariants that must be maintained
  • Patterns that must be followed for consistency
  • Anti-patterns that must be refused
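
A minimal, hypothetical sketch of how these four categories might be laid out follows; the section names and specific rules are illustrative, not LaunchFast's actual contents.

```markdown
<!-- Hypothetical CLAUDE.md excerpt, for illustration only -->

## Architectural boundaries
- All database access goes through the existing data layer; never import a database driver elsewhere.

## Security invariants
- Every mutating endpoint keeps its rate limit and CSRF check; never remove or bypass them.

## Required patterns
- New features follow the layout of an existing feature; copy one as the template.

## Anti-patterns (refuse these)
- Do not add a second authentication mechanism, data-access path, or configuration system.
```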

Key constraints

The AI is instructed to do the following (one possible phrasing is sketched after this list):

  • Stop on ambiguity rather than guess
  • Refuse entropy-increasing changes
  • Surface conflicts explicitly
  • Treat CLAUDE.md as non-negotiable law
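
How these directives read in the file is a style choice; one hypothetical phrasing (illustrative wording, not quoted from LaunchFast):

```markdown
<!-- Hypothetical CLAUDE.md excerpt, for illustration only -->

## How to behave
- If a request is ambiguous or conflicts with this file, stop and say so. Do not guess.
- Prefer reusing or removing code over adding new abstractions. Refuse changes that add entropy.
- If the user's request conflicts with this file, state the conflict explicitly. This file wins.
```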

What it prevents

Common AI mistakes that CLAUDE.md guards against (a sketch of how these might read as prohibitions follows the list):

  • Adding alternative authentication mechanisms
  • Creating new database access patterns
  • Introducing plugin or extension systems
  • Suggesting multi-cloud abstractions
  • Bypassing rate limiting or CSRF protection
  • Committing secrets to version control
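
These guards are most effective when written as explicit prohibitions the AI can check a change against. A hypothetical excerpt (wording is illustrative):

```markdown
<!-- Hypothetical CLAUDE.md excerpt, for illustration only -->

## Never do
- Never add an authentication path beside the one that already exists.
- Never access the database outside the established data-access layer.
- Never introduce plugin, extension, or multi-cloud abstraction layers.
- Never weaken or bypass rate limiting or CSRF protection.
- Never write secrets, keys, or tokens into tracked files.
```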

The authority model

CLAUDE.md implements an authority model where:

  • "No" is expected and correct
  • Removing options is often an improvement
  • Helpful but incorrect output is failure
  • If correctness is unclear, the AI must stop

Why this matters

AI assistants accelerate code production, but they can also accelerate entropy. By encoding correctness constraints in a file that the AI reads first, LaunchFast ensures that AI assistance maintains, rather than degrades, the founding state.

The goal is not to prevent AI usage. The goal is to make AI usage safe by default.