Constraint-Based Prompting

Constraint-based prompting is a technique where you define boundaries and limitations for AI output rather than prescribing exact solutions. By specifying what the code must or must not do, you guide AI toward correct solutions while allowing flexibility in implementation details.

Example

Instead of 'write a login function', you specify constraints: 'Must validate email format. Must hash passwords with bcrypt. Must rate-limit to 5 attempts per minute. Must return typed error objects. Must not store plain-text passwords.'

Constraint-based prompting shifts the focus from prescribing how the AI should do something to stating which conditions the result must satisfy. This approach often produces better results for complex tasks, where the space of acceptable implementations is larger than any single set of step-by-step instructions.
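As a sketch, the two prompt styles from the example above can be put side by side. The prompt text and the list-building helper here are illustrative, not a fixed API:

```typescript
// Illustrative only: two ways to ask for the same login function.
// The instruction prompt prescribes methods; the constraint prompt
// states conditions the output must satisfy.

const instructionPrompt =
  "Write a login function. Use bcrypt to hash the password " +
  "and add a try-catch block around the database call.";

const constraints = [
  "Must validate email format",
  "Must hash passwords with bcrypt",
  "Must rate-limit to 5 attempts per minute",
  "Must return typed error objects",
  "Must not store plain-text passwords",
];

// Render the constraints as a bulleted list under the task statement.
const constraintPrompt =
  "Write a login function.\n" +
  constraints.map((c) => `- ${c}`).join("\n");
```

The constraint prompt is longer, but every line is checkable against the output, which the instruction prompt's phrasing is not.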

Constraints vs Instructions

| Instruction | Constraint |
| --- | --- |
| "Use bcrypt to hash" | "Passwords must be securely hashed" |
| "Add try-catch block" | "Must handle all errors gracefully" |
| "Return null on error" | "Must never throw unhandled exceptions" |

Constraints define the goal; instructions prescribe the method.
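The last table row makes this concrete: "Must never throw unhandled exceptions" fixes the goal while leaving the method open. A hedged sketch in TypeScript, using a hypothetical `Result` shape (not a standard library type), shows two different methods satisfying the same constraint:

```typescript
// Hypothetical Result type: the constraint "Must never throw unhandled
// exceptions" fixes the goal but not the method used to reach it.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

// Method A: validate up front so the failing path is never entered.
function parseIntStrict(s: string): Result<number> {
  if (!/^-?\d+$/.test(s)) return { ok: false, error: `not an integer: ${s}` };
  return { ok: true, value: Number(s) };
}

// Method B: catch internally and convert the exception into a value.
function parseJsonSafe(s: string): Result<unknown> {
  try {
    return { ok: true, value: JSON.parse(s) };
  } catch (e) {
    return { ok: false, error: String(e) };
  }
}
// Both functions satisfy the constraint; neither was told which method to use.
```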

Types of Constraints

Must have (hard constraints):

  • "Must validate all inputs"
  • "Must be type-safe"
  • "Must handle loading states"

Must not (prohibitions):

  • "Must not mutate input parameters"
  • "Must not use deprecated APIs"
  • "Must not block the main thread"

Should have (soft constraints):

  • "Should be under 50 lines"
  • "Should prefer composition over inheritance"
  • "Should minimize dependencies"

Writing Effective Constraints

  1. Be specific — "Must handle errors" is vague; "Must return Result<T, Error> type" is specific
  2. Prioritize — List critical constraints first
  3. Explain why — "Must not store tokens in localStorage (XSS vulnerability)"
  4. Test constraints — Verify AI output actually satisfies them
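Point 4 can itself be automated: some constraints are directly checkable in code. A minimal sketch for "Must not mutate input parameters", where `dedupe` is a stand-in for AI-generated output and `assertNoMutation` is a hypothetical checker:

```typescript
// Stand-in for AI-generated code under the constraint
// "Must not mutate input parameters".
function dedupe(items: number[]): number[] {
  return [...new Set(items)]; // builds a copy rather than mutating
}

// Check the constraint: snapshot the input, run the function,
// and verify the input is unchanged afterwards.
function assertNoMutation<T>(fn: (xs: T[]) => T[], input: T[]): boolean {
  const snapshot = [...input];
  fn(input);
  return (
    input.length === snapshot.length &&
    input.every((x, i) => x === snapshot[i])
  );
}

const passed = assertNoMutation(dedupe, [3, 1, 3, 2]);
```

A constraint you cannot verify, manually or in a test like this, is a constraint you cannot enforce.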

When to Use Constraint-Based Prompting

  • Security-sensitive code
  • Performance-critical paths
  • Code that must integrate with existing patterns
  • When you know requirements but not implementation