Hallucination

Hallucination in AI refers to a model generating confident, plausible-sounding information that is factually incorrect or entirely fabricated. In coding, this means AI might invent non-existent APIs, suggest deprecated methods, or create function signatures that don't match the actual library.

Example

AI suggests using 'react-router's useNavigator() hook' — it sounds right and follows React naming conventions, but the actual hook is called useNavigate(). The AI hallucinated a plausible but wrong API.
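
In code, the difference is a single plausible-looking name that only breaks once you try to use it. A minimal sketch, assuming react-router-dom v6 (the component here is invented for illustration):

  // The hallucinated hook fails at the import: react-router-dom has no
  // useNavigator export, so TypeScript or the bundler rejects this line.
  // import { useNavigator } from "react-router-dom";

  // The real v6 hook:
  import { useNavigate } from "react-router-dom";

  function BackButton() {
    const navigate = useNavigate();
    return <button onClick={() => navigate(-1)}>Back</button>;
  }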

Hallucination is perhaps the most important concept for vibe coders to understand. AI doesn't "know" things the way humans do — it predicts plausible outputs, which sometimes means inventing convincing fiction.

Why Hallucinations Happen

AI models are pattern matchers, not fact databases:

  1. Probabilistic generation — AI predicts likely next tokens, not verified facts (a toy sketch follows this list)
  2. Training data gaps — Model may not have seen specific APIs or versions
  3. Confident by design — Models are trained to produce fluent, confident text
  4. No fact-checking — Generation happens without real-time verification
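
Item 1 is the core mechanism, and a toy sketch makes it concrete. The hook names and probabilities below are invented purely for illustration; the point is that sampling rewards whatever looks likely, with no step that checks the winner actually exists:

  // Toy next-token distribution for a completion like "const navigate = use..."
  const nextTokenProbs: Record<string, number> = {
    useNavigate: 0.55,  // the real react-router v6 hook
    useNavigator: 0.3,  // plausible-sounding fabrication
    useHistory: 0.15,   // the old v5 hook, removed in v6
  };

  // Sample one token in proportion to its probability.
  function sampleToken(probs: Record<string, number>): string {
    let r = Math.random();
    for (const [token, p] of Object.entries(probs)) {
      r -= p;
      if (r <= 0) return token;
    }
    return Object.keys(probs)[0]; // fallback for rounding edge cases
  }

  // Roughly 3 times in 10 this names a hook that does not exist,
  // and it is generated just as fluently as the real one.
  console.log(sampleToken(nextTokenProbs));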

Common Coding Hallucinations

  • Invented APIs — Methods that sound right but don't exist (illustrated after this list)
  • Wrong signatures — Incorrect parameters or return types
  • Outdated information — Deprecated patterns presented as current
  • Mixed frameworks — Combining patterns from different libraries
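
The first two patterns, sketched with hypothetical suggestions (the wrong versions are invented for illustration; the corrected lines use real Node and JavaScript APIs):

  // Invented API: fs.readFileToString(path) sounds right, but Node's fs
  // module provides readFile / readFileSync, not readFileToString.
  import { readFileSync } from "node:fs";
  const config = readFileSync("config.json", "utf8");

  // Wrong signature: an assistant might offer JSON.parse(text, { strict: true }),
  // but the second argument is a reviver function, not an options object.
  const order = JSON.parse('{"total": "19.99"}', (key, value) =>
    key === "total" ? Number(value) : value
  );
  console.log(typeof order.total); // "number": the reviver converted it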

How to Protect Yourself

Verify critical code:

  • Check official documentation
  • Test that the code actually runs (a quick smoke-test sketch follows this list)
  • Use TypeScript to catch type errors
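
Even a throwaway script is enough to separate real APIs from fabricated ones. A minimal sketch; slugify here is a hypothetical AI-suggested helper standing in for whatever you were given:

  import assert from "node:assert/strict";

  // Hypothetical AI-suggested helper being checked before it ships.
  function slugify(title: string): string {
    return title
      .toLowerCase()
      .trim()
      .replace(/[^a-z0-9]+/g, "-")
      .replace(/(^-|-$)/g, "");
  }

  // A hallucinated String or Array method would throw here immediately,
  // long before the code reaches an application.
  assert.equal(slugify("Hello, World!"), "hello-world");
  console.log("smoke test passed");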

Prompt for uncertainty:

  • "If you're not sure about this API, say so"
  • "What's the source for this approach?"

Recognize warning signs:

  • Unusual API names
  • Overly complex solutions to simple problems
  • Code that looks right but doesn't run

Hallucination vs Mistakes

Not every error is a hallucination. AI also makes genuine mistakes in logic, just like human developers. Hallucination specifically refers to fabricated information presented as fact.
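
The difference shows up clearly in code. Both snippets below are invented for illustration: the first uses only real APIs but gets the logic wrong, the second reads fine but calls a method that doesn't exist.

  // A genuine mistake: real APIs, flawed logic. The off-by-one in the loop
  // condition silently skips the last price.
  function sumPrices(prices: number[]): number {
    let total = 0;
    for (let i = 0; i < prices.length - 1; i++) {
      total += prices[i];
    }
    return total;
  }

  // A hallucination: the logic reads fine, but JavaScript arrays have no
  // built-in sum() method, so this fails before any logic can run.
  // const total = prices.sum();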