In-Context Learning

In-context learning is a model's ability to perform a task from examples provided in the prompt, without any weight updates or fine-tuning. The model extracts patterns from the examples you provide and applies them to new inputs within the same conversation.

Example

You show Claude three examples of converting JavaScript to TypeScript with your team's type conventions. For subsequent conversions in the same chat, it follows the pattern without you explicitly restating the rules.
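
As a rough sketch, the prompt behind that workflow might look like the snippet below. The conventions and examples are invented for illustration, not taken from any particular team or tool.

```typescript
// Hypothetical few-shot prompt: two invented JS -> TS examples demonstrate the
// conventions (explicit return types, typed parameters), then a new input is appended.
const fewShotPrompt = `
Convert JavaScript to TypeScript. Follow the conventions shown in the examples.

Example 1:
Input:  function getUser(id) { return users.find(u => u.id === id); }
Output: function getUser(id: string): User | undefined { return users.find(u => u.id === id); }

Example 2:
Input:  const parseConfig = (raw) => JSON.parse(raw);
Output: const parseConfig = (raw: string): Config => JSON.parse(raw) as Config;

Now convert:
Input:  function sumTotals(orders) { return orders.reduce((sum, o) => sum + o.total, 0); }
Output:`;
```

The rules are never stated explicitly; the model infers them from the two examples and applies them to the third input.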

In-context learning is why AI can adapt to your specific needs without training a custom model. It's the mechanism behind few-shot prompting's effectiveness.

How It Works

  1. You provide examples — Input-output pairs in your prompt
  2. Model identifies patterns — Extracts the transformation logic
  3. Model applies patterns — Uses learned rules on new inputs

All of this happens within a single prompt — no model updates occur.
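
A minimal sketch of step 1 in code, packaging input-output pairs into a single prompt string; steps 2 and 3 happen inside the model at inference time. The Example type and buildFewShotPrompt function are illustrative names, not part of any library.

```typescript
// Illustrative sketch: assemble input-output pairs into one prompt.
interface Example {
  input: string;
  output: string;
}

function buildFewShotPrompt(task: string, examples: Example[], newInput: string): string {
  // Step 1: the examples travel inside the prompt text itself.
  const shots = examples
    .map((ex, i) => `Example ${i + 1}:\nInput: ${ex.input}\nOutput: ${ex.output}`)
    .join("\n\n");

  // Steps 2 and 3 are performed by the model at inference time: it infers the
  // transformation from the examples and applies it to the new input.
  return `${task}\n\n${shots}\n\nInput: ${newInput}\nOutput:`;
}
```

A call such as buildFewShotPrompt("Convert JavaScript to TypeScript.", examples, newSnippet) produces a single prompt string; nothing about the model changes between calls.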

Why This Matters

Traditional machine learning requires:

  • Collecting training data
  • Running training processes
  • Deploying updated models

In-context learning requires:

  • Writing good examples in your prompt
  • That's it

What AI Can Learn In-Context

  • Formatting conventions — How to style code, messages, docs
  • Domain terminology — Your project's specific vocabulary
  • Transformation patterns — How to convert X to Y
  • Coding style — Your team's preferred patterns
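
Each of these categories uses the same mechanism. The pairs below are invented purely to show what a formatting-convention set and a terminology set might look like.

```typescript
// Invented example pairs. The first set teaches a commit-message format,
// the second maps generic phrasing onto project-specific terminology.
const formattingExamples = [
  { input: "fixed the login bug", output: "fix(auth): resolve login redirect loop" },
  { input: "added dark mode", output: "feat(ui): add dark mode toggle" },
];

const terminologyExamples = [
  { input: "the user record", output: "the Member profile" },
  { input: "the nightly billing job", output: "the InvoiceSync worker" },
];
```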

Limitations

In-context learning is powerful but not unlimited:

  • Context window bounds — Examples consume tokens that the rest of your prompt also needs
  • Complex patterns — May need more examples than the context window can hold
  • Persistence — What the model picks up doesn't carry over between sessions

For persistent learning, combine in-context examples with Cursor Rules or system prompts.
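
One practical response to the context-window limit is to cap how many examples you include. The sketch below assumes a rough 4-characters-per-token estimate; swap in your model's actual tokenizer for accurate counts.

```typescript
// Crude token estimate (~4 characters per token); replace with a real tokenizer.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep only as many examples as fit within a rough token budget.
function fitExamplesToBudget(
  examples: { input: string; output: string }[],
  maxTokens: number,
): { input: string; output: string }[] {
  const kept: { input: string; output: string }[] = [];
  let used = 0;

  for (const ex of examples) {
    const cost = estimateTokens(ex.input) + estimateTokens(ex.output);
    if (used + cost > maxTokens) break; // stop before examples crowd out the task itself
    kept.push(ex);
    used += cost;
  }
  return kept;
}
```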
