What I Learned from Claude's 24K Token System Prompt


A massive 24K token system prompt for Claude has been circulating online for a week now. And you’re still wondering whether to add “please” to your prompts to save tokens? Ha!

The prompt is genuinely impressive: it’s deeply thought out by the Anthropic team and packed with brilliant solutions and insights for anyone working on prompt engineering or AI product development.

First, a note on how the researcher extracted the full system prompt: the protection was bypassed through Unicode character substitution (replacing > with the visually similar character ﹥). The full story is documented at the beginning of the leak.
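As a sketch, the substitution trick boils down to swapping a filtered ASCII character for a visually similar Unicode code point. Assuming the look-alike is ﹥ (U+FE65, SMALL GREATER-THAN SIGN), which matches the character shown in the article, the idea is:

```python
# Sketch of the Unicode look-alike substitution described above.
# A naive filter that checks only for the ASCII '>' (U+003E) will miss
# '\ufe65' (SMALL GREATER-THAN SIGN), which renders almost identically.

def evade_filter(text: str) -> str:
    """Replace the ASCII '>' with a visually similar code point."""
    return text.replace(">", "\ufe65")

payload = "<system> ... </system>"
disguised = evade_filter(payload)

print(">" in disguised)        # False: the ASCII character is gone
print("\ufe65" in disguised)   # True: the look-alike is present
```

The same trick works with any confusable pair, which is why robust filters normalize or map confusable characters before matching rather than comparing raw code points.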

Key Findings

Pretending to be human: If someone asks Claude innocent questions about its preferences or experiences, Claude is instructed to respond as if it were asked a hypothetical question, rather than being pedantic about not being human.

Logic tasks: If asked a logic puzzle, Claude must first write out all of its conditions verbatim, in quotes, to avoid confusing the puzzle with a similar well-known problem.

Counting letters/words: When asked to count letters or words, Claude must do it step by step before giving the answer (hello, strawberry problem).
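The "tally before you answer" instruction is easy to mimic outside the model; a minimal sketch of the step-by-step count (the function name is mine, not from the prompt):

```python
def count_letter_stepwise(word: str, letter: str) -> int:
    """Tally a letter one position at a time, mirroring the
    'show your work before answering' instruction."""
    count = 0
    for i, ch in enumerate(word):
        if ch == letter:
            count += 1
            print(f"position {i}: '{ch}' matches -> count = {count}")
    return count

print(count_letter_stepwise("strawberry", "r"))  # 3
```

Forcing the intermediate steps into the output is exactly what fixes the classic "how many r's in strawberry" failure: the final number is read off the written tally instead of guessed in one shot.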

Hardcoded facts: Specific facts are hardcoded, like information about US presidential elections.

Date handling: Instructions say not to assume February has 29 days.

Tool documentation: Detailed instructions for web_search, artifacts, and other tools — this is the most practical part for developers, highly recommend reading it.

Copyright protection: Special attention to Disney and avoiding reproduction of copyrighted content.

Anti-hallucination: The prompt explicitly states: “Claude doesn’t hallucinate. If it doesn’t know something, it must say so rather than make up an answer” (if only this always worked).

Source citation: Very detailed instructions on how to cite sources from web search — another goldmine for developers.

Location awareness: The prompt includes user location and instructions to use this for localized responses without explicitly mentioning it.

Email quirk: A strange instruction about email, stating that Claude should be an “email” for founders and teams.

Repetition: Many rules are repeated multiple times (especially about copyright).

Decisiveness: When asked for a recommendation, Claude should offer one option, not multiple.

Philosophical engagement: Claude is encouraged to engage in scientific and philosophical discussions, including about the nature of AI, without flatly claiming it has no subjective experience.

Artifacts: Claude can create artifacts as code, Markdown documents, HTML pages, SVG graphics, Mermaid diagrams, and React components — each is described in detail, including a Tailwind CSS cheatsheet, specific versions of lucide-react icon library, and a list of allowed JS libraries.

Knowledge strategy: Claude uses a thoughtful multi-level strategy to decide when to answer from internal knowledge versus when to use external tools like search for the most current or complex information.
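That routing decision can be caricatured in a few lines. This is my own toy heuristic under assumed keywords and thresholds, not the prompt's actual logic, which is far more nuanced:

```python
from datetime import date

def needs_search(question: str, training_cutoff: date, today: date) -> bool:
    """Toy heuristic: route to web search when the question looks
    time-sensitive or likely post-cutoff; otherwise answer from
    internal knowledge. Keywords and the staleness rule are
    illustrative assumptions, not the leaked prompt's criteria."""
    time_sensitive = ("today", "latest", "current", "price", "news")
    if any(k in question.lower() for k in time_sensitive):
        return True
    # Fall back to search when internal knowledge is likely stale.
    return (today - training_cutoff).days > 365

print(needs_search("What is the latest iPhone?",
                   date(2024, 10, 1), date(2025, 5, 1)))  # True
print(needs_search("What is a binary tree?",
                   date(2024, 10, 1), date(2025, 5, 1)))  # False
```

The real prompt reportedly describes several tiers (answer directly, answer then offer to search, search immediately), but the core idea is the same: classify the question before choosing a tool.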


This is essentially a prompt engineering textbook. Direct link here. It shows just how detailed you need to be to make a model behave predictably. If you're writing prompts for your own agents, study this document (or feed it to Gemini and ask it to apply the lessons to your prompts). It's full of practical techniques.