LLM Wiki

Type: concept Tags: andrej-karpathy, ingest-protocol, lint-protocol, hot-cache, token-efficiency, llm-wiki-vs-rag

Summary

A personal knowledge system in which an LLM (such as Claude) maintains and navigates a set of well-organized markdown files rather than a vector database. Originated from Andrej Karpathy's public post that went viral on X in early 2026.

Core Idea

Instead of embeddings + semantic search, you give the LLM:

  • A structured folder of markdown wiki pages
  • An index.md as the entry point
  • [[wikilinks]] connecting related pages
  • A CLAUDE.md explaining how to navigate and maintain it

The LLM reads the index, follows links, and synthesizes an answer, much as a human would navigate a wiki.
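The navigation loop above can be sketched in a few lines. This is a minimal illustration, not the actual tooling from the post: the file layout (`<page>.md` per page) and the helper names are assumptions.

```python
import re
from pathlib import Path

# Matches [[target]] or [[target|display text]]; captures only the target.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def links_in(text: str) -> list[str]:
    """Extract [[wikilink]] targets, ignoring optional |display text."""
    return WIKILINK.findall(text)

def follow(root: Path, page: str, depth: int = 1, seen=None) -> dict[str, str]:
    """Load a page and, up to `depth` hops, every page it links to.

    Mirrors how the LLM browses: start at the index, read a page,
    then fetch only the pages it explicitly links.
    """
    seen = seen if seen is not None else {}
    path = root / f"{page}.md"
    if page in seen or not path.exists():
        return seen
    seen[page] = path.read_text()
    if depth > 0:
        for target in links_in(seen[page]):
            follow(root, target, depth - 1, seen)
    return seen
```

Starting from `index.md` with `depth=2` pulls in the index plus two hops of linked pages, leaving the rest of the wiki untouched.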

Why It Works

  • LLMs are good at reading structured text and following explicit relationships
  • Explicit [[backlinks]] encode relationships more precisely than chunk similarity
  • The LLM auto-maintains indexes and summaries as pages are added
  • Scales well to a few hundred pages with no infrastructure beyond a text editor
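The backlink point can be made concrete: unlike chunk similarity, a wiki lets you materialize an explicit incoming-link map. A minimal sketch (the function name and layout are hypothetical, not from the post):

```python
import re
from collections import defaultdict
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def backlink_map(root: Path) -> dict[str, set[str]]:
    """Map each page name to the set of pages that link to it.

    This is the explicit relationship structure that embedding-based
    chunk retrieval never builds.
    """
    incoming: dict[str, set[str]] = defaultdict(set)
    for path in root.glob("*.md"):
        for target in WIKILINK.findall(path.read_text()):
            incoming[target].add(path.stem)
    return dict(incoming)
```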

Karpathy’s Scale

~100 articles, ~500,000 words — handled without RAG.
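A back-of-envelope estimate shows why navigation beats bulk loading at this scale. The tokens-per-word ratio and context-window size below are rough assumptions, not figures from the post:

```python
words = 500_000
tokens_per_word = 1.33       # rough rule of thumb for English prose (assumption)
total_tokens = int(words * tokens_per_word)   # roughly 665,000 tokens

context_window = 200_000     # a typical large context window (assumption)

# The whole wiki is several times larger than one context window,
# so the model must follow links selectively instead of reading
# everything at once.
assert total_tokens > context_window
```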

Key Workflows

  • ingest-protocol — turning raw source files into wiki pages
  • lint-protocol — health checks to find gaps and broken links
  • hot-cache — optional ~500-word cache to reduce tokens on repeat queries
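A lint pass of the kind lint-protocol describes can be sketched simply: scan every page, flag links whose target page does not exist, and flag orphan pages nothing links to. The function name and report shape are assumptions for illustration:

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def lint(root: Path) -> dict[str, list[str]]:
    """Report broken links (targets with no .md file) and orphan
    pages (nothing links to them, excluding the index itself)."""
    pages = {p.stem for p in root.glob("*.md")}
    linked: set[str] = set()
    broken: list[str] = []
    for path in root.glob("*.md"):
        for target in WIKILINK.findall(path.read_text()):
            linked.add(target)
            if target not in pages:
                broken.append(f"{path.stem} -> {target}")
    orphans = sorted(pages - linked - {"index"})
    return {"broken": sorted(broken), "orphans": orphans}
```

Running this periodically (or asking the LLM to) catches the two most common forms of wiki rot before they compound.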

Compared To

See llm-wiki-vs-rag for a full comparison with traditional semantic-search RAG.

Sources