Spinning up
Topic · GEO and AI visibility
The robots.txt of the AI era. Adoption is still under 5%.
Definition
llms.txt is a proposed plain-text manifest at /llms.txt that summarizes a website's content for large language model crawlers. The spec, hosted at llmstxt.org, defines a Markdown structure with sections of named links so models can ingest a curated map of the site. In the Q1 2026 tracked corpus, adoption was approximately 4% across the analyzed cohort.
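A minimal /llms.txt following the llmstxt.org layout might look like the sketch below; the site name, section names, and URLs are all illustrative, not taken from any real deployment:

```markdown
# Example Corp

> Example Corp builds billing APIs for SaaS teams. This file lists the pages most useful to language models.

## Docs

- [Quickstart](https://example.com/docs/quickstart): Set up an API key and make a first call
- [API reference](https://example.com/docs/api): Endpoints, parameters, and error codes

## Optional

- [Blog](https://example.com/blog): Product announcements
```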
Services relevant to this topic
Industries that care about this
Vertical
B2B SaaS
B2B SaaS in 2026 buys through ChatGPT and Perplexity before it ever sees Google. If your brand isn't cited in AI answers, demos don't book.
Vertical
AI-native companies
AI-native companies need GEO that names the model, the use case, and the developer adoption signal. Generic positioning loses to specific technical-buyer hooks.
FAQ
Where does llms.txt live, and in what format?
At the site root, at the path /llms.txt, served as text/plain. Following the spec, it uses a Markdown structure: an H1 with the site name, a blockquote summary, and named sections containing link lists.
How is llms.txt different from robots.txt?
robots.txt controls crawler access. llms.txt curates content discovery: it tells the model what is worth reading, not whether it may read it.
Do AI crawlers actually use llms.txt?
Support is partial. Anthropic publicly references it. OpenAI and Google have not confirmed dedicated parsers but treat it as crawlable plain text. Adding the file is low-cost and forward-compatible.
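Because the file is just plain text at a well-known path, checking whether a site serves one takes a few lines. A sketch using only the Python standard library (the function names are ours, and example.com is a placeholder):

```python
from urllib.parse import urlsplit, urlunsplit
from urllib.request import urlopen

def llms_txt_url(site_url: str) -> str:
    """Build the well-known /llms.txt URL from any page URL on the site."""
    parts = urlsplit(site_url)
    return urlunsplit((parts.scheme, parts.netloc, "/llms.txt", "", ""))

def has_llms_txt(site_url: str, timeout: float = 5.0) -> bool:
    """Return True if the site responds with HTTP 200 at /llms.txt."""
    try:
        with urlopen(llms_txt_url(site_url), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False
```

For example, `llms_txt_url("https://example.com/docs/page")` yields `https://example.com/llms.txt` regardless of which page you start from.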
Do I also need llms-full.txt?
Yes, if the site has high-volume content. llms-full.txt is the long-form variant that concatenates full page content, intended for offline LLM ingestion.
What format should the link lines use?
Standard Markdown list syntax: a hyphen, a space, bracketed link text, a parenthesized URL, and an optional colon-prefixed description. Other formats (em-dash separated, plain URL lists) may fail spec-conformant parsers.
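That link-line shape can be validated with a small regular expression. A sketch in Python (the helper name and example URLs are ours, not part of the spec):

```python
import re

# Matches one spec-style line: "- [title](url)" with an optional ": description".
LINK_LINE = re.compile(
    r"^- \[(?P<title>[^\]]+)\]\((?P<url>[^)\s]+)\)(?:: (?P<desc>.+))?$"
)

def parse_link_line(line: str):
    """Return (title, url, description-or-None), or None if the line is malformed."""
    m = LINK_LINE.match(line.strip())
    if not m:
        return None
    return m.group("title"), m.group("url"), m.group("desc")
```

A plain URL or an em-dash-separated line returns None here, which mirrors how a strict spec parser would reject it.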
Talk to the team