
GEO and AI visibility · Madrid
The robots.txt of the AI era. Adoption is still under 5%. Madrid operators apply the same llms.txt discipline as global peers, with local context (timezone, channel mix, buying patterns) shifting which moves carry the most leverage.
Definition
llms.txt is a proposed plain-text manifest at /llms.txt that summarizes a website's content for large language model crawlers. The spec, hosted at llmstxt.org, defines a Markdown structure with sections of named links so models can ingest a curated map of the site. In the Q1 2026 tracked corpus, adoption was approximately 4% across the analyzed cohort.
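A minimal illustration of the structure described above; the site name, summary, and URLs are hypothetical placeholders rather than values from any real deployment:

# Example Store
> Example Store sells refurbished espresso machines and publishes maintenance guides for home baristas.

## Guides
- [Descaling guide](https://example.com/guides/descaling.md): step-by-step descaling walkthrough
- [Grinder setup](https://example.com/guides/grinder.md): dialing in a new grinder

## Policies
- [Shipping and returns](https://example.com/policies/shipping.md)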
How it lands in Madrid
Xpand Media applies the llms.txt specification inside Madrid-based engagements with the local context built in: the Europe/Madrid timezone, market-specific buyer behavior, and the platforms Spain-based operators actually use. The discipline is global; the operational rhythm is local.
Services that apply the llms.txt specification
FAQ
Where does llms.txt live, and what does it contain?
At the site root, at the path /llms.txt, served as text/plain. Following the spec, it uses Markdown structure with an H1 (site name), a blockquote summary, and named sections with link lists.
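A minimal sketch, assuming Python and a hypothetical example.com host, of fetching /llms.txt and checking the basics listed above:

from urllib.request import urlopen

url = "https://example.com/llms.txt"  # hypothetical host

with urlopen(url) as resp:
    content_type = resp.headers.get_content_type()  # expect "text/plain"
    body = resp.read().decode("utf-8")

lines = [line for line in body.splitlines() if line.strip()]

checks = {
    "served as text/plain": content_type == "text/plain",
    "starts with an H1 site name": bool(lines) and lines[0].startswith("# "),
    "has a blockquote summary": any(line.startswith("> ") for line in lines),
    "has at least one named section": any(line.startswith("## ") for line in lines),
}

for label, ok in checks.items():
    print(("ok      " if ok else "missing ") + label)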
How is llms.txt different from robots.txt?
robots.txt controls crawler access. llms.txt curates content discovery: it tells the model what's worth reading, not whether it can read.
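A side-by-side sketch with hypothetical values: the robots.txt rule decides whether a crawler may fetch pages at all, while the llms.txt entry points a model at a page worth reading.

robots.txt:
User-agent: GPTBot
Allow: /

llms.txt:
- [Pricing](https://example.com/pricing.md): current plans and tiers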
Do AI crawlers actually read llms.txt?
Adoption is partial. Anthropic publicly references it. OpenAI and Google have not confirmed dedicated parsers but treat it as crawlable plain text. Adding it is low cost and forward-compatible.
Is llms-full.txt also worth publishing?
Yes, if the site has high-volume content. llms-full.txt is the long-form variant that concatenates page content, intended for offline LLM ingestion.
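A minimal sketch, assuming Python and a hypothetical content/ folder of per-page Markdown files, of how a site might concatenate pages into llms-full.txt; the layout and separator are illustrative choices, not spec requirements:

from pathlib import Path

content_dir = Path("content")      # hypothetical folder of per-page Markdown files
output = Path("llms-full.txt")

parts = []
for page in sorted(content_dir.glob("*.md")):
    # Prefix each page with its name, then include the full page body.
    parts.append(f"# {page.stem}\n\n{page.read_text(encoding='utf-8').strip()}\n")

output.write_text("\n---\n\n".join(parts), encoding="utf-8")
print(f"Wrote {output} from {len(parts)} pages")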
What format should the link entries use?
Standard Markdown: hyphen, space, bracketed link text, parenthesized URL, optional colon-prefixed description. Other formats (em-dash separated, plain URL lists) may fail spec parsers.
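A minimal sketch, in Python, of checking link lines against that format; the regex and sample lines are illustrative assumptions, not the spec's reference parser.

import re

# hyphen, space, bracketed link text, parenthesized URL, optional ": description"
LINK_LINE = re.compile(r"^- \[[^\]]+\]\(\S+\)(: .+)?$")

samples = [
    "- [Pricing](https://example.com/pricing.md): current plans and tiers",  # valid
    "- [Docs](https://example.com/docs.md)",                                 # valid, description omitted
    "Pricing — https://example.com/pricing.md",                              # em-dash separated
    "https://example.com/pricing.md",                                        # bare URL
]

for line in samples:
    verdict = "matches the spec format" if LINK_LINE.match(line) else "may fail spec parsers"
    print(verdict + ": " + line)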
Run the llms.txt specification in Madrid