“Should I have an LLMS.txt?”
No. Adding a root-level llms.txt isn’t worth your development resources right now. There’s no recognized standard, no broad adoption by LLM platforms, and it doesn’t influence rankings or AI Overviews. At best it’s a speculative pathway for future LLM agent/bot data retrieval; today your time is better spent on clear content, strong internal linking, and robust schema.
I’ve seen this movie before. A CMO once emailed me: “I hear we need AMP pages. Let’s get these done ASAP.” We diverted other efforts and built to the trend. We got a short-lived lift, then gravity returned, because the real issues were architecture, content depth, and a messy internal link structure. Now my days are spent answering the question “Should we implement llms.txt as the one thing to win the GEO game?”
Friendly reminder: if it sounds too good to be true, it probably is. GEO/SEO doesn’t have a cheat code; it’s a system, and a smart one at that. We have to stay hyper-focused on building context, not chasing the newest “fix-all”.
What is the /llms.txt proposal?
The idea comes from Jeremy Howard, co-founder of fast.ai, who published the /llms.txt proposal on Sept 3, 2024. It’s a markdown file at your site root that gives LLMs a concise, curated guide to your most important pages, ideally linking to clean versions of docs so agents can fetch signals without wading through messy HTML. It’s explicitly framed for inference-time use (when a model is answering a question), not as a training opt-out or a ranking factor.
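For context, the proposal describes a simple markdown structure: an H1 title, a blockquote summary, and H2 sections containing annotated link lists, with an optional “Optional” section for lower-priority URLs. A minimal sketch - the company name, URLs, and descriptions here are purely illustrative:

```markdown
# Example Co

> Example Co is a widget-tracking SaaS. The links below point to
> clean, LLM-friendly versions of our key pages.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): setup in five minutes
- [API reference](https://example.com/docs/api.md): endpoints and authentication

## Optional

- [Changelog](https://example.com/changelog.md): release history
```

That’s the whole format: curated links with short descriptions, nothing more exotic than plain markdown.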
It’s causing a stir because CMS and plugin ecosystems are rolling out “one-click” support, making it feel like an easy lever to “get into AI answers,” even though formal adoption by major LLMs is still unclear. Yoast now auto-generates llms.txt, and Webflow added a way to upload it at the root - moves that raise visibility and client questions. At the same time, no AI systems currently consume llms.txt as a standard input.
Responding to llms.txt Questions
I know you’re getting emails, pings and in-meeting questions about llms.txt (so am I), and it’s eating time we should spend on real impact. Use these quick, copy-ready replies to deliver a clear, confident answer without reopening the whole debate.
- “Do we need llm(s).txt?”
No, not a priority. /llms.txt is a proposal, not a standard, with limited adoption and unclear impact today. Consider it optional future-proofing at best.
- “Will it help rankings or AI Overviews?”
No impact on rankings, AIO, traffic or other SEO KPIs. The proposal targets LLM/agent retrieval, not web rankings.
- “Is it like robots.txt or sitemaps?”
No, it serves its own goals. Robots.txt manages crawler access; sitemaps list URLs; /llms.txt (if used) is a curated guide for LLMs to fetch context at runtime.
- “What should we do instead if we want better AI answers about us?”
Focus on fundamentals: clear, crawlable docs; solid IA; authoritative content; and structured data. These help both search engines and browsing/RAG systems today.
- “When would we consider it?”
If/when major assistants announce first-class support (auto-discovery or weighting), or if your product relies heavily on agent workflows. Until then, treat /llms.txt as an experiment, not a KPI lever.
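Of the fundamentals above, structured data is the most concrete to act on right away. A minimal sketch of JSON-LD Organization markup for a site’s homepage; the organization name and URLs are placeholders, not a prescription:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-co"
  ]
}
</script>
```

Unlike llms.txt, this markup is consumed by search engines today and can be validated with existing testing tools before it ships.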
Bottom line
llm(s).txt is a clever idea still looking for real adoption - interesting for future agents, not a winning strategy today. Don’t get distracted by the shiny object! Keep it on the backlog as an optional experiment; if the majors announce first-class support and we can test for measurable lift, we’ll consider it. Until then, double down on fundamentals (clean docs, schema, IA, robots rules).