AI systems now answer customer questions, explain features, and guide users of a growing variety of products and services through the tasks they need to complete. In many organizations, they do this instead of sending people to the documentation. Sometimes they do it without ever showing the customer the documentation at all.

This means our technical documentation has quietly stopped being reading material and started a new career as knowledge infrastructure. It is now the raw material from which AI systems assemble confident, well-phrased answers, whether those answers are spot-on or wildly incorrect.

Recent research (PDF) from EMNLP 2025, the Conference on Empirical Methods in Natural Language Processing, explains why so many of these systems sound smart while being wrong, and why fixing the problem has less to do with “better AI” and more to do with how documentation is structured. Let’s talk about what the research found (and why tech writers are holding the keys to AI success, whether they asked for them or not).

First, Let’s Be Very Clear About What This Is Not

This is not about documentation of AI products. This is about AI systems that: ...