
"Bring your own AI" (BYOAI) is the emerging workplace pattern where employees use personal AI subscriptions — ChatGPT, Claude, Gemini, and similar tools — for work tasks without organizational oversight, policy, or governance. It mirrors the "bring your own device" (BYOD) phenomenon that disrupted IT departments when smartphones first entered the workplace, and it is creating many of the same problems: data leaks, inconsistent output, and governance gaps that widen faster than policy can close them.
Employees who aren't provided with organizational AI tools simply bring their own, just as workers once brought personal smartphones when companies were slow to issue them. In technical documentation, the pattern is now widespread.
According to the 2026 AI Use in Technical Documentation Survey, discussed in a recent Content Wrangler webinar, AI Adoption in Tech Docs, sponsored by Heretto and featuring Dominik Wever (founder, Promptitude) alongside hosts Scott Abel (The Content Wrangler) and Patrick Bosek (CEO, Heretto), the majority of technical writers are already using generative AI — but most are doing so independently, without formal tools, training, or policy guidance from their organizations.
The survey leans heavily toward seasoned professionals: most respondents have spent at least ten, and often twenty-plus years in technical writing. These aren’t casual experimenters. And yet end-to-end AI workflow integration remains uncommon. Feeding a Jira ticket into an AI-assisted process and producing updated documentation on the other side is still rare. Most usage is shallow: editing a paragraph, rewriting a section, generating a summary.
Ungoverned AI use is far more common in technical documentation than most organizations realize, and the numbers are striking.
The 2026 AI Use in Technical Documentation Survey found that roughly one in five organizations has no AI guidance whatsoever, even as their employees actively use these tools. Another 17% are still in the process of developing policies. Only 42% report having formal, documented AI policies in place — and even those policies have not been evaluated for relevance or effectiveness.
Meanwhile, adoption is accelerating regardless of governance readiness.
The gap between adoption pace and governance readiness is where most of the risk lives.
That gap between enthusiasm and structure produces three concrete risks.
Modern AI tools are increasingly personalized. They develop memory and behavioral patterns around the individual using them. Ten writers on the same team, each with their own personalized AI subscription, will not get equivalent output, even from identical prompts. The AI has adapted to each person's writing style, vocabulary, and habits.
As Patrick Bosek noted in the webinar, if a writer uses their personal AI for creative projects outside work, those tendencies can subtly bleed into the professional content they produce at work. Multiply that effect across a whole team and you get documentation that looks and sounds inconsistent across articles — which customers notice more than organizations typically expect.
Many consumer AI plans use submitted content to train future model versions. That means internal product roadmaps, proprietary processes, unreleased feature descriptions, and confidential business data can effectively leave the organization the moment an employee pastes them into a personal chat window. Without a clear policy defining what data may and may not be entered into which tools, this is an active risk, not a theoretical one.
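One guardrail a policy can mandate is an automated check before content leaves the organization. The sketch below is a minimal illustration of that idea; the patterns and function name are hypothetical examples, not a real screening product:

```python
import re

# Hypothetical "do not paste" markers a team might agree on.
# A real policy would maintain this list deliberately.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\binternal[- ]only\b", re.IGNORECASE),
    re.compile(r"\broadmap\b", re.IGNORECASE),
    re.compile(r"\bunreleased\b", re.IGNORECASE),
]

def flags_confidential(text: str) -> list[str]:
    """Return the patterns matched in `text`; empty list means no flags."""
    return [p.pattern for p in CONFIDENTIAL_PATTERNS if p.search(text)]
```

A check like this cannot catch everything, but it turns "don't paste sensitive data" from a memo into an enforceable step in the workflow.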
Without centralized tooling, shared prompting standards, or organizational training, different team members reach very different conclusions about what AI-assisted documentation is supposed to look like. The survey reflected exactly this fragmentation: 25% reported significant productivity improvements, 42% moderate improvements, 20% mixed results, and 13% little or no impact. That spread is not a measurement problem. It indicates a governance problem that shows up directly in the content.
This is where the risks become most consequential for technical documentation teams specifically. AI output quality is directly tied to input quality.
"The 'garbage in, garbage out' principle applies precisely here. If your source documentation lacks clear structure, if it doesn't specify which role performs which action, if procedures aren't marked as procedures, if event triggers and success conditions are absent — the AI cannot compensate for those gaps. It will hallucinate and fill in missing information with plausible-sounding answers that may be entirely incorrect."
— Dominik Wever, founder of Promptitude, in the AI Adoption in Tech Docs webinar
Scott Abel identified four elements that AI specifically needs but that most technical content does not explicitly provide: explicit role assignments (who performs each action), procedures marked as procedures, event triggers (what initiates a task), and success conditions (how to verify the outcome).
Technical writers were trained to write for human readers, who are forgiving and inferential. AI systems are not. When referenced information doesn't exist in the source content, AI invents it. The documentation looks fine — grammar is correct, spelling checks out — but the structural elements AI depends on for accuracy are absent. That is not an AI problem. It is a content problem that AI has made newly visible.
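The gap detection Wever describes can be made mechanical. This sketch checks a topic for the structural signals an AI pipeline relies on; the element names and the dict-based topic shape are illustrative assumptions, not a standard schema:

```python
# Structural elements an AI-assisted pipeline needs to find explicitly.
# These names are illustrative, chosen to match the discussion above.
REQUIRED_ELEMENTS = {
    "role",       # which role performs the action
    "procedure",  # steps explicitly marked as a procedure
    "trigger",    # the event that starts the task
    "success",    # how the reader knows it worked
}

def missing_elements(topic: dict) -> set[str]:
    """Return the required structural elements absent from `topic`."""
    return REQUIRED_ELEMENTS - set(topic)

topic = {
    "title": "Reset a user password",
    "role": "administrator",
    "procedure": ["Open Admin > Users", "Select the user", "Click Reset"],
}
# This topic reads fine to a human, but lacks the trigger and the
# success condition an AI needs to answer "when?" and "did it work?"
gaps = missing_elements(topic)
```

A human reader infers the missing trigger and success condition without noticing; the check makes visible exactly what the AI cannot infer.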
The downstream consequences are significant and often underestimated. When organizations deploy AI chatbots on their websites, customer portals, or help centers, those systems draw from existing documentation. If that documentation is unstructured, incomplete, or written solely for human inference, the chatbot produces unreliable answers.
Customers receive incorrect information. The organization blames the AI. But the root cause is the content. If the customer experience suffers, the brand takes the heat — not the documentation team, not the AI vendor. Content quality is now a business risk in a way it has never been before.
Dominik Wever outlined a practical five-step roadmap for documentation teams looking to move beyond ad hoc AI usage toward something structured and sustainable.
1 Identify two to three high-impact use cases
Don't attempt to transform everything at once. Look for specific, bounded tasks where AI can deliver measurable return within four to eight weeks. Let multiple teams each identify one candidate use case, then compare what you learn across them.
2 Standardize before you automate
Assemble the context AI will need before building it into any workflow: up-to-date tone and voice guidelines, curated best-practice examples, a clear definition of what good looks like per content type. If your style guide is five years old, that's where the work starts.
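One way to make that shared context operational is to bundle it into a standard preamble that every AI request starts from, so ten writers no longer begin from ten different baselines. The sketch below is a minimal illustration; the field names and contents are placeholder assumptions:

```python
# Illustrative shared context for one content type. In practice this
# would come from the team's maintained style guide and example library.
CONTEXT = {
    "tone": "Direct, second person, present tense.",
    "example": "To reset a password, open Admin > Users and click Reset.",
    "definition_of_done": "Every task topic names the role, trigger, steps, and success condition.",
}

def build_preamble(content_type: str, context: dict = CONTEXT) -> str:
    """Compose the standard context block prepended to every prompt."""
    return (
        f"Content type: {content_type}\n"
        f"Tone: {context['tone']}\n"
        f"Good example: {context['example']}\n"
        f"Definition of done: {context['definition_of_done']}"
    )
```

The point is not the code but the sequencing: the preamble can only be written after the tone guidelines, examples, and definition of done exist.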
3 Define governance rules
Establish clear organizational policy: which AI tools are approved, what data may be submitted, who has oversight, and what review processes apply to AI-assisted output. Answer these questions now, not reactively after a data or quality incident.
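Governance rules are easiest to follow when they are encoded rather than buried in a policy document. This sketch maps approved tools to the most sensitive data class each may receive; the tool names and data classes are made-up examples:

```python
# Data classes ordered from least to most sensitive. Illustrative only.
DATA_CLASSES = ["public", "internal", "confidential"]

# Hypothetical policy: each approved tool's sensitivity ceiling.
POLICY = {
    "enterprise-assistant": "internal",  # org-managed, covered by contract
    "personal-chatbot": "public",        # personal subscription
}

def is_allowed(tool: str, data_class: str) -> bool:
    """True if `tool` is approved for data up to `data_class`."""
    ceiling = POLICY.get(tool)
    if ceiling is None:
        return False  # unapproved tools get nothing by default
    return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(ceiling)
```

The deny-by-default branch is the important design choice: an unlisted tool, such as a writer's personal subscription, is automatically out of scope until someone approves it.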
4 Find simple integration points first
Full CMS or CCMS integration is a long, expensive project. Start with something tractable: automating metadata generation, short content descriptions, or SEO titles. One measurable win builds the confidence needed to expand from there.
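To show why metadata generation is a tractable first integration point, here is a deterministic stand-in for the step a model call would perform: derive a title and short description from a markdown topic. The function name and heuristics are assumptions for illustration:

```python
def metadata_from_markdown(text: str, max_desc: int = 120) -> dict:
    """Derive a title and short description from a markdown topic.

    A deterministic placeholder for the bounded task an AI call would
    handle in a real pipeline; the point is the narrow, measurable scope.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    # First heading becomes the title; fall back if none exists.
    title = next(
        (ln.lstrip("# ").strip() for ln in lines if ln.startswith("#")),
        "Untitled",
    )
    # First non-heading line, truncated, becomes the description.
    body = next((ln for ln in lines if not ln.startswith("#")), "")
    return {"title": title, "description": body[:max_desc]}
```

Because input and output are this narrow, the result is easy to review and easy to measure, which is exactly what makes it a good first win.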
5 Measure, then iterate
Define success criteria before you start. Measure whether your selected use cases delivered the expected gains. Apply those learnings to the next cycle. AI-assisted workflows rarely arrive fully formed — they improve through iteration.
Based on the 2026 survey findings and the discussion from the webinar, the five steps above are the most important immediate actions for documentation teams.
"Bring your own AI" is where most documentation teams are today. It is an understandable response to a period of rapid change, and the enthusiasm for AI among technical writers is real and legitimate. But the organizations that will actually capture the productivity and quality benefits of AI are not the ones moving fastest — they are the ones building the most deliberate foundation.
That foundation starts with governed tooling, structured content, clear policy, and the organizational recognition that documentation quality is no longer just a professional standard. It is an AI readiness requirement.