Disclose-or-Don't: A Practical Guide to AI Transparency and Shadow AI

AI is already in your inboxes and documents—but should you tell people when you’ve used it? This guide shows when disclosure matters, how to avoid Shadow AI risks, and includes scripts you can borrow. Read on, then grab my free AI resources to put it into practice.

9/9/2025 · 4 min read

As artificial intelligence becomes increasingly woven into our daily workflows, a critical question emerges: when should you disclose that AI helped create your work? The answer isn’t always black and white, but having clear guidelines and ready scripts can help you navigate these situations with confidence and integrity—while also addressing the growing concern of Shadow AI in organizations.

To understand Shadow AI, it helps to revisit Shadow IT. For years, employees have gone around official IT systems by using their own tools—signing up for Dropbox accounts to share files, buying unapproved apps with credit cards, or downloading free software to work faster. These actions weren’t usually malicious; they were about convenience. But because IT never approved or monitored them, they created hidden risks: unsecured data, compliance violations, and vulnerabilities hackers could exploit. Shadow AI is the same idea, only now it’s generative AI tools showing up in workflows without oversight.

The Shadow AI Challenge

I’ve seen it firsthand. An employee quietly used ChatGPT to draft client emails without IT approval. A team pushed sensitive data through an external AI service because it “just worked faster.” A manager relied on an unvetted AI tool to generate performance reviews. None of this was sabotage—it was well-meaning efficiency that introduced legal, security, and reputational risks while leaving the organization blind to its real AI exposure.

The solution isn’t to ban AI.
It’s to bring it into the open.

When employees feel safe disclosing AI use, organizations can manage risks and capture benefits instead of driving innovation into the shadows.

The Transparency Imperative

AI disclosure builds trust, maintains professional integrity, sets realistic expectations, and helps organizations track their AI footprint. When stakeholders know AI contributed to an output, they can better judge its reliability, potential biases, and appropriate use cases.

But let’s be honest: disclosure fatigue is real. Not every AI spell-check or rewrite needs a footnote. The goal is to recognize when transparency adds value—and when it simply creates noise.

When You Should Always Disclose

Published or Public-Facing Work

Scenario: Writing articles, reports, or research meant for wide distribution.

Why disclose: Integrity and public trust demand it.

Script: “This article was written with assistance from AI tools for research and drafting. All facts have been independently verified, and the final conclusions represent our professional judgment.”

High-Stakes Decision Making

Scenario: Using AI to analyze financial data for investment recommendations or risk assessments.

Why disclose: Decision-makers need to know the foundation and limitations. Organizations need audit trails.

Script: “This analysis was generated using AI tools to process the dataset. While the AI identified these patterns and trends, I recommend having our financial analysts review the methodology and validate key findings before making final investment decisions. I’ve documented the AI tools used for compliance records.”

Client-Facing Content

Scenario: Drafting proposals, marketing materials, or client presentations.

Why disclose: Clients expect transparency—especially when paying for expertise.

Script: “We’ve leveraged AI tools to enhance our content creation process, allowing us to deliver higher-quality materials more efficiently. All AI-generated content has been reviewed and refined by our team to ensure it meets your requirements and our quality standards.”

When Disclosure Adds Value (and Reduces Shadow AI)

Meeting Documentation

Teams often use AI to summarize transcripts into notes. Disclosure reassures colleagues about accuracy.

Script: “These meeting notes were generated from our transcript using AI summarization tools that comply with our data handling policies. Please review and let me know if any key points were missed or misrepresented. The full transcript is available for reference.”

Data Analysis and Insights

AI can spot trends quickly, but colleagues trust it more when they know the boundaries.

Script: “I used approved AI tools to help process this dataset and identify initial trends. The findings are promising, but I’d recommend validation with our domain experts before drawing conclusions. All data remained within our approved security boundaries.”

Creative Brainstorming

AI-generated ideas work best when treated as a starting point.

Script: “I used AI brainstorming tools to provide a diverse set of initial concepts. These ideas still need our strategic thinking and market knowledge to become viable. I’ve confirmed the generated content is free of IP concerns.”

Building Trust Through Proactive Disclosure

The best way to fight Shadow AI is to create an environment where disclosure feels safe and useful—not punitive. When employees share openly, organizations gain visibility while individuals get support. Three framings you can adapt are below, followed by a small sketch that turns them into reusable templates.

  • Professional approach: “I used approved AI tools to assist with [task] to [benefit]. The output was reviewed and reflects my judgment, and I’ve documented the usage for our records.”

  • Collaborative approach: “This work combined AI assistance with my expertise. The AI supported [function], while I handled [your contribution]. I’m happy to share details about the process.”

  • Transparent approach: “For full transparency: I used [specific AI tool] to help with [task]. Here’s what that means for reliability, next steps, and alignment with our governance practices.”
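If your team reuses these framings often, it helps to standardize them rather than retype them. Here is a hypothetical sketch in Python that fills the bracketed slots programmatically; the template names, fields, and the disclosure helper are illustrative examples, not an established tool or API.

```python
# Hypothetical sketch: turning the disclosure framings above into
# reusable, fill-in-the-blank templates. Names and fields are
# made up for illustration.

TEMPLATES = {
    "professional": (
        "I used approved AI tools to assist with {task} to {benefit}. "
        "The output was reviewed and reflects my judgment, and I've "
        "documented the usage for our records."
    ),
    "transparent": (
        "For full transparency: I used {tool} to help with {task}. "
        "Here's what that means for reliability and next steps."
    ),
}

def disclosure(style: str, **fields: str) -> str:
    """Fill a disclosure template; raises KeyError if a slot is missing."""
    return TEMPLATES[style].format(**fields)

print(disclosure(
    "professional",
    task="summarizing meeting notes",
    benefit="save the team review time",
))
```

The same idea works just as well as a snippet library in your email client; the point is consistency, not automation for its own sake.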

Creating Organizational AI Visibility

Disclosure only works if it’s supported at the organizational level. Effective practices include:

  • Establishing AI registries so teams can log usage without red tape (a minimal logging sketch follows this list).

  • Running regular AI audits to map tool adoption.

  • Offering safe disclosure channels where employees won’t fear penalties.

  • Publishing clear guidelines about approved tools and purposes.
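To make the registry idea concrete, here is a minimal sketch, assuming a simple append-only JSON Lines log. The AIUsageEntry fields and the log_usage helper are assumptions for illustration, not a standard or a specific product.

```python
# Minimal sketch of an AI-usage registry: one JSON line per disclosure.
# Field names are illustrative; adapt them to your own governance policy.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUsageEntry:
    user: str                # who used the tool
    tool: str                # which AI tool (e.g., an approved chatbot)
    task: str                # what it was used for
    data_sensitivity: str    # e.g., "public", "internal", "confidential"
    reviewed_by_human: bool  # was the output checked before use?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_usage(entry: AIUsageEntry, path: str = "ai_registry.jsonl") -> None:
    """Append one entry as a JSON line so IT can audit adoption later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Example: an employee discloses an AI-assisted draft in seconds.
log_usage(AIUsageEntry(
    user="jdoe",
    tool="approved-chat-assistant",
    task="drafted client proposal outline",
    data_sensitivity="internal",
    reviewed_by_human=True,
))
```

A JSON Lines file is deliberately low-friction: one line per disclosure, easy for IT to audit with standard tools, and no new infrastructure to approve.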


When Disclosure May Not Be Necessary

  • Minor editing and grammar: Similar to spell-check.

  • Research and info gathering: Disclosure adds little value when you’re summarizing public information, though tracking usage may still help IT.

  • Personal productivity: Task lists or reminders don’t usually need disclosure unless sensitive data is involved.

The Path Forward

AI isn’t looming on the horizon—it’s already in our inboxes, documents, and slide decks. The real challenge is keeping trust in step with technology.

I’ve watched teams shift from whispering about their AI use to openly logging it. The result? Fewer risks, stronger collaboration, and better outcomes.

Shadow AI thrives in silence.
Transparency builds resilience.

The organizations that embrace disclosure and visibility won’t just adapt to AI in the workplace—they’ll set the pace.

Ready to Take the Next Step?

If you’re wondering how to start using AI responsibly in your own day-to-day work—or want practical tools to guide your team—check out my free resources:

👉 100+ Everyday AI Prompts Cheat Sheet – jumpstart your productivity with tested prompts.
👉 AI Integration Strategy (Individuals) – a simple plan to make AI part of your workflow safely.
👉 AI Daily Habit Tracker – build confidence by logging your wins one day at a time.

And if you’re ready for the full playbook, my ebook AI Basics for Everyday Work: What You Need to Know (And How to Get Started) dives deeper into how AI can save you time, reduce burnout, and make your workday smoother.


Note: Written with a little AI help—and plenty of human judgment.