Module 4, Lesson 2
TL;DR: Not everything in your work is safe to paste into a free AI tool. The "outside consultant" test: if you wouldn't hand this document to an outside consultant you just met, don't paste it into a free-tier chatbot. Classify your data into four sensitivity levels and match each to the right service tier.
Your colleague sends you a Slack message: "Hey, can you run this client contract through ChatGPT and get a summary? I need it for the meeting in 20 minutes."
What do you do?
If you said "sure" — this lesson is for you. If you hesitated — good instinct. Let's turn that instinct into a framework.
Here's the simplest way to think about AI data safety:
Would you hand this document to an outside consultant you just met at a conference?
When you paste information into a free-tier AI tool, you're essentially handing it to an outside party. Free tiers may use your inputs to improve their models. Even paid tiers have different data handling policies. The "outside consultant" test gives you a fast gut check before you paste. To make that check systematic, classify your data into one of four sensitivity levels:
Public — Information already available to anyone.
Internal — Not public, but not sensitive. Shared within your organization.
Confidential — Sensitive business information with real consequences if exposed.
Restricted — Legally protected or personally identifiable information.
Each level maps to a minimum service tier:

| Tier | What It Means | Example Tools | Safe For |
|---|---|---|---|
| Free chat | Your inputs may be used for model training. No data handling guarantees. | ChatGPT free, Claude free, Gemini free | Public and Internal data only |
| Paid with data controls | Inputs are not used for training, but check the defaults: ChatGPT Plus requires opting out in Settings > Data Controls (it's not the default); Claude Pro and Gemini Advanced default to no-training. Always verify the active setting. | ChatGPT Plus, Claude Pro, Gemini Advanced | Public, Internal, and Confidential data (check the specific tool's data policy AND verify the opt-out is enabled) |
| Enterprise with BAA (Business Associate Agreement) | Contractual data handling agreement. Audit trail. Compliance certifications. | Microsoft 365 Copilot, ChatGPT Enterprise, Claude for Enterprise | All data types, subject to your organization's AI policy |
The fourth option: "Do not use." Some data should never go into any AI tool — period. If your legal team hasn't approved a specific tool for specific data types, err on the side of not sharing.
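If it helps to see the matching rule as logic, here's a minimal sketch in Python. The level and tier names mirror the table above; everything else (the `Sensitivity` enum, the `choose_tier` function, the idea of an org-approved tier set) is illustrative, not a real policy engine.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """The four sensitivity levels, ordered least to most sensitive."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4

# Most sensitive level each tier can accept, per the table above
# (listed least-capable first).
TIER_CEILING = {
    "free chat": Sensitivity.INTERNAL,
    "paid with data controls": Sensitivity.CONFIDENTIAL,
    "enterprise with BAA": Sensitivity.RESTRICTED,
}

def choose_tier(level: Sensitivity, approved_tiers: set[str]) -> str | None:
    """Pick the least-capable approved tier that can handle this data.

    Returns None for the fourth option, "do not use": for example,
    Restricted data at an organization that has approved nothing
    beyond free chat.
    """
    for tier, ceiling in TIER_CEILING.items():
        if tier in approved_tiers and level <= ceiling:
            return tier
    return None

# The Slack scenario: a client contract is Confidential, and the only
# approved tool is a free-tier chatbot.
print(choose_tier(Sensitivity.CONFIDENTIAL, {"free chat"}))  # None: do not paste
```

Note the default outcome is None: if no approved tier clears the data's sensitivity level, the answer is "do not use," not "use the closest fit."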
Pause and think: Your company doesn't have an AI policy yet. A colleague sends you a client contract and asks you to "run it through ChatGPT for a summary." What would you do, and why?
A reasonable answer: "I can't put a client contract into ChatGPT free — it's confidential data and the free tier may use inputs for training. I could anonymize it first (remove the client name, specific dollar amounts, and identifying details) and use it for a structural summary. Or I could ask IT if we have access to an enterprise AI tool with data controls. Or I could summarize it myself — it's a 10-page contract, not a 200-page filing."
Notice: there's no single right answer. The skill is knowing WHY you can't just paste it, WHAT your options are, and HOW to make a responsible choice.
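What might "anonymize it first" look like in practice? Below is a rough Python sketch, not a compliance tool: the patterns it redacts (known client names, dollar amounts, emails, phone numbers) are assumptions about what a typical contract leaks, and a human still has to review the output before pasting it anywhere.

```python
import re

def redact(text: str, client_names: list[str]) -> str:
    """Crude first-pass anonymization before sharing text with an AI tool.

    Illustrative only: regexes catch obvious patterns, but names,
    addresses, and context clues slip through pattern matching,
    so always review the result by hand.
    """
    # Known client names -> generic placeholder
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    # Dollar amounts like $1,250,000 or $99.95 -> placeholder
    text = re.sub(r"\$\s?\d[\d,]*(?:\.\d{2})?", "[AMOUNT]", text)
    # Email addresses and phone numbers, two common identifiers
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    return text

sample = "Acme Corp will pay $1,250,000 by wire. Contact j.doe@acme.com or 555-867-5309."
print(redact(sample, ["Acme Corp"]))
# -> [CLIENT] will pay [AMOUNT] by wire. Contact [EMAIL] or [PHONE].
```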
Raj, an operations manager, pastes his company's entire vendor pricing spreadsheet into ChatGPT to ask for a cost analysis. He classified it as "Internal" — it's not client data, not employee PII. Just internal pricing.
But the spreadsheet includes negotiated rates with specific vendors. If those rates became public (or were ingested into a model that other companies' employees use), competitors could see what his company pays. Vendors could discover their rates are being compared. This is Confidential, not Internal — not because of regulatory requirements, but because disclosure has real business consequences.
The test: Would you email this spreadsheet to a stranger? No? Then it's not Internal.
You now have a framework for deciding what's safe to share. Before the full assessment, let's do a quick sort to make sure the four levels are clear — then we'll combine data sensitivity with tool selection.