TL;DR: Not all work tasks are equally suited for AI. The reliability spectrum has four zones — high, moderate, low, and not suitable — determined by three factors: what type of output you need, how you can verify it, and what happens if it's wrong.
Hook
You now know AI produces confident first drafts that can be wrong. But how wrong it can be depends enormously on what you ask it to do. Drafting a meeting agenda is very different from calculating a financial projection. The spectrum gives you a way to judge any task before you start.
The Four Zones
High Reliability (Green Zone)
- Output type: Structured text that follows clear patterns — emails from notes, meeting summaries, document formatting, outlines
- Verification: You can check the output against your source material in under 2 minutes
- Failure consequence: Minor — a formatting error or awkward phrasing, easily caught and fixed
- Example: Turning your bullet-point meeting notes into a formatted summary email. You have the notes, so you can verify every claim. The worst case is a phrase you'd reword.
Moderate Reliability (Yellow Zone)
- Output type: Creative or analytical text that requires judgment — marketing copy, report analysis, presentation narratives, proposals
- Verification: You can check it, but doing so requires careful reading and domain knowledge
- Failure consequence: Moderate — wrong emphasis, missed nuance, or tone that doesn't match your audience
- Example: Drafting a client proposal. The structure will be solid, but the AI may emphasize the wrong service features for this specific client. You need to know the client to catch that.
Low Reliability (Orange Zone)
- Output type: Content requiring specific facts, recent information, or specialized domain knowledge
- Verification: Requires cross-referencing external sources or consulting an expert
- Failure consequence: Significant — hallucinated facts, outdated information, or wrong technical details that could mislead decisions
- Example: Generating a competitive analysis with market statistics. The AI will produce numbers that look right — but some may be fabricated. Every statistic needs independent verification.
Not Suitable (Red Zone)
- Output type: Legal documents, medical advice, financial calculations, anything with regulatory consequences
- Verification: Requires professional certification or legal review — AI output alone is never sufficient
- Failure consequence: Severe — legal liability, health risk, financial loss, compliance violation
- Example: Drafting a binding contract clause. Even if the language sounds legally correct, AI has no understanding of your jurisdiction, applicable precedents, or the specific negotiation context. This requires a lawyer, not a chatbot.
Three Questions to Place Any Task
For any work task, ask:
- What type of output do I need? (Structured from my notes → high. Creative requiring judgment → moderate. Facts I can't verify myself → low. Legal/medical/financial → not suitable.)
- How would I verify the output? (Compare to my source → easy. Read with expertise → moderate. Cross-reference external sources → hard. Need a professional → not suitable.)
- What happens if it's wrong? (Reword a phrase → minor. Miss a client's priority → moderate. Quote a fake statistic → significant. Legal liability → severe.)
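For readers who think in code, the three-question triage can be sketched as a "riskiest answer wins" rule: each question maps to a zone, and the task lands in the most severe one. All names and category labels below are illustrative assumptions, not an official framework or API.

```python
# Illustrative sketch of the three-question triage.
# Category names and the zone-assignment rule are assumptions for
# illustration; adapt them to your own task vocabulary.

OUTPUT_ZONES = {          # Question 1: what type of output do I need?
    "structured_from_my_notes": "green",
    "creative_requiring_judgment": "yellow",
    "facts_i_cannot_verify": "orange",
    "legal_medical_financial": "red",
}

VERIFICATION_ZONES = {    # Question 2: how would I verify the output?
    "compare_to_my_source": "green",
    "read_with_expertise": "yellow",
    "cross_reference_external": "orange",
    "needs_a_professional": "red",
}

CONSEQUENCE_ZONES = {     # Question 3: what happens if it's wrong?
    "minor": "green",
    "moderate": "yellow",
    "significant": "orange",
    "severe": "red",
}

ZONE_ORDER = ["green", "yellow", "orange", "red"]  # least to most risky

def place_task(output_type: str, verification: str, consequence: str) -> str:
    """Return the overall zone: the riskiest of the three answers wins."""
    zones = [
        OUTPUT_ZONES[output_type],
        VERIFICATION_ZONES[verification],
        CONSEQUENCE_ZONES[consequence],
    ]
    return max(zones, key=ZONE_ORDER.index)

# A green-looking output (a simple list) whose content needs external
# verification and whose failure would be significant is still Orange.
print(place_task("structured_from_my_notes",
                 "cross_reference_external",
                 "significant"))  # → orange
```

The key design choice is taking the maximum, not an average: one severe answer is enough to move the whole task into a riskier zone, which is exactly the trap the non-example below illustrates.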
Pause and think: Recall a task you did last week. Where would it fall on this spectrum? What's the primary verification method you'd use?
Non-Example: The Task That Looks Green but Is Actually Orange
Marcus, a corporate trainer, wants to use AI to "generate a list of the top 10 pharmaceutical regulations for sales compliance training." This sounds like a straightforward task — a formatted list. Green zone, right?
Wrong. The output type is a list, but the content requires specific, current legal knowledge. If even one regulation is outdated or misnamed, Marcus is training sales reps on incorrect compliance requirements. That's not a formatting error — that's professional malpractice.
The task LOOKS like the Green Zone because the format is simple. But the reliability zone is determined by the content's verification requirements, not its format. This task is Orange Zone at best — every item needs independent verification against the actual regulatory database.
Tool Categories (Introduction)
Different zones favor different tools:
- Green/Yellow zones: General chatbots (ChatGPT, Claude, Gemini) work well — you're using the AI's language skills, not its "knowledge"
- Orange zone: Research engines (Perplexity) that cite sources are better — at least you can check the citations
- Red zone: No AI tool alone is sufficient. Use AI for initial research or drafting, then always involve a qualified professional for review
You'll learn the full tool selection framework in Module 4. For now, just notice: the right tool depends on the zone.
Bridge
You now have a framework for judging any work task. Next, you'll apply it to your own recurring tasks to build a personal AI Task Map — a practical guide to which of YOUR tasks deserve AI and which don't.