Prompt
Before we begin this module, take 60 seconds to answer these from memory. Don't look back at Modules 1 or 2.
- Name the four reliability zones on the AI reliability spectrum.
- Give one example of a task in the "not suitable" zone and explain why.
- What is the primary risk of using AI for a task in the "low reliability" zone?
- What does "predicts the next word" mean for AI accuracy?
Answers
- High, Moderate, Low, Not Suitable. High = e.g., structured text drafted from your own notes. Not Suitable = e.g., legal, medical, or financial tasks with regulatory consequences.
- Example: drafting a binding contract clause. Why: requires jurisdiction-specific legal knowledge, precedent awareness, and carries liability if wrong. AI output alone is never sufficient here.
- The primary risk is hallucinated facts that look authoritative. The AI will produce specific-sounding statistics, citations, or technical details that may be entirely fabricated. You can't tell by reading — you have to verify against external sources.
- AI doesn't retrieve facts from a database. It generates each word based on what's statistically likely to come next. This means it can write grammatically perfect, confident-sounding text that is factually wrong — because "sounds right" and "is right" are different things.
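The "predicts the next word" idea can be sketched with a toy word-frequency model. This is a deliberate oversimplification (real models use neural networks over tokens, not word counts), but it shows the key point: the model picks what is statistically likely, with no notion of whether it is true.

```python
from collections import Counter

# Toy "next word" predictor: count which word follows each word in a tiny corpus.
# This stands in for a real language model purely to illustrate the mechanism.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris ."
).split()

following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def predict_next(word):
    """Return the most statistically likely continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("is"))  # "paris" — the most frequent follower, not a verified fact
```

Note that `predict_next("is")` returns "paris" only because that pairing appeared most often, not because the model checked any fact. Shift the frequencies and the "confident" answer changes with them.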
Bridge
If you got all of these, your Module 2 retention is strong. If you missed any, that's exactly why we do this exercise — the act of trying to remember strengthens the connection. Now let's build on those foundations: you know WHICH tasks to use AI for. Next, you'll learn HOW to get AI to produce output that's actually useful.