Module 1, Lesson 2
TL;DR: AI generates text by predicting the most likely next word, not by retrieving facts from a database. This is why it can write grammatically perfect sentences that contain made-up information. Every AI output needs verification because "sounds right" and "is right" are completely different things.
Remember Sarah's vendor email? The AI confidently wrote "our meeting on March 14th" when the actual date was March 12th. It didn't look up the date and get it wrong — it never had the date at all. It predicted that a date would go there and generated one that sounded plausible.
That's not a bug. That's how the technology works. And understanding this one idea changes everything about how you use AI.
When you type a prompt, the AI doesn't search a database of answers. It does something much simpler and much stranger: it predicts the next word.
It starts with your prompt, then asks itself: "Based on all the text I was trained on, what word is most likely to come next?" It generates that word, then asks again: "Given everything so far, what word comes next?" Over and over, word by word, until it has a complete response.
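The loop described above can be sketched in a few lines of code. This toy "model" just counts which word follows which in a tiny made-up corpus (real LLMs use neural networks trained on billions of pages), but the generation loop — predict one word, append it, predict again — is the same idea:

```python
from collections import defaultdict

# Tiny invented training corpus, for illustration only.
corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the meeting is on march . "
).split()

# "Training": count which words follow which.
counts = defaultdict(lambda: defaultdict(int))
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def predict_next(word):
    """Pick the single most likely next word given the current word."""
    followers = counts[word]
    return max(followers, key=followers.get)

def generate(prompt_word, n=4):
    """Generate n more words, each predicted from the one before it."""
    out = [prompt_word]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("capital"))  # prints: capital of france is paris
```

Notice that `generate` never consults a calendar, a database, or the real world — it only asks "what usually comes next?" That is the entire mechanism, just scaled up enormously.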
This is why AI can:

- Write fluent, grammatically perfect prose in virtually any professional register
- Draft, summarize, and reformat text in seconds

And why AI can also:

- State a made-up date, number, or name with total confidence
- Invent plausible-sounding details and citations that were never checked against anything
Here's the practical implication: "sounds right" and "is right" are completely different things with AI.
A human colleague who writes you a confident email about a March 14th meeting probably checked their calendar. An AI that writes the same email did not — could not — check anything. It generated text that reads like someone who checked.
This is why the edit-before-use habit you practiced in Lesson 1.1 isn't optional. It's the core skill. Your domain expertise — knowing the actual date, the right tone for this client, the constraint about the billing module — is what makes AI output safe to use.
Pause and think: If AI predicts words rather than retrieving facts, why does it sometimes give correct answers? And why can't you tell the difference between a correct prediction and an incorrect one just by reading the output?
The answer: Correct predictions and incorrect predictions look identical on the page. The AI was trained on enormous amounts of text, so it's often right — but "often right" is not the same as "reliable." The only way to distinguish is to verify using something outside the AI: your knowledge, a source document, a colleague, or a search engine.
A common mistake: treating AI like Google. You ask "What were Q3 revenue figures for Acme Corp?" and the AI responds with a specific number and source. That number might be correct — or it might be fabricated with a plausible-sounding citation. Google retrieves existing web pages. AI generates new text. They are fundamentally different tools, even when the output looks similar.
If you need verifiable facts: Use a research tool like Perplexity (which searches the web and cites sources) or check the AI's claims against your own records. Don't assume the AI "looked it up."
The technical term for this prediction process is "autoregressive language modeling." The AI (called a large language model, or LLM) was trained on billions of pages of text from the internet, books, and other sources. During training, it learned patterns — not facts. It learned that "The quarterly report shows revenue of $" is very likely to be followed by a number, that "Best regards," usually ends a professional email, and that "The capital of France is" is almost always followed by "Paris."
When the patterns are strong ("The capital of France is" is followed by "Paris" in essentially all of its training data), the AI is reliably correct. When the patterns are ambiguous (what date was YOUR specific meeting?), the AI generates whatever is statistically likely — which may have nothing to do with reality.
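The strong-versus-ambiguous distinction can be made concrete with a toy sketch. The counts below are invented purely for illustration; the point is the shape of the distribution, not the numbers:

```python
from collections import Counter

# Hypothetical continuations seen in training data (invented counts).
training_continuations = {
    "The capital of France is": ["Paris"] * 99 + ["Lyon"],            # strong pattern
    "Our meeting is on March": ["1st", "3rd", "12th", "14th", "15th"],  # ambiguous
}

def next_word_distribution(prompt):
    """Fraction of training examples in which each word followed the prompt."""
    c = Counter(training_continuations[prompt])
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_word_distribution("The capital of France is"))
# {'Paris': 0.99, 'Lyon': 0.01} -- one answer dominates, so the model is reliably right

print(next_word_distribution("Our meeting is on March"))
# five dates at 0.2 each -- any pick "sounds plausible," and none is grounded in YOUR calendar
```

In the first case, picking the most likely word gives the correct answer almost every time. In the second, every option is equally "likely," so whichever date comes out reads just as confidently as a real one — which is exactly why Sarah's email said March 14th.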
This is why AI is remarkably good at tasks where the "right answer" follows strong patterns (professional writing, summarization, formatting) and unreliable at tasks where the "right answer" depends on specific facts it was never given (dates, proprietary data, recent events).
Now you know how AI works at a fundamental level — it predicts, it doesn't know. This explains every AI success and every AI failure you'll encounter. But before we move into the frameworks for using AI well, let's take a quick look at where you stand relative to everyone else.