Module 3, Lesson 2
TL;DR: The difference between a useless AI response and a useful one is usually the prompt, not the tool. A four-element prompt — role, context, format, and constraints — tells the AI what you'd tell a competent colleague taking over your task.
Here's the same task, prompted two ways. Watch what happens.
The task: Write a status update email to a client about a software migration project.
Vague prompt:
Write me an email about the project status.
Four-element prompt:
You are a project manager at a mid-size consulting firm. Write a status update email to the VP of IT at Meridian Healthcare about their EHR migration project. The email should be 150-200 words, use a professional but warm tone, and cover: (1) what was completed this week (database schema mapping — on schedule), (2) what's next (user acceptance testing starts Monday), and (3) one risk to flag (the vendor hasn't confirmed the test environment availability — I need a response by Friday). End with a specific ask: confirm receipt and the test environment date.
From the vague prompt: A generic 250-word email with placeholder content ("The project is progressing well... We anticipate completing the next phase soon... Please don't hesitate to reach out with any questions."). It reads like every status update template on the internet. You'd have to rewrite 80% of it.
From the four-element prompt: A 180-word email that names Meridian Healthcare, references the EHR migration specifically, reports the database schema mapping completion, flags the test environment risk with Friday as the deadline, and ends with the specific ask. You'd edit maybe 2-3 sentences — tighten the phrasing, add a detail about Monday's UAT kickoff time.
Same AI. Same task. Completely different results.
1. Role — Tell the AI what role to adopt. "You are a project manager at a consulting firm" gives it the vocabulary, tone, and perspective of that role. Without this, it defaults to a generic voice.
2. Context — Tell the AI what you're working on and for whom. "A status update email to the VP of IT at Meridian Healthcare about their EHR migration" gives it the specific situation. Without this, it writes about a generic project to a generic recipient.
3. Format — Tell the AI what the output should look like. "150-200 words, professional but warm tone, cover three specific points" prevents it from writing a 500-word essay when you wanted a quick email. Without this, length and structure are random.
4. Constraints — Tell the AI what MUST be in the output and what must NOT be. "End with a specific ask: confirm receipt and the test environment date" ensures the email does what you actually need it to do. Without this, the AI generates a plausible-sounding email that misses the point.
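The four elements can be treated as a reusable template. Here's a minimal sketch in Python — `build_prompt` is a hypothetical helper, not part of any real AI library, and the element text is drawn from the Meridian example above:

```python
def build_prompt(role, context, format_spec, constraints):
    """Assemble a four-element prompt from its parts (hypothetical helper)."""
    return "\n".join([
        f"You are {role}.",          # Role: who the AI should be
        f"Task: {context}",          # Context: what, and for whom
        f"Format: {format_spec}",    # Format: length, tone, structure
        f"Constraints: {constraints}",  # Constraints: what must (not) appear
    ])

prompt = build_prompt(
    role="a project manager at a mid-size consulting firm",
    context=("write a status update email to the VP of IT at Meridian "
             "Healthcare about their EHR migration project"),
    format_spec=("150-200 words, professional but warm tone, covering: "
                 "completed this week, what's next, one risk to flag"),
    constraints=("end with a specific ask: confirm receipt and the test "
                 "environment date"),
)
print(prompt)
```

Filling the same four slots for every task is the whole discipline — the template forces you to notice when one element is missing.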
The four-element prompt is what you'd say if a competent colleague walked into your office and said "What do you need?" You wouldn't say "Write me an email about the project." You'd say "I need a status update to the Meridian VP — keep it short, flag the test environment risk, and ask for a reply by Friday."
The AI needs the same briefing your colleague would.
Pause and think: Why does adding constraints to a prompt change the output so much? What is the AI doing differently when it has more context?
The answer: With a vague prompt, the AI draws from the broadest possible pattern — "status update emails in general." With constraints, it draws from a much narrower pattern — "short, warm project status emails that flag risks and end with specific asks." The narrower the pattern, the more useful the output. You're not giving the AI "more to work with" — you're narrowing the space of possible outputs so the most likely next word is closer to what you actually need.
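This narrowing effect can be illustrated with a deliberately tiny next-word model — a toy, nothing like a real language model, but it shows the mechanism: the more context you condition on, the fewer continuations remain. The three "emails" and all word sequences here are invented for illustration:

```python
from collections import Counter

# Toy corpus: three "emails" reduced to word sequences (invented example data).
emails = [
    "project update going well next steps soon".split(),
    "project status risk flagged ask reply friday".split(),
    "project status risk flagged ask confirm date".split(),
]

def next_word_candidates(context):
    """Count which words follow the given context sequence across the corpus."""
    counts = Counter()
    n = len(context)
    for email in emails:
        for i in range(len(email) - n):
            if email[i:i + n] == context:
                counts[email[i + n]] += 1
    return counts

# Broad context: several continuations are still possible.
print(next_word_candidates(["project"]))         # 'status' and 'update' both follow
# Narrower context: only one continuation remains.
print(next_word_candidates(["status", "risk"]))  # only 'flagged' follows
```

A vague prompt is the one-word context: many plausible continuations, most of them generic. A four-element prompt is the longer context: the distribution over next words collapses toward the email you actually need.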
Can you give the AI too many constraints? Yes. Here's what that looks like:
You are a senior project manager with 12 years of experience in healthcare IT consulting who specializes in EHR migrations for mid-market hospitals. Write a status update email to Jennifer Martinez, VP of IT at Meridian Healthcare, who prefers bullet points, dislikes jargon, reads emails on her phone during her commute at 7:15 AM, and needs information she can forward to her board with minimal editing...
This 200-word prompt will produce a decent email — but you spent more time writing the prompt than you'd spend writing the email yourself. The four-element framework gives you the sweet spot: enough specificity to get usable output, not so much that you've done the work already.
Rule of thumb: If you've specified HOW to write it rather than WHAT to include, you've crossed into doing the work yourself. The over-constrained Meridian prompt above is doing exactly that — dictating tone, sentence cadence, reading context, and audience preferences in such detail that you've essentially drafted the email's voice in the prompt. The four-element framework asks you to specify the requirements (role, context, format, constraints), not the execution.
Why does role matter so much? When you tell the AI "you are a project manager," you're activating a cluster of language patterns the AI learned from millions of project management documents. The vocabulary shifts ("deliverables," "milestones," "stakeholders"), the sentence structure changes (more direct, more action-oriented), and the assumed context narrows (the reader is a client or colleague, not a general audience).
This is the same reason AI-generated marketing copy sounds different from AI-generated legal text — each draws from a different region of the AI's training data. The role element is your way of selecting which region the AI draws from.
You now know the four-element framework. Before you build your own prompt, let's make sure you can identify the elements reliably — the next short check tests that skill.