Learning Platform prototype

Day 2 — Talking to AI Well

Time: ~60 min · Date: Sun Apr 26

Why this matters

The single biggest predictor of getting useful AI output is the quality of your input. People who think AI is "meh" usually have meh prompts. People who think it's magic usually figured out how to talk to it. This lesson is the difference.

The four things every good prompt has

Most good prompts have some combination of these. You don't need all four every time, but missing all four guarantees a vague answer.

1. Role / context

"I'm a real estate agent on the Wasatch Front…"

Tell it who you are and what you're doing. Claude calibrates the answer to the audience.

2. Task

"Write a follow-up email to a buyer who's gone quiet."

Be specific about what you want it to do. Verbs: write, summarize, rewrite, list, compare, explain, draft, critique.

3. Specifics / inputs

"The buyer's name is Sarah, we last spoke 8 days ago, we had a great first conversation but she hasn't responded to my last 2 emails."

The more concrete the details, the more concrete the output.

4. Format / constraints

"Keep it under 100 words. End with a low-pressure question. Tone: friendly, not desperate."

Length, tone, structure. If you don't ask, Claude picks defaults that may not match what you want.

Bad → good rewrite

Bad:

Write a follow-up email.

Good:

I'm a real estate agent. Write a follow-up email to a buyer named Sarah who went quiet after I sent her a list of homes 8 days ago. We had a great first conversation. Tone: friendly, not desperate. Under 100 words. End with a low-pressure question.

The bad version gets you a generic follow-up email template. The good one gets you something you can actually send.

Other useful moves

Show, don't just tell

If you want output in a specific style, give an example.

"Write three Instagram captions for a new listing. Match the style of these three captions I've used before: [paste captions]"

Iterate

The first answer is rarely the final answer. It's a starting point.

"Make it 30% shorter and remove the part about the schools." "More casual. Less formal." "Try again, but assume the buyer is a first-time investor."

Ask for options

When you don't know what you want yet:

"Give me three different versions: one warm, one direct, one playful."

Ask Claude to ask you

When you're not sure what to include:

"I want to write an open house follow-up email. What information do you need from me to write a good one?"

This flips the script — Claude asks you the right questions.

Use the magic phrase: "before you answer"

"Before you answer, what assumptions are you making?"

This catches situations where Claude is guessing. Sometimes those guesses are wrong, and you'd rather find out before you act on the answer.

Common pitfalls

  • Asking yes/no questions when you want analysis. "Is X a good idea?" → Claude says "yes/no, but…" Asking "what are the tradeoffs of X?" gets you the actual thinking.
  • Stacking too many tasks in one prompt. "Write the email, then summarize it, then tweet it, then translate to Spanish." Pick one. Iterate.
  • Trusting confident wrongness. Claude will sound certain even when guessing. Always verify facts that matter.
  • Over-engineering. Sometimes "rewrite this email more warmly" is the whole prompt you need.

Exercise (~30 min)

Pick one real task from your work this week. Something you'd otherwise do yourself in 15 minutes.

Examples:

  • Drafting an email
  • Writing a listing description
  • Summarizing a document a client sent you
  • Coming up with questions for a meeting

Now:

  1. Write a bad prompt — vague, no context, no constraints. Try it in Claude.ai. Note the result.
  2. Rewrite using the four-part framework (role + task + specifics + format). Try again. Compare.
  3. Iterate at least once on the second attempt to make it better.
  4. Take notes on what changed and why.

This is the pattern you'll use forever. The first prompt is rarely the keeper.

Recap

You should now be able to:

  • Name the four parts of a good prompt
  • Recognize a vague prompt and rewrite it
  • Iterate on a Claude response instead of accepting the first try
  • Catch a "confidently wrong" answer

Going deeper (optional, ~50 min)

Read (~30 min)

  • Anthropic's prompt engineering guide — the canonical reference, written by the people who make Claude. Skim it. The "Be clear and direct" and "Use examples" sections are the most relevant for you.

Watch (~20 min)


Next: Day 3 — Mac vs PC: the cheat sheet
