A practical guide to writing better prompts
I just added a Prompts page to curate useful starting prompts and their purposes. I often find myself explaining to others how to get what they really want out of an interaction with a large language model (LLM). It's not as easy as "I want you to do the thing." It requires some understanding of the task at hand and of how language shapes outcomes.
Let's walk through a concrete example.
Step zero: explain the situation first
Before any prompt makes sense, the LLM must understand the situation. This is a step people often skip because it feels obvious to them. Think of this the same way you would think about delegating work to another human: you wouldn't ask for a polished response without first explaining what actually happened. For all examples below, assume the following incident context has already been provided to the LLM:
"Earlier today, my website was unavailable for approximately one hour. The outage occurred during a scheduled database maintenance window that overlapped with an unexpected spike in traffic. During this overlap, the site became unresponsive. No customer data was lost, and there was no security incident. The issue was resolved by rolling back the maintenance change and temporarily increasing database capacity. The site has been stable since service was restored."
This description is intentionally factual and bounded. It reflects what is known, without speculating about mechanisms or long-term risk. Everything that follows is a transformation of this explanation, not a replacement for it.
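If you work with a model through an API rather than a chat window, the same rule holds: the context has to travel with every request. Here's a minimal sketch of what that looks like. The message shape mirrors common chat-style APIs, and the `draft` helper is my own illustration, not any particular library's call.

```python
# Context first, task second: every prompt that follows transforms this
# context rather than replacing it. `draft` is a hypothetical helper
# illustrating the message shape common chat APIs expect.

INCIDENT_CONTEXT = (
    "Earlier today, my website was unavailable for approximately one hour. "
    "The outage occurred during a scheduled database maintenance window "
    "that overlapped with an unexpected spike in traffic. ..."  # full text above
)

def draft(task_prompt: str) -> list[dict]:
    """Build a message list that always leads with the incident context."""
    return [
        {"role": "user", "content": INCIDENT_CONTEXT},
        {"role": "user", "content": task_prompt},
    ]
```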
A deliberately vague prompt
Prompt
"Write a professional email explaining why my website went down for an hour."
Example output
Hello,
We experienced a temporary outage earlier today that caused our website to be unavailable for approximately one hour. The issue occurred during routine maintenance combined with higher-than-usual traffic. The site is now operational.
We apologize for any inconvenience this may have caused and appreciate your patience.
Sincerely,
[Your Name]
This output isn't wrong, but it's generic. It could be sent to almost anyone, which means it doesn't serve anyone particularly well. The model wasn't lazy or confused; it simply wasn't told what mattered.
Making intent explicit: customers and reassurance
Now we keep the same incident explanation and add intent.
Prompt
"Using the incident description above, write a short email to non-technical customers explaining a one-hour website outage. The goal is to reassure them that the issue is resolved and that no data was lost. Use a calm, accountable tone. Avoid technical jargon. Assume the reader will spend about one minute reading this."
Example output
Hello,
Earlier today, our website was unavailable for about an hour due to an issue that occurred during routine maintenance combined with higher-than-expected traffic. The issue has been resolved, and the site is now available again.
We want to reassure you that no customer data was lost and there was no security incident. We're sorry for the inconvenience and appreciate your patience.
Thank you,
[Your Name]
Notice what's not here:
- No technical speculation
- No promises about prevention
- No claims beyond the known facts
Reassurance comes from clarity and honesty, not from certainty that hasn't been earned.
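One way to make this explicitness habitual is to treat audience, goal, and tone as required parameters rather than details you hope to remember. Here's a minimal sketch of that idea; the builder and its field names are my own invention, not a standard pattern.

```python
def build_prompt(form: str, audience: str, goal: str, tone: str,
                 constraints: list[str]) -> str:
    """Compose a task prompt that states intent explicitly instead of
    leaving the model to guess audience, goal, and tone."""
    lines = [
        f"Using the incident description above, write {form}.",
        f"Audience: {audience}.",
        f"Goal: {goal}.",
        f"Tone: {tone}.",
    ]
    lines += [f"Constraint: {c}" for c in constraints]
    return "\n".join(lines)

customer_prompt = build_prompt(
    form="a short email",
    audience="non-technical customers",
    goal="reassure them that the issue is resolved and no data was lost",
    tone="calm and accountable",
    constraints=[
        "Avoid technical jargon.",
        "Assume the reader will spend about one minute reading this.",
    ],
)
```

The point isn't the code; it's that a prompt missing one of these fields simply can't be constructed.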
Same incident, technical audience
Now we change only the audience and purpose, not the facts.
Prompt
"Using the incident description above, write a concise internal summary for engineers. Include the known cause, resolution steps taken, and clearly identified follow-up considerations. Do not speculate beyond the provided information."
Example output
Incident Summary
The website experienced an approximately one-hour outage during a scheduled database maintenance window that coincided with an unexpected traffic spike. During this overlap, the site became unresponsive.
Resolution
The maintenance change was rolled back, and database capacity was temporarily increased. Service was restored following these actions and has remained stable.
Follow-up Considerations
- Review maintenance scheduling practices
- Revisit capacity planning assumptions during maintenance windows
- Consider additional safeguards for unexpected traffic increases
This output is intentionally constrained. It does not infer a root cause beyond what was stated, and it presents follow-ups as considerations rather than conclusions; the model generated them from the scenario itself, not from hidden knowledge. That's realistic technical communication.
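Because the facts live in the shared context and the intent lives in the parameters, switching audiences is a parameter change rather than a rewrite. Reusing the hypothetical builder from the previous section:

```python
# Same facts, different lens: only the intent parameters change.
engineer_prompt = build_prompt(
    form="a concise internal summary",
    audience="engineers",
    goal="document the known cause, resolution steps taken, and follow-up considerations",
    tone="factual and constrained",
    constraints=["Do not speculate beyond the provided information."],
)
```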
Same incident, executive stakeholders
Again: same explanation, different lens.
Prompt
"Using the incident description above, write a brief update for executive leadership. Focus on business impact, resolution status, and future risk. Keep it high-level and avoid implementation details."
Example output
Earlier today, the website experienced an approximately one-hour outage caused by a scheduled maintenance activity overlapping with higher-than-expected traffic. The issue was resolved within the hour, and service has been fully restored.
There was no data loss or security incident. The site has remained stable since service was restored.
This version avoids overstating certainty. It reports:
- What happened
- What was done
- What is observable now
That's the appropriate level of confidence given the facts. Note that this response does not mention future improvements or risk reduction. That information was not present in the original incident description, so a truthful response cannot assume it.
Why the model didn't "figure this out" on its own
A common reaction is: "Shouldn't the AI know which version I want?"
But humans don't work that way either. If you asked three colleagues to "write a professional email about the outage," you'd get three different answers, each shaped by their assumptions about audience and intent. A good prompt replaces assumptions with clarity.
Each chat is a fresh contract
It's important to be explicit about something that's easy to misunderstand when reading examples like these.
Each chat with an LLM should be treated as a fresh contract.
The model does not implicitly remember conclusions, interpretations, or follow-up ideas from earlier interactions unless they are part of the current conversation. If you start a new chat and provide only the incident description and a single prompt, the model has access to only that information.
In this article, each example output is based on:
- The incident description shown earlier, and
- The specific prompt for that section
Nothing else.
If an output mentions risk, follow-up actions, or process changes, that information must have been explicitly provided in the same chat. Otherwise, it's an assumption, and assumptions are exactly what good prompting is meant to eliminate.

A useful way to think about this is the same way you'd think about briefing a human collaborator: if you didn't say it out loud in this conversation, you shouldn't expect it to show up in the response. Even within a single chat, only what has been explicitly stated should be treated as shared knowledge.
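In API terms this is literal rather than metaphorical: a request contains exactly the messages you put in it, and nothing else. A sketch, reusing the hypothetical `draft` helper from earlier:

```python
# Each request is self-contained: the model sees only what is in it.
first_chat = draft("Write a short outage email for non-technical customers.")
# ...the model replies, perhaps proposing follow-up actions...

second_chat = draft("Write a concise internal summary for engineers.")
# `second_chat` holds only the incident context and this one prompt.
# Nothing from the first exchange carries over unless you explicitly
# append it to `second_chat` yourself.
```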
When you don't yet know what details matter
Sometimes you know what you want to do, but not enough to specify how. That's not a failure; it's an opportunity to use the model collaboratively.
Prompt
"I need to communicate about the outage described above, but I'm not yet sure which details matter most for this audience. Before writing anything, ask me the questions you need in order to do this well."
Example output
Before drafting the message, I need a bit more context:
- Who is the intended audience (customers, partners, internal teams, leadership)?
- Is the primary goal reassurance, transparency, or documentation?
- How formal should the tone be?
- Are there legal, regulatory, or public relations considerations?
- Should next steps or follow-up actions be mentioned, or is this purely informational?
At this point, many people stumble, not because the questions are hard, but because they answer them piecemeal or vaguely.
How to respond (single, structured answer)
A good response answers all questions clearly in one message:
The audience is non-technical customers. The primary goal is reassurance and transparency, not documentation. The tone should be calm, professional, and approachable. There are no legal or regulatory constraints to consider, and this message will be public-facing. This should be informational only; do not include speculative next steps or preventative claims.
This reply:
- Removes ambiguity
- Prevents the model from guessing
- Keeps the scope intentionally narrow
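Mechanically, the collaborative step is just the conversation growing: the model's questions and your single structured answer both join the message history before the final draft is requested. A sketch, again using the hypothetical `draft` helper; the placeholder strings stand in for the real exchange:

```python
conversation = draft(
    "I need to communicate about the outage described above, but I'm not yet "
    "sure which details matter most for this audience. Before writing "
    "anything, ask me the questions you need in order to do this well."
)

# The model replies with clarifying questions; keep them in the history.
model_questions = "Who is the intended audience? Is the goal reassurance...?"
conversation.append({"role": "assistant", "content": model_questions})

# Answer everything in one structured message; the next model turn
# can then produce the final draft.
conversation.append({"role": "user", "content": (
    "The audience is non-technical customers. The primary goal is "
    "reassurance and transparency, not documentation. The tone should be "
    "calm, professional, and approachable. There are no legal or regulatory "
    "constraints, and this message will be public-facing. Informational "
    "only; do not include speculative next steps or preventative claims."
)})
```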
Likely LLM response after clarification
Hello,
Earlier today, our website was unavailable for approximately one hour due to an issue that occurred during scheduled maintenance combined with higher-than-expected traffic. The issue has been resolved, and the site is now available again.
We want to reassure you that no customer data was lost and there was no security incident. We appreciate your patience and apologize for the inconvenience.
Thank you,
[Your Name]
This final output is better not because the model changed, but because the uncertainty was resolved collaboratively.
What this is really teaching
Prompting well isn't about verbosity or clever phrasing. It's about making your intent visible while respecting the limits of what is known.
A useful mental checklist:
- Who is this for?
- What do I want them to understand or feel?
- What facts are confirmed?
- What is still uncertain?
- What would be inappropriate to assume?
Answering those, even briefly, is what turns "do the thing" into something genuinely useful.
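If you want that checklist to be hard to skip, it can even be encoded; a small sketch, with field names of my own invention:

```python
from dataclasses import dataclass

@dataclass
class PromptBrief:
    """A hypothetical pre-flight check: every field must be filled in
    before a prompt gets written."""
    audience: str          # Who is this for?
    desired_effect: str    # What should they understand or feel?
    confirmed_facts: str   # What facts are confirmed?
    uncertainties: str     # What is still uncertain?
    off_limits: str        # What would be inappropriate to assume?

    def preamble(self) -> str:
        """Render the checklist as the opening lines of a prompt."""
        return "\n".join([
            f"Audience: {self.audience}.",
            f"Goal: {self.desired_effect}.",
            f"Confirmed facts: {self.confirmed_facts}.",
            f"Still uncertain: {self.uncertainties}.",
            f"Do not assume: {self.off_limits}.",
        ])
```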
The real takeaway
LLMs don't replace thinking. They amplify it. They are not omnipotent. A good prompt doesn't make the model smarter; it makes your understanding clearer to the model. And that only works if the information you provide is accurate, bounded, and honest about uncertainty. Once people internalize that, they stop treating LLMs like magic boxes and start using them as powerful tools for communication and reasoning. That's where the real value is.