I've had several conversations lately that follow roughly the same arc:
"I tried using AI for X."
"It didn't really do what I wanted."
"It kind of worked, but not really."
And my first instinct (which I try not to say out loud immediately) is something like:
- How complex was the thing you were trying to solve?
- What did you actually tell it?
- What assumptions were in your head but not in the prompt?
- How much context did it have available?
Not because the model is perfect. It isn't. But because most of the time, the failure isn't randomness; it's missing structure. AI is the fastest executor most of us have ever had access to. And fast execution exposes unfinished thinking almost immediately.
"This Seems Like a Lot of Work"
Whenever I explain this in person, there's usually a look that crosses someone's face when I start talking about outcomes, constraints, users, risk, operating conditions, etc.
It's the look that says:
"You want me to do all that before I even start?"
Yes. But here's the thing - that's not extra work. That's the work that should be done anyway.
Before AI, you could sometimes get away with vague thinking because execution was slow. You'd have meetings. You'd iterate. You'd "discover requirements." You'd accidentally build three partial solutions before realizing what you actually meant.
Now? You can build something that looks complete in an afternoon. If your idea is half-formed, you will build a half-aligned system very quickly. AI didn't create the chaos. It just removed the buffer that used to hide it.
AI Is an Executor, Not a Product Manager
When you ask AI to do something, it will:
- Expand.
- Elaborate.
- Fill in blanks.
- Make reasonable guesses.
It will not:
- Decide which business outcome matters.
- Reconcile conflicting constraints.
- Know which risk would keep you up at night.
- Ask whether the problem itself is wrong.
That part is still yours. If you think your idea all the way through, you can build the whole thing surprisingly quickly. If you think halfway through it, you will rebuild it repeatedly - also quickly. Speed is not the problem; unfinished thinking is.
The Layers I Try to Think Through
I don't always write these down formally, but I usually walk through something like this in my head.
1. What actually changes if this works?
Not "we have a new dashboard." What changes?
- Does someone save time?
- Does an error rate drop?
- Does latency improve?
- Does a decision get made faster?
If I can't describe the measurable change, I'm probably not ready to automate it.
2. What are the constraints I'm pretending don't exist?
Time. Budget. Existing infrastructure. Compliance boundaries. Data access. Constraints shape architecture more than preferences ever will. If I don't surface them explicitly, they show up later as "surprises."
3. Who is this really for?
One person. One core job. The moment I start thinking "well it could also do X and Y and Z," I know I'm drifting. Generalization is expensive. AI will happily generate something broad and impressive. Broad and impressive is rarely what you actually need.
4. What's the smallest thing that proves this works?
Not a grand system. What's the thin slice?
- One real user.
- One real path.
- Real data.
- Real boundaries.
If I can't demo it in a few minutes, I probably scoped it too big.
5. What has to be true for this not to collapse?
This is where risk lives.
- Is the data cleaner than I assume?
- Is the dependency more stable than I assume?
- Is the user behavior more consistent than I assume?
AI can build on top of bad assumptions extremely efficiently. That doesn't make the assumptions better.
6. What happens when something fails?
This is the part most people skip.
- If a dependency times out, what happens?
- If the data is partial, what happens?
- If the model output is malformed, what happens?
If I don't think about this up front, I'll think about it later - probably at an inconvenient time.
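To make this concrete, here's a minimal sketch of what answering those three questions up front can look like. Everything in it is hypothetical (the `call_model` callable, the `summary` field, the timeout exception); the point is only that each failure path gets a boring, pre-decided answer before any prompt is written:

```python
import json
from typing import Callable, Optional


class UpstreamTimeout(Exception):
    """Raised by the (hypothetical) transport layer when a dependency times out."""


def parse_model_output(raw: str) -> Optional[dict]:
    """Return a validated dict, or None if the output is malformed."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # malformed output: the caller decides what to do, not the parser
    if not isinstance(data, dict) or "summary" not in data:
        return None  # 'summary' is an assumed required field for this example
    return data


def summarize(record: dict, call_model: Callable[[str], str]) -> str:
    """Each failure path has an explicit answer decided up front."""
    # Partial data: refuse early rather than letting the model guess.
    if not record.get("body"):
        return "[skipped: record has no body]"
    try:
        raw = call_model("Summarize this in one sentence:\n" + record["body"])
    except UpstreamTimeout:
        # Dependency timeout: degrade to a known-safe default instead of blocking.
        return "[unavailable: summarizer timed out]"
    parsed = parse_model_output(raw)
    if parsed is None:
        # Malformed output: surface the failure instead of passing junk downstream.
        return "[unavailable: summarizer returned malformed output]"
    return parsed["summary"]
```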
Modes Don't Just "Exist"
Another thing I see a lot is blended thinking. Someone asks for strategy, architecture, and implementation all at once. The output comes back polished and comprehensive, but shallow in all directions. AI does not automatically separate thinking modes; you have to create that separation. Sometimes I'll literally say:
"We are in strategy mode. Do not generate code."
"Switch to architecture mode. Define interfaces and data flow only."
"Use implementation mode. Generate the smallest working slice."
"Switch to review mode. Find edge cases and failure modes."
It feels slightly artificial at first, but it works. When modes are blended, you get something that looks smart but isn't anchored. When modes are separated, depth improves almost immediately.
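If you're driving a model through an API rather than a chat window, the same separation can be made structural. This is only an illustrative sketch (the `chat` callable and the mode wording are placeholders, not any real client library):

```python
# Illustrative only: keep modes separate by giving each one its own system prompt.
MODES = {
    "strategy": "We are in strategy mode. Discuss outcomes, constraints, and risks. Do not generate code.",
    "architecture": "Switch to architecture mode. Define interfaces and data flow only.",
    "implementation": "Use implementation mode. Generate the smallest working slice.",
    "review": "Switch to review mode. Find edge cases and failure modes.",
}


def ask(chat, mode: str, prompt: str) -> str:
    """Send one prompt under exactly one mode; `chat` is any callable taking (system, user)."""
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode!r}")
    return chat(MODES[mode], prompt)
```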
Fast Execution and Moving Targets
There's another pattern I've noticed, especially in business environments. Ideas evolve mid-stream. Requirements get added casually. The scope shifts slightly every time someone re-prompts. With traditional development, that drift happens slowly enough that people adjust gradually. With AI-assisted development, the loop is tight. If the target moves every time you prompt:
- The architecture shifts every time.
- Assumptions stack on top of each other.
- Context fragments.
- Quality degrades.
AI doesn't make moving targets worse; it just makes the consequences visible faster. If you think your idea all the way through, you can build it quickly. If you don't, you'll iterate your confusion at high speed.
If You Feel Behind, You're Probably Not
I've seen people assume there's some secret to "prompt engineering." In my experience, the people getting consistent results aren't using magical phrasing; they're clearer. They know:
- What they want to change.
- What constraints matter.
- What success looks like.
- What would make them stop.
That clarity is portable. It's learnable. And it has nothing to do with hype.
Where the Leverage Actually Is
AI doesn't remove the need for judgment; it amplifies it. If your thinking is vague, the output will be elaborate but misaligned. If your thinking is structured, constrained, and outcome-driven, AI becomes a force multiplier. The leverage isn't in typing faster or generating code faster; it's in thinking further ahead before you ask something to execute quickly.
And if you've been frustrated so far? That's not a sign you're bad at using AI. It's usually a sign that you're finally seeing how much unfinished thinking used to be hidden by slower execution.