Kid mode
Last Sunday, I caught my 11-year-old son chatting with ChatGPT.
His initial question was: “how do i make a working chicken farm in minecraft”
That was it. No “act as a Minecraft expert.” No “I’m a beginner, please be patient.” No two paragraphs explaining what version he’s playing or what biome he’s in. Ten words.
He read the answer. Built the thing. Twenty minutes later, he came to show me an automated cooked-chicken machine.
I asked what led him to use ChatGPT, and I almost said: “you should tell it more about what you want.”
But I didn’t. Because for what he was doing, his prompt was perfect.
He’s eleven. Nobody told him AI is hard. He doesn’t think of it as a professional tool, or as artificial intelligence, or as anything. It’s just a thing that knows about Minecraft.
That’s the right way to use it. For him.
It is not the right way for me.
When my son asks ChatGPT how to build a chicken farm, the worst case is he builds the wrong farm and tries again.
When you ask it how to refactor a payment flow that moves real money, the worst case is much worse than that.
The skill most founders haven’t built isn’t prompting.
It’s noticing which kind of question they’re in.
Most of us are doing it backwards.
Two modes, one question to ask first.
Before you prompt AI, ask: are the stakes kid stakes or expert stakes?
Kid stakes: Exploring, learning, nothing on the line. A “good enough” answer is fine. Wrong answers cost you a retry.
Expert stakes: Output goes to a client, into production, in front of a customer, or into a decision you can’t undo. “Good enough” isn’t.
For kid stakes, prompt like a kid:
"How do I [thing]."
That’s it. Don’t add a role. Don’t add constraints. The model is faster than your typing, so let it work.
For expert stakes, have three lines:
- Role and task. You’re reviewing a client onboarding email for a US founder.
- Constraint. Under 80 words. No exclamation marks. No “just wanted to follow up.”
- Format. Output the email body only. No subject line.
Bonus tip: If you have one good example, paste it. That alone makes the output three times better.
What you shouldn’t do in either mode:
- Apologize (“sorry if this is a stupid question…”)
- Over-explain context the model doesn’t need
- Ask it to “do its best” (it already does)
- Hedge (“could you maybe possibly help me with…”)
The amateurs over-explain on the kid questions and under-explain on the expert ones. Backwards.
The fix is one beat of awareness before you type.
Which mode am I in?
My son built a chicken farm with ten words.
You’re not building a chicken farm.
What's one task this week where you've been over-engineering the prompt? Reply and tell me. I read every one.
That's all for this week. See you next Thursday.
— Michal
P.S. The chicken farm worked. Damn effective thing. I couldn’t have built it myself. My son probably knew that, so he asked the expert.