How to Write Better Prompts for AI
Note: This blog post was enhanced with the help of AI to improve grammar and refine tone. All content and opinions are my own.
I’ve been using Claude and ChatGPT for code reviews, blog editing, and random DevOps questions for about two years now. And man, did I suck at prompting initially.
My first attempts were basically “make this better” or “fix my Kubernetes config.” The results? Generic advice that could’ve come from any Stack Overflow post. It was frustrating until I realized: the problem wasn’t the AI, it was me asking terrible questions.
Here’s what I’ve learned from countless failed prompts and a few wins.
What Actually Matters in Prompts
Forget the fancy frameworks. Here’s what I’ve found works in practice:
Be stupidly specific. Instead of “review my code,” I now write “review this Terraform module for security issues, focusing on IAM permissions and S3 bucket policies.” The difference is night and day.
Give context about yourself. I always mention I’m working in a homelab or that I prefer Kotlin over Java. The AI adjusts its suggestions accordingly instead of giving generic enterprise advice.
Tell it what format you want. “Give me a bulleted list” or “write this as a shell command” or “explain like I’m already familiar with Docker.” Otherwise you get essays when you wanted one-liners.
Real Examples from My Experience
Here’s a prompt that used to fail me:
“Help with my GitHub Actions”
And here’s what actually works:
“I’m running GitHub Actions in my homelab K3s cluster using self-hosted runners. This workflow builds a Hugo site and deploys to S3. The issue is [specific error]. I prefer minimal YAML and already have kubectl access.”
The first one gets you links to generic documentation. The second gets you a working solution.
When I Mess With Parameters
I rarely touch temperature or top_p anymore, but when I do:
- Low temperature (0.1-0.3) for debugging code or generating precise configs
- Higher temperature (0.7-0.8) when I want creative blog ideas or alternative approaches
Most of the time, default settings work fine if your prompt is decent.
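If you're calling a model through an API rather than a chat UI, the split above is easy to encode. Here's a minimal sketch of how I'd pick a temperature per task when building a chat-completion request payload. The model name and field layout are illustrative assumptions, not any particular provider's exact API:

```python
# Sketch: choose temperature based on task type when building a
# chat-completion request payload. Model name is a placeholder.

def build_request(prompt: str, task: str) -> dict:
    """Low temperature for precise work, higher for ideation."""
    temperature = 0.2 if task == "debug" else 0.8
    return {
        "model": "some-model",  # placeholder, not a real model ID
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

precise = build_request("Review this Terraform module for IAM issues.", "debug")
creative = build_request("Brainstorm blog post ideas about K3s.", "brainstorm")
print(precise["temperature"], creative["temperature"])  # 0.2 0.8
```

The point isn't the exact numbers; it's that the setting should follow from what you're asking for, not the other way around.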
What I’ve Learned the Hard Way
Examples are magic. If I want a specific output format, I show the AI an example of what good looks like. This is especially true for generating YAML or JSON.
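To make that concrete, here's a sketch of a few-shot prompt for YAML generation: show one example of the shape you want, then ask for the new thing. The sample ConfigMap and the wording are my own illustration, not a prescribed format:

```python
# Sketch: a "few-shot" prompt that shows the model one example of the
# desired output format before making the actual request. The YAML
# example is illustrative.

EXAMPLE_YAML = """\
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
"""

def few_shot_prompt(request: str) -> str:
    return (
        "Generate Kubernetes YAML. Match the style of this example exactly:\n\n"
        + EXAMPLE_YAML
        + "\nNow: " + request
    )

print(few_shot_prompt("a ConfigMap named cache-config with TTL_SECONDS: 300"))
```

One good example usually does more than three paragraphs of format instructions.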
Iterate on working prompts. When I find a prompt structure that works, I save it and reuse it. Why reinvent the wheel?
Don’t overthink it. My best results come from conversational prompts, not formal “prompt engineering” templates.
My Go-To Template
For technical stuff, I usually follow this pattern:
I'm [context about me] working on [specific problem].
Here's what I've tried: [attempts so far].
I need [specific outcome] in [format preference].
[Any constraints or preferences]
It’s not fancy, but it works way better than my old “please help” approach.
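Since I reuse this template constantly, I've found it handy to fill it in programmatically. Here's a small sketch; the field names are my own choices, so adjust to taste:

```python
# Sketch: fill in the prompt template from the section above.
# Field names are my own invention.

def build_prompt(context: str, problem: str, tried: str,
                 outcome: str, fmt: str, constraints: str = "") -> str:
    lines = [
        f"I'm {context} working on {problem}.",
        f"Here's what I've tried: {tried}.",
        f"I need {outcome} in {fmt}.",
    ]
    if constraints:
        lines.append(constraints)
    return "\n".join(lines)

print(build_prompt(
    context="a DevOps engineer with a homelab K3s cluster",
    problem="a failing GitHub Actions deploy to S3",
    tried="re-running with debug logging enabled",
    outcome="a corrected workflow file",
    fmt="minimal YAML",
    constraints="I already have kubectl access.",
))
```

Saving working prompts as little functions like this beats retyping them, and it forces you to fill in every field instead of skipping the context.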
The real trick isn’t learning prompt engineering theory — it’s just being clear about what you actually want. Treat the AI like a very smart but literal-minded colleague who needs good instructions to help you effectively.