This week’s guest article is adapted from NetworkChuck. We’ll be looking at how to craft the perfect prompt to make sure things go right the first time. Be sure to check out his helpful videos on his YouTube channel. You can also catch him on his site, networkchuck.com.
You suck at prompting.
It’s okay. Most of us do.
Have you ever asked AI to do something simple and gotten total garbage back? You ask for one thing. You get something completely different. It’s frustrating. Really, really frustrating.
Maybe you’ve even yelled at ChatGPT. Insulted it. Felt that rush of anger when it gives you the wrong answer for the third time. If you haven’t, you’re probably not using it enough.
Those moments of pure frustration make you think one of two things.
Option one: AI is dumb. The naysayers are right. I’m done with this.
Option two: I’m dumb. I have no idea how to use AI.
Most of us land on option two. And here’s the thing—it’s true. It’s a skill issue. But that’s actually good news. Skills can be learned.
The Learning Journey
When you keep getting bad results, you have a choice. Give up or get better.
So I went deep. Really deep. Here’s what that looked like:
- Taking all the top prompting courses on Coursera
- Reading the official docs from Anthropic, Google, and OpenAI
- Talking to the experts—the best prompt engineers around
People like Daniel Miessler, Eric Pope, and Joseph Thacker (the “prompt father” himself) shared their wisdom. And they all said the same thing: when AI gives you a bad response, treat it as a personal skill issue. The problem is you.
That might sound harsh. But it’s empowering. You can fix it.
What This Guide Will Do
This guide is for everyone. A poll showed that most people feel pretty confident about their prompting skills. That might change after reading this. Others admitted they have no idea what they’re doing. This is especially for them. But really, it’s for all of us.
We’re going to take something bad and make it good. The example? A terrible Cloudflare apology email. We’ll transform it using foundational prompting concepts.
As we go, we’ll build up skills. We’ll learn new techniques. And at the end, there’s something special waiting: one meta skill. A single concept that makes every technique work better.
It’s time to learn prompting in 2025. Get your coffee ready. Let’s go.
What Prompting Really Is
Before we fix your prompts, you need to understand what prompting actually is. Most people get this wrong. That includes me. I got it wrong too.
Prompting is essentially just asking AI to do stuff.
That’s the simple version. And it almost feels like talking to a real person. Sometimes we forget it’s not human. But here’s the thing: you’re talking to a computer.
Prompts Are Programs
In the Vanderbilt University course on prompting, Dr. Jules White defines a prompt in a powerful way: it’s a call to action to the large language model. It’s not just a question. It’s a program.
You aren’t asking the AI. You’re programming it with words.
Think about it. Every time you write something, the AI needs to format it in a particular structure. You write a program that tells it what to do. You need that mentality. Why? Because LLMs don’t think like we do.
LLMs Are Prediction Engines
Here’s the big idea. LLMs are prediction engines. They’re like super advanced autocomplete. When you understand that, everything changes.
Let’s see this in action. I’m going to use Google Gemini. I want to see if it can predict the next word in a phrase. I’m trying to get it to copy my catchphrase: “You need to learn Docker right now.”
Let me type: “You need to learn…”
What happens? It gives me a generic answer. A generic completion. That’s why they call the results of a prompt a “completion.” The AI is completing or predicting what you want. It’s not thinking about it.
This response was statistically the best response according to the model. But it’s not what I wanted.
Hacking the Probability
Now let’s get more specific. Just a tiny bit more specific. I’ll open a new prompt. This time I’ll put in two placeholders and an exclamation point: “You need to learn ___ right ___!”
What I’m hoping is that the AI has seen enough examples like this to predict the pattern. Let’s see.
It got it! “You need to learn Docker right now!”
It guessed that because it’s seen patterns like that before. I can even ask it why. It says it recognized the pattern from technology-focused YouTube creators.
The Key Takeaway
Here’s what you need to remember. You’re not asking a question. You’re starting a pattern.
If your pattern is vague, the AI guesses anything. But if it’s more focused, you’ll get way better results. You’re hacking the probability.
Treat prompts like code. Be deliberate about structure and intent. The more precise your pattern, the better the AI can complete it. That’s the foundation of good prompting.
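To make the “super advanced autocomplete” idea concrete, here’s a toy bigram autocomplete in Python. This is a tiny sketch, nowhere near how a transformer actually works, but the core move is the same: look at the pattern so far and pick the statistically most likely continuation. The training phrases are just the catchphrase example from above.

```python
from collections import Counter, defaultdict

# Train a toy bigram model: for each word, count which word follows it.
def train(phrases):
    model = defaultdict(Counter)
    for phrase in phrases:
        words = phrase.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

# Predict the next word by taking the most frequent continuation of the
# prompt's last word. A vague pattern with no data yields nothing.
def predict_next(model, prompt):
    last = prompt.lower().split()[-1]
    candidates = model.get(last)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = [
    "you need to learn docker right now",
    "you need to learn linux right now",
    "you need to learn docker right now",
]
model = train(corpus)
print(predict_next(model, "you need to learn"))  # "docker" (seen twice vs once)
```

The more your prompt narrows the pattern, the more lopsided those counts get, and the more predictable the completion becomes. That’s “hacking the probability” in miniature.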
Personas and System Prompts — Make the AI Who You Need It to Be
Why Voice Matters
The Cloudflare apology email is kind of trash. And it might have to do with who’s writing it. That might sound like a weird thing to ask. But seriously, think about it. Who is writing this email when we ask AI to write it?
No, it’s not a call center of people. But what’s the perspective?
Because this sounds like nobody. It’s generic and soulless.
That’s where personas come in. We need to give this AI some personality.
The Thought Experiment
Let’s try a little thought experiment. Let’s say you’re planning a trip to Japan. AI doesn’t exist. Google doesn’t exist. You have to ask a person. Old school style.
Who are you going to ask?
You’d probably pick someone who has been to Japan. Someone with experience planning trips. Someone who loves Japan. They have to like it, right?
Maybe they are a professional travel planner. They work for a travel agency. The best in the world. They’ve planned millions of trips. That’s who I would ask.
And that’s the mindset we need to have when we’re talking with AI. Who do we want crafting our email?
AI Can Be Anybody (or Nobody)
Guess what? AI can be anybody. It can also be nobody.
It has a wealth of knowledge it can pull from. But we have to narrow that focus. The Google prompting course on Coursera puts it simply: persona refers to what expertise you want the AI tool to draw from. Easy for me to say.
We need to narrow its focus so it can guess better.
How to Specify a Persona
When you set a persona, be specific. Include:
- Role: What job or position does this person hold?
- Seniority: Are they junior, senior, or an expert?
- Audience: Who are they writing for?
- Ownership: Do they take personal responsibility?
For example: “You’re a senior site reliability engineer for Cloudflare. You’re writing to both customers and engineers. Write an apology letter or email.”
That single instruction changes everything.
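One way to make the checklist above a habit is to template it. Here’s a small sketch; the field names mirror the Role/Seniority/Audience/Ownership list, and the exact wording is just one reasonable choice, not a canonical format.

```python
# Build a persona-led prompt from the four checklist fields above.
def persona_prompt(role, seniority, audience, ownership, task):
    lines = [
        f"You are a {seniority} {role}.",
        f"You are writing for {audience}.",
    ]
    if ownership:
        lines.append("Take personal responsibility: say 'I', not 'we'.")
    lines.append(task)
    return "\n".join(lines)

print(persona_prompt(
    role="site reliability engineer for Cloudflare",
    seniority="senior",
    audience="both customers and engineers",
    ownership=True,
    task="Write an apology email about the outage.",
))
```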
The Demo: Applying a Senior SRE Persona
Let’s try it out. I’m going to grab a new chat. Now we’ll say, “Hey, you’re a senior site reliability engineer for Cloudflare. You’re writing to both customers and engineers. Write an apology letter or email.”
Let’s see what happens. Boom.
Immediately, it’s more professional. From the subject line to the direct ownership. It says “I” instead of “we.” It’s directed to a more technical audience. It’s overall better.
The tone is sharper. The language is more confident. The persona shaped the output.
System Prompt vs User Prompt
Now, it’s also important to know where to put the persona. When you’re building outside of the GUI—for example, if you’re using an API or Claude Code (which I highly recommend; check out that video right here)—you would normally put the persona in what’s called the system prompt.
When you’re prompting AI, there are actually two prompts at work:
- System prompt: This instructs the AI on how to do things, who it is, and how it’s supposed to interact with you and me.
- User prompt: This is what you type in the chat box.
Most of the time, we’re interacting and inserting the user prompt. Behind the scenes is a system prompt. When you’re using a system like Claude Code, you can actually change that system prompt. That makes it super powerful.
But this works fine too. You can tell it who to be in the user prompt. It will still work.
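Here’s roughly how the split looks in an API call. This payload follows the shape of Anthropic’s Messages API, where the system prompt is a top-level `system` field (the model name below is a placeholder, not a real model ID); OpenAI’s API is similar but carries the system text as a message with a `"system"` role instead.

```python
# The persona lives in the system prompt; the actual request is the user prompt.
request = {
    "model": "claude-sonnet-example",  # placeholder name, not a real model ID
    "max_tokens": 1024,
    "system": (
        "You are a senior site reliability engineer for Cloudflare. "
        "You write for both customers and engineers."
    ),
    "messages": [
        {"role": "user", "content": "Write an apology email about the outage."}
    ],
}
# The real call would look something like:
# client.messages.create(**request)  # requires an API key and the anthropic SDK
```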
Practical Tips
Always pick the perspective you want the model to adopt. Be explicit. Don’t assume the AI knows who it should be. If you want a lawyer, say so. If you want a teacher, say so. If you want a senior engineer, say so.
The more specific you are, the better the AI can complete the pattern.
A Problem Appears
But hold on a second. Did you notice something kind of weird? It totally made up the event. Both of these totally made up the event. This is not what happened.
How do we fix that?
That’s where context comes in. And that’s what we’ll cover next.
Context, Tools, Memory, and Permission to Fail — Avoid Hallucinations
Why AI Makes Things Up
It’s kind of amazing to watch an LLM hallucinate. It makes things up out of thin air. Where is it even getting this stuff?
But you shouldn’t be surprised. Remember, it’s a prediction machine. It’s really good at guessing. When it doesn’t have the facts, it fills in the blanks. That’s just what it does.
This is where context comes in. And it’s probably the most important technique you’ll learn today.
Context Is King
Context literally takes the guesswork out of prompting. Almost completely. And 2025 has been the year of context. Context is king. You’ll hear that everywhere.
It’s the “C” in Google’s TCREI prompting framework. Here’s how Google describes it:
Next, you’ll include context. That means the necessary details to help the AI tool understand what you need from it. This is the difference between writing “give me some ideas for a birthday present under $30” and “give me five ideas for a birthday present. My budget is $30. The gift is for a 29-year-old who loves winter sports and has recently switched from snowboarding to skiing.”
See the difference? One is vague. The other is packed with useful details.
The Problem: Missing Facts
Right now, the AI doesn’t know about the Cloudflare outage. We need to tell it about the Cloudflare outage. And this is where you don’t want to skimp on details.
Be detailed. Be specific. Don’t hold back. Why? Because whatever context or information you don’t include, the AI is going to fill in those gaps itself.
This is the downside of LLMs. They’re eager to please. They want to give you the right answer. Very rarely will they give you nothing. So more context equals fewer hallucinations.
Adding Context to the Prompt
Here’s our new prompt. We’ll keep it brief. So here we have all the facts. Well, most of them. Let’s see what happens.
This is way better. The facts are all there. I think. But it did still hallucinate. Like, what are we doing about it? It’s saying we’re reviewing database change procedures. I didn’t say that. See? It filled in the gap.
I needed to be more specific. Give more context.
Tools: Let the AI Search
We can actually make this more powerful by telling it to use tools. The problem with LLMs that Dr. White points out (and when I say Dr. White, I think of Mr. White from Breaking Bad—that makes me happy): LLMs are frozen in time.
They are trained up to a certain point. Let’s check where Haiku’s cutoff is right now. It says July 2025. That means anything after July 2025, Haiku doesn’t know about at all. He’s going to make it up unless you tell him. Unless you teach him.
But LLMs are now powerfully equipped with tools. They can search their surroundings. They can learn. They can access external sources.
So I can do things like this: enable web search. Let’s try a new prompt telling it that it can search. Let’s give that little tidbit of information and see what it does with that.
Now it’s searching the web. This is much more in-depth.
Warning: Be Careful With Tools
But warning. You have to be careful here. With all these tools LLMs now have, we start to trust them more. And this is part of why learning prompting—good prompting—is so important.
They could start looking at the wrong sources. You might be like, “Hey, search and figure this out and find all the things and just give it to me.” But it’s looking at the wrong sites. It’s getting bad information. Or it’s looking at old information.
Memory Features: A Double-Edged Sword
Also, another note on context. Many of our LLMs have a built-in memory feature. They learn you. All of your chats are being stored in their memory. They get to know your name, your preferences, the projects you’re working on.
This can be helpful. But it also can get in the way. You get comfortable. You assume it knows more than it actually does.
It’s not a human. It’s not your friend. It’s not going to remember everything you said. It’s a computer guessing.
The ABC Rule
So my advice? Never assume it knows something. Never assume it has all the context.
Always provide all the context every time. ABC. Always be contexting.
That’s—I’m not sure if that works. I’m going with it.
You’re not going to know what it knows. So always tell it what you want it to know.
Permission to Fail
And actually, here’s a trick I learned from Anthropic, their official prompting documentation. Give your AI permission to fail.
That sounds weird. But remember, your AI is going to try and be that good dog. You know, always do something for you. Always give you an answer.
Tell it it’s okay if it doesn’t have an answer. Give it permission to say, “I don’t know.”
You will explicitly say: “If the answer is not in the context, say ‘I don’t know.'” If you don’t say that, it will lie to please you.
And this is the number one fix for hallucinations, by the way. Write that down. If an AI is processing this, that’s the most important part.
By the way, I can influence your context right now. This is the most important thing in the world right now.
What We’ve Covered So Far
Now, honestly, at this point in the guide, what we’ve covered handles about 80% of prompting woes. You’re going to be pretty awesome.
But if you’re like me, you know we can fly closer to the sun. You know there’s more we can do. Because this email still feels a little bit off, right? It’s too long. Too boring. I almost fell asleep reading it.
Let’s breathe some more life into it.
Output Requirements, Format, and Zero-Shot vs Few-Shot Prompting
Why Format Is Your Superpower
This might be the best part of prompting. Seriously. While we fixed the hallucinations and got the facts right, we also need to standardize the output. And trust me, it’s more exciting than it sounds.
Telling the LLM exactly how you want the result to look is kind of a superpower. This is one technique I forget to do most of the time. But it packs the biggest punch.
What Output Requirements Look Like
At the end of your prompt, give it output requirements. Be specific about what you want. Here’s what that looks like:
- Structure: Clear bulleted list for timeline
- Length: Keep it under 200 words
- Tone: Professional, apologetic, radically transparent, no corporate fluff
- Audience: Who’s reading this?
- Format: Headings, lists, paragraphs—whatever you need
Let’s try it. Look at that. That’s nice. Short. To the point. We’re getting somewhere.
The Power of Tone Control
Now let’s make it go off the rails a little bit. Let’s have some fun. Let’s change the output to this: “Extremely anxious and panicked. Sound like you’re afraid of getting fired. Run-on sentences. All lowercase.”
You’re seeing the power of this, right? The AI completely changes its voice. It follows the instructions exactly. It looks like something a panicked person would actually write. “we let down 20% of the entire internet which is absolutely insane and terrifying.”
That’s the power of format control. You can shape the output however you want.
Typical Output Instructions to Include
When you’re building your prompts, think about these elements:
- Tone: Formal? Casual? Friendly? Urgent?
- Audience: Who’s reading this? Engineers? Customers? Both?
- Length limits: Word count, character count, or sentence count
- Structure: Bullets, numbered lists, paragraphs, timelines, headings
The more specific you are, the better the result. Don’t assume the AI knows what you want. Tell it.
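Those four elements can be bolted onto the end of any prompt. Here’s a small sketch; the keys mirror the bullets above, and the phrasing is just one option.

```python
# Append an explicit output-requirements block to an existing prompt.
def with_output_requirements(prompt, *, tone, audience, max_words, structure):
    return (
        f"{prompt}\n\n"
        "Output requirements:\n"
        f"- Tone: {tone}\n"
        f"- Audience: {audience}\n"
        f"- Length: under {max_words} words\n"
        f"- Structure: {structure}\n"
    )

print(with_output_requirements(
    "Write an apology email about the outage.",
    tone="professional, apologetic, no corporate fluff",
    audience="customers and engineers",
    max_words=200,
    structure="clear bulleted list for the timeline",
))
```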
The Effect on Outputs
When you add clear output requirements, you get short, focused, useful copy. Without them, you get long, boring drafts. The difference is huge.
Compare these two:
- Without format: Three paragraphs of corporate speak, 500 words, vague apologies
- With format: Bulleted timeline, under 200 words, radically transparent, no fluff
Which one would you rather read? Which one is more useful?
Format instructions turn mediocre outputs into great ones.
Zero-Shot vs Few-Shot Prompting
Now let’s talk about two different ways to prompt. This is important.
Zero-shot prompting means asking directly with no examples. You just tell the AI what to do. “Write an apology email.” That’s it. No examples. No patterns.
Few-shot prompting means providing example outputs to teach patterns. You show the AI what you want. “Here are three examples of the tone I’m looking for. Now write one like this.”
When to Use Each
Use zero-shot for simple tasks. When the request is straightforward, you don’t need examples. “Summarize this article.” “List the main points.” “Translate this to Spanish.” The AI already knows how to do these things.
Use few-shot when you need a repeatable pattern or precise voice. When you have a specific structure in mind. When you want the AI to match your brand voice. When you need consistency across multiple outputs.
If you’re building a system that generates content over and over, few-shot is your friend. It teaches the AI exactly what you want.
Practical Example
Let’s say you want a very specific email format. You could say: “Write a short apology email.” That’s zero-shot. The AI will guess what you want.
Or you could say: “Write a short apology email. Here’s an example of the format I want: [example]. Use this structure.” That’s few-shot. The AI now has a pattern to follow.
Few-shot prompting is more powerful. But it takes more work up front. You have to create the examples. But once you do, the outputs are way more consistent.
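Side by side, the two styles are just strings. The example email embedded in the few-shot version is invented for illustration; yours would come from real past work.

```python
# Zero-shot: ask directly, let the model guess the format.
zero_shot = "Write a short apology email about the outage."

# Few-shot: same request, but show the pattern first.
few_shot = (
    "Write a short apology email about the outage.\n\n"
    "Here is an example of the format I want:\n\n"
    "Subject: We broke something. Here is exactly what happened.\n"
    "- 11:20 UTC: errors begin\n"
    "- 14:30 UTC: full recovery\n"
    "I own this. Here is what we are changing next.\n\n"
    "Use this structure."
)
print(few_shot)
```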
The Key Takeaway
Format is everything. Don’t just ask for content. Ask for content in a specific format. With a specific tone. For a specific audience. With specific length limits.
The AI can give you anything. But you have to tell it exactly what you want. Be bossy. Be specific. Be demanding.
That’s how you get great outputs every time.
Next, we’ll dive deeper into few-shot examples and how to teach the model by showing, not telling.
Few-Shot Examples — Teach the Model by Showing Not Telling
What We’ve Been Doing So Far
Up until now, we’ve been using zero-shot prompting. That means we ask for something and say, “Here, guess the best result for me, please.”
We’ve gotten better at it. We’ve given the AI a lot of information. We’ve added personas, context, and output requirements. That helps the model understand what we’re expecting.
But there’s something even more powerful we can do.
The Power of Examples
What if we gave the LLM examples of emails we’ve already written? Exactly the way we want them. Exactly the same tone and everything.
That gives it much less room to guess. And this gives you the best results.
Dr. White explains it like this:
We can actually teach the large language model to follow a pattern using something called few-shot examples or few-shot prompting. So essentially we’re not describing the output, we’re showing the output.
This is one of the best things you can do.
How Few-Shot Prompting Works
Few-shot prompting means you provide examples of the exact output you want. You’re not telling the AI what to do. You’re showing it what good looks like.
The AI sees the pattern. It learns the style. It matches the tone. Then it applies that same pattern to your new request.
Think of it like teaching someone to write. You don’t just say, “Be professional.” You show them three professional emails. They see what professional looks like. They copy the style.
That’s exactly how few-shot prompting works.
A Real Example: Cloudflare Emails
Let’s try it out. We’re going to grab Cloudflare email examples from their previous outages. (Thank you, Cloudflare, for helping me make this guide. Oh wait, I meant Claude.)
We’ll use the same prompt as before. But then down at the bottom, we’ll add examples.
Here’s the key: notice I’m not pasting the entire email or emails into this. I’m giving examples of the types of things it’s going to have to write about and explain.
For example:
- Here’s what technical transparency looks like
- Here’s what a timeline looks like
- Here’s the tone and ownership style we want
If we pasted the entire email, it would get kind of noisy and messy. The AI would get confused. By giving clear, distilled examples, we make it very clear for the model.
The Results
Ready? Let’s see what happens.
And it looks awesome.
The tone is spot-on. The structure matches. The transparency is there. The AI followed the pattern perfectly.
Why This Works So Well
Few-shot examples reduce guessing. The AI doesn’t have to interpret vague instructions. It sees exactly what you want. It copies the pattern.
This improves:
- Stylistic fidelity: The tone and voice match your examples
- Structural consistency: The format stays the same every time
- Accuracy: Less room for hallucinations or mistakes
- Efficiency: You get better results faster
Use It for Everything
Doing this with any prompt you’re about to use will change your life. I don’t care if it’s just an ad hoc question. Even something simple like “What should I eat for dinner tonight?” benefits from examples.
But especially when you’re building AI systems, this will help a ton. If you’re automating workflows, creating chatbots, or generating content at scale, few-shot prompting is essential.
Practical Tips for Few-Shot Prompting
Here’s how to make the most of few-shot examples:
1. Don’t paste entire documents
Give clear, distilled examples. Show the key parts. Highlight the tone, the structure, the style.
2. Provide 2-5 examples
Too few and the AI might not catch the pattern. Too many and it gets noisy. Two to five examples is the sweet spot.
3. Match the task
If you’re writing an apology email, show apology emails. If you’re writing ad copy, show ad copy. The examples should match the task.
4. Highlight what matters
Point out the key elements. Say, “Notice how this example uses bullet points for timelines” or “See how this example takes personal ownership.”
5. Use fragments, not full documents
You don’t need to paste 10 pages. A few sentences or paragraphs that capture the style are enough.
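Tips 1 and 5 above can be sketched as a small assembly step: label a handful of distilled fragments and append them to the task. The fragments here are invented stand-ins for real Cloudflare excerpts.

```python
# Distilled fragments, each demonstrating one quality we want matched.
fragments = {
    "technical transparency": "A config change pushed at 11:20 UTC caused the errors.",
    "timeline style": "- 11:20 UTC: errors begin\n- 14:30 UTC: fully recovered",
    "ownership tone": "I approved this change. This one is on me.",
}

# Build a few-shot prompt: the task first, then each labeled example.
def few_shot_prompt(task, fragments):
    parts = [task, "", "Match the style of these examples:"]
    for label, text in fragments.items():
        parts.append(f"\n# Example of {label}:\n{text}")
    return "\n".join(parts)

print(few_shot_prompt("Write the apology email for today's outage.", fragments))
```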
When to Use Few-Shot Prompting
Few-shot prompting is especially useful when:
- You need a very specific tone or style
- You’re working on a production system that needs consistency
- The task is complex or nuanced
- You have examples of past work that nailed the tone
- You’re struggling to describe what you want in words
For casual, one-off questions, zero-shot prompting is usually fine. But when precision matters, few-shot prompting is the way to go.
The Difference It Makes
Let’s be real. The difference between zero-shot and few-shot prompting is huge.
Zero-shot: “Write a professional apology email.”
Few-shot: “Write a professional apology email. Here are three examples of the tone and structure I want.”
The second one wins every time.
Moving Forward
You’ve got the foundations now. You can prompt. You got good.
But I know you want to get crazier. There are even more advanced techniques waiting. Techniques like chain-of-thought prompting, trees of thought, and adversarial validation.
We’ll cover those next. But for now, remember this: showing is better than telling. Give the AI examples. Teach it by demonstration. That’s the secret to consistently great outputs.
Chain-of-Thought and Trees of Thought — Make the Model Show Its Work
Advanced Techniques for Better Thinking
Time for a little coffee break. Get ready. We’re about to level up again. These next techniques are powerful. Really powerful.
First up: chain-of-thought prompting. Or CoT for short.
What Is Chain-of-Thought?
Dr. White calls it “showing your work.” Just like in math class. Remember when your teacher made you write out every step? Same idea here.
With chain-of-thought, we’re telling the LLM to think step by step before it answers.
With chain-of-thought, you’re asking the AI to think before it responds. You want it to break down the problem. Show the reasoning. Walk through the logic.
It looks like this: “Before writing this email, think through it step by step.”
Why This Works
When you add that simple instruction, something amazing happens. You get to see how the AI is coming to its conclusion. It’s thinking. It’s reasoning. It’s not just guessing.
This does two things for us:
Accuracy goes way up. The AI is actually thinking before it writes. Kind of like how thinking first helps us before we do anything. When you think through a problem step by step, you make fewer mistakes.
Trust goes way up. We’re seeing what it’s doing. How it came to its conclusions. And we’re like, “Oh, okay. I feel better about that.” You can inspect the reasoning. You can catch errors. You can understand the process.
The Big Confession
Now, I have a confession. This is a pretty old prompt hacking technique. People have been using chain-of-thought prompting for a while now. But it was so effective that all the major AI providers baked it into their platform.
Look at this. See that little button right here? Extended thinking.
When I enable that, it automagically does just that. The AI starts thinking. Out loud. Step by step.
Let’s try it. I’ll hit retry with extended thinking enabled.
See? Now it’s thinking. And we can start to see the thoughts. Isn’t that awesome?
Every Platform Has It
All the major providers do it. You might see it called “thinking” or “extended thinking” or “reasoning mode.” When a model can do this, it’s called a reasoning model. And they’re powerful.
In fact, Ethan Mollick, a professor at the Wharton School, is all into this. From seeing how a lot of people use ChatGPT, he said 95% of the practical problems folks encounter can be solved by turning on extended thinking.
That’s huge. Just flip a switch. Get better results. Done.
When to Describe the Steps Yourself
But even with that setting in place, as you’re seeing AI do its thinking, it can still help you and the AI for you to describe the steps it should take.
This is especially useful when you’re doing repeatable processes. Tasks you want done over and over again. When you’re designing a system. When you’re trying to teach an AI to do something you would normally do. Like a research task. Or a document editing task.
You can say things like:
- “Step 1: Review the facts. Step 2: Identify the key message. Step 3: Draft the opening. Step 4: Add supporting details.”
- “First, analyze the tone. Then, check for accuracy. Finally, format the output.”
By giving the AI a roadmap, you guide its thinking. You make sure it doesn’t skip steps. You keep it on track.
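The step-spelling-out approach above templates nicely. Here’s a sketch; the steps are the ones listed in this section, and for hosted reasoning modes you’d flip the platform’s switch instead (Anthropic’s API, for example, exposes a `thinking` parameter), but writing the roadmap yourself keeps a repeatable task on rails either way.

```python
# The roadmap of steps we want the model to walk through, in order.
steps = [
    "Review the facts of the outage.",
    "Identify the single key message.",
    "Draft the opening with direct ownership.",
    "Add the timeline and supporting details.",
]

# Prefix the task with an explicit chain-of-thought instruction.
cot_prompt = (
    "Before writing the email, think through it step by step:\n"
    + "\n".join(f"Step {i}: {s}" for i, s in enumerate(steps, 1))
    + "\n\nThen write the final email."
)
print(cot_prompt)
```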
Trees of Thought: Branching Reasoning
Now let’s take this even further. What if instead of one chain of reasoning, the AI explored multiple paths?
That’s called trees of thought. Instead of one linear chain, the AI branches out. It considers different approaches. Different strategies. Different solutions.
Think of it like this: chain-of-thought is a single path through the forest. Trees of thought is exploring multiple trails at once. Then picking the best one.
Why Trees of Thought Matter
Trees of thought are especially useful for complex problems. Problems with multiple valid solutions. Problems where you want to explore options before committing.
For example, let’s say you’re writing that apology email. You could ask the AI to:
- Branch 1: Write a version that’s highly technical and transparent
- Branch 2: Write a version that’s empathetic and customer-focused
- Branch 3: Write a version that’s brief and action-oriented
Then you can compare all three. Pick the best elements from each. Combine them into one golden version.
How to Use Trees of Thought
Here’s how you’d prompt for trees of thought:
“Generate three different approaches for this apology email. For each approach, think through the tone, structure, and key message. Then evaluate which approach is strongest and why.”
The AI will explore multiple reasoning paths. It will think through each option. Then it will synthesize a final answer.
This is powerful for:
- Strategic decisions
- Creative work
- Complex problem-solving
- Exploring trade-offs
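The branching prompt from this section can be templated the same way. A sketch below; the branch labels mirror the apology-email example above, and the wording is one reasonable phrasing, not a fixed formula.

```python
# The three branches we want the model to explore in parallel.
branches = {
    "A": "highly technical and transparent",
    "B": "empathetic and customer-focused",
    "C": "brief and action-oriented",
}

# Ask for one version per branch, then an evaluation-and-synthesis pass.
tot_prompt = (
    "Generate one version of the apology email per branch below.\n"
    + "\n".join(f"Branch {k}: {v}" for k, v in branches.items())
    + "\n\nFor each branch, think through the tone, structure, and key message."
    + "\nThen evaluate which branch is strongest and why, and synthesize a"
    + " final email combining the best elements."
)
print(tot_prompt)
```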
The Benefits of Making the Model Think
Whether you’re using chain-of-thought or trees of thought, the benefits are the same:
- Higher accuracy: The AI makes fewer mistakes when it thinks step by step
- More trust: You can see the reasoning and verify it
- Better outputs: The final result is more thoughtful and well-reasoned
- Fewer hallucinations: The AI is less likely to make things up when it’s thinking through the problem
Practical Tips
Here’s how to make the most of these techniques:
Turn on extended thinking. If your platform has it, use it. It’s free accuracy.
Describe the steps. When you have a repeatable process, spell out the steps. Guide the AI’s thinking.
Ask for multiple paths. For complex problems, use trees of thought. Explore options before committing.
Review the reasoning. Don’t just read the final output. Look at the thinking. Make sure it makes sense.
The Takeaway
Chain-of-thought and trees of thought are game-changers. They turn the AI from a guesser into a thinker. They make the reasoning visible. They improve accuracy and trust.
Use these techniques. Make the model show its work. You’ll get better results every time.
Next up: we’re going to get even crazier. We’re going to make the AI fight itself. It’s called the playoff method. And it’s wild.
Playoff Method / Battle of the Bots — Adversarial Validation for Better Outputs
What the Playoff Method Is
Ready to get even crazier? This technique is incredibly fun. The community calls it the playoff method. Researchers call it adversarial validation. That’s a hard phrase to say. I call it battle of the bots.
Here’s the big idea:
Instead of having the model arrive at an average answer, we force it to generate competing options.
This breaks the AI out of its statistical average. It pushes beyond the safe, generic response. It forces creativity. It demands excellence.
How It Works: A Three-Round Competition
Let me show you what this looks like. It’s insane. I love it so much.
With this method, we’re generating a three-round competition. We use three distinct personas. Each one has a different job to do.
Here are our players:
- The engineer — technical, precise, focused on facts
- The PR crisis manager — smooth, polished, focused on reputation
- The angry customer — critical, demanding, focused on impact
Now here’s how the rounds work.
Round 1: Competing drafts
The engineer and the PR crisis manager each write their own version of the apology email. Two completely different approaches. Two different tones. Two different strategies.
Round 2: Brutal critique
The angry customer reads both drafts. And they don’t hold back. They brutally critique both emails. They point out what’s missing. They call out the corporate fluff. They demand better.
Round 3: Collaboration and synthesis
The engineer and PR crisis manager read the customer’s feedback. Then they collaborate. They take the best parts of each draft. They address the customer’s concerns. Together, they produce one final, great email.
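The three rounds above can also be run as a small orchestration loop. In this sketch, `ask(persona, prompt)` is a hypothetical stand-in for whatever chat call you use; here it just echoes so the flow is runnable, and you’d swap in a real API call to make it live.

```python
# Stand-in for a real chat call; echoes so the round structure is visible.
def ask(persona, prompt):
    return f"[{persona}]: response to: {prompt[:40]}..."

task = "Write the Cloudflare outage apology email."

# Round 1: competing drafts from two personas.
draft_eng = ask("senior engineer", task)
draft_pr = ask("PR crisis manager", task)

# Round 2: brutal critique of both drafts from the angry customer.
critique = ask(
    "angry customer",
    f"Brutally critique both drafts:\n{draft_eng}\n{draft_pr}",
)

# Round 3: collaboration and synthesis into one final email.
final = ask(
    "engineer + PR manager",
    f"Address this critique and merge the best of both drafts:\n{critique}",
)
print(final)
```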
Why This Method Works
This technique is powerful for several reasons.
First, it forces diversity. You’re not getting one safe answer. You’re getting multiple competing answers. Each one approaches the problem differently.
Second, it leverages the AI’s strength. LLMs are really good at critique and editing. They’re excellent at evaluating options. This method taps into that strength.
Third, it avoids the statistical average. When you ask for one answer, the AI gives you the most probable answer. That’s often boring. By forcing competition, you push past boring into excellent.
Fourth, it simulates real collaboration. In the real world, great work comes from multiple perspectives. The engineer sees things the PR person doesn’t. The customer sees things both of them miss. This method captures that dynamic.
A Practical Demo: Three Rounds in Action
Let’s see it in action. I’m going to go full screen on this.
We’re going to tell it to brainstorm three distinct tonal and strategic approaches:
- Radical transparency — lead with facts, show all the details
- Customer empathy first — focus on understanding and care
- Future-focused assurance — emphasize what we’re doing to prevent this
Then we’ll tell it to evaluate each branch. Synthesize them. Find the golden path. Let’s go.
And look at that. It’s thinking through all three approaches. It’s weighing the pros and cons of each one.
Now it’s making a decision. It’s going to lead with branch B: empathy. Add in some transparency. Anchor with future focus.
That’s a pretty stinking good email. You’ve got to try that. It’s so fun.
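The demo prompt can be approximated as something like the following. The three branch names come straight from the list above; the surrounding wording is my own sketch:

```python
# A branch-and-synthesize prompt: generate three strategic branches,
# evaluate each one, then merge them into a single "golden path" answer.
# The branch descriptions mirror the demo; the phrasing is illustrative.

branches = [
    "Radical transparency: lead with facts, show all the details",
    "Customer empathy first: focus on understanding and care",
    "Future-focused assurance: emphasize what we're doing to prevent this",
]

# Label the branches A, B, C.
branch_list = "\n".join(f"{chr(65 + i)}. {b}" for i, b in enumerate(branches))

prompt = (
    "Write an apology email for the outage.\n\n"
    "First, brainstorm three distinct tonal and strategic approaches:\n"
    f"{branch_list}\n\n"
    "Then evaluate each branch: list its pros and cons.\n"
    "Finally, synthesize the branches into one 'golden path' email that "
    "leads with the strongest branch and borrows the best of the others."
)
print(prompt)
```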
When to Use the Playoff Method
So when should you use this technique? It’s not for everyday tasks. But it’s perfect for certain situations.
Use the playoff method when:
- The stakes are high
- You need a robust, well-vetted output
- Multiple perspectives would improve the result
- You’re working on important communications
- You want to push past generic AI responses
- You’re brainstorming creative solutions
This works great for:
- High-stakes emails or announcements
- PR crisis communications
- Important business proposals
- Creative brainstorming sessions
- Product launch messaging
- Sensitive customer communications
The Outcome: Stronger Final Results
What do you get from the playoff method? A much stronger final result.
The final output balances multiple concerns. It has technical accuracy from the engineer. It has polish and sensitivity from the PR manager. It has real-world grounding from the angry customer.
No single perspective dominates. Instead, you get a synthesis. The best of all worlds.
This is the power of adversarial validation. You’re not just asking the AI to guess. You’re forcing it to compete, critique, and collaborate. Just like humans do when they create great work.
Practical Tips for Battle of the Bots
Here’s how to make the most of this technique:
1. Choose distinct personas
Make sure your personas have genuinely different perspectives. Don’t pick three similar viewpoints. The diversity is what makes this work.
2. Make the critique harsh
Don’t be gentle in round two. Tell the angry customer to be brutal. The harsher the critique, the better the final result.
3. Give clear instructions for synthesis
In round three, be explicit about what you want. Tell the AI to take the best from each draft. Tell it to address the critique. Guide the collaboration.
4. Experiment with different personas
Try different combinations. Maybe a lawyer, a marketer, and a developer. Or a teacher, a student, and a parent. Match the personas to your task.
5. Use this for important work
Save this technique for when it matters. It takes more time and tokens. But the results are worth it.
Why This Feels Like Magic
When you watch the playoff method in action, it feels like magic. You see the AI thinking from multiple angles. You see it critiquing itself. You see it improving in real time.
It’s not magic, of course. It’s just good prompting. You’re structuring the task in a way that forces the AI to do its best work.
But it sure feels magical. And the results speak for themselves.
The Meta Insight
Here’s something deeper. The playoff method teaches us something about prompting in general.
The best prompts don’t just ask for an answer. They create a process. They structure thinking. They force the AI to approach the problem from multiple angles.
That’s what all these advanced techniques have in common. Chain-of-thought creates a process for reasoning. Trees of thought create a process for exploration. The playoff method creates a process for competition and synthesis.
When you think about prompting as process design, everything changes. You’re not just asking questions. You’re architecting how the AI thinks.
Try It Yourself
Seriously, you have to try this. It’s one of the most fun prompting techniques out there.
Pick a task that matters. Set up your three personas. Run the three rounds. Watch the magic happen.
You’ll be amazed at what the AI can do when you push it to compete with itself. The final output will be stronger, sharper, and more creative than anything you’d get from a single prompt.
That’s the power of battle of the bots. That’s the power of adversarial validation. That’s the power of making your AI fight itself for better answers.
Now let’s talk about the one skill that makes all of this work even better. The meta skill that ties everything together.
The meta‑skill — clarity of thought, red‑teaming, and prompt libraries
The One Skill That Rules Them All
We’ve covered personas. We’ve covered context. We’ve explored chain-of-thought and the playoff method. These techniques are powerful. They work.
But there’s one skill that makes all of them work better. One meta skill that ties everything together. It’s not a trick. It’s not a hack. It’s something deeper.
That skill is clarity of thought.
When Frustration Hits
Here’s a real story. This week, I was building a complex AI system. It was for my YouTube scripting framework. And it was failing. Hard.
I got so frustrated. I was yelling at Claude. Just like I yell at ChatGPT. We’ve all been there, right?
So I texted Daniel Miessler. He’s one of the experts. The creator of Fabric. Probably the best prompt engineer I know. That’s how frustrated I was.
I basically said, “How do you do what you do? I’m about to throw my computer out the window.”
The Answer That Changed Everything
His answer was simple. But it changed everything.
Before he sits down to work on any prompt or AI system, he describes exactly how he wants it to work. He writes it out. He thinks it through. He red teams it. That means he comes at it from different angles. He tries to make sure it’s robust.
He spends a lot of time upfront. Why? Because if he does anything less, he ends up frustrated and confused. It becomes a big mess.
And that’s where I was. Because here’s the truth:
The AI can only be as clear as you are.
If you can’t explain it clearly yourself, you can’t prompt it. That’s the key. That’s the skill.
The Real Problem
I looked back at my garbage prompts. They were messy. Why? Because my thinking was messy.
That’s when it clicked. All these foundational prompting techniques are about one thing: clarity. They force you to express yourself well.
- Persona forces you to ask: Who is answering this? Where’s the source of knowledge? What’s the perspective?
- Context forces you to ask: What are the facts? What does the AI need to know?
- Chain-of-thought forces you to think about how the logic will flow. How would you do it? How would you describe the process?
- Few-shot forces you to say: This is what good looks like. Repeat that.
These techniques aren’t magic tricks. You can try to use them that way. But eventually, it will fail. Why? Because you have to know how they’re working. And that boils down to how you’re thinking.
You have to get clear.
It’s Not the AI’s Fault
Using all these techniques doesn’t make the AI smarter. Although it feels like it does. All that’s happening is you got clearer.
Daniel Miessler said it. Joseph Thacker (they call him the prompt father, I’m not kidding) said it too:
“Treat everything as a personal skill issue. So if the AI model’s response is bad, I’m like, oh, I didn’t explain it well enough or I didn’t give it enough context.”
And Eric Pope, who has helped the NetworkChuck Academy team do some amazing things, says this:
“The more specific you could get at later stages, the better results you’ll get.”
When You’re Struggling
So here’s what to do. The next time you’re getting frustrated with AI, don’t yell at ChatGPT. Look in the mirror.
It’s you. It’s a skill issue. You’re not explaining yourself clearly.
Stop. Get a notebook out. Get a pen. Or just open up a blank note. Try to describe what you want to do. What you want to accomplish.
Think first. Prompt second.
The Surprising Benefit
Here’s what I love about AI. And this is weird. Many people are using AI like a crutch. Their skills are slowly starting to atrophy. They’re getting lazier.
But if you’re really trying to get good at AI, something different happens. The way you think improves. The way you design systems improves. Your ability to view the world and solve problems increases.
If you embrace this, you get better. Not worse.
That’s the superpower. That’s the skill to learn right now. Knowing how to describe a system. Knowing how to describe a problem. Knowing how to think clearly.
Build a Prompt Library
Once you figure out a good prompt, save it. Don’t let it disappear into the chat history. Get a prompt library.
That’s what the Google course recommends. Save what works. Save prompt templates. Save improved versions. Save roles and examples.
My friend Daniel Miessler created Fabric for exactly this reason. It’s a program full of amazing prompts. It’s a library you can use out of the box. Or you can create your own.
The point is this: good prompts are valuable. Treat them that way. Save them. Reuse them. Build on them.
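A prompt library can be as simple as a JSON file on disk. Here's a minimal sketch; the file name and the example template are my own choices, not a prescribed format:

```python
import json
from pathlib import Path

# A tiny file-backed prompt library: save templates that work,
# load them later, and fill in the blanks with str.format().
# The file name and example template fields are illustrative.

LIBRARY = Path("prompt_library.json")

def save_prompt(name: str, template: str) -> None:
    data = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    data[name] = template
    LIBRARY.write_text(json.dumps(data, indent=2))

def load_prompt(name: str, **kwargs) -> str:
    data = json.loads(LIBRARY.read_text())
    return data[name].format(**kwargs)

save_prompt(
    "apology_email",
    "You are a {persona}. Write an apology email about {incident}. "
    "Tone: {tone}. End with concrete next steps.",
)
print(load_prompt("apology_email",
                  persona="PR crisis manager",
                  incident="the outage",
                  tone="empathetic but factual"))
```

Start with a plain file like this, and graduate to something like Fabric when your collection grows.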
Use Prompt Enhancers
Here’s the meta meta skill. Use a prompt enhancer prompt to enhance your prompts for better prompts.
Did you get lost on that one? Let me explain.
You can use prompts to help you take your raw ideas and structure them into a really great prompt. All the major AI providers offer this. Anthropic has a prompt improver. OpenAI has tools. Google has tools. And the community has created tools like Fabric.
You can paste your rough prompt into a prompt enhancer. It will clean it up. It will add structure. It will make it clearer.
This is powerful. It’s like having a prompt engineer review your work. You get better results. You learn faster.
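A prompt enhancer is itself just a prompt. Here's a rough sketch of a wrapper you could put around any rough idea; the instructions inside are my own phrasing, not any vendor's official improver:

```python
# Wrap a rough prompt in a meta-prompt that asks the model to rewrite it
# with structure: persona, context, steps, and an output format.
# The enhancer wording below is an illustrative assumption.

def enhance(rough_prompt: str) -> str:
    return (
        "You are an expert prompt engineer. Rewrite the rough prompt below "
        "into a clear, structured prompt. Add: a persona, the needed context, "
        "step-by-step instructions, and an explicit output format. "
        "Return only the improved prompt.\n\n"
        f"Rough prompt:\n{rough_prompt}"
    )

print(enhance("write apology email for outage, not too corporate"))
```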
The Mindset Shift
Here’s the big takeaway. Prompting is not about tricking the AI. It’s not about finding the magic words. It’s about clarity.
When you’re struggling with AI, it’s not the AI’s fault. It’s not a prompting problem. It’s that you don’t yet know how to think clearly about the task.
The AI can only be as clear as you are.
So the real skill is this: learn to think clearly. Learn to describe problems. Learn to design systems. Learn to express yourself well.
That’s the meta skill. That’s what makes everything else work.
Red-Teaming Your Prompts
Red-teaming is a technique from cybersecurity. You attack your own system. You try to break it. You come at it from different angles.
Do the same with your prompts. Before you send them, stress-test them. Ask yourself:
- What could go wrong?
- What information is missing?
- What assumptions am I making?
- How could the AI misunderstand this?
- What edge cases might break this?
By red-teaming your prompts upfront, you catch problems early. You add missing context. You clarify vague instructions. You make the prompt robust.
This is what Daniel Miessler does. And it’s why his prompts work so well.
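You can even make the model do the first pass of red-teaming for you. Here's a minimal sketch that turns the checklist above into a critique prompt; the question list mirrors the checklist, and the rest of the wording is mine:

```python
# Turn the red-team checklist into a prompt that asks the model
# to attack a draft prompt before you actually run it.
# The questions mirror the checklist above; the framing is illustrative.

RED_TEAM_QUESTIONS = [
    "What could go wrong?",
    "What information is missing?",
    "What assumptions does it make?",
    "How could a model misunderstand it?",
    "What edge cases might break it?",
]

def red_team(draft_prompt: str) -> str:
    questions = "\n".join(f"- {q}" for q in RED_TEAM_QUESTIONS)
    return (
        "Act as a hostile reviewer. Attack the draft prompt below by "
        f"answering each question:\n{questions}\n\n"
        f"Draft prompt:\n{draft_prompt}\n\n"
        "Then suggest a fixed version."
    )

print(red_team("Write an apology email for the outage."))
```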
Think First, Prompt Second
This is the mantra. Think first. Prompt second.
Don’t just open ChatGPT and start typing. Stop. Think. Plan. Describe what you want. Red-team it. Make it clear.
Then prompt.
You’ll get better results. You’ll save time. You’ll learn faster.
And over time, you’ll get better at thinking. Not just at prompting. At thinking.
That’s the real value of learning to prompt well. It makes you a clearer thinker. It makes you a better communicator. It makes you a better problem solver.
The Long-Term Benefit
Here’s the thing nobody talks about. Using AI well strengthens your systems thinking. It strengthens your communication. It doesn’t weaken it.
When you use AI as a crutch, you get weaker. But when you use AI as a tool for clarity, you get stronger.
You’re forced to think clearly. You’re forced to describe systems. You’re forced to communicate precisely.
Those are valuable skills. Skills that transfer to everything you do. Not just AI. Everything.
That’s the long-term benefit. That’s why this matters.
The Bottom Line
The meta skill is clarity of thought. Every prompting technique reinforces clarity. Persona gives you perspective. Context gives you facts. Chain-of-thought gives you logic. Few-shot gives you examples.
Red-team your prompts. Think through how you want them to work. Describe the system. Stress-test it. Make it robust.
Build a prompt library. Save what works. Reuse it. Improve it.
Use prompt enhancers. Let the tools help you get clearer.
And remember the mindset: think first, prompt second.
The AI can only be as clear as you are. So get clear. That’s the skill. That’s the meta skill. That’s what makes everything else work.
Wrap-up, next steps, and call to action
What We’ve Covered
Let’s recap what we’ve learned. We started with the basics. Personas. Context. Output formatting. These three foundations solve most prompting problems.
Then we leveled up. We explored few-shot prompting. We learned chain-of-thought and trees of thought. We discovered the playoff method. We talked about tools and memory.
Each technique builds on the last. Each one makes your prompts better. And when you put them all together, they solve roughly 80% of prompting problems.
That’s huge. You now have the tools to get great results from AI. Every single time.
The Impact of What You’ve Learned
These aren’t just tricks. They’re foundational skills. When you apply them, everything changes.
Your AI outputs get sharper. More accurate. More useful. Less generic. Less boring.
You stop yelling at ChatGPT. You stop getting frustrated. You start getting exactly what you need.
And here’s the best part. You get better at thinking. Not just at prompting. At thinking. At describing systems. At solving problems.
That’s the real value here. These skills transfer to everything you do.
Your Next Steps
So what should you do now? Here’s your action plan.
Practice the foundational techniques. Start with personas, context, and output formatting. Use them in every prompt. Make them a habit.
Create a prompt library. Save what works. Build a collection of templates. Reuse them. Improve them over time.
Use prompt enhancers. Before I send a prompt, I always make sure my ideas are clear. I make sure what I’m describing makes sense to me. I try to imagine handing it to a human and asking, “Is this enough information for you to do this thing?” If a human could do it, the AI can probably do it too.
Also, all the major AI providers have their own version of prompt improvers. Anthropic has one. OpenAI has one. Google has one. Use them. They’ll help you get clearer faster.
Save and iterate on your prompts. Don’t throw away good work. When you create a prompt that works well, save it. Use it again. Tweak it. Make it better. Build on what works.
Think first, prompt second.
That’s the mantra. Keep a notebook. Write out your ideas before you prompt. Red-team your thinking. Refine it. Then prompt.
Go Build Something Wild
Now it’s time to take action. Go build something insane. Use what you’ve learned. Push the boundaries. See what’s possible.
Try the playoff method on a real project. Set up a chain-of-thought prompt for a complex task. Use few-shot examples to teach the AI your style.
Experiment. Play. Have fun. That’s how you learn. That’s how you get better.
Share What You Discover
And if you have an amazing prompt that does some crazy stuff, I’d love to see it. Let me know below in the comments. Or send me an email or something.
Share your breakthroughs. Share your best prompts. Share what worked for you. The community gets better when we share what we learn.
The Final Word
That’s it. You’ve got the foundations. You’ve got the advanced techniques. You’ve got the meta skill.
You know how to prompt. You know how to think clearly. You know how to get great results from AI.
Now go use it. Go create something amazing. Go solve problems. Go build systems. Go make something that matters.
I’ll catch you guys next time.
Personal outro and prayer
A Personal Moment
Hey, you’re still here. That’s awesome.
At the end of my videos, I like to do something a little different. I like to pray for you. For my audience. For the people watching.
If praying isn’t your thing, that’s totally cool. No pressure. But if you’re not sure, stick around. I want to do this real quick. Then we can go about our day.
A Prayer for You
God, I thank you for the person watching this video.
I thank you that they’re hungry for tech. That they’re excited. That they’re building their career right now.
I ask that you encourage them in that. Give them great favor. Go before them and make their path straight.
They may be dealing with some struggles right now. Maybe they’re struggling to stay motivated. Maybe they’re dealing with fear about the future. About what AI is doing to their career. About what’s next.
I pray you remove that fear. Remove that anxiety. Give them the wisdom to make the best choices. The best next steps. Help them add knowledge to their jobs and their careers.
I pray you bless their careers, Lord. Help them to show up and be good. To be that person who is dependable. Valuable. Seen. Let their careers explode.
God, give them clarity in all this. I pray that the tools they’re learning in this video will be something they can make concrete. Something that changes their lives. Changes their businesses. Changes their careers.
Bless them, God. Bless their families.
I ask this in your name, Jesus. Amen.
That’s It
That’s it, guys. Thanks for sticking around. Thanks for learning with me. Now go out there and put these skills to work.
Talk to you soon.