A few months back, one of my workshop students came to me frustrated.
She'd been building AI agents for her business. The first few weeks went great. Then things started going sideways. Her email-tagging agent was misclassifying threads. Her meeting prep agent was pulling irrelevant information. She kept tweaking her prompts, but nothing helped.
She thought she needed better prompts.
She didn't. She had a context problem.
The Shift Most People Are Missing
For a while, “prompt engineering” was a real skill. Early models needed precise instructions. Specific syntax. Careful phrasing. You had to talk to AI like it was fragile.
But the models got a lot smarter. And now, most of the old prompting tricks are irrelevant. You can just… talk to the AI naturally. Explain what you need like you'd explain it to a smart colleague.
What matters now is context engineering.
Context engineering is simpler than it sounds. It just means giving the AI enough specific details to do the job well.
Here's the clearest way I know to explain it.
If I type: “Write a newsletter on AI agents,” I'll get a generic, forgettable newsletter that could've been written by anyone.
But if I type: “I'm Thanh from Asian Efficiency. This newsletter goes out to about 80,000 people. Most of them work in corporate America or run a small business. They're relatively new to AI but curious about it. Write a newsletter on AI agents” — now the agent knows exactly who it's writing for. The output is completely different.
Same request. Completely different context. Night and day results.
That's it. No magic words, no special formatting, no secret syntax. Just the right details.
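To make the contrast concrete, here's a minimal sketch in Python. The `build_prompt` helper is purely illustrative (not a real library function); it just shows that "context engineering" at this level is nothing more than putting the right details in front of the request.

```python
# Illustrative helper, not a real API: prepend context details to a request.

def build_prompt(request: str, context: str = "") -> str:
    """Return the request with any context details placed before it."""
    return f"{context}\n\n{request}" if context else request

# The generic version: no context, generic output.
generic = build_prompt("Write a newsletter on AI agents")

# The same request with audience context added in plain language.
specific = build_prompt(
    "Write a newsletter on AI agents",
    context=(
        "I'm Thanh from Asian Efficiency. This newsletter goes out to "
        "about 80,000 people, mostly corporate workers and small business "
        "owners who are new to AI but curious about it."
    ),
)

print(generic)
print(specific)
```

Same request string both times; only the context differs, and that's what changes the output.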
The Context Window Mistake
Here's where I see people go wrong once they understand this idea.
They swing too far in the other direction.
They open ChatGPT, load in 300 documents, paste their entire email inbox, and then ask one small question. Then they wonder why the answer is vague or wrong.
The AI can only hold so much in memory at once. When the context window gets too full, it doesn't crash — it just starts skimming. Accuracy drops. The model starts guessing.
I use a sheet-of-paper analogy with my workshop students. When a conversation fits on one sheet of paper, the AI reads every word carefully. When you give it a stack of paper, it starts skimming like a student the night before an exam.
My student's problem wasn't her prompts at all. She'd loaded her agents with massive knowledge bases “just in case.” We stripped them down to only what each specific task needed. Her email agent got only the email templates and tagging rules. Her meeting prep agent got only the relevant contact data and agenda format.
Accuracy went back to near 100%. Same agents. Same tasks. Just tighter context.
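The stripping-down step can be sketched the same way. This is a hypothetical setup, with invented document names, but it shows the principle: keep one shared knowledge base, and hand each agent only the slice its task needs.

```python
# Hypothetical knowledge base; the keys and agents are invented examples.
knowledge_base = {
    "email_templates": "...",
    "tagging_rules": "...",
    "contact_data": "...",
    "agenda_format": "...",
    "company_history": "...",  # "just in case" material no task actually needs
}

# Each agent lists only the pieces relevant to its specific task.
agent_context = {
    "email_tagger": ["email_templates", "tagging_rules"],
    "meeting_prep": ["contact_data", "agenda_format"],
}

def context_for(agent: str) -> dict:
    """Return the minimal context slice for one agent's task."""
    return {key: knowledge_base[key] for key in agent_context[agent]}

print(sorted(context_for("email_tagger")))
```

The "just in case" material never reaches either agent, which is exactly the point: a smaller, task-specific context window keeps the model reading carefully instead of skimming.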
How to Build an Agent That Actually Performs
When I teach agent building in my workshops, I always start with two questions before anything else: what is this agent's role, and what is its goal?
They're different things, and confusing them is one of the main reasons agents underperform.
The role is like a job title. It defines identity. Tone. How the agent thinks about itself and the work.
The goal is the specific outcome you want. The deliverable.
Think about how you'd hire a human assistant. You'd tell them: “You're a travel coordinator” (role) and “I need you to book me on a flight to New York next Tuesday, under $400, window seat” (goal). Both matter. Skip either one, and the results get sloppy.
Same thing with agents. If you only give a goal without a role, the agent's tone and judgment will be all over the place. If you only give a role without a clear goal, it'll spin in circles.
The formula I use is what I call the OCE approach: Outcome, Context, Expectations. What do you want it to produce? What does it need to know? What format and constraints matter? Define those three, and most agents work well on the first try.
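One way to picture the OCE approach is as a fill-in-the-blanks template. This sketch uses the travel-coordinator example from above; the field labels and wording are illustrative, not a required format.

```python
# Sketch of an OCE-style agent instruction: role plus Outcome, Context,
# Expectations. The labels are illustrative, not a mandated syntax.

def oce_prompt(role: str, outcome: str, context: str, expectations: str) -> str:
    """Assemble a role and the three OCE pieces into one instruction."""
    return (
        f"Role: {role}\n"
        f"Outcome: {outcome}\n"
        f"Context: {context}\n"
        f"Expectations: {expectations}"
    )

prompt = oce_prompt(
    role="You are a travel coordinator.",
    outcome="Book me on a flight to New York next Tuesday.",
    context="I fly out of Austin and prefer morning departures.",
    expectations="Under $400, window seat, reply with a short summary.",
)
print(prompt)
```

Notice the role and the goal both show up explicitly, so neither the agent's tone nor its deliverable is left to guesswork.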
What Real Results Look Like
I'll leave you with a story from one of my Two Hour Workday workshops.
Mark was the only student who showed up the day we built a meeting prep agent. (There was a reminder email glitch. Long story.) So he got essentially a private session.
We built him a simple agent that pulled upcoming meetings from his calendar, ran background research on each attendee, and sent him a briefing before every call.
Setup time: about thirty seconds of actual work on his end.
A week later, I followed up.
He told me he'd had several meetings that week and felt genuinely better prepared going into every one of them. He'd even started running it before his church committee meetings. The cost? A few cents a day.
“I've only been on it a week,” he said, “but it's been a lot of fun.”
That's what good context engineering makes possible. Not just better prompts — agents that actually work the first time, cost next to nothing to run, and feel like they were built for exactly your situation.
Because with the right context, they were.
The Quick Version
If you want to start applying this today:
- Add context before asking — who are you, who's the audience, what format do you want?
- Keep context tight — only give the AI what it needs for this specific task
- Define role AND goal — for any agent, both matter
- Don't blame the model — if output is weak, check your context first
Most AI problems aren't model problems. They're context problems.
And context engineering is way easier to fix than retraining a model.
Want to learn how to build AI agents that actually work for your business? I run hands-on workshops in Austin. Reach out here.
