December 2024. I'm in Vietnam visiting a friend.
We end up at another friend's apartment. And this person — a crypto developer — shows us something he built. An AI agent that monitors code repositories for bugs, and then automatically negotiates bounties via email. No human in the loop. Just the agent doing the work.
I stood there watching it run.
Monitoring. Deciding. Emailing. All on its own.
And I remember thinking: oh my gosh. This is real.
Not “AI could theoretically do this.” It was doing it. Right in front of me, in this guy's apartment in Ho Chi Minh City.
That was the moment everything shifted.
The Difference Between Using AI and Deploying AI
Up until that point, I'd been a pretty heavy AI user. ChatGPT for drafts. Claude for analysis. Perplexity for research. I was getting a lot of value out of these tools.
But I was still the one doing the work.
Every time I needed something, I'd open a chat window, type a prompt, read the response, copy something out, edit it, paste it somewhere else. The AI was making me faster. But I was still in every loop.
What I saw in Vietnam was categorically different. The agent wasn't waiting for me. It wasn't in a chat window. It was running on its own, watching for conditions, making decisions, and taking action.
That's not a faster search engine. That's a digital teammate.
I came home and spent the next few months rebuilding how I work.
The 63-Cent Podcast Outline
One of the first things I automated was my podcast content workflow.
Before, putting together a podcast outline was a 3-4 hour project. Research the topic. Gather examples. Find the right stories. Write the structure. I'd put it off constantly because it felt heavy.
Now, when I want to start a new episode, I move a card in Jira. That triggers an AI agent that researches the topic, generates a full outline, and pulls from a database of my real personal stories to illustrate each point.
The whole thing runs in about 2 minutes. Total cost: 63 cents.
That's not a typo. 63 cents. For something that used to take me 4 hours.
Why Most AI Content Falls Flat (And How to Fix It)
This is the part I want to spend some time on, because it's where most people get stuck.
If you ask ChatGPT to write a LinkedIn post, it'll write a LinkedIn post. But it'll be generic. Bland. It'll have the kind of “insights” that could apply to anyone.
The reason is simple: the AI doesn't know who you are. It doesn't know your stories, your clients, your specific experiences. So it makes things up. Or it writes something technically accurate but completely impersonal.
I ran into this with a client earlier this year. He wanted to use AI to create LinkedIn content. We set up a workflow, and the posts came out… fine. But they felt fake. Like a press release, not a person.
The fix was building a story bank first.
A story bank is a database of real things you've actually said and done. Stories from client calls. Examples from your workshops. Observations you've shared on podcasts or in meetings. All captured as plain text and stored somewhere searchable.
When my content agents write anything now, they search that database first. So instead of inventing a vague example, they pull something real. A specific client. A specific situation. A specific outcome.
The difference is night and day. The content actually sounds like me.
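A toy version of the story bank looks something like this. A real setup would sit on a vector or full-text database; the keyword-overlap ranking and the sample stories below are assumptions for illustration, not my actual stack.

```python
# Minimal story bank sketch: plain-text stories with a keyword search in front.
# The stories and the scoring method are illustrative stand-ins.

def tokenize(text: str) -> set[str]:
    # Lowercase words with surrounding punctuation stripped.
    return {w.strip(".,!?").lower() for w in text.split()}

def search_stories(stories: list[str], query: str, limit: int = 2) -> list[str]:
    """Rank stories by keyword overlap with the query; best matches first."""
    q = tokenize(query)
    ranked = sorted(stories, key=lambda s: len(tokenize(s) & q), reverse=True)
    return [s for s in ranked[:limit] if tokenize(s) & q]

bank = [
    "A SaaS client cut onboarding time in half after we automated their docs.",
    "At a workshop in Austin, one attendee rebuilt her email triage in a day.",
    "On a podcast, I explained why daily automations beat quarterly ones.",
]
```

The agent's writing step then quotes whatever `search_stories` returns instead of inventing an example, which is the whole trick: retrieval before generation.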
Start With What You Do Every Single Day
One thing I've learned from building agents for myself and clients is this: don't start with the impressive use case. Start with the frequent one.
I call this the 80-20 approach to agent building. What do you do every day or every week that eats the most time?
For me, it was email. Meetings. Content creation. Those three things alone were taking 6+ hours a day.
For one of my clients — a VC named Evan — it was meeting prep. Every day he'd spend time digging through emails and notes to get context before calls. I built him a daily briefing agent. Now he gets a prep doc every morning automatically. What used to take 20 hours a week of human assistant time now takes minutes.
That's the 80-20 in action. High frequency, high pain. Compounding ROI.
The quarterly-use-case automations can wait. Build for the daily loops first.
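A briefing agent like Evan's reduces to a simple shape: list today's meetings, attach whatever past context mentions each attendee, emit one document. The data sources and field names below are illustrative assumptions; in practice the meetings come from a calendar API and the notes from email and CRM.

```python
# Hypothetical daily briefing assembler. Field names and the matching rule
# (attendee name appearing in a note) are simplifying assumptions.

def build_briefing(meetings: list[dict], notes: list[str]) -> str:
    """Assemble a morning prep doc: one section per meeting with matching notes."""
    sections = []
    for m in meetings:
        context = [n for n in notes if m["attendee"].lower() in n.lower()]
        block = [f"## {m['time']} - {m['attendee']}"]
        block += context or ["(no prior context found)"]
        sections.append("\n".join(block))
    return "\n\n".join(sections)
```

Run on a schedule each morning, something this simple already replaces the manual digging; the 20-hours-a-week savings comes from the frequency, not the sophistication.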
What Agent as Teammate Actually Means
The mental model that helped me most is thinking of agents as teammates, not tools.
A tool waits for you. A teammate is aware of what's going on, knows the context, and can act without being told every step.
That Vietnam crypto agent wasn't waiting for its developer to say “hey, go check for bugs.” It was running a loop. Monitoring. Finding opportunities. Taking action within the boundaries it was given.
That's what I aim for when I build agents now. They should know enough context that I don't have to babysit them. They should have clear limits on what they can do without approval. And anything inside those limits, they should just handle, keeping it off my plate entirely.
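The loop that crypto agent was running, reduced to a skeleton, might look like this. The dollar threshold, the field names, and the action labels are all assumptions for illustration; the point is the boundary check that decides between acting autonomously and escalating.

```python
# One pass of a monitor-decide-act loop with an approval boundary.
# The threshold and payload shape are hypothetical.

AUTO_APPROVE_LIMIT = 500  # dollars: the agent may act alone below this line

def agent_step(findings: list[dict]) -> list[str]:
    """Decide, for each finding, whether to act autonomously or escalate."""
    actions = []
    for f in findings:
        if f["bounty_estimate"] <= AUTO_APPROVE_LIMIT:
            actions.append(f"email_negotiation:{f['repo']}")  # act on its own
        else:
            actions.append(f"request_approval:{f['repo']}")   # escalate to human
    return actions
```

That single `if` is what separates a teammate from a loose cannon: the agent runs unattended, but only inside limits someone chose deliberately.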
The Week That Made It Click
Earlier this year, I tracked what my Lindy agents saved in one week.
239 hours.
That's equivalent to six full-time employees working a full week. In one week. From automations I'd built over the previous few months.
Some of that is email triage. Some is meeting prep. Some is content creation. Some is research. It adds up faster than you'd expect.
I'm not sharing this to brag. I'm sharing it because the number surprised even me. The compounding effect of high-frequency automations hits different once you see it on paper.
Where to Start
If you're reading this and still using AI mostly as a chat tool, here's what I'd suggest:
Pick the thing you do every single day that feels like a drag. Email sorting. Meeting notes. Research briefs. Proposal drafts. Whatever it is.
Start there. Build one agent. Get it working at 70%.
Then let it run for a few weeks and watch what happens to your week.
You don't have to go to Vietnam to get the aha moment. But you do have to stop treating AI like a faster search engine.
The agents are the real thing.
Want to learn how to build your first AI agent? Check out the Productivity Academy for hands-on training with Thanh.
