What Thailand Taught Me About the AI Echo Chamber
What a two-week break revealed about relevance, context, and AI
I just got back from two weeks in Thailand.
Nine loads of laundry are piled in the basement. My kids are jet-lagged, confused about what day it is. My brain isn’t ready for anything that requires sharp thinking.
There are 200+ unread messages in my inbox. My LinkedIn feed is moving fast. AI agents. Agentic frameworks. New model releases. 2026 predictions. Think pieces about what I missed while I was gone and what our work will look like in 2026.
The FOMO hits immediately. But there’s something else sitting next to it. A dissonance I can’t shake.
Thailand didn’t care about AI.
The souvenir shops in Pa Tong don’t have apps. You can’t add items to a cart and check out. You walk in, find something you like, and negotiate face-to-face.
I tried this once. Picked up a small carved piece. Asked the price. The seller quoted me something. I countered lower. She said no, held firm. My kids were cranky. It was a hot day. I didn’t have the muscle for it anymore. I’m too used to paying sticker price. Too used to Amazon.
I left. Walked to an air-conditioned store down the street. Paid more for roughly the same thing.
The vendor I walked away from probably needed that sale more than the air-conditioned store did. But the friction was too much. I opted out.
The economy there runs on haggling, cash, and face-to-face trust.
Tuk-tuks are rickety and jazzed up at the same time. Drivers negotiate fares in real time based on distance, time of day, and how they read you. No apps. No surge pricing algorithms. Just humans making deals. Side note: we barely used the Grab app, but when we did, it helped us get past the language barrier and tell the driver where we were going.
Shops charge 3% if you want to use a credit card. Most people pay cash. The system works.
Then I come home and open my feeds.
Every post is about AI. Every product update includes “AI-powered” in the announcement. Conferences, podcasts, newsletters—all racing to cover the latest model, the latest framework, the latest breakthrough.
But watching the hype cycle from the outside—even for just two weeks—made something clear.
Where AI Matters: In Specific Contexts
Knowledge workers in wealthy economies. People whose work happens on computers all day. Industries where labor costs are high enough that automation is relevant. Companies operating at scale where efficiency compounds.
That’s the echo chamber: a tight loop of people building AI, talking about AI, and worrying about AI—mostly with each other. We’re shipping AI products, writing about AI, and consuming content from others doing the same work. The discourse feels urgent because we’re all talking to each other about our own disruption.
And to be honest, not all of this is genuine curiosity. Some teams are unclear about what to do, so they avoid hard prioritization decisions by hiding behind AI. Shipping “AI-powered” features is safer than saying no, cutting scope, or admitting a problem doesn’t need AI at all.
But most of the world’s economic activity doesn’t operate under these conditions. Cash transactions. Human relationships. Labor that’s affordable enough that replacing it with software simply doesn’t make sense.
AI’s impact is real. The discourse travels faster than its relevance. And being two weeks behind on the conversation doesn’t mean being behind on what matters.
How I’m Thinking About Re-Entry
I’m not ignoring AI. But I’m not diving in headfirst either.
I’m focusing on the problems I’m actually trying to solve, not the solutions being announced.
Before I chase the latest agentic framework, I want to know what user problem it solves that a simpler approach doesn’t. Before I add AI to a feature, I want to understand the cost—both to build and maintain—and whether it’s worth it.
This means understanding the mechanics. What it costs to run. Where it breaks. What value it creates for users versus what it signals to stakeholders.
What I’ve noticed is how quickly AI features turn into defaults instead of decisions. Summaries shipped everywhere because they were easy to justify, not because they were clearly valuable. Each subsequent layer (agents, orchestration, multi-step reasoning) adds power, but also complexity and ambiguity.
Each progression feels less like “we solved the last problem” and more like “the technology advanced, so we’re finding new ways to solve the same problems.”
Are we solving user problems or solving our own anxiety about being left behind?
I don’t have a clean answer.
Some AI features are genuinely useful. Some solve real problems. But a lot of what’s being shipped feels like checkbox innovation. AI-powered features that exist so the company can say they have AI-powered features.
The Pa Tong vendor doesn’t think about optimization. She thinks about whether she’ll make enough today to cover costs. Whether the item she’s holding will sell. How to read the customer in front of her and find the right price.
Her solutions flow from her problems.
So that’s what I’m trying to do. Catch up strategically, not frantically. Read what’s relevant to the problems I’m actually working on. Ignore the rest, for now. Ask: does this need AI, or does a simpler solution work just as well?
It’s slower. Less flashy. But it feels right.
The world didn’t end while I was offline.
Thailand keeps running without AI. The big AI breakthroughs I “missed” will still be there when I’m ready to learn about them.
If you just got back from a break and your feeds feel overwhelming, you’re not behind. The discourse moved. That’s not the same thing as progress.
AI’s impact is real for some problems. Not all problems. The work is knowing which problems deserve your attention and which don’t.
In the meantime, I’m still doing laundry. Still jet-lagged. Still figuring it out.
But I’m not racing to catch up anymore. I’m choosing what to catch up to.
That feels like the right place to start.
