Gas Town
Steve Yegge’s Gas Town post is absolutely bonkers in terms of AI token spend. It tries to keep a perpetual flywheel of application generation spinning.
And who can blame him? The tool itself was vibe coded in Python, then Go. It will likely never run on my machine, because I can’t think fast enough to keep that many agents busy.
It also reminds me that the simpler explanation of how things work is much better. Maggie Appleton’s post is far easier to understand, coming from someone with the visual expertise to break down something complicated using the right metaphors. It also helps that the images aren’t AI generated with Google’s Nano Banana.
On agent orchestration patterns, why design and critical thinking are the new bottlenecks, and whether we should let go of looking at code
Some notable takeaways:
- Design and planning become the bottleneck when agents write all the code.
- Getting agents to work autonomously turns out to be slightly difficult because of the way current models are trained. They’re designed as helpful assistants who wait politely for human instructions; they’re not used to checking a task queue and independently getting on with things (a rough sketch of that task-queue pattern follows this list).
- Stakes matter. If an agent breaks some images on your personal blog, you’ll recover. But if you’re running a healthcare system where a bug could miscalculate drug dosages, or a banking app moving actual money around, you can’t just wave an agent at it and hope. Consequences scale up fast.
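To make the second takeaway concrete, here is a minimal sketch of the task-queue pattern it describes: a worker loop that polls a backlog and dispatches work to an agent instead of waiting for a human prompt. This is not Gas Town’s (or anyone’s) actual code; every name here (`fetch_next_task`, `run_agent`, `worker_loop`) is a hypothetical placeholder, and the “agent” is just a stub.

```python
import time


def fetch_next_task(queue):
    """Pop the next pending task from an in-memory queue, or None if empty."""
    return queue.pop(0) if queue else None


def run_agent(task):
    """Stand-in for an LLM agent call; here it just echoes the task back."""
    return f"completed: {task['description']}"


def worker_loop(queue, poll_interval=0.1, max_idle_polls=3):
    """Keep pulling work from the queue without waiting for a human instruction.

    A real orchestrator would loop forever; this sketch stops after a few
    empty polls so the example terminates.
    """
    idle = 0
    while idle < max_idle_polls:
        task = fetch_next_task(queue)
        if task is None:
            idle += 1
            time.sleep(poll_interval)  # nothing to do; check again shortly
            continue
        idle = 0
        print(run_agent(task))


if __name__ == "__main__":
    backlog = [
        {"description": "fix broken image links on the blog"},
        {"description": "draft release notes"},
    ]
    worker_loop(backlog)
```

The point of the sketch is the shape of the loop, not the contents: the agent is handed work by a queue rather than by a person, which is exactly the behavior current assistant-trained models aren’t naturally inclined toward.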
Honestly, the past few months have been another whirlwind of AI tools that’s hard to keep up with. Vibe coding was so February 2025, and we’ve moved on to agent orchestration, vibe engineering, and a slew of other things I’ve missed. I know I’m not the only one feeling this way, and it reminds me that I do, indeed, need to keep blogging my own thoughts.