After Reading All Blogs by Peter Steinberger, Founder of OpenClaw: Extracting the Core Methodology

2026-02-02
I recently dug through OpenClaw founder Peter Steinberger's 2025-2026 blog posts and got thoroughly hooked—this guy isn't just writing technical shares; he's living a pragmatic, anti-motivational personal operating system around "using AI agents to write code." His take on AI agents hits hard: not the kind of colleague who whiteboards grand visions or deflects blame in meetings, but a silent, gritty workhorse who grumbles at you occasionally yet never drops the ball when it matters.

What struck me most is his brutally honest, almost offensive one-liner: "These days I don’t read much code anymore. I watch the stream and sometimes look at key parts, but I gotta be honest—most code I don’t read." To him, code isn’t a document to dissect line-by-line anymore; it’s more like a rushing river. You don’t need to jump in and swim with the current—just stand on the bank and toss the occasional stone to steer its direction. He also drops this vivid metaphor: "Building software is like walking up a mountain. You don’t go straight up; you circle around it, take turns, sometimes veer off-path and have to backtrack a bit. It’s imperfect, but eventually you get to where you need to be." Software development isn’t about a flawless straight-line climb—it’s about circling the mountain, meandering, correcting course when lost, and stumbling forward until you reach the summit.


Now, let’s unpack this highly actionable methodology through the lens of his blog’s hilarious yet piercing complaints—they might make you laugh out loud, but trust me, the substance is real.



1. Mindset Shift: From "Writing It Myself" to "Watching It Run + Occasionally Cursing"

Peter's core philosophy is buried in his laugh-out-loud one-liners—no fluff. "Just talk to it." is practically his mantra. He mocks those overcomplicating things with plan modes, subagents, multi-step wrappers, or RAG pipelines, calling them "old-model-era theatrics." Today's AI is smart enough for direct conversation, he argues. He once quipped about a misunderstood tweet: "a highly misunderstood tweet of mine that’s still circling around that showed me that most people don’t get that plan mode is not magic."—proof that folks have overhyped "plan mode" as some mystical tool.

What’s wilder is his "pathological trust" in models: "If codex comes back and hasn’t solved it in one shot, I already get suspicious." When an AI fails a complex task, his first thought isn’t "Did I explain it poorly?" but "Is the prompt bad? Context messy? Model having an off day?"—a mindset as blunt as it is refreshing.

He also busts an industry myth: "Most software does not require hard thinking. Most apps shove data from one form to another, maybe store it somewhere, then show it to the user." Software, he says, is mostly "data shuffling + display," not rocket science—AI can handle it. He later adds a sting: "The amount of software I can create is now mostly limited by inference time and hard thinking." His output now depends on AI speed and his own rare bursts of deep thought, not typing speed.
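His "data shuffling + display" claim is easy to picture in code. Here is a minimal, hypothetical Go sketch (not from any of Peter's projects; the names and shapes are invented for illustration): take data in one form, store it somewhere, then show it to the user—and that really is the whole app.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SignupForm is the incoming shape (e.g. what a web form posts).
type SignupForm struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// UserRecord is the stored shape.
type UserRecord struct {
	ID    int
	Name  string
	Email string
}

// store stands in for "store it somewhere" (a real app would use a DB).
var store = map[int]UserRecord{}

// handleSignup shoves data from one form to another, stores it,
// and returns what gets shown to the user.
func handleSignup(raw []byte, nextID int) (string, error) {
	var form SignupForm
	if err := json.Unmarshal(raw, &form); err != nil {
		return "", err
	}
	store[nextID] = UserRecord{ID: nextID, Name: form.Name, Email: form.Email}
	return fmt.Sprintf("Welcome, %s! (#%d)", form.Name, nextID), nil
}

func main() {
	msg, err := handleSignup([]byte(`{"name":"Ada","email":"ada@example.com"}`), 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(msg) // Welcome, Ada! (#1)
}
```

No clever algorithms anywhere—just parse, store, format. That is the category of software Peter argues agents already handle fine.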

The funniest part? He self-identifies as a "Claudoholic", admitting he’s hooked on the dopamine hit of "just one more prompt"—like a gambler glued to a slot machine. "I’m that person that drags in a clipped image of some UI component with ‘fix padding’ or ‘redesign’"—snapping a UI screenshot, tossing it to AI, and saying "fix this" in one line. Yet in Just One More Prompt, he questions whether this is hyper-productivity or a new form of workaholism—even burnout in disguise. His snark shines through too, like when Claude Code finally fixed its notorious flicker bug: "Hell froze over. Anthropic fixed Claude Code’s signature flicker in their latest update (2.0.72)"—equal parts shock, relief, and dark humor.


2. Core Problem-Solving Logic: First Map the "Blast Radius"

Peter’s first move in any problem-solving scenario is always to gauge the "blast radius"—arguably the most recurring meme in his blog: "The important question is always: how big is the blast radius of this change?" Before touching code, he prioritizes understanding the scope of impact.

  • Small changes (1-2 files, minimal ripple effects): He tosses the task to AI with a simple prompt—"Fix it"—no extra fluff.
  • Big changes (20+ files, core architecture tweaks): He shifts to cautious collaboration mode: "let’s discuss," "give me a few options before making changes," "take your time," "read all related code," "create hypothesis." He lets AI analyze dependencies, research solutions, and propose multiple approaches first, then selects one with: "ok, build this one."

He despises two extremes: over-specifying (micromanaging the AI with rigid constraints that stifle creativity) and under-communicating (vague demands that force the AI to guess). His sweet spot? "Under-specify is often better—the model will fill in the gaps in surprisingly good ways." Leaving some ambiguity lets the AI surprise him with elegant solutions.

His secret weapon? Cross-project reuse. A simple instruction like "look at ../vibetunnel and do the same for Sparkle changelogs" leverages his AI-optimized codebase (clean structure, intuitive naming) for rapid "copy-paste-with-brains" scaling. He self-deprecates: "I do know where which components are and how things are structured—and that’s usually all that’s needed." No need to memorize details; just know the architecture, and let AI handle the rest.


3. Daily Workflow: Throw → Watch → Tweak → Repeat

Peter’s daily grind is AI-agent-driven coding distilled, laced with his signature casual banter—authentic and relatable:

  • Multi-agent parallelism + queue: He usually juggles 3-8 projects at once, using Codex’s queue feature to toss new ideas into the processing pipeline. He self-mocks: “usually I’m the bottleneck”—AI often outpaces his own efficiency.
  • CLI-first approach: “Whatever you build, start with the model and a CLI first.” For any product, he builds a testable, closed-loop CLI tool first—letting AI debug and test itself—before wrapping it into a UI, browser extension, or mobile app. He gushes: “I’m quite in love with it. Runs on local, free or paid models.”
  • Minimal prompts + image-dragging fiend: His prompts now often shrink to 1-2 sentences paired with screenshots—“fix padding,” “redesign,” “make it feel better.” He owns the habit: “Yes, I’m that person that drags in a clipped image…”—tossing UI component snippets with single-line demands.
  • Linear evolution → direct commits to main: “I simply commit to main. Sometimes codex decides it’s too messy and auto-creates a worktree then merges changes back, but it’s rare.” No rollbacks, checkpoints, or heavy worktree use. Ugly code? Toss another prompt. Wrong direction? Pivot with a new prompt. He quips: “I’ve already mentioned my way of planning a feature. I cross-reference projects all the time…”
  • Context engineering tricks: He avoids restarting sessions (thanks to GPT-5.2’s stable context); critical knowledge lives in docs/*.md; a global AGENTS.MD file lists prompts like “always read docs/XXX first” for quick AI reference; cross-project refs use raw folder paths—simple and effective.
  • Refactoring & testing: Ugly AI-generated code? Instant refactoring prompt. While blogging or zoning out, he runs four large 2-hour refactoring tasks in the background. He reminds AI to “write tests after each feature/fix”—and shares his multi-screen setup: “I usually work on two Macs. My MacBook Pro on the big screen, and a Jump Desktop session to my Mac Studio on another screen.”
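The CLI-first idea is concrete enough to sketch. Here is a hypothetical Go example (the slugify task, names, and structure are invented for illustration, not taken from Peter's tools): the core logic lives in a pure, deterministic function an agent can call and verify in a closed loop, and `main` is the thinnest possible wrapper around it—UI can come later.

```go
package main

import (
	"fmt"
	"os"
	"strings"
	"unicode"
)

// Slugify is the closed loop an agent can exercise directly:
// deterministic input → deterministic output, no UI in the way.
// It lowercases the title and collapses non-alphanumeric runs into dashes.
func Slugify(title string) string {
	var b strings.Builder
	prevDash := true // true suppresses a leading dash
	for _, r := range strings.ToLower(title) {
		switch {
		case unicode.IsLetter(r) || unicode.IsDigit(r):
			b.WriteRune(r)
			prevDash = false
		case !prevDash:
			b.WriteRune('-')
			prevDash = true
		}
	}
	return strings.TrimRight(b.String(), "-")
}

// main is deliberately trivial: parse args, call the core, print the result.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: slug <title>")
		os.Exit(1)
	}
	fmt.Println(Slugify(strings.Join(os.Args[1:], " ")))
}
```

Because the tool is a pure function behind a one-line CLI, an agent can run `slug "Some Title"`, read stdout, and confirm its own fix worked—exactly the self-debugging loop the quote describes.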

4. Key Technical Decision Preferences

Peter’s technical choices come with his signature blend of pragmatism and light snark:

  • Languages: TypeScript for web projects (rich ecosystem), Go for CLI tools ("types are simple, AI writes it like lightning"), Swift for macOS/UI. He was surprised by his own Go adoption: "Go wasn’t something I gave even the slightest thought a few months ago, but eventually I played around and found agents are really great at writing it."
  • Dependencies: Prioritize popular, actively maintained, community-vetted libraries (AI has more "world knowledge" of them, easing adoption). "Picking the right dependency and framework to settle on is something I invest quite some time in."
  • Models: GPT-5.2-Codex High as the primary model, adhering to KISS (Keep It Simple, Stupid)—no mode-switching. "Again, KISS. There’s very little benefit to xhigh other than it being far slower."
  • Tools: CLI > everything; tmux/Ghostty for multi-pane workflows; custom tools (Poltergeist, Peekaboo, Oracle) to patch AI gaps. "Building Oracle was super fun—learned tons about browser automation, Windows, and finally dug into skills."

"Design the codebase to be AI-agent-friendly → Chat with it using minimal prompts + screenshots → Gauge blast radius before queueing for parallel inference → Sit back, watch the stream, curse occasionally → Evolve linearly on main → Cultivate intuition through massive interactions until software grows like weeds—or become a Claudoholic and rue the day."

At its core, this is the ultimate play of "maximize model inference time, minimize human cognitive load." But Peter never forgets to throw in a reality check (with self-deprecation): "Writing good software is still really hard. AI just moved the bar from ‘typing’ to ‘taste + architecture + direction’. And yes, I’m still the bottleneck—just a fancier one now."

After reading his blogs, I suddenly get it: The joy of coding in the AI era might be watching models sprint ahead while muttering, "No, not like that—like this!" followed by an impulsive "just one more prompt." That addictive yet efficient feeling? Only those who’ve truly coded with AI agents would understand.

Author: Lema