Reliance on Claude
I have been using Claude Sonnet 4.5 and Claude Opus 4.5 (and recently 4.6) daily at work for the past few months. As an avid Claude Code user for close to a year (I recall trying it out around March 2025), I have adjusted to Claude Code and Anthropic’s models.
Adjusted is a nice way of saying: “I got lazy”.
I started exploring outside Claude Code: OpenAI’s Codex CLI, Google’s Gemini CLI, OpenCode with OpenRouter, and a bunch of Chinese models that were cheap to try out at home.
Here’s an actual prompt I gave Claude Code midway through a session:
remove the copy
Claude Code (and its models) would infer that this prompt means more than just “remove the copy”. It knows enough to work on adjacent things: related files, functions, and so on. It felt “smart”.
I expected the same prompt to work with other tools or models. Codex CLI, before GPT-5, would do exactly that: remove the copy. Nothing more. Nothing less. It didn’t infer; it followed the instruction literally. GPT-5 has since improved this, but the other tools have not.
My recent experience with cheaper models through OpenClaw was even more frustrating. GLM-4.7 and other cheap models via OpenRouter couldn’t follow my instructions well. Often, they would proactively change things I had explicitly told them not to touch. Maybe it is an OpenClaw prompt-engineering issue rather than the models themselves, but my previous experience with Chinese models suggests it is a combination of both.
Claude Code built such a good experience through its system prompt engineering and its harness (especially the hooks). But that’s also what made me lazy. In other tools and with other models, my prompts won’t work well unless I give more context, even with the same AGENTS.md instructions across the board.
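To make “hooks” concrete, here is a minimal sketch of the kind of hook I mean, in the shape Claude Code’s settings file expects. The event name and matcher are Claude Code’s; the formatter command is a placeholder for whatever your project actually runs:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}

Every time the model edits or writes a file, the formatter runs without me asking. Small touches like this are part of why a two-word prompt can land safely in Claude Code and fall flat elsewhere.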
What worries me is how deeply I am tied to a single tool. Then again, GPT-5 showed me the gap is closing. Maybe lazy prompting will just become the norm. And if it does, I guess I was just ahead of the curve.