We've all been there:
- Staring at a blank repository, facing the challenge of an empty piece of paper, or
- Needing to make a quick change to a feature but sinking most of the time into boilerplate overhead, or
- Wanting to build a quick prototype to test an idea, but dreading the entire setup that comes with it.
When Anthropic released Claude Code, their agentic coding tool that "reads your codebase, edits files, runs commands, and integrates with your development tools," it felt like a wish finally coming true: Rapid prototyping, fast development, and the promise of focusing on what's really needed, and only what's needed.
At ape factory, we not only like to try out new tools, we also like to avoid wasting time.
Let's be clear about our scope. We decided upfront that Claude Code will be used for experiments and early-stage prototypes only. We love AI, but we know our tools well enough, and we have the experience to build our production code ourselves. AI can help, but the Human-in-the-Loop stays where it should be: In the driver's seat.
Our idea was to give Claude Code a shot and let it do the grunt work: Generating boilerplate, setting up project structures, and helping us explore ideas quickly.
The good news first: Claude Code absolutely delivers. It allows prototyping at lightning speed. It is genuinely impressive how it sets up a basic project structure, walks you through the project ideas and even challenges your assumptions. We've seen success stories from developers who implemented complex systems in hours rather than days.
The tool comes with interesting features beyond basic code generation. There is the planner, a phase in which the tool edits nothing and instead does structured planning and exploration, scanning the entire project folder. Then there is CLAUDE.md, a global or project-level instruction file that gets injected into every session and functions as a form of Claude Code's permanent memory. It is used to describe the project, configure MCP settings, and define your code style. Another important file is settings.json, which lets you configure the actions Claude Code can take without asking for permission (e.g. Git commands, Python execution or web search). Unless you allow full auto-editing, the tool will ask for confirmation before making changes. It will show changes as diffs and wait for you to review and approve.
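To make that concrete, here is a minimal sketch of what such a settings.json could look like, pre-approving a few low-risk actions while blocking a destructive one. The specific permission strings are illustrative assumptions, not a complete reference for the format:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(git diff:*)",
      "WebSearch"
    ],
    "deny": [
      "Bash(rm:*)"
    ]
  }
}
```

With a config along these lines, read-only Git commands and web searches run without a prompt, while anything not listed (and anything denied) still requires your explicit approval.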
And that's where temptation kicks in.
It is very tempting to just click "go!" on every suggestion. The suggestions are rarely bad. In fact, Claude Code's ideas are usually at least good enough, if not spot-on. And isn't the main idea of AI to have your smart assistant do all the work while you lean back in your office chair? The line between staying in control and giving in to full vibe-coding is thin. We strongly recommend that any company using Claude Code defines clear rules before powering up the tool.
The reason why we resisted the temptation, as hard as it was, were the undeniable, yet predictable, friction points:
The key insight: You need to know your domain AND you need to know AI.
If you do not understand the domain, AI will happily lead you down a wrong path. If you do not know how to instruct AI, it will do what it wants, which is rarely what you need.
And if you do not know the domain and do not know AI? Then remember that AI is a probability machine. You are playing roulette with your code. You might get lucky ... or not.
We learnt a lot and we will definitely keep experimenting.
We only scratched the surface of using CLAUDE.md and the other instruction files to set constraints. These instructions help keep the agent on track, but they are not a complete solution to context drift.
More promising is our exploration of a toolkit called "get shit done" (GSD), a structured prompting system designed to address context degradation. It uses a workflow that we in software engineering know all too well: Discussion - Planning - Execution - Verification. Think of it as sprint cycles applied to AI.
GSD follows that iteration cycle while giving the coding agents precise instructions on what to do at each step. It's like treating AI as a supervised developer, not as a pair programmer.
Our practical takeaway: Start small, define your goals before you start, and prepare to take the steering wheel whenever needed (at least for a while).
AI assistants are like junior developers with a lot of energy. But flying blind is not an option. Given the current pace of AI development, that might change. For now, though, you need someone with experience to be in charge.
---