Using AI Like a Development Team Instead of One Long Chat

I’ve had several people contact me and ask for more details on how I work with AI tools during development.

This is not intended to be a full, step-by-step walkthrough of my workflow. I get into that in much more detail in the book I am working on, Real Programmers Use AI, where I break down how to build and run a practical AI-assisted development system day by day.

But I thought it might be useful to give a high-level view, because I think a lot of people are still thinking of AI as one long-running chatbot conversation.

That’s not how I use it.

The way I use these tools is probably closer to running a small development team.

For any serious project, I set up a project folder so the related conversations share the same project documents and some common project context. But even inside that project, I do not let every conversation become a general-purpose wandering agent.

Usually, only one session is the actual code builder.

That is the conversation responsible for touching the source, assembling the repo, making the real code changes, and keeping the implementation coherent.

The other sessions are more like specialists.

One tab might be looking at a feature idea. Another might be thinking through a bug. Another might be working on documentation. Another might be looking at marketing, positioning, or how a developer would actually use the feature.

Those sessions are related to the same project, but each one is tightly focused on the thing in front of it.

When one of those side sessions gets the idea worked out, I usually have it prepare a Markdown document. I download that, review it, and when the timing is right, I feed it into the main builder session.
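As an illustration, a handoff document from a specialist session might look something like this. The feature, headings, and details here are invented for the example, not a fixed format:

```markdown
# Feature: Export to CSV

## Summary
One paragraph on what the feature does and why it matters.

## Decisions Made
- Output encoding is UTF-8, for spreadsheet compatibility.
- Delimiter is configurable, defaulting to comma.

## Open Questions
- Should large exports stream to disk or build in memory?

## Notes for the Builder
- Touch only the export module; do not modify the report classes.
- Follow the existing error-handling pattern in the project.
```

The point is that the builder session receives a reviewed, self-contained brief, not a raw transcript of the exploratory conversation.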

That gives me much tighter control over the process.

I only send an AI into exploratory mode when I actually need exploration, such as reviewing a codebase, investigating a bug, or evaluating an implementation path.

Most of the time, there are no free-range agents wandering through the code trying to improve everything they see.

When I give the builder session a coding task, it gets a tightly scoped prompt. The prompt tells it what to change, what not to change, what files matter, what coding rules apply, and that it should make surgical changes instead of refactoring the world.
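A scoped prompt along those lines might look something like this. The file names, procedure, and task are invented for illustration:

```markdown
Task: Add retry-with-backoff to the HTTP fetch in fetch.clw.

Change only:
- fetch.clw (the FetchURL procedure)

Do not change:
- Any other file, procedure, or generated code.

Rules:
- Follow the existing naming conventions in this project.
- Make the smallest change that accomplishes the task.
- Do not refactor surrounding code, even if you see improvements.

Output: the modified procedure only, plus a short note on what changed.
```

The "do not change" and "smallest change" lines do most of the work; they are what keep the session surgical.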

That is one of the big differences between “using AI” and “letting AI drive”.

This matters even more now that tools like Claude, GitHub Copilot, ChatGPT, Gemini, and others are moving toward more powerful agentic workflows, larger contexts, and in many cases more metered usage.

Every unfocused agent session has a cost.

Every oversized context has a cost.

Every broad “go look at the repo and improve it” request has a cost.

The cost might be money. It might be time. It might be bad code. It might be losing control of the direction of the project.

My goal is to get maximum useful work out of the AI without paying maximum price for wandering, retrying, over-reading, or over-generating.

For Clarion work especially, this matters. You do not want an AI randomly refactoring code that already works. You do not want it inventing syntax. You do not want it making broad assumptions about your templates, classes, embeds, or generated code.

You want the AI focused.

You want the AI constrained.

You want the AI helping you do the thing you asked it to do, not redesigning your whole application because it saw a pattern it thought it could improve.

That is where disciplined prompting, scoped conversations, project documents, and a builder/specialist workflow make a real difference.

The AI assistant should be a power tool, not a free-range employee with a company credit card.