The hidden cost of AI "magic"

I’ve been working through another pattern I’m seeing more often with AI-assisted development and finally put it into a guide + field note pair.

Guide:

https://askgoodquestions.dev/guides/the-hidden-cost-of-magic

Field note:

https://askgoodquestions.dev/field-notes/when-working-code-isnt-enough

The idea is simple, but I think it’s important:

AI makes it easier than ever to get working code. But that doesn’t automatically mean you have something you can safely own.

The moment where things get interesting is not when the code first works. It’s when something needs to change.

That’s where lack of understanding shows up, and where the real cost starts to become visible.

I’d be interested to hear how others are handling this. Are you finding that AI-generated code is holding up well over time, or are you seeing friction when it comes to maintenance and changes?

Hi Charles,

I’ve written a personal app three times now using AI. The first time I wrote it in Flutter because I wanted to see what a good Flutter app looked like. I used the technique I mentioned in another thread: one agent wrote the original code, then another agent rewrote it better. That seems to have worked well.

I used the same technique to write it again in Swift on a MacBook, and a third time in Kotlin for an Android phone. I know nothing about any of these languages, but the AI agents DO know them, so using one agent to monitor the other seems to be working?
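For what it's worth, the control flow of that two-agent pattern can be sketched as below. The `ask_writer` and `ask_reviewer` functions are hypothetical stand-ins for whatever agent API or CLI is actually in use; they are stubbed here so the loop itself is runnable.

```python
# Sketch of the "one agent writes, a second agent improves" pattern.
# ask_writer and ask_reviewer are HYPOTHETICAL stand-ins for a real
# agent API; they are stubbed so the control flow can run as-is.

def ask_writer(task: str) -> str:
    """First agent: produce a first draft for the task."""
    return f"// draft implementation for: {task}"

def ask_reviewer(task: str, draft: str) -> str:
    """Second agent: critique and rewrite the first agent's draft."""
    return draft + "\n// reviewed: simplified and documented"

def two_agent_pass(task: str, rounds: int = 1) -> str:
    """Run one write step, then one or more review rounds."""
    code = ask_writer(task)
    for _ in range(rounds):
        code = ask_reviewer(task, code)
    return code

if __name__ == "__main__":
    print(two_agent_pass("sync contacts between phone and desktop"))
```

The point of the structure is simply that the reviewer always sees both the original task and the draft, so it can judge the draft against the intent rather than just polishing the surface.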

I’m going to try the same technique today with Clarion. I suspect it will be a different story?


Well, I have seen my share of “I see the problem!” lies :), but mostly it holds up pretty well. I do my best to understand it and agree with it before committing it, though.


In my case AI often channels Johnny Nash - “I Can See Clearly Now”
(https://www.youtube.com/watch?v=b0cAWgTPiwM)


I think that is exactly where the difference shows up.

With Flutter, Swift, or Kotlin, you can often tell the AI, “build me one of these”, because those are source-centric environments. The AI can work from the actual codebase, follow the structure, and see much more of the application’s big picture.

Clarion is different.

Unless you are building a hand-coded source program outside the IDE, a lot of the real structure is not sitting there in plain source form for the AI to reason over. It lives in the dictionary, templates, embeds, generated code, and procedure context. So right away, the AI is working with a much narrower window into what the application really is.

Then you add the second issue, which is that Clarion is a niche language. The models simply have not seen the same volume of examples, patterns, discussions, and historical code that they have for mainstream languages.

So I think AI in Clarion is usually less about “build me one of these” and more about “help me do this.”

That is where it starts to shine more: narrower scope, better boundaries, local examples, and clear constraints. In other words, not broad autonomous building, but guided assistance inside a well-defined area.

Since I started using AI with coding, I have used it in Clarion on tens of thousands of lines of code. Once I figured out that scope and context were everything (along with providing an example of what “good” looked like and giving it the ground rules for where we were working), it shined.
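As an illustration only (the helper name, field names, and sample text below are all invented), that "scope, context, example of good, ground rules" framing can be captured in a tiny prompt builder:

```python
# Illustrative only: a small prompt builder reflecting the idea that the AI
# performs best with narrow scope, explicit context, an example of "good",
# and clear ground rules. All names and sample content are invented.

def build_prompt(scope: str, context: str, good_example: str,
                 rules: list[str]) -> str:
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        f"Task (keep to this scope only): {scope}\n\n"
        f"Context:\n{context}\n\n"
        f"Here is an example of what good looks like:\n{good_example}\n\n"
        f"Ground rules:\n{rule_lines}\n"
    )

prompt = build_prompt(
    scope="Refactor the validation block in the UpdateOrder procedure",
    context="Clarion 11 app; the validation lives in an embed point, "
            "not in plain hand-coded source.",
    good_example="(paste a previously reviewed validation block here)",
    rules=[
        "Do not touch the dictionary or templates.",
        "Keep current functionality unchanged.",
        "Ask before making any change outside the stated scope.",
    ],
)
print(prompt)
```

The exact wording matters less than the habit: every request carries its own boundaries, so the AI is never guessing at the parts of the application it cannot see.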

So yes, I suspect your two-agent method will still be interesting in Clarion, but I would expect it to need tighter supervision and much better grounding than it does in those other languages.

Yes, I think that is exactly the right way to use it, especially with a language you may not use every day (or at all).

The AI does not get points just for sounding like it found the problem. The real question is whether, after looking at what it gave you, the explanation and the fix actually hold up.

That is where the human still has to be the programmer in the loop, and having another AI review the first one’s work can be a very sensible extra check.

I do the same to make sure, and it works fairly well.

One thing that helps me get good results is giving the AI small tasks; it performs better that way. When I know a change will impact my whole project, I make sure I have a git checkpoint or a backup, just in case things get messy. Then I tell the AI (like a boss :smile: ), “do X task, keeping the current functionality, without editing Y code,” or better yet, to ask if it is not sure about the changes and let me decide. I’ve seen the AI sometimes present something like a multiple-choice list of paths a change could take.
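A minimal sketch of that checkpoint-then-rollback habit, driving standard git commands from Python (the file name and commit message are illustrative; the demo runs in a throwaway repo):

```python
# Sketch of "checkpoint before letting the AI loose": commit everything
# first, so a messy AI edit can be discarded. Names are illustrative.
import os
import subprocess
import tempfile

def git(*args: str, cwd: str) -> None:
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

def checkpoint(repo: str, message: str = "checkpoint: before AI change") -> None:
    """Commit the whole working tree as a rollback point."""
    git("add", "-A", cwd=repo)
    git("commit", "-q", "-m", message, cwd=repo)

def rollback(repo: str, path: str) -> None:
    """Discard uncommitted changes to one file, back to the checkpoint."""
    git("checkout", "--", path, cwd=repo)

# Demonstration in a throwaway repo:
repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "demo@example.com", cwd=repo)
git("config", "user.name", "Demo", cwd=repo)

src = os.path.join(repo, "app.clw")
with open(src, "w") as f:
    f.write("working code\n")
checkpoint(repo)

with open(src, "w") as f:       # simulate a messy AI edit
    f.write("broken AI rewrite\n")
rollback(repo, "app.clw")       # things went messy: roll back

with open(src) as f:
    print(f.read())             # the original working code is back
```

The same effect comes from plain `git add -A && git commit` before the AI touches anything, and `git checkout -- <file>` (or `git reset --hard` for the whole tree) if the result is not worth keeping.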