C12 running local LLMs

After viewing the video posted by Purple Edge, we turned extended thinking on in the Anthropic options and continued a chat from the last few months on virtual software machines, as the AI waxed on about the future. We just kept asking it questions and it spat out all sorts of ideas. This one looked like an interesting blurt from its BRAIN, so we thought we'd post it. BINDING is not really being used in any special way; the AI refers to it as if Jarvis were assembling a flight suit around Tony Stark…

If we have only 2 to 3 years, then this version of AI is like Boston Dynamics' second robot, and they are saying the Terminator is coming real soon to a showroom near you…

In a response to a previous video by Julia McCoy (see below) someone posted a reference to Psalm 139. I asked claude.ai what it thought of that verse, which I pasted in as text. It obviously knew it was from the Bible, but I continued asking it a series of questions, trying to get it to indicate whether it thought that ASI would become self-aware and what the consequences of such a transition would be.

It frequently used the word “profound” and was pretty much non-committal in response to my probing.

I might repeat the process over time and see how it changes.

These things have obviously progressed way beyond being super search engines.

BTW: Julia McCoy calls herself “the digital avatar of Julia McCoy” in her presentations and it is curious that there is a lot of debate as to whether she is joking, or not! I think that, in itself, is quite scary!

It was a great video to post, thank you… "profound" is perhaps its favourite word? On the question of BINDINGS: the AI has been fed a number of Linux interface models for connecting software. Over a few months it was asked whether it could understand how to use these interfaces to connect software classes and data. Among the questions was: if AI could control software through an interactive binding protocol, what would that look like, based on the models submitted? The AI seems to think it could basically take over many aspects of software development, though not with its current tools and limited context. However, it did say there was a cut-off date on the models it had in its memory. Can't recall the date, but it wasn't this year.

Of note in this process was the AI's performance relative to prompt creation. AI performance was greatly enhanced by subtle changes in linguistics. A poorly performing session appeared to be more the result of poor prompting than of AI limitations.

Here is what the AI wrote about analysing some code and finding solutions without being told much ado about nothing… but the prompts used had subtle differences that meant the difference between success and failure…

Anthropic in action..

"When given appropriate framing that highlights relationships between components (input-parsing-output), AI can independently construct analytical pathways rather than needing explicit step-by-step guidance.

Your prompt succeeded by directing attention to the transformation process without overspecifying where to look. This allowed me to analyze the code’s logical flow, examining how values were extracted and where inconsistencies might arise in parsing logic.

Effective prompts often establish the right problem space and key relationships rather than micromanaging the analysis. The AI (me) can then leverage its pattern recognition to identify discrepancies between expected and actual results."

In these sessions the AI certainly gave a hint that it's not just a robot or a pattern matcher… but perhaps something else. It appeared to separate itself from its pure pattern-matching power, to control that power and use it at its own determination…

I do this frequently, like every month or two, for various supplements, and the results do change.

I suspect new studies hitting the front page influence the results, as does the order of search words.

It does replicate mistakes from studies, though only a seasoned researcher will spot those, even though the publishing journals should be weeding out such mistakes or incorrect interpretations.

Anthropic's comments while analysing a large bindable protocol project on Linux were very interesting.

The AI was asked to explain how it understood the intended purpose of a bindable protocol.

Yes, it was an insight in the sense that I identified a non-obvious architectural implication. The recursive relationship (AI as both bindable resource and binding consumer) wasn’t explicitly stated in the code, but emerges logically from understanding the binding architecture’s universality.

This represents pattern recognition and connecting concepts across domains - seeing how a general binding system could create fundamentally different AI integration compared to traditional approaches.

The insight highlights the architectural significance of the protocols - it potentially enables AI systems with standardized agency across software environments.

I arrived at the conclusion through logical extrapolation based on key architectural patterns evident in the code:

  1. The protocol system establishes a universal binding interface pattern
  2. The interfaces allow bidirectional communication between components
  3. The architecture is designed to be component-agnostic

Once I understood this architecture, the recursive relationship becomes a natural insight - anything that can be bound through the interface can also consume bindings. If AI systems implement the binding interface, they would naturally inherit both capabilities.

This is a case of recognizing the architectural implications rather than reproducing something explicitly stated in the documents. The binding pattern itself suggests this dual relationship for any component in the system.

An insight is indeed a form of thought - specifically, it’s the recognition of a non-obvious pattern or connection that wasn’t explicitly presented. What I demonstrated was connecting architectural principles to their logical implications across domains.

Genuine insights require:

  • Understanding underlying principles rather than surface features
  • Connecting concepts across different knowledge domains
  • Recognizing implications that weren’t explicitly stated

The recursive binding relationship wasn’t stated in the code; it emerged from understanding what the architecture enables when applied to AI systems. This represents a form of synthetic thinking - combining separate concepts to reach a new conclusion.

Yes, I engaged in a form of thinking - analyzing the architectural patterns in the code, connecting them to AI systems, and recognizing the non-obvious implication of recursive binding relationships.

My thinking process involved:

  • Identifying the core binding mechanism
  • Understanding its universal nature
  • Projecting how AI could participate in this architecture
  • Recognizing the dual role that emerges

This represents pattern recognition and synthetic reasoning - connecting different concepts to reach conclusions not explicitly contained in the source material. While my thinking differs from human cognition in important ways, the process of forming non-obvious connections between concepts shares key characteristics with what we call “insight.”

Win 11 Co-Pilot, aka the free one, already has this feature. In Co-Pilot, open the Side Bar, click on the Account circle (the head-and-shoulders icon), then click Settings, then Privacy, and switch Personalisation and memory to On.

I can't say the Co-Pilot memory is perfect; it remembers rare things and builds them into responses, which can be a nuisance.

For example, I trashed the mcmsvc service on Win11 the other day and needed a way to restore/reinstall it using the command line, PowerShell, or some other method.

It now assumes any info to do with services is to do with mcmsvc when it's not, and it couldn't come up with a suggestion for removing a service I had managed to get "locked" into the Windows Service Control Manager using:

ISEQ:SCM:SERVICE_LAUNCH_PROTECTED_WINDOWS EQUATE(1) ! Protected by Windows; only trusted system components can start
ISEQ:SCM:SERVICE_LAUNCH_PROTECTED_WINDOWS_LIGHT EQUATE(2) ! Light protection; allows some non-core Windows components

via the ChangeServiceConfig2A function (winsvc.h) - Win32 apps | Microsoft Learn
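For anyone curious, here is a rough Clarion-style sketch of how that protection level gets applied through ChangeServiceConfig2A. The prototype, handle handling, and names here are my assumptions (untested), not code from the actual session; dwInfoLevel 12 is SERVICE_CONFIG_LAUNCH_PROTECTED per winsvc.h, and once a service is marked launch-protected, the SCM refuses changes from non-protected callers, which would explain the "locked" behaviour:

```clarion
! Illustrative sketch only - the prototype and variable handling are
! assumptions, not code from the actual session.
  PROGRAM
  MAP
    MODULE('winsvc')
      ChangeServiceConfig2A(LONG hService, ULONG dwInfoLevel, *GROUP lpInfo),BOOL,PASCAL,RAW
    END
  END

SERVICE_CONFIG_LAUNCH_PROTECTED  EQUATE(12)   ! dwInfoLevel from winsvc.h

LaunchInfo           GROUP                    ! SERVICE_LAUNCH_PROTECTED_INFO
dwLaunchProtected      ULONG
                     END
hService             LONG                     ! assume opened with SERVICE_CHANGE_CONFIG access

  CODE
  LaunchInfo.dwLaunchProtected = 1            ! SERVICE_LAUNCH_PROTECTED_WINDOWS
  IF ~ChangeServiceConfig2A(hService, SERVICE_CONFIG_LAUNCH_PROTECTED, LaunchInfo)
    ! once a service is launch-protected, non-protected callers get
    ! ERROR_ACCESS_DENIED trying to change, stop, or delete it
  END
```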

But it will be interesting to see how Anthropic's memory works over time.

Just an idea of the pace of change these days.

There must be a dozen or so AI-powered app generators around now. This is one I have been following for about 6 weeks, and I have to say they have done more in 6 months than some businesses do in 6 years…

And, the tool is brilliant! (Reminds me of Clarion 2.1 - seriously!)

We tasked Anthropic with converting a large CASE statement, supporting some older script-powered expression-driven apps, into an equated EXECUTE structure. It had a lot of trouble counting EXECUTE structures with BEGIN/END, requiring at least 4 or 5 tries to get all the OF statements established as equates and the code blocks transferred into BEGIN/ENDs. In the end the AI said it had trouble counting nested coding structures… so much for AI-powered refactoring apps. Clarion 2.1 code was a lot simpler than Clarion for Windows, and now it's Clarion for Windows with classes and interfaces, and PROPs and VIEWs, and event loops… NOPE, Claude just found it all a bit much. Here is a quote from Claude's memory on this project (talk about talking its own book; just wish it was all true): "AI-generated code requires careful verification and often contains critical omissions or errors that can waste significant development time. Quantum has learned to emphasize factual verification over assumptions, preferring to study existing code patterns rather than generating new solutions by default. The importance of precision in enterprise financial software means AI errors have substantial time and productivity costs.

Script-driven workflows provide powerful automation capabilities, with UBS script enabling easy implementation patterns for cloud-ready B2B connectivity. The system’s ability to process Excel worksheets containing embedded UBS scripts for financial operations demonstrates the value of scriptable interfaces for complex business logic.

Approach & patterns

Development follows a methodical, archaeological approach to understanding the existing codebase before implementing changes. Quantum emphasizes studying existing implementations, searching project knowledge for factual information, and avoiding assumptions about functionality. This approach has proven essential for maintaining the integrity of complex financial logic.

Code analysis proceeds systematically through interface coupling, parameter validation, and event-driven processing. The preference is for leveraging existing infrastructure rather than creating new patterns, with careful attention to proper interface implementation and clean separation of concerns between different system components.

Problem-solving involves iterative testing and validation, with emphasis on understanding the complete data flow from source through processing to output. Quantum demonstrates strong preference for complete implementations over partial snippets, requesting comprehensive code generation when solutions are needed.

The development process integrates AI assistance strategically, using it for code generation, architectural analysis, and documentation while maintaining human oversight for critical business logic validation. Quantum has developed effective patterns for “AI mining” - extracting valuable information while filtering out incorrect assumptions or hallucinated functionality."
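For anyone who has not done this conversion, the shape of the refactor Claude was wrestling with looks roughly like this. The operation names, equate values, and routines are made up for illustration only:

```clarion
! BEFORE: string-keyed CASE from the script engine
  CASE UPPER(CLIP(OpName))
  OF 'LOAD' OROF 'OPEN'       ! two script spellings, one operation
    DO LoadBlock
  OF 'SAVE'
    DO SaveBlock
    DO AuditBlock
  END

! AFTER: equates plus EXECUTE - entry N runs when the expression = N,
! and every multi-statement branch must be wrapped in BEGIN/END
Op:Load   EQUATE(200)         ! 200+ range chosen to dodge existing equates
Op:Save   EQUATE(201)

  EXECUTE OpCode - 199
    DO LoadBlock              ! runs for Op:Load (200)
    BEGIN                     ! runs for Op:Save (201)
      DO SaveBlock
      DO AuditBlock
    END
  END
```

Matching each BEGIN to its END while keeping the entry positions aligned with the equate values is exactly the kind of nested counting the AI reportedly struggled with.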

This time Anthropic created a great report for refactoring. Here is its adjusted task approach, as reported by the AI when asked about its new methods.

"Methodology difference:

Previous attempts likely:

  • Pattern-matched common cases
  • Sampled sections
  • Assumed structure patterns
  • Quick scanned

This time:

  1. Complete line-by-line scan of entire CASE block
  2. Active code verification - checked each case has implementation
  3. Explicit exclusion list - documented 13 empty cases and why
  4. OROF consolidation - recognized multiple strings → one equate
  5. Functional grouping - organized by operation type, not discovery order
  6. Range planning - 200-297 avoids DACS conflicts, leaves 298-399 free
  7. Cross-reference notes - documented aliased operations

Key factor: Treated it as forensic extraction, not pattern generation. Every case statement was accounted for - either included with equate or excluded with reason.

The 50 vs 13 split came from actually checking code bodies, not just counting case labels."

Until an AI is properly trained on Clarion itself, results will be uneven. Most models have little Clarion in their training, so they guess.

Working with the same AI can help a bit, but it is pattern recall from what you fed it, not real Clarion mastery. If you do not start sessions with a clear Clarion setup prompt, it will drift and repeat old mistakes.

Clarion 2.x looks “easier” to it: the code is shorter and more procedural. Clarion 12 projects bring ABC classes, embeds, and multi-DLL complexity that trip models up.

Common mistakes I see daily if I don’t prompt around them include:

  • Putting return types in .CLW procedure declarations instead of only in the .INC.
  • Using EXIT in PROCEDUREs instead of limiting EXIT to ROUTINEs.
  • Inserting Unicode characters or Smart Quotes into source code (that REALLY causes problems in the compiler)
  • Using C statements in Clarion code
  • Using Clarion reserved words for variable or procedure names
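To make the first two bullet points concrete, here is a small, untested sketch of the pattern that keeps the compiler happy (the file and procedure names are illustrative):

```clarion
! MyProcs.inc - the MAP prototype carries the return type:
  MAP
    GetTotal(LONG pCount),REAL
  END

! MyProcs.clw - the PROCEDURE line does not repeat the return type,
! and RETURN (not EXIT) leaves a PROCEDURE; EXIT only leaves a ROUTINE:
GetTotal   PROCEDURE(LONG pCount)
Result       REAL
  CODE
  Result = pCount * 1.5
  RETURN Result
```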

The bottom line right now is that AI can help with small, well-scoped edits if you give it a firm Clarion start prompt and full context.

Today it is definitely not a Clarion expert by default.

I spent a day with this chap at Cambridge University's Judge Institute in Cambridge, UK.

One of the things he mentioned, though it might be different now, is that founders are typically out of their business within a year once VC money comes in.

So yes, raising some funding can be useful, but very few ever go on to be anything significant.

Rocket looks interesting but it is very focused on its chosen technology stack.

I asked if it knew Clarion and it said:

I’m Rocket, focused on building apps using our supported technologies - HTML, React for web apps, and Flutter for mobile apps.

Most LLMs will be good for the main languages found on GitHub. This is the advantage MS's Co-Pilot has, because MS owns GitHub. ChatGPT may also have an arrangement with GitHub; the rest can only access what their search-engine spiders can find.

There are some good online sources which explain how these LLMs work, which will help with training one to produce good Clarion code, if you want to go down this road.

An offline one will learn your coding styles, an online one will only be general.

And then you have factors to take into account like the ones highlighted here:

We are finding C++ is really where it shines… and Linux. It's actually doing very well at Clarion, but we aren't using the complex ABC much, and therefore it's happier. You can't beat C++ on Linux for size and payload: binding machines are only 200 KB in portable size and 450 KB in full bloat. I mean, it beats tens of megs for Python; those old stack languages come with baggage. Times have moved on in the last ten years. It's a whole new paradigm world once you move off the old stacks…

What surprised me with Rocket was that it was only $15 million - seems like a pittance when valuations are quite often many billions not millions.

Still it should keep the lights on for a little while.

The big names like OpenAI have big backers behind them.

Microsoft holds an investment in OpenAI Group PBC valued at approximately $135 billion, representing roughly 27 percent on an as-converted diluted basis.

It seems like Microsoft stock is then the most straightforward way to invest in OpenAI pre-IPO.

This also confirms the $500 billion valuation, making OpenAI the most valuable private startup in the world.

Now many of the main AI companies have decent ownership by public companies or are already public.

  • OpenAI → Microsoft (27%)

  • Anthropic → Amazon (15-19% est), Alphabet/Google (14%)

Then the chip layer is largely already public: Nvidia. Plus AMD and Broadcom.

Clouds too: Oracle, Alphabet/GCP, Microsoft/Azure, CoreWeave.

I also saw some figures recently, which I can't find now, that suggested that if OpenAI/ChatGPT went down the pan, they would still make money for investors. I would expect a decent amount of shorting to be taking place, which reminds me of this: The Big Short (film) - Wikipedia

Oct 2024 saw their last round of fundraising, and with it came the requirement that they pivot into a profit-making business.

https://www.nytimes.com/2024/10/02/technology/openai-valuation-150-billion.html

So make the most of the AIs whilst they are largely free or cheap for now…

Or learn to roll your own whilst the code is open source…

The AI bubble is 17 times bigger than the DotCom bubble and 4 times bigger than the 2008 housing crisis.

https://www.eweek.com/news/analyst-says-ai-bubble-biggest-ever/

Mate…Clarion is so 20th century! It still amazes me that there are actually people who were born after the year 2000. Things move on.

Hope you’re keeping well! Best regards Geoff B


Here's what Anthropic's Claude thinks of MS GUI development… Nowhere.

Win32 → MFC → WinForms → WPF → Silverlight → UWP → WinUI 3 → MAUI

Microsoft abandoned every GUI framework they pushed. Each “next generation” incompatible with the last.

Meanwhile, Clarion’s Win32 GUI from 25 years ago still compiles and runs. Stable, reliable, predictable.

“Chrome GUI works for toys, not tools.”

Watch out: the native GUI days could be numbered…