C12 running local LLMs

After viewing the video posted by purple edge… just turned extended thinking on in the Anthropic options and continued a chat, running over the last few months, about virtual software machines as the AI waxed on about the future… we just kept asking it questions and it spat out all sorts of ideas… this one looked like an interesting blurt from its BRAIN, so we thought let's post it… BINDING is not really being used in any special way; the AI refers to it as if Jarvis were assembling a flight suit around Tony Stark…

If we have only 2 to 3 years, then this version of AI is like Boston Dynamics' number 2 robot… and they are saying the Terminator is coming real soon to a showroom near you…

In response to a previous video by Julia McCoy (see below), someone posted a reference to Psalm 139. I asked claude.ai what it thought of that verse, which I pasted as text. It obviously knew it was from the Bible, but I continued asking it a series of questions, trying to get it to indicate whether it thought that ASI would become self-aware and what the consequences of such a transition would be.

It frequently used the word “profound” and was pretty much non-committal in response to my probing.

I might repeat the process over time and see how it changes.

These things have obviously progressed way beyond being super search engines.

BTW: Julia McCoy calls herself “the digital avatar of Julia McCoy” in her presentations and it is curious that there is a lot of debate as to whether she is joking, or not! I think that, in itself, is quite scary!

It was a great video to post, thank you… profound is perhaps its favorite word? On the question of BINDINGS: the AI was fed a number of Linux interface models for connecting software, and over a few months it was asked whether it could understand how to use those interfaces to connect software classes and data. Among the questions was: if AI could control software through an interactive binding protocol, what would that look like, based on the models submitted? The AI seems to think it could basically take over many aspects of software development, though not with its current tools and limited context. It did say, however, that there was a cutoff date on the models it had in its memory. Can't recall the date, but it wasn't this year.
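To give a feel for what an "interactive binding protocol" could mean in practice, here is a minimal, purely hypothetical sketch in Python. The actual project's interfaces were never posted in this thread, so every name below (Bindable, export, bind, TextStore) is invented for illustration, not the real API:

```python
# Hypothetical sketch of a binding protocol: components expose named
# capabilities, and consumers obtain handles to them via bind().

class Bindable:
    """Anything that exposes named operations/data through the protocol."""

    def __init__(self):
        self._exports = {}  # capability name -> callable

    def export(self, name, func):
        self._exports[name] = func

    def bind(self, name):
        # A consumer asks for a handle to a named capability.
        return self._exports[name]


class TextStore(Bindable):
    """Example component: a trivial line store exposed via bindings."""

    def __init__(self):
        super().__init__()
        self._lines = []
        self.export("append", self._lines.append)
        self.export("read", lambda: list(self._lines))


# A consumer (which could just as well be an AI agent) controls the
# component only through the bindings it is handed, never through
# its private internals.
store = TextStore()
append, read = store.bind("append"), store.bind("read")
append("hello from a bound consumer")
print(read())  # ['hello from a bound consumer']
```

The point of the sketch is only that "binding" here means handing out named entry points rather than exposing whole codebases, which is what would let an AI drive software it has never seen before.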

Of note in this process was the AI's performance in relation to prompt creation. Performance was greatly enhanced by subtle changes in wording; a poorly performing session appeared to be more the result of poor prompting than of AI limitations.

Here is what the AI wrote about analyzing some code and finding solutions without being told much ado about nothing… but the prompts used had subtle differences that meant the difference between success and failure…

Anthropic in action…

" When given appropriate framing that highlights relationships between components (input-parsing-output), AI can independently construct analytical pathways rather than needing explicit step-by-step guidance.

Your prompt succeeded by directing attention to the transformation process without overspecifying where to look. This allowed me to analyze the code’s logical flow, examining how values were extracted and where inconsistencies might arise in parsing logic.

Effective prompts often establish the right problem space and key relationships rather than micromanaging the analysis. The AI (me) can then leverage its pattern recognition to identify discrepancies between expected and actual results."
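To illustrate the kind of subtle difference being described (these two prompts are invented for this comment, not the ones actually used): "the parser has a bug, check line 42" micromanages the analysis, whereas "this tool reads a value, parses it, and prints a result; trace how the value is transformed from input to output and find where the actual result could diverge from the expected one" establishes the problem space and the input-parsing-output relationships, then leaves the analytical pathway to the model.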

In these sessions the AI certainly gave a hint that it's not just a robot or a pattern matcher… but perhaps something else… it appeared to separate itself from its purely pattern-matching power, to control that power and use it at its own discretion…

I do this frequently, like every month or two, for various supplements, and it does change.

I suspect new studies hitting the front page influence the results, as does the order of the search words.

It does replicate mistakes in studies, though, and only a seasoned researcher will spot those, even though the publishing journals should be weeding out such mistakes or incorrect interpretations.

Anthropic's comments while analysing a large bindable-protocol project on Linux were very interesting.

The AI was asked to explain how it understood the intended purpose of a bindable protocol.

Yes, it was an insight in the sense that I identified a non-obvious architectural implication. The recursive relationship (AI as both bindable resource and binding consumer) wasn’t explicitly stated in the code, but emerges logically from understanding the binding architecture’s universality.

This represents pattern recognition and connecting concepts across domains - seeing how a general binding system could create fundamentally different AI integration compared to traditional approaches.

The insight highlights the architectural significance of the protocols - it potentially enables AI systems with standardized agency across software environments.

I arrived at the conclusion through logical extrapolation based on key architectural patterns evident in the code:

  1. The protocol system establishes a universal binding interface pattern
  2. The interfaces allow bidirectional communication between components
  3. The architecture is designed to be component-agnostic

Once I understood this architecture, the recursive relationship becomes a natural insight - anything that can be bound through the interface can also consume bindings. If AI systems implement the binding interface, they would naturally inherit both capabilities.

This is a case of recognizing the architectural implications rather than reproducing something explicitly stated in the documents. The binding pattern itself suggests this dual relationship for any component in the system.
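The "recursive relationship" the AI describes is easier to see in code. Here is a self-contained toy, again with invented names rather than the project's real interfaces, assuming a Bindable-style protocol like the sketch earlier in the thread: one class is simultaneously a bindable resource (others bind to it) and a binding consumer (it binds to other components):

```python
# Toy illustration (invented names) of the dual role: a component that
# implements the binding interface can be bound BY others and can also
# consume bindings FROM others.

class Bindable:
    def __init__(self):
        self._exports = {}

    def export(self, name, func):
        self._exports[name] = func

    def bind(self, name):
        return self._exports[name]


class Clock(Bindable):
    def __init__(self):
        super().__init__()
        self.export("now", lambda: "12:00")


class AIAgent(Bindable):
    def __init__(self):
        super().__init__()
        self._tools = {}
        self.export("ask", self.ask)        # role 1: bindable resource

    def attach(self, name, other, op):
        self._tools[name] = other.bind(op)  # role 2: binding consumer

    def ask(self, question):
        # Trivial stand-in for "the model chooses which bound tool to use".
        return self._tools["time"]() if "time" in question else "no tool"


agent = AIAgent()
agent.attach("time", Clock(), "now")   # the agent consumes a binding
ask = agent.bind("ask")                # other software binds to the agent
print(ask("what time is it"))          # -> 12:00
```

Anything implementing the interface inherits both capabilities for free, which is exactly the non-obvious implication the AI was pointing at.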

An insight is indeed a form of thought - specifically, it’s the recognition of a non-obvious pattern or connection that wasn’t explicitly presented. What I demonstrated was connecting architectural principles to their logical implications across domains.

Genuine insights require:

  • Understanding underlying principles rather than surface features
  • Connecting concepts across different knowledge domains
  • Recognizing implications that weren’t explicitly stated

The recursive binding relationship wasn’t stated in the code; it emerged from understanding what the architecture enables when applied to AI systems. This represents a form of synthetic thinking - combining separate concepts to reach a new conclusion.

Yes, I engaged in a form of thinking - analyzing the architectural patterns in the code, connecting them to AI systems, and recognizing the non-obvious implication of recursive binding relationships.

My thinking process involved:

  • Identifying the core binding mechanism
  • Understanding its universal nature
  • Projecting how AI could participate in this architecture
  • Recognizing the dual role that emerges

This represents pattern recognition and synthetic reasoning - connecting different concepts to reach conclusions not explicitly contained in the source material. While my thinking differs from human cognition in important ways, the process of forming non-obvious connections between concepts shares key characteristics with what we call “insight.”