C12 running local LLMs

This is what AI dreamed up today…

https://claude.ai/public/artifacts/94937589-4a21-4480-8eac-bac28d22ed64

.NET was introduced on the premise that you could write memory-safe code thanks to its garbage collector (GC), but in practice this hasn't always been the case.

So now MS appears to be pivoting to writing parts of the OS in Rust, for the purported reason of memory safety. However, you can still write memory-unsafe code in Rust (via `unsafe` blocks).

It's a bit like Google: both introduce new things (MS = frameworks, Google = internet services) and then abandon them over time. Facebook probably does it too, just less visibly.

It's like they assemble a team, develop something, the team gets traction, perhaps recovers the initial investment costs, and then moves on to other things, and the project dies.

In a way, they are little more than a type of “internal startup”: instead of seeking funding from VC funds like Y Combinator, by virtue of already being inside one of the tech leaders they just need to seek funding for their internal project, and push ahead if green-lit.

Yeah, there’s a constant thread that Flutter is doomed. AngularJS kept getting changed so often it drove me nuts and I abandoned it.

There’s a lot to be said for small development teams. I always told my clients that the more developers they added to a project, the greater the chance it would fail. I think some of these systems stick around despite what the megaliths do to sabotage them.

But then again, I’m getting old, and young people, especially these days, adapt to change much, much better than I ever did. Progress is good, when it’s going in the right direction.

ha ha but so are many of us!

all good here thanks - being retired, Clarion is just a hobby for me so I’ve no real need to keep up with the latest/greatest technologies which come and go at a great rate of knots.

I keep trying various AIs to see how they go at writing Clarion code and they still have a long way to go. They are improving, but it is hard to predict when they will get to a reasonable level of competency - maybe a couple more years???

Anyway Geoff B, hope you and yours are keeping well too.

Which ones have you tried?

I’m interested in the ones that will run offline, but I’d like to point them at selected GitHub repos for training. I don’t know of any that will go online and spider selected websites or parts of a website, though. One workaround might be to pull the GitHub repos onto my computer and include them for training that way.

This is what Gemini says about offline LLMs.

Yes, there are many LLMs that can be installed and run for offline use, such as those accessed through applications like LM Studio, GPT4ALL, and Jan.ai. These tools allow you to download and run models locally on your computer, ensuring privacy and functionality without an internet connection. You can also use other platforms like Ollama or set up a model using tools like Oobabooga’s text-generation-webui.

Applications for running LLMs locally

  • LM Studio: A user-friendly option that is good for beginners and supports a wide range of models. It allows for extensive configuration, including offloading to multiple GPUs and CPUs.
  • GPT4ALL: A privacy-first tool that runs well on various chips, offers a large selection of open-source models, and can process local documents offline.
  • Jan.ai: A popular, open-source alternative that is known for its simple and user-friendly interface for running LLMs offline.
  • Ollama: While sometimes seen as more engineer-focused, it’s a popular tool for managing and running local LLMs.
  • Oobabooga: A text generation web UI that can be used to run LLMs locally by downloading models directly through its interface.
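As a concrete example of how one of these tools is driven programmatically: Ollama exposes a documented REST API on its default local port 11434. Below is a minimal sketch against that `/api/generate` endpoint; the model name is just an example and must have been pulled first (`ollama pull codellama`).

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model, prompt):
    """Build the JSON payload for Ollama's /api/generate endpoint.
    stream=False asks for one complete JSON reply instead of a token stream."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model, prompt):
    """Send a prompt to a locally running Ollama server and return its reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Everything stays on your machine: the "server" here is just the Ollama process running locally, so no data leaves your computer.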

Key considerations

  • Model choice: There are many models available, ranging in size and capability. Models are often released on platforms like Hugging Face. You can choose a model based on your hardware and the tasks you want to perform.
  • Hardware requirements: Offline LLMs require local resources. Ensure your computer has sufficient CPU, GPU, RAM, and storage to run the models you want to use.
  • Privacy: Running LLMs locally provides a high degree of privacy, as your data does not need to be sent to a remote server.

The ones I have tried are Claude, ChatGPT, Gemini and Grok. I have not used any local LLMs or done any training - just used the browser interfaces.


Model Adoption Is Fragmenting

Over the first week of October 2025, Sonnet 4.5’s share of total requests declined from 66% → 52%, while Sonnet 4.0 rose from 23% → 37%. GPT-5 usage stayed steady at about 10–12%.

Thought this was interesting on a couple of counts, the first being that maybe “comfort” and “trust” are setting in with some models.

Is it that hard to get your prompt requests moved over to newer models, or does each new model represent a new exercise in prompt and memory training to get the AI to deliver the info how you want it, i.e. in the programming style you require?

Whilst I have the personalisation and memory option in Copilot switched on, so it remembers how I like things to be written out, e.g. my equates being ISEQ:…, there doesn’t seem to be an easy way to move these instructions to new models. That’s potentially a lot of work, getting the new AI model up to speed like a new employee, which could explain the slower adoption rate of new models.

I’m sure there are other reasons why devs are not adopting the latest models, but I’ve not found any yet other than what’s mentioned here.

The second point is that Michael Burry, who was depicted by Christian Bale in the film The Big Short, has gambled $1.1bn (£840m) on a fall in the shares of chipmaker Nvidia and software company Palantir, in other words a bet on the AI bubble bursting.

Now, his financial-crisis short kept being rolled over, and investors started suing him to get their money out because he wouldn’t let them have it back straight away. But when that bubble burst, despite all the extra (legal) costs he incurred, he still walked away with a nice pay packet and a film made about him!

Is this another AI winter omen?

On November 20, 2025, trading algorithms identified what may become the largest accounting fraud in technology history—not in months or years, but in 18 hours. This is the story of how artificial intelligence discovered that the AI boom itself was built on phantom revenue.

https://substack.com/inbox/post/179453867