As Clarion 12 is reported to be able to run local LLMs (AIs) [1], and earlier versions can via #RunDLL in the template language, I thought I'd start a thread for those interested in running local LLMs.
So I've been keeping an eye on which LLMs perform best at programming tasks, and Claude AI seems to be doing well in various polls and leaderboards. [2-5]
It's possible to run an LLM locally on a dev machine, and RAG [6] seems to be a way to keep LLMs more grounded, i.e. reduce "hallucinations".
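To make the RAG idea concrete, here is a toy sketch in Python: retrieve the most relevant document for a question and prepend it to the prompt so the model answers from supplied context rather than memory. The bag-of-words "embedding" and the two sample documents are stand-ins for illustration only; a real RAG pipeline uses a neural embedding model and a vector store.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # Real RAG pipelines use a neural embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def retrieve(question, documents, k=1):
    # Rank documents by similarity to the question; return the top k.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical snippets of Clarion documentation used as the corpus.
docs = [
    "#PROMPT declares a template prompt shown in the IDE.",
    "CLEAR sets a variable to blank or zero.",
]

question = "How do I declare a template prompt?"
context = retrieve(question, docs)
# The retrieved context is prepended to the question before it goes to the LLM.
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: " + question
print(context[0])
```

The point is that the model is steered by retrieved, trusted text, which is why RAG tends to reduce hallucinations.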
So to get started with running an LLM locally, it would help to find some good instructions for setting up a local LLM environment.
I think this guide seems quite good [7], but if anyone finds better instructions, or hits problems during setup, let us know on here.
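Once a local model is running, talking to it from code is straightforward. This sketch assumes a local server such as Ollama, which exposes an OpenAI-compatible chat endpoint on port 11434 by default; the URL and model name are assumptions, so adjust them for whatever setup guide you follow.

```python
import json
import urllib.request

# Assumption: a local server (e.g. Ollama) is listening here with an
# OpenAI-compatible chat completions endpoint. Adjust for your setup.
LOCAL_URL = "http://localhost:11434/v1/chat/completions"

def build_chat_request(model, prompt, url=LOCAL_URL):
    """Build an HTTP POST request for an OpenAI-style chat completion."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("llama3", "Explain Clarion's #PROMPT template command.")
# To actually call the local model (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(req.get_method(), req.full_url)
```

The same request shape should work against any OpenAI-compatible local server, which keeps the Clarion-side integration (e.g. via #RunDLL or an HTTP class) independent of which model you run.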
I've been wondering just how these LLMs will learn a new language, like the template language or the Clarion language itself, and one way seems to be to point them at one or more GitHub repos to analyse the code.
This should address the problem where some template language commands can only be used within other template language commands, like* #Tabs encapsulating #Prompts.
*Not strictly true, but used as an example of commands needing to be encapsulated.
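Pointing a model at a repo in practice usually means extracting the source files into a corpus it can be fine-tuned or RAG-indexed on. A minimal sketch, assuming a locally cloned repo and a guessed set of Clarion file extensions (adjust these for your codebase):

```python
import json
import tempfile
from pathlib import Path

# Assumed extensions for Clarion source and template files.
CLARION_EXTS = {".clw", ".inc", ".tpl", ".tpw"}

def collect_corpus(repo_dir):
    """Gather Clarion source files from a cloned repo into records
    (one per file) suitable for a fine-tuning or RAG corpus."""
    records = []
    for path in sorted(Path(repo_dir).rglob("*")):
        if path.suffix.lower() in CLARION_EXTS:
            records.append({
                "path": str(path.relative_to(repo_dir)),
                "text": path.read_text(errors="replace"),
            })
    return records

# Demo with a throwaway "repo" containing one template file.
with tempfile.TemporaryDirectory() as repo:
    Path(repo, "window.tpw").write_text("#PROMPT('Caption', @S40)")
    corpus = collect_corpus(repo)
    print(len(corpus), corpus[0]["path"])
    # Each record could be written out as one line of JSONL:
    # print(json.dumps(corpus[0]))
```

Because the corpus contains real, working template code, nesting rules like commands that must live inside other commands are present in the examples the model learns from, rather than having to be stated explicitly.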
I haven't found any other method to train the LLMs yet, like providing a list, but it was Mark's post [8] that got me thinking: how do you train an LLM on a proprietary language like Clarion?
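One list-based approach that needs no training at all is to fold the rules into the prompt itself (in-context learning). A sketch, using a hypothetical subset of the statements-requiring-END list from Mark's post [8] — the real list is much longer:

```python
# Hypothetical subset of Clarion statements needing a matching END,
# in the spirit of the list in [8]; not the complete list.
REQUIRE_END = ["IF", "LOOP", "CASE", "EXECUTE", "ACCEPT"]

def build_system_prompt(statements):
    """Fold a language-rule list into a system prompt - one way to
    'teach' a model a proprietary language without fine-tuning."""
    rules = "\n".join(f"- {s} ... END" for s in statements)
    return ("You write Clarion code. These statements must always be "
            "closed with a matching END:\n" + rules)

print(build_system_prompt(REQUIRE_END).splitlines()[1])
```

The resulting string would be sent as the system message on every request, so the rules travel with the prompt instead of living in the model's weights.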
Anyway, I think this post/thread should cover the basics and perhaps act as a starter thread for this particular topic with Clarion.
I would imagine the built-in Facebook/Meta Llama 3 will be more seamless to use in C12, as it will all be cloud based and so won't require any setup, but it might also mean less control over screen and report layouts. There may be a way to skew this, though, by providing some GitHub code examples which BobZ put out a few years ago at one of the DevCons, which might then become included in the training data for Llama 3.
Any questions on the subject, no matter how dumb they might appear, could also go here, as I'm sure others may have the same question.
Comments always appreciated…
[1] Embracing the Future - Clarion 12 Archives - Clarion
[2] Shepherd’s Dog - when-ai-fails/shepards-dog/README.md at main · vnglst/when-ai-fails · GitHub
[3] Aider LLM Leaderboards | aider
[4] Can Ai Code Results - a Hugging Face Space by mike-ravkine
[5] ProLLM Benchmarks
[6] What Is Retrieval-Augmented Generation aka RAG | NVIDIA Blogs
[7] Claude Ai Local Install Guide | Restackio
[8] Clarion Language: Complete List of Statements That Require a Matching END