Anthropic's work on resizing with scaling.
Eliminating Integer Rounding Issues in Multi-Operation Window Resizing.pdf (70.9 KB)
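We only have the PDF's title to go by, but the usual fix for that class of bug is worth sketching: scale every control from its saved original geometry on each resize, rather than compounding integer-rounded deltas operation after operation. A minimal C++ sketch (the struct and function names are mine, not from the PDF):

```cpp
#include <cmath>

struct Rect { int x, y, w, h; };

// Compounding rounded deltas drifts: each resize rounds again on top of
// the previous rounding. Rescaling from the untouched original rect means
// only one rounding step survives, no matter how many resizes occur.
Rect ScaleFromOriginal(const Rect& orig, double fx, double fy)
{
    return {
        static_cast<int>(std::lround(orig.x * fx)),
        static_cast<int>(std::lround(orig.y * fy)),
        static_cast<int>(std::lround(orig.w * fx)),
        static_cast<int>(std::lround(orig.h * fy)),
    };
}
```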
No mention of pixels.
| Setting | Description |
|---|---|
| Width: DLUs | Sets the width of the layout grid in DLUs. A horizontal DLU is the average width of the dialog box font divided by 4. |
| Height: DLUs | Sets the height of the layout grid in DLUs. A vertical DLU is the average height of the dialog box font divided by 8. |
From Clarion help.
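For reference, the same arithmetic in Win32 terms. This is a minimal sketch, not necessarily how Clarion does it; the function name is mine, and real dialogs derive the average width from a 52-character alphabet sample rather than tmAveCharWidth alone:

```cpp
#include <windows.h>

// 1 horizontal DLU = average character width of the dialog font / 4
// 1 vertical DLU   = character height of the dialog font / 8
// hdc must already have the dialog font selected into it.
SIZE DluToPixels(HDC hdc, int xDlu, int yDlu)
{
    TEXTMETRIC tm;
    GetTextMetrics(hdc, &tm);

    SIZE px;
    px.cx = MulDiv(xDlu, tm.tmAveCharWidth, 4);  // width in pixels
    px.cy = MulDiv(yDlu, tm.tmHeight, 8);        // height in pixels
    return px;
}
```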
PROP:Pixels
WINDOW property which toggles screen measurement between dialog units (DLUs) and pixels (not available for reports). After setting this property, all screen positioning (such as GETPOSITION, SETPOSITION, MOUSEX, MOUSEY, PROP:Xpos, PROP:Ypos, PROP:Width, and PROP:Height) return and require co-ordinates in pixels rather than DLUs.
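On the Win32 side, the OS's own DLU-to-pixel conversion is MapDialogRect; whether Clarion's runtime goes through this exact call when PROP:Pixels is toggled is my assumption, not something the help states:

```cpp
#include <windows.h>

// Converts a rectangle expressed in DLUs into pixels using the base units
// of the given dialog's font. Only valid for an actual dialog window.
RECT DialogUnitsToPixels(HWND hDlg, int wDlu, int hDlu)
{
    RECT r = { 0, 0, wDlu, hDlu };
    MapDialogRect(hDlg, &r);
    return r;
}
```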
Information on the different fonts and the ways they can be displayed on screen and in reports.
It's much more complicated than most people think.
It's just reviewing the resize MODULE class.
These AI models are extremely limited in what a project can hold on the BASIC MODEL.
In this test it knows nothing about the Clarion language from the help file.
Only what the MODULE source code provides it.
We will put the PROP:Pixels property in at a later date and see what Anthropic does with it…
AI on this platform has trouble printing documents and controlling its output. But this is common on every platform we have tested so far.
Is it AI or something else… and why is it even called AI?
Because it behaves like a human, not a computer.
You’re used to computers being deterministic, correct and complete. AI systems are none of those things.
Think of it as a human, with human limitations, skillsets and experience and you’ll have a better mental map of what to expect.
But I would have expected it to know about the way Windows calculates DLUs and the use of pixels.
I think MCPs might help with the Help language, amongst other things, but ultimately we won't know until one is built.
Are you running one locally then?
It was like my knee surgeon said… a rhetorical question…
I think the document linked in Rchdr's post said it all.
This was hilarious…
https://kvashee.medium.com/the-limits-of-language-ai-fa9b022f4d1
Yes, the test we gave the AI was basically a trap… no information on Clarion… no help files on the language, no information on Clarion's PROPs… its END (or '.') terminators… its WINDOW structure… its QUEUE management… etc…
No help at all given to it…
That might have meant it was tricked from the start, and it's unfair to say it's a total failure at coding… but it then produced a report on its own total failure to decrement a counter when it was made obvious it was "only going up". As in the paper posted above, it totally failed to understand Language and follow an IDEA.
However, after being reminded of "what goes up, must… ?" it realised it could come down!!!
But it still could not figure out where to put the deflating-number trick… and that is when it starts to look really dangerous…
One is reminded of a robot driver and what could go wrong…
AI, what is it good for!!! Well, let's not sing that song…
Perhaps it should be called Artificial Imitator.
I think it's got its knowledge from GitHub, where CLWs and INCs are included. I've purposely only uploaded APP and DCT files to GitHub, or the odd source file, because code on GitHub should be seen as a security risk, especially with these new LLMs.
Where there are limited source files to learn from, the coding habits and styles of the programmer will show through in the output from these LLMs.
The other anomaly is that these LLMs haven't used help sources like this website [Clarion Community Help].
The resource is online, but it's not been factored in; plus, if it has found the site, there is also the ambiguity in some help pages, which could be a problem for its output.
There used to be criticism on the wiki about the help docs being poor, which certainly won't help matters.
You asked if we were running a local LLM… well… we aren't attempting to run it in Clarion, that's for sure!!!
```cpp
#include <string>

// Abstract interface for a local LLM backend.
class LLMModel {
public:
    virtual ~LLMModel() {}
    virtual bool loadModel(const char* modelPath) = 0;
    virtual std::string generateResponse(const std::string& prompt,
                                         int maxTokens = 1024,
                                         float temperature = 0.7f) = 0;
    virtual bool isLoaded() const = 0;
    virtual std::string getModelInfo() const = 0;

    // Additional methods for model configuration
    virtual void setContextWindow(const std::string& contextData) = 0;
    virtual void clearContext() = 0;
    virtual void setChatTemplate(const std::string& templateName) = 0;
    virtual bool formatChatMessage(const std::string& role,
                                   const std::string& message) = 0; // parameters assumed; the original post was truncated here
};
```
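For context, here is how such an interface might be consumed; the concrete backend, file path, and template name are hypothetical stand-ins, not code from the post:

```cpp
#include <iostream>
#include <memory>

// Hypothetical driver: 'model' would be some concrete subclass of the
// LLMModel interface above (e.g. a llama.cpp-backed implementation).
int runPrompt(std::unique_ptr<LLMModel> model)
{
    if (!model->loadModel("models/example.gguf"))   // illustrative path
        return 1;

    model->setChatTemplate("chatml");               // assumed template name
    model->setContextWindow("You answer questions about Clarion code.");

    std::cout << model->generateResponse("What does PROP:Pixels do?") << '\n';
    model->clearContext();
    return 0;
}
```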
Why not? I think there can be some benefits, especially with local MCPs using our own tools.
So what's your toolset then?
Well, at least someone is attempting to understand the technology as it works. Up till now, everyone has said they don't know how it works…
Yes, how do you let loose a technology to run companies and organisations that you can't effectively control or understand? You might say it's not running companies, but the moment all information is filtered through the AI eye, it adds threats to the organisation and society at large. Some would say that is no different from humans, and that humans are far more dangerous.
Re the human threat, that's why British society is set up like it is: those with "power" get forced into the public eye so their ego can control them, unless they happen to be a psychopath and don't care. Sure, you'll get spontaneous acts of violence et al., which are hard to predict, but how hard depends on the level of surveillance that exists, be it nosy neighbours or technological surveillance.
Now AI does introduce risks, but those risk factors need to be determined, because one AI may be given more responsibility in a business or organisation than another… but I would also argue there's generally only one way to code certain tasks in assembler, because OS APIs dictate how code should work, so if anything an AI LLM could hold the ideal or optimum machine-code way to do a task and then "transpile" it to the required language.
If the AI LLM is required to generate code outside the boundaries of an OS's API, like the code seen in games engines, then the same thing applies, but this time it's the CPU instruction set which limits the AI LLM.
Each time there will be a layer which restricts its activity, provided it's "striving" for optimum code performance every time.
But generally speaking, AI LLMs have weaknesses which enable them to hallucinate; our mammalian restrictions are chemical pathways in the body, plus environmental restrictions like gravity, the toxicity of the environment, natural disasters, or external pathogens from higher and lower lifeforms, e.g. bacteria, viruses, aliens, or god or gods!