Having experimented a bit further with debug messages, I think the only practical way of encrypting debug messages is not to have them at all. Just comment them all out when you ship a production version.
If you write ud.debug('This is my debug message'), then even if it appears in DebugView as 54686973206973206D79206465627567206D657373616765, it is still present in the EXE as the string 'This is my debug message', so what's the point? Anyone analysing the EXE is going to find the messages embedded in the code extremely useful, along with the names of procedures and functions, etc.
So if you don’t want your code analysed by someone else, you’ll have to write obfuscated code. Good luck with that.
I have been reviewing my app that uses CapeSoft MyTable for encryption, and it requires two secrets. If you provide them as literals, e.g. 'OpenSesame', then they are stored as such in the EXE. Not particularly secure. You could store it as '4F70656E536573616D65', which is the hex version, and this will defeat the casual observer, but 'ukROzIObFmzITL8xwxzI' is probably better. (It's the output of xHide('OpenSesame'), a function I use in Access VBA. But then don't call the de-obfuscation function something like 'UnHide', because that would give the game away to a determined hacker.)
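For illustration, a minimal reversible obfuscator of that flavour might look like the following (sketched in Python; the key and function names are made up for this example, and this is not CapeSoft's or the VBA xHide algorithm):

```python
import base64

KEY = b"k3y-material"  # hypothetical key; a real app would hide this too

def hide(plain: str) -> str:
    # XOR each byte with the repeating key, then Base64-encode so the
    # result looks like random text rather than the original literal.
    data = plain.encode()
    xored = bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))
    return base64.b64encode(xored).decode()

def unhide(hidden: str) -> str:
    # Reverse the steps: decode Base64, then XOR with the same key.
    xored = base64.b64decode(hidden)
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(xored)).decode()
```

The EXE then only contains the scrambled form, never the plaintext literal, though anyone who finds the key and the decode routine can still reverse it.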
Has @CarlBarnes or anyone else written a class to obfuscate/de-obfuscate text strings? I’m wary of reinventing the wheel.
Maybe a template that encrypts the DebugView message when it's released for production?
This is something I've been toying with in my debugview template; it's partly why I asked on here what others think. It means calling DLLs from the template to encrypt the messages before the app is compiled, and it would mean using #CODE templates, which can admittedly break up sections of embed code, but it's the only way I can see that would make it quick and easy to encrypt the DebugView output.
As the source code for DebugView++ is on GitHub, I think it would be possible to fork it and modify it to automatically decrypt the DebugView output, to make it as seamless as possible.
The main thing is that the #CODE templates would store the DebugView output encrypted inside the EXE, affording a level of protection against those using reverse-engineering tools like the NSA's Ghidra.
I’ll be interested to see if anyone has a way to do that.
At first I thought “why not just have a wrapper around ud.debug() that checks some switch?” but then I read earlier in the thread and saw you did not want the output strings to be in the exe.
If it is not possible at code generation time, perhaps you could write a simple "post generation" program to process the code before it is compiled and add it to the project. You would need to be careful if any of your debug statements were split over more than one line.
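As a rough sketch of what such a post-generation pass could do (Python here; `ud.DebugHex` is a hypothetical runtime method you would have to provide yourself, and this naive regex only handles single-line, single-quoted literals, as cautioned above):

```python
import re
import binascii

def encode_literal(m: re.Match) -> str:
    # Replace the plain literal with its hex form and a call to a
    # hypothetical decoder method, so the plaintext never reaches the EXE.
    text = m.group(1)
    hexed = binascii.hexlify(text.encode()).decode().upper()
    return "ud.DebugHex('{}')".format(hexed)

def rewrite_source(src: str) -> str:
    # Rewrite every ud.debug('...') call found in the generated source.
    return re.sub(r"ud\.debug\('([^']*)'\)", encode_literal, src)
```

You would run this over each generated .clw before the compile step; the real work is then in the runtime decoder and in hardening the pattern matching.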
I seem to recall the #PROJECT system has pre and post build events but do not know much about that - hopefully someone else can help with that if you end up going down that route.
An alternative is to wrap all your ud.debug() statements in either COMPILE or OMIT statements with a compile-time switch, but that might be a bit tedious. Mind you, a code template could do that easily.
I think I may have found a way to hide "secrets" in plain sight. These are generally the Achilles' heel of any encryption system.
Firstly, I defined some local variables in the Main routine of my generated app:
Then I ran this code in the Open Window embed point:
!                    1         2         3         4         5         6         7         8         9
!           1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
strAlpha = ' AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz1234567890[.<<(+!&]$*);^-/|,%_>?`:#@''="'
strSecret = strAlpha[32]                   ! P
strSecret = clip(strSecret) & strAlpha[3]  ! a
strSecret = clip(strSecret) & strAlpha[39] ! s
strSecret = clip(strSecret) & strAlpha[39] ! s
strSecret = clip(strSecret) & strAlpha[55] ! 2
strSecret = clip(strSecret) & strAlpha[47] ! w
strSecret = clip(strSecret) & strAlpha[31] ! o
strSecret = clip(strSecret) & strAlpha[37] ! r
strSecret = clip(strSecret) & strAlpha[9]  ! d
glo:secret1 = clip(strSecret)
!ud.debug(strSecret) ! Don't blab it to the world
The result is “Pass2word” which you save to the (global) variable that needs to store the secret. This takes a bit more work than storing it as a literal string, but the extra effort is worth it IMHO.
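For comparison, the same hide-in-plain-sight idea can be sketched in a few lines of Python (indices are 0-based here rather than Clarion's 1-based, and the alphabet is trimmed to its first 63 characters; this is just an illustration, not the Clarion code):

```python
# The only string literal a compiled binary would carry is ALPHA;
# the secret itself exists only as a list of integer offsets and is
# assembled in memory at runtime.
ALPHA = ' AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZz1234567890'

def build_secret(indices):
    # Pick one character of ALPHA per offset and join them.
    return ''.join(ALPHA[i] for i in indices)

SECRET_IDX = [31, 2, 38, 38, 54, 46, 30, 36, 8]  # spells "Pass2word"
```

The offsets are still data in the binary, of course, but a list of small integers is far less eye-catching than a password literal.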
CapeSoft MyTable has two secrets like this, and its default (for simplicity’s sake) is to store it in a literal string, but you can see the literal strings in the EXE. Hence the need to hide it in plain sight.
With this method, it is a lot more difficult to figure out the secret, and all you see in the EXE is the following:
It gives no indication of what the password might be. Hidden in plain sight. A disassembler might find it, but a simple inspection of the text strings in the EXE will not.
A further way to make casual inspection a bit more difficult is to use UPX to compress the EXE. When I tried it, the resulting EXE was about a tenth of the size of the normal EXE, and it ran perfectly.
I realise this is all “security through obscurity”, but it does raise the bar a bit more than normal.
If you know what to expect, it would be possible to work out that technique and use STRPOS and MATCH to load a PE file and pull that sort of stuff out automatically. Lee White recommended something similar on the newsgroups back in the C6 days, which I used to use myself.
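For anyone curious, the core of such a scan is only a few lines; here is a rough Python analogue of the Sysinternals `strings` tool (the 6-character minimum is an arbitrary choice):

```python
import re

def extract_strings(path: str, min_len: int = 6):
    # Pull runs of printable ASCII out of a binary file, much as
    # STRPOS/MATCH in Clarion or the `strings` tool would.
    with open(path, 'rb') as f:
        data = f.read()
    pattern = rb'[\x20-\x7e]{%d,}' % min_len
    return [m.group().decode('ascii') for m in re.finditer(pattern, data)]
```

Running this over an EXE is exactly how a casual observer finds literal passwords and debug messages, which is why the hiding techniques above matter at all.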
Boot sector of an infected floppy from 1986. That's 36 years ago.
Image file with encrypted code stored in it using XOR. This EXE locates the malware inside itself, much like resource files can include icons (a steganography risk), cursors and other files, XORs it, and then runs the malware.
AV scanners have to look for signatures, which can include “encrypted” strings inside a file.
Plus there are also different implementations of functions like CRC32 as noted here
So hypothetically, if you have an implementation of CRC32 that is not known to others, or is little used, would AV scanners pick up an anomalous string inside a PE file, go to the effort of reverse engineering the string to find out what it might contain, and then look elsewhere in the EXE to see whether there are any built-in functions to decrypt said anomalous string?
My SetupBuilder installations were always generating false positives (I wasn't alone in this), which became a bit of a nuisance, and I couldn't use CapeSoft's Cryptonite to encrypt data that could then be decrypted by other apps and websites, because different implementations of the encryption algorithms were in use.
But the above will give you an idea of the process involved in trying to find viruses and malware hidden inside PE files. With that in mind, and given the automatic submission of PE files to the AV company, if the AV scanners can't automatically reverse engineer a PE file using some of the techniques shown above, they have to reverse engineer it manually using tools like the NSA's Ghidra and whatever else they have developed internally. I'm reminded of the fact that Stuxnet took F-Secure over a year to reverse engineer because of the "engineering", or obfuscation, that had gone into it.
Now, with Clarion, we have the AppGen, which generates the code for us and then automatically compiles it. One of the reasons for me to write some templates to edit templates is to have a template which will obfuscate the generated code before it gets compiled. The C7+ IDE has a command-line interface, ClarionCL, which could be used to control the generation and compilation process even further.
TL;DR: the techniques used by Basit and Amjad (the 1986 floppy boot sector guys) and the Russian chap in the second TED talk link could be used to help protect your EXE. What's good for the goose is good for the gander!
One of the advantages of Clarion over virtually every other programming IDE is the templates: you can use them to do things automatically, like encrypting strings before compilation when an app is built in release mode. I can't think of anything in Visual Studio, or even WinDev (considering its closed nature), where that level of flexibility and convenience to generate code for different situations and requirements exists. Can you?
"Mr Wang created this back door by inserting a single number into millions of lines of code for the exchange, creating a line of credit from FTX to Alameda, to which customers did not consent," he added. "And we know the size of that line of credit. It was $65 billion."
"Appear weak when you are strong, and strong when you are weak." - Sun Tzu
Edit: And let's not forget all the social engineering that goes on in plain sight either.
Hi Donn - It's nice to obfuscate stuff in the EXE, but if it's decoded in memory then oftentimes you can see that stuff when you do a dump through Process Explorer or open the process with an editor that supports it, such as 010 Editor. Not saying this isn't useful to do, but you can't be too confident that it would actually protect your secrets.
Chilkat has a “Secure String” library. Haven’t done anything with it, but it would seem to me that it would be difficult to always have that data secure, even with that, because presumably you’d actually want to use that data somewhere.
Mainly the problem is that somewhere a line is drawn beyond which it counts as strong encryption. I am not smart enough to know if my code would exceed the xx-bit encryption limit that applies to US exports and end up getting me into some kind of trouble.
I learnt I had to apply to the Department of Trade, or something like that, here in the UK to get an export licence, because my app employed encryption.
So I lowered the encryption to what was allowed and then used another exportable form of encryption to encrypt the first lot of encrypted data, i.e. double encryption.
I figured automated decryptors would assume something is only encrypted once, and use that assumption to get the file signatures before branching off into another relevant level of data analysis; hence the double encryption, amongst other things.
Anyway, what should have been a 6-week process took 6 months, and in the end the UK government said no to me having an export licence, so I ditched the idea.
I’m not going to be forced into their proxy wars by their legislation!
It's a shame the state doesn't teach law at primary school age; I'd have got loads of adults banged up as a kid. But I think the state creates exploitation in an attempt to justify its existence. Put simply, divide and conquer; their game infects the whole of our lives and continues until the day we die, so legislation is a way of coercing people into creating terrorists in other countries, amongst other things.
Mikko Hyppönen is a Finnish computer security expert, speaker and author. He is known for Hyppönen's law of IoT security, which states that whenever an appliance is described as being "smart", it is vulnerable. He works as the Chief Research Officer at WithSecure (formerly F-Secure for Business) and as the Principal Research Advisor at F-Secure.
This (audio) book is on my reading list.
The problem with all programs that encrypt and decrypt data is that if you have control of the PC that is running the software, you can figure out the decryption used. That’s why if you can play a commercial video DVD on your PC, even though the data files may be encrypted, the PC has to know enough to be able to decrypt it, for you to watch the movie. All the clues are there.
ALL software can be reverse engineered. Some is more tricky than others. My goal is to make my software a bit more tricky to reverse engineer than most. At that point the security of the system is enforced or broken by humans, not by technology.
If I put debug code into a production app, it’s because it may assist in tracking down problems when things go wrong. At that point, I want to log all the information to reproduce the bug/error, so then the debugger output is probably the most useful information I can get.
In other cases I want to use encryption to store sensitive information so that other users of the system can’t bypass the login security.
I didn’t use the method suggested by @jslarve because it relies on a secret, and I’m trying to protect a secret.
See, this is where I differ. I want my debugview template to leave encrypted debug messages in the release version of the app, but those messages are just encrypted strings like "this routine ran", "that procedure ran", "this is the top of a loop", and they can indicate how many times the loop has run, which can also be helpful. Importantly, those messages wouldn't contain any data like record IDs, names, things like that, because that gets into the data privacy laws of a region, and I'm not an expert in the law of different countries. Not only that, some sites will only give me test data, so I gain nothing by having more detailed DebugView messages capturing record IDs and other data, as I never get to see the live data; other sites haven't been bothered about me having access to live data, though.
So those fixed but encrypted messages, set at compile time, should be enough to help me highlight where code is failing on a site, and they make the software less interesting to spooky hackers, given the data it could have access to and the data that can otherwise be obtained from DebugView messages just by running a command-line switch and/or using reverse-engineering tools.
Now, if I can't fix a problem with those messages encrypted at compile time, another feature I'm thinking about for the template is a second, more detailed level of DebugView output which takes data like record IDs, encrypts it on the fly (possibly with a different encryption algorithm from the compile-time encrypted messages), and then I'd get a site to run that and send me the messages.
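To make the on-the-fly part concrete, here is one way that second level could look (a Python sketch, not the template itself; the SHA-256 counter keystream is just an illustrative cipher choice, and key distribution to the site is glossed over):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Derive a pseudo-random byte stream from a key using a hash counter.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def debug_line(label: str, payload: str, key: bytes) -> str:
    # The label ("record id") stays readable for the developer;
    # only the sensitive payload is encrypted and hex-encoded.
    ks = keystream(key, len(payload))
    cipher = bytes(c ^ k for c, k in zip(payload.encode(), ks))
    return "{}: {}".format(label, cipher.hex())

def recover_payload(hexed: str, key: bytes) -> str:
    # XOR with the same keystream recovers the original payload at my end.
    cipher = bytes.fromhex(hexed)
    ks = keystream(key, len(cipher))
    return bytes(c ^ k for c, k in zip(cipher, ks)).decode()
```

The site can then send the captured messages back, and only the holder of the key sees the record IDs.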
So, just as we have a debug version of the Clarion runtime which we can give to customers when trying to fix an on-site bug that can't be reproduced on the dev machines, a special version of the app, with the more detailed level switched on in the template, could be shipped to increase the information that is captured.
I don't think it would be wise to ship that second level in a release app to all sites, just on a need-to-know basis, and at that point I could also use an algorithm to make the command-line switches one-time switches that only work for a fixed window of time, be it a few hours or days. I think it was about 20 years ago when I first came across a one-time command switch on a Linux system at a site.
I don't want to make it easy for certain others, but once such a template exists, it becomes easy to make life difficult for them.
Either way, I think the template gives me an edge over a class.
Edit: On the point of data privacy laws, the maximum fine in the UK is £17.5 million or 4% of turnover if we don't take the necessary steps to protect people's data. DebugView output with data in the messages is an easy way to extract data, which could make us liable for fines. Penalties | ICO
I don't know what other penalties exist in other countries, but I have noticed that multiple countries are using the findings of other countries to justify imposing a fine. Take Google and Android: the EU made a ruling and applied a fine to Google, so India also decided to use the same evidence to impose a fine of their own on Google. What's the next country to try this on? Google appeals India's fine over 'unfair' business practices on Android | TechCrunch
I don't see why data privacy laws wouldn't or couldn't be applied in the same way; it's easy money for the government of a country and helps to keep that political party in power whilst keeping taxes low.
Even if you don't sell an app abroad, if your customer has overseas entities interacting with its computer systems, what position does this leave your customer and you in? Could an overseas data protection agency decide to impose a fine on you, and could you be sued by your customer for not following data protection guidelines? And where would that line of suing entities stop? For example, could someone decide to sue @noyantis or @MarkGoldberg for providing free debugview classes? Lawyers will try anything on, in my experience.
Well let’s hope not then hey
On a serious note, we do abide by all GDPR rules and are of course registered with the ICO ( Information Commissioner’s Office ) etc etc
You do make a good point about the ability to encrypt the debug output. Our templates/classes do include the ability to write debug info, but at the user's request of course - it's something they have to turn on themselves. I could always add the ability to encrypt the debug output too - that way everybody is happy.
Now, if the DebugView messages were encrypted, then you'd definitely have to download and use this GPT-3 AI add-on if you want the app annotated, but then would those apps with encrypted DebugView messages end up "infecting" this GPT-3 AI model so it couldn't do its job?
But at the same time, whether we are storing data or someone else is storing data about us as individuals, I'd like to think these systems are robust and not leaking data through a simple command-line switch. How easy is it to modify a shortcut and capture the DebugView output from the boss's computer or the system admin's computer?
Should the debugview output require the software companies permission to run, and not simply knowledge of a command line switch?
You could use a code that is provided by the software company. I generate a "daily code" for my app that is a hash of the user name and the date, so it can only be used by a particular user on a particular day. Or you could just have a 4-digit code that is some sort of hash of the date.
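A sketch of that daily-code idea in Python (the secret, the SHA-256 choice, and the 4-digit truncation are all illustrative assumptions; any keyed hash would do):

```python
import datetime
import hashlib

SECRET = b"app-secret"  # hypothetical; baked into the app, ideally hidden too

def daily_code(user, day=None):
    # A 4-digit code that is only valid for one user on one calendar day.
    day = day or datetime.date.today()
    material = SECRET + user.encode() + day.isoformat().encode()
    digest = hashlib.sha256(material).hexdigest()
    return "{:04d}".format(int(digest, 16) % 10000)
```

Support staff run the same function with the same secret to tell the user today's code; a 4-digit code is guessable, of course, so it gates convenience features rather than real security.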