If you obfuscate/encrypt DebugView output for some client sites but not others, tick all the options that apply.
Do you obfuscate/encrypt your DebugView output at client sites?
Do you encrypt your database or ISAM files when storing other people's data?
Various countries and states have rolled out, or are rolling out, legislation to protect people's data. Selling people's data is big business for computer hackers, and I know databases have the option to store data encrypted, but something I've been debating is: is it worth encrypting DebugView output too?
One of the biggest sources of computer hacking is malicious staff, and it's easy to get additional output by using command-line switches. Add programmer oversight, like some of MS Windows' own debug output appearing in plain text amongst my own DebugView output, and I have to wonder: is this an area of computer security that is just missed?
When you think of tools like the NSA's Ghidra, which is free, and how easy it is to reverse engineer apps, plain-text debug output is like reading the programmer's own personal notes in the code, and it gives hackers a better insight into what a program is doing. That is why I ran this poll.
I agree that personal data should always be encrypted wherever possible. I have been developing a medical database since 2002 that encrypts the patients' phone numbers, names and other text fields. It's not bulletproof, and given enough data and some knowledge of encryption you could probably guess the first 10 characters of each field, but it is enough to put off an opportunistic thief. This is being done in MS Access.
Now that we have a library like MyTable for $99 it is difficult to explain why personal data is not being encrypted in Clarion.
Encrypting debug output for viewers like DebugView and DebugView++ has been on my mind for a few years now, because metadata like record IDs, time of day, etc. is useful for identifying people even when there is no human-readable personal data, i.e. names, addresses and phone/social security/etc. numbers.
Here in the UK we have GDPR. Liz Truss is my politician (the shortest-serving British PM in history), and in order to contact your politician in the UK you have to give them details about yourself before they will talk to you by email. This makes them a data controller, so I gave Liz a Data Subject Access Request (DSAR) to find out what info she has on me, because politicians also have a lot of control over your life. She ignored my request, but I have noticed local councillors and politicians have quietly registered with the Information Commissioner now, to comply with their own laws!
What's also interesting is how differently entities interpret the law; even police forces in different parts of the UK interpret DSAR requests differently. The law at best seems highly vague, or at least its application is, and you will even catch UK police forces committing crimes by not legally abiding by DSARs!
Seriously though, it's a serious subject, because most people don't have a clue when they have been hacked. I've had my mobile phones hacked so I could be GPS-tracked even when walking my dog in a nearby forest! Things like Airbnb altered so I'm forced to choose only limited options; I won't give away how I worked that out, but it's not just opportunistic thieves most people are dealing with, either. And that's the thing: you have to cater for the worst-case scenario.
In banking and finance there is so much regulatory compliance paperwork involved to cover the banks, accountants and financial advisors, and yet mysteriously we in the IT world have next to nothing in terms of regulatory compliance, which is highly suspicious in my book. This isn't like laying a patio or driveway, where next to no regulation exists so that unqualified people can still earn a living doing manual labour; programming is towards the other end of the scale in terms of mental work, and yet there is no regulatory compliance other than whatever is documented in a legal contract, when one exists.
So I'm just asking who actually encrypts their DebugView output, because I don't want my personal details stored by crappy software!
But you're asking the wrong question regarding DebugView. The right question is: "do you clean up your app between releases so it doesn't output to DebugView?"
DebugView is for debugging, and it's hard to debug when there's already lots of noise in DebugView. So it's really important to clean up your debug messages once you've solved a problem. Keeping your DebugView output clean is good form, and will make your life easier.
That said, I judiciously leave in small amounts of DebugView code in areas where I know it's useful on end-user sites: for example, when there's an error on startup, or, in the case of web servers, a small amount of data on startup to help be sure everything is secure, and so on.
But do yourself a favour, and remove as much debug as you can as you go.
So do you encrypt that to keep the info away from prying eyes? Let's face it, most command-line settings are simple enough for end users to run at their end, and then the data gets logged and sent back; even GPF reports were plain text when I last used them.
When I was looking around at what others did, I found that IBM changed some commands so their debug output was encrypted, around 2017 or 2018, and that's all I've really been able to find. Yet I remember what some people do: they poke around in programs to see what they can learn from them, and I don't think it's a good idea to make it easy for them with plain-text debug output.
I had thought about a sort of one-time command-line switch, as I've seen that in use, but it still doesn't get around the problem that debug output in plain text is a good source of intelligence for working out how a program works.
The problem with encrypting your debug output is that it is no longer readable (to state the perfectly obvious).
I was thinking of making a version of UltimateDebugView that could decrypt and display the ud.Debug messages. But this would require me to ship the new UltimateDebugView to the client, defeating the whole point. Or am I missing something?
Until you get the data back onto your machine and decrypt it in order to see what the debug output is.
You only need the encryption in UltimateDebug and a standalone app to decrypt on the dev machine.
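That split (encrypt inside the shipped app, decrypt with a standalone tool on the dev machine) can be sketched roughly as follows. This is Python for brevity rather than Clarion, the key is a placeholder, and the HMAC-based keystream is a toy stand-in for a vetted cipher such as AES-GCM; a real build would swap in proper crypto with the same shape.

```python
import hashlib
import hmac
import os

KEY = b"replace-with-a-real-secret"  # hypothetical shared secret baked into the app


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream, block by block, from HMAC-SHA256.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]


def encrypt_debug(message: str) -> str:
    # Called inside the shipped app: returns hex that is safe to hand
    # to OutputDebugString and unreadable in DebugView.
    nonce = os.urandom(8)
    data = message.encode("utf-8")
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(KEY, nonce, len(data))))
    return (nonce + cipher).hex()


def decrypt_debug(line: str) -> str:
    # Called by the standalone viewer on the dev machine.
    raw = bytes.fromhex(line)
    nonce, cipher = raw[:8], raw[8:]
    plain = bytes(a ^ b for a, b in zip(cipher, _keystream(KEY, nonce, len(cipher))))
    return plain.decode("utf-8")


token = encrypt_debug("Customer 1234 logged in")
print(token)                 # unreadable hex in DebugView
print(decrypt_debug(token))  # readable again on the dev machine
```

Because each line carries its own random nonce, identical messages produce different output, so the log itself leaks less pattern information.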
I don't want you to think I'm having a stealth dig at you because of the timing of the post; it's just that I check out apps, and you'd be surprised how bad some of them are. Considering the templates and classes available, I think this is an easy win for Clarion users.
You can run DebugView straight from live.sysinternals.com, and it can send DebugView output over the internet, probably unencrypted, so anyone with infrastructure access (i.e. the spooks) can view what's going on.
So I’m just as guilty of not encrypting debugview output.
That would make life more convenient, but if you have downloaded and used decompilers like Ghidra, it's less than a five-minute job tracking down the password for the command line. Back in the DOS days I used to use PC Tools' hex editor to go through apps and work out the strings, etc.
I was thinking of something like PGP, with a public key stored in the app and the public key being unique for each instance of the app. I don't know how many public keys PGP works with.
Potentially, this way, if an app popped up on a warez site (and I've seen some Clarion add-ons, like @BoxSoft's, appear on warez sites), then you would know which client uploaded it, or has poor IT security, if it wasn't the developer themselves.
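A simplified stand-in for that per-instance idea (not actual PGP, just the traceability part): derive a distinct key per client from a master secret that only ever lives on the dev machine, tag each build's output with its client key, and then test a leaked artefact against every client's key to see whose copy it came from. Sketched in Python; all the names and secrets here are hypothetical.

```python
import hashlib
import hmac

MASTER_SECRET = b"dev-machine-only-master-secret"  # hypothetical; never shipped


def client_key(client_id):
    # Each client's build gets its own key, derived from the master
    # secret, so the master never has to be embedded in any EXE.
    return hmac.new(MASTER_SECRET, client_id.encode("utf-8"), hashlib.sha256).digest()


def sign_output(client_id, payload):
    # Embedded in the shipped app: tags output under that client's key.
    return hmac.new(client_key(client_id), payload, hashlib.sha256).hexdigest()


def identify_source(payload, seen_tag, clients):
    # Run on the dev machine against a leaked log: whichever client's
    # key reproduces the tag is the likely source of the leak.
    for cid in clients:
        if hmac.compare_digest(sign_output(cid, payload), seen_tag):
            return cid
    return None


leaked = sign_output("acme-ltd", b"debug line")
print(identify_source(b"debug line", leaked, ["foo-inc", "acme-ltd"]))  # acme-ltd
```

With real PGP you would instead embed a per-client public key and keep the private keys on the dev machine, but the identification logic is the same: try each client's key until one fits.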
Another point: when looking at infosec advice like this example, Codesigning - Need alternative to Comodo, there aren't really any discussions of, or focus on, how to make an app more secure for the software company or their end users. This discussion/thread is one example of looking at making an app more secure for both.
For example, we are predominantly creating end-user database apps, yet there is no regulatory compliance to ensure we have employed best practices in the design of said software.
Now, if you were in the financial world, there is so much compliance paperwork it's almost a joke. So governments obviously demonstrate their ignorance, or their priorities.
Is it really that easy to download code onto someone’s machine from the internet?
Having experimented a bit further with debug messages I think the only practical way of encrypting debug messages is not to have them at all. Just comment them all out when you ship a production version.
If you write ud.Debug('This is my debug message'), then even if it appears in DebugView as 54686973206973206D79206465627567206D657373616765, it still appears in the EXE as the string 'This is my debug message', so what's the point? Anyone analysing the EXE is going to find the messages included in the code extremely useful, along with the names of procedures and functions, etc.
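And that hex really is just the raw bytes of the literal, as a quick check shows, so encoding on output alone hides nothing from anyone with a hex table:

```python
# The hex seen in DebugView is nothing more than the bytes of the literal.
shown = "54686973206973206D79206465627567206D657373616765"
print(bytes.fromhex(shown).decode("ascii"))  # This is my debug message

# Meanwhile the literal itself still sits verbatim in the compiled EXE,
# where a strings dump or a decompiler will find it immediately.
```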
So if you don’t want your code analysed by someone else, you’ll have to write obfuscated code. Good luck with that.
I have been reviewing my app that uses CapeSoft MyTable for encryption, and it requires two secrets. If you provide them as literals, e.g. 'OpenSesame', then they are stored as such in the EXE. Not particularly secure. You could store it as '4F70656E536573616D65', which is the hex version, and this will defeat the casual observer, but 'ukROzIObFmzITL8xwxzI' is probably better. (It's the output of xHide('OpenSesame'), a function I use in Access VBA. But then don't call the de-obfuscation function something like "UnHide", because it will give the game away to a determined hacker.)
Has @CarlBarnes or anyone else written a class to obfuscate/de-obfuscate text strings? I’m wary of reinventing the wheel.
Maybe a template that encrypts the DebugView message when it's released for production?
This is something I've been toying with for my DebugView template; it's partly why I asked on here what others think. But it means calling DLLs from the template to encrypt the messages before they're compiled, and it would mean using #CODE templates, which can admittedly break up sections of embed code, but it's the only way I can see to make it quick and easy to encrypt the DebugView output.
As the source code for DebugView++ is on GitHub, I think it would be possible to fork it and modify it to automatically decrypt the DebugView output, to make it as seamless as possible.
The main thing is that the #CODE templates would store the DebugView output encrypted inside the EXE, affording a level of protection from those using decompilers like the NSA's Ghidra.
I’ll be interested to see if anyone has a way to do that.
At first I thought "why not just have a wrapper around ud.Debug() that checks some switch?", but then I read earlier in the thread and saw you did not want the output strings to be in the EXE.
If it is not possible at code-generation time, perhaps you can write a simple "post generation" program to process the code before it is compiled, and put it in the project. You would need to be careful if any of your debug statements were split over more than one line.
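Such a post-generation pass might look like the sketch below: scan the generated source, find ud.Debug() calls with single-quoted literals, and rewrite them so only an encoded payload reaches the EXE. This is Python for illustration; ud.DebugHex is a hypothetical method that would decode/relay the payload, and the hex encoding is a stand-in for whatever real cipher you choose. As noted, it only handles statements on a single line.

```python
import re


def hide(text):
    # Stand-in for your real encryption; plain hex here so the demo runs.
    return text.encode("utf-8").hex().upper()


# Matches single-quoted literals inside ud.Debug(...) on one line;
# multi-line statements would need extra handling.
PATTERN = re.compile(r"ud\.Debug\('([^']*)'\)")


def process_source(clw_text):
    # Replace each plain-text literal with a call that ships only the
    # encoded form, so the literal never appears in the compiled EXE.
    return PATTERN.sub(lambda m: "ud.DebugHex('%s')" % hide(m.group(1)), clw_text)


src = "code\n  ud.Debug('Opening customer file')\n"
print(process_source(src))
```

Run over every generated CLW before compiling, this keeps the workflow automatic while removing the plain-text strings the earlier posts were worried about.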
I seem to recall the #PROJECT system has pre and post build events but do not know much about that - hopefully someone else can help with that if you end up going down that route.
An alternative is to wrap all your ud.Debug() statements in either COMPILE or OMIT sections with a compile-time switch - but that might be a bit tedious. Mind you, a code template could do that easily.