Back at CIDC2013 I did a presentation on refactoring and optimizing code and on the general theme of “quality”. One of the points was that the amount of testing you should do is proportional to the consequences of getting it wrong.
Consider these contrasting scenarios:
1. You are doing a batch update where you take a backup, then do the update. If it crashes mid-stream you simply restore to the backup point, fix the problem and re-run. Rinse and repeat.
2. You write a system where lives depend on the software - for example, software for a driverless vehicle or a rocket launch with people on board.
Obviously case 2 has a much higher cost of failure (lives), so the amount of testing needs to be correspondingly much, much higher.
In the mid-1980s I was at a job where there was basically no formal QA (Quality Assurance) apart from one’s own professional pride. We were trusted to test our work well before putting it into production. That kind of setup is only as good as the quality of the programmers.
My next job was the complete opposite - a large team with multiple layers of bureaucracy. A job or fix that might have taken an hour or two at the previous place now took days or even weeks: numerous forms to fill in, documentation, and tests that were handed to a separate QA team who vetted your changes and tested everything. And lots of meetings to discuss everything.
It was a bit of a culture shock moving from one environment to the other and I felt somewhat stifled by the lack of productivity.
There had to be a better way!
I started my own company so I only had to answer to myself (and my clients if I got something wrong).
Mind you, I never wrote software that sent anyone to the moon.
I hadn’t really thought of that before. This is true of the “boilerplate” code but not your custom code that you put in embed points. Dave Harms wrote a series of articles in his Clarion Magazine basically suggesting you take your code out of embeds and place it in classes that could be tested automatically using a unit test framework - something along the lines of the sketch below.
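To make that concrete, here is a rough sketch of the idea. The names (InvoiceCalc, LineTotal) are made up for illustration and this is not Dave’s actual framework code: the point is simply that the calculation lives in a plain class with no dependency on a window or embed point, so a small test program can call it directly with known inputs and check the answers.

```
! --- InvoiceCalc.inc - hypothetical class declaration ---
InvoiceCalc          CLASS,TYPE,MODULE('InvoiceCalc.clw'),LINK('InvoiceCalc.clw')
LineTotal              PROCEDURE(REAL pQuantity, REAL pUnitPrice, REAL pDiscountPct),REAL
                     END

! --- InvoiceCalc.clw - hypothetical implementation ---
  MEMBER()
  INCLUDE('InvoiceCalc.inc'),ONCE
  MAP
  END

! Pure calculation: no window, no globals, so it can be tested in isolation
InvoiceCalc.LineTotal PROCEDURE(REAL pQuantity, REAL pUnitPrice, REAL pDiscountPct)
  CODE
  RETURN pQuantity * pUnitPrice * (1 - pDiscountPct / 100)
```

The embed point then shrinks to a one-line call to LineTotal, and a unit test (hand-rolled, or using a framework like the one Dave described) can exercise the class without ever opening a window.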
Clarion can be divided into two parts - “open” and “closed”. The templates and ABC library are “open” - you can view the source and fix bugs or do your own thing.
The compiler, run-time and drivers are a different matter. My concern in recent years has been that some new versions of Clarion seem to be “two steps forward, one step back”. They fix up some things. And break others.
You hear of people using a later version of Clarion with some particular driver from an earlier version “because it works”.
If true, then that’s a worry. The cost of a mistake here is high. Grounding all the planes for a few hours must have cost millions in productivity (not to mention aggravation).
Reports on this were unclear and contradictory, which I think comes down to media reporters not having an IT background. When I first read about it, I thought they meant they had restored to an earlier version of the software but there was some corruption in a data file, so of course that fixed nothing. Other reports suggested the data corruption was caused by some “bug” in the software. Any of us who have used TPS files in a networked environment in the past probably know about data corruption - not necessarily caused by your own software.
You are probably right, but that didn’t stop some wags (jokers) on the Skype groups from suggesting it must have been TPS file corruption!