I’m rewriting the systems for a local non-profit food bank that will have a total of six computers. One computer will be a simple, dedicated Windows 11 Pro machine that holds the database and will not be used by any users. A K.I.S.S. workgroup.
My question: which is the best database driver to use with Clarion V10 for a small workgroup? Clarion (DAT) files, TopSpeed (TPS) files, etc.?
In the past I have only used Clarion (DAT) files, and they worked for all of my purposes, so I am most comfortable with that design. But I am willing to change if there are enough reasons to justify the change.
If you do not want the overhead of SQL, I would use only TPS files, not DAT files.
Will the data computer be running the only program accessing the data?
If you are going to stick with ISAM files, I would at least move to TPS files – you can frame your transactions, and TPS files are more robust.
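For example, a transaction frame with TPS looks roughly like this. This is a minimal sketch only; the file names are invented and not from your app:

```
! LOGOUT/COMMIT/ROLLBACK are Clarion's transaction-framing statements,
! and the TopSpeed driver supports them (file names here are hypothetical).
LOGOUT(1, Donations, Clients)    ! start a transaction frame, 1-second timeout
PUT(Donations)                   ! write the current Donations record
PUT(Clients)                     ! write the current Clients record
IF ERRORCODE()
  ROLLBACK                       ! undo both writes on any error
ELSE
  COMMIT                         ! make both writes permanent together
END
```

With DAT files you have no equivalent frame, so a crash between the two writes can leave the data half-updated.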
Myself, I would seriously consider installing a small, free SQL server on your server. I’d suggest PostgreSQL. It will save you worrying about corruption of the TPS files, particularly corruption caused by wrong oplocks settings. And, even though you are clearly a dinosaur, having a SQL backend makes writing your program simpler. You can write your program just like you would if you had ISAM files on the back end, but if you learn just a little bit, there are vast improvements to be had in:
replacing processes with a single SQL statement, with huge time savings
simplifying and speeding up reporting using VIEWs created on the server
better backups
more robust relational integrity and input checking
easier security and auditing
However, SQL does require a change in the way you think about things. If all your previous experience with data has been looping through it one row at a time, you will have a bit of learning to do.
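To make that concrete, here is a rough sketch of the same batch job done both ways. All table, column, and key names are invented for illustration, and the PROP:SQL call assumes the Donations table is already on a SQL driver (ODBC or similar); file opening and error handling are trimmed:

```
! Row-at-a-time, ISAM style: walk the whole file from the client
SET(Don:KeyId)                                     ! hypothetical key
LOOP UNTIL Access:Donations.Next() <> Level:Benign ! ABC-style read loop
  IF Don:Posted THEN CYCLE.                        ! skip rows already done
  Don:Posted = TRUE
  Access:Donations.Update()
END

! Set-based, SQL style: one statement executed on the server
Donations{PROP:SQL} = 'UPDATE donations SET posted = 1 WHERE posted = 0'
IF ERRORCODE()
  ! handle/log the driver error
END
```

The same idea applies to the reporting point above: a VIEW created once on the server (say, totals by month) can be read from Clarion like any other table instead of being recomputed in a process loop.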
I have been trying to get one of my clients to switch for over five years, and they always find some excuse. I warned them about approaching the 2 GB file limit on TPS and they said “don’t worry”. Well, we are there now, they are talking SQL, I am saying 6 to 12 months to production, and they are freaking out. Oh well.
I would use TPS for a small application. You could use a free SQL database, such as Microsoft SQL Server Express Edition, but a TPS application is KISS: it’s just easier, and it’s reliable. SQL Server Express Edition has a limit of 10 GB per database. See “7 SQL Server Express Limitations [2023]”.
Personally I’d recommend either PostgreSql or Firebird. Both are free. Both are easy to work with.
Firebird is “lighter” on resources; Postgres has more power. If you’re coming from DAT files, both are more than capable. Both run just fine on a simple Windows 10 box.
While there are substantial advantages to be gained in the long run, switching and getting it all running doesn’t take long. Probably less than a month. (And much of that time will be your learning curve.) Then you can start optimizing and so on from there.
There are three key reasons I’d go this route:
Performance. Ultimately it will make the program faster (perhaps not in the initial conversion, but later on).
Reliability. TopSpeed works really well, but there are factors outside your control that can lead to file corruption. Corruption is usually not common at small sites, but even once or twice a year is a pain (and unsustainable once you retire).
Openness. Storing the data in an “open” way allows other tools and programs easier access (with appropriate permissions). This benefits the client when it comes to integrations, ad-hoc reporting, and so on.
I know it feels like a big jump, but it’s not as hard as it used to be, and once you overcome the initial learning curve you’ll find it’s well worth the effort.
I agree with Bruce on all points.
Essentially, once you’ve gone the SQL route you discover there is a LOT of power at your disposal. It’s a little difficult to go back again.
I’d also recommend PostgreSQL; it’s under constant active development and will scale well from one to several thousand users. It’s also free, which never hurts.
The transition can be pretty painless. Mostly everything “just works”, although there are a few caveats, particularly with some browses.
Clarion is actually amazingly useful for conversions because of the driver technology. You can convert a system in bits: just convert one table, or a small group of tables, at a time. Clarion happily works with mixed drivers. I converted a large suite of apps this way and it worked a treat.
You may have to turn off transactions in some areas for a bit. But it all works.
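A minimal sketch of what the mixed-driver stage might look like in the data section. The file, column, and connection-string names are invented, and the exact OWNER string depends on the DSN you set up for your SQL backend:

```
GLO:Connect  CSTRING(256)      ! e.g. 'MyDSN,user,password' (format assumed)

! Not converted yet: still on the TopSpeed driver
Clients      FILE,DRIVER('TOPSPEED'),NAME('clients.tps'),PRE(Cli),CREATE
KeyId          KEY(Cli:Id),NOCASE,OPT
Record         RECORD
Id               LONG
Name             STRING(60)
               END
             END

! Already converted: same app, same procedures, different driver
Donations    FILE,DRIVER('ODBC'),NAME('donations'),OWNER(GLO:Connect),PRE(Don)
KeyId          KEY(Don:Id),NOCASE,OPT
Record         RECORD
Id               LONG
ClientId         LONG
Amount           DECIMAL(9,2)
               END
             END
```

Both files open and browse in the same procedures; you move one table’s DRIVER/NAME/OWNER at a time and the rest of the app keeps working.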
While I agree that some form of SQL would be the “premium” solution, it feels like overkill for this particular setup. As I mentioned, this is a very small-scale installation. In my previous business, I successfully used MSSQL alongside Clarion files, and the system has been running smoothly for 33 years without any issues. If it’s not broken, why fix it?
Currently, the food bank is using Microsoft Access, which they’ve relied on since 2004. However, Access has been painfully sluggish for their needs. Despite never purging old data, their total database size is only 149 MB—nowhere near the 2 GB limit of TPS files. Given the small scale of their operations, I don’t foresee them ever exceeding that threshold.
For these reasons, TPS should be more than sufficient for their needs. I appreciate all the insights and suggestions from everyone—thank you for your input!
Be aware that the network landscape has changed a bit in those 33 years. The SMB protocol that file sharing relies on has moved from v1 to v3, and v3 does not allow oplocks to be turned off. Oplocks are a major cause of TPS corruption, and SMB v1 is disabled by default on recent Windows versions.
Just be aware of this and go in with eyes open.
I have never heard of SMB protocol. Or, at least I have gladly forgotten about it!
This client refuses to have this workgroup connected to the Internet, since they store Social Security numbers (really just the last four digits, but they still do not want this workgroup on the Internet). I have set up the six computers on a simple 1 Gb switch and planned on using just a Windows workgroup for the entire project.
Will SMB protocol be an issue then?
You are referring to “oplocks”… Are you talking about holding records? Or transaction processing via Clarion? Or is this a Windows setting that needs to be set on each workstation?
SMB is Server Message Block. You basically use it every day when you access files in shared folders on Windows servers, since it is the architecture underpinning file and folder sharing in a Windows environment.
SMB1 allowed you to turn off oplocks, which was something that had to be done on each client if you were using TPS files (to avoid corruption issues).
But the oplocks setting has gone away with newer SMB versions, and (I think) SMB1 is disabled by default on new installs because of security issues.
Hence the recommendations to use SQL instead.
Their database is MS Access, which is already a SQL-based database and is easier to convert to MS SQL Server than it would be to convert to TPS. In fact, if you just converted their data to MSSQL, they could continue using their existing system without the drawbacks of the MS Access database.
Commuting to work in an 18-wheeler, or a 30-second helicopter ride, is overkill. Using a SQL data store for an app of any size is not.
Reliability and performance are never “overkill” in my book.
We took over an app for panel shops; it runs on small LANs and contains small amounts of data. The first thing we did was convert it to Firebird. (We chose Firebird for its really small footprint.) Best decision we ever made.
Given that the cost is the same, I’m not sure I follow your logic.
Either way, it is still easier to convert to MSSQL than to TPS, and regardless of the database the app will need to be rewritten in Clarion, which seems to be what Robert is intent on doing.