In C6 (not tried since), I used to be able to create a Clarion DCT file/table where the data source was ASCII, set the field delimiters, and point the filename at the CSV file's path, so the CSV file could be used as a data source in a Clarion app.
Making a simple Clarion app that opened the CSV file and then copied the records into a TPS or SQL database was optional; the app wizards made it a 5 or 10 minute job, depending on whether I had to process the data in some way, like removing duplicates, concatenating fields, or splitting fields (e.g. a fullname field into firstname, middlename and surname fields, or vice versa).
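That sort of clean-up pass is easy to sketch outside Clarion too. Here's a minimal Python example, with the column names, the sample data, and the name-splitting rule (first word, last word, everything in between as middle) all being assumptions for illustration; it de-duplicates rows and splits a fullname field into three fields:

```python
import csv
import io

# Stand-in CSV data with one duplicate row; the real file would be read from disk.
RAW = """fullname,email
John A Smith,john@example.com
Jane Doe,jane@example.com
John A Smith,john@example.com
"""

def split_fullname(fullname):
    """Split 'First [Middle...] Last' into (first, middle, last)."""
    parts = fullname.split()
    if not parts:
        return "", "", ""
    if len(parts) == 1:
        return parts[0], "", ""
    return parts[0], " ".join(parts[1:-1]), parts[-1]

def clean(text):
    """Yield de-duplicated rows with fullname split into three fields."""
    seen = set()
    for row in csv.DictReader(io.StringIO(text)):
        key = (row["fullname"], row["email"])
        if key in seen:
            continue  # drop exact duplicate record
        seen.add(key)
        first, middle, last = split_fullname(row["fullname"])
        yield {"firstname": first, "middlename": middle,
               "surname": last, "email": row["email"]}

rows = list(clean(RAW))
print(rows)
```

The reverse direction (concatenating firstname/middlename/surname back into a fullname) is just the same idea run the other way.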
If you are importing a large CSV file, like million+ or billion+ records, into a SQL db where MS SQL Server is the ultimate data repository, SQL Server's built-in import options (the flat-file import wizard, BULK INSERT, or the bcp utility) are the fastest to use. They bypass the record and table locking which could otherwise be in action.
Importing and Working with CSV Files in SQL Server (sqlshack.com)
By comparison, importing very large CSV files into a SQL server using a Clarion app (or any other app) would take hours instead of minutes. Sure, the import time could be reduced by using LOGOUT and COMMIT to import records in batches, but everything was slower in my tests unless I used the built-in MS SQL Server flat-file import method detailed above. I'm sure this applies to other SQL servers, not just MS SQL Server, but some of the CSV files I had were massive: literally hundreds of millions of records in a single CSV file.
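For anyone who hasn't used the batching trick: the point of LOGOUT/COMMIT is one transaction per batch instead of one per record. A rough Python sketch of the same idea, using sqlite3 purely as a stand-in for the target server (the batch size, table layout, and sample data are all assumptions):

```python
import csv
import io
import sqlite3

BATCH_SIZE = 1000  # assumed batch size; tune for the real server

# Stand-in CSV data; the real files had hundreds of millions of rows.
RAW = "firstname,surname\nJohn,Smith\nJane,Doe\nAnn,Lee\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (firstname TEXT, surname TEXT)")

reader = csv.reader(io.StringIO(RAW))
next(reader)  # skip the header row

batch = []
for row in reader:
    batch.append(row)
    if len(batch) >= BATCH_SIZE:
        conn.executemany("INSERT INTO people VALUES (?, ?)", batch)
        conn.commit()  # one commit per batch, like LOGOUT/COMMIT
        batch.clear()
if batch:  # flush the final partial batch
    conn.executemany("INSERT INTO people VALUES (?, ?)", batch)
    conn.commit()

count = conn.execute("SELECT COUNT(*) FROM people").fetchone()[0]
print(count)
```

Even with batching like this, in my tests nothing came close to the server's own flat-file import path, which is why I'd still reach for that first on big files.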