Adding a column to a SQL (ODBC) database w/o changing the CW DCT

I tried a quick search of previous topics but did not see anything, so I am asking again.

Can I add columns at the end of the record in a PostgreSQL database table, without changing my CW DCT and without causing any errors in my CW app (Error 47, etc.)?

ALTER TABLE *table_name* ADD *column_name datatype*;

I need to add the columns, but do not want to change my program until I am ready to finalize the coding.

It is critical that production not be affected until I make the needed changes and release new versions of the program utilizing the new columns.

Thanks.

Yes. Your Clarion program does not know about, or care about, any columns in the table that are not in the dictionary. It will know they are there when you open the table, because it does a “select * from table_name” to get a list of columns and types from the server, but it will happily ignore any it can’t match to columns in the Clarion dictionary.

There are, obviously, things that could foul up your program - for example, if you make the new columns NOT NULL and don’t provide a default value, any insert from your program will fail - but that’s about it.
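A safe pattern, then, is to add the new columns either as nullable or as NOT NULL with a default. A sketch in PostgreSQL terms (the column names below are illustrative, not from the actual schema):

```sql
-- Safe: nullable column; inserts from the old program that omit it still succeed
ALTER TABLE env_customer ADD COLUMN notes varchar(200);

-- Also safe: NOT NULL is fine as long as a default is supplied,
-- because inserts that omit the column get the default value
ALTER TABLE env_customer ADD COLUMN is_archived boolean NOT NULL DEFAULT false;

-- Risky: NOT NULL with no default. PostgreSQL rejects this outright on a
-- non-empty table, and even on an empty one, every insert from the old
-- program that omits the column would then fail.
-- ALTER TABLE env_customer ADD COLUMN region char(2) NOT NULL;
```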


Thank you - I thought this was true, but could not remember for sure.

I was planning something like

ALTER TABLE env_customer
  ADD COLUMN EagleStamp boolean,
  ADD COLUMN EagleOwner char(7),
  ADD COLUMN EaglePermit char(6);

(PostgreSQL does not support MySQL’s AFTER clause, but new columns are always added at the end of the row anyway.)

Adding new fields to the end of a record in SQL tables has always worked well for me when testing new versions on a live DB. It even works for stored procedures.

I just added 3 columns (21 bytes total) to 8.8 million records and it took 40 min 45 sec.

The system slowed a bit, but has not crashed any programs not using an updated DCT for the new fields added.

It’s more work, but if the 40-minute wait is unacceptable, then exporting the data to text, dropping the table, creating a new one, and importing the modified text data can outperform it in some cases with larger record counts (billions).

FWIW
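One thing worth checking against your server version: adding a nullable column with no default has always been a metadata-only change in PostgreSQL, and from version 11 on the same is true for a column with a constant default - the default is stored once in the catalog rather than written into every row - so a long rewrite like that can often be avoided entirely:

```sql
-- On PostgreSQL 11+, neither of these rewrites the 8.8 million rows
ALTER TABLE env_customer ADD COLUMN EagleStamp boolean;
ALTER TABLE env_customer ADD COLUMN EagleOwner char(7) DEFAULT '';
```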

Taking the system down is not an option - wish it were.

I find it hard to believe that writing out to text and reading in from text could possibly be faster than an in-database operation. What is often suggested is:

1. Create a new table with the new structure, but without indexes.
2. Insert the data from the old table (with no indexes or constraints to check, this is very fast).
3. Create indexes and constraints on the new table.
4. Drop the old table.
5. Rename the new table to the old table's name.

It’s just those last two, very quick, steps where people cannot use the existing table.
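In PostgreSQL, where DDL is transactional, the whole sequence above can even be wrapped in a single transaction. A sketch using the table from earlier in the thread (the index and key column names are illustrative):

```sql
BEGIN;

-- 1. New table with the new structure, no indexes yet
CREATE TABLE env_customer_new (LIKE env_customer INCLUDING DEFAULTS);
ALTER TABLE env_customer_new
  ADD COLUMN EagleStamp boolean,
  ADD COLUMN EagleOwner char(7),
  ADD COLUMN EaglePermit char(6);

-- 2. Bulk copy: no indexes or constraints to maintain, so this is fast
INSERT INTO env_customer_new
  SELECT *, NULL, NULL, NULL FROM env_customer;

-- 3. Build indexes and constraints only after the data is in place
CREATE INDEX env_customer_new_custno_idx ON env_customer_new (customer_no);

-- 4. and 5. The quick swap: these are the only steps that block
-- other users of the table
DROP TABLE env_customer;
ALTER TABLE env_customer_new RENAME TO env_customer;

COMMIT;
```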