There is an issue with Browse-Forms inserting records when the last record read before the insert had NULL values.
I shared this issue and asked for ideas on the last ClarionLive. SETNONNULL is per field (it could be done by looping through the fields). Going by the documented behavior of SETNULLS, it does not help in this case, as it needs a previously saved state retrieved with GETNULLS. Generally speaking, it would be good to have a built-in solution instead of trying to work around this. I filed PTSS #43443.
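The per-field loop could look something like this sketch. It is only an illustration: the procedure name is made up, 256 fields is an assumed upper bound, and it assumes SETNONNULL accepts the field reference returned by WHAT(), which would need verifying:

```clarion
! Sketch: force every field of a record to the not-null state.
! Assumption: SETNONNULL() accepts the ANY reference returned by WHAT().
ResetRecordNulls     PROCEDURE(FILE pFile)
Rec                    &GROUP
Fld                    ANY
i                      LONG
  CODE
  Rec &= pFile{PROP:Record}      ! the file's record structure
  LOOP i = 1 TO 256              ! assumed upper bound on field count
    Fld &= WHAT(Rec, i)          ! reference to the i-th field
    IF Fld &= NULL THEN BREAK.   ! past the last field
    SETNONNULL(Fld)              ! mark the field as not-null
  END
```

Calling something like this right after priming a new record would make the insert deterministic regardless of which row was read last.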
Description: When inserting new rows on a Form, each field's content is primed by default to 0/empty, but the NULL state of the fields is not reset. So, depending on the last row accessed before the Insert action, the NULL state may differ, making it random/non-deterministic whether a column whose value remains 0 or empty is saved to the SQL database as 0/empty or as NULL.
While this behavior could have gone unnoticed for a long time, its effects are unpredictable: e.g. server-side filters like somefield = 0 will not include the rows that were saved as NULL.
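For example, a server-side browse filter like this sketch (the view label and column prefix are made up for illustration) will silently exclude whichever rows happened to be saved as NULL:

```clarion
! In SQL, 'somefield = 0' does NOT match rows where somefield IS NULL,
! so the visible result set depends on how each row happened to be saved.
BRW1::View{PROP:Filter} = 'TES:SomeField = 0'
```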
This happens with the MSSQL driver and also with the ODBC driver (whether pointing to MSSQL or to an SQLite table through Christian Werner's SQLite ODBC driver); it does not happen with the Clarion SQLite driver.
Steps to Reproduce: Build and start the test application and connect to an MSSQL database; it will create a simple table called TestFile on the master db (all done by built-in Clarion code, since the table has the CREATE attribute).
Select Browse TestFile and add a row with key value 1 and nullable field value 0.
Add another row with key value 2 and nullable field value 0.
Click the Set Null and Update button to set NULL on either the first or the second row.
Try inserting rows 3 and 4, changing the highlighted row each time so that you insert once from a row with a NULL value and once from a row without one: each new row will replicate the NULL status of the originally selected row.
I perhaps experienced this long ago after moving to Postgres, but I know there were other situations where setting NULLs in Clarion was a pain in the backside. I finally settled on using NULLIF in PG triggers instead, and now I spend very little time in Clarion thinking about null checking.
Philosophically I’d say everything should start off as NULL, and if the person filling in the form wants something to be 0 and not null, they have to explicitly put a 0 in the field.
But I can also think of situations where that would be annoying, like specifying your annual income from different sources (earnings, interest, business income, etc.), where as a form filler you would expect to only have to fill in the types of income you have. But I think I would treat those cases as exceptions and would explicitly set the fields to non-null zero values when entering the form.
Like most people, probably, I only ever ran into problems where Clarion tried to put 0s into fields which were lookups, and then the database told me that lookup value 0 didn't exist in the lookup table; so I always assumed that Clarion was (annoyingly) filling numeric fields with non-null 0 values. Interesting to know that is not always the case, but I doubt it will cause me much grief.
Douglas, Andy also mentioned the triggers workaround; well, not really the same thing, but Clarion client-side triggers.
Daniel, that could help with the queries generated by the view engine/drivers, similar to the new OPT implementation in release 13505, but it could come with extra side effects: per the documentation, it would always include rows with NULLs, while sometimes you may want to filter them out.
The same goes for server-side triggers: if you have queries testing @@ROWCOUNT, triggers would alter the logic, and likewise for queries with an OUTPUT clause.
It is probably better to go with SETNONNULL per field, and, to be safe, also with ISNULL/COALESCE in filters, although that could have a performance impact with respect to indexes.
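A null-safe filter pushed to the server could look like this sketch (assuming the MSSQL driver and that the driver passes SQL() filter fragments through verbatim; the view label and column name are made up). The caveat is that wrapping the column in a function generally prevents the server from using an index on it:

```clarion
! Null-safe equality test on the server side (MSSQL syntax).
! ISNULL(col, 0) maps NULL to 0, so rows saved either way match,
! but the wrapped column can no longer use its index.
BRW1::View{PROP:Filter} = 'SQL(ISNULL(SomeField, 0) = 0)'
```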
Jon, I understand your thinking about implementing the NULL concept completely, but Clarion aimed to simplify all that for Clarion programmers, and I'm not against that.
The point is: Clarion being data-centric, with Browse-Form as a core part, why should inserting a new record be non-deterministic in the NULL state of its fields, depending on whichever row happened to be selected before?
I've been thinking about this a bit. Obviously the root of the issue is the CLEAR(file) command.
Currently that clears all the fields in the record, but it leaves the "null state" of each field alone, neither setting nor unsetting it.
It should probably either clear the null state, or set the null state. An argument could be made both ways.
Given that most Clarion developers mostly ignore null as an option, I suppose the default could be clearing the null state. That's sorta backwards, but whichever way it goes, someone will be unhappy.
So my vote is that CLEAR(file) resets the null state of all fields to not-null.
Thanks Bruce, I also think CLEAR(file/record) is the root, but I omitted mentioning it to avoid arguments over whether it is a bug or not; its documentation is a bit confusing regarding NULLs. What is "clear" is that inserting rows should behave consistently. Perhaps the solution is resetting nulls on CLEAR, or adding a documented nulls-reset function and modifying the ABC and Legacy templates to call it.
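If such a function existed (RESETNULLS here is a hypothetical name, not a real Clarion function), the template change might amount to one extra call where the form primes a new record, sketched here against ABC's PrimeFields method with made-up file labels:

```clarion
! Hypothetical sketch: RESETNULLS() does not exist today; it stands in
! for whatever documented nulls-reset function might be added.
ThisWindow.PrimeFields PROCEDURE
  CODE
  CLEAR(TES:Record)        ! clears values but, today, leaves null states alone
  RESETNULLS(TestFile)     ! proposed: force every field to not-null
  PARENT.PrimeFields()
```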
Thanks Carlos, I've also tested SETNULLS with a fake second parameter and it worked, but, as inferred in the first post, that is "undocumented", so it could work with some records but is not guaranteed to work in all cases nor to keep working in later Clarion versions.
Notes from a test made on another record with more fields (the hex values are dumps from the GETNULLS function; it can be seen there are lots of values with unknown meaning):
!SETNONNULL(MyField)
!SETNONNULL(MyFile,MyField)
!SETNONNULL(MyRecord)
!SETNONNULL(MyFile,MyRecord)
SETNULLS(MyFile,ALL('<0>',2048)) !8 bytes per field (2048: up to 256 fields in the record); OVER'd fields also count
!Original:
!040000001E0000001E0000001E000000000000001E000000060000000500000003000000050000000300000010000000000000000000000000000000000000000300000000000000FFFFFFFF
!04000000FDFFFFFFFDFFFFFFFDFFFFFFFDFFFFFFFDFFFFFF06000000FDFFFFFFFDFFFFFFFDFFFFFFFDFFFFFF10000000080000000600000006000000FDFFFFFFFDFFFFFFFDFFFFFF01000000
!After SETNULLS with zeros, testing with GETNULLS (it seems it may have reset more values than just the null state):
!00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
!00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
!Instead of SETNULLS, after only SETNONNULL(MyField) or SETNONNULL(MyFile,MyField):
!0000001E0000001E0000001E000000000000001E0000000600000005000000030000000500000003000000100000000000000000000000000000000000000003000000000000000000000004
!000000FDFFFFFFFDFFFFFFFDFFFFFFFDFFFFFFFDFFFFFF06000000FDFFFFFFFDFFFFFFFDFFFFFFFDFFFFFF10000000080000000600000006000000FDFFFFFFFDFFFFFFFDFFFFFF000000002F
!Instead of the above, if using SETNONNULL(MyRecord) or SETNONNULL(MyFile,MyRecord):
!00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
!00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008000000060000000600000000000000000000000000000000000000
!But then, when clicking OK to insert, it fails:
!Record Insert Error
!An error was experienced during the creation of a record.
!File:MyFile
!Error: Invalid date format (22008).