We have an incremental load failing for some tables when a new column is added in the source tables. We use the 'Table Update' component for the incremental load, and if I am not mistaken, the load fails because the component does not recognise the new column. What could be done to solve this issue?
You could use the JDBC Incremental connector if it is available for Redshift (I use Snowflake); it accommodates schema drift, so any change in the source schema is mirrored in the target table as well.
This is the link: https://docs.matillion.com/metl/docs/2955330/
If not, you can build a custom pipeline that checks for schema changes at the source and applies them to the target before every load.
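A minimal sketch of that custom check, assuming you have already read the column lists (e.g. from `information_schema.columns`) in both warehouses into dicts. The function name and table/column names here are illustrative, not part of any Matillion component:

```python
# Hypothetical sketch: detect columns present in the source table but missing
# from the target, and emit ALTER TABLE statements to add them before the
# incremental load runs.

def alter_statements(table, source_cols, target_cols):
    """Return ALTER TABLE ... ADD COLUMN statements for columns that exist
    in the source schema but not yet in the target schema.

    source_cols / target_cols: dicts of {column_name: data_type}, as could
    be read from information_schema.columns in each database.
    """
    missing = {c: t for c, t in source_cols.items() if c not in target_cols}
    return [
        f"ALTER TABLE {table} ADD COLUMN {col} {dtype}"
        for col, dtype in missing.items()
    ]

# Example: the source has gained col4, which the target lacks.
source = {"col1": "VARCHAR", "col2": "INTEGER", "col3": "DATE", "col4": "INTEGER"}
target = {"col1": "VARCHAR", "col2": "INTEGER", "col3": "DATE"}
print(alter_statements("target.table_a", source, target))
```

Running this generated statement against the target before the incremental load keeps the two schemas aligned, so the 'Table Update' component no longer sees a column mismatch.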
To put it simply: whenever a new column is added to the source table, the incremental run does not load data into the target table because of the column mismatch between source and target.
I have to delete the target table and do a full refresh. Is there any way I can avoid this when a new column is added at the source?
For example: source.table_a has columns 1, 2, 3 and target.table_a has the same columns 1, 2, 3. When a new column 4 is added to source.table_a, I have to initiate a full load for table_a, which I want to avoid.