Hi Feinholz,
You mentioned earlier that there was going to be some consideration towards building a tool that would be able to decode the DataParcel column in the _ET table in a convenient manner.
Is that tool available?
Currently, we import a LOT of web data (30 million records a day) into one of our tables using the TPT LOAD operator. Since it is web data, some of it is inevitably badly formatted. As a result, roughly 20-30k records make their way to the error table every day due to bad formatting. Though the ErrorCode and ErrorField columns tell us what the error is and where it occurred, it would be great if we could see the actual record too, rather than only a binary parcel.
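For reference, this is roughly all we can get out of the error table today (the table name below is just illustrative, following the usual <target>_ET convention):

SELECT ErrorCode, ErrorFieldName, DataParcel
FROM   web_target_ET
ORDER  BY ErrorCode;
-- DataParcel comes back as a raw VARBYTE value, so the offending
-- record itself is not human-readable without some kind of decoder.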
Due to this limitation, we're actually considering loading all 30 million records into a VARCHAR-only NOPI stage table and then inserting them into the target table with the proper data types using MERGE (see the sketch below). The downside of this approach is the additional casting we need to do on every column before the records reach the target table. Casting can become really expensive from a CPU standpoint, especially over a lot of records and a lot of columns, and our target table is both very long and very, very wide.
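To illustrate the workaround, here is a rough sketch of what we have in mind. The table and column names (web_stage, web_target, user_id, hit_ts, page_url) are made-up placeholders, and the ON clause would of course have to match the real primary index of our target table:

-- All-VARCHAR NOPI staging table that the TPT LOAD job would target.
CREATE MULTISET TABLE web_stage (
    user_id   VARCHAR(20),
    hit_ts    VARCHAR(26),
    page_url  VARCHAR(1000)
) NO PRIMARY INDEX;

-- Second step: cast every column on the way into the typed target.
MERGE INTO web_target AS tgt
USING (
    SELECT CAST(user_id AS BIGINT)      AS user_id,
           CAST(hit_ts  AS TIMESTAMP(6)) AS hit_ts,
           page_url
    FROM   web_stage
) AS src
ON  tgt.user_id = src.user_id
AND tgt.hit_ts  = src.hit_ts
WHEN MATCHED THEN
    UPDATE SET page_url = src.page_url
WHEN NOT MATCHED THEN
    INSERT (user_id, hit_ts, page_url)
    VALUES (src.user_id, src.hit_ts, src.page_url);

It is all of those CASTs, repeated across hundreds of columns and 30 million rows a day, that we are worried about.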