I need help with the following issue. I am looking to extract data from Teradata into several files on Unix nodes. Right now we use TPT Export to extract data that is on the order of terabytes. The data is chunked into pieces of several gigabytes based on date, and 3 to 4 TPT export jobs write the chunked data into different files on a single Unix node. This approach needs a window of 20 hours and multiple FastExport utility slots on the server. Now I would like to split the same TPT export job (using one Export operator) so that it writes subsets of the result set into different files on multiple Unix nodes, allowing more data to be extracted with one FastExport utility slot and overcoming the disk I/O bottleneck on the Unix nodes. How can I achieve this?
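For reference, below is a minimal sketch of the kind of job we run today for one date chunk: a single Export operator feeding a DataConnector consumer that writes delimited files on one Unix node. The TdpId, credentials, database, table, column list, date range, and file names are placeholders, not our actual job.

DEFINE JOB export_date_chunk
DESCRIPTION 'Export one date chunk from Teradata to delimited files'
(
  /* DELIMITED output requires an all-VARCHAR schema */
  DEFINE SCHEMA chunk_schema
  (
    sale_date  VARCHAR(10),
    store_id   VARCHAR(11),
    amount     VARCHAR(22)
  );

  DEFINE OPERATOR export_op
  TYPE EXPORT
  SCHEMA chunk_schema
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'mytdpid',        /* placeholder system name */
    VARCHAR UserName     = 'myuser',         /* placeholder credentials */
    VARCHAR UserPassword = 'mypassword',
    INTEGER MaxSessions  = 8,
    VARCHAR SelectStmt   = 'SELECT CAST(sale_date AS VARCHAR(10)),
                                   CAST(store_id  AS VARCHAR(11)),
                                   CAST(amount    AS VARCHAR(22))
                            FROM mydb.sales
                            WHERE sale_date BETWEEN DATE ''2023-01-01''
                                                AND DATE ''2023-01-31'';'
  );

  DEFINE OPERATOR file_writer
  TYPE DATACONNECTOR CONSUMER
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR DirectoryPath = '/data/extracts',  /* local disk on the one Unix node */
    VARCHAR FileName      = 'sales_chunk.dat',
    VARCHAR Format        = 'DELIMITED',
    VARCHAR TextDelimiter = '|',
    VARCHAR OpenMode      = 'Write'
  );

  /* Multiple consumer instances split the output into separate files,
     but every file still lands on the same node's local disk */
  APPLY TO OPERATOR (file_writer[4])
  SELECT * FROM OPERATOR (export_op[2]);
);

We launch each chunk with something like "tbuild -f export_chunk.tpt -j sales_chunk", and 3 to 4 of these run in parallel, each holding its own utility slot.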
@Feinholz In one of your replies you wrote "(There is a way (a bit more complex) to "fake out" TPT)", and I think you were referring to this kind of scenario. Could you please elaborate on how to solve my issue?
Thanks in advance