This is an interesting topic. I need help with a similar problem. I am looking to extract data from Teradata into several files on UNIX nodes. Right now we are using TPT Export to extract data that is on the order of terabytes. The data is chunked into pieces of several gigabytes based on date, and several FastExport jobs write the chunked data into several files on one UNIX node. We need a window of 20 hours and multiple FastExport utility slots on the server to achieve this. Now I would like to split the same TPT Export job across multiple UNIX nodes, so that more data can be extracted within a single FastExport utility slot (to overcome the disk I/O bottleneck on one UNIX node). The extracted data from FastExport (or TPT Export) would then be written into multiple files on different UNIX nodes.
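For context, a minimal sketch of the kind of per-chunk TPT Export job I am describing is below. The database, table, columns, credentials, and paths are all placeholders, not our real names. Note that even with several instances of the file-writer operator (the [n] syntax), every output file still lands on the node where the tbuild job runs, which is exactly the disk I/O limit I am trying to get around.

/* Sketch of one date-chunked export; all names and paths are hypothetical */
DEFINE JOB export_one_chunk
DESCRIPTION 'Export one date range to flat files on a single UNIX node'
(
  DEFINE SCHEMA sales_schema
  (
    sale_date   VARCHAR(10),
    store_id    VARCHAR(10),
    amount      VARCHAR(20)
  );

  DEFINE OPERATOR td_export
  TYPE EXPORT
  SCHEMA sales_schema
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'tdprod',      /* hypothetical system name   */
    VARCHAR UserName     = 'etl_user',    /* hypothetical credentials   */
    VARCHAR UserPassword = 'etl_pwd',
    VARCHAR SelectStmt   = 'SELECT sale_date, store_id, amount FROM sales_db.sales WHERE sale_date BETWEEN ''2023-01-01'' AND ''2023-01-07'';'
  );

  DEFINE OPERATOR file_writer
  TYPE DATACONNECTOR CONSUMER
  SCHEMA sales_schema
  ATTRIBUTES
  (
    VARCHAR DirectoryPath = '/data/extracts/chunk_2023_01_01',  /* local disk on one node */
    VARCHAR FileName      = 'sales.dat',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = '|',
    VARCHAR OpenMode      = 'Write'
  );

  /* Two export instances feed four writer instances; output is split
     across several files, but all of them sit on the same UNIX node. */
  APPLY TO OPERATOR (file_writer[4])
  SELECT * FROM OPERATOR (td_export[2]);
);

We submit one such script per date range with something like "tbuild -f export_chunk.tpt", and it is these parallel per-chunk jobs that consume the multiple utility slots and the 20-hour window.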