fastload cannot start with error "The request exceeds the length limit, The maximum length is 1048500." - response (1) by dnoeth
Can you show the actual FastLoad script?
fastload cannot start with error "The request exceeds the length limit, The maximum length is 1048500." - response (2) by jingguo
SESSIONS 4; /* optional: total number of sessions to be allotted for the script */
ERRLIMIT 1000; /* optional */
logon td/user,pass;
DATABASE dev_scratch;
DROP TABLE target_table_test; /* final target table */
DROP TABLE error_1; /* error table, internal to the FastLoad utility; it must be defined */
DROP TABLE error_2; /* error table, internal to the FastLoad utility; it must be defined */
CREATE MULTISET TABLE target_table_test
(
column1 CHAR(40) CHARACTER SET LATIN NOT CASESPECIFIC,
column2 CHAR(60) CHARACTER SET LATIN NOT CASESPECIFIC,
column3 TIMESTAMP(6),
column4 TIMESTAMP(6),
column5 FLOAT,
column6 FLOAT,
column7 CHAR(8) CHARACTER SET LATIN NOT CASESPECIFIC,
column8 FLOAT,
column9 FLOAT,
column10 CHAR(100) CHARACTER SET LATIN NOT CASESPECIFIC,
column11 CHAR(60) CHARACTER SET LATIN NOT CASESPECIFIC)
NO PRIMARY INDEX ; /* the table doesn't have many rows, so I used NO PRIMARY INDEX; it doesn't seem to be the cause of the problem anyway */
SET RECORD VARTEXT "\t"; /* delimiter in the source file */
DEFINE /* define the structure of the source file */
column1 (VARCHAR(80)),
column2 (VARCHAR(120)),
column3 (VARCHAR(52)),
column4 (VARCHAR(52)),
column5 (VARCHAR(60)),
column6 (VARCHAR(60)),
column7 (VARCHAR(16)),
column8 (VARCHAR(60)),
column9 (VARCHAR(60)),
column10 (VARCHAR(200)),
column11 (VARCHAR(120))
File=csv_fl.dat; /* source file location */
SHOW;
BEGIN LOADING target_table_test
ERRORFILES error_1, error_2;
INSERT INTO target_table_test /* final insert */
(
column1
,... /* columns 2 through 10 omitted for brevity in this post */
,column11
)
VALUES
(
:column1
,... /* values for columns 2 through 10 omitted for brevity in this post */
, :column11
)
;
END LOADING;
LOGOFF;
----------
I have to mention that if I replace the .dat file with a file of 100 lines, this script works.
I use this command to run it:
fastload < test_fastload.fl
fastload cannot start with error "The request exceeds the length limit, The maximum length is 1048500." - response (3) by jingguo
The script is the same as in my previous response; the only difference is the delimiter line below. The source file is tab-delimited, and I could not type a literal tab character in this post ("\t" above was only a stand-in):
SET RECORD VARTEXT ""; /* delimiter is a tab character */
Insert records in bulk into a Teradata Table - forum topic by indra91
I have to insert some 50 million rows into a Teradata table. The insert data will be generated by a shell script. I know we can import data from a file using the IMPORT data option in SQL Assistant, but will that approach work for 50 million records, considering the input file will be huge, or should we use another utility? Any suggestions are welcome.
Regards,
Indranil Roy
Teradata JDBC Driver returns the wrong schema / column nullability - response (14) by tomnolan
The Teradata JDBC Driver's ResultSetMetaData.isNullable method has worked the way it does for several years, and we need to preserve its current behavior for applications that expect it.
We can consider providing a new connection parameter, such as MAYBENULL=ON, that would let you opt in to a different behavior for the ResultSetMetaData.isNullable method. The default would be MAYBENULL=OFF, preserving the existing behavior.
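If such a parameter were added, it would be enabled through the connection URL like any other connection parameter. For example (MAYBENULL is only a proposal at this point, not an existing parameter):
jdbc:teradata://dbshostname/DATABASE=mydb,MAYBENULL=ON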
How can I load files into TD tables using .NET (C#) - forum topic by sharonn
Hi,
I'm writing a program that connects to TD.
During execution I create a class structure that I need to move to the TD database.
In .NET I can use Bulk Copy, or write the data to text files and then use Bulk Insert.
Is there an easy, direct way of doing that in TD (like Bulk Copy or Bulk Insert)?
I know there are tools like mload/fload/... How can I use their scripts from C#?
Thank you
Sharon
Insert records in bulk into a Teradata Table - response (1) by dnoeth
No, not SQL Assistant; everything is faster than that.
Better to do it in Teradata Studio (if you prefer a GUI; it will use the JDBC FastLoad protocol if the target table is empty) or switch to TPT (if it's a delimited file that matches the columns of the target table, TPT's Easy Loader will be the easiest way).
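For example, a minimal Easy Loader invocation looks something like this (all names below are placeholders for your system, credentials, file, and table):
tdload --SourceFileName /path/to/data.csv --TargetTdpId mysystem --TargetUserName myuser --TargetUserPassword mypass --TargetTable MyDb.MyTable myJobName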
Using RJDBC and connected successfully, I get errors when trying to write using "as.td.data.frame" or "dbWriteTable"... - response (7) by tomnolan
@jknife What problem are you encountering? Please post details of your code and what error you are seeing.
load_from_hcatalog --- error while selecting Varchar,Date,Timestamp column - response (1) by Arpan.Roy
Hi All,
Can you please help us with the load_from_hcatalog issue we are facing?
Thanks in advance.
Arpan.
Release Lock from a Table - response (2) by Ghalia
Hi Dieter,
After using LOCK TABLE MY_TABLE EXCLUSIVE,
I run an update.
Then, after the update, I want to add something like RELEASE LOCK on MY_TABLE, in case of failure.
Is this possible in Teradata (14), please?
Ghalia
Easyloader Schema Error 15.10 Only - response (9) by dlcook
I may be running into this issue as well. How does tdload determine the source schema?
I'm following along with the setup here: http://developer.teradata.com/database/articles/teradata-express-14-0-for-vmware-user-guide which shows a table being created with an INTEGER field, but I'm not able to successfully load data if my table has any non-VARCHAR fields. My thought was that tdload is getting the source schema from the target table schema, but if that's the case, then the example in the link above shouldn't work. Am I missing something?
Here's my table:
CREATE TABLE TestDB.TestTable
(ColInt INTEGER,
ColVarchar VARCHAR(20))
Here's the tdload command:
C:\Users\Dave\Desktop>tdload
--SourceFileName "C:\Users\Dave\Desktop\test.csv"
--TargetTdpId localtdat
--TargetWorkingDatabase TestDB
--TargetTable TestDB.TestTable
--TargetUserName dbc
--TargetUserPassword dbc
testJob
Here's the result:
Teradata Load Utility Version 15.10.01.02 32-Bit
TDLOAD: TPT05550: Warning: no Source File Delimiter specified, default "," will be used
Teradata Parallel Transporter Version 15.10.01.02 32-Bit
Job log: C:\Program Files (x86)\Teradata\client\15.10\Teradata Parallel Transporter/logs/testJob-1.out
Job id is testJob-1, running on DavePC
Teradata Parallel Transporter DataConnector Operator Version 15.10.01.02
$FILE_READER[1]: DataConnector Producer operator Instances: 1
$FILE_READER[1]: TPT19108 Data Format 'DELIMITED' requires all 'VARCHAR/JSON/JSON BY NAME/CLOB BY NAME/BLOB BY NAME/XML BY NAME/XML/CLOB' schema.
$FILE_READER[1]: TPT19015 TPT Exit code set to 12.
Teradata Parallel Transporter Load Operator Version 15.10.01.02
$LOAD: private log specified: LoadLog
$LOAD: connecting sessions
$LOAD: preparing target table
$LOAD: entering Acquisition Phase
$FILE_READER[1]: Total files processed: 0.
$LOAD: disconnecting sessions
$LOAD: Total processor time used = '0.28125 Second(s)'
$LOAD: Start : Wed Aug 24 10:32:48 2016
$LOAD: End : Wed Aug 24 10:32:51 2016
Job step MAIN_STEP terminated (status 8)
Job testJob terminated (status 8)
Job start: Wed Aug 24 10:32:44 2016
Job end: Wed Aug 24 10:32:51 2016
Thanks!
Locking - response (8) by Ghalia
Hi,
After using LOCK TABLE MY_TABLE EXCLUSIVE,
I run an update.
Then, after the update, I want to add something like RELEASE LOCK on MY_TABLE, in case of failure.
Is this possible in Teradata (14), please?
Ghalia
TDStudio stopped working after Java update - response (8) by fgrimmer
The latest version of Studio does not place the JRE location in the .ini if you are using the default location. Can you upgrade to 15.11?
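If you do need to pin Studio to a specific JRE, the usual Eclipse-style .ini entry is a -vm line followed by the JVM path on its own line, placed before -vmargs (the path below is only an example):
-vm
C:\Program Files\Java\jre8\bin\javaw.exe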
Teradata Partition table used in Informatica - TPT (update) - forum topic by tnshankar
Hi ,
I am getting the below error when I try to load a Teradata partitioned table (CASE_N) through Informatica TPT connectivity.
Error: Target [TABLE_NAME] is an invalid database partition target type
Could anyone help me with this?
Easyloader Schema Error 15.10 Only - response (10) by toadrw
I guess the issue I'm having is: if we are moving a flat file to Teradata, why doesn't Easy Loader assume all VARCHAR?
Does this mean we need to create a job variables file with the proper schema for the flat file (it would be all VARCHAR, I assume)? We're not using one in 15.00.
I also still don't understand why we're seeing FLOATs from Easy Loader in 15.10 when it was previously all VARCHAR in 15.00.
You've always been very helpful, Steve, when I've posted in the forum. I appreciate all the time you have given to this question. We'll do our best to work around this, but something doesn't seem right here. :/ Thanks again!!!
Easyloader Schema Error 15.10 Only - response (11) by feinholz
If you are loading data from a flat file to the Teradata Database, the schema will be taken from the target table.
However, it looks like we may have a bug/regression from when EasyLoader was introduced in TPT 13.0.
Please add --SourceFormat 'delimited' and that should tell EasyLoader to convert the target table schema to all-VARCHAR.
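For example, the tdload command from earlier in this thread would become (quoting of the value may vary by shell):
tdload --SourceFileName "C:\Users\Dave\Desktop\test.csv" --SourceFormat "delimited" --TargetTdpId localtdat --TargetWorkingDatabase TestDB --TargetTable TestDB.TestTable --TargetUserName dbc --TargetUserPassword dbc testJob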
Easyloader Schema Error 15.10 Only - response (12) by feinholz
Well, I could have edited the previous comment but did not in case someone read the old one.
I checked with the developer to make sure no regressions have occurred and I was assured no regressions have been introduced.
When EasyLoader was first released with TPT 13.0, the idea was to facilitate the loading of data from a delimited flat file to Teradata without the need for a script. And the rule was that the data in the file had to match the table layout exactly (column for column). Thus, EasyLoader would take the schema from the target table and convert the non-VARCHAR columns to VARCHAR and thus generate an all-VARCHAR schema.
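For example, with the two-column table from earlier in this thread (ColInt INTEGER, ColVarchar VARCHAR(20)), EasyLoader would generate an all-VARCHAR schema along the lines of the following (the exact VARCHAR sizes chosen for converted columns are illustrative):
ColInt VARCHAR(11),
ColVarchar VARCHAR(20)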
EasyLoader should still work that way.
Thus, I will investigate.
fastload cannot start with error "The request exceeds the length limit, The maximum length is 1048500." - response (4) by feinholz
I believe the issue *might* be related to the delimiter.
The use of "\t" does not mean use the TAB character. We will look for the characters "\" followed by "t", and thus we keep reading until we get too much data.
Try this and see if it helps:
SET RECORD VARTEXT "TAB";
Easyloader Schema Error 15.10 Only - response (13) by feinholz
Ok, another update. Yes, we have a regression. We were doing an EasyLoader overhaul and introduced a bug (which is currently being fixed).
If you add --SourceFormat delimited (I do not remember whether you need quotes around it), that should get around the bug.
Easyloader Schema Error 15.10 Only - response (14) by toadrw
You're the best, Steve! Once again you came through, and I'm so appreciative! Teradata is lucky to have you. Thank you for sticking with this through to the end and providing excellent guidance, as usual.
--Todd