Channel: Teradata Forums - All forums

Load Utilities - forum topic by tusharzaware1


Hello All,
 
I am working as a Teradata DBA, but I am new to Teradata development. Could you please help me with the questions below?
 
1. Do the load utilities load data only from files? Is table-to-table loading not possible?
2. I have loaded data from ','- and '|'-separated sources. How do I load data from space- or tab-separated source files?
3. Could you please post a sample script that loads a source file?
 
Thanks,
Tushar
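On question 1: FastLoad and MultiLoad read from files or named pipes, so table-to-table copies are normally done with a plain INSERT ... SELECT instead. On question 2: TPT's DataConnector operator accepts TextDelimiter = 'TAB' for tab-delimited files. As one illustration of a programmatic, non-file load, here is a minimal sketch using the Teradata JDBC driver's FastLoad mode (TYPE=FASTLOAD in the URL), which streams batched inserts into an empty target table; the host, credentials, and table names are placeholders:

import java.sql.*;

public class TableToTableFastLoad {
    public static void main(String[] args) throws SQLException {
        // Placeholder host/credentials; TYPE=FASTLOAD turns on JDBC FastLoad
        // for batched inserts into an empty target table.
        String srcUrl = "jdbc:teradata://dbhost/DATABASE=mydb";
        String tgtUrl = "jdbc:teradata://dbhost/DATABASE=mydb,TYPE=FASTLOAD";
        try (Connection in  = DriverManager.getConnection(srcUrl, "user", "pass");
             Connection out = DriverManager.getConnection(tgtUrl, "user", "pass")) {
            out.setAutoCommit(false); // JDBC FastLoad requires auto-commit off
            try (Statement s = in.createStatement();
                 ResultSet rs = s.executeQuery("SELECT c1, c2 FROM src_tab");
                 PreparedStatement ps = out.prepareStatement(
                         "INSERT INTO tgt_tab (c1, c2) VALUES (?, ?)")) {
                while (rs.next()) {
                    ps.setInt(1, rs.getInt(1));
                    ps.setString(2, rs.getString(2));
                    ps.addBatch();          // rows are buffered per batch
                }
                ps.executeBatch();
                out.commit();               // FastLoad applies the rows at commit
            }
        }
    }
}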


UTY4017 Not enough data in vartext record number 1 - response (1) by mirza.hasnain90@gmail.com


Hi,
Check your data source file. I suspect it contains only one column, while the LAYOUT section of your MultiLoad script defines two fields. Because the layout defines two fields, the MultiLoad script tries to read two columns of data from the source file during the acquisition phase.
Regards,
Mirza

Reading Clob column using teradata JDBC is too slow - response (2) by teradatauser2


Hi Toloman,
I am researching the performance impact of using CLOB/XML columns in tables.
In which situations should we consider using a CLOB column? What are the performance impacts of using a CLOB column in WHERE conditions or join conditions? What other implications should we consider before designing such a table?
 
If you can give me some pointers to links or materials with details on this, it will be very helpful.
--Samir

Clob Column - performance implications - forum topic by teradatauser2


Hi,
I am researching the performance impact of using CLOB/XML columns in tables.
In which situations should we consider using a CLOB column? What are the performance impacts of using a CLOB column in WHERE conditions or join conditions? What other implications should we consider before designing such a table? I know that a CLOB column is stored in a subtable, which must be accessed to get the column's data.
 
If you can give me some pointers to links or materials with details on this, it will be very helpful.
I am reposting to a bigger audience than the Connectivity section.

--Samir


Clob Column - performance implications - response (1) by oshun


Hi. I was writing a post on my blog about this topic. Basically, CLOBs are not stored in the base table but in a subtable. Depending on whether or not you select the CLOB column, this may have a performance impact:

http://www.dwhpro.com/teradata-clob/
 
Best Regards
 
Roland
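To make the subtable access concrete, here is a hedged JDBC sketch (table and column names are invented): the first query touches only the base table, while the second has to fetch each row's value from the LOB subtable.

import java.sql.*;

public class ClobSelectCost {
    static void demo(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            // Base-table columns only: the CLOB subtable is never touched.
            try (ResultSet rs = st.executeQuery("SELECT id, status FROM doc_tab")) {
                while (rs.next()) { /* cheap row-at-a-time access */ }
            }
            // Selecting the CLOB column forces a fetch from the LOB subtable.
            try (ResultSet rs = st.executeQuery("SELECT id, doc_text FROM doc_tab")) {
                while (rs.next()) {
                    Clob c = rs.getClob("doc_text");
                    // Prefer streaming (c.getCharacterStream()) over
                    // materializing the whole value for large documents.
                }
            }
        }
    }
}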

Defining Index on a table having multiple Joining columns!! - response (1) by oshun


Hi,
Choosing the join columns of one of the joins as the primary index definitely makes sense. Regarding the secondary index: it really depends on whether the index can actually be used, which is influenced by selectivity, join type (is a nested join possible?), and so on.
I don't think there is general advice to give here; more details would be required.
 
BR
 
Roland
 
 

Calling Macro from .Net Provider - response (1) by NetFx


There is only one way:
1- Set TdCommand.CommandText = "Exec MacroName(?)" and set TdCommand.CommandType = CommandType.Text;
2- Add a TdParameter to the TdCommand.Parameters collection;
3- Invoke TdCommand.ExecuteReader or TdCommand.ExecuteNonQuery.
Executing a macro is identical to executing a DML statement (Select or Update/Insert/Delete). The ADO.NET specification does not have a separate CommandType for a macro (vs. StoredProcedure or Text).
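For comparison, the same "macro as a parameterized statement" pattern with the Teradata JDBC driver looks like the sketch below; this is the JDBC analog, not the .NET API, and the macro name and parameter value are invented:

import java.sql.*;

public class ExecMacro {
    static void run(Connection con) throws SQLException {
        // A macro is executed like any DML statement: EXEC plus parameter markers.
        try (PreparedStatement ps = con.prepareStatement("EXEC MacroName(?)")) {
            ps.setInt(1, 42);                    // bind the macro parameter
            boolean hasRows = ps.execute();      // a macro may return rows or a count
            if (hasRows) {
                try (ResultSet rs = ps.getResultSet()) {
                    while (rs.next()) { /* consume the macro's result rows */ }
                }
            }
        }
    }
}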

TPT Question: Read two files simultaneously and write to Table - forum topic by talk2soumya


TPT Question

Can I perform the tasks below in TPT, and if so, how? I will be using multiple threads, as there are 120 sets of files to read, and the load will process 4.8 billion records.

 

FILE A:
1|A|B|C
2|D|E|F

FILE B:
X|Y|Z
U|V|W

 

The output should be written to the table as below; treat '|' as the separator for the table columns:

1|A|B|C|X|Y|Z
2|D|E|F|U|V|W

 

 

How can files A and B be used in the code below?


DEFINE JOB BATCH_DIRECTORY_SCAN
DESCRIPTION 'Batch directory scanning'
(
  /* Read records with the file-reader operator and apply them
     with the update operator. */
  APPLY $INSERT TO OPERATOR ($UPDATE[@UpdateInstances])
  SELECT * FROM OPERATOR ($DATACONNECTOR_PRODUCER[@ReaderInstances]);
);
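As far as I know, the TPT file reader consumes a single record stream; it has no built-in way to zip two files together column-wise. A practical workaround is to pre-merge the files before the load (for example with paste -d'|' fileA.txt fileB.txt on Unix) and point the DataConnector at the merged file. A hedged Java sketch of such a merge (file names are invented; it assumes the files pair up line-by-line):

import java.io.*;
import java.nio.file.*;

public class MergeFiles {
    public static void main(String[] args) throws IOException {
        // Join FILE A and FILE B line-by-line with the '|' separator so that a
        // single TPT file reader can load the combined records.
        try (BufferedReader a = Files.newBufferedReader(Paths.get("fileA.txt"));
             BufferedReader b = Files.newBufferedReader(Paths.get("fileB.txt"));
             BufferedWriter out = Files.newBufferedWriter(Paths.get("merged.txt"))) {
            String la, lb;
            while ((la = a.readLine()) != null && (lb = b.readLine()) != null) {
                out.write(la + "|" + lb);
                out.newLine();
            }
        }
    }
}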

 

 


.NET Provider on Mono? - response (5) by dsmart


We require Mono support to be able to run on Linux servers under Mono.
Given the openness Microsoft has toward Mono these days, this is something you should get on your roadmap. Microsoft is moving everything to NuGet packages and ensuring Mono support for their Web/Entity/etc. frameworks. Get on it!
I also want a type provider for F#, so that data scientists can be empowered to use F# against your platform (see https://msdn.microsoft.com/en-us/library/hh361033.aspx).
 

Connecting to Teradata using Python - response (5) by ratzesberger


There is now an open source Teradata python module available:
http://developer.teradata.com/tools/reference/teradata-python-module

installation of GeoLiteCity - response (3) by Jens.Humrich


Hi Dieter,
I was able to track down the following error in the loaded CSV by using head -n <count> input.csv > small.csv and loading small.csv until I found the error:
Loading tuples using node '145.225.116.201'.
ERROR:  value too long for type character varying(16)
line 51790, column quality: "Bad.ConfigurationError"
After ALTERing the table (adjusting the column type), I was able to load the data just fine.
Hope this helps,
Jens

TPT_INFRA: TPT04106: Error: Conflicting job names have been specified - forum topic by WAQ


Hi,
I am getting the error below when I specify the -L parameter (log path) in the tbuild command:
TPT_INFRA: TPT04106: Error: Conflicting job names have been specified
'tbuild' command argument <job name>: 'WAQ_JOB' and 'C:/Program Files/Teradata/Client/15.00/Teradata Parallel Transporter/logs'.
Below is my tbuild statement:
tbuild -f  sp.txt WAQ_JOB -L "C:\TEST\New folder"
Can someone please explain the reason for this error?


Opening in Teradata with Datastage - forum topic by Balajee


Our client partner is looking for professionals with around 5-10 years of experience in Teradata and DataStage.
Work Location: Renton WA

Expected Tasks

  • Build and enhance ETL Software, data warehouse scripts, views, metrics and reports.
  • Accommodate requests from business users and owners of other applications (e.g. CRM) for additional data sources, business rules and tables
  • Provide production support for 25-30 data warehouse applications.  Some off-hour (evening/weekend) responsibility.
  • Develop and execute unit test plans
  • Other duties as appropriate or assigned.

 

Required Skills/Experience

  • Analysis and development experience using the following toolsets:
    • Datastage (IBM) ETL tool.
    • Teradata,
    • Oracle database
    • SQL Server
  • Excellent, professional written and verbal communications skills

Others:
Rate: $/hr on W2, $/hr on 1099, or $/hr C2C (kindly mention); the approximate package will be around $40-60/hr.
Availability: immediate/1 week upon confirmation/2 week notice
Full-Time
US Citizen Only

Reply to baajeevs@agileglobalsolutions.com with your details, or reach us at 916 235 8982.
 
Regards
Bala

Forums: 

TPT - Scripts from 13.10 doesn't work with 15.10 using named pipes - response (6) by sagar182


Hi Feinholz,
 
We have just upgraded the TTU utilities to 15.10, but the database version is 13.10.07.12.
When I'm running TPT, it gives the errors below:

The RDBMS retryable error code list was not found
RetentionLogSelect: The RDBMS retryable error code list was not found
**** 05:29:00 The job will use its internal retryable error codes
 The job will use its internal retryable error codes

But TPT exits with return code = 0 (i.e., as success), and out of 14,000 records to be extracted, it extracts only 7938.

Could you please elaborate on this issue? In past posts I see you referring to it as a known issue of TPT that was fixed with version 15.00.00.02.

I am not sure if this error is being caused by the same issue.

Thanks,
Sagar

 

TPT_INFRA: TPT04106: Error: Conflicting job names have been specified - response (1) by WAQ


It seems like some issue with tbuild when executed from a shell. Check the two statements below: #1 was executed from CMD and #2 from a shell. #1 didn't produce any error; however, #2 ended with error TPT04106 Error: Conflicting job names have been specified:
1- tbuild -f sp.txt WAQ_JOB -L "C:\TEST\New folder"
2- tbuild -f sp.txt WAQ_JOB -L \""C:/TEST/New folder" \"
Any idea why #2 is causing the failure?


TPT_INFRA: TPT04106: Error: Conflicting job names have been specified - response (2) by WAQ


I've found that using \" in #2 was causing the issue. The correct statement in the shell should be:
tbuild -f sp.txt WAQ_JOB -L "C:/TEST/New folder"
However, now the error has changed to:
tbuild -f sp.txt WAQ_JOB -L "C:/TEST/New folder"
Teradata Parallel Transporter Version 15.00.00.01
Could not create Plan File C:\TEST\New folder\C:/TEST/New folder\APPLY_1[0001]00
0001087601
TPT_INFRA: TPT02992: Error: Execution Plan generation failed.
Compilation failed due to errors. Execution Plan was not generated.
Job script compilation failed.
Job terminated with status 12.

How to ensure atomic nature of multiple table actions? - response (3) by sdc


Padhia, thanks very much for your response.  That is certainly helpful.
One question that your response brought to mind is this: Is it possible to do a single transaction with multiple SQL queries?  Like:
"BEGIN TRANSACTION"
"<do some stuff>"
"<do some other stuff>"
"<do more stuff>"
"END TRANSACTION"
...where each line is a separate query? That is what I really want for my object-oriented application: multiple statements accumulated in a transaction and then executed at once.
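If the application talks to Teradata through JDBC (or a similar interface), the usual way to get this is to turn off auto-commit and commit once at the end. Each statement still executes, and takes its locks, as it runs, but the work becomes atomic: either everything commits or everything rolls back. A minimal sketch with invented statements:

import java.sql.*;

public class MultiStatementTxn {
    static void run(Connection con) throws SQLException {
        con.setAutoCommit(false);  // start an explicit transaction
        try (Statement st = con.createStatement()) {
            st.executeUpdate("UPDATE t1 SET x = x + 1 WHERE id = 1");
            st.executeUpdate("DELETE FROM t2 WHERE id = 1");
            st.executeUpdate("INSERT INTO t3 (id) VALUES (1)");
            con.commit();          // all three take effect together...
        } catch (SQLException e) {
            con.rollback();        // ...or none of them do
            throw e;
        } finally {
            con.setAutoCommit(true);
        }
    }
}

Note that this matches the locking behavior described in the next response: each statement acquires its locks when it runs, not at commit time.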

How to ensure atomic nature of multiple table actions? - response (4) by sdc


I did some testing and found that in my example above, each line would individually acquire a lock instead of waiting for the "END TRANSACTION" to do it all at once.  So, even if it is working as a single transaction, there is still a possibility for deadlock here because it does not wait until the end to try to execute everything.
Please note that in my situation, I cannot control the "other application" I mentioned in the original post, which will be only reading, but will not be using the tables in any specific order.  So, I cannot use padhia's suggestion of using the tables in a common order to avoid deadlock.  (I am assuming that the read locks that the other application causes when it reads can contribute to a deadlock, but I'm not sure of this.)

FASTLOADCSV Error JAVA - response (2) by zioli


Hi Tom, 
I have executed the source T20208JD.java, and I think you can see the complete error trace here:
 

Sample T20208JD starting: Tue Aug 04 10:13:21 ART 2015
 Looking for the Teradata JDBC driver.
 Teradata JDBC driver loaded.
 Attempting connection to Teradata.
 Connection to Teradata established.
 Creating a Statement object.
 Created a Statement object.
 Drop table exception ignored: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.10.00.05] [Error 3807] [SQLState 42S02] Object 'DB.T20208JD_ERR_1' does not exist.
 Drop table exception ignored: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.10.00.05] [Error 3807] [SQLState 42S02] Object 'DB.T20208JD_ERR_2' does not exist.
 Creating table DB.T20208JD.
 Created table DB.T20208JD.
 Opening C:\Users\myuser\Documents\query_result\query_result.csv
 Attempting connection to Teradata with FastLoadCSV.
 Connection to Teradata with FastLoadCSV established.
 Creating a PreparedStatement object with FastLoadCSV.
SQL State = HY000, Error Code = 1384
java.sql.SQLException: [Teradata JDBC Driver] [TeraJDBC 15.10.00.05] [Error 1384] [SQLState HY000] A failure occurred while initializing FastLoad resources for destination database table. Details of the failure can be found in the exception chain that is accessible with getNextException.
	at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDriverJDBCException(ErrorFactory.java:94)
	at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDriverJDBCException(ErrorFactory.java:64)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVPreparedStatement.initializeFastLoad(FastLoadCSVPreparedStatement.java:276)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVPreparedStatement.<init>(FastLoadCSVPreparedStatement.java:233)
	at com.teradata.jdbc.jdk6.JDK6_FastLoadCSV_PreparedStatement.<init>(JDK6_FastLoadCSV_PreparedStatement.java:22)
	at com.teradata.jdbc.jdk6.JDK6_FastLoadCSV_Connection.constructPreparedStatement(JDK6_FastLoadCSV_Connection.java:30)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVConnection.prepareStatement(FastLoadCSVConnection.java:356)
	at T20208JD.main(T20208JD.java:159)

SQL State = HY000, Error Code = 1383
java.sql.SQLException: [Teradata JDBC Driver] [TeraJDBC 15.10.00.05] [Error 1383] [SQLState HY000] The next failure(s) in the exception chain occurred while creating FastLoad resources for destination database table. Found 86 AMP(s) and created 4 Connection(s) and 0 PreparedStatement(s) with SESSIONS=0, but all FastLoad resources that were created have now been closed.
	at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDriverJDBCException(ErrorFactory.java:94)
	at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDriverJDBCException(ErrorFactory.java:84)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVPreparedStatement.connectFastLoad(FastLoadCSVPreparedStatement.java:435)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVPreparedStatement.initializeFastLoad(FastLoadCSVPreparedStatement.java:272)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVPreparedStatement.<init>(FastLoadCSVPreparedStatement.java:233)
	at com.teradata.jdbc.jdk6.JDK6_FastLoadCSV_PreparedStatement.<init>(JDK6_FastLoadCSV_PreparedStatement.java:22)
	at com.teradata.jdbc.jdk6.JDK6_FastLoadCSV_Connection.constructPreparedStatement(JDK6_FastLoadCSV_Connection.java:30)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVConnection.prepareStatement(FastLoadCSVConnection.java:356)
	at T20208JD.main(T20208JD.java:159)

SQL State = HY000, Error Code = 2632
java.sql.SQLException: [Teradata Database] [TeraJDBC 15.10.00.05] [Error 2632] [SQLState HY000] All AMPs own sessions for this Fast/Multi Load or FastExport.
	at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:301)
	at com.teradata.jdbc.jdbc.GenericLogonController.run(GenericLogonController.java:659)
	at com.teradata.jdbc.jdbc.raw.RawConnection.<init>(RawConnection.java:68)
	at com.teradata.jdbc.jdk6.JDK6_Raw_Connection.<init>(JDK6_Raw_Connection.java:28)
	at com.teradata.jdbc.jdk6.JDK6ConnectionFactory.constructRawConnection(JDK6ConnectionFactory.java:73)
	at com.teradata.jdbc.jdbc.ConnectionFactory.createConnection(ConnectionFactory.java:214)
	at com.teradata.jdbc.jdbc.ConnectionFactory.createConnection(ConnectionFactory.java:169)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVPreparedStatement.connectFastLoad(FastLoadCSVPreparedStatement.java:375)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVPreparedStatement.initializeFastLoad(FastLoadCSVPreparedStatement.java:272)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVPreparedStatement.<init>(FastLoadCSVPreparedStatement.java:233)
	at com.teradata.jdbc.jdk6.JDK6_FastLoadCSV_PreparedStatement.<init>(JDK6_FastLoadCSV_PreparedStatement.java:22)
	at com.teradata.jdbc.jdk6.JDK6_FastLoadCSV_Connection.constructPreparedStatement(JDK6_FastLoadCSV_Connection.java:30)
	at com.teradata.jdbc.jdbc.fastload.FastLoadCSVConnection.prepareStatement(FastLoadCSVConnection.java:356)
	at T20208JD.main(T20208JD.java:159)

 Closing Connection to Teradata with FastLoadCSV.
 Connection to Teradata with FastLoadCSV closed.
 Closing C:\Users\myuser\Documents\query_result\query_result.csv
 Closing Statement object.
 Statement object closed.
 Closing Connection to Teradata.

 

 

Any idea?

Thanks

Reading Clob column using teradata JDBC is too slow - response (3) by tomnolan


In the future, please create a new forum topic for a new question. Please do not add an unrelated post to an old thread.
 
>>> What are the situations wherein we should consider using a clob column ?
 
Teradata Database VARCHAR columns are limited to 64KB of data. You should use a CLOB column when you need to store more than 64KB of character data in a column.
 
>>> What are the performance impacts of using a clob column if i want to use such a column in where condition or join conditions ?
 
You cannot join on a CLOB column. That is not supported by the Teradata Database.
 
>>> What are other implications that we should consider before designing such a table ?
 
Each LOB value is stored intact on a single AMP. A LOB value is never split up into pieces and stored across multiple AMPs.
 
If you plan to store very large character values in the CLOB column, for example, character values in the range of megabytes to 1 GB, then you will need to allocate enough perm space to the table owner so that every AMP has enough room to store those large character values.
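On the original question of slow CLOB reads: a common approach is to stream the value instead of materializing it all at once with getString. A hedged sketch (table and column names are invented):

import java.io.*;
import java.sql.*;

public class StreamClob {
    static void read(Connection con) throws SQLException, IOException {
        try (PreparedStatement ps = con.prepareStatement(
                 "SELECT doc_text FROM doc_tab WHERE id = ?")) {
            ps.setInt(1, 1);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Read the CLOB in chunks instead of one huge String.
                    try (Reader r = rs.getClob(1).getCharacterStream()) {
                        char[] buf = new char[64 * 1024];
                        for (int n; (n = r.read(buf)) > 0; ) {
                            // process buf[0..n)
                        }
                    }
                }
            }
        }
    }
}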
 
