Channel: Teradata Forums - All forums

How to ensure atomic nature of multiple table actions? - response (5) by dnoeth


Each request places the necessary locks (if they're not already set) and all locks are released when the transaction is committed.
But in every DBMS the same recommendation applies: set the locks at the beginning of the transaction to avoid deadlocks:

BEGIN TRANSACTION
lock table table1 for write -- all locks as a single MultiStatement Request
;lock table table2 for write
;
<do some stuff>
<do some other stuff>
<do more stuff>
END TRANSACTION
Regarding "multiple statements to be added up in a transaction and then executed at once":

If you can combine those statements into a single SQL string, that's the best case: a MultiStatement Request:

<do some stuff>
;<do some other stuff>
;<do more stuff>
;
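For illustration, a minimal concrete sketch of such a request (table and column names are hypothetical): the leading semicolons make all statements travel as one multistatement request, which Teradata executes as a single implicit transaction.

UPDATE accounts SET balance = balance - 100 WHERE acct_id = 1
;UPDATE accounts SET balance = balance + 100 WHERE acct_id = 2
;INSERT INTO transfer_log VALUES (1, 2, 100, CURRENT_TIMESTAMP)
;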

Can you show some actual queries?


BTEQ examples - response (25) by tusharzaware1


.logon localtd/tduser,tduser;

.IMPORT DATA FILE = C:\TD Utilities\Source Files\EmpFlat.txt
.QUIET ON
.REPEAT *

USING IN_CustNo      (VARCHAR(30))
     ,IN_Cust_Name   (VARCHAR(30))
     ,IN_Cust_Phone  (VARCHAR(30))
INSERT INTO DBA_BACKUP.Customer_Table
VALUES (:IN_CustNo
       ,:IN_Cust_Name
       ,:IN_Cust_Phone);

.QUIT
.LOGOFF

Error:
*** Growing Buffer to 12337
*** Error: Import data size does not agree with byte length.
           The cause may be:
               1) IMPORT DATA vs. IMPORT REPORT
               2) incorrect incoming data
               3) import file has reached end-of-file.
*** Warning: Out of data.
*** Finished at Tue Aug 04 21:09:00 2015
*** Total elapsed time was 1 second.

Source File:

10 TUSHAR 99757752
20 parunath 99757752
30 deepika 98507390
40 chaitrali 99701956

Please suggest??

 

FASTLOADCSV Error JAVA - response (3) by tomnolan


Error 2632 is a Teradata Database error ("All AMPs own sessions for this Fast/Multi Load or FastExport.")
 
The Teradata Database Messages book says the following:
Explanation: This error occurs when all AMPs have a session assigned to them for the ongoing FastLoad, MLoad or FastExport.
Notes: This error exists only to instruct the utility program to stop attempting to log sessions onto the Teradata DBS.
 
This error does not normally occur, because the Teradata JDBC Driver includes logic to avoid logging on too many FastLoad or FastExport data sessions.
There must be something unusual with your Teradata Database configuration that would permit this error to occur.
 
As a workaround, you should be able to avoid this error by specifying the SESSIONS= connection parameter.
Specify a small number of sessions, such as SESSIONS=1 or SESSIONS=2, and see if you avoid this error.
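For example, the parameter goes into the connection URL along with the FastLoad CSV escape (hostname below is a placeholder):

jdbc:teradata://dbshost/TYPE=FASTLOADCSV,SESSIONS=2

With SESSIONS=2 the driver should log on at most two data sessions for the FastLoad, keeping the request well below the number of AMPs.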
 
If you are a customer, please feel free to open a Teradata customer service incident so we can troubleshoot this issue further.
 

Stored procedure runs in SQL assistant but given error when triggered through Informatica - forum topic by arpit.ubale


Hi,
 
I have created a simple stored procedure to truncate and load a target table, and I am passing the schema name as a parameter.
When I run the procedure in SQL Assistant it executes correctly, but when it is invoked through Informatica, it gives the following error:
SRC_VNDR_FNDTN_STG_LOAD:Syntax error, expected something like a name or a Unicode delimited identifier or an 'UDFCALLNAME' keyword or '(' between the 'FROM' keyword and the string 'MDM_D1' keyword.Unable to get catalog string.]
 
The procedure code is as follows:

create PROCEDURE MDM_D1.SRC_VNDR_FNDTN_STG_LOAD
(
IN DB_NAME VARCHAR(10)
)
BEGIN

DECLARE SRC_DB VARCHAR(10);
DECLARE TGT_DB VARCHAR(15);
DECLARE SQL_QRY1 VARCHAR(100);
DECLARE SQL_QRY2 VARCHAR(1000);

SET SRC_DB = DB_NAME;
SET TGT_DB = DB_NAME||'_WORK' ;

SET SQL_QRY1 = 'DELETE FROM '||TGT_DB||'.SRC_VNDR_FNDTN_STG;';

CALL DBC.SysExecSQL(SQL_QRY1);

SET SQL_QRY2 = 'INSERT INTO '||TGT_DB||'.SRC_VNDR_FNDTN_STG SELECT * FROM '||SRC_DB||'.SRC_VNDR_FNDTN;';

CALL DBC.SysExecSQL(SQL_QRY2);

END;
The procedure is called as follows:

call MDM_D1.SRC_VNDR_FNDTN_STG_LOAD('MDM_D1')
The error message says "between the 'FROM' keyword and the string 'MDM_D1' keyword", but MDM_D1 is just the value passed.
I am not sure if there is any syntactical change that is needed.

Please advise, and thanks in advance.
 
Regards.


How to ensure atomic nature of multiple table actions? - response (6) by sdc


Dieter, thanks so much for your response. Very helpful, as always. I was actually wondering whether something like your suggestion was common practice; it sounds like it is. I will try to implement your suggestion, and if I can't make it work I'll come up with some actual queries to further illustrate my situation. Thank you!

Reading Clob column using teradata JDBC is too slow - response (4) by teradatauser2

TPT - Scripts from 13.10 doesn't work with 15.10 using named pipes - response (7) by feinholz


These messages:
 
The RDBMS retryable error code list was not found
RetentionLogSelect: The RDBMS retryable error code list was not found
**** 05:29:00 The job will use its internal retryable error codes
 
are not errors; they are informational messages and do not impact the job.

If there is an issue related to the number of records extracted, it is not related to those messages.
 

Connecting Python to Teradata over ODBC - response (15) by Roopalini


With all this information, I am still nowhere close to getting this accomplished. My first challenge is to install pyodbc on SuSE SP1 64-bit. I get the error below when I try to install the package using zypper:
zypper install pyodbc
Loading repository data...
Reading installed packages...
'pyodbc' not found.
Resolving package dependencies...
 
I also tried to install it using 'rpm -ivh package', but that failed due to missing dependencies. Could one of you point me to the right pyodbc package and its dependencies? Greatly appreciate the help.


TEO 141 - response (1) by bmogilev


Dear yrstruly, the only advice I would give is to read all the materials more than once and try to think them through. The creators of the TCPP want us to have a deep understanding of the topic, though that is definitely very hard, especially if you have not touched the subject in your day-to-day activities.
 

Load Utilities - response (1) by feinholz


TPT (Teradata Parallel Transporter) is the load utility suite you should be using.
TPT can load Teradata with data from flat files, queues, access modules and even from Teradata and non-Teradata databases.
With TPT you can specify any delimiter you would like.
The TPT documentation covers everything.
The Reference manual discusses how to set the delimiter, and the User Guide provides samples showing how to create TPT jobs for various loading scenarios.
TPT also provides sample scripts in the "samples" subdirectory of the directory into which TPT is installed.
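For reference, a fragment of a TPT script showing where the delimiter is typically set (the operator, schema, and file names below are made up):

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA INPUT_SCHEMA
ATTRIBUTES
(
    VARCHAR FileName      = 'input.txt',
    VARCHAR Format        = 'Delimited',
    VARCHAR TextDelimiter = '|'   /* the delimiter of your choice */
);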
 

Connecting Python to Teradata over ODBC - response (16) by ericscheie


I would recommend having a look at the new Teradata Python Module (https://developer.teradata.com/tools/reference/teradata-python-module). It doesn't depend on pyodbc and is 100% Python, so you should find it easier to get working. It also provides many capabilities beyond pyodbc that you may find useful.
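As a quick sketch of what using it looks like (the system name and credentials below are placeholders; install with "pip install teradata"):

import teradata

# Create the execution framework and open an ODBC session.
udaExec = teradata.UdaExec(appName="Example", version="1.0", logConsole=False)
with udaExec.connect(method="odbc", system="tdprod",
                     username="user", password="pass") as session:
    # Cursors are iterable; this prints the current session number.
    for row in session.execute("SELECT SESSION"):
        print(row)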

Regular expression with irregular result - response (5) by dnoeth


I got you wrong; of course STRTOK doesn't work in your case.
When I tried your expression outside of TD it returned the expected data, but Oracle always returned NULL, as it doesn't allow lookahead/lookbehind.
TD's REGEXP_SUBSTR seems to get stuck when there are multiple adjacent delimiters: there is no match at all any more, and every occurrence returns an empty string. You might open an incident on that, but I'm not sure if it's a bug or just a consequence of the regex dialect TD implements.
As a workaround you can use the following: don't do a lookahead for the 2nd '*'; include it in the result instead and then trim it:

rtrim(regexp_substr('PR*2*100.8**45*69.12','(?<=\*).*?(\*|$)',1,4), '*')  -- 4th occurrence: '45*' trimmed to '45' (the empty token between '**' counts as the 3rd)

 

Impact of unnecessary compress on all char and varchar columns. - forum topic by bhartiya007


Hi All, 
My client recently suggested using compression on all CHAR and VARCHAR fields, like below:

Customer_Name VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC COMPRESS,
Customer_id CHAR(3) CHARACTER SET LATIN NOT CASESPECIFIC COMPRESS,

#1 Will the above compress only NULLs? I have read that from TD 13 onwards NULLs are automatically compressed even if you don't specify it explicitly.
#2 How will it affect the performance of queries fired on these columns, and what is the impact on the table of unnecessary compression?
#3 Do we need CHARACTER SET LATIN NOT CASESPECIFIC, as these are the defaults even if we don't specify them?
Thanks,
Amit


TPT UTF8 Import of "no valid unicode character" - need help - response (6) by brian_m

BEGIN LOADING
   $DBX_LOAD....
   ERRORFILES
     $DBX_LOAD...._ERR1,
     $DBX_LOAD...._ERR2
     CHECKPOINT 3000000;
     SET RECORD VARTEXT "§" NOSTOP DISPLAY_ERRORS;

     axsmod /../.../work/cp2uni_axm.so "CodePage=UTF8, ErrorChar=U+003F";

Hi,
we found a solution for this Unicode import problem: using the "AXSMOD" file from the Teradata Unicode Toolkit.
The untranslatable character is now a "?" (defined in ErrorChar).
And you can use the axsmod in a TPT script:
Varchar AccessModuleInitStr = 'CodePage=UTF8, ErrorChar=U+003F, EOR=0A',
Varchar AccessModuleName = '/.../.../work/cp2uni_axm.so'
Simple - when you know...
 
greets,
brian
 

TPT_INFRA: TPT04106: Error: Conflicting job names have been specified - response (3) by WAQ


I'm not sure what's going wrong here, but seemingly there is some issue with the slashes.
 
1- tbuild -L "C:/TEST/New_folder" -f sp.txt WAQ_JOB
2- tbuild -L "C:\TEST\New_folder" -f sp.txt WAQ_JOB
 
Statement 1 gives error TPT02992 while statement 2 works fine.
What is the issue with statement 1?


Monitor Index Creation - forum topic by arpitsinha


Hi,
Can we check what percentage of an index creation has completed?

Say I have a table 2 TB in size; I just fired a command to create a USI, and after some time I want to know how much of it has been built. Can I check that?
 


TPT UTF8 Import of "no valid unicode character" - need help - response (7) by brian_m


Info: the first code block in my last post is a FastLoad script.
But you can use an axsmod in FastLoad, MLoad or TPT.
There are different axsmod files for AIX, SuSE, ...
Please refer to the "Teradata Unicode Toolkit" documentation.

Trying to Replace Blank value (may be null, may be space char) with 'N/A' - forum topic by arbiswas


Hi Experts,
I used the following to try to solve it:
CASE WHEN CAST(trim(ABC_INDICATOR) AS VARCHAR(255) ) IS NULL THEN 'N/A' Else CAST(ABC_INDICATOR AS VARCHAR(255) ) End
but it is not working. It's a VARCHAR field, so the ZEROIFNULL function also does not work. Any idea?
 
Thanks,
Arindam


Trying to Replace Blank value (may be null, may be space char) with 'N/A' - response (1) by dnoeth


Hi Arindam,
did you work with Oracle before (where NULL sometimes equals '')?

CASE
   WHEN ABC_INDICATOR = '' OR ABC_INDICATOR IS NULL 
   THEN 'N/A' 
   ELSE CAST(ABC_INDICATOR AS VARCHAR(255)) -- why the cast if it's already a VarChar?
END

Btw, ZeroIfNull and NullIfZero should be rewritten with Standard SQL COALESCE(x,0) and NULLIF(x,0).
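A compact way to write the same replacement with those standard functions (a sketch, assuming "blank" means empty or all spaces):

COALESCE(NULLIF(TRIM(ABC_INDICATOR), ''), 'N/A')

NULLIF turns a trimmed-empty string into NULL, and COALESCE then substitutes 'N/A' for that case as well as for a real NULL.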

Impact of unnecessary compress on all char and varchar columns. - response (1) by CarlosAL


Hi.
#1: Yes, it will compress NULLs only. (BTW, NULLs are NOT automagically compressed without the COMPRESS clause.)
#2: In general, shorter rows => more rows per datablock => less I/O. I don't understand the term "unnecessary compression"; again, in general, the more COMPRESS the better (with the right values, of course).
#3: It is the default; see the DBSControl fields DefaultCaseSpec and DefaultCharacterSet.
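For illustration, COMPRESS can also take an explicit list of frequent values (the values below are invented); those values are then compressed in addition to NULLs:

Customer_Name VARCHAR(50) CHARACTER SET LATIN NOT CASESPECIFIC
    COMPRESS ('SMITH', 'JONES', 'WILLIAMS'),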
 
HTH.
Cheers.
Carlos.
