Teradata Forums - All forums

Issues happening after TTU13.10 upgrade - forum topic by Dominiq


Recently we installed the TTU 13.10 upgrade, as many users were facing temporary connectivity issues for ABI and Unix servers.
Since the upgrade, we keep hearing about new problems from users.

  1. Jobs that have been running for several months hit problems involving comments. When users put comments inside /* */, their jobs run successfully; after removing the comments, they get errors whenever multiple SQL statements are executed. The comments range from a single word to many lines. For these jobs to run successfully the comments must be present, although this is not true of all jobs. We have jobs that have run for months, and if we remove their comments, the issue occurs.
  2. While we faced these comment issues, another team hit issues with quotes. They used double quotes for column aliases, e.g. SEL COL1 AS "MYCOL1", which no longer runs; however, once the double quotes were replaced with single quotes, there have been no issues.

Both issues occur in ABI graphs, and only since the TTU upgrade. Could this be related to TTU, or is it just a coincidence? Could there be ABI compatibility problems with TTU? We are at a loss.
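For reference, a minimal sketch of the two patterns described (database, table, and column names here are hypothetical):

/* Pattern 1: multi-statement job whose comment apparently must stay in place */
/* load the daily snapshot */
SEL COL1 FROM MYDB.T1;
SEL COL2 FROM MYDB.T2;

/* Pattern 2: the double-quoted alias now fails, the single-quoted one works */
SEL COL1 AS "MYCOL1" FROM MYDB.T1;   -- failing after the upgrade
SEL COL1 AS 'MYCOL1' FROM MYDB.T1;   -- reported to work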
 


Trying to Install Teradata Studio 15 and can't Uninstall 14.10 without Binaries - Windows X86 64 - response (6) by fgrimmer


Brett, did you have Studio 14.10 installed, or Studio 14.10.01? These are two different product codes.

Need help to write Recursive Query with Unknown Depth level - response (2) by deepthi.narayanasetty


Hi Dieter,
Thank you for the quick response.
If we raise the limit to 10,000 or so, the query runs for much longer and spools out.

Is there anything we can do to avoid the spool space issue?

Please help.
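For context, a typical shape for such a query with an explicit depth counter looks like this (a sketch only; the table and column names are hypothetical, since the actual query is not shown in the thread):

WITH RECURSIVE hier (emp_id, mgr_id, depth) AS (
  SELECT emp_id, mgr_id, 0
  FROM mydb.employees
  WHERE mgr_id IS NULL
  UNION ALL
  SELECT e.emp_id, e.mgr_id, h.depth + 1
  FROM mydb.employees e
  JOIN hier h ON e.mgr_id = h.emp_id
  WHERE h.depth < 100  /* depth cap: raising this directly increases spool usage */
)
SELECT * FROM hier;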

Trying to Install Teradata Studio 15 and can't Uninstall 14.10 without Binaries - Windows X86 64 - response (8) by brettbates


Francine,

I had 14.10.01 installed and found the product key to be {BA32DE5A-4189-4287-9D9B-52A8A6AB1410}.

Based on that, I searched the entire registry for this entry, deleted the following keys/items to attempt a manual removal of the software, and also deleted the old version's files on the file system:
 
HKEY_CLASSES_ROOT\Installer\Products\{Entire Key}
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Management\ARPCache\{KEY}
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\Folders (KEY with ProductKey)
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\{USERID}\Products\{KEY}
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{KEY}
 
I am now in the process of attempting re-install and will let you know if it succeeds.
 
Thanks,
Brett B.

Download TTU v 13.10 - forum topic by rychitre


Where can I download TTU 13.10? I only see TTU 15 available for download.


3706 expecting something between beginning of the request and SELECT - forum topic by AB75151


SELECT * FROM CUST.MEMBER
This simple SQL runs just fine if I type it into SQL Assistant. But if I copy and paste it from Notepad, SQL Assistant throws this 3706 error. At times, to debug code, I need to paste a few hundred lines of working code into SQL Assistant, and this forces me to retype the whole thing to execute it. What a waste of time :( Has anyone experienced this before?
I am running a TD 14 client against a TD 13 database. We have one test system on 14, and Prod on 13.

The ODBC provider version is 14.10.00.00.

Your assistance is appreciated.


Need help with selecting wide Teradata colum through Oracle Transparent Gateway - forum topic by deborah.j.hunt@boeing


Has anyone gotten this to work? Oracle Transparent Gateway is converting a character column defined as VARCHAR(10000) to LONG; only the first 999 characters are populated, but TRIM doesn't help. The SELECT/join then fails with an "illegal use of LONG datatype" error.
Originally the user was getting "remote statement has unoptimized view with remote object", so they created a view that mirrors the Teradata view they're using; now we have the new error above.
The column causing the trouble is the first column in the table and is defined as:
RANGE_STR VARCHAR(10000) CHARACTER SET LATIN CASESPECIFIC NOT NULL,
The query is a SELECT DISTINCT (no inserts involved).
Thanks in advance.
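If the gateway is mapping anything wider than Oracle's 4000-byte VARCHAR2 limit to LONG, one workaround worth trying is to cast the column down inside the mirroring view (a sketch; the view name is hypothetical, and VARCHAR(4000) assumes the populated data really fits):

REPLACE VIEW mydb.v_member_ranges AS
SELECT CAST(RANGE_STR AS VARCHAR(4000)) AS RANGE_STR
FROM mydb.member_ranges;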


how can you drop global temp table definition? - forum topic by tomcat711


I have to change the structure of a GTT definition. I tried dropping it first and then recreating it, but that does not seem to work in Teradata.
Please help!

Thanks


Can we able to perform all the ETL activites by using VIEWS ????? - response (3) by goldminer


We have been very successful in moving some ETL processing out of a serial ETL engine and into the parallel efficiency of the Teradata RDBMS. In many instances we accomplished this using correctly formulated views. I would like to point out a potential 'gotcha' when adopting this method, though. Once ETL gets 'pushed down' into the database it can perform very well, thanks to the MPP architecture of the database; it also has the potential to perform much worse than the same processing in the ETL engine. Why, you ask? Usually the culprit is a lack of statistics. Pushdown demands a level of stats rigor that was not needed before pushdown became so popular. Make sure your ETL developers are cognizant of the perils of missing stats when executing pushdown SQL, or enforce it by combing through the DBQL logs, which of course is reactive rather than proactive. A good point-of-load stats process is also very helpful in avoiding those pesky stale-stats situations.

Some do not like doing things like this outside the ETL tool because of lineage, metadata, business rules, and similar issues. I suppose there are pros and cons to both approaches.
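For instance, a point-of-load stats step can be as simple as the following (table and column names are hypothetical):

COLLECT STATISTICS ON mydb.stage_orders COLUMN (order_id);
COLLECT STATISTICS ON mydb.stage_orders COLUMN (order_date);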

Multi-lingual data support - response (1) by david.craig


In Teradata you need to use Unicode client and server character sets. There are no levels to set.
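For example (a sketch with hypothetical names): declare the server-side columns as UNICODE and connect with a Unicode session character set, e.g. in BTEQ:

CREATE TABLE mydb.t1 (
  id INTEGER,
  txt VARCHAR(100) CHARACTER SET UNICODE
);

.SET SESSION CHARSET 'UTF8'   /* issue before logging on */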

how can you drop global temp table definition? - response (1) by Glass


It does, if you have the proper access right on the table (DROP TABLE).
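A minimal sketch (object names hypothetical); note the definition cannot be dropped while any session still holds a materialized instance:

DROP TEMPORARY TABLE mydb.my_gtt;   /* releases this session's materialized instance */
DROP TABLE mydb.my_gtt;             /* drops the GTT definition; needs the DROP TABLE right */
CREATE GLOBAL TEMPORARY TABLE mydb.my_gtt (
  id INTEGER
) ON COMMIT PRESERVE ROWS;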
 

ODBC Connection Error - Specified driver could not be loaded - response (7) by piyush.bijwal


Hi guys,
Finally! I got it resolved yesterday. For us, the main challenge came from a recent downgrade of OBIEE from 11.1.1.7.14 to 11.1.1.6.5, so stale ODBC details in the registry were causing the major issue.
Here are the steps I followed:
1. Uninstalled the Teradata ODBC stack (ODBC, ICU, GSS).
2. Cleaned the registry of any entries referencing the Teradata ODBC driver. (To my surprise they were still there; I expected the driver uninstall to have removed them.)
3. Cleaned out the ODBC driver's folder structure under C:\Program Files\Teradata.
4. Restarted the server, to ensure all system files and variables were reset correctly.
5. Then installed the drivers in the order GSS, ICU, ODBC (mine is 64-bit Windows Server 2008).
6. Checked that the PATH variable points to the Teradata directory.
Boom... it worked. :)
I hope this helps others.

TD Express 14.10 CREATE FUNCTION permission - forum topic by george.goh


Hi,
I'm new to Teradata, and I'm trying to run a script in TD Express 14.10 (SLES 11) that creates a UDF, but I'm getting the error message "Failure 3524 The user does not have CREATE FUNCTION access to database DBC".
How do I enable this access for the 'dbc' user?
Any help would be much appreciated.
Thanks,
-George
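For reference, the missing right is granted with GRANT CREATE FUNCTION (a sketch; many sites prefer creating UDFs in a dedicated database rather than in DBC itself):

GRANT CREATE FUNCTION ON dbc TO dbc;
-- or, keeping DBC clean, grant it on a user database:
GRANT CREATE FUNCTION ON mydb TO dbc;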


How to optimized Multiple CASE statement - forum topic by deepak.goyal


Hi,

I am new to Teradata and confused about how the CASE expression behaves here.

Could somebody help me find the most efficient form of the multiple-CASE query in the example below?
SELECT
  CAST(COUNT(DISTINCT(CASE WHEN period_id < 350 THEN period_id END)) AS DECIMAL(30,0)) AS Column1,
  CAST(COUNT(DISTINCT(CASE WHEN period_id >= 351 AND period_id <= 375 THEN period_id END)) AS DECIMAL(30,0)) AS Column2,
  CAST(COUNT(DISTINCT(CASE WHEN period_id >= 376 AND period_id <= 400 THEN period_id END)) AS DECIMAL(30,0)) AS Column3,
  CAST(COUNT(DISTINCT(CASE WHEN period_id >= 401 AND period_id <= 450 THEN period_id END)) AS DECIMAL(30,0)) AS Column4,
  CAST(COUNT(DISTINCT(CASE WHEN period_id >= 451 AND period_id <= 575 THEN period_id END)) AS DECIMAL(30,0)) AS Column5,
  CAST(COUNT(DISTINCT(CASE WHEN pmc = 'UVS' THEN pmc END)) AS DECIMAL(30,0)) AS Column6,
  CAST(COUNT(DISTINCT(CASE WHEN pmc = 'VAL' THEN pmc END)) AS DECIMAL(30,0)) AS Column7
FROM [Table1]

 

Thanks in Advance,

Deepak
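For what it's worth, since the period_id ranges above are disjoint, one possible rewrite (a sketch, untested; the pmc counts would need a similar derived table) deduplicates period_id once and then counts per bucket, instead of running a DISTINCT aggregation per column over the full table:

SELECT
  CAST(SUM(CASE WHEN period_id < 350 THEN 1 ELSE 0 END) AS DECIMAL(30,0)) AS Column1,
  CAST(SUM(CASE WHEN period_id BETWEEN 351 AND 375 THEN 1 ELSE 0 END) AS DECIMAL(30,0)) AS Column2,
  CAST(SUM(CASE WHEN period_id BETWEEN 376 AND 400 THEN 1 ELSE 0 END) AS DECIMAL(30,0)) AS Column3,
  CAST(SUM(CASE WHEN period_id BETWEEN 401 AND 450 THEN 1 ELSE 0 END) AS DECIMAL(30,0)) AS Column4,
  CAST(SUM(CASE WHEN period_id BETWEEN 451 AND 575 THEN 1 ELSE 0 END) AS DECIMAL(30,0)) AS Column5
FROM (SELECT DISTINCT period_id FROM [Table1]) AS d;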

 


How to start teradata service in suse linux vmware image (teradata express 13.0) - response (8) by Purushotham


Hi Dieter,

If I want to execute a BTEQ script, from which directory do I run it, and what is the command to execute a BTEQ script in Unix?

Regards,
Purushotham
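For reference, a typical invocation looks like this (the script path and name are hypothetical; it can be run from any directory, as long as the script path resolves):

bteq < /home/user/scripts/myscript.btq > myscript.log 2>&1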

alternate for cast('9999-12-31 00:00:00.0' as timestamp) - forum topic by Altaaf


I am casting a datetime value, and for some reason the query is very slow. Is there any alternative to the cast above? This is used inside IBM Cognos.
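One alternative worth trying is a TIMESTAMP literal, which avoids a runtime string-to-timestamp conversion (whether you can substitute it depends on how Cognos generates the expression):

TIMESTAMP '9999-12-31 00:00:00'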


TPT Script to load multiple data types from a delimited file - forum topic by andydoorey


I have a file to load that doesn't seem to be in too unusual a format, but I'm not sure whether it can be loaded with TPT. Can anyone tell me whether this is possible, or whether I have to write a Unix script to edit the file before I can load it?
 
The data looks like this:

SVC_NAME,OPUNIT_NO,DEPOT_CODE,OPU_TYPE,REGION_NUMBER
"Aberdeen",7240,"051","M",4
"Aberystwyth SC",7252,"067","M",6
"Middleton",7130,"002","M",3
"Aylesham",7120,"061","M",5

 

The first row is a header containing the column names from the source system.
The data rows have CHAR/VARCHAR fields, which are quoted, and numeric fields, which are not. The fields are delimited by commas.

If I define the file as delimited, TPT states that all fields must be VARCHAR/VARDATE. That would mean every field would have to be quoted, which they aren't.
Also, I want to skip the first row, but it seems that even if I set SkipRows=1 it still checks that the header matches the same schema as the data rows, so each of its fields would need to be quoted too.

I don't think this is an unusual format. In fact, the CSVLD function can split it out into the correct fields. Unfortunately, when I tried that function on a fairly large table of unformatted records similar to the above, it caused Teradata to crash. It seems more sensible to load the file with TPT into the correct fields in the first place, which is why I'm trying to find out whether that is possible.

Can anyone let me know whether this is possible or not?
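For what it's worth, newer TPT releases let the DataConnector producer treat quoting as optional per field; a sketch of the relevant attributes (operator, schema, and file names are hypothetical, and availability of QuotedData = 'Optional' depends on your TPT version):

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA OPUNIT_SCHEMA
ATTRIBUTES
(
  VARCHAR FileName = 'opunits.csv',
  VARCHAR Format = 'Delimited',
  VARCHAR TextDelimiter = ',',
  VARCHAR OpenQuoteMark = '"',
  VARCHAR QuotedData = 'Optional',  /* strip quotes where present */
  INTEGER SkipRows = 1
);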


One of the fast Export session has been logged off - forum topic by Proactive


Hi all,
My FastExport query failed twice; both times the error in the log file was "RDBMS Error: one of the FastExport sessions has been logged off".

Kindly provide a reason for the sessions getting logged off: the query was not aborted, and there is no trace in Viewpoint of why the sessions were logged off either.

In addition, I looked for an explanation of error 2594; the remedy given is to delete the answer-set spool file if necessary and then resubmit the SQL SELECT. So my second question is: how do I find the answer-set spool file? Is deleting it actually necessary, and how useful is this remedy?

Thanks in advance.


Failed [7502 : HY000] A system trap was caused by UDF/XSP/UDM TESTER."UDF call" for SIGSEGV - forum topic by lavanya2389


I am getting the following error when I try to compile my UDF:
Failed [7502 : HY000] A system trap was caused by UDF/XSP/UDM TESTER.TESTER.UDACL_CONTRACT for SIGSEGV
My code:
#define byte unsigned char
#define boolean int
#define false 0
#define true 1
#define OK 0
#define SQL_TEXT Latin_Text
#define _CRT_SECURE_NO_DEPRECATE
#include <limits.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sqltypes_td.h>
#define FALSE 0
#define TRUE 1
#define MAX_KEY_LEN 100
void udacl_contract (
    INTEGER *result,
    int *indicator_Result,
    char sqlstate[6],
    SQL_TEXT extname[129],
    SQL_TEXT specific_name[129],
    SQL_TEXT error_message[257])
{
    FNC_TblOpColumnDef_t *Output_columns; // column definition for output stream
    FNC_TblOpColumnDef_t *Input_columns;  // column definition for input stream
    int colcount;                         // number of columns in output
    int i;
    char key[MAX_KEY_LEN];                // key in custom clause
    int keylen;                           // actual key length
    Key_info_t valuesKEY;                 // values associated with key SUM
    // Key_info_t valuesAVG;              // values associated with key AVG
    int *index;                           // indices of columns in input stream to
                                          // apply aggregates
    boolean_t found;
    char colname[FNC_MAXNAMELEN_EON];

    /* -------- process custom clause information -------- */
    FNC_TblOpGetCustomKeyInfoOf("KEY", &valuesKEY);
    valuesKEY.values_r = FNC_malloc( sizeof(Values_t) * valuesKEY.numOfVal );
    // compute number of columns in output stream
    colcount = FNC_TblOpGetColCount(0, 'W');
    FNC_TblOpGetCustomValuesOf(&valuesKEY);
    // Allocate memory for the output columns
    Output_columns = FNC_malloc( TblOpSIZECOLDEF(colcount) );
    // initialize output columns
    TblOpINITCOLDEF(Output_columns, colcount);
    // Allocate memory for input columns
    Input_columns = FNC_malloc( TblOpSIZECOLDEF( FNC_TblOpGetColCount(0, 'R') ) );
    // initialize input columns
    TblOpINITCOLDEF( Input_columns, FNC_TblOpGetColCount(0, 'R') );
    FNC_TblOpGetColDef(0, 'R', Input_columns);
    strncpy( Output_columns->column_types[i].column,
             strncat(colname, (char *) valuesKEY.values_r[i].value, valuesKEY.values_r[i].valueLen),
             FNC_MAXNAMELEN_EON );
    Output_columns->column_types[i].datatype = REAL_DT;
    Output_columns->column_types[i].bytesize = SIZEOF_FLOAT;
    FNC_TblOpSetOutputColDef(0, Output_columns);
    // pass indices to table operator in the contract context
    FNC_TblOpSetContractDef(index, sizeof(int) * colcount );
    // release memory
    *result = Output_columns->num_columns;
    FNC_free(Output_columns);
    FNC_free(Input_columns);
    FNC_free(valuesKEY.values_r);
    //FNC_free(valuesAVG.values_r);
    FNC_free(index);
}
 
void udacl ()
{
    int null_ind, length;
    FNC_TblOpHandle_t *Input_handle;      // input stream handle
    FNC_TblOpHandle_t *Output_handle;     // output stream handle
    FNC_TblOpColumnDef_t *Input_columns;  // input stream column definitions
    FNC_TblOpColumnDef_t *Output_columns; // output stream column definitions
    double *aggregates;                   // aggregates computation
    int *index;                           // columns index in input stream of aggregates
    int rowcount = 0;                     // number of rows
    int i, j, k, tmp;
    BYTE *ptr;
    int colcount;

    /* Allocate memory for the output columns */
    colcount = FNC_TblOpGetColCount(0, 'W');
    /* FNC_TblOpGetColCount retrieves the number of columns in the stream */
    Output_columns = FNC_malloc( TblOpSIZECOLDEF( colcount ) );
    /* TblOpSIZECOLDEF = sizeof(parm_tx)*colcount + 2*sizeof(int) */
    /* initialize output columns */
    TblOpINITCOLDEF(Output_columns, colcount);
    /* TblOpINITCOLDEF = if (coldef != NULL) { coldef->num_columns = colcount;
       coldef->length = sizeof(parm_tx)*colcount;
       memset(coldef->column_types, 0, coldef->length); } */
    FNC_TblOpGetColDef(0, 'W', Output_columns);
    /* Allocate memory for input columns */
    Input_columns = FNC_malloc( TblOpSIZECOLDEF( FNC_TblOpGetColCount(0, 'R') ) );
    /* initialize input columns */
    TblOpINITCOLDEF( Input_columns, FNC_TblOpGetColCount(0, 'R') );
    FNC_TblOpGetColDef(0, 'R', Input_columns);
    /* initialize aggregated values for group */
    aggregates = FNC_malloc( sizeof(float) * Output_columns->num_columns );
    for (i = 0; i < Output_columns->num_columns; i++)
    {
        aggregates[i] = 0;
    }
    /* get indices from contract context */
    index = FNC_malloc( FNC_TblOpGetContractLength() );
    FNC_TblOpGetContractDef(index, FNC_TblOpGetContractLength(), &tmp);
    /* FNC_TblOpGetContractDef retrieves the contract context */
    /* FNC_TblOpGetContractLength retrieves the length of the contract context */
    /* The basic row iterator is structured as follows */
    Input_handle = FNC_TblOpOpen(0, 'R', TBLOP_NOOPTIONS);  // start iterator for input stream
    Output_handle = FNC_TblOpOpen(0, 'W', TBLOP_NOOPTIONS); // start iterator for output stream
    /* FNC_TblOpRead returns SUCCESS, EOF (no more data), ABORT or ERROR */
    while ( FNC_TblOpRead(Input_handle) == TBLOP_SUCCESS )
    {
        rowcount++;
        // update aggregate for each column
        for (i = 0; i < Output_columns->num_columns; i++)
        {
            /* increment aggregated values */
            FNC_TblOpGetAttributeByNdx(Input_handle, index[i], (void **) &ptr,
                                       &null_ind, &length);
            switch (Input_columns->column_types[index[i]].datatype)
            {
                case BYTEINT_DT:  aggregates[i] += *((signed char *) ptr); break;
                case SMALLINT_DT: aggregates[i] += *((short *) ptr); break;
                case INTEGER_DT:  aggregates[i] += *((int *) ptr); break;
                case BIGINT_DT:   aggregates[i] += *((long long *) ptr); break;
            }
        }
    }
    /* set output values */
    for (i = 0; i < Output_columns->num_columns; i++)
    {
        if (Output_columns->column_types[i].column[0] == 'A')
        {
            if (rowcount != 0)
            {
                aggregates[i] = aggregates[i] / ((double) rowcount);
            }
        }
        FNC_TblOpBindAttributeByNdx(Output_handle, i, aggregates + i, 0, sizeof(double));
    }
    /* write output row */
    FNC_TblOpWrite(Output_handle);
    FNC_TblOpClose(Input_handle);
    FNC_TblOpClose(Output_handle);
    // release memory
    FNC_free(Output_columns);
    FNC_free(Input_columns);
    FNC_free(aggregates);
    FNC_free(index);
}
 
Can you please tell me what is causing this error?
Thanks in advance
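One observation for anyone hitting the same trap: in udacl_contract, both i and index are read before ever being assigned (in the strncpy/column-definition lines and in the FNC_TblOpSetContractDef call), and colname is passed to strncat uninitialized; dereferencing indeterminate values like these is a classic cause of SIGSEGV. A sketch of the shape that section presumably intends (hypothetical; adapt to the real contract logic):

index = FNC_malloc( sizeof(int) * colcount );
for (i = 0; i < colcount; i++)
{
    colname[0] = '\0';  /* start from an empty string before strncat */
    strncat(colname, (char *) valuesKEY.values_r[i].value, valuesKEY.values_r[i].valueLen);
    strncpy(Output_columns->column_types[i].column, colname, FNC_MAXNAMELEN_EON);
    Output_columns->column_types[i].datatype = REAL_DT;
    Output_columns->column_types[i].bytesize = SIZEOF_FLOAT;
    index[i] = i;  /* placeholder mapping; real code would locate the matching input column */
}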
