
Fastloading csv file with double quotes - response (4) by SuSeSi


Fastload cannot treat a quoting character as part of the data; at best it can strip the quote character itself. Different applications add their own extra quotes when saving a csv file.
In this case I suggest using a shell/perl script (a sketch follows below) to:
1. change the quoting character '"' to a character that does not occur in the data, and
2. collapse each doubled double-quote character, which represents a single literal quote in the data, to one.
the data:
"D9F570F6-538B-4795-98A4-021D99C03576"|"""BUREAU,TARIFICATION"""
becomes:
*D9F570F6-538B-4795-98A4-021D99C03576*|*"BUREAU,TARIFICATION"*
then use the fastload command:
SET RECORD VARTEXT DELIMITER '|' QUOTE YES '*';
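A minimal perl sketch of that preprocessing, reading stdin and writing stdout (the '*' replacement character is an assumption and is only safe if it never occurs in your data):

#!/usr/bin/perl
# normalize_quotes.pl -- a sketch; assumes '*' never occurs in the data
# usage: perl normalize_quotes.pl < input.csv > output.csv
use strict;
use warnings;

while (my $line = <STDIN>) {
    # rewrite each quoted field "..." as *...*, collapsing any doubled
    # double-quote inside the field to a single literal quote
    $line =~ s{"((?:[^"]|"")*)"}{
        my $field = $1;
        $field =~ s/""/"/g;
        '*' . $field . '*';
    }ge;
    print $line;
}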

 


Teradata Online Training & certification - forum topic by Keentrainings


Teradata Online Training by Keen Technologies. We are the best institute for online training, and we have expert trainers to share knowledge on Teradata. Teradata course materials and videos are designed by our trainers.
 
Should you require any further information, please do not hesitate to contact us.
 
Email ID : trainings@keentechnologies.com
Phone @ IND : +91-9989754807 / USA : +1-347-952-4634
http://www.keentechnologies.com/data_warehouse-teradata-online-training.html
 
We will give project support, resume preparation and interview questions to our students and consultants.


Recursive Query Issue - response (5) by Qaisar Aftab Kiani


See my other post; I have provided the solution for a similar problem.

 

http://forums.teradata.com/forum/database/help-in-recursive-query-required

 

 

Or there is a blog you can take help from:

http://walkingoncoals.blogspot.com/2009/12/fun-with-recursive-sql-part-2.html

 

Drop a note if you are still facing issues while tweaking it to your requirements.

Transpose and Concatenating Values - response (3) by Qaisar Aftab Kiani


You need to write a recursive query.
I haven't tested this SQL, but I hope it will work for you:

WITH RECURSIVE LOOKUP(ID,VALUE,LVL)
AS
(
	SELECT ID, MIN(VALUE(VARCHAR(1000))) AS VALUE,1 as LVL
	FROM table1
	GROUP BY 1

	UNION ALL

	SELECT b.ID,trim(a.VALUE) || ';' || trim( b.VALUE), LVL+1
	FROM table1 a INNER JOIN LOOKUP b
	ON a.ID = b.ID
	AND a.VALUE > b.VALUE
)

SELECT ID, VALUE, LVL
FROM LOOKUP
QUALIFY RANK() OVER(PARTITION BY id ORDER BY VALUE DESC) = 1;
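As a quick hypothetical illustration: if table1 held the rows (1, 'A'), (1, 'B'), (1, 'C'), the seed would produce (1, 'A', 1), the recursion would build (1, 'B;A', 2), (1, 'C;A', 2) and (1, 'C;B;A', 3), and the QUALIFY would keep only the full concatenation (1, 'C;B;A', 3).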

 

Recursive Query Issue - response (6) by Qaisar Aftab Kiani


I have written something very quickly but haven't tested it. Probably some minor tweaks would do the trick.

CREATE SET TABLE foo (
Number_id integer,
FF_String VARCHAR(100),
STRGY_TC VARCHAR(100)
)
PRIMARY INDEX ( Number_id );

INSERT INTO FOO VALUES (135, 'DD_B_SEG', '48647_002'); 
INSERT INTO FOO VALUES (127, 'ATM_B_NP', '48557_001|48557_002|48557_003|48558_004|48558_005|48558_006|48558_007|48558_008');

With Recursive LOOKUP (Number_id, FF_String, Token, Remainder) As
(
Select
Number_id
, FF_String
, case when index(STRGY_TC,'|') > 0 then substr(STRGY_TC, 1, index(STRGY_TC,'|')-1) else STRGY_TC end as Token
, case when index(STRGY_TC,'|') > 0 then substr(STRGY_TC, index(STRGY_TC,'|')+1, character_length(STRGY_TC)) else '' end as Remainder
From foo root
union all
Select
direct.Number_id
, direct.FF_String
, case when index(direct.Remainder,'|') > 0 then substr(direct.Remainder, 1, index(direct.Remainder,'|')-1) else direct.Remainder end as Token
, case when index(direct.Remainder,'|') > 0 then substr(direct.Remainder, index(direct.Remainder,'|')+1, character_length(direct.Remainder)) else '' end as Remainder
From LOOKUP direct
where character_length(direct.Remainder) > 0
)
select b.Number_id
, b.FF_String
, TRIM(substr(b.Token, 1, index(b.Token, '_')-1)) AS Strgy
, TRIM(substr(b.Token, index(b.Token, '_')+1, character_length(b.Token))) AS Test
from LOOKUP b;
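With the two sample rows above, the intent is one output row per pipe-separated token: (135, 'DD_B_SEG', '48647', '002') for the first insert, and eight rows from (127, 'ATM_B_NP', '48557', '001') through (127, 'ATM_B_NP', '48558', '008') for the second.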

 

RECURSIVE VIEW - response (1) by Qaisar Aftab Kiani


This should help.
http://www.info.teradata.com/HTMLPubs/DB_TTU_14_00/index.html#page/SQL_Reference/B035_1184_111A/Create_Procedure-Details.011.105.html

Fast Export and MLoad - response (5) by islanderman


Hoping someone can give me guidance.  Currently we have large loads of data being inserted to Teradata via a perl script from Linux RedHat.  The insertion is line by line so it takes hours to load.  I was able to install tdicu, tdodbc, TeraGSS by downloading the tar files from this site.  I was able to configure DBD::ODBC to connect to the Teradata DB and the insertion works fine.  I've been told to use JDBC for Mload or Bteq or Fastload, but I'm not familiar enough with Java.  I've been told to work with TPT 14, which is part of TTU, but I don't know how to get the software for Linux.  
First of all, I assume that Mload/Fastload/Bteq are all part of the TTU package; is that correct?
If I happen to get the software can I use perl to execute the Mload/Fastload/Bteq script?
Any help would be appreciated.

Fast Export and MLoad - response (6) by feinholz


If you are going to be using our load/unload tools for the first time, you should use TPT. It is the suite of load/unload tools going forward.
TPT is part of TTU and can be found in the same media as the rest of the software.
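On the perl part of the earlier question: the load tools are ordinary command-line programs, so a perl script can drive them through system(). A minimal sketch (the job script and file names here are just placeholders):

#!/usr/bin/perl
# run_load.pl -- a sketch; 'load_job.txt' and 'commands.txt' are placeholders
use strict;
use warnings;

# tbuild is the TPT command-line driver; -f names the job script
my $rc = system('tbuild', '-f', 'load_job.txt');
die "tbuild failed, exit code " . ($rc >> 8) . "\n" if $rc != 0;

# bteq reads its commands from stdin, so hand it a script via redirection
$rc = system('bteq < commands.txt > bteq.log 2>&1');
die "bteq failed, exit code " . ($rc >> 8) . "\n" if $rc != 0;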
 


TPT error in loading from an external file - response (3) by simplesim092


Does anyone have a resolution for this? I am facing the same issue as point 2 above, and there is no EOF.

TPT error in loading from an external file - response (4) by feinholz


I would imagine all files have an EOF.
Every record should also have EOR.
(We are fixing the code so that we will treat EOF as EOR on the last record, but that fix is not out yet. Thus, add an EOR to your last record.)
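For a plain text data file the EOR is simply the line terminator, so the last record can be patched with a small perl script; a sketch (the file name is passed as an argument):

#!/usr/bin/perl
# add_eor.pl -- a sketch; appends a trailing newline (the EOR for text
# records) if the data file does not already end with one
use strict;
use warnings;

my $file = shift or die "usage: add_eor.pl <datafile>\n";
open my $fh, '+<', $file or die "cannot open $file: $!\n";
seek $fh, -1, 2 or die "seek failed (empty file?): $!\n";  # 2 = SEEK_END
read $fh, my $last, 1;
if ($last ne "\n") {
    seek $fh, 0, 2;      # reposition at EOF before switching to writing
    print $fh "\n";      # append the missing EOR
}
close $fh or die "close failed: $!\n";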
 

Unable to load TD driver into ODBC - forum topic by jb186040


I tried to upgrade my laptop TTU to 14.10. Now I cannot connect to any database via ODBC. The error states that the "setup routines for the Teradata driver could not be found". I've uninstalled and re-installed several times. Still the same message.


TPT Data load from csv file - forum topic by simplesim092


Hi,
I am trying to load data from a CSV file into a TD table using TPT (version 14.10). The TPT script uses ',' as the delimiter, and the csv file has data in the format below. How can I load this data?
SEQ_ID ,Prod_TYPE ,ACTIVE_ID ,Region_MAX_LAT ,Region_MAX_LONG ,AR_MIN_LAT ,AR_MIN_LONG ,AR_NUMERIC_ID ,CD_ACQUISITION_ID ,CD_SYSTEM_ID ,EFf_DATE ,EXP_DATE ,LOCAL_SYSTEM_ID ,PPDM_GUID ,PRED_NAME ,REMARK ,SOURCE_TD ,SOURCE_DOCUMENT ,ROW_CHANGED_BY ,ROW_CHANGED_DATE ,ROW_CREATED_BY ,ROW_CREATED_DATE ,ROW_QUALITY
Msrd,ABC_DEF,Y,,,,,4,,,,,,,'BKC Company, Ltd.',,EMW,,,,ETLR,,AWD
I tried enclosing the data that contains commas in double quotes, but it gives me the error below. Please guide me on this. I am using the Update operator because I need a format expression for the date column, so I can't use FastLoad, i.e. the Load operator in TPT. There is no EOF character, and in Notepad the csv ends with the cursor on the next line.
The error occurs because the data contains commas (and dots) while my csv delimiter is also a comma:
UPDATE_OPERATOR: entering DML Phase
UPDATE_OPERATOR: entering Acquisition Phase
FILE_READER: TPT19350 I/O error on file '/data/scripts/AR.csv'.
FILE_READER: TPT19003 Delimited Data Parsing error: Too many columns in row 1
FILE_READER: TPT19221 Total files processed: 0.
UPDATE_OPERATOR: disconnecting sessions
 
 
Kindly advise on how I can handle such data.
Thanks in advance; awaiting your response.


BTEQ Batch mode, commands / SQL run from shared server. - forum topic by istari


Windows 7: BTEQ 13 and 14
Calling a batch file from an end user's machine shortcut: the target is \\server\folder\file.bat, with 'Start in' set to %temp% to prevent CMD prompt errors.
bteq < \\server\folder\commands.txt
commands.txt contains
login info
.EXPORT REPORT FILE=\\server\folder\output.csv
.RUN file \\server\folder\sql.txt
 
Problem: when the shortcut is run, it outputs bteq 0<\\server\folder\commands.txt and breaks. (Note the zero before the input redirect.)
When just running the command bteq < \\server\folder\commands.txt straight from a command prompt, everything works.
When everything is local, and shortcut points to a local batch file, with local commands, login, and sql files, it runs.
 
Thoughts/Suggestions?
 
Thank you.
 


Volatile Tables - forum topic by kgr

Using a PPI column in PI - response (2) by raj786


Hi,
I have a PI on colA and the skew is 80%. I added one more column to the PI, i.e. my proposed PI is (colA, colB), and the skew is 7%. Is there any downstream impact? My table chain is A -> B -> C -> D and I am working on B; if I change the PI at B, does it impact C and D? How do I resolve this?


Script for Complete Teradata database system backup - forum topic by Teradata_User1


Hi,
I am trying to take a complete Teradata development system backup through ABU, but I am getting the error "Failure ARC0202:Reserve was not expected". Kindly let me know whether I can exclude the spool reserve database from the backup script. Is there a script for a full Teradata development database backup? And if I exclude the spool reserve database, will it still be a full backup?
 
Regards,
Mahendra.
 

Forums: 

BTEQ Import REPORT Mode. Growing to buffer - forum topic by Jugalkishorebhatt1


Hello All,
When I try to import fixed-length data, BTEQ keeps growing the buffer to 65473 and the data gets imported incorrectly into the table. The problem is in the USING part, but I am not able to figure out how to overcome it.
Please help me with this issue.
Import Script:
.LOGON 127.0.0.1/jugal,jugal
.IMPORT REPORT FILE=/root/jugal/samples12
.QUIET ON
.REPEAT *
USING
EmpId (INTEGER),
EmpName (CHAR(5)),
DeptId (INTEGER)
INS INTO jugal.NewTable1 values (:EmpId,:Empname,:DeptId);
.LOGOFF
 
samples12.txt
10 Jugal 100
20 Jugal 200
30 Anil 300
 
Target Table:
CREATE TABLE jugal.NewTable1
(EmpId INT,
EmpName CHAR(10),
DeptId VARCHAR(10)
);


Volatile Tables - response (1) by VandeBergB


Everything you seek can be found in the user documentation, available for free download from Teradata.

Using a PPI column in PI - response (3) by VandeBergB


If your downstream queries don't access the table through the new composite PI, you'll get full table scans. What defines the most common access path to the table: colA alone, or colA with colB? You've got an extreme contrast between 80% skew and 7% skew; I'd suspect that your initial PI doesn't define the most common access path because of the data clustering in colA.
Build some test cases and see what the explains look like.
 
-cheers

TPT Data load from csv file - response (1) by simplesim092


The resolution has been found. My data in the csv file contained the same symbol I was using as the delimiter, so I added the quote-mark attributes to the DEFINE OPERATOR FILE_READER (the delimiter and file attributes shown complete the fragment, with the path taken from the job log above):

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA TGT_TABLE_SCHEMA
ATTRIBUTES
(
VARCHAR OpenQuoteMark = '"',
VARCHAR CloseQuoteMark = '"',
VARCHAR Format = 'Delimited',
VARCHAR OpenMode = 'Read',
VARCHAR TextDelimiter = ',',
VARCHAR DirectoryPath = '/data/scripts/',
VARCHAR FileName = 'AR.csv'
);

It worked well and the data got loaded!  :-)
