Channel: Teradata Forums - All forums

Teradata Training Material available - response (260) by teranathan


Dear Todd,
 
I am planning to take the Teradata Basics exam; kindly provide the materials.
 
Please send them to nathan.nvp@gmail.com
 
Thanks,
Nathan


Teradata Lock - 3 Insert Statements - response (7) by Raja_KT


Hi Rohit,
It is processed in sequence.
Cheers,
Raja

ODBC Connection issues with Teradata 14 on VMWare Player - response (2) by md186027


OK, the solution to this is:
Check your ODBC connection and make sure the IP address points correctly to the VMware virtual machine. In case you are using a loopback adapter, make sure that the adapter address is updated. Optionally, you could use Teradata.Net instead of ODBC.

Teradata Training Material available - response (261) by Neyaz

Top Function - response (7) by M.Saeed Khurram


Hi Ratnam,
In a SAMPLE n query, the AMPs choose rows in some random fashion, so there is no need to sort the rows.
With TOP n (when an ORDER BY is specified), the rows are sorted first and then the top n rows are returned.
You can use RANDOM if you want a truly random result.
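
For illustration, a short sketch of the variants being discussed (the database, table and column names are made up):

-- Fast peek at some data: without ORDER BY there is no sort, TOP just returns the first rows found
SELECT TOP 10 cust_id, cust_name
FROM demo_db.customer;

-- Deterministically ranked result: TOP with ORDER BY sorts first, then returns the top 10 rows
SELECT TOP 10 cust_id, cust_name
FROM demo_db.customer
ORDER BY cust_name;

-- SAMPLE pulls 10 rows spread across the AMPs without sorting
SELECT cust_id, cust_name
FROM demo_db.customer
SAMPLE 10;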
 
 

Query with TOP and COUNT - response (4) by M.Saeed Khurram

TPT with Unicode columns in source. - forum topic by indrajit_td

SCHEMA SECTION (NOT ALL COLUMNS):
--------------
DESCRIPTION 'TABLE table_ld ODBC SCHEMA'  ( 
      SourceCol_1          NUMBER(38) 
    , SourceCol_2          TIMESTAMP 
    , SourceCol_3          VARCHAR(60) CHARACTER SET UNICODE 
    , SourceCol_4          TIMESTAMP
    , SourceCol_5          VARCHAR(60) CHARACTER SET UNICODE
    , SourceCol_6          VARCHAR(60) CHARACTER SET UNICODE 
    , SourceCol_7          TIMESTAMP 
    , SourceCol_8          VARCHAR(20) CHARACTER SET UNICODE
    , SourceCol_9          VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_10         VARCHAR(20) CHARACTER SET UNICODE 
    , SourceCol_11         VARCHAR(20) CHARACTER SET UNICODE 
    , SourceCol_12         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_13         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_14         VARCHAR(60) CHARACTER SET UNICODE 
    , SourceCol_15         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_16         VARCHAR(60) CHARACTER SET UNICODE 
    , SourceCol_17         VARCHAR(60) CHARACTER SET UNICODE 
    , SourceCol_18         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_19         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_20         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_21         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_22         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_23         VARCHAR(4000) CHARACTER SET UNICODE 
    , SourceCol_24         VARCHAR(255) CHARACTER SET UNICODE 
    , SourceCol_25         VARCHAR(60) CHARACTER SET UNICODE 

APPLY SECTION:
-------------
APPLY
    ( 
      'INSERT INTO LoadTablesaa.branch_ld
      ( 
          id_num 
        , created_date 
        , created_by 
        , last_modified 
        , last_modified_by 
        , lock_user 
        , lock_date 
        , state 
        , cr_state 
        , branch_type 
        , release_target 
        , cr_release_target 
        , branch_name 
        , responsible 
        , cr_responsible 
        , branch_manager 
        , branch_steward 
        , keywords 
        , baseline_planned 
        , baseline_actual 
......
   )
      VALUES 
      (  
        :SourceCol_1 
      , :SourceCol_2 
      , :SourceCol_3 
      , :SourceCol_4 
      , :SourceCol_5 
      , :SourceCol_6 
      , :SourceCol_7 
      , :SourceCol_8 
      , :SourceCol_9 
      , :SourceCol_10 
      , :SourceCol_11 
      , :SourceCol_12 
      , :SourceCol_13 
      , :SourceCol_14 
      , :SourceCol_15 
....
    ) 
    TO OPERATOR ( STREAM_OPERATOR[1]) 
    SELECT * FROM OPERATOR ( table_ld_READ_OPERATOR[1] );
  
TBUILD CALL:
-----------
tbuild -f $CTL_FILE -u "TDPassword = '$DSS_PWD', SRCPassword = '$SRC_PWD'" wsl-$LOAD_TABLE-$SEQUENCE >> $AUD_FILE

ERROR MESSAGE RECEIVED:
----------------------
TPT_INFRA: At "CHARACTER" missing { RPAREN_ COMMA_ MACROCHARSET_ METADATA_ OFFSET_ } in Rule: Column Definition

Compilation failed due to errors. Execution Plan was not generated
Job script compilation failed.
Teradata Parallel Transporter Version 14.10.00.02
Job terminated with status 8.

Hi,
We have been trying to load data using TPT from an Oracle source that has Unicode columns, but we keep getting the error shown above. The problem seems to be with the UNICODE columns in the schema.
 
Regards,
Indrajit
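
Going by the compiler message, the TPT schema grammar at this release appears to stop at the word CHARACTER, i.e. it does not accept a CHARACTER SET clause on a column definition. A minimal sketch of one possible workaround is below, reusing the column names from the script above; the job name, the UTF8 session character set and the tripled VARCHAR lengths are assumptions that would need to be verified against the actual data:

/* Assumption: run the whole job in UTF8 by placing this before DEFINE JOB */
USING CHARACTER SET UTF8
DEFINE JOB load_table_ld
(
  /* Schema without CHARACTER SET clauses; TPT VARCHAR lengths are byte lengths,
     so the Unicode columns are widened (x3 here for UTF8). */
  DEFINE SCHEMA table_ld_SCHEMA
  DESCRIPTION 'TABLE table_ld ODBC SCHEMA'
  (
        SourceCol_1          NUMBER(38)
      , SourceCol_2          TIMESTAMP
      , SourceCol_3          VARCHAR(180)   /* was VARCHAR(60) CHARACTER SET UNICODE */
      , SourceCol_4          TIMESTAMP
      , SourceCol_5          VARCHAR(180)
      /* ... remaining columns widened the same way ... */
  );
  /* operator definitions and APPLY section unchanged */
);

Whether to double (UTF16) or triple (UTF8) the lengths depends on the session character set chosen; the key point is that the compile error comes from the CHARACTER SET keywords themselves.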
 
 
 
 
 


Top Function - response (8) by dnoeth


Hi Ratnam,
SAMPLE (RANDOMIZED ALLOCATION) returns a truly random result, while TOP n (without PERCENT/ORDER BY) simply returns the first n rows found on a single AMP (or multiple AMPs). SAMPLE always has more overhead and is slower compared to TOP (without PERCENT/ORDER BY).
If you just want to see some data, better use TOP; if you do some statistical stuff, switch to SAMPLE.
Btw, what is MECONOSAM LEVEL?
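
To make the forms dnoeth contrasts concrete (the table name is made up):

-- Default SAMPLE: proportional allocation, each AMP contributes rows roughly in proportion to what it holds
SELECT * FROM demo_db.sales_txn SAMPLE 100;

-- RANDOMIZED ALLOCATION: rows are drawn randomly across the table, at additional cost
SELECT * FROM demo_db.sales_txn SAMPLE RANDOMIZED ALLOCATION 100;

-- TOP without PERCENT/ORDER BY: cheapest option, just the first 100 rows found
SELECT TOP 100 * FROM demo_db.sales_txn;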


Call a macro from a stored procedure? - response (6) by s@ir@m


Hi,
 
Can I call a stored procedure from within a macro?
 
Ratnam

Does one need unicode compression in Teradata 14.0 - response (4) by M.Saeed Khurram

Does one need unicode compression in Teradata 14.0 - response (5) by M.Saeed Khurram


Hi,
I was going through some material and came to know that TRANSUNICODETOUTF8 can only be used to compress UNICODE columns that contain ASCII/Latin 7-bit data. So I guess that if TD 14 is to store Unicode as UTF8, it will require the data to be ASCII/Latin; otherwise it will store it as UTF16.
 

Call a macro from a stored procedure? - response (7) by dnoeth


Hi Ratnam,
yes, if it's the only statement.
Of course you could have tried that easily on your own :-)
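
A minimal sketch of what Dieter describes, with hypothetical database, procedure and parameter names; the macro body holds a single CALL statement:

REPLACE MACRO demo_db.call_my_proc (in_id INTEGER) AS
( CALL demo_db.my_proc(:in_id); );

-- invoked like any other macro
EXEC demo_db.call_my_proc(42);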

Does one need unicode compression in Teradata 14.0 - response (6) by dnoeth


Hi Khurram,
TransUnicodeToUTF8 works for any UTF16 character, but if there's a lot of Latin chars it simply compresses better:
Most of the Latin chars are stored in one byte in UTF8 while some of the more exotic chars might need more than 2 bytes.
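
For reference, a sketch of algorithmic compression on a UNICODE column as discussed in this thread (the table is hypothetical; note the compressed column is not part of the primary index, since a PI column can't be compressed):

CREATE TABLE demo_db.product_comment
(
      comment_id   INTEGER NOT NULL
    , comment_txt  VARCHAR(1000) CHARACTER SET UNICODE
          COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8
          DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode
)
PRIMARY INDEX (comment_id);

-- Mostly-Latin text shrinks to roughly half its UTF16 size, while exotic characters may need 3 or more bytes in UTF8.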

Does one need unicode compression in Teradata 14.0 - response (7) by Raja_KT


In the context of table joins, I feel we need to be careful that both joining columns use the same character set; otherwise there will be performance degradation. I have heard of quite a number of cases.
Raja

Does one need unicode compression in Teradata 14.0 - response (8) by dnoeth


Hi Raja,
This only relates to LATIN vs. UNICODE: of course they hash differently, and thus you can't get PI-to-PI joins. But algorithmic compression doesn't change the character set, only the storage (btw, you can't compress a PI column).
Joining on columns with different character sets is a sign of bad database design :-)
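
To illustrate the character-set mismatch with a hypothetical pair of tables:

-- The same key value hashes differently under LATIN and UNICODE, so this join cannot be a
-- direct AMP-local PI-to-PI merge join; the LATIN side is translated to UNICODE and
-- redistributed/spooled before the join.
CREATE TABLE demo_db.t_latin   (code VARCHAR(10) CHARACTER SET LATIN   NOT NULL) PRIMARY INDEX (code);
CREATE TABLE demo_db.t_unicode (code VARCHAR(10) CHARACTER SET UNICODE NOT NULL) PRIMARY INDEX (code);

SELECT COUNT(*)
FROM demo_db.t_latin   AS l
JOIN demo_db.t_unicode AS u
  ON l.code = u.code;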


Call a macro from a stored procedure? - response (8) by s@ir@m


Thank you, Dieter.
 
1. And likewise, can a macro be called from inside another macro?
2. Can we use DDL in a macro?
 
Ratnam

Many tables or many databases? - forum topic by rhuaman


For a project, we have several groups that will produce numbers using models. Input and output data will be in the DB. Each group is independent and works in its own way; however, the outputs they produce will need to be combined, aggregated and queried.
I am debating whether to create one database with a lot of tables or many smaller databases, one belonging to each group. I am leaning towards creating multiple databases because of easier access management. However, I am concerned about performance.
Will querying tables across databases affect Teradata performance?
Thank you!
 
 
 


Clarification on Rowhash Match Scan and Sync Scan - forum topic by Santanu84


Hi Folks,
 
I need confirmation on my understanding of Rowhash Match Scan and Sync Scan.
 
1.
I am joining two tables, TableA and TableB, on the TableA.A1 column, which is the PI, and the TableB.B2 column, which is not a PI.
Prior to the join, TableB needs to be spooled for the rowhash match scan.
This means TableB is redistributed across the AMPs on the hash of B2, sorted by rowhash, and spooled for the join with TableA.
Is my understanding correct?
 
2.
In Sync Scan, while accessing a large table, if User1 is currently reading Datablock3 of TableA and User2 comes in to access TableA, User2 will start from Datablock3 along with User1. When that pass is over, User2 wraps back around to finish reading DB1 and DB2 of TableA.
This way only a portion of the data blocks is cached in memory, not the entire large table. Am I right?
 
3.
Now, in the case of a join, when two large tables are joined through a rowhash match scan, what does it mean when the EXPLAIN says "not cached in memory, but eligible for synchronized scanning"? Does it mean that the rowhash matching is happening on a sync-scan basis?
 
There may be some previous discussions in this forum on the same topic. However, I just wanted to clarify my understanding.
 
Thanks
Santanu
 


Many tables or many databases? - response (1) by VandeBergB


If you go the route of one database, based upon your description, you'll be forcing the DBA team to maintain object-level permissions, which is never a good idea. If you create a database for each team, your security paradigm is much simpler.
Creating a database for each team also gives them the flexibility to alter their own schema without impacting any of the other teams. Combining, aggregating and querying their "common" results will not cause performance degradation, with a few caveats: make sure the "output" table DDL is as close to identical as possible to reduce the amount of processing in the "combined" view, and ensure that every "output" table has the same primary index (NUPI or UPI doesn't matter). If you assign them all the same PI, you'll be designing in AMP-local work, which is where Teradata shines.
Set up an aggregate view in a reporting/analysis database separate from the team-specific databases, and grant SELECT on the base databases for each individual team.
Have Fun!
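
A sketch of the layout VandeBergB describes, with hypothetical database, table, column and role names (space sizes are placeholders too):

-- One database per team keeps grants simple.
CREATE DATABASE team_a_db FROM dbc AS PERM = 10000000000;
CREATE DATABASE team_b_db FROM dbc AS PERM = 10000000000;
CREATE DATABASE reporting_db FROM dbc AS PERM = 1000000000;

-- Identical DDL and the same PI in every team's output table keeps the combined work AMP-local.
CREATE TABLE team_a_db.model_output
( scenario_id INTEGER NOT NULL
, metric_cd   VARCHAR(30)
, metric_val  DECIMAL(18,4)
)
PRIMARY INDEX (scenario_id);

-- (team_b_db.model_output created with the same DDL)

REPLACE VIEW reporting_db.all_model_output AS
SELECT 'A' AS team_cd, t.* FROM team_a_db.model_output t
UNION ALL
SELECT 'B' AS team_cd, t.* FROM team_b_db.model_output t;

-- The view's database needs re-grantable rights on the base tables;
-- analysts then only need SELECT on the reporting database.
GRANT SELECT ON team_a_db TO reporting_db WITH GRANT OPTION;
GRANT SELECT ON team_b_db TO reporting_db WITH GRANT OPTION;
GRANT SELECT ON reporting_db TO analyst_role;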

Query on Skew-Sensitivity in the TD12 above Optimizer - forum topic by Santanu84


Hi All,
I was reading Carrie's blog on the skew sensitivity of the TD12 optimizer.
http://developer.teradata.com/database/articles/skew-sensitivity-in-the-optimizer-new-in-teradata-12
 
Here at one point she mentioned, 
"Instead of assuming each AMP will receive 1.1 millions rows during row redistribution, the Teradata 12 optimizer assumes a worst-case redistribution of 6 million rows for at least one AMP. It uses that worst-case number of rows to adjust the estimated cost of the row redistribution choice."
 
Assume that a TableA PI column is joined with a TableB non-PI column, so TableB will definitely be redistributed to all AMPs. Now, TableB is a very large table.
As per my understanding, in V2R6 the optimizer would have assumed that each AMP received an even share of those rows, while the actual processing on the AMPs could be skewed (the performance of the step is equivalent to that of the slowest AMP).
Does that no longer happen in the optimizer?
I mean, from TD12 onwards, will the optimizer be able to pinpoint the worst-case row redistribution and change its plan accordingly, so that the processing is similar on each AMP?
 
Please help me to clarify my understanding.
 
Thanks
Santanu
