Channel: Teradata Forums - All forums

Difference between Create table Statements - response (5) by KS42982

I am afraid I do not get a nearly identical explain in the two cases. Here is an example. I am inserting data from table 1 into table 2. Table 1 is a multiset table with around 90 million records. Take a look at the explain for CREATE TABLE ... AS (SELECT ...), and pay attention to step 4:

  1) First, we lock a distinct DATABASE1."pseudo table" for read on a RowHash to prevent global deadlock for DATABASE1.TABLE1.
  2) Next, we lock DATABASE1.TABLE2 for exclusive use, and we lock DATABASE1.TABLE1 for read.
  3) We create the table header.
  4) We do an all-AMPs RETRIEVE step from DATABASE1.TABLE1 by way of an all-rows scan with no residual conditions into Spool 1 (all_amps), which is redistributed by the hash code of (DATABASE1.TABLE1.COLUMN1) to all AMPs. Then we do a SORT to order Spool 1 by row hash. The input table will not be cached in memory, but it is eligible for synchronized scanning. The result spool file will not be cached in memory. The size of Spool 1 is estimated with high confidence to be 49,607,857 rows (4,563,922,844 bytes). The estimated time for this step is 2 minutes and 6 seconds.
  5) We do an all-AMPs MERGE into DATABASE1.TABLE2 from Spool 1 (Last Use). The size is estimated with high confidence to be 49,607,857 rows. The estimated time for this step is 1 second.
  6) We lock a distinct DBC."pseudo table" for read on a RowHash for deadlock prevention, and we lock a distinct DBC."pseudo table" for write on a RowHash for deadlock prevention (this write lock is repeated three times).
  7) We lock DBC.Indexes for write on a RowHash, we lock DBC.DBase for read on a RowHash, we lock DBC.TVFields for write on a RowHash, we lock DBC.TVM for write on a RowHash, and we lock DBC.AccessRights for write on a RowHash.
  8) We execute the following steps in parallel.
       1) We do a single-AMP ABORT test from DBC.DBase by way of the unique primary index.
       2) We do a single-AMP ABORT test from DBC.TVM by way of the unique primary index.
       3) through 32) We do an INSERT into DBC.TVFields (no lock required); this identical step appears 30 times, once per column of the new table.
      33) We do an INSERT into DBC.TVM (no lock required).
      34) We INSERT default rights to DBC.AccessRights for DATABASE1.TABLE2.
  9) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
  -> No rows are returned to the user as the result of statement 1.

Now take a look at the explain for CREATE TABLE ... followed by INSERT INTO ... SELECT:

  1) First, we lock a distinct DATABASE1."pseudo table" for write on a RowHash to prevent global deadlock for DATABASE1.TABLE2.
  2) Next, we lock a distinct DATABASE1."pseudo table" for read on a RowHash to prevent global deadlock for DATABASE1.TABLE1.
  3) We lock DATABASE1.TABLE2 for write, and we lock DATABASE1.TABLE1 for read.
  4) We do an all-AMPs MERGE into DATABASE1.TABLE2 from DATABASE1.TABLE1. The size is estimated with no confidence to be 9,361,440 rows. The estimated time for this step is 1 second.
  5) We spoil the parser's dictionary cache for the table.
  6) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
  -> No rows are returned to the user as the result of statement 1.
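For reference, the two forms being compared would look roughly like this. This is a sketch only; the table, database, and column names (DATABASE1, TABLE1, TABLE2, COLUMN1) mirror the masked names in the explains above and are not the poster's actual objects:

```sql
-- Form 1: CREATE TABLE ... AS (SELECT ...) WITH DATA.
-- Definition and data copy happen in one request, so the explain
-- includes both the spool/redistribute/sort work and all the
-- dictionary-table inserts (DBC.TVFields, DBC.TVM, etc.).
CREATE MULTISET TABLE DATABASE1.TABLE2 AS
  (SELECT * FROM DATABASE1.TABLE1)
WITH DATA
PRIMARY INDEX (COLUMN1);

-- Form 2: create the empty table first, then INSERT ... SELECT.
-- The dictionary work is done by the CREATE, so the explain of the
-- INSERT shows only a direct all-AMPs MERGE from TABLE1 to TABLE2.
CREATE MULTISET TABLE DATABASE1.TABLE2 AS DATABASE1.TABLE1
WITH NO DATA;

INSERT INTO DATABASE1.TABLE2
SELECT * FROM DATABASE1.TABLE1;
```

Note the row-count estimates differ as well: the CTAS explain estimates with high confidence (the target does not exist yet, so the optimizer uses the source's statistics), while the INSERT ... SELECT here was estimated with no confidence, which suggests statistics had not been collected for that plan.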

