
Large-scale update question - response (1) by rtefft

No responses, but I found a solution.  It appears that joined updates and joined deletes are not viable performance-wise unless both tables have the same primary index AND that primary index exactly matches the WHERE clause.  I dropped execution time from 4 hours to 20 minutes by doing this (a rough SQL sketch of the steps follows the list):

  1. Create a temp table containing all unique INV_GNBR values (the NUSI) I needed to process.
  2. Create a load table with all target records in the 700m-row table matching the temp NUSI values.  This pulled about 10m rows even though I only needed to update/delete about 600k rows.
  3. Apply the joined updates and deletes against the 10m-row load table.  About 600k rows touched.  Elapsed time was about 4 minutes.
  4. Use a joined delete on the 700m-row target table using the temp keys table.  They have the same PI and the WHERE clause was *only* the PI.  It took about 10 minutes to delete the 10m rows.
  5. Insert the ~10m rows (slightly fewer, due to the deletes applied in step 3) from the load table into the 700m-row target table.  Elapsed time was about 6 minutes.
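
Here is a rough sketch of what those five steps can look like in Teradata SQL. Every object and column name in it (SOURCE_OF_CHANGES, WORK_KEYS, LOAD_TBL, TGT_KEYS, CHANGE_SRC, ROWS_TO_DROP, TGT_700M, TGT_PI_COL, SOME_COL) is a placeholder for illustration only, not the actual schema, and it assumes the target's PI values identify exactly the rows pulled into the load table.

  /* Step 1: volatile table of the distinct INV_GNBR (NUSI) values to process.
     SOURCE_OF_CHANGES is a placeholder for whatever drives the change set. */
  CREATE VOLATILE TABLE WORK_KEYS AS (
      SELECT DISTINCT INV_GNBR
      FROM   SOURCE_OF_CHANGES
  ) WITH DATA
  PRIMARY INDEX (INV_GNBR)
  ON COMMIT PRESERVE ROWS;

  /* Step 2: load table holding every target row matching those NUSI values
     (~10m rows), created with the SAME primary index as the 700m-row target. */
  CREATE TABLE LOAD_TBL AS (
      SELECT t.*
      FROM   TGT_700M t
      JOIN   WORK_KEYS k
        ON   t.INV_GNBR = k.INV_GNBR
  ) WITH DATA
  PRIMARY INDEX (TGT_PI_COL);

  /* Keys table for the step-4 delete: the target-PI values of every pulled
     row, captured BEFORE the step-3 deletes shrink LOAD_TBL. */
  CREATE VOLATILE TABLE TGT_KEYS AS (
      SELECT DISTINCT TGT_PI_COL
      FROM   LOAD_TBL
  ) WITH DATA
  PRIMARY INDEX (TGT_PI_COL)
  ON COMMIT PRESERVE ROWS;

  /* Step 3: joined updates/deletes against the small load table only
     (~600k rows touched). CHANGE_SRC and ROWS_TO_DROP are placeholders. */
  UPDATE l
  FROM   LOAD_TBL l, CHANGE_SRC c
  SET    SOME_COL = c.SOME_COL
  WHERE  l.INV_GNBR = c.INV_GNBR;

  DELETE FROM LOAD_TBL
  WHERE  INV_GNBR IN (SELECT INV_GNBR FROM ROWS_TO_DROP);

  /* Step 4: delete the pulled rows from the 700m-row target, driven only by
     the shared PI. */
  DELETE FROM TGT_700M
  WHERE  TGT_PI_COL IN (SELECT TGT_PI_COL FROM TGT_KEYS);

  /* Step 5: put the surviving rows (slightly under 10m) back. */
  INSERT INTO TGT_700M
  SELECT * FROM LOAD_TBL;

  DROP TABLE LOAD_TBL;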

In the end, I replaced 2 SQL statements (Update and Delete) with 4 pages of formatted code, but I got the performance I needed.


Viewing all articles
Browse latest Browse all 27759

Trending Articles



<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>