DB2 JDBC connections not showing in SQL Editor Connection Profile: dropdown - response (7) by fgrimmer
Madhu, what database are you trying to connect to? Are you using Teradata Studio or Teradata Studio Express?
SQL Assistant key not found error - response (6) by sidhup
I'm facing the same error. I have installed the latest version, 15.0.0.0.
SQLA Version: 15.0.0.0
System.ArgumentException: Key not found: 'DBTree'
Parameter name: key
at Infragistics.Shared.KeyedSubObjectsCollectionBase.GetItem(String key)
at Infragistics.Win.UltraWinDock.DockableControlPanesCollection.get_Item(String key)
at Teradata.SQLA.MainFrm.SetConnectedState(ConnectInfo cn) in v:\cm.client.ttu150\tdcli\qman\sqla\MainFrm.vb:line 1381
at Teradata.SQLA.MainFrm.ConnectMenu_Click(ConnectInfo cn) in v:\cm.client.ttu150\tdcli\qman\sqla\MainFrm.vb:line 1348
at Teradata.SQLA.MainFrm.ToolbarMgr_ToolClick(Object sender, ToolClickEventArgs e) in v:\cm.client.ttu150\tdcli\qman\sqla\MainFrm.vb:line 918
SQL Assistant key not found error - response (7) by sidhupolimetla
Hi there,
I'm having the same issue. I have updated to the latest version but am still facing the same error.
SQLA Version: 15.0.0.0
System.ArgumentException: Key not found: 'DBTree'
Parameter name: key
at Infragistics.Shared.KeyedSubObjectsCollectionBase.GetItem(String key)
at Infragistics.Win.UltraWinDock.DockableControlPanesCollection.get_Item(String key)
at Teradata.SQLA.MainFrm.SetConnectedState(ConnectInfo cn) in v:\cm.client.ttu150\tdcli\qman\sqla\MainFrm.vb:line 1381
at Teradata.SQLA.MainFrm.ConnectMenu_Click(ConnectInfo cn) in v:\cm.client.ttu150\tdcli\qman\sqla\MainFrm.vb:line 1348
at Teradata.SQLA.MainFrm.ToolbarMgr_ToolClick(Object sender, ToolClickEventArgs e) in v:\cm.client.ttu150\tdcli\qman\sqla\MainFrm.vb:line 918
Teradata Studio 15.10.01.01 on Mac OS X -- Hangs - response (5) by astocks
Oh my....sorry I missed that. Thanks very much.
DDL for entire database - response (1) by srivigneshkn
1. The following statements return the DDL (request text) for views and tables.
Select Requesttext from dbc.tables where tablekind='V'; -- View.
Select Requesttext from dbc.tables where tablekind='T'; -- Table.
2. However, if the DDL was altered after the CREATE statement, this only returns the most recent DDL text that was executed.
3. An alternative is to generate SHOW TABLE and SHOW VIEW statements and run them through BTEQ, looping through each statement and exporting each command's output to a file.
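For step 3, a minimal sketch of generating the SHOW statements; the database name 'MyDatabase' is only a placeholder, not from the original question:
-- Build one SHOW statement per object; spool this result to a file and run that file
-- in BTEQ, using .EXPORT to capture each object's full DDL.
SELECT 'SHOW TABLE ' || TRIM(databasename) || '.' || TRIM(tablename) || ';' (TITLE '')
FROM dbc.tables
WHERE databasename = 'MyDatabase' AND tablekind = 'T'
UNION ALL
SELECT 'SHOW VIEW ' || TRIM(databasename) || '.' || TRIM(tablename) || ';' (TITLE '')
FROM dbc.tables
WHERE databasename = 'MyDatabase' AND tablekind = 'V';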
Date value null - response (6) by ToddAWalter
To add a bit of 'why' to Fred's excellent answer...
CAST to character does not result in '?' for NULL values. Any function, arithmetic, etc. on a NULL value results in a NULL value, so the result of the CAST above is NULL, not '?'. The comparison then evaluates to unknown (treated as not true) because NULL cannot be equal to anything.
COALESCE should work, but you would need to specify a timestamp value to substitute when the column is NULL, and you would have to COALESCE both sides of the comparison. I like Fred's solution better since it is more obvious what is being done.
Also, there is no need to CAST to CHAR in order to do the compare. Just let the system compare the timestamps; it costs extra to CAST things when it is not necessary.
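To illustrate both points with a hedged sketch (the table and column names here are made up, not from the original post):
-- Comparing the native timestamps and testing NULL explicitly (Fred's approach):
SELECT a.key_col
FROM   tab_a a
JOIN   tab_b b
  ON  (a.load_ts = b.load_ts)
   OR (a.load_ts IS NULL AND b.load_ts IS NULL);

-- COALESCE alternative: the substitute timestamp must appear on BOTH sides:
SELECT a.key_col
FROM   tab_a a
JOIN   tab_b b
  ON COALESCE(a.load_ts, TIMESTAMP '1900-01-01 00:00:00')
   = COALESCE(b.load_ts, TIMESTAMP '1900-01-01 00:00:00');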
Need help to implement the logic - response (11) by srivigneshkn
Query:
select a.tt_id,b.priority_id as StartPriority,a.priority_id as EndPriority from priority a
inner join priority b on (a.tt_id=b.tt_id and a.priority_id>b.priority_id) order by a.tt_id
O/P:
TT_Id StartPriority EndPriority
------- -------------- --------------
1201 1 2
1203 1 3
1204 2 3
Regards,
Srivignesh KN
Generate a Alphanumeric ID - response (3) by srivigneshkn
I was able to achieve this with the following statement:
select 'OFFR'||insertdt||trim(row_number() over (order by 1 desc)) from Tablename;
1. 'OFFR' is the static character string.
2. insertdt is the date column from your table.
3. ROW_NUMBER provides the sequential (identity-style) number.
This would give you the desired output.
If you want additional leading zeros, you can pad the ROW_NUMBER value with zeros as well.
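For example, a hedged sketch of the zero-padding; the six-digit width is an assumption:
-- FORMAT '9(6)' zero-fills the sequence number before concatenation
-- (on newer releases LPAD(TRIM(...), 6, '0') works as well).
select 'OFFR' || insertdt ||
       cast(cast(row_number() over (order by 1 desc) as format '9(6)') as char(6))
from Tablename;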
Regards,
Srivignesh KN
Convert a Field - response (1) by srivigneshkn
1. This can be achieved with the OREPLACE function in Teradata, the equivalent of the Oracle REPLACE function.
Query: select oreplace(colname,' ','x') from Table; -- (' ' is a single space)
Output:
1xxx12xxx1x1
xxxxxxxxx11x
However, if OREPLACE is not accessible in your Teradata environment, you would need to use SUBSTR functions.
2. Alternative solution, without using the OREPLACE function:
Query :
select case when substr(eid,1,1)=' ' then 'x' else substr(eid,1,1) end ||
case when substr(eid,2,1)=' ' then 'x' else substr(eid,2,1) end ||
case when substr(eid,3,1)=' ' then 'x' else substr(eid,3,1) end ||
case when substr(eid,4,1)=' ' then 'x' else substr(eid,4,1) end ||
case when substr(eid,5,1)=' ' then 'x' else substr(eid,5,1) end ||
case when substr(eid,6,1)=' ' then 'x' else substr(eid,6,1) end ||
case when substr(eid,7,1)=' ' then 'x' else substr(eid,7,1) end ||
case when substr(eid,8,1)=' ' then 'x' else substr(eid,8,1) end ||
case when substr(eid,9,1)=' ' then 'x' else substr(eid,9,1) end ||
case when substr(eid,10,1)=' ' then 'x' else substr(eid,10,1) end ||
case when substr(eid,11,1)=' ' then 'x' else substr(eid,11,1) end ||
case when substr(eid,12,1)=' ' then 'x' else substr(eid,12,1) end
as replacedstr from Table;
Output :
1xxx12xxx1x1
xxxxxxxxx11x
Regards,
Srivignesh KN
Teradata Express 15 VMware connection issues - response (1) by jking108
Hey all, I think I fixed the problem. Within the Teradata Express VM, I opened YaST2 and used the network setup method. I then edited my VMware single-port adapter, which enabled DHCP. If you choose the automatic address setup (via DHCP), it should configure and give you a new IP address (you then need to run "ifconfig" to check the new IP). I was then able to use this IP address to connect to my VM from the host. Now BTEQ and SQLA (on my host machine) can log on to the guest VM database.
I also realize VMnet0 is your bridged connection; however, my host isn't showing a VMnet0 at all... I wonder if this is an issue, so I'm investigating it further. Please comment if you know why.
I'm still unable to connect to the internet, but as long as I can connect to my VM dbc, it'll suffice. Please comment if you have any other ideas or options that might help.
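For anyone following the same steps, a minimal BTEQ smoke test from the host looks roughly like this; the IP address is only a placeholder for whatever ifconfig reports inside the VM:
.LOGON 192.168.32.128/dbc
-- enter the password at the prompt, then run any quick query to confirm the session:
SELECT InfoKey, InfoData FROM dbc.dbcinfo;
.LOGOFF
.QUIT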
DB2 JDBC connections not showing in SQL Editor Connection Profile: dropdown - response (8) by madhuB
Teradata Studio Express
Convert a Field - response (2) by kparadise
Thank you,
This function actually only supplies the 'x' output up to the last real digit. It's no problem, but the output is more like:
'xxxxx1'
'1xx12xx1'
Rather than:
'xxxxxx1xxxxx'
'1xx12xx1xxxx'
Any idea?
fast export and mload with unicode column type and value - response (29) by akd2k6
Sorry Steve, I do not have the data to share as I have some restrictions.
If, in a table, one column out of five is declared with a Unicode character set, then in the template job we have to use
USING CHARACTER SET UTF-8 or USING CHARACTER SET UTF-16
If we do not use any such statement, the job runs in ASCII mode.
Will that Unicode field then be loaded properly in ASCII mode? For me it's getting rejected.
Now if we use UTF8 or UTF16, the size goes beyond 64K because the template job treats every field as Unicode and multiplies the sizes by 2 or 3.
I think it's a bug and some improvement is needed here to handle the character set issue,
since most of our tables have a combination of Unicode and Latin (ASCII) character sets.
Getting an error when trying to connect using Teradata Hadoop Connector for Sqoop - forum topic by kbollam
Hi,
I'm trying to do a Sqoop import from Teradata into the local file system and I get the following error.
I tried using the following import statements:
import -fs file://// --connect
import -fs local --connect
Warning: /usr/hdp/2.3.2.0-2950/accumulo does not exist! Accumulo imports will fail. Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/02/23 14:24:19 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.3.2.0-2950
16/02/23 14:24:19 WARN fs.FileSystem: "local" is a deprecated filesystem name. Use "file:///" instead.
16/02/23 14:24:19 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/02/23 14:24:19 INFO manager.SqlManager: Using default fetchSize of 1000
16/02/23 14:24:19 INFO tool.CodeGenTool: The connection manager declares that it self manages mapping between records & fields and rows & columns. No class will will be generated.
16/02/23 14:24:19 INFO teradata.TeradataConnManager: Importing from Teradata Table:PDCR_INFO_LOG
16/02/23 14:24:19 INFO teradata.TeradataSqoopImportHelper: Setting input file format in TeradataConfiguration to textfile
16/02/23 14:24:19 INFO teradata.TeradataSqoopImportHelper: Table name to import PDCR_INFO_LOG
16/02/23 14:24:19 INFO teradata.TeradataSqoopImportHelper: Setting job type in TeradataConfiguration to hdfs
16/02/23 14:24:19 INFO teradata.TeradataSqoopImportHelper: Setting input file format in TeradataConfiguration to textfile
16/02/23 14:24:19 INFO teradata.TeradataSqoopImportHelper: Setting number of mappers in TeradataConfiguration to 4
16/02/23 14:24:19 INFO teradata.TeradataSqoopImportHelper: Setting input batch size in TeradataConfiguration to 1000
16/02/23 14:24:19 INFO teradata.TeradataSqoopImportHelper: Setting input separator in TeradataConfiguration to \u0021
16/02/23 14:24:19 INFO teradata.TeradataSqoopImportHelper: Setting source table to : PDCR_INFO_LOG
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.2.0-2950/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/02/23 14:24:19 INFO common.ConnectorPlugin: load plugins in jar:file:/usr/hdp/2.3.2.0-2950/sqoop/lib/teradata-connector-1.4.1-hadoop2.jar!/teradata.connector.plugins.xml
16/02/23 14:24:19 INFO processor.TeradataInputProcessor: input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor starts at: 1456259059883
16/02/23 14:24:20 INFO utils.TeradataUtils: the input database product is Teradata
16/02/23 14:24:20 INFO utils.TeradataUtils: the input database version is 14.10
16/02/23 14:24:20 INFO utils.TeradataUtils: the jdbc driver version is 15.0
16/02/23 14:24:22 INFO processor.TeradataInputProcessor: the teradata connector for hadoop version is: 1.4.1
16/02/23 14:24:22 INFO processor.TeradataInputProcessor: input jdbc properties are jdbc:teradata://172.19.7.22/database=BIGDATA_POC_WORK_TABLES
16/02/23 14:24:23 INFO processor.TeradataInputProcessor: the number of mappers are 4
16/02/23 14:24:23 INFO processor.TeradataInputProcessor: input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor ends at: 1456259063699
16/02/23 14:24:23 INFO processor.TeradataInputProcessor: the total elapsed time of input preprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor is: 3s
16/02/23 14:24:24 INFO impl.TimelineClientImpl: Timeline service address: http://xxxx.xxxx.com:8188/ws/v1/timeline/
16/02/23 14:24:24 INFO client.RMProxy: Connecting to ResourceManager at xxx.xxxx.com/172.19.26.26:8050
16/02/23 14:24:24 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor starts at: 1456259064288
16/02/23 14:24:24 INFO processor.TeradataInputProcessor: input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor ends at: 1456259064288
16/02/23 14:24:24 INFO processor.TeradataInputProcessor: the total elapsed time of input postprocessor com.teradata.connector.teradata.processor.TeradataSplitByHashProcessor is: 0s
16/02/23 14:24:24 ERROR teradata.TeradataSqoopImportHelper: Exception running Teradata import job com.teradata.connector.common.exception.ConnectorException: java.io.FileNotFoundException: File file:/hdp/apps/2.3.2.0-2950/mapreduce/mapreduce.tar.gz does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:822) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:599) at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:125) at org.apache.hadoop.fs.AbstractFileSystem.resolvePath(AbstractFileSystem.java:467) at org.apache.hadoop.fs.FilterFs.resolvePath(FilterFs.java:157) at org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2193) at org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2189) at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) at org.apache.hadoop.fs.FileContext.resolve(FileContext.java:2189) at org.apache.hadoop.fs.FileContext.resolvePath(FileContext.java:601) at org.apache.hadoop.mapreduce.JobSubmitter.addMRFrameworkToDistributedCache(JobSubmitter.java:457) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) at
java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308) at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:134) at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:56) at org.apache.sqoop.teradata.TeradataSqoopImportHelper.runJob(TeradataSqoopImportHelper.java:370) at org.apache.sqoop.teradata.TeradataConnManager.importTable(TeradataConnManager.java:504) at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497) at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605) at org.apache.sqoop.Sqoop.run(Sqoop.java:148) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235) at org.apache.sqoop.Sqoop.main(Sqoop.java:244) at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:140) at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:56) at org.apache.sqoop.teradata.TeradataSqoopImportHelper.runJob(TeradataSqoopImportHelper.java:370) at org.apache.sqoop.teradata.TeradataConnManager.importTable(TeradataConnManager.java:504) at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497) at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605) at org.apache.sqoop.Sqoop.run(Sqoop.java:148) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235) at org.apache.sqoop.Sqoop.main(Sqoop.java:244) 16/02/23 14:24:24 INFO teradata.TeradataSqoopImportHelper: Teradata import job completed with exit code 1 16/02/23 14:24:24 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Exception running Teradata import job at org.apache.sqoop.teradata.TeradataSqoopImportHelper.runJob(TeradataSqoopImportHelper.java:373) at org.apache.sqoop.teradata.TeradataConnManager.importTable(TeradataConnManager.java:504) at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497) at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605) at org.apache.sqoop.Sqoop.run(Sqoop.java:148) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235) at org.apache.sqoop.Sqoop.main(Sqoop.java:244) Caused by: com.teradata.connector.common.exception.ConnectorException: java.io.FileNotFoundException: File file:/hdp/apps/2.3.2.0-2950/mapreduce/mapreduce.tar.gz does not exist at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:609) at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:822) at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:599) at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:125) at org.apache.hadoop.fs.AbstractFileSystem.resolvePath(AbstractFileSystem.java:467) at org.apache.hadoop.fs.FilterFs.resolvePath(FilterFs.java:157) at 
org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2193) at org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2189) at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) at org.apache.hadoop.fs.FileContext.resolve(FileContext.java:2189) at org.apache.hadoop.fs.FileContext.resolvePath(FileContext.java:601) at org.apache.hadoop.mapreduce.JobSubmitter.addMRFrameworkToDistributedCache(JobSubmitter.java:457) at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290) at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287) at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308) at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:134) at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:56) at org.apache.sqoop.teradata.TeradataSqoopImportHelper.runJob(TeradataSqoopImportHelper.java:370) at org.apache.sqoop.teradata.TeradataConnManager.importTable(TeradataConnManager.java:504) at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497) at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605) at org.apache.sqoop.Sqoop.run(Sqoop.java:148) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:184) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:226) at org.apache.sqoop.Sqoop.runTool(Sqoop.java:235) at org.apache.sqoop.Sqoop.main(Sqoop.java:244) at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:140) at com.teradata.connector.common.tool.ConnectorJobRunner.runJob(ConnectorJobRunner.java:56) at org.apache.sqoop.teradata.TeradataSqoopImportHelper.runJob(TeradataSqoopImportHelper.java:370) ... 9 more
fast export and mload with unicode column type and value - response (30) by feinholz
When using a client session character set of ASCII, the Unicode field will only be loaded properly if the data for that column consists of single-byte characters.
If the data contains multi-byte characters, then you have to use UTF8 or UTF16. And when you use those client session character sets, the character data for all character fields could potentially be 1-3 bytes per character (UTF8), and will be expected to be 2 bytes per character for UTF16. That is the behavior for all fields.
Of course, all single byte ASCII characters will continue to be single byte with a UTF8 client session character set, but the schema must be adjusted (tripled) because TPT has no knowledge of the data and must be prepared for a character of any size (1 to 3 bytes).
DB2 JDBC connections not showing in SQL Editor Connection Profile: dropdown - response (9) by fgrimmer
What database are you connecting to? Are you running Studio Express inside a VM or from your desktop? What is the error that is occurring when you try to connect from the Data Source Explorer?
fast export and mload with unicode column type and value - response (31) by akd2k6
I did not get the last statement: "Of course, all single byte ASCII characters will continue to be single byte with a UTF8 client session character set, but the schema must be adjusted (tripled) because TPT has no knowledge of the data and must be prepared for a character of any size (1 to 3 bytes)."
In the table DDL below, col2 holds special Unicode characters and they are already loaded into the table. Will the template job be able to copy the table to another table with the same DDL, given the character set I have to use? In my case it's failing with these errors:
USING CHARACTER SET UTF-8
$UPDATE: TPT10508: RDBMS error 3798: A column or character expression is larger than the max size.
USING CHARACTER SET UTF-16
$EXPORT: TPT10508: RDBMS error 9804: Response Row size or Constant Row size overflow.
Is there any way I can handle this? The data is already loaded into one table, which Teradata allowed, and I need to copy it into another table (dev/COB).
CREATE SET TABLE database.table1 ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
col1 DECIMAL(18,0) NOT NULL,
col2 VARCHAR(31000) CHARACTER SET UNICODE NOT CASESPECIFIC,
col3 DECIMAL(18,0),
col4 VARCHAR(1000) NOT CASESPECIFIC
)
PRIMARY INDEX xxx ( col1 );
fast export and mload with unicode column type and value - response (32) by feinholz
UTF8 characters can be 1-, 2-, or 3-bytes in length.
Thus, even with a character set of UTF8, a 7-bit ASCII character is still a single-byte UTF8 character.
However, our load/unload products need to account for the largest size possible for data for each column.
And thus, if you use a client session character set of UTF8, the CHAR field sizes must be tripled.
In the table definition you provided, the CHAR field sizes are in terms of "characters". The data, however (and the TPT schema), needs to be specified in terms of "bytes".
For the table definition you provided, if you specify a client session character set of UTF8, then your schema will be too large for the data to be loaded into Teradata.
And that is the error you are getting.
Do you happen to know what the "export width" setting is on your server?
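As a rough sizing sketch for the table posted above (approximate numbers; the roughly 64,000-byte column/row ceiling is the relevant limit):
col2 VARCHAR(31000) CHARACTER SET UNICODE
  UTF8 session:  31000 characters x 3 bytes = 93,000 bytes -- a single field larger than the maximum, matching error 3798
  UTF16 session: 31000 characters x 2 bytes = 62,000 bytes -- col2 alone nearly fills the row; adding col4 (1000 x 2 = 2,000 bytes)
                 and the DECIMAL columns plausibly overflows the response row, matching error 9804
This is only an approximation of why both session character sets fail for this particular DDL; the exact limits depend on the release and on the export width setting Steve asked about.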
Recursive Query to List down the Lineage for a Particular Table with Immediate Parents - forum topic by himanshugaurav
Hi All
I have a requirement to list the lineage for a particular table, as below:
Source --> Target1
Target1 --> Target2
Target1 --> Target3
Target2 --> Target4
Here each target becomes the source for the next level, and so on. I used a recursive query, but it gives me the original source as the parent instead of the immediate parent, and the data looks as below:
Source --> Target1
Source --> Target2
Source --> Target3
Source --> Target4
As we all know, there can be multiple views on a table, and each of these views can be exposed to different applications via new views on top of them.
I am trying to list this lineage.
If anyone has already implemented this, please share your inputs.
Regards
Himanshu
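For what it's worth, a hedged sketch of one way to carry the immediate parent at every level; it assumes a hypothetical lineage table of (parent_tbl, child_tbl) pairs rather than the real DBC metadata, and a placeholder starting table named 'Source':
WITH RECURSIVE lineage_walk (parent_tbl, child_tbl, depth) AS
(
  -- seed: direct children of the starting table
  SELECT parent_tbl, child_tbl, 1
  FROM   lineage
  WHERE  parent_tbl = 'Source'
  UNION ALL
  -- recursive step: join on the previous level's child, but project THIS row's
  -- parent_tbl, so every output row keeps its immediate parent rather than the root
  SELECT l.parent_tbl, l.child_tbl, w.depth + 1
  FROM   lineage l
  JOIN   lineage_walk w
    ON   l.parent_tbl = w.child_tbl
  WHERE  w.depth < 20   -- guard against cycles in the lineage graph
)
SELECT parent_tbl, child_tbl, depth
FROM   lineage_walk
ORDER  BY depth, parent_tbl, child_tbl;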
fast export and mload with unicode column type and value - response (33) by akd2k6
Hi Steve,
In that case, how is the data loaded into the production table using ETL tools like DataStage or Ab Initio? Shouldn't they face a similar issue?
In my case I can see the data is loaded using the DataStage tool.