Thursday, 9 November 2017

Oracle Golden Gate Interview Questions

Q 1: What is Oracle GoldenGate?
A 1: Oracle GoldenGate 12c is software that enables the continuous, real-time capture, routing, transformation, and delivery of transactional data across heterogeneous environments. It transfers committed data changes from an Oracle or non-Oracle source database to Oracle or non-Oracle target databases with low latency.
Q 2: What is Oracle GoldenGate Delivery module?
A 2: The Oracle GoldenGate Delivery module takes the data transactions from the latest Trail File and applies that data to the target using the native SQL for that relational database management system—delivery can be made to any ODBC compliant database. The Delivery module applies each transaction in the same order as it was committed and within the same transactional context as at the source, to ensure consistency and referential integrity at the target.
One key point to note is that the changes can be delivered to Oracle or Non-Oracle databases.
Q 3: What is the significance of Oracle GoldenGate Manager?
A 3: To give users control over Oracle GoldenGate processes, Manager provides a command line interface to perform a variety of administrative, housekeeping, and reporting activities, including:
  • Setting parameters to configure and fine-tune Oracle GoldenGate processes
  • Starting, stopping, and monitoring capture and delivery modules
  • Critical, informational event, and threshold reporting
  • Resource management
  • Trail File management
Q 4: What are the Key Benefits of GoldenGate?
A 4:
A) Transforming and Mapping the data between two databases without any need of middleware software
— Oracle GoldenGate can flexibly accommodate transformations and mappings within either the Capture or Delivery modules—no middle tier server is needed. The product supports table and row filtering based on user-defined criteria.
B) Flexible topology support
Oracle GoldenGate’s architecture allows customers to support a variety of topologies, including one source to one target, one-to-many, many-to-one, many-to-many, cascading, and bidirectional configurations.
C) Bidirectional configuration support
Oracle GoldenGate enables both active-passive and active-active solutions for maximum system availability.
D) Routing and Compression
Oracle GoldenGate utilizes TCP/IP for sending data, so no geographical distance constraints are imposed between source and target systems. In addition, Oracle GoldenGate can apply additional compression to the data as it is routed.
E) Data Encryption
Data encryption ensures secure, confidential data transmissions.
F) Excellent Support for Operational Reporting Solutions
  • Organizations can offload reporting operations to synchronized systems to minimize the overhead on the production environment.
  • Oracle GoldenGate does not limit the type of operations that can be executed on the reporting instance. Reporting needs that include the creation of temporary tables or other processes to prepare data for reporting can be run on the reporting instance without impacting primary business systems.
  • Oracle GoldenGate can be combined with Oracle Data Integrator for complex data transformation.
  • By using Oracle GoldenGate the option exists to include bi-directional replication for key system tables or for entire sets of data. Oracle GoldenGate is flexible to fit the needs of both the application being used and business users.
  • Oracle GoldenGate is tested and certified on major Oracle applications including E-Business Suite, JD Edwards, PeopleSoft and Siebel CRM.
  • Oracle GoldenGate reads changed data from database transaction logs rather than from the database tables themselves. Because it requires minimal modifications to the applications, this solution provides organizations with a compact, non-intrusive, and easily configured method for providing access to real-time data for reporting purposes
Just for the sake of distinguishing between operational and analytical reporting, below are basic definitions.
Operational reports are used for day-to-day business processes and decision making. They are typically based on an active data set that contains the last quarter or two of transactions. The reports contain detailed information for the active accounts and are used by managerial and business staff to optimally steer daily operations. For operational reporting, the data source can be either E-Business Suite directly or, for larger implementations, an Operational Data Store.
On the other hand, analytics reports are based on larger data sets that usually include one to ten or more years of transactional records. Executive staff uses them for strategic decision making and intelligence gathering. Data is typically stored in a data warehouse and is aggregated in a way that enables efficient analysis.
Q 5: For reporting needs, as compared to the GoldenGate solution, how is old ETL-based bulk data movement less effective?
A 5: Some key areas where traditional ETL-based reporting solutions fall short are:
  • Batch windows: With ETL integration, the data transfer processes are executed during maintenance windows when the data sources are quiesced; this is necessary to ensure that data sources don’t change during data acquisition, which would create inconsistencies. For business-critical systems these batch windows are shrinking while the business operations approach 24/7. Further, handling ever-growing data volumes in these shrinking batch windows is becoming a major challenge for ETL users.
  • Overhead on the source: Source tables are queried and potentially large amounts of data are copied either to other tables for export or directly to flat files in the file system. The net effect is that the more data is extracted, the more resources are required to perform the work.
  • Reliability/recoverability: Because ETL tools are geared towards processing data in batches, they do not effectively maintain the transaction integrity of data in transit. If an ETL process is interrupted, the partially transferred data often cannot be applied.
Q 6: Why is it highly desirable that the tables you want to replicate have a primary key?
A 6: In simple words, GoldenGate requires a primary key to uniquely identify a record.
If no primary key exists on the source table, GoldenGate will create its own unique identifier by concatenating all the table columns together. This is certainly inefficient, as the volume of data that needs to be extracted from the redo logs increases considerably. In the normal scenario, when a table has a primary key, the GoldenGate process fetches only the primary key and the changed data (before and after images in the case of an update statement).
The GoldenGate process will also warn you when a primary key does not exist on the target table; you may receive the following warning in the GoldenGate error log:
WARNING OGG-xxxx No unique key is defined for table ‘TARGET_TABLE_NAME’. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
Having a primary key also ensures fast data lookup when the Replicat recreates and applies the DML statements against the target database. But keep in mind that it is not “mandatory” that a primary key be present on the table.
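Where no suitable key exists, KEYCOLS can be used to tell GoldenGate which columns to treat as a logical key. A minimal sketch of the parameter-file entries (table and column names here are hypothetical):

```
-- In the Extract parameter file (hypothetical table/columns):
TABLE hr.orders_src, KEYCOLS (order_id, order_line);

-- In the Replicat parameter file:
MAP hr.orders_src, TARGET hr.orders_tgt, KEYCOLS (order_id, order_line);
```

The columns chosen should together uniquely identify a row, otherwise updates and deletes may touch the wrong rows on the target.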
Q 7: Is it a MUST that the source database be in archivelog mode?
A 7: It is NOT a must that the source database be in archivelog mode, but for any serious, mission-critical GoldenGate system it is almost mandatory to have the source system in archivelog mode.
GoldenGate needs archivelog mode so it can mine the archived redo logs and prevent data loss after a fall-behind or outage in GoldenGate replication.
So for testing/demo purposes you can have the source database in noarchivelog mode, since for a demo you are not much bothered about data consistency or data loss. But if you are creating a GoldenGate setup for your company’s production system, make enabling archivelog mode on the source your foremost priority.
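Checking and enabling archivelog mode uses standard Oracle commands (note this requires a brief database restart):

```sql
-- Check the current log mode
SELECT log_mode FROM v$database;

-- Enable ARCHIVELOG mode (requires downtime)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```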
Q 8: Without going into details, explain high level steps of setting up GoldenGate.
A 8: Below are the key steps to install/configure the GoldenGate system.
  1. Download the software from the Oracle website and upload to server
  2. Unpack/Unzip the installation zip file
  3. Prepare source and target system
  4. Install the software on the source and target system (for 12c use OUI)
  5. Prepare the source database (some DB parameters need to be adjusted)
  6. Configure the Manager process on the source and target system
  7. Configure the Extract process on the source system
  8. Configure the data pump process on the source system
  9. Configure the Replicat process on the target system
  10. Start the Extract process
  11. Start the data pump process
  12. Start the Replicat process
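Steps 6 through 12 above map onto GGSCI commands roughly as follows; this is a sketch only, and the process names (ESRC01, PSRC01, RTGT01) and trail prefixes are illustrative:

```
-- On the source: configure and start Manager, Extract, and data pump
GGSCI> EDIT PARAMS MGR                      -- set PORT 7809 etc.
GGSCI> START MANAGER
GGSCI> ADD EXTRACT ESRC01, INTEGRATED TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT ESRC01
GGSCI> ADD EXTRACT PSRC01, EXTTRAILSOURCE ./dirdat/lt
GGSCI> ADD RMTTRAIL ./dirdat/rt, EXTRACT PSRC01
GGSCI> START EXTRACT ESRC01
GGSCI> START EXTRACT PSRC01

-- On the target: configure and start Manager and Replicat
GGSCI> START MANAGER
GGSCI> ADD REPLICAT RTGT01, EXTTRAIL ./dirdat/rt
GGSCI> START REPLICAT RTGT01
```

Each process also needs its .prm parameter file in dirprm before it will start.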
Q 9: When creating GoldenGate database user for database 12c, what special precaution you need to take?
A 9: You must grant the GoldenGate admin user access to all database containers on the source side so that GoldenGate can access the redo logs for all the databases (container and pluggable).
You must also grant the DBA role with the CONTAINER=ALL option:
SQL> GRANT DBA TO C##GOLDENADMIN CONTAINER=ALL;
Otherwise, you may hit errors such as:
OGG-06203 Failed to switch to catalog PDB1. OCI Error ORA-01031: insufficient privileges SQL ALTER SESSION SET CONTAINER=PDB1
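A hedged sketch of the full set of grants for a CDB-wide GoldenGate user (the user name is illustrative; DBMS_GOLDENGATE_AUTH is the Oracle-supplied package for granting GoldenGate capture/apply privileges):

```sql
-- Create a common user visible in all containers (name is illustrative)
CREATE USER c##goldenadmin IDENTIFIED BY password CONTAINER=ALL;
GRANT DBA TO c##goldenadmin CONTAINER=ALL;

-- Grant GoldenGate admin privileges across all containers
EXEC dbms_goldengate_auth.grant_admin_privilege('C##GOLDENADMIN', container=>'all');
```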
Q 10: What is the concept of GoldenGate Credential Store in GoldenGate 12c?
A 10:
You can store the GoldenGate user’s database logon credentials using the Credential Store. This way you do not have to store the password in clear text in the GoldenGate parameter files.
Simple commands to create a Credential Store are mentioned below:
GGSCI> CREATE WALLET
GGSCI> ADD CREDENTIALSTORE
GGSCI> ALTER CREDENTIALSTORE ADD USER ogg@ORCL PASSWORD ogg ALIAS aogg
GGSCI> INFO CREDENTIALSTORE
GGSCI> DBLOGIN USERIDALIAS aogg
Here ogg is the GoldenGate admin user, ORCL is the database SID, and aogg is the alias. The last command, “DBLOGIN USERIDALIAS aogg”, connects you to the database.
Q 11: Why do you need to increase value of UNDO_RETENTION parameter while setting up GoldenGate?
A 11: It is required to increase the UNDO_RETENTION value so that GoldenGate can maintain a read-consistent row image for a specific time or SCN to match the redo record. Please note that Oracle GoldenGate 12c uses Flashback Query to obtain values from the undo records for special database objects like user-defined types and nested tables. Increasing UNDO_RETENTION makes sure that GoldenGate is able to meet the goal of read consistency.
Oracle recommends that UNDO_RETENTION parameter should be set to 86400 seconds (24 hours).
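Following that recommendation, the change itself is a one-line ALTER SYSTEM (UNDO_RETENTION is a dynamic parameter, so no restart is needed):

```sql
-- Set undo retention to 24 hours as recommended
ALTER SYSTEM SET UNDO_RETENTION = 86400;
-- Verify
SHOW PARAMETER undo_retention
```

Make sure the undo tablespace is sized (or set to autoextend) so it can actually honor this retention.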
Q 12: Which all databases does GoldenGate 12c support?
A 12: For Oracle databases, GoldenGate 12c supports both 11g and 12c.
Q 13: What is Downstream capture mode of GoldenGate?
A 13: Traditionally, log mining of the source data happens on the source database side, but in downstream capture mode the Oracle Data Guard redo transport mechanism is used. This enables continuous log shipping to the target database’s standby redo logs in real time. The log mining work to fetch DDL/DML transactions happens on the target side.
One clear benefit of this configuration is that if a failure occurs at Site A, the database log mining server at Site B will continue to process the redo log stream up to the point of failure.
Q 14: How do you take backup of GoldenGate?
A 14: You can back up your source/target databases easily using tools like Oracle Recovery Manager (RMAN), but to back up GoldenGate you will need to back up the GoldenGate home and the subdirectories that contain the trail files, checkpoint files, etc. Without these key files, GoldenGate will not be able to recover from the last checkpoint. This means that if you somehow lose all these key GoldenGate files, you will have no option but to perform a new initial load. RMAN simply does not have the capability to back up OS or non-database files.
So either keep all your GoldenGate-related files on some kind of SAN setup that gets backed up daily at the storage level, or use Unix shell commands in a cron job to take filesystem backups.
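As a sketch of the filesystem-backup approach, a cron-able script can simply tar up the GoldenGate home. The paths here are assumptions; for demonstration the script creates a throwaway stand-in GG home:

```shell
#!/bin/sh
# Sketch: back up the GoldenGate home so trail, checkpoint and parameter
# files travel together. GG_HOME is an assumption; point it at your real home.
GG_HOME=${GG_HOME:-/tmp/gg_home_demo}

# Demo only: create a stand-in GG home with the key subdirectories
mkdir -p "$GG_HOME/dirdat" "$GG_HOME/dirchk" "$GG_HOME/dirprm"
touch "$GG_HOME/dirchk/esrc01.cpe"   # a pretend Extract checkpoint file

# Archive the whole home in one shot
BACKUP="/tmp/gg_home_backup.tar.gz"
tar -czf "$BACKUP" -C "$(dirname "$GG_HOME")" "$(basename "$GG_HOME")"
echo "backup created: $BACKUP"
```

In production you would drop the demo mkdir/touch lines, add a date stamp to the archive name, and schedule the script via cron.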
Q 15: What is checkpoint table? In which capture mode it is used : classic or integrated?
Oracle GoldenGate extract and replicat processes perform checkpoint operations. Now in the event of some unexpected failure, the checkpoint file or database table ensures extract and replicat re-start from the point of failure and avoid re-capture and re-apply of transactions.
So, Checkpoint table enables the checkpoint to be included within Replicat’s transaction, ensuring complete recovery from all failure scenarios.
You use the GGSCI add checkpointtable command to create the checkpoint table.
Checkpoint table is used for Classic capture/replicate mode.
For Integrated mode, the Checkpoint table is not required and should not be created.
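In a classic configuration the checkpoint table is created once on the target and then referenced when the Replicat is added (schema, table, and group names below are illustrative):

```
GGSCI> DBLOGIN USERIDALIAS aogg
GGSCI> ADD CHECKPOINTTABLE ggadmin.gg_checkpoint
GGSCI> ADD REPLICAT RTGT01, EXTTRAIL ./dirdat/rt, CHECKPOINTTABLE ggadmin.gg_checkpoint
```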
Q 16: Why is it important to specify DISCARDFILE in the Extract/Replicat parameter files?
A 16: The DISCARDFILE parameter, when defined in the parameter files, captures rejected rows along with the associated cause of failure. This proves very important when you are troubleshooting. In the parameter file you define the size and location of the discard file, and the GoldenGate engine will populate this file when failures occur.
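A typical Replicat parameter-file entry looks like this (the path, group name, and size are illustrative):

```
-- Capture rejected rows plus the reason for failure
DISCARDFILE ./dirrpt/rtgt01.dsc, APPEND, MEGABYTES 100
```

APPEND keeps history across restarts; without it the file is purged each time the process starts.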
Q 17: Your database is already generating archivelogs. Why do you need to generate trail files for GoldenGate? Why can’t you just capture changes at source and apply to target without wasting time in writing changes first to Trail files?
A 17: One of the key aspects of GoldenGate (besides the speed of data movement) is that it maintains data integrity even when a process or the network fails.
Trail files are used by the Extract and Replicat processes, and their sole purpose is to maintain data persistence. So even if the network between source and target fails for some reason, the GoldenGate processes will restart from exactly the same point once the network is restored. No data loss occurs and no re-instantiation of tables is required. This is very critical for a production system.
Also note that GoldenGate writes only the committed transactions to trail files in large blocks which minimizes the I/O.
Q 18: What is Downstream mining database topology in context of GoldenGate? 
A 18: Just like Oracle Streams, where both the capture and apply processes can be configured on the target system, Oracle has provided this capability in GoldenGate versions higher than 11gR2. This is also known as source database offloading. In this configuration, the source database sends its redo stream data to the target system. On the target side, the Extract process uses the LogMiner server on the “downstream” target database to mine the logs, and the captured changes are then applied to the target database.
The redo stream data transferred to the target database contains both online and archive logs. This is similar to a Data Guard setup (also used by Oracle Streams in downstream capture mode), which means that to use this setup you will modify LOG_ARCHIVE_DEST_n/LOG_ARCHIVE_DEST_STATE_n on the source, create standby redo logs on the target database, etc.
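At a high level, the redo-transport side of a downstream setup is configured like a Data Guard destination; this is a sketch, and the service name, group number, path, and size are illustrative:

```sql
-- On the source: ship redo to the downstream mining database
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=dbms_cap ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE;

-- On the downstream database: standby redo logs to receive the stream
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/srl04.log') SIZE 512M;
```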
Q 19: What is BATCHSQL mode and how can it increase GoldenGate performance?
A 19: When GoldenGate is replicating changes to the target database, in default mode the Replicat process applies SQL to the target database one statement at a time. This can cause a performance issue where the Replicat process cannot apply changes quickly enough compared to the rate at which the Extract process delivers the data.
GoldenGate has addressed this issue through the use of the BATCHSQL configuration parameter for the Replicat process. BATCHSQL segregates similar SQL statements into batches and applies them all at once. Similar SQL statements are those that perform a specific operation type (insert, update, or delete) against the same target table with the same column list.
Note that Oracle Large Objects (LOBs) and rows greater than 25 KB are not supported by BATCHSQL.
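Enabling it is a single line in the Replicat parameter file; the batch-size options are optional, and the values below are illustrative:

```
-- Group similar DML into array operations on the target
BATCHSQL BATCHESPERQUEUE 100, OPSPERBATCH 2000
```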

GoldenGate Subdirectories Brief Details

After successfully installing Oracle GoldenGate 12c, the installer creates a number of subdirectories that are used by the different components of Oracle GoldenGate. There are so many directories that a new user can get confused about where to look for what. These directories are the default locations GoldenGate uses when creating objects and parameter files.
Below is a brief description of each subdirectory:
  • dirbdb: Contains the GoldenGate datastore files.
  • dirchk: Contains the GoldenGate checkpoint files. Extract and Replicat processes store their current read and write positions in the checkpoint file to support data accuracy and fault tolerance. The file extension is .cpe for Extract checkpoint files or .cpr for Replicat checkpoint files.
  • dircrd: Contains the GoldenGate Credential Store.
  • dirdat: Contains the GoldenGate trail files, created by Extract/pump processes to store records of extracted data for further processing (for example, by the Replicat process). File names start with a user-defined two-character prefix.
  • dirdef: Contains the GoldenGate definition mapping files. Data-definition files are created by the DEFGEN utility and contain source or target data definitions used in a heterogeneous synchronization environment. Written in external ASCII format. The file name is a user-defined name specified in the DEFGEN parameter file.
  • dirdmp: Contains the GoldenGate process dumps.
  • dirjar: Contains the GoldenGate Java archives.
  • dirout: Contains the GoldenGate output files.
  • dirpcs: Contains the GoldenGate process status files; the default location for status files. The file name is <group>.<extension>, where <group> is the group name and <extension> is pce (Extract), pcr (Replicat), or pcm (Manager). These files exist only while a process is running and show the program name, the process name, the port, and the process ID.
  • dirprm: Contains the GoldenGate parameter files. Stores run-time parameters for GoldenGate process groups or utilities. Written in external ASCII format. The file name is <group name or user-defined name>.prm, or mgr.prm for the Manager.
  • dirrpt: Contains the GoldenGate process reports; the default location for report files created by Extract, Replicat, and Manager processes to record statistical information about a processing run. Written in external ASCII format. The file name is <group name><sequence number>.rpt, where <sequence number> is a sequential number appended to aged files.
  • dirsql: Contains user-defined SQL files; the default location for scripts created by the TRIGGEN utility, which contain SQL syntax for creating GoldenGate logging triggers and GoldenGate log tables. Written in external ASCII format. The file name is user-defined, or defaults to GGSLOG (table-creation script) or the table name (trigger-creation script), with a .sql extension. These scripts can be edited if needed.
  • dirtmp: Contains the GoldenGate temporary files; the default location for storing large transactions when their size exceeds the allocated memory.
  • dirwlt: Contains the GoldenGate wallet files.
  • dirwww: Contains the GoldenGate web pages.
  • dirver: Contains the GoldenGate Veridata files.

Performance Tuning Tips for GoldenGate

There are a number of factors to keep in mind to successfully tune a GoldenGate setup. If you carefully design your system with the end goal in mind and apply some proven performance tuning measures, you should be able to get real-time replication from your GoldenGate setup.
Below are some of the key points that we found helpful when doing Performance Tuning for GoldenGate.
1) Have a very clear definition of Baseline
Before we can start tuning, we must record our baseline. See when load on your application peaks and take multiple readings of key database performance parameters like CPU/Memory/cursor usage /Transactions count/ etc. Generate and save AWR reports also for that time.
Involve your Application Stress testing team to simulate data and transactions in an exact Production replica.

2) Think “Parallel”
If you have a lot of tables to replicate, do not just put all of them in a single Extract/Replicat process. Divide that large number of tables among parallel processes and trail files. This will certainly help the system perform better.
We have also seen that increasing parallelism on the Replicat (target) side gives better results than on the Extract (source) side.
You increase parallelism in GoldenGate 12c by adding a parameter like the one below to the parameter file:
DBOPTIONS INTEGRATEDPARAMS (parallelism 6)
And always keep in mind parallelism will increase the CPU/Memory consumption so keep monitoring those parameters closely.

3) Study and Optimize Online Redo Logs
It is very important that the redo logs be arranged so that GoldenGate performance is optimized. Redo logs are constantly written and read, making them a prime candidate for tuning.
  • Keep redo logs on fast disk (example SSD, Flash Disk) wherever possible
  • Ensure adequate number of redo log groups are configured
  • Read your AWR reports carefully and see if any redo-log-related wait events (for example, ‘log file sync’) are showing up.
  • Incorrect RAID settings for the redo log disks are another issue. RAID 1+0 (mirrored striped sets) is recommended for these files; RAID 5 is least recommended.

4) Understand the data in tables carefully
Does your table have any LOB columns (BLOB, CLOB, NCLOB)? LOB data types store large blocks of unstructured data in binary or character format, and it is a good idea for tables with LOB data to have their own Extract, data pump, and Replicat process groups so that throughput can be increased.
Do your tables have primary keys? The absence of a key on a table with many columns can significantly increase redo log generation.

5) Avoid Contention at Trail file level
According to Oracle’s best practice, no more than three Replicat processes should read the same remote trail file. So it is better to pair each Replicat with its own trail file and Extract process.

6) Verify if Network is your culprit
If the network is performing poorly, you will get high latency, possible disconnections, and poor performance. All of this can leave the data pump process unable to write to the remote trail quickly enough.
Work with your network team to see whether the OS TCP socket buffers can be raised, the TCP packet size changed, and so on. OS defaults are usually on the low side and definitely not optimized for applications like GoldenGate.
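On the GoldenGate side, the data pump’s RMTHOST line accepts socket-buffer options worth testing; the host name and values below are illustrative and should be derived from your bandwidth-delay product:

```
-- In the data pump parameter file: larger TCP buffers for a high-latency WAN
RMTHOST targethost, MGRPORT 7809, TCPBUFSIZE 1000000, TCPFLUSHBYTES 1000000
```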

7) Check if your applications are hosted on over-utilized hardware
If you have a very busy system which is already utilizing the hardware resources to the max, then it may prove helpful if some of those hardware parameters value can be increased to accommodate the GoldenGate resource demands. Closely monitor your CPU Consumption/ Memory usage / I/O contentions.

8) Plan to install latest GoldenGate Release
At the time of writing this post, 12cR2 is the latest GoldenGate release. This release has new features like Integrated Replicat, which speeds up data apply and is a good performance improvement.

9) Use new Integrated Replicat features that are present in 12c version
You can configure the Replicat processes to commit asynchronously at the session level by using the commit_wait = ‘NOWAIT’ parameter setting. This ensures that Oracle does not wait for a commit to succeed before allowing the session to continue.
That can be setup by using SQLEXEC statement in each parameter file:
SQLEXEC "alter session set commit_wait = 'NOWAIT'";
You can also explore other parameters like BATCHTRANSOPS and BATCHSQL_MODE, which also help boost Replicat speed.

10) Calculate transaction size on the Source side and tweak parameters to increase throughput
Use GoldenGate’s LOGDUMP utility to identify the transaction size on the source database.
If required, increase the EAGER_SIZE inbound server parameter for the integrated Replicat to a value slightly higher than the transaction size.
Example:
DBOPTIONS INTEGRATEDPARAMS (EAGER_SIZE 18000)

11) If you get stuck somewhere, don’t forget to trace
Tracing gathers additional information from a given process and does help sometimes when you simply cannot figure out which process is slow or which table is the bottleneck.
Oracle GoldenGate 12c offers such a tracing mechanism through its TRACE and TRACE2 options. TRACE provides processing information, whereas TRACE2 identifies the processes with wait events.
Trace will help you collect some key parameters like: table name, operation name/type/number, average/maximum waits.

References / Read More
Note 1557031.1 Recommended patches for Integrated Capture/Replicat
Note 1448324.1 OGG Integrated Healthcheck Script
Note 1488668.1 GoldenGate Performance Data Gathering
Note 1596493.1 GoldenGate Integration with AWR
Note 1485620.1 Best Practices Downstream Capture
MAA white paper Oracle GoldenGate Performance Best Practices
http://www.oracle.com/technetwork/database/availability/maa-gg-performance-1969630.pdf

Frequently Used GoldenGate Commands

Below are some of the GoldenGate commands that we use frequently. We work on GoldenGate 12c version but almost all commands work in same manner in 11g and 12c.
MANAGER RELATED COMMANDS
  • INFO MANAGER / INFO MGR / STATUS MANAGER: Displays the Manager process status information.
  • START MANAGER: Starts the Manager process.
  • STOP MANAGER / STOP MGR / STOP MANAGER!: Stops the Manager process. With the trailing ! it skips the y/n prompt and stops the process directly.
  • SEND MANAGER: Not used very frequently, but very useful when you are troubleshooting an issue. Retrieves additional status information when used with extra options. For example, to get the port information:
    SEND MANAGER GETPORTINFO DETAIL
EXTRACT RELATED COMMANDS
  • INFO <EXTRACT_NAME> / INFO EXTRACT <EXTRACT_NAME> / STATUS EXTRACT: Displays the status summary for an Extract process. Use the DETAIL keyword for further details:
    INFO EXTRACT, DETAIL
    Wildcards are supported too; for example, to get details of all Extract processes starting with E in a single statement:
    INFO E*, DETAIL
  • ADD EXTRACT: Creates a new Extract group. Example for an Integrated Extract:
    ADD EXTRACT ESOURCE01, INTEGRATED TRANLOG, BEGIN NOW
  • ALTER EXTRACT: Alters an existing Extract group.
  • STOP EXTRACT: Stops the Extract process. Supports wildcards too: STOP EXTRACT *
  • STATS EXTRACT: Displays the Extract process statistics.
  • START EXTRACT: Starts the Extract process. Supports wildcards too: START EXTRACT *
  • REGISTER EXTRACT: Registers an Extract process group with its source database.
  • UNREGISTER EXTRACT: Unregisters an Extract process group from its source database.
  • KILL EXTRACT: Kills an Extract process. Use when the process cannot be stopped gracefully.
  • LAG EXTRACT: Displays the lag time between the Extract process and the data source.
  • CLEANUP EXTRACT: Deletes the run history for the specified Extract group. You must stop the Extract before cleaning it up.
  • DELETE EXTRACT: Deletes the Extract group. The Extract process must first be stopped.
  • SEND EXTRACT: Directs the Extract process to generate statistics/reports.
  • VIEW REPORT: Allows process reports generated by the SEND command to be viewed.
REPLICAT RELATED COMMANDS
  • INFO REPLICAT <REPLICAT_NAME> / STATUS REPLICAT <REPLICAT_NAME>: Displays the status summary for the Replicat process. The DETAIL option provides more information:
    INFO REPLICAT, DETAIL
  • START REPLICAT <REPLICAT_NAME>: Starts the Replicat process.
  • STATS REPLICAT <REPLICAT_NAME>: Displays the Replicat process statistics.
  • STOP REPLICAT <REPLICAT_NAME>: Stops the Replicat process.
  • KILL REPLICAT <REPLICAT_NAME>: Kills the Replicat process; used when the process cannot be stopped gracefully.
  • LAG REPLICAT <REPLICAT_NAME>: Displays the lag time between the Replicat process and the data source.
  • REGISTER REPLICAT <REPLICAT_NAME>: Registers the Replicat process group with its target database.
  • UNREGISTER REPLICAT <REPLICAT_NAME>: Unregisters a Replicat from its target database.
  • CLEANUP REPLICAT <REPLICAT_NAME>: Deletes the run history for the specified Replicat group. The Replicat process must be stopped first.
  • DELETE REPLICAT <REPLICAT_NAME>: Deletes a Replicat group. The Replicat process must be stopped first.
  • SEND REPLICAT <REPLICAT_NAME>: Sends an ad hoc report generation or trace file generation command to Replicat.
  • VIEW REPORT <REPLICAT_NAME>: Allows reports generated by the SEND command to be viewed.
MISCELLANEOUS COMMANDS
  • ADD EXTTRAIL: Creates a local trail for an Extract group.
  • ALTER EXTTRAIL: Alters a local trail, for example to change the trail file size.
  • ADD RMTTRAIL: Creates a remote trail for an Extract or data pump group.
  • ALTER RMTTRAIL: Alters the remote trail, for example its size.
  • INFO EXTTRAIL: Provides information on the local trail status.
  • INFO RMTTRAIL: Provides information on the remote trail status.
  • DELETE EXTTRAIL: Deletes the local trail.
  • DELETE RMTTRAIL: Deletes the remote trail.
  • VIEW PARAMS: Allows a process’ parameter file to be viewed.
  • EDIT PARAMS: Allows a process’ parameter file to be edited.
  • ADD CREDENTIALSTORE: Creates a Credential Store wallet to store database user/password pairs.
  • INFO CREDENTIALSTORE: Lists all users defined in the Credential Store wallet.
  • DBLOGIN: Provides access to the database with a username and password, or with a Credential Store alias. Example:
    DBLOGIN USERIDALIAS sourcepass
  • ADD SCHEMATRANDATA: Can be run only after you have connected to the database using DBLOGIN. Adds database schema-level supplemental logging.
  • ADD TRANDATA: Can be run only after you have connected to the database using DBLOGIN. Adds supplemental logging to a database table.
  • DELETE TRANDATA: Stops supplemental logging on a database table.
  • INFO TRANDATA: Provides the supplemental logging status of a database table.
  • INFO SCHEMATRANDATA: Provides the supplemental logging status of a database schema.
  • HISTORY: Lists a history of GGSCI commands that were used.
  • ADD CHECKPOINTTABLE: Adds a checkpoint table on the target database.
  • DELETE CHECKPOINTTABLE: Deletes the checkpoint table from the database.
  • INFO CHECKPOINTTABLE: Shows the checkpoint table status and creation date.
  • OBEY: Runs a set of commands in bulk by putting them in a .oby file and executing it. Example:
    OBEY dirprm/conf.oby
  • SHELL: Allows OS commands to be executed from the GGSCI command prompt. Example:
    SHELL df -h dirdat
Also, at any time you can type HELP at the GGSCI command prompt to get help on a command.
For example, for help on a specific command, type HELP [command] [object]:
HELP ADD EXTRACT
REFERENCE:
https://docs.oracle.com/goldengate/1212/gg-winux/GWURF/ggsci_commands.htm#GWURF110