Streams Troubleshooting Guide

Troubleshooting the Capture Process

Is the capture process enabled?
Verify if the capture process is ENABLED using the following query:
prompt
prompt ++ CAPTURE PROCESSES IN DATABASE ++ 
col capture_name HEADING 'Capture|Name' format a30 wrap
col status HEADING 'Status' format a10 wrap
col QUEUE HEADING 'Queue' format a25 wrap
col RSN HEADING 'Positive|Rule Set' format a25 wrap
col RSN2 HEADING 'Negative|Rule Set' format a25 wrap
col capture_type HEADING 'Capture|Type' format a10 wrap
col error_message HEADING 'Capture|Error Message' format a60 word
col logfile_assignment HEADING 'Logfile|Assignment'
col checkpoint_retention_time HEADING 'Days to |Retain|Checkpoints'
col Status_change_time HEADING 'Status|Timestamp'
col error_number HEADING 'Error|Number'
col version HEADING 'Version'
SELECT capture_name, queue_owner||'.'||queue_name QUEUE, capture_type, status,
rule_set_owner||'.'||rule_set_name RSN, negative_rule_set_owner||'.'||negative_rule_set_name RSN2,
checkpoint_retention_time, version, logfile_assignment,error_number, status_change_time, error_message
FROM DBA_CAPTURE;
If the capture process is DISABLED, restart the process using:
exec dbms_capture_adm.start_capture('')
If the capture process is ABORTED, check the ERROR_NUMBER and ERROR_MESSAGE columns for the error that caused the process to abort.
Some typical capture errors are:

ORA-1323 Invalid State
This error typically indicates that an archived log file is missing.
Check the Logminer log table (SYSTEM.LOGMNR_LOG$) for the last log file for each thread, then look at the operating system files for the next log file in sequence. Typically, this is the log file that cannot be found.
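As a sketch, the last registered log file per thread can be listed with a query like the following (the THREAD# and SEQUENCE# column names in SYSTEM.LOGMNR_LOG$ are assumptions; adjust for your release):

```sql
-- Highest log sequence recorded per redo thread; the next sequence
-- on disk is typically the file the capture process cannot find.
SELECT thread#, MAX(sequence#) last_registered_sequence
  FROM system.logmnr_log$
 GROUP BY thread#;
```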
If this does not help, turn on LogMiner and capture tracing, restart the capture, and examine the capture trace file in the bdump directory.
ALTER SYSTEM SET EVENTS '1349 trace name context forever, level 7';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level','127');
exec dbms_capture_adm.start_capture('yourcapturename');
To turn off tracing:
ALTER SYSTEM SET EVENTS '1349 trace name context off';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level',null);

ORA-1347 Supplemental log data no longer found

This error indicates that minimum supplemental logging is not enabled for the instance. This occurs most commonly on 9iR2 RAC instances. When configuring supplemental logging for RAC in 9iR2, it is necessary to issue the ALTER DATABASE command at each instance in the cluster BEFORE creating the capture process. In 10g, supplemental logging can be initiated from a single instance so it is no longer necessary to issue the ALTER DATABASE ADD SUPPLEMENTAL LOG DATA command at multiple instances. After issuing the ALTER DATABASE ADD SUPPLEMENTAL LOG DATA, be sure to issue an ALTER SYSTEM ARCHIVE LOG CURRENT or ALTER SYSTEM SWITCH LOGFILE.
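Putting the commands above together, a minimal sequence looks like this (run the ALTER DATABASE on each instance in 9iR2 RAC, or once in 10g):

```sql
-- Enable minimum supplemental logging before creating the capture process.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Force the supplemental logging change into the archived redo stream.
ALTER SYSTEM ARCHIVE LOG CURRENT;
```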
This error can also be signaled if supplemental logging has been dropped, either explicitly or implicitly. ALTER DATABASE DROP SUPPLEMENTAL LOG DATA explicitly disables supplemental logging. If this command is issued, the capture process will abort with an ORA-1347 error.
Supplemental logging can be implicitly disabled by DML statements that use a BUFFER hint. The BUFFER hint is frequently used in TPC-C benchmarks. Logging can also be disabled when a temporary table and a CLOB are used in combination. This is reported as bug 3172456 and fixed in 9.2.0.6.


ORA-1372: Insufficient processes for specified LogMiner operation

This error indicates that there are not enough processes available to start the capture process. Check the following:
    1. Verify that the init.ora parameter parallel_max_servers is large enough to start the capture and apply processes. Each capture process defined on the database requires 2 + (capture parallelism) processes. If the capture parallelism parameter is set to 1 (the default), 3 processes are required to start the capture; if parallelism is 3, then 2 + 3 = 5 processes are required.
    2. Check whether the Database Resource Manager is used for this database. Check for any plans that set limits on parallel processes by running the following:
              select PLAN, TYPE, PARALLEL_DEGREE_LIMIT_P1 from DBA_RSRC_PLAN_DIRECTIVES;

PARALLEL_DEGREE_LIMIT_P1 specifies a limit on the degree of parallelism for any operation and the default is UNLIMITED.

If this is the cause, you may need to disable the plan or set the parallelism limit high enough for the SYSTEM_PLAN.
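To check step 1 concretely, compare the parameter against the configured capture parallelism; a query along these lines can be used (DBA_CAPTURE_PARAMETERS is assumed to be available in your release):

```sql
-- Current server-wide limit on parallel slave processes.
SELECT name, value FROM v$parameter WHERE name = 'parallel_max_servers';

-- Parallelism configured for each capture; each capture needs
-- 2 + this value processes to start.
SELECT capture_name, parameter, value
  FROM dba_capture_parameters
 WHERE parameter = 'PARALLELISM';
```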


ORA-23605 invalid value for the Streams parameter FILE_NAME 

Capture abort with ORA-23605 can occur for one of the following reasons:
    1. An invalid value for the Streams parameter FILE_NAME indicates an inconsistency between the capture LogMiner session and existing LogMiner sessions. Generally, the registered archived logs view is empty when this occurs.
        This error can occur when attempting to add a logfile to a LogMiner session. To confirm this problem, run the following query:
                  select logminer_id from dba_capture c
                  where not exists (select 1 from system.logmnr_session$ s where s.session# = c.logminer_id);

                  
If rows are returned, this is most likely the problem. Check whether metadata was removed at some point or an incomplete drop of the capture process was performed. To fix, drop the existing capture process with the non-existent LogMiner session, then recreate the capture process.
    2. Attempting to use a dictionary build from a previously deleted logfile (bug 5278539, fixed in 10.2.0.4, 11.1). In this situation, there are multiple entries in the V$ARCHIVED_LOG view for the same logfile, with the name being NULL for deleted logfiles. The patch avoids checking DELETED entries in V$ARCHIVED_LOG:

         select name, status from v$archived_log;

What is the status of the capture process?
If the capture process is ENABLED check its current status using the following query:
prompt ++ CAPTURE STATISTICS ++
COLUMN PROCESS_NAME HEADING "Capture|Process|Number" FORMAT A7
COLUMN CAPTURE_NAME HEADING 'Capture|Name' FORMAT A10
COLUMN SID HEADING 'Session|ID' FORMAT 99999999999999
COLUMN SERIAL# HEADING 'Session|Serial|Number'
COLUMN STATE HEADING 'State' FORMAT A17
SELECT SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
c.CAPTURE_NAME,
c.STARTUP_TIME,
c.SID,
c.SERIAL#,
c.STATE
FROM gV$STREAMS_CAPTURE c, gV$SESSION s
WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;
If the status of the capture process is CAPTURING CHANGES, verify that messages are being enqueued into the capture queue.

Are LCRs being enqueued into the capture queue?

The following query can be used to verify if capture is enqueuing messages to the queue:
SELECT SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
c.CAPTURE_NAME,
c.STARTUP_TIME,
c.SID,
c.SERIAL#,
c.STATE,
c.state_changed_time,
c.TOTAL_MESSAGES_CAPTURED,
c.TOTAL_MESSAGES_ENQUEUED, total_messages_created
FROM gV$STREAMS_CAPTURE c, gV$SESSION s
WHERE c.SID = s.SID AND c.SERIAL# = s.SERIAL#;

Are capture rules defined properly?
If the capture process is capturing changes but messages are not being enqueued to the capture queue, verify if capture rules are defined properly using the following queries:
prompt ++  STREAMS TABLE SUBSETTING RULES ++
col NAME Heading 'Capture Name' format a25 wrap
col object format A25 WRAP
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap
select streams_name NAME,schema_name||'.'||object_name OBJECT,
RULE_TYPE || ' TABLE RULE' TYPE,
rule_owner||'.'||rule_name RULE,
DML_CONDITION , SUBSETTING_OPERATION
from dba_streams_rules where streams_type = 'CAPTURE' and
(dml_condition is not null or subsetting_operation is not null);
prompt
prompt ++ CAPTURE RULES BY RULE SET ++
col capture_name format a25 wrap  heading 'Capture|Name'
col RULE_SET format a25 wrap heading 'Rule Set|Name'
col RULE_NAME format a25 wrap heading 'Rule|Name'
col condition format a50 wrap heading 'Rule|Condition'
set long 4000
REM break on rule_set
select c.capture_name, rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET ,
rsr.rule_owner||'.'||rsr.rule_name RULE_NAME, r.rule_condition CONDITION
from dba_rule_set_rules rsr, DBA_RULES r ,DBA_CAPTURE c
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and
rsr.rule_set_owner=c.rule_set_owner and rsr.rule_set_name=c.rule_set_name and rsr.rule_set_name in
(select rule_set_name from dba_capture) order by rsr.rule_set_owner,rsr.rule_set_name;
prompt ++ CAPTURE RULES IN NEGATIVE RULE SET ++
prompt
select c.capture_name, rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET ,rsr.rule_owner||'.'||rsr.rule_name RULE_NAME, r.rule_condition CONDITION
from dba_rule_set_rules rsr, DBA_RULES r ,DBA_CAPTURE c
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and rsr.rule_set_owner=c.negative_rule_set_owner and rsr.rule_set_name=c.negative_rule_set_name and
rsr.rule_set_name in
(select negative_rule_set_name rule_set_name from dba_capture) order by rsr.rule_set_owner,rsr.rule_set_name;

Are there any overlapping rules?
If capture rules are set up correctly and capture is still not enqueuing messages, verify if there are any overlapping rules. The following PL/SQL code can be used for that:
set serveroutput on
declare
overlap_rules boolean := FALSE;
verbose boolean := TRUE;
cursor overlapping_rules is
select a.streams_name sname, a.streams_type stype,
a.rule_set_owner rule_set_owner, a.rule_set_name rule_set_name,
a.rule_owner owner1, a.rule_name name1, a.streams_rule_type type1,
b.rule_owner owner2, b.rule_name name2, b.streams_rule_type type2
from dba_streams_rules a, dba_streams_rules b
where a.rule_set_owner = b.rule_set_owner
and a.rule_set_name = b.rule_set_name
and a.streams_name = b.streams_name and a.streams_type = b.streams_type
and a.rule_type = b.rule_type
and (a.subsetting_operation is null or b.subsetting_operation is null)
and (a.rule_owner != b.rule_owner or a.rule_name != b.rule_name)
and ((a.streams_rule_type = 'GLOBAL' and b.streams_rule_type
in ('SCHEMA', 'TABLE') and a.schema_name = b.schema_name)
or (a.streams_rule_type = 'SCHEMA' and b.streams_rule_type = 'TABLE'
and a.schema_name = b.schema_name)
or (a.streams_rule_type = 'TABLE' and b.streams_rule_type = 'TABLE'
and a.schema_name = b.schema_name and a.object_name = b.object_name
and a.rule_name < b.rule_name)
or (a.streams_rule_type = 'SCHEMA' and b.streams_rule_type = 'SCHEMA'
and a.schema_name = b.schema_name and a.rule_name < b.rule_name)
or (a.streams_rule_type = 'GLOBAL' and b.streams_rule_type = 'GLOBAL'
and a.rule_name < b.rule_name))
order by a.rule_name;
begin
for rec in overlapping_rules loop
overlap_rules := TRUE;
dbms_output.put_line('+  WARNING:  The rules ''' || rec.owner1 || '''.''' || rec.name1 || ''' and
''' || rec.owner2 || '''.''' || rec.name2 || ''' from rule set ''' || rec.rule_set_owner || '''.''' ||
rec.rule_set_name || ''' overlap.');
end loop;
if overlap_rules and verbose then
dbms_output.put_line('+Overlapping rules are a problem especially when rule-based transformations exist.');
dbms_output.put_line('+Streams makes no guarantees of which rule in a rule set will evaluate to TRUE,');
dbms_output.put_line('+Thus overlapping rules will cause inconsistent behavior, and should be avoided.');
end if;
dbms_output.put_line('+');
end;
/

Is the capture paused for flow control?
If the capture is PAUSED FOR FLOW CONTROL, verify whether the messages that have been enqueued have been browsed, using the following query:
select * from gv$buffered_publishers;
Check the current number of messages in the queue and compare it to the number of unbrowsed messages. If messages have been browsed and are not getting removed from the queue then verify if aq_tm_processes is set explicitly to 0 in the source or target databases:
declare
mycheck number;
begin
select 1 into mycheck from v$parameter where name = 'aq_tm_processes' and value = '0' and
(ismodified <> 'FALSE' OR isdefault='FALSE');
if mycheck = 1 then
dbms_output.put_line('+ERROR: The parameter ''aq_tm_processes'' should not be explicitly set to 0!');
dbms_output.put_line('+Queue monitoring is disabled for all queues.');
dbms_output.put_line('+To resolve this problem, set the value to 1 using: ALTER SYSTEM SET AQ_TM_PROCESSES=1; ');
end if;
exception when no_data_found then null;
end;
/
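To compare the current number of messages in each buffered queue against those spilled to disk, a query such as the following can help (column names taken from GV$BUFFERED_QUEUES; adjust for your release):

```sql
-- Message counts per buffered queue; a large NUM_MSGS that does not
-- decrease suggests consumers are not browsing/removing messages.
SELECT inst_id, queue_schema, queue_name, num_msgs, spill_msgs
  FROM gv$buffered_queues;
```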
If messages are not being browsed, check the status of the propagation and apply processes.
For additional information, please review the following note:

Note 746247.1 Troubleshooting Streams Capture when status is Paused For Flow Control

Is the capture waiting for REDO or in DICTIONARY INITIALIZATION state?
Turn on logminer and capture tracing and restart capture. Look at the capture trace file in the bdump directory.
ALTER SYSTEM SET EVENTS '1349 trace name context forever, level 7';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level','127');
exec dbms_capture_adm.start_capture('yourcapturename');
To turn off tracing:
ALTER SYSTEM SET EVENTS '1349 trace name context off';
exec dbms_capture_adm.set_parameter('yourcapturename','trace_level',null);
The trace will indicate which archive is required by the capture process. Verify if the archive exists in the archive_destination specified for the database. If the file exists, verify if it is registered with the capture process using the following query:
COLUMN CONSUMER_NAME HEADING 'Capture|Process|Name' FORMAT A15
COLUMN SOURCE_DATABASE HEADING 'Source|Database' FORMAT A10
COLUMN SEQUENCE# HEADING 'Sequence|Number' FORMAT 999999
COLUMN NAME HEADING 'Archived Redo Log|File Name' format a35
COLUMN FIRST_SCN HEADING 'Archived Log|First SCN'
COLUMN FIRST_TIME HEADING 'Archived Log Begin|Timestamp'
COLUMN NEXT_SCN HEADING 'Archived Log|Last SCN'
COLUMN NEXT_TIME HEADING 'Archived Log Last|Timestamp'
COLUMN MODIFIED_TIME HEADING 'Archived Log|Registered Time'
COLUMN DICTIONARY_BEGIN HEADING 'Dictionary|Build|Begin' format A6
COLUMN DICTIONARY_END HEADING 'Dictionary|Build|End' format A6
COLUMN PURGEABLE HEADING 'Purgeable|Archive|Log' format a9
SELECT r.CONSUMER_NAME,
r.SOURCE_DATABASE,
r.SEQUENCE#,
r.NAME,
r.first_scn,
r.FIRST_TIME,
r.next_scn,
r.next_time,
r.MODIFIED_TIME,
r.DICTIONARY_BEGIN,
r.DICTIONARY_END,
r.purgeable
FROM DBA_REGISTERED_ARCHIVED_LOG r, DBA_CAPTURE c
WHERE r.CONSUMER_NAME = c.CAPTURE_NAME
ORDER BY source_database, consumer_name, r.first_scn;
If the archive is not registered, check the alert.log for any errors during registration and try registering it manually using:
alter database register logfile '' for '';
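For example, with a hypothetical file name and capture process name (both placeholders, not values from your system):

```sql
-- '/oracle/arch/arc_1_1234.arc' and 'STREAMS_CAPTURE' are illustrative only.
ALTER DATABASE REGISTER LOGFILE '/oracle/arch/arc_1_1234.arc' FOR 'STREAMS_CAPTURE';
```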

Is the capture process current?
The following query can be used to determine the message enqueuing latency of each capture process on the database:
COLUMN CAPTURE_NAME HEADING 'Capture|Process|Name' FORMAT A10
COLUMN LATENCY_SECONDS HEADING 'Latency|in|Seconds' FORMAT 999999
COLUMN CREATE_TIME HEADING 'Message Creation|Time' FORMAT A20
COLUMN ENQUEUE_TIME HEADING 'Enqueue Time' FORMAT A20
COLUMN ENQUEUE_MESSAGE_NUMBER HEADING 'Message|Number' FORMAT 999999
SELECT CAPTURE_NAME,
(ENQUEUE_TIME-ENQUEUE_MESSAGE_CREATE_TIME)*86400 LATENCY_SECONDS,
TO_CHAR(ENQUEUE_MESSAGE_CREATE_TIME, 'HH24:MI:SS MM/DD/YY') CREATE_TIME,
TO_CHAR(ENQUEUE_TIME, 'HH24:MI:SS MM/DD/YY') ENQUEUE_TIME,
ENQUEUE_MESSAGE_NUMBER
FROM V$STREAMS_CAPTURE;

Are there any long running transactions?
The following query can be used to identify any long-running transactions in the system:
prompt
prompt ++ Current Long Running Transactions ++
prompt Current transactions open for more than 20 minutes
prompt
col runlength HEAD 'Txn Open|Minutes' format 9999.99
col sid HEAD 'Session' format a13
col xid HEAD 'Transaction|ID' format a18
col terminal HEAD 'Terminal' format a10
col program HEAD 'Program' format a27 wrap
select t.inst_id, sid||','||serial# sid,xidusn||'.'||xidslot||'.'||xidsqn xid,
(sysdate -  start_date ) * 1440 runlength ,terminal,program from gv$transaction t, gv$session s
where t.addr=s.taddr and (sysdate - start_date) * 1440 > 20;
Long-running transactions are also reported in the alert.log.
Are there any large transactions being reported in the system?
Check the alert.log for any messages related to large transactions. The alert.log shows when the large transaction was identified and whether it has been committed or rolled back.
If no commit or rollback for the transaction has been reported in the alert.log, the transaction is still running.
 
Troubleshooting the Propagation Process 


Does the Propagation Use the Correct Source and Destination Queue?


Make sure the propagation has been configured properly to propagate messages from the correct source queue to the correct destination queue, and using a valid database link.

Check how the propagation has been defined:

- Queue-to-Database Link: The propagation is defined by a source queue and database link pair. This is the default. The QUEUE_TO_QUEUE parameter is set to FALSE in this case.

- Queue-to-Queue: The propagation is defined by a source queue and destination queue pair. The QUEUE_TO_QUEUE parameter is set to TRUE.

From the Healthcheck report this can be visualized in section "++ PROPAGATIONS IN DATABASE ++"
COLUMN 'Source Queue' FORMAT A39
COLUMN 'Destination Queue' FORMAT A39
COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A35
column queue_to_queue HEADING 'Q-2-Q'
column error_message HEADING 'Error Message'
column error_date HEADING 'Error Date'

SELECT p.propagation_name, p.SOURCE_QUEUE_OWNER ||'.'||
p.SOURCE_QUEUE_NAME ||'@'||
g.GLOBAL_NAME "Source Queue",
p.DESTINATION_QUEUE_OWNER ||'.'||
p.DESTINATION_QUEUE_NAME ||'@'||
p.DESTINATION_DBLINK "Destination Queue",
p.queue_to_queue,
p.status,
p.error_date,
p.error_message
FROM DBA_PROPAGATION p, GLOBAL_NAME g;



Is the Propagation Enabled?

For a propagation job to propagate messages, the propagation must be enabled. If messages are not being propagated by a propagation as expected, then the propagation might not be enabled.

Check:

- The propagation has an associated Jxxx process

- Whether the propagation is ENABLED, DISABLED, or ABORTED

- The date of the last error, if there are any propagation errors, and the error number/message of the last error


From the Healthcheck report this can be visualized in section "++ SCHEDULE FOR EACH PROPAGATION++"

prompt
COLUMN PROPAGATION_NAME Heading 'Propagation|Name' format a17 wrap
COLUMN START_DATE HEADING 'Start Date'
COLUMN PROPAGATION_WINDOW HEADING 'Duration|in Seconds' FORMAT 9999999999999999
COLUMN NEXT_TIME HEADING 'Next|Time' FORMAT A8
COLUMN LATENCY HEADING 'Latency|in Seconds' FORMAT 9999999999999999
COLUMN SCHEDULE_DISABLED HEADING 'Status' FORMAT A8
COLUMN PROCESS_NAME HEADING 'Process' FORMAT A8
COLUMN FAILURES HEADING 'Number of|Failures' FORMAT 99
COLUMN LAST_ERROR_MSG HEADING 'Error Message' FORMAT A50
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999999
COLUMN CURRENT_START_DATE HEADING 'Current|Start' FORMAT A17
COLUMN LAST_RUN_DATE HEADING 'Last|Run' FORMAT A17
COLUMN NEXT_RUN_DATE HEADING 'Next|Run' FORMAT A17
COLUMN LAST_ERROR_DATE HEADING 'Last|Error' FORMAT A17
column message_delivery_mode HEADING 'Message|Delivery|Mode'
column queue_to_queue HEADING 'Q-2-Q'

SELECT p.propagation_name,TO_CHAR(s.START_DATE, 'HH24:MI:SS MM/DD/YY') START_DATE,
s.PROPAGATION_WINDOW,
s.NEXT_TIME,
s.LATENCY,
DECODE(s.SCHEDULE_DISABLED,
'Y', 'Disabled',
'N', 'Enabled') SCHEDULE_DISABLED,
s.PROCESS_NAME, s.total_bytes,
s.FAILURES,
s.message_delivery_mode,
p.queue_to_queue,
s.LAST_ERROR_MSG
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK =
NVL(REGEXP_SUBSTR(s.destination, '[^@]+', 1, 2), s.destination)
AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
AND s.QNAME = p.SOURCE_QUEUE_NAME order by message_delivery_mode, propagation_name;

SELECT p.propagation_name, message_delivery_mode, TO_CHAR(s.LAST_RUN_DATE, 'HH24:MI:SS MM/DD/YY') LAST_RUN_DATE,
TO_CHAR(s.CURRENT_START_DATE, 'HH24:MI:SS MM/DD/YY') CURRENT_START_DATE,
TO_CHAR(s.NEXT_RUN_DATE, 'HH24:MI:SS MM/DD/YY') NEXT_RUN_DATE,
TO_CHAR(s.LAST_ERROR_DATE, 'HH24:MI:SS MM/DD/YY') LAST_ERROR_DATE
FROM DBA_QUEUE_SCHEDULES s, DBA_PROPAGATION p
WHERE p.DESTINATION_DBLINK =
NVL(REGEXP_SUBSTR(s.destination, '[^@]+', 1, 2), s.destination)
AND s.SCHEMA = p.SOURCE_QUEUE_OWNER
AND s.QNAME = p.SOURCE_QUEUE_NAME order by message_delivery_mode, propagation_name;



Are There Enough Job Queue Processes?

In 10.2, propagation jobs use job queue processes to propagate messages. Make sure the JOB_QUEUE_PROCESSES initialization parameter is set to 2 or higher in each database instance that runs propagations.
It should be set to a value that is high enough to accommodate all of the jobs that run simultaneously.
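For instance (the value 4 below is illustrative; size it for your job load):

```sql
-- Check the current setting, then raise it if it is below 2.
SELECT value FROM v$parameter WHERE name = 'job_queue_processes';
ALTER SYSTEM SET job_queue_processes = 4;
```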

Check DBA_JOBS to ensure that:

- It shows a JOB like:

  next_date := sys.dbms_aqadm.aq$_propaq(job);

- This associated propagation JOB is not broken (BROKEN flag is N),

- The LOG_USER, PRIV_USER, SCHEMA_USER for this propagation JOB is SYS
prompt
set recsep each
set recsepchar =
select * from dba_jobs;
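To narrow the DBA_JOBS output to the propagation jobs described above, a filtered query can be used:

```sql
-- Propagation jobs are the ones whose WHAT column calls aq$_propaq.
SELECT job, log_user, priv_user, schema_user, broken, failures, what
  FROM dba_jobs
 WHERE what LIKE '%aq$_propaq%';
```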



In 11.1, AQ propagation uses Oracle Scheduler, enabling AQ propagation to take advantage of Scheduler features. Job queue processes parameters need not be set in Oracle Database 11g for propagation to work; Oracle Scheduler automatically starts the required number of slaves for the existing propagation schedules.
prompt
select * from dba_scheduler_jobs;



Check the Trace Files and Alert Log for Problems


Messages about propagation are recorded in trace files for the database in which the propagation job is running. A propagation job runs on the database containing the source queue in the propagation. These trace file messages can help you to identify and resolve problems in a Streams environment.

All trace files for background processes are written to the destination directory specified by the initialization parameter BACKGROUND_DUMP_DEST. The names of trace files are operating system specific, but each file usually includes the name of the process writing the file.

Each propagation uses a propagation job that depends on the job queue coordinator process and a job queue process. The job queue coordinator process is named cjqnn, where nn is the job queue coordinator process number, and a job queue process is named jnnn, where nnn is the job queue process number.

For example, on some operating systems, if the system identifier for a database running a propagation job is hqdb and the job queue coordinator process is 01, then the trace file for the job queue coordinator process starts with hqdb_cjq01.  Similarly, on the same database, if a job queue process is 001, then the trace file for the job queue process starts with hqdb_j001.  You can check the process name by querying the PROCESS_NAME column in the DBA_QUEUE_SCHEDULES data dictionary view.


Determining the Rule Sets for Each Propagation 
Make sure the rule sets and rules are set up properly according to the requirements.


From the Healthcheck report this can be visualized in sections:
++ PROPAGATION RULE SETS IN DATABASE ++

COLUMN PROPAGATION_NAME HEADING 'Propagation' FORMAT A35
COLUMN Positive HEADING 'Positive|Rule Set' FORMAT A35
COLUMN Negative HEADING 'Negative|Rule Set' FORMAT A35

SELECT PROPAGATION_NAME, RULE_SET_OWNER||'.'||RULE_SET_NAME Positive,
NEGATIVE_RULE_SET_OWNER||'.'||NEGATIVE_RULE_SET_NAME Negative
FROM DBA_PROPAGATION;


++ STREAMS PROPAGATION RULES CONFIGURED ++

col NAME Heading 'Name' format a25 wrap
col PropNAME format a25 Heading 'Propagation Name'
col object format a25 wrap
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap

select streams_name NAME,schema_name||'.'||object_name OBJECT,
rule_set_type,
SOURCE_DATABASE,
STREAMS_RULE_TYPE ||' '||Rule_type TYPE ,
INCLUDE_TAGGED_LCR,
rule_owner||'.'||rule_name RULE
from dba_streams_rules where streams_type = 'PROPAGATION'
order by name,object, source_database, rule_set_type,rule;


++ STREAMS TABLE SUBSETTING RULES ++

col NAME format a25 wraP
col object format A25 WRAP
col source_database format a15 wrap
col RULE format a35 wrap
col TYPE format a15 wrap
col dml_condition format a40 wrap


select streams_name NAME,schema_name||'.'||object_name OBJECT,
RULE_TYPE || ' TABLE RULE' TYPE,
rule_owner||'.'||rule_name RULE,
DML_CONDITION , SUBSETTING_OPERATION
from dba_streams_rules where streams_type = 'PROPAGATION' and (dml_condition is not null or subsetting_operation is not null);


++ PROPAGATION RULES BY RULE SET ++

col RULE_SET format a25 wrap
col RULE_NAME format a25 wrap
col condition format a60 wrap
set long 4000
REM break on RULE_SET

set long 4000
select rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET ,rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
r.rule_condition CONDITION from
dba_rule_set_rules rsr, dba_rules r
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and rule_set_name in
(select rule_set_name from dba_propagation) order by rsr.rule_set_owner,rsr.rule_set_name;


++ PROPAGATION RULES IN NEGATIVE RULE SET ++

select c.propagation_name, rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET ,rsr.rule_owner||'.'||rsr.rule_name RULE_NAME,
r.rule_condition CONDITION from
dba_rule_set_rules rsr, DBA_RULES r ,DBA_PROPAGATION c
where rsr.rule_name = r.rule_name and rsr.rule_owner = r.rule_owner and
rsr.rule_set_owner=c.negative_rule_set_owner and rsr.rule_set_name=c.negative_rule_set_name
and rsr.rule_set_name in
(select negative_rule_set_name rule_set_name from dba_propagation) order by rsr.rule_set_owner,rsr.rule_set_name;


++ PROPAGATION RULE TRANSFORMATIONS BY RULE SET ++

col RULE_SET format a25 wrap
col RULE_NAME format a25 wrap
col action_context_name format a32 wrap
col action_context_value format a32 wrap
REM break on RULE_SET

select rsr.rule_set_owner||'.'||rsr.rule_set_name RULE_SET , r.* from
dba_rule_set_rules rsr, dba_streams_transformations r
where
r.rule_name = rsr.rule_name and r.rule_owner = rsr.rule_owner and rule_set_name in
(select rule_set_name from dba_propagation)
order by rsr.rule_set_owner,rsr.rule_set_name, r.rule_owner, r.rule_name,transform_type desc, step_number, precedence;



Determining the Total Number of Messages and Bytes Propagated 


To determine the number of messages sent by a propagation, as well as the number of acknowledgements being returned from the target site, query the V$PROPAGATION_SENDER view at the source site and the V$PROPAGATION_RECEIVER view at the destination site.


The following queries display information for each propagation:

++ EVENTS AND BYTES PROPAGATED FOR EACH PROPAGATION++

COLUMN Elapsed_propagation_TIME HEADING 'Elapsed |Propagation Time|(Seconds)' FORMAT 9999999999999999
COLUMN TOTAL_NUMBER HEADING 'Total |Events|Propagated' FORMAT 9999999999999999
COLUMN TOTAL_BYTES HEADING 'Total Bytes|Propagated' FORMAT 9999999999999999
COLUMN SCHEDULE_STATUS HEADING 'Schedule|Status'
column elapsed_dequeue_time HEADING 'Elapsed|Dequeue Time|(Seconds)'
column elapsed_pickle_time HEADING 'Elapsed|Pickle Time|(Seconds)'
column total_time HEADING 'Total Time|(Seconds)'
column high_water_mark HEADING 'High|Water|Mark'
column acknowledgement HEADING 'Target |Ack'


SELECT p.propagation_name,q.message_delivery_mode, DECODE(p.STATUS,
'DISABLED', 'Disabled',
'ENABLED', 'Enabled') SCHEDULE_STATUS,
q.instance,
q.total_number TOTAL_NUMBER, q.TOTAL_BYTES ,
q.elapsed_dequeue_time/100 elapsed_dequeue_time, q.elapsed_pickle_time/100 elapsed_pickle_time,
q.total_time/100 total_time
FROM DBA_PROPAGATION p, dba_queue_schedules q
WHERE p.DESTINATION_DBLINK =
NVL(REGEXP_SUBSTR(q.destination, '[^@]+', 1, 2), q.destination)
AND q.SCHEMA = p.SOURCE_QUEUE_OWNER
AND q.QNAME = p.SOURCE_QUEUE_NAME
order by q.message_delivery_mode, p.propagation_name;


++ PROPAGATION SENDER STATISTICS ++

col queue_id HEADING 'Queue ID'
col queue_schema HEADING 'Source|Queue|Owner'
col queue_name HEADING 'Source|Queue|Name'
col dst_queue_schema HEADING 'Destination|Queue|Owner'
col dst_queue_name HEADING 'Destination|Queue|Name'
col dblink Heading 'Destination|Database|Link'
col total_msgs HEADING 'Total|Messages'
col max_num_per_win HEADING 'Max Msgs|per|Window'
col max_size HEADING 'Max|Size'
col src_queue_schema HEADING 'Source|Queue|Owner'
col src_queue_name HEADING 'Source|Queue|Name'
column elapsed_dequeue_time HEADING 'Elapsed|Dequeue Time|(CentiSecs)'
column elapsed_pickle_time HEADING 'Elapsed|Pickle Time|(CentiSecs)'
column total_time HEADING 'Total Time|(CentiSecs)'


SELECT * from v$propagation_sender;


++ PROPAGATION RECEIVER STATISTICS++


column src_queue_name HEADING 'Source|Queue|Name'
column src_dbname HEADING 'Source|Database|Name'
column startup_time HEADING 'Startup|Time'
column elapsed_unpickle_time HEADING 'Elapsed|Unpickle Time|(CentiSeconds)'
column elapsed_rule_time HEADING 'Elapsed|Rule Time|(CentiSeconds)'
column elapsed_enqueue_time HEADING 'Elapsed|Enqueue Time|(CentiSeconds)'

SELECT * from gv$propagation_receiver;


To display information about subscribers for all buffered queues in the instance, query V$BUFFERED_SUBSCRIBERS.
In Oracle 10g Release 2 and later, there are two rows for each propagation: one for queue-to-database link propagation
and one for queue-to-queue propagation.

++ BUFFERED SUBSCRIBERS ++

select * from gv$buffered_subscribers;


NOTE: An optimization first available in Oracle Database 11g Release 1 is a capture process that automatically sends LCRs directly to an apply process. This occurs when there is a single publisher and consumer defined for the queue that contains the captured changes. This optimized configuration is called Combined Capture and Apply (CCA). When CCA is in use, LCRs are transmitted directly from the capture process to the apply process via a database link. In this mode, the capture does not stage the LCRs in a queue or use queue propagation to deliver them.

Propagation Statistics are zero when CCA optimization is in effect.

Buffered Subscribers view statistics are zero when CCA optimization is in effect.


Propagation does not start after error or database restart

At times, the propagation job may become "broken" or fail to start after an error has been encountered or after a database
restart.

The typical solution is to disable the propagation and then re-enable it.
For example, for the propagation named STRMADMIN_PROPAGATE the commands would be:

10.2
exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE');
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

If the above does not fix the problem, stop the propagation specifying the force parameter
(2nd parameter on stop_propagation) as TRUE.
For example, for the propagation named STRMADMIN_PROPAGATE , the commands would be:

exec dbms_propagation_adm.stop_propagation('STRMADMIN_PROPAGATE',true);
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');

The statistics for the propagation are cleared when the force parameter is set to TRUE.


Common Propagation Errors
The most common propagation errors result from an incorrect network configuration.

The list below shows errors caused by an incorrectly configured tnsnames.ora file or database links.

- ORA-12154: TNS:could not resolve service name.

- ORA-12505: TNS:listener does not currently know of service requested in connect descriptor.

- ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.

- ORA-12541: TNS:no listener


Other common errors:


- ORA-02082: A loopback database link must have a connection qualifier.

Can occur if you use the Streams Setup Wizard in Oracle Enterprise Manager without first configuring the GLOBAL_NAME for your database.

- ORA-25307: Enqueue rate too high. Enable flow control.

This is an informative message that indicates flow control has been automatically enabled to reduce the rate at which events are being enqueued into the staging area.

DBA_QUEUE_SCHEDULES will display this informational message when the automatic flow control (10g feature of Streams) has been invoked.

This typically occurs when the target site is unable to keep up with the rate of messages flowing from the source site. Other than checking that the apply process is running normally on the target site, no action is required by the DBA.  Propagation and the capture process will be resumed automatically when the target site is able to accept more messages.

In some situations, propagation may become disabled (after 16 consecutive failures). In these situations, the propagation can be re-enabled manually.
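A minimal sketch of checking for and re-enabling a disabled propagation follows; the propagation name STRMADMIN_PROPAGATE is illustrative.

```sql
-- Check whether the propagation is currently disabled and why
SELECT propagation_name, status, error_message
  FROM dba_propagation;

-- Re-enable the propagation (name is an example)
exec dbms_propagation_adm.start_propagation('STRMADMIN_PROPAGATE');
```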


- ORA-25315 unsupported configuration for propagation of buffered messages.

This error typically indicates that an attempt was made to propagate buffered messages with the database link pointing to an instance of the destination database that is not the owner instance of the destination queue. To resolve, use queue-to-queue propagation for buffered messages.
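A minimal sketch of creating a queue-to-queue propagation for this case; all queue names, the propagation name, and the database link are illustrative.

```sql
-- Queue-to-queue propagation delivers buffered messages to the owner
-- instance of the destination queue in a RAC environment.
BEGIN
  dbms_propagation_adm.create_propagation(
    propagation_name   => 'STRM_PROP_Q2Q',     -- example name
    source_queue       => 'STRMADMIN.SRC_QUEUE',
    destination_queue  => 'STRMADMIN.DST_QUEUE',
    destination_dblink => 'DEST_DB',
    queue_to_queue     => TRUE);
END;
/
```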


- ORA-600 [KWQBMCRCPTS101] after dropping / recreating propagation

For cause/fixes refer to:

Note 421237.1 ORA-600 [KWQBMCRCPTS101] reported by a Qmon slave process after dropping a Streams Propagation


PROPAGATION Related NOTEs

Note 437838.1 Recommended Patches for Streams
Note 749181.1 How to Recover Streams After Dropping Propagation
Note 368912.1 Queue to Queue Propagation Schedule encountered ORA-12514 in a RAC environment
 
Troubleshooting the Apply Process


Is the Apply process ENABLED?

SELECT APPLY_NAME, STATUS FROM DBA_APPLY;

If the apply process is DISABLED, try starting it using:


EXEC DBMS_APPLY_ADM.START_APPLY('<apply_name>');


If the apply process is ABORTED, you can use the following query to identify the error condition that caused the apply process to abort:

COLUMN APPLY_NAME HEADING 'Apply|Process|Name' FORMAT A15
COLUMN STATUS_CHANGE_TIME HEADING 'Abort Time'
COLUMN ERROR_NUMBER HEADING 'Error|Number ' FORMAT 99999999
COLUMN ERROR_MESSAGE HEADING 'Error Message' FORMAT A33
SELECT APPLY_NAME, TO_CHAR(STATUS_CHANGE_TIME, 'DD-MON-RRRR hh24:mi:ss') 
       STATUS_CHANGE_TIME, ERROR_NUMBER, ERROR_MESSAGE 
FROM DBA_APPLY WHERE STATUS='ABORTED';

If the apply process aborted due to an error, check the apply error queue using the following query:
 
   col source_commit_scn HEADING 'Source|Commit|Scn'
   col message_number HEADING 'Message in| Txn causing|Error'
   col message_count HEADING 'Total|Messages|in Txn'
   col local_transaction_id HEADING 'Local|Transaction| ID'
   col error_message HEADING 'Apply|Error|Message'
   col ERROR_CREATION_TIME HEADING 'Error|Creation|Timestamp'
   col source_transaction_id HEADING 'Source|Transaction| ID'
   select apply_name, source_database,source_commit_scn, message_number, message_count,
          local_transaction_id, error_message , error_creation_time, source_transaction_id 
   from DBA_APPLY_ERROR order by apply_name ,source_commit_scn ;

Apply will not restart if there are errors in the apply error queue while the apply parameter DISABLE_ON_ERROR is set to Y (the default) for the specific apply process. Use the query below to check this parameter:

   col APPLY_NAME format a30
   col parameter format a20
   col value format a20
   break on apply_name

   select * from dba_apply_parameters order by apply_name,parameter;
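As a hedged sketch, DISABLE_ON_ERROR can be changed so that the apply process keeps running past errors (failed transactions still go to the error queue); the apply name is illustrative.

```sql
BEGIN
  dbms_apply_adm.set_parameter(
    apply_name => 'STRM_APPLY',        -- example name
    parameter  => 'disable_on_error',
    value      => 'N');
END;
/
```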

Some common apply errors include:

APPLY MOST COMMON ERRORS TROUBLESHOOTING


ORA-26666 cannot alter STREAMS process

On shutdown of the apply process, a timeout error occurred. When attempting to restart the apply process, the apply process aborts with the following message:
ORA-26666: cannot alter STREAMS process STRMADMIN_SITE1_US_ORACLE
ORA-06512: at SYS.DBMS_LOGREP_UTIL
ORA-06512: at SYS.DBMS_APPLY_ADM
ORA-06512: at line 2
In this case, you will need to do a "forced" shutdown of the apply process, then restart it. For example:
exec dbms_apply_adm.stop_apply('STRMADMIN_SITE1_US_ORACLE',force=>true);
exec dbms_apply_adm.start_apply('STRMADMIN_SITE1_US_ORACLE');

ORA-01403 No Data Found

This classic error is most often caused by a data mismatch between the source and destination tables. The ORA-1403 error occurs when an apply process tries to update an existing row and the OLD_VALUES in the row LCR do not match the current values at the destination database. Typical conditions that produce this error:

-	Missing supplemental logging for key columns at the source database;

-	Missing primary key or unique key at the source database;

-	Missing substitute key columns at the destination database when there is no primary key or unique key at the source database;

- 	Applying a transaction out of order, or the transaction being applied depends on another transaction that has not yet been executed;

-	Data mismatch between the row LCR and the table to which the LCR is applying the change.

There are two ways to fix the ORA-1403:

-	You can handle the ORA-1403 automatically via a DML handler, or

-	You can update the current values in the row so that the row LCR can be applied successfully. If changes to the row at the apply site are also captured by a capture process at the destination database, then you will need to use DBMS_STREAMS.SET_TAG to avoid re-capturing these changes, which would lead to new ORA-1403 errors at other apply sites. For example:

a.  Set a tag in the session that corrects the row. Make sure you set the tag to a value that prevents the manual change from being replicated. For example, the tag can prevent the change from being captured by a capture process.

EXEC DBMS_STREAMS.SET_TAG(tag => HEXTORAW('17'));
 
In some environments, you might need to set the tag to a different value.

b.	Update the row in the table so that the data matches the old values in the LCR.

c.	Re-execute the error or re-execute all errors. To re-execute an error, run the EXECUTE_ERROR procedure in the DBMS_APPLY_ADM package, and specify the transaction identifier for the transaction that caused the error. For example:

	EXEC DBMS_APPLY_ADM.EXECUTE_ERROR(local_transaction_id => '5.4.312');

	Or, execute all errors for the apply process by running the EXECUTE_ALL_ERRORS procedure:

	EXEC DBMS_APPLY_ADM.EXECUTE_ALL_ERRORS(apply_name => 'APPLY');

d.	If you are going to make other changes in the current session that you want to replicate to destination databases, then reset the tag for the session to an appropriate value, as in the following example:

	EXEC DBMS_STREAMS.SET_TAG(tag => NULL);

	In some environments, you might need to set the tag to a value other than NULL.


	Refer to the following note for additional instructions:

 	Note:265201.1 Troubleshooting Streams Apply Error ORA-1403, ORA-26787 or ORA-26786


ORA-26687 Instantiation SCN Not Set


This error occurs because the instantiation SCN is not set for the object. Possible causes:

- Importing an object without preparing the object for instantiation prior to export;
- Using original export/import for instantiation and performing the import without specifying y for the STREAMS_INSTANTIATION import parameter.

- You have not called any of the following DBMS_APPLY_ADM procedures: 

   SET_TABLE_INSTANTIATION_SCN
   SET_SCHEMA_INSTANTIATION_SCN
   SET_GLOBAL_INSTANTIATION_SCN

- Applying DDL changes without having set the instantiation SCN at the SCHEMA or GLOBAL level.
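A minimal sketch of setting the instantiation SCN for a single table at the destination, using the current SCN read from the source over a database link; the object name and the SRC_DB link are illustrative.

```sql
DECLARE
  iscn NUMBER;
BEGIN
  -- Read the current SCN at the source database over a dblink
  iscn := dbms_flashback.get_system_change_number@SRC_DB;
  dbms_apply_adm.set_table_instantiation_scn(
    source_object_name   => 'SCOTT.EMP',   -- example object
    source_database_name => 'SRC_DB',
    instantiation_scn    => iscn);
END;
/
```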


Check the objects currently prepared for instantiation through the view DBA_APPLY_INSTANTIATED_OBJECTS or the section "++  TABLES PREPARED FOR CAPTURE ++" of your Streams Health Check report.


Refer to the following notes for additional instructions:

Note:272731.1 Objects Not Instantiated When Using DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN ORA-26687

Note:223000.1 Streams Apply Process Fails with ORA-26687 or ''Missing Streams multi-version data dictionary''

   Note:783815.1 DBA_APPLY_INSTANTIATED_OBJECTS and ORA-26687


ORA-26688 Missing Key in LCR

-	Missing or disabled primary key at the source database. When not using primary keys, make sure you have set an alternative key at the destination database using the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.

-	Missing supplemental logging at the source database when using apply PARALLELISM greater than 1. Supplemental logging is required for any indexed column at the destination database, which includes unique and non-unique index columns, foreign key columns, and bitmap index columns. 
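The two causes above can be addressed with sketches like the following; the table, column, and log group names are illustrative.

```sql
-- At the destination: define substitute key columns when the source
-- table has no primary or unique key
BEGIN
  dbms_apply_adm.set_key_columns(
    object_name => 'SCOTT.EMP',
    column_list => 'EMPNO,DEPTNO');
END;
/

-- At the source: add supplemental logging for the key columns
ALTER TABLE scott.emp ADD SUPPLEMENTAL LOG GROUP emp_key (empno, deptno) ALWAYS;
```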


Refer to the following note for additional instructions:

Note:290280.1 Ora-26688 on Apply Process of Streams after Online Table Redefinition


ORA-1031 Insufficient Privileges

The user designated as the apply user does not have the necessary privileges to perform SQL operations on the replicated objects. The apply user privileges must be granted by an explicit grant of each privilege. Granting these privileges through a role is not sufficient for the Streams apply user. 

Additionally, if the apply user does not have explicit EXECUTE privilege on an apply handler procedure or custom rule-based transformation function, then an ORA-06550 error might result when the apply user tries to run the procedure or function. Typically, this error causes the apply process to abort without adding errors to the DBA_APPLY_ERROR view. However, the trace file for the apply coordinator reports the error. Specifically, errors similar to the following appear in the trace file:

ORA-12801 in STREAMS process
ORA-12801: error signaled in parallel query server P000
ORA-06550: line 1, column 15:
PLS-00201: identifier 'STRMADMIN.TO_AWARDFCT_RULEDML' must be declared
ORA-06550: line 1, column 7:
PL/SQL:  Statement ignored

In this example, the apply user dssdbo does not have EXECUTE privilege on the to_awardfct_ruledml function in the strmadmin schema. To correct the problem, grant the required EXECUTE privilege to the apply user.
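Using the names from the example above, the fix is a direct grant (not through a role); the object and user names will differ in your environment.

```sql
-- Explicit EXECUTE grant to the apply user
GRANT EXECUTE ON strmadmin.to_awardfct_ruledml TO dssdbo;
```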

Refer to the following note for additional instructions:

Note:215372.1 Streams Apply Process Fails With ORA-1031: Insufficient Privileges


MISSING Streams multi-version data dictionary

Although this message appears at the Streams apply (destination) site, it is caused by the Streams data dictionary information for the specified object not being available at the source database at the time the Streams capture process was created. 

Please refer to the Streams Capture Troubleshooting section of this article and the Metalink note below for additional instructions:

 Note 212044.1  Resolving the MISSING Streams Multi-version Data Dictionary Error


Other common errors:

ORA-24031 Invalid Value, parameter_name Should Be Non-NULL
ORA-26689 Column Type Mismatch

For additional details on the above errors, please refer to the section "Troubleshooting Specific Apply Errors" in the online documentation:

	Manual: Oracle Streams Replication Administrator's Guide
 	Chapter: 13 Troubleshooting Streams Replication
 	Section: Troubleshooting Specific Apply Errors


Apply Enabled but not dequeuing/moving


1.	First, check whether messages are reaching the apply destination queue by running the following query against the buffered queue multiple times:

col QUEUE format a50 wrap
col "Message Count" format 9999999999999999 heading 'Current Num. of|Outstanding|Mesg|in Queue'
col "Spilled Msgs" format 9999999999999999 heading 'Current Number of|Spilled|Messages|in Queue'
col "Total Messages" format 9999999999999999 heading 'Cumulative |Number| of Messages|in Queue'
col "Total Spilled Msgs" format 9999999999999999 heading 'Cum. Num|of Spilled|Messages|in Queue'
col "Expired_Msgs" heading 'Current Number of|Expired|Messages|in Queue'

SELECT queue_schema||'.'||queue_name Queue, startup_time, num_msgs "Message Count", spill_msgs "Spilled Msgs", cnum_msgs "Total Messages", cspill_msgs "Total Spilled Msgs", 
expired_msgs  FROM  gv$buffered_queues;

	Look for changing values in "Cumulative Number of Messages in Queue" and "Current Number of Outstanding Messages in Queue". If messages are not reaching the apply buffered queue, this might indicate a problem with the capture and/or propagation processes; refer to the related sections of this article for further troubleshooting information.


2.	If messages are reaching the apply queue, check whether messages are being dequeued by the apply reader process using the query below:

col oldest_scn_num HEADING 'Oldest|SCN'
col apply_name HEADING 'Apply Name'
col apply_captured HEADING 'Captured or|User-Enqueued LCRs'
col process_name HEADING 'Process'
col state HEADING 'STATE'
col total_messages_dequeued HEADING 'Total Messages|Dequeued'
col total_messages_spilled Heading 'Total Messages|Spilled'
col sga_used HEADING 'SGA Used'
col oldest_transaction_id HEADING 'Oldest|Transaction'

SELECT ap.APPLY_NAME,
       DECODE(ap.APPLY_CAPTURED,
                'YES','Captured LCRS',
                'NO','User-Enqueued','UNKNOWN') APPLY_CAPTURED,
       SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_MESSAGES_DEQUEUED,
       r.TOTAL_MESSAGES_SPILLED,
       r.SGA_USED, 
       oldest_scn_num,
       oldest_xidusn||'.'||oldest_xidslt||'.'||oldest_xidsqn 
             oldest_transaction_id
       FROM gV$STREAMS_APPLY_READER r, gV$SESSION s, DBA_APPLY ap
       WHERE r.SID = s.SID AND
             r.SERIAL# = s.SERIAL# AND
             r.APPLY_NAME = ap.APPLY_NAME;

Ideally, "Total Messages Dequeued" should be increasing; if spilling occurs, "Total Messages Spilled" will increase instead.

3. You might also encounter a scenario where messages are reaching the apply queue but are not being dequeued because the apply process has fallen behind. You can check apply process latency by querying the V$STREAMS_APPLY_COORDINATOR dynamic performance view. If apply process latency is high, then you might be able to improve performance by adjusting the setting of the parallelism apply process parameter. 

	Run the following queries to display the capture-to-apply latency for each apply process, using the V$STREAMS_APPLY_COORDINATOR and DBA_APPLY_PROGRESS views:

COLUMN APPLY_NAME HEADING 'Apply Process|Name' FORMAT A17
COLUMN 'Latency in Seconds' FORMAT 999999
COLUMN 'Message Creation' FORMAT A17
COLUMN 'Apply Time' FORMAT A17
COLUMN HWM_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 999999
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied|Message|Number' FORMAT 999999

SELECT APPLY_NAME,
     (HWM_TIME-HWM_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(HWM_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') 
        "Message Creation",
     TO_CHAR(HWM_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     HWM_MESSAGE_NUMBER  
  FROM V$STREAMS_APPLY_COORDINATOR;


SELECT APPLY_NAME,
     (APPLY_TIME-APPLIED_MESSAGE_CREATE_TIME)*86400 "Latency in Seconds",
     TO_CHAR(APPLIED_MESSAGE_CREATE_TIME,'HH24:MI:SS MM/DD/YY') 
        "Message Creation",
     TO_CHAR(APPLY_TIME,'HH24:MI:SS MM/DD/YY') "Apply Time",
     APPLIED_MESSAGE_NUMBER  
  FROM DBA_APPLY_PROGRESS;


4. Also check Apply Servers:

COLUMN APPLY_NAME HEADING 'Apply Process Name' FORMAT A22
COLUMN PROCESS_NAME HEADING 'Process Name' FORMAT A12
COLUMN STATE HEADING 'State' FORMAT A17
COLUMN TOTAL_ASSIGNED HEADING 'Total|Transactions|Assigned' FORMAT 99999999
COLUMN TOTAL_MESSAGES_APPLIED HEADING 'Total|Messages|Applied' FORMAT 99999999

SELECT r.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS_NAME,
       r.STATE,
       r.TOTAL_ASSIGNED, 
       r.TOTAL_MESSAGES_APPLIED
  FROM V$STREAMS_APPLY_SERVER R, V$SESSION S 
  WHERE r.SID = s.SID AND 
        r.SERIAL# = s.SERIAL# 
  ORDER BY r.APPLY_NAME, r.SERVER_ID;

You might be able to see one of these Apply server states when it is processing a transaction:

  RECORD LOW-WATERMARK
  ADD PARTITION
  DROP PARTITION
  EXECUTE TRANSACTION
  WAIT COMMIT
  WAIT DEPENDENCY
  WAIT FOR NEXT CHUNK
  TRANSACTION CLEANUP

or IDLE if there is no transaction currently being applied.

5. Check whether the AQ_TM_PROCESSES initialization parameter is explicitly set to 0 (zero) or is not set at all.

conn / as sysdba
show parameter aq_tm_processes

Refer to the following note for additional instructions:

Note 428441.1 "Warning Aq_tm_processes Is Set To 0" Message in Alert Log After Upgrade to 10.2.0.3 or 10.2.0.4 

6. Check the apply PARALLELISM parameter by running the query below:

col APPLY_NAME format a30
col parameter format a20
col value format a20
break on apply_name

select * from dba_apply_parameters order by apply_name,parameter;

You may need to increase the parallelism parameter, for example:

  BEGIN
    DBMS_APPLY_ADM.SET_PARAMETER(
      apply_name   => 'strm01_apply',
      parameter    => 'parallelism',
      value        => '2');
  END;
  /


7.	Check for Apply Process Encountering Contention

	An apply server encounters contention when the apply server must wait for a resource that is being used by another session. Contention can result from logical dependencies. For example, when an apply server tries to apply a change to a row that a user has locked, then the apply server must wait for the user. Contention can also result from physical dependencies. For example, interested transaction list (ITL) contention results when two transactions that are being applied, which might not be logically dependent, are trying to lock the same block on disk. In this case, one apply server locks rows in the block, and the other apply server must wait for access to the block, even though the second apply server is trying to lock different rows. See "Is the Apply Process Waiting for a Dependent Transaction?" for detailed information about ITL contention.
   
	The following four wait states are possible for an apply server:

- Not waiting: The apply server is not encountering contention and is not waiting. No action is necessary in this case.

- Waiting for an event that is not related to another session: 

 	An example of an event that is not related to another session is a log file sync event, where redo data must be flushed because of a commit or rollback.   In these cases, nothing is written to the log initially because such waits are common and are usually transient. If the apply server is waiting for the  same event after a certain interval of time, then the apply server writes a message to the alert log and apply process trace file. For example, an apply server a001 might write a message similar to the following:

      A001: warning -- apply server 1, sid 26 waiting for event:
   A001: [log file sync] ...
   This output is written to the alert log at intervals until the problem is rectified.

- Waiting for an event that is related to a non apply server session: 

	The apply server writes a message to the alert log and apply process trace file immediately. For example, an apply server a001 might write a message similar to the following:

      A001: warning -- apply server 1, sid 10 waiting on user sid 36 for event:
   A001: [enq: TM - contention] name|mode=544d0003, object #=a078, table/partition=0

	This output is written to the alert log at intervals until the problem is rectified.

	- Waiting for another apply server session: 

	This state can be caused by interested transaction list (ITL) contention, but it can also be caused by more serious issues, such as an apply handler that obtains conflicting locks. In this case, the apply server that is blocked by another apply server prints only once to the alert log and the trace file for the apply process, and the blocked apply server issues a rollback to the blocking apply server. When the blocking apply server rolls back, another message indicating that the apply server has been rolled back  is printed to the log files, and the rolled back transaction is reassigned by the coordinator process for the apply process.

	For example, if apply server 1 of apply process a001 is blocked by apply server 2 of the same apply process (a001), then the apply process writes the following messages to the log files:

    A001: apply server 1 blocked on server 2
    A001: [enq: TX - row lock contention] name|mode=54580006, usn<<16 | slot=1000e, sequence=1853
    A001: apply server 2 rolled back

	Check TOTAL_ROLLBACKS using the query below:

col apply_name HEADING 'Apply Name' format a22 wrap
col process HEADING 'Process' format a7
col RECEIVED HEADING 'Total|Txns|Received' 
col ASSIGNED HEADING 'Total|Txns|Assigned' 
col APPLIED HEADING 'Total|Txns|Applied' 
col ERRORS HEADING 'Total|Txns|w/ Error' 
col total_ignored HEADING 'Total|Txns|Ignored' 
col total_rollbacks HEADING 'Total|Txns|Rollback' 
col WAIT_DEPS HEADING 'Total|Txns|Wait_Deps' 
col WAIT_COMMITS HEADING 'Total|Txns|Wait_Commits' 
col STATE HEADING 'State' format a10 word

SELECT ap.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS,
       c.STATE,
       c.TOTAL_RECEIVED RECEIVED,
       c.TOTAL_ASSIGNED ASSIGNED,
       c.TOTAL_APPLIED APPLIED,
       c.TOTAL_ERRORS ERRORS,
       c.total_ignored,
       c.total_rollbacks,
       c.TOTAL_WAIT_DEPS WAIT_DEPS, c.TOTAL_WAIT_COMMITS WAIT_COMMITS
       FROM gV$STREAMS_APPLY_COORDINATOR  c, gV$SESSION s, DBA_APPLY ap
       WHERE c.SID = s.SID AND
             c.SERIAL# = s.SERIAL# AND
             c.APPLY_NAME = ap.APPLY_NAME;


8.	Check for the apply process waiting for dependent transactions (applies only when the apply PARALLELISM parameter is greater than 1). Use the same query as above:


SELECT ap.APPLY_NAME,
       SUBSTR(s.PROGRAM,INSTR(S.PROGRAM,'(')+1,4) PROCESS,
       c.STATE,
       c.TOTAL_RECEIVED RECEIVED,
       c.TOTAL_ASSIGNED ASSIGNED,
       c.TOTAL_APPLIED APPLIED,
       c.TOTAL_ERRORS ERRORS,
       c.total_ignored,
       c.total_rollbacks,
       c.TOTAL_WAIT_DEPS WAIT_DEPS, c.TOTAL_WAIT_COMMITS WAIT_COMMITS
       FROM gV$STREAMS_APPLY_COORDINATOR  c, gV$SESSION s, DBA_APPLY ap
       WHERE c.SID = s.SID AND
             c.SERIAL# = s.SERIAL# AND
             c.APPLY_NAME = ap.APPLY_NAME;



To avoid the problem in the future, perform one of the following actions:

- 	Increase the number of ITLs available. You can do so by changing the INITRANS setting for the table using the ALTER TABLE statement.

-	Set the commit_serialization parameter to none for the apply process.

-	Set the parallelism apply process parameter to 1 for the apply process.
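Illustrative sketches of the three options above; the table name, apply name, and INITRANS value are examples.

```sql
-- 1) Make more ITL slots available in each block of the table
ALTER TABLE scott.emp INITRANS 10;

-- 2) Relax commit ordering for the apply process
exec dbms_apply_adm.set_parameter('STRM_APPLY','commit_serialization','none');

-- 3) Or fall back to serial apply
exec dbms_apply_adm.set_parameter('STRM_APPLY','parallelism','1');
```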


9.	Check for poor Apply performance for certain transactions:

	The following query displays information about the transactions being applied by each apply server:

SET LINE 120
COLUMN APPLY_NAME HEADING 'Apply Name' FORMAT A20
COLUMN SERVER_ID HEADING 'Apply Server ID' FORMAT 99999999
COLUMN STATE HEADING 'Apply Server State' FORMAT A20
COLUMN APPLIED_MESSAGE_NUMBER HEADING 'Applied Message|Number' FORMAT 99999999
COLUMN MESSAGE_SEQUENCE HEADING 'Message Sequence|Number' FORMAT 99999999

SELECT APPLY_NAME, SERVER_ID, STATE, APPLIED_MESSAGE_NUMBER, MESSAGE_SEQUENCE 
  FROM V$STREAMS_APPLY_SERVER
  ORDER BY APPLY_NAME,SERVER_ID;

	If you run this query repeatedly, then over time the apply server state, applied message number, and message sequence number should continue to change for each apply server as it applies transactions. If these values do not change for one or more apply servers, then the apply server might not be performing well. In this case, you should make sure that, for each table to which the apply process applies changes, every key column has an index.


	Use the following queries to determine the object involved in the poorly performing apply transaction:

COLUMN OPERATION HEADING 'Operation' FORMAT A20
COLUMN OPTIONS HEADING 'Options' FORMAT A20
COLUMN OBJECT_OWNER HEADING 'Object|Owner' FORMAT A10
COLUMN OBJECT_NAME HEADING 'Object|Name' FORMAT A10
COLUMN COST HEADING 'Cost' FORMAT 99999999

SELECT p.OPERATION, p.OPTIONS, p.OBJECT_OWNER, p.OBJECT_NAME, p.COST
  FROM V$SQL_PLAN p, V$SESSION s, V$STREAMS_APPLY_SERVER a
  WHERE a.APPLY_NAME = '<apply_name>' AND a.SERVER_ID = <server_id>
    AND s.SID = a.SID
    AND p.HASH_VALUE = s.SQL_HASH_VALUE;

SELECT t.SQL_TEXT
  FROM V$SESSION s, V$SQLTEXT t, V$STREAMS_APPLY_SERVER a
  WHERE a.APPLY_NAME = '<apply_name>' AND a.SERVER_ID = <server_id>
    AND s.SID = a.SID
    AND s.SQL_ADDRESS = t.ADDRESS
    AND s.SQL_HASH_VALUE = t.HASH_VALUE
    ORDER BY PIECE;

	p.s.	Fill in "a.APPLY_NAME" and "a.SERVER_ID" in the WHERE clause with information from the previous query.

	This query returns the operation currently being performed by the specified apply server. The query also returns the owner and name of the table on which the operation is being performed and the cost of the operation. If the results show FULL in the OPTIONS column, then the operation is causing full table scans.

	You might improve apply performance in such a scenario by:

	- Creating an index on each key column in the table.
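For example, a hedged sketch; the table, column, and index names are illustrative.

```sql
-- Index the key column used to identify rows at the destination so the
-- apply server can avoid full table scans
CREATE INDEX scott.emp_apply_key_idx ON scott.emp (empno);
```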


10. Refer to Note 335516.1 "Streams Performance Recommendations", section "Apply Recommendations", for further suggestions on speeding up the Streams apply process.


APPLY DML AND ERROR HANDLERS TROUBLESHOOTING & SAMPLE


1.	Check for Apply DML / Error Handlers defined at Destination database:

col object format a35 wrap
col user_procedure HEADING 'User |Procedure' format a40 wrap
col dblink Heading 'Apply|DBLink' format a15 wrap
col apply_database_link HEAD 'Database Link|for Remote|Apply' format a25 wrap
col operation_name HEADING 'Operation|Name' format a13
col typ  Heading 'Handler|Type' format a5 wrap
col lob_assemble HEADING 'Assemble|Lob?' format a8
col apply_name HEADING 'Apply Process|Name' FORMAT A15


 select object_owner||'.'||object_name OBJECT, operation_name , 
 user_procedure, apply_name, decode(error_handler,'Y','Error','N','DML','UNKNOWN') TYP, 
decode(assemble_lobs,'Y','Yes','N','No','UNKNOWN') lob_assemble,
 APPLY_Database_link
 from dba_apply_dml_handlers 
order by object_owner,object_name,apply_name;

 	p.s.	(i) If "Apply Process Name" is NULL in the result of the above query, it means the handler is a general handler that runs for all of the local apply processes.

      	(ii) "Handler Type" indicates whether the apply handler runs for every "DML" operation applied (DML handler) or only when an "ERROR" occurs (error handler).

 	DML and error handlers are customized according to application needs and the data model, so if an apply process is not behaving as expected, check the handler PL/SQL procedures used by the apply process and correct any flaws. You might need to modify a handler procedure or remove it to correct an apply problem.


2.	Common errors when DML / Error handlers are implemented:

a.	ORA-1435 user not found

If you use schema name transformation in any way, you might get this error if the source database schema does not exist at the destination database. Say you have tables in schema 'REP1' to be replicated to another database where the schema name is 'REP2': you will get the ORA-1435 error if the schema 'REP1' does not exist at the destination database. The schema name and its object definitions need to exist at the destination site, but NO rows or data are required in this schema. The workaround for this problem is to create the structure definition of the original schema and objects. 

This can generally be done by a schema-level export from the source site and a schema-level import with ROWS=NO into the target site.

b.	ORA-6550 line <n>, column <n>:
 
	One of the most common reasons for receiving this error in a DML handler or transformation is privileges. Typically, this error causes the apply process to ABORT with no errors in the DBA_APPLY_ERROR view. However, the trace file for the apply coordinator will report the error. If the specified apply user does not have explicit privilege to execute the DML handler procedure or the transformation function, you will receive errors similar to the following in the apply trace files:
 
 ORA-12801 in STREAMS process
 ORA-12801: error signaled in parallel query server P000
 ORA-06550: line 1, column 15:
 PLS-00201: identifier 'STRMADMIN.USER_FUNCTION_NAME' must be declared
 ORA-06550: line 1, column 7:
 PL/SQL:  Statement ignored

In this example, the apply user does not have execute privilege on the "USER_FUNCTION_NAME" function in the STRMADMIN schema.

p.s.	Check the "APPLY_USER" column of the DBA_APPLY view to see which schema is being used to apply the changes.

c.	ORA-23605 Invalid Value for Streams Parameter

 When calling SYS.LCR$ member functions, this error may be raised if the values of the parameters do not match the LCR; for example, adding an old column value to an insert LCR, or setting the value of a LOB column to a number. This error can occur if an incorrect value is passed for a Streams parameter, if an INSERT LCR contains 'OLD' values, or if a DELETE LCR contains 'NEW' values. Verify that the correct parameter type ('OLD','NEW') is specified for the LCR type (INSERT/DELETE/UPDATE).

d.	ORA-23607 Invalid Column

This error is raised by SYS.LCR$* member functions when the value of the column_name parameter does not match the name of any of the columns in the LCR. Check the column names in the LCR. This error is encountered if:

 -	You attempt to delete a column from an LCR and the LCR does not have the column (typically occurs on UPDATEs);
 -	You attempt to rename a column that does not exist in the LCR;
 -	In an UPDATE statement, you use the GET_VALUE or GET_VALUES member functions for NEW values in a dml_handler or transformation and explicitly set the USE_OLD parameter to 'N' (default is 'Y').
 

e.	ORA-24031 Invalid parameter, should be non-NULL

This error can occur when a 'NULL' value is passed to an LCR method instead of an ANYDATA.  

Wrong:
 
new_lcr.add_column('OLD','LANGUAGE',NULL);

Correct:
 
new_lcr.add_column('OLD','LANGUAGE',sys.AnyData.ConvertVarchar2(NULL));


f.	Ora-26689 Column Type Mismatch

	Generally this is some sort of metadata mismatch. Some possibilities:

-	"Column name is valid but the types do not match"

   	"Source type" is not equal to "target type". Confirm that the conversion utility data type matches the column data type in the handler / transformation. For example, if the column is specified as VARCHAR2, then use sys.anydata.convertvarchar2 to convert the data from type ANY to VARCHAR2. Confirm that the data type of the column matches between the LCR and the target table. 


  -	"LCR contains extra columns"

Confirm that all of the columns in the LCR are defined at the destination site. If the destination table does not have all of the columns specified in the LCR, eliminate from the LCR any columns that should not be applied at the destination table. Check that column name casing matches the database; column names are generally uppercase.



3.	For DML Handler samples, please refer to the following Metalink Notes:

Note.265867.1 Example Streams Apply DML Handler Showing the Adding of Columns to the Apply LCR
Note.302018.1 Example Streams Apply DML Handler Showing LCR Column Rename
Note.265481.1 Example Streams Apply DML Handler Showing Rows and Columns Filter from the Apply Process
Note.234094.1 Usage and Restrictions of Streams Apply Handlers

4.	For Error Handler samples, please refer to the following Metalink Notes:

 	Note.401275.1 Handling Apply Insert And Delete Conflicts In A Streams Environment - Error Handlers
 	Note.387829.1  Auto Correction Example for Streams using Error Handlers



APPLY MEMORY CONTENTION TROUBLESHOOTING





APPLY PERFORMANCE TROUBLESHOOTING





APPLY IN HETEROGENEOUS ENVIRONMENTS


In a heterogeneous environment where the destination database is a non-Oracle database, the Streams apply process exists at the source Oracle database and applies the changes through an Oracle Transparent Gateway to the destination non-Oracle database.

1.	Configuration checking:

a.	If you use substitute key columns for any of the tables at the non-Oracle database, then make sure to specify the database link to the non-Oracle database when you run the SET_KEY_COLUMNS procedure in the DBMS_APPLY_ADM package.

b.	If you use a DML handler to process row LCRs for any of the tables at the non-Oracle database, then specify the database link to the non-Oracle database when you run the SET_DML_HANDLER procedure in the DBMS_APPLY_ADM package.

c.	If you want to use a message handler to process user-enqueued messages for a non-Oracle database, then, when you run the CREATE_APPLY procedure in the DBMS_APPLY_ADM package, specify the database link to the non-Oracle database using the apply_database_link parameter, and specify the message handler procedure using the message_handler parameter.

d.	You must leave the parallelism apply process parameter at 1, the default setting, when an apply process is applying changes to a non-Oracle database. Currently, parallel apply to non-Oracle databases is not supported. However, you can use multiple apply processes to apply changes to a non-Oracle database.

2.	Most of the Apply sections in this article can be used to troubleshoot a heterogeneous Streams apply process. All of the steps apply except those covering error and conflict handlers, which are currently not supported. If an apply error occurs, then the transaction containing the LCR that caused the error is moved into the error queue in the Oracle database.
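As a sketch of points (a) and (c) above, the calls below show where the database link to the non-Oracle database is supplied; the link, queue, table, and procedure names are illustrative only:

BEGIN
  -- (a) Substitute key columns for a table at the non-Oracle database:
  -- the database link is passed in apply_database_link.
  DBMS_APPLY_ADM.SET_KEY_COLUMNS(
    object_name         => 'hr.employees',          -- illustrative table
    column_list         => 'employee_id',
    apply_database_link => 'sybase_db.example.com'  -- illustrative db link
  );

  -- (c) Message handler for a non-Oracle destination: specify the
  -- database link when the apply process is created.
  DBMS_APPLY_ADM.CREATE_APPLY(
    queue_name          => 'strmadmin.streams_queue',  -- illustrative queue
    apply_name          => 'het_apply',                -- illustrative name
    message_handler     => 'strmadmin.msg_handler',    -- illustrative procedure
    apply_database_link => 'sybase_db.example.com');
END;
/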


Also please refer to the following Metalink notes for additional information:

Note.313766.1	Streams Heterogeneous Apply To SQL*Server
Note.377857.1	Apply process aborts with ORA-28584 setting up streams replication to MySQL
Note.436112.1	'ORA-28550: Pass-Through SQL: Cursor Not Found' Error When Using Oracle Streams Heterogeneous Apply to Sybase
Note.466882.1	Streams Apply Process Aborts On Decimal Values Using Tg4sybase - Error ORA-28500



APPLY SPECIFIC STATIC & DYNAMIC VIEWS


As of the current RDBMS version, these are the Streams Apply-specific views that can be used to retrieve useful Streams apply information:

Static Views

ALL_APPLY
ALL_APPLY_CONFLICT_COLUMNS
ALL_APPLY_DML_HANDLERS
ALL_APPLY_ENQUEUE
ALL_APPLY_ERROR
ALL_APPLY_EXECUTE
ALL_APPLY_KEY_COLUMNS
ALL_APPLY_PARAMETERS
ALL_APPLY_PROGRESS
ALL_APPLY_TABLE_COLUMNS
DBA_APPLY_TABLE_COLUMNS
DBA_APPLY_PROGRESS
DBA_APPLY_PARAMETERS
DBA_APPLY_KEY_COLUMNS
DBA_APPLY_EXECUTE
DBA_APPLY_ERROR
DBA_APPLY_ENQUEUE
DBA_APPLY_CONFLICT_COLUMNS
DBA_APPLY_DML_HANDLERS
DBA_APPLY_INSTANTIATED_GLOBAL
DBA_APPLY_INSTANTIATED_OBJECTS
DBA_APPLY_INSTANTIATED_SCHEMAS
DBA_APPLY_OBJECT_DEPENDENCIES
DBA_APPLY_SPILL_TXN
DBA_APPLY_VALUE_DEPENDENCIES
DBA_HIST_STREAMS_APPLY_SUM


Dynamic Views


V$STREAMS_APPLY_COORDINATOR
V$STREAMS_APPLY_READER
V$STREAMS_APPLY_SERVER
GV$STREAMS_APPLY_COORDINATOR
GV$STREAMS_APPLY_READER
GV$STREAMS_APPLY_SERVER
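For example, a quick health check of the apply reader and apply servers can be run against these views (a sketch; select whichever columns are useful):

SELECT apply_name, state, total_messages_dequeued
FROM   V$STREAMS_APPLY_READER;

SELECT apply_name, server_id, state, total_messages_applied
FROM   V$STREAMS_APPLY_SERVER
ORDER  BY apply_name, server_id;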

Troubleshooting Rule and Rule-Based Transformation Problems

If a capture process, a propagation, an apply process, or a messaging client is behaving in an unexpected way, then the problem might be that the rules in either the positive rule set or negative rule set for the Streams client are not configured properly. For example, if you expect a capture process to capture changes made to a particular table, but the capture process is not capturing these changes, then the cause might be that the rules in the rule sets used by the capture process do not instruct the capture process to capture changes to the table.

Spelling Counts!

Rules can be thought of as a SQL WHERE clause against which each message is evaluated. If the message does not meet the rule condition specification, the rule evaluates to FALSE and the message is excluded from further handling by that particular Streams process. For example, if you configure Streams to capture changes to the misspelled 'SOCTT.EMP' table, changes made to the actual table 'SCOTT.EMP' will not be captured. Each expression included in the rule_condition must evaluate to TRUE in order for the rule to evaluate to TRUE.
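For illustration, a system-generated table rule condition produced by DBMS_STREAMS_ADM looks roughly like the following (the exact text varies by version); every expression must be TRUE for the rule to evaluate to TRUE:

:dml.get_object_owner() = 'SCOTT'
AND :dml.get_object_name() = 'EMP'
AND :dml.is_null_tag() = 'Y'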


Eliminate duplicate/overlapping rules

Duplicate rules should be eliminated to reduce confusion and unnecessary processing time. Two rules with the same rule condition, one with a transformation specified and one without, are a good example of the confusion duplicate rules can cause: when the rule set is evaluated, there is no guarantee of the order in which the rules are processed, so these duplicate rules can lead to inconsistent behavior.


Avoid including tables with unsupported data types

When using SCHEMA or GLOBAL rules, be sure to modify the rules so that no objects with unsupported data types are included for Streams. 


Avoid complex rules wherever possible

Avoid eliminating tables by pattern (e.g. :dml.get_object_name() LIKE 'DR%') or using a NOT operator, as either forces a full evaluation of the rule. It is frequently much faster to explicitly name the desired tables, even if that results in multiple rules.

If you are configuring a propagation that takes ALL changes from the source queue to the destination queue (i.e. there are no selectivity requirements), you can remove the rule set from the propagation definition. This eliminates the need for ANY rule evaluation and results in higher propagation throughput.
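Removing the positive rule set from such a propagation can be sketched as follows (the propagation name is illustrative):

BEGIN
  DBMS_PROPAGATION_ADM.ALTER_PROPAGATION(
    propagation_name => 'strm_propagation',  -- illustrative name
    remove_rule_set  => TRUE);               -- drop the positive rule set
END;
/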


Use views to look at ruleset and rules

The DBA_STREAMS_TABLE_RULES view shows the original configuration of the rule and ruleset.  Manual modifications can be performed using the DBMS_RULE_ADM package.  Be sure to use the DBA_RULE_SET_RULES view to obtain the full set of rules participating in a ruleset.  To get the rule condition of each rule, use the DBA_RULES view.

SELECT rsr.rule_set_name RULE_SET,
       rsr.rule_owner ||'.'|| rsr.rule_name RULE_NAME,
       r.rule_condition CONDITION
FROM   dba_rule_set_rules rsr, dba_rules r
WHERE  rsr.rule_name = r.rule_name
AND    rsr.rule_owner = r.rule_owner
ORDER  BY rsr.rule_set_owner, rsr.rule_set_name;

Examine the rule conditions returned. A rule that discards changes to the table (for example, one in the negative rule set) might be causing the capture process to skip those changes. If no rule matches the table, then make sure there are one or more table rules in the positive rule set for the capture process that evaluate to TRUE for the table. "Displaying the Rules in the Positive Rule Set for a Streams Client" contains an example of a query that shows rules in a positive rule set.


Resolving Problems with Rules

It is possible that the Streams capture process, propagation, apply process, or messaging client is not behaving as expected because one or more rules should be altered or removed from a rule set.

If you have the correct rules, and the relevant messages are still filtered out by a Streams capture process, propagation, or apply process, then check your trace files and alert log for a warning about a missing "multi-version data dictionary", which is a Streams data dictionary. If you find such messages, and you are using custom capture process rules or reusing existing capture process rules for a new destination database, then make sure you run the appropriate procedure to prepare for instantiation:

    ·	PREPARE_TABLE_INSTANTIATION
    ·	PREPARE_SCHEMA_INSTANTIATION
    ·	PREPARE_GLOBAL_INSTANTIATION
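These procedures belong to the DBMS_CAPTURE_ADM package and are run at the source database. For example (the table name is illustrative):

BEGIN
  -- Records the current SCN and populates the Streams data dictionary
  -- so that capture, propagation, and apply can resolve the table's metadata.
  DBMS_CAPTURE_ADM.PREPARE_TABLE_INSTANTIATION(
    table_name => 'scott.emp');  -- illustrative table
END;
/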


Are Declarative Rule-Based Transformations Configured Properly?

A declarative rule-based transformation is a rule-based transformation that covers one of a common set of transformation scenarios for row LCRs. Declarative rule-based transformations are run internally without using PL/SQL. If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the declarative rule-based transformations specified for the rules used by the Streams client and correct any mistakes.

The most common problems with declarative rule-based transformations are:
    ·	The declarative rule-based transformation is specified for a table or involves columns in a table, but the schema either was not specified or was incorrectly specified when the transformation was created. If the schema is not correct in a declarative rule-based transformation, then the transformation will not be run on the appropriate LCRs. You should specify the owning schema for a table when you create a declarative rule-based transformation. If the schema is not specified when a declarative rule-based transformation is created, then the user who creates the transformation is used for the schema by default.

	If the schema is not correct for a declarative rule-based transformation, then, to correct the problem, remove the transformation and re-create it, specifying the correct schema for each table.

    ·	If more than one declarative rule-based transformation is specified for a particular rule, then make sure the ordering is correct for execution of these transformations. Incorrect ordering of declarative rule-based transformations can result in errors or inconsistent data. If the ordering is not correct for the declarative rule-based transformations specified on a single rule, then, to correct the problem, remove the transformations and re-create them with the correct ordering.
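Declarative rule-based transformations are created with procedures in the DBMS_STREAMS_ADM package. As a sketch, a column rename on a given rule (rule, table, and column names are illustrative) looks like the following; step_number controls the ordering among multiple transformations on the same rule:

BEGIN
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name        => 'strmadmin.employees_rule',  -- illustrative rule
    table_name       => 'hr.employees',
    from_column_name => 'emp_name',
    to_column_name   => 'last_name',
    step_number      => 0,       -- execution order among transformations
    operation        => 'ADD');  -- 'REMOVE' deletes the transformation
END;
/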

Are the Custom Rule-Based Transformations Configured Properly?

A custom rule-based transformation is any modification by a user-defined function to a message when a rule evaluates to TRUE. A custom rule-based transformation is specified in the action context of a rule, and these action contexts contain a name-value pair with STREAMS$_TRANSFORM_FUNCTION for the name and a user-created function name for the value. This user-created function performs the transformation. If the user-created function contains any flaws, then unexpected behavior can result.

If a Streams capture process, propagation, apply process, or messaging client is not behaving as expected, then check the custom rule-based transformation functions specified for the rules used by the Streams client and correct any flaws. You can find the names of these functions by querying the DBA_STREAMS_TRANSFORM_FUNCTION data dictionary view. You might need to modify a transformation function or remove a custom rule-based transformation to correct the problem. Also, make sure the name of the function is spelled correctly when you specify the transformation for a rule.
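The registered transformation functions can be listed with a query such as:

SELECT rule_owner, rule_name, transform_function_name
FROM   DBA_STREAMS_TRANSFORM_FUNCTION;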

An error caused by a custom rule-based transformation might cause a capture process, propagation, apply process, or messaging client to abort. In this case, you might need to correct the transformation before the Streams client can be restarted or invoked.

Rule evaluation is done before a custom rule-based transformation. For example, if you have a transformation that changes the name of a table from emps to employees, then make sure each rule using the transformation specifies the table name emps, rather than employees, in its rule condition.


Are Incorrectly Transformed LCRs in the Error Queue?

In some cases, incorrectly transformed LCRs might have been moved to the error queue by an apply process. When this occurs, you should examine the transaction in the error queue to analyze the feasibility of re-executing the transaction successfully. If an abnormality is found in the transaction, then you might be able to configure a DML handler to correct the problem. The DML handler will run when you re-execute the error transaction. When a DML handler is used to correct a problem in an error transaction, the apply process that uses the DML handler should be stopped to prevent the DML handler from acting on LCRs that are not involved with the error transaction. After successful re-execution, if the DML handler is no longer needed, then remove it. Also, correct the rule-based transformation to avoid future errors.
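Re-executing an error transaction, optionally through a user procedure that corrects the LCRs, can be sketched as follows (the transaction ID comes from DBA_APPLY_ERROR; the procedure name is illustrative):

BEGIN
  DBMS_APPLY_ADM.EXECUTE_ERROR(
    local_transaction_id => '5.4.312',  -- illustrative; see DBA_APPLY_ERROR
    execute_as_user      => FALSE,
    user_procedure       => 'strmadmin.fix_lcr');  -- illustrative DML handler
END;
/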

The rule sets used by all Streams clients, including capture processes and propagations, determine the behavior of these Streams clients. Therefore, make sure the rule sets for any capture processes or propagations on which an apply process depends contain the correct rules. If the rules for these Streams clients are not configured properly, then the apply process queue might never receive the appropriate messages. Also, a message traveling through a stream is the composition of all of the transformations done along the path. For example, if a capture process uses subset rules and performs row migration during capture of a message, and a propagation uses a rule-based transformation on the message to change the table name, then, when the message reaches an apply process, the apply process rules must account for these transformations.

Rules related views/tables

V$RULE                                   
GV$RULE 
V$RULE_SET                               
GV$RULE_SET 
V$RULE_SET_AGGREGATE_STATS               
GV$RULE_SET_AGGREGATE_STATS 
DBA_STREAMS_GLOBAL_RULES 
DBA_STREAMS_MESSAGE_RULES
DBA_STREAMS_RULES 
DBA_STREAMS_SCHEMA_RULES 
DBA_STREAMS_TABLE_RULES 
DBA_RULE_SET_RULES 
DBA_RULE_SETS 
DBA_RULES 
DBA_HIST_RULE_SET
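Of these, DBA_STREAMS_RULES is often the quickest way to see every rule used by a particular Streams client, for example (the client name is illustrative):

SELECT streams_type, streams_name, rule_type, rule_name, rule_condition
FROM   DBA_STREAMS_RULES
WHERE  streams_name = 'MY_CAPTURE'  -- illustrative client name
ORDER  BY streams_type, rule_type;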

==================================================================

