Remote mining means that the database that analyzes the redo log files or archived redo log files is not the same as the source database that generated them.
Using LogMiner on a remote DB involves:
-transferring the redo or archived redo log files to be analyzed, and
-transferring the mining dictionary to the remote database.
Step by step:
Prerequisites:
-To extract a LogMiner dictionary to the redo log files, the source DB must be open, must be in ARCHIVELOG mode, and archiving must be enabled.
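You can verify this on the source database first (a quick illustrative check; the sample output assumes archiving is already enabled):
SQL> SELECT LOG_MODE FROM V$DATABASE;
LOG_MODE
------------
ARCHIVELOG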
Scenario: online redo log file redo01 needs to be analyzed on the remote machine neptune (the mining database).
Suppose the production or source DB is Saturn.
Step 1:
Extract the LogMiner dictionary (on the source database).
On the source, that is, the production server, build the dictionary with:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
PL/SQL procedure successfully completed.
Depending on its size, the dictionary may span multiple redo log files.
These redo log files must have been created before the redo log file that you want to analyze, but should be as recent as possible.
If the relevant redo log files have been archived, you can find out which redo log files contain the start and end of the extracted dictionary.
To do so, query the V$ARCHIVED_LOG view:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
NAME
--------------------------------------------------------------------------------
/oracle/database_name/archivelog/2008_09_03/o1_mf_1_8601_4cw3q3t8_.arc
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';
NAME
--------------------------------------------------------------------------------
/oracle/database_name/archivelog/2008_09_03/o1_mf_1_8601_4cw3q3t8_.arc
Specify the names of the start and end redo log files, and possibly other logs in between them, with the ADD_LOGFILE
procedure when you prepare to begin a LogMiner session.
Step 2: Transfer the LogMiner dictionary and the logs to be analyzed to the mining database (on the source database).
SQL> !scp /oracle/database_name/archivelog/2008_09_03/o1_mf_1_8601_4cw3q3t8_.arc oracle@neptune:
Password:
o1_mf_1_8601_4cw3q3t 100% |*********************************************************************| 10836 KB 00:01
Then also transfer the redo log. Depending on your requirements, you can transfer archived logs or online redo logs. To list the archived logs from a given period, query:
SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE COMPLETION_TIME > SYSDATE-1;
In this example I will analyze an online redo log file.
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;
LOGFILENAME
--------------------------------------------------------------------------------
/oradata1/database_name/datafiles/database_name/redo03.log
/oradata1/database_name/datafiles/database_name/redo01.log
/oradata1/database_name/datafiles/database_name/redo02.log
SQL> !scp /oradata1/database_name/datafiles/database_name/redo03.log oracle@neptune:
Password:
redo03.log 100% |*********************************************************************| 51200 KB 00:04
Step 3: Specify the files for analysis (on the mining/remote database, neptune).
Here, specify the log file to be mined as well as the dictionary file(s):
SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/export/home/oracle/redo01.log', OPTIONS => DBMS_LOGMNR.ADDFILE);
PL/SQL procedure successfully completed.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/export/home/oracle/o1_mf_1_8601_4cw3q3t8_.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
PL/SQL procedure successfully completed.
Step 4: Start the LogMiner session (on the mining database).
Do not specify the DICTFILENAME option here. Execute the following:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
Step 5:
Query V$LOGMNR_CONTENTS (on the mining database).
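For example, to see the redo and undo SQL generated by a particular user against a particular table (the user and table names here are illustrative, not part of the scenario):
SQL> SELECT OPERATION, SQL_REDO, SQL_UNDO
     FROM V$LOGMNR_CONTENTS
     WHERE USERNAME = 'SCOTT' AND SEG_NAME = 'EMP';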
Step 6:
End the LogMiner Session:
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;