This blog is a note for self-learning. Some writings are my own and some are collected, just to keep things in an organized way.
9.23.2008
ORA-30041
SQL> ALTER USER r_nahar QUOTA UNLIMITED ON TEMP;
ALTER USER r_nahar
*
ERROR at line 1:
ORA-30041: Cannot grant quota on the tablespace
Why this error:
This is expected behavior in Oracle 10.2.
In version 10.2, Oracle does not support quotas on temporary tablespaces.
If your version is earlier than 10.2, then this is a bug.
Solution:
Don't try to assign a quota on a temporary tablespace if your database version is 10.2, as this is expected behavior.
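What you usually want instead is to assign the user a temporary tablespace; quotas remain meaningful only for permanent tablespaces. A minimal sketch (the user name is from the example above; the tablespace names temp and users are assumptions):
SQL> ALTER USER r_nahar TEMPORARY TABLESPACE temp;
SQL> ALTER USER r_nahar QUOTA UNLIMITED ON users;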
9.12.2008
DBA Training Class
- Administration Part - 1
- Administration Part - 2
9.11.2008
Administration Part - 1
A. Install Oracle Database 10g
- Determining Server Requirements
- Installation Features
- Installation Performance Capabilities
- Instance Configuration
LESSON 2: EXPLORING THE ORACLE DATABASE 10g ARCHITECTURE
A. Understand Database Architecture
B. Identify Database Object Types
- Database Object Types
- Database Object Naming Conventions
- The System Global Area
- Background Process Types
- The User Connection Process
- Data Dictionary
- Data Dictionary Views
- OEM Database Control
- OEM Grid Control
- OEM Performance Pages
- Grid Control Performance Pages
LESSON 3: CREATING A DATABASE
A. Create, Configure, and Delete a Database Using the DBCA
- Logical Design Planning
- DBCA Management Options
- Creating a Database with the DBCA
- Optimal Flexible Architecture
- Template Management
- Common Template Sections
- Database Interfaces
- SQL*Plus
- iSQL*Plus
LESSON 4: CONFIGURING ORACLE NET SERVICES
A. Examine Oracle Net Services
- Client-Server Architecture
- N-tier Architecture
- Oracle Net
- Oracle Net Features
- Oracle Net Layered Architecture
B. Perform Basic Oracle Net Server-Side Configuration
- The Oracle Listener
- Listener Management
C. Perform Basic Oracle Net Client-Side Configuration
- Name Resolution
- Name Resolution Methods
- Host Naming Configuration
- Local Naming Configuration
- Connection Issue Troubleshooting
- Tnsping Utility
- Oracle Net Logging
- Oracle Net Tracing
- Shared Server Architecture
- Shared Server Connection Process
- Shared Server Configuration
- Shared Server Monitoring and Tuning
- Connect-Time Failover
LESSON 5: USING SQL AND PL/SQL
A. Write Simple SQL SELECT Statements
- Structured Query Language
- Types of SQL Commands
- SQL Query Processing Steps
- The SELECT Statement
- The WHERE Clause
- The ORDER BY Clause
- Expressions
- Single-Row Functions
- Single-Row Function Types
- Group Functions
- Group Function Options
Determining Server Requirements
Installation Features
There are many installation features in Oracle 10g.
a. Database management
b. Preinstallation compatibility check
c. File storage
Three options are available through the OUI to configure database file storage: file system, Automatic Storage Management (ASM), and raw devices; with ASM, the files are placed in an ASM disk group.
d. Backup and recovery
e. User passwords
f. Cluster Ready Services (CRS)
g. Enterprise Configuration Management Tool (ECMT)
h. Oracle MetaLink integration
i. Cloning
Installation Performance Capabilities
Additional CD-ROMs:
To install supplementary applications, such as OEM Grid Control and HTTP Server, additional CD-ROMs are required:
• Oracle Database 10g Companion CD
• Oracle Database 10g Products CD
• Oracle Database 10g Client CD
• Oracle Database 10g Documentation CD
• Oracle Database 10g Enterprise Manager CD
• Oracle Database 10g Demos CD
All content on these CD-ROMs is available on a single DVD-ROM.
Instance Configuration
There are several instance configuration features, such as:
a. Database Configuration Assistant (DBCA) features:
Creating or configuring a database with the DBCA in Oracle Database 10g automatically creates and implements a new auxiliary system tablespace, SYSAUX, which supplements the SYSTEM tablespace as a central location for the storage of database metadata. The flash recovery area is an optional disk location used to store files related to recovery, such as control file and online redo log copies, archived logs, and flashback logs. When the DBCA is used to create a database, it automatically creates the flash recovery area (a quick way to inspect both follows after this list).
b. Policy-based database configuration framework:
c. Initialization parameters:
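As mentioned under item a, a small sketch for inspecting the flash recovery area and the SYSAUX tablespace after creation (the parameter and view exist in 10g; output depends on your database):
SQL> SHOW PARAMETER db_recovery_file_dest
SQL> SELECT occupant_name, space_usage_kbytes FROM v$sysaux_occupants;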
9.10.2008
Examples
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
SQL> col NAME format a70
SQL> col SQL_REDO format a30
SQL> col SQL_UNDO format a30
SQL> column USERNAME format a15
You must have the SELECT ANY TRANSACTION privilege to query V$LOGMNR_CONTENTS.
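For instance, a DBA could grant it to the mining user (nahar, the user name borrowed from the later examples):
SQL> GRANT SELECT ANY TRANSACTION TO nahar;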
Examples Using LogMiner:
Example 1: Finding All Modifications in the Last Archived Redo Log File
Step 1:
Determine which redo log file was most recently archived.
This example assumes that you know that you want to mine the redo log file that was most recently archived.
SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
-------------------------------------------
/oracle/DATABASE_NAME/archivelog/2008_08_31/o1_mf_1_8554_4cngdomq_.arc
Step 2:
Specify the list of redo log files to be analyzed.
Specify the redo log file that was returned by the query in Step 1. The list will consist of one redo log file.
BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8554_4cngdomq_.arc');
END;
/
The EXECUTE form, with an explicit OPTIONS value, looks like this (file name as in the Oracle documentation's example):
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', -
OPTIONS => DBMS_LOGMNR.NEW);
Step 3:
Start LogMiner.
SQL>EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.
Step 4:
Query the V$LOGMNR_CONTENTS view.
Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not).
The output shows the DML statements in the order in which they were executed; thus transactions interleave among themselves.
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
USR XID SQL_REDO SQL_UNDO
---- --------- --------------------- -------------------------------
HR 1.11.1476 set transaction read write;
HR 1.11.1476 insert into "HR"."EMPLOYEES"( delete from "HR"."EMPLOYEES"
"EMPLOYEE_ID","FIRST_NAME", where "EMPLOYEE_ID" = '306'
"LAST_NAME","EMAIL", and "FIRST_NAME" = 'Nandini'
"PHONE_NUMBER","HIRE_DATE", and "LAST_NAME" = 'Shastry'
"JOB_ID","SALARY", and "EMAIL" = 'NSHASTRY'
"COMMISSION_PCT","MANAGER_ID", and "PHONE_NUMBER" = '1234567890'
"DEPARTMENT_ID") values and "HIRE_DATE" = TO_DATE('10-JAN-2003
('306','Nandini','Shastry', 13:34:43', 'dd-mon-yyyy hh24:mi:ss')
'NSHASTRY', '1234567890', and "JOB_ID" = 'HR_REP' and
TO_DATE('10-jan-2003 13:34:43', "SALARY" = '120000' and
'dd-mon-yyyy hh24:mi:ss'), "COMMISSION_PCT" = '.05' and
'HR_REP','120000', '.05', "DEPARTMENT_ID" = '10' and
'105','10'); ROWID = 'AAAHSkAABAAAY6rAAO';
------------------------------------------------------------------------------------
SQL> col SQL_REDO format a30
SQL> col SQL_UNDO format a30
SQL> column USERNAME format a15
SELECT username,session# SID, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
SQL_REDO, SQL_UNDO,to_char(timestamp,'mm/dd/yy hh24:mi:ss')timestamp
FROM V$LOGMNR_CONTENTS WHERE username IN ('PROD7');
Step 5: End the LogMiner session.
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();
-------------------------------------------------
Example 2: Grouping DML Statements into Committed Transactions
Step 1:
SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
-------------------------------------------
/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8554_4cngdomq_.arc
Step 2:
BEGIN DBMS_LOGMNR.ADD_LOGFILE(
LOGFILENAME => '/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8554_4cngdomq_.arc',
OPTIONS => DBMS_LOGMNR.NEW);
END;
/
Step 3:
Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY
option.
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY);
Step 4:
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO,
SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
Example 4: Using the LogMiner Dictionary in the Redo Log Files
This example shows how to use the dictionary that has been extracted to the redo log files.
When you use the dictionary in the online catalog, you must mine the redo log files in the same database that generated them.
Using the dictionary contained in the redo log files enables you to mine redo log files in a different database.
This example assumes that you know that you want to mine the redo log file that was most recently archived.
SELECT NAME, SEQUENCE# FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME SEQUENCE#
------------------------------------------------------------------------------------------------------ ----------------
/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8568_4cpdcgcw_.arc 8568
SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end
FROM V$ARCHIVED_LOG
WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG
WHERE DICTIONARY_END = 'YES' and SEQUENCE# <= 8568);
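My copy of the example stops here; as a hedged sketch (not necessarily the documentation's exact continuation), the remaining steps would add the archived log of interest plus the dictionary log(s) returned by the query above, then start LogMiner with the dictionary read from the redo logs:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8568_4cpdcgcw_.arc', -
OPTIONS => DBMS_LOGMNR.NEW);
-- repeat ADD_LOGFILE (with OPTIONS => DBMS_LOGMNR.ADDFILE) for each dictionary log found above
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY);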
Example 6: Filtering Output by Time Range
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
One way to filter out redo records based on timestamp values is to specify the time range in the DBMS_LOGMNR.START_LOGMNR procedure call. If you want to see the archived log file names for a given day, query V$ARCHIVED_LOG:
SELECT NAME,to_char(FIRST_TIME,'mm/dd/yy hh24:mi:ss') Time FROM V$ARCHIVED_LOG
WHERE FIRST_TIME like ('01-SEP-2008%'); -- pattern matches the NLS_DATE_FORMAT set above
Step 1: Create a list of redo log files to mine.
The following PL/SQL procedure adds the files, and the subsequent EXECUTE statement calls it with the starting date 01-SEP-2008:
-- my_add_logfiles
-- Add all archived logs generated after a specified start_time.
CREATE OR REPLACE PROCEDURE my_add_logfiles (in_start_time IN DATE) AS
CURSOR c_log IS
SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME >= in_start_time;
my_option pls_integer := DBMS_LOGMNR.NEW;
BEGIN
FOR c_log_rec IN c_log
LOOP
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => c_log_rec.name,
OPTIONS => my_option);
my_option := DBMS_LOGMNR.ADDFILE;
DBMS_OUTPUT.PUT_LINE('Added logfile ' || c_log_rec.name);
END LOOP;
END;
/
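To see the PUT_LINE messages in SQL*Plus, enable server output before calling the procedure:
SQL> SET SERVEROUTPUT ON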
EXECUTE my_add_logfiles(in_start_time => '01-sep-2008');
Step 2:
Query the V$LOGMNR_LOGS to see the list of redo log files.
SQL> col NAME format a70
SQL> SELECT FILENAME name, LOW_TIME start_time, FILESIZE bytes FROM V$LOGMNR_LOGS;
i. Start LogMiner without an explicit time range:
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
DBMS_LOGMNR.PRINT_PRETTY_SQL);
Or
ii. Start LogMiner with an explicit time range:
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
STARTTIME => '01-sep-2008 15:00:00', -
ENDTIME => '02-sep-2008 16:00:00', -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
DBMS_LOGMNR.PRINT_PRETTY_SQL);
To avoid the need to specify the date format in the call to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure, you can issue the SQL ALTER SESSION SET NLS_DATE_FORMAT statement first, as shown in the following example.
ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
DICTFILENAME => '/oracle/database/dictionary.ora', -
STARTTIME => '01-Jan-1998 08:30:00', -
ENDTIME => '01-Jan-1998 08:45:00', -
OPTIONS => DBMS_LOGMNR.CONTINUOUS_MINE);
----------------------
select username, timestamp, scn_to_timestamp(SCN),
seg_type_name, seg_name, table_space, session# SID, serial#, operation
from v$logmnr_contents
where table_name = 'LOGIN' and
scn_to_timestamp(SCN) > TO_DATE('27-AUG-2008 00:22:39', 'dd-mon-yyyy hh24:mi:ss'); --run on saturn
select username, scn_to_timestamp(SCN) as DataTime,
seg_type_name, seg_name, session# SID, serial#, operation
from v$logmnr_contents
where table_name = 'LOGIN' and
scn_to_timestamp(SCN) > TO_DATE('27-AUG-2008 00:22:39', 'dd-mon-yyyy hh24:mi:ss');
-----------------------------------
SQL> set lines 200
SQL> column USERNAME format a20
SQL> column SEG_NAME format a15
SQL> column SEG_TYPE_NAME format a15
SQL> column TABLE_SPACE format a15
select username, scn_to_timestamp(SCN) as DataTime,
seg_type_name, seg_name, session# SID, operation
from v$logmnr_contents
where table_name = 'LOGIN'
and scn_to_timestamp(SCN) > TO_DATE('27-AUG-2008 01:00:00', 'dd-mon-yyyy hh24:mi:ss')
and scn_to_timestamp(SCN) < TO_DATE('27-AUG-2008 02:00:00', 'dd-mon-yyyy hh24:mi:ss'); -- the end of the range was lost in the original post; this upper bound is an assumed example
select username,scn_to_timestamp(SCN)as DataTime,timestamp,
seg_type_name, seg_name, session# SID, operation
from v$logmnr_contents
where username in ('PROD7');
select username,timestamp,seg_type_name, seg_name, session# SID, operation
from v$logmnr_contents
where table_name = 'LOGIN';
----test
select username,timestamp,seg_type_name, seg_name, session# SID, operation
from v$logmnr_contents
where table_name = 'LOGIN'
and timestamp > TO_DATE('01-SEP-2008 20:00:00', 'dd-mon-yyyy hh24:mi:ss')
and timestamp < TO_DATE('01-SEP-2008 22:00:00', 'dd-mon-yyyy hh24:mi:ss'); -- the end of the range was lost in the original post; this upper bound is an assumed example
Remote Mining
Remote mining means that the database that analyzes the redo log files or archived redo log files is not the same as the source database that generated them.
To use LogMiner on a remote database:
- transfer the redo or archived redo log files to be analyzed, and
- transfer the mining dictionary to the remote database.
Step by step:
Prerequisites:
- To extract a LogMiner dictionary to the redo log files, the DB must be open, in ARCHIVELOG mode, and archiving must be enabled.
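You can confirm this on the source database with:
SQL> SELECT log_mode FROM v$database;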
Scenario: Online redo log file redo01 needs to be analyzed on the remote machine neptune (local).
Suppose the production, or source, DB is saturn.
Step 1: Extract the LogMiner dictionary (on the source database).
On the source, that is, on the production server, build the dictionary by:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
PL/SQL procedure successfully completed.
Depending on the size of the dictionary, it may be contained in multiple redo log files.
This redo log file must have been created before the redo log file that you want to analyze, but should be as recent as possible.
If the relevant redo log files have been archived, you can find out which redo log files contain the start and end of an extracted dictionary.
To do so, query the V$ARCHIVED_LOG view:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
NAME
--------------------------------------------------------------------------------
/oracle/database_name/archivelog/2008_09_03/o1_mf_1_8601_4cw3q3t8_.arc
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';
NAME
--------------------------------------------------------------------------------
/oracle/database_name/archivelog/2008_09_03/o1_mf_1_8601_4cw3q3t8_.arc
Specify the names of the start and end redo log files, and possibly other logs in between them, with the ADD_LOGFILE
procedure when you are preparing to begin a LogMiner session.
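A hedged sketch of a query listing every archived log from the start of the dictionary through its end (reusing the DICTIONARY_BEGIN/DICTIONARY_END columns queried above):
SELECT NAME FROM V$ARCHIVED_LOG
WHERE SEQUENCE# BETWEEN
(SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES')
AND (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES')
ORDER BY SEQUENCE#;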
Step 2: Transfer the LogMiner dictionary and the logs to be analyzed into the mining database.
SQL> !scp /oracle/database_name/archivelog/2008_09_03/o1_mf_1_8601_4cw3q3t8_.arc oracle@neptune:
Password:
o1_mf_1_8601_4cw3q3t 100% |*********************************************************************| 10836 KB 00:01
Then also transfer the redo log. Depending on your requirement, you can transfer archived logs or online redo logs. To see the archived logs for a defined time, query: select NAME,FIRST_TIME from v$archived_log where completion_time > SYSDATE-1;
In this example I will analyze an online redo log file.
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;
LOGFILENAME
--------------------------------------------------------------------------------
/oradata1/database_name/datafiles/database_name/redo03.log
/oradata1/database_name/datafiles/database_name/redo01.log
/oradata1/database_name/datafiles/database_name/redo02.log
SQL> !scp /oradata1/database_name/datafiles/database_name/redo03.log oracle@neptune:
Password:
redo03.log 100% |*********************************************************************| 51200 KB 00:04
Step 3: Specify the files for analysis (in the mining/remote database) (neptune).
Here, specify the log file that needs to be mined as well as the dictionary file.
SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/export/home/oracle/redo01.log', OPTIONS => DBMS_LOGMNR.ADDFILE);
PL/SQL procedure successfully completed.
SQL>EXECUTE DBMS_LOGMNR.ADD_LOGFILE( LOGFILENAME => '/export/home/oracle/o1_mf_1_1284_4c23hcd6_.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
PL/SQL procedure successfully completed.
Step 4: Start the LogMiner session (in the mining database).
Do not specify the DICTFILENAME option here. Execute the following:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
Step 5:
Query V$LOGMNR_CONTENTS
Step 6:
End the LogMiner Session:
EXECUTE DBMS_LOGMNR.END_LOGMNR;
LogMiner Setting
1. Using the Online Catalog: (Set LogMiner in Source Database)
Supplemental logging places additional column data into the redo log file whenever an UPDATE operation is performed.
Step-1:
Ensure that supplemental logging is enabled, at least at the minimal level.
To work with LogMiner, the source database must have supplemental logging enabled at a minimum level.
By default, Oracle does not enable any supplemental logging, which means that out of the box LogMiner is not usable.
You can check whether supplemental logging is on or off with the following command:
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
--------
NO
To turn it on at the minimal level:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.
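Re-running the check should now report YES:
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
--------
YES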
Step-2:
Install the DBMS_LOGMNR package.
The next step is to install the DBMS_LOGMNR package. If it is already installed, skip this step. You can install the package by running the $ORACLE_HOME/rdbms/admin/dbmslm.sql script. If you created your database with the DBCA, this script was run automatically, so you can ignore this step. However, if you created the database manually with the CREATE DATABASE ... command, you must run the script before using LogMiner. That is,
SQL>@$ORACLE_HOME/rdbms/admin/dbmslm.sql
Step-3:
Grant the EXECUTE_CATALOG_ROLE role.
Give the EXECUTE_CATALOG_ROLE to the user who will do the mining. Here the user is nahar.
SQL>GRANT EXECUTE_CATALOG_ROLE TO nahar;
Step-4:
Create the synonym. nahar creates a public synonym:
CREATE PUBLIC SYNONYM dbms_logminer_nahar FOR SYS.DBMS_LOGMNR;
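With the synonym in place, nahar can reference the package (and its constants) through it, for example:
SQL> EXECUTE dbms_logminer_nahar.START_LOGMNR(OPTIONS => dbms_logminer_nahar.DICT_FROM_ONLINE_CATALOG);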
--All above four steps are needed just for once.
************************************************************** (note to self: resume from here after a break)
Step-5:
Specify the scope of the mining.
Now decide which files you will analyze. Depending on your scenario, you may be interested in archived redo log files or in online redo log files.
i. Analyzing online redo logs:
Suppose you have a recent problem in your database, so you are interested in the online redo log files. You can see the
current online redo logs by:
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;
LOGFILENAME
--------------------------------------------------------------------------------
/oracle/app/oradata/FolderName/redo02.log
/oracle/app/oradata/FolderName/redo03.log
/oracle/app/oradata/FolderName/redo01.log
SQL>BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/app/oradata/FolderName/redo01.log');
END;
/
PL/SQL procedure successfully completed.
BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/app/oradata/FolderName/redo02.log');
END;
/
PL/SQL procedure successfully completed.
BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/app/oradata/FolderName/redo03.log');
END;
/
PL/SQL procedure successfully completed.
Practice:
To add multiple log file:
SQL>BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo01.log');
DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo03.log');
END;
/
ii. Analyzing archived logs:
Suppose you have a past problem in your database, so you are interested in the archived log files.
Sometimes you want to mine the redo log file that was most recently archived.
To check the most recent archived log:
SQL> SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
--------------------------------------------------------------------------------
/oracle/database_name/archivelog/2008_08_27/o1_mf_1_8516_4cb52pvt_.arc
SQL> BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/database_name/archivelog/2008_08_27/o1_mf_1_8516_4cb52pvt_.arc');
END;
/
Practice:
SQL>column NAME format a70
SELECT NAME,to_char(FIRST_TIME,'mm/dd/yy hh24:mi:ss') Time FROM V$ARCHIVED_LOG
WHERE FIRST_TIME like ('27-AUG-08%'); --to get 27 August's archived logs
Step-6:
Start the LogMiner session and specify a dictionary.
To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when starting the session:
BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
/
PL/SQL procedure successfully completed.
The OPTIONS parameter tells Oracle Database, when starting LogMiner, to read the dictionary information for converting object names from the online catalog.
Step-7: ***
Request the redo data
Check the contents of the V$LOGMNR_CONTENTS view.
To get information about table STUDENT, and when DML or DDL happened to it, we can issue:
SQL> set lines 200
SQL> set pages 0 -- to suppress the column headers
sys@DatabaseName>column USERNAME format a20
sys@DatabaseName>column SEG_NAME format a15
sys@DatabaseName>column SEG_TYPE_NAME format a15
sys@DatabaseName>column TABLE_SPACE format a15
SQL> select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'STUDENT';
We can get SQL_UNDO and SQL_REDO information by,
SQL>col SQL_REDO format a30
SQL> col SQL_UNDO format a30
SQL> select sql_undo, sql_redo
from v$logmnr_contents
where table_name = 'STUDENT' and OPERATION='UPDATE';
Request the archive data:
select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'LOGIN';
select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'LOGIN'
and timestamp like ('27-AUG-08%'); --to get the info of a specific date
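Note that LIKE on a DATE column depends on the session's NLS_DATE_FORMAT; a format-independent sketch of the same filter:
select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial#, operation
from v$logmnr_contents
where table_name = 'LOGIN'
and timestamp >= TO_DATE('27-AUG-2008','dd-mon-yyyy')
and timestamp < TO_DATE('28-AUG-2008','dd-mon-yyyy');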
Step-8:
End the LogMiner session.
Use the DBMS_LOGMNR.END_LOGMNR
procedure.
SQL>
BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/
PL/SQL procedure successfully completed.
v$logmnr_contents
The V$LOGMNR_CONTENTS view shows, among other things:
- The type of change made to the database: INSERT, UPDATE, DELETE, or DDL (OPERATION column).
- The SCN at which a change was made (SCN column).
- The SCN at which a change was committed (COMMIT_SCN column).
- The transaction to which a change belongs (XIDUSN, XIDSLT, and XIDSQN columns).
- The table and schema name of the modified object (SEG_NAME and SEG_OWNER columns).
- The name of the user who issued the DDL or DML statement to make the change (USERNAME column).
- If the change was due to a SQL DML statement, the reconstructed SQL statements showing SQL DML that is equivalent (but not necessarily identical) to the SQL DML used to generate the redo records (SQL_REDO column).
- If a password is part of the statement in a SQL_REDO column, the password is encrypted. SQL_REDO column values that correspond to DDL statements are always identical to the SQL DDL used to generate the redo records.
- If the change was due to a SQL DML change, the reconstructed SQL statements showing the SQL DML statements needed to undo the change (SQL_UNDO column).
LogMining
Simply put, an undo entry provides the values of data stored before a change, and a redo entry provides the values of data stored after a change. We can get them first from the online redo logs and then from the archived logs.
So from the online redo logs and the archived redo logs we can get the database's redo and undo information. But online and archived logs have an unpublished format and are not human-readable. With the DBMS_LOGMNR package we can analyze redo log files and get the undo and redo information back in a human-readable format.
Another use of LogMiner is to investigate the database's past. With Flashback Query we can get the prior values of a record in a table at some point in the past, but this is limited by the UNDO_RETENTION parameter (which is often as short as 30 minutes for an OLTP database). So for analyzing past activity on the database, LogMiner is a good choice.
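For contrast, a minimal Flashback Query sketch (the table name student and the interval are illustrative):
SQL> SELECT * FROM student AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '25' MINUTE);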
Every change made to an Oracle database by default generates undo and redo information which is accumulated in Oracle redo log files. LogMiner is an integrated feature of the Oracle Database that provides DBA's and auditors with the infrastructure required for relational access to Oracle's redo stream.
The Oracle LogMiner utility enables you to query redo logs through a SQL interface. Redo logs contain information about the history of activity on a database.
Why LogMiner:
- determine the steps needed to recover from inadvertent (unintentional) changes to data
- assemble data on actual usage for performance tuning and capacity planning
- audit the operation of any commands run against the database
Note that LogMiner uses Oracle logs to reconstruct exactly how data changed, whereas the complementary utility Oracle Flashback presents the finished results of such changes, giving a view of the database at some point in time.
Do not leave the current session, because LogMiner's data will not be viewable from another session.
LogMiner Restrictions:
The following are not supported:
- Data types LONG and LOB
- Simple and nested abstract data types (ADTs)
- Collections (nested tables and VARRAYs)
- Object Refs
- Index Organized Tables (IOTs)
LogMiner Configuration
The redo log files contain the changes made to the database or database dictionary.
The source database is the database that produces all the redo log files that you want LogMiner to analyze.
The mining database is the database that LogMiner uses when it performs the analysis.
The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request. LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.
Without the dictionary, LogMiner will display: insert into "UNKNOWN"."OBJ#45522"("COL 1","COL 2","COL 3","COL 4") values(HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),HEXTORAW('c229'),HEXTORAW('c3020b'))
LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for supplying the dictionary.
1. Using the Online Catalog:
The online catalog contains the latest information about the database and may be the fastest way to start your analysis. Because DDL operations that change important tables are somewhat rare, the online catalog generally contains the information you need for your analysis. Remember, however, that the online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as a table is altered, the online catalog no longer reflects the previous version of the table. This means that LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table.
2. Extracting a LogMiner Dictionary to the Redo Log Files:
3. Extracting the LogMiner Dictionary to a Flat File: