This blog is a set of notes for self-learning. Some writings are my own and some are collected, kept here just to have things in an organized way.
11.09.2009
Expdp & Impdp Related
To see the status of a running import: press Ctrl+C, then type
Import> status
To see the status of a running export: press Ctrl+C, then type
Export> status
Unlike the original exp and imp utilities all data pump ".dmp" and ".log" files are created on the Oracle server, not the client machine.
Links:
http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
Different Options for Exports/Imports:
expdp help=y
To estimate the size without actually taking the dump:
expdp prod7/thword7 tables=OLD_DATA directory=DUMPS_NAHAR estimate_only=y
=>estimated "PROD7"."OLD_DATA" 156.8 GB
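By default the estimate is based on block counts; a variation (assuming table statistics are current) is to base it on optimizer statistics instead, a sketch:
expdp prod7/thword7 tables=OLD_DATA directory=DUMPS_NAHAR estimate_only=y estimate=statistics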
Directory creation
SQL> col DIRECTORY_PATH format a60
SQL> select DIRECTORY_NAME,DIRECTORY_PATH from dba_directories;
SQL> Create or replace directory DUMPS_NAHAR3 As '/oracle';
SQL> grant read,write on directory DUMPS_NAHAR3 to prod7;
EXPORT
expdp single table
expdp prod/pass tables=LOGIN directory=DBDUMPS dumpfile=LOGIN_20100620.dmp logfile=LOGIN_20100620_LOG.log
expdp Multiple tables, starting with different name
expdp prod/pass tables=LOGIN_CLIENTS,TAC_FORM_SUMMARY directory=DUMPS_TAHSEEN dumpfile=DUMPS_TAHSEEN_20100301.dmp logfile=DUMPS_TAHSEEN_EXPORT_LOG_20100301.log
expdp Multiple tables, starting with same name - TMS
expdp prod/pass DIRECTORY=DUMPS_NAHAR_NEPTUNE SCHEMAS=prod INCLUDE=TABLE:"like'TR_%'" DUMPFILE=DUMPS_ZAHIDUL_20100308.dmp logfile=DUMPS_ZAHIDUL_EXPORT_LOG_20100308.log
Example - Exclude tables
expdp prod/pass DIRECTORY=DUMPS_TERMINUS_PROD7_20100708 SCHEMAS=PROD EXCLUDE=TABLE:\"IN \(\'OLD_DATA\',\'USER_ACTIVITY\'\)\" DUMPFILE=DUMPS_TERMINUS_PROD_20100708.dmp logfile=DUMPS_TERMINUS_PROD_20100708_LOG_20100308.log
exclude=TABLE:\"='TABLE_NAME'\"
EXCLUDE=TABLE:\"IN \(\'TEMP\',\'TEMP1\'\)\"
Full schema
expdp atiq2/pass schemas=atiq2 directory=DUMPS_NAHAR3 dumpfile=atiq2_nd_20100907.dmp logfile=exp_atiq2_nd_20100907.log
In multiple files
expdp prod7/pass SCHEMAS=prod7 DIRECTORY=data_pump_dir DUMPFILE=saturn_prod7_%U.dmp FILESIZE=5G LOGFILE=saturn_prod7_exp.log
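To import such a multi-file dump set, the same %U template can be passed to impdp, a sketch reusing the names above:
impdp prod7/pass SCHEMAS=prod7 DIRECTORY=data_pump_dir DUMPFILE=saturn_prod7_%U.dmp LOGFILE=saturn_prod7_imp.log PARALLEL=4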
IMPORT
1: TABLE_EXISTS_ACTION=replace
impdp prod/pass DIRECTORY=DUMPS_NAHAR_NEPTUNE SCHEMAS=prod DUMPFILE=dbserver_scema_date.dmp TABLE_EXISTS_ACTION=replace PARALLEL=3
Full schema
impdp prod/pass DIRECTORY=DUMPS_NAHAR_NEPTUNE DUMPFILE=df01_prod_flexdoc_smaruf_nahar_20101012.dmp TABLE_EXISTS_ACTION=replace PARALLEL=3
2: REMAP_SCHEMA
impdp system/pass DIRECTORY=DUMPS_NAHAR4 DUMPFILE=atiq2_nd_20100907.dmp REMAP_SCHEMA=atiq2:states PARALLEL=3
-- REMAP_TABLE: We can't do it in 10g. This is only available in 11g.
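For reference, on 11g a REMAP_TABLE run would look like this (a sketch; the target name old_data_copy is made up):
impdp prod/pass DIRECTORY=DUMPS_NAHAR_NEPTUNE DUMPFILE=dbserver_scema_date.dmp REMAP_TABLE=prod.old_data:old_data_copy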
3: CONTENT=DATA_ONLY/ METADATA_ONLY
impdp system/pass DIRECTORY=DUMPS_NAHAR CONTENT=DATA_ONLY DUMPFILE=OLD_DATA_20100705.dmp REMAP_SCHEMA=PROD7:NAHAR PARALLEL=3
4: Import into a fresh schema
impdp states/pass DIRECTORY=DUMPS_NAHAR_NEPTUNE SCHEMAS=states DUMPFILE=states_dump_20100919.dmp PARALLEL=3
--wrong
expdp prod/pass DIRECTORY=DUMPS_TERMINUS_PROD_20100708 SCHEMAS=PROD DUMPFILE=DUMPS_TERMINUS_PROD_20100708.dmp logfile=DUMPS_TERMINUS_PROD_20100708_LOG_20100308.log
--To write dump files to more than one directory, use directory prefixes in DUMPFILE:
DUMPFILE=datadir1:schema1%U.dmp,datadir2:schema2%U.dmp
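A full command spreading the dump over two directory objects might look like this (a sketch; datadir1 and datadir2 must already exist and be granted to the user):
expdp system/pass SCHEMAS=prod DUMPFILE=datadir1:prod_a_%U.dmp,datadir2:prod_b_%U.dmp FILESIZE=5G LOGFILE=datadir1:prod_exp.log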
How to compress & uncompress
compress: nahar@mf01:~$ gzip df01_prod7_flexdoc_smaruf_nahar_20101012.dmp
uncompress: nahar@mf01:~$ gunzip df01_prod7_flexdoc_smaruf_nahar_20101012.dmp.gz
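Before deleting anything, gzip can test and list the compressed file (standard gzip options):
nahar@mf01:~$ gzip -t df01_prod7_flexdoc_smaruf_nahar_20101012.dmp.gz
nahar@mf01:~$ gzip -l df01_prod7_flexdoc_smaruf_nahar_20101012.dmp.gz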
More:
www.oracle-dba-online.com
http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
Working Example:
I have to go through the following steps:
Directory creation -> Export the dump file to that directory on Estern -> Copy the dump file to the other machine -> Create a directory object pointing to the location where I copied the dump -> Impdp the .dmp file from that directory on juptr
Step-1: Find where there is space available for the export
oracle@Estern:~$ df -h
I need to make a physical directory-
oracle@Estern:~$ cd /dump
oracle@Estern:/dump$ mkdir dumps_nahar_20091015
Step-2: Create Directory
oracle@Estern:~$ sqlplus
Enter user-name: / as sysdba
SQL> Create or replace directory dumps_nahar As '/dump/dumps_nahar_20091015';
Checking that the directory is really created:
SQL> select OWNER,DIRECTORY_NAME,DIRECTORY_PATH from DBA_DIRECTORIES where DIRECTORY_NAME like '%NAHAR%';
SQL> grant read,write on directory dumps_nahar to prod;
Step-3: Taking the Dump or Export the Dump
expdp prod/password DIRECTORY=dumps_nahar SCHEMAS=prd7 DUMPFILE=nahar_from_Estern_20091015_%U.dmp
Step-4: copy dump from Estern to Jupiter
Now I am copying the dump file from Estern to Juptr's /backup1/dumps_nahar_20091015 location.
bash-3.00$ scp nahar_from_Estern_20091015_01.dmp oracle@juptr:/backup1/dumps_nahar_20091015
--Export from Estern and copying the dump to juptr are done.
--Now I will import the dump into Juptr's nahar user.
oracle@juptr:~$ sqlplus / as sysdba
Step-5: I want to import the dump into the nahar user
If the nahar user does not exist:
SQL> create user nahar identified by nahar
default tablespace users quota unlimited on users;
SQL> grant dba to nahar;
Step-6: Create a directory object pointing to where I copied the dump
SQL> Create or replace directory dumps_nahar As '/backup1/dumps_nahar_20091015';
Checking that the directory is really created:
SQL> select OWNER,DIRECTORY_NAME,DIRECTORY_PATH from DBA_DIRECTORIES where DIRECTORY_NAME like '%NAHAR%';
SQL> grant read,write on directory dumps_nahar to nahar;
Step-7: Import the dump
impdp system/password DIRECTORY=dumps_nahar SCHEMAS=prd7 DUMPFILE=nahar_from_Estern_20091015_01.dmp REMAP_SCHEMA=prd7:nahar PARALLEL=3
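After the import, a quick sanity check is to count the objects that landed in the new schema (a minimal sketch):
SQL> select object_type, count(*) from dba_objects where owner='NAHAR' group by object_type order by object_type;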
11.02.2009
Database Status
col "Database Size" format a20
col "Free space" format a20
col "Used space" format a20
select round(sum(used.bytes) / 1024 / 1024 / 1024 ) || ' GB' "Database Size"
, round(sum(used.bytes) / 1024 / 1024 / 1024 ) -
round(free.p / 1024 / 1024 / 1024) || ' GB' "Used space"
, round(free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
from (select bytes
from v$datafile
union all
select bytes
from v$tempfile
union all
select bytes
from v$log) used
, (select sum(bytes) as p
from dba_free_space) free
group by free.p;
Database Size: 436 GB
Used space: 355 GB
Free space: 81 GB
How many tables are under the NAHAR user?
select count(*) from dba_objects do where do.owner = 'NAHAR'
and lower(object_type)=lower('Table');
Total tables: 490
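A per-tablespace breakdown using the same views, as a sketch:
col "Allocated" format a15
col "Free" format a15
select d.tablespace_name
, round(sum(d.bytes)/1024/1024/1024) || ' GB' "Allocated"
, round(nvl(f.free,0)/1024/1024/1024) || ' GB' "Free"
from dba_data_files d
, (select tablespace_name, sum(bytes) free from dba_free_space group by tablespace_name) f
where d.tablespace_name = f.tablespace_name(+)
group by d.tablespace_name, f.free
order by d.tablespace_name;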
10.28.2009
Printer Setting
My printer IP: 192.168.4.101
Queue Name: PS-FL1-U
Manufacturer/Model: HP/LaserJet 1020
Now, I have to follow these steps:
The following steps are for Debian users. This should work on Ubuntu too.
Users of other Linux distributions/OSes, please see the corresponding CUPS documentation.
* As root issue the following command
apt-get install cupsys foomatic-db foomatic-filters-ppds
* Fire up your browser and go to http://localhost:631
* Click "Administration"
* Click "Add printer"
* Give a name, e.g. "HP". The other two fields are optional. Click "Continue"
* Choose "LPD/LPR Host or Printer"
* Put "lpd://printer-ip-address/queue-name" from above table like "lpd://192.168.1.213/PS-FL2-U"
* Select "HP"
* Choose "HP LaserJet 1010 Foomatic/hpijs (en)"
* Click "Add Printer"
* You are done.
* Don't print the CUPS test page.
* To test whether the setup succeeded, print the first page of the document you actually want to print.
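The same printer can also be added from the command line (a sketch; the PPD path is an assumption, locate the right one on your system first):
lpadmin -p HP -E -v lpd://192.168.4.101/PS-FL1-U -P /usr/share/ppd/HP-LaserJet_1020.ppd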
OEM Basic
grant connect, resource to user1;
grant select_catalog_role to user1;
To create a new DBControl repository:
emca -config dbcontrol db -repos create
To start a DB Console:
emctl start dbconsole
Managing the agent:
oracle@terminus ~$ export ORACLE_SID=sid
oracle@terminus ~$ echo $ORACLE_SID
THRP
oracle@terminus ~$ emctl stop agent
Stopping Oracle Enterprise Manager 10g Database Control Stopped.
Agent is not running.
oracle@terminus ~$ emctl start agent
Starting agent .... started.
oracle@terminus ~$ emctl start dbconsole
oracle@terminus ~$ emctl status agent
Agent is Running and Ready
When there are errors, recreate the repository:
emca -config dbcontrol db -repos recreate
Oracle Enterprise Manager (OEM) is a set of systems management tools provided by Oracle Corporation for managing the Oracle environment and automating tasks.
Database control: OEM application for managing a single Oracle Database.
Grid control: OEM's web-based interface for centrally managing all your Oracle environments from a single point.
Implementation: OEM creates a SYSMAN schema in the Oracle DB to store metadata and statistics. SYSMAN also acts as the super-administrator account for first-time login. The default password for SYSMAN is oem_temp.
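If the first SYSMAN login fails, it is worth checking (and, if needed, unlocking) the account; new_password here is a placeholder:
SQL> select username, account_status from dba_users where username='SYSMAN';
SQL> alter user sysman identified by new_password account unlock;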
10.22.2009
ORA-00600
I was getting this error when running a query on saturn.
I was also running the same query on terminus, but there was no error.
Then I checked on both servers:
1. select banner from v$version;
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
PL/SQL Release 10.2.0.3.0 - Production
CORE 10.2.0.3.0 Production
TNS for Solaris: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production
Both were the same.
2. show parameter optimizer
@saturn:
optimizer_features_enable string 10.2.0.3
@terminus:
optimizer_features_enable string 10.2.0.2
Then on saturn I ran:
alter system set optimizer_features_enable='10.2.0.2';
And the problem was solved. :)
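A safer pattern for this kind of fix is to test at session level first and only then change the system (standard syntax; SCOPE=BOTH assumes an spfile):
alter session set optimizer_features_enable='10.2.0.2';
--re-run the failing query; if it works now, apply system-wide:
alter system set optimizer_features_enable='10.2.0.2' scope=both;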
10.19.2009
Tablespaces
CREATE SMALLFILE TABLESPACE "TEST_NAHAR"
DATAFILE '/oracle/oradata/TEST_NAHAR_01.dbf'
SIZE 10M
REUSE LOGGING
EXTENT MANAGEMENT LOCAL
UNIFORM SIZE 1024K
SEGMENT SPACE MANAGEMENT AUTO;
Syntax-2:
DROP TABLESPACE "DATA02" INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;
CREATE TABLESPACE "TS_DATA02"
DATAFILE '/oradata/datafiles/THRP/DF_DATA02.dbf'
SIZE 1G
AUTOEXTEND ON
NEXT 50M
MAXSIZE 30G
BLOCKSIZE 8K;
Syntax-3:
CREATE TABLESPACE "DATA02"
DATAFILE '/oracle/oradata/JUPITER/DATA02_01.dbf'
SIZE 100M
BLOCKSIZE 8K;
DROP:
DROP TABLESPACE TEST_NAHAR
INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;
ALTER:
SQL> ALTER TABLESPACE users OFFLINE;
SQL> ALTER TABLESPACE users ONLINE;
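Two other common ALTERs, growing an existing datafile and adding a new one (a sketch; paths and sizes are examples):
SQL> ALTER DATABASE DATAFILE '/oracle/oradata/TEST_NAHAR_01.dbf' RESIZE 100M;
SQL> ALTER TABLESPACE TEST_NAHAR ADD DATAFILE '/oracle/oradata/TEST_NAHAR_02.dbf' SIZE 50M;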
DBA_TABLESPACES Details:
If I set REUSE LOGGING, then LOGGING=LOGGING will set.
If I set BLOCKSIZE 8K, then BLOCKSIZE=8192 will set.
If I set BLOCKSIZE 16K, then BLOCKSIZE=16384 will set.
Default Settings:
LOGGING=LOGGING
EXTENT MANAGEMENT = LOCAL
SEGMENT_SPACE_MANAGEMENT=AUTO
DBA_DATA_FILES Details:
BYTES=10485760 =>SIZE 10M (10*1024*1024)
Q: Which Tablespace contains which datafile?
select t.tablespace_name, d.file_name, t.segment_space_management, t.block_size, t.bigfile
from DBA_TABLESPACES t,DBA_DATA_FILES d
where t.tablespace_name=d.tablespace_name
order by t.tablespace_name;
Or simply,
select d.tablespace_name,d.file_name
from DBA_DATA_FILES d order by d.tablespace_name;
Q: How to Check OBJECT on Tablespace?
select owner, segment_name, segment_type, tablespace_name
from dba_segments
where lower(segment_name) like lower('%SYS_LOB0000151343C00022%')
order by owner, segment_name;
Q: How to Check all OBJECTs of a Tablespace?
select owner, segment_name, segment_type, tablespace_name
from dba_segments
where lower(tablespace_name) like lower('%DATA02%')
order by owner, segment_name;
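To actually move a segment found this way into another tablespace (a sketch; object names are made up, and indexes must be rebuilt because MOVE leaves them UNUSABLE):
alter table NAHAR.MY_TABLE move tablespace DATA02;
alter index NAHAR.MY_TABLE_PK rebuild tablespace DATA02;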
10.08.2009
Working with Vi
→ Move cursor one space right.
← Move cursor one space left.
↑ Move cursor up one line.
↓ Move cursor down one line.
ctrl-F Move forward one screen.
ctrl-B Move backward one screen.
$ Move cursor to end of line.
^ Move cursor to beginning of line.
:1 Move to first line of file
:$ Move to last line of file
/ Search for a character string.
? Reverse search for a character string.
x Delete the character at the cursor position.
dd Delete the current line.
p Paste data that was cut with x or dd commands.
u Undo.
Entering Input Mode:
a Add text after the cursor.
i Insert text before the cursor.
R Replace text starting at the cursor.
o Insert a new line after the current one.
Entering Command Mode:
esc Switch from Input mode to Command mode.
Exiting or Saving Your File:
:w Write file to disk, without exiting editor.
ZZ Save the file and exit.
:q! Quit without saving.
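Two more command-mode operations I use often (standard vi):
:%s/old/new/g Replace every occurrence of "old" with "new" in the file.
:wq Write file to disk and exit.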
10.05.2009
Hints
Oracle comes with an optimizer that promises to optimize a query's execution plan.
When this optimizer is really doing a good job, no hints should be required at all.
Using any hint (except /*+ rule */) invokes the cost-based optimizer.
Example:
SELECT /*+ index(t1 t1_abc) index(t2 t2_abc) */ COUNT(*)
FROM t1, t2
WHERE t1.col1 = t2.col1;
FIRST_ROWS(n):
The FIRST_ROWS hint explicitly chooses the cost-based approach to optimize a statement block with a goal of best response time (minimum resource usage to return first row).
set autotrace trace exp
SELECT table_name FROM dba_tables
WHERE owner = 'SYS' AND table_name LIKE '%$' ORDER BY 1;
SELECT /*+ FIRST_ROWS(10) */ table_name
FROM dba_tables WHERE owner = 'SYS'
AND table_name LIKE '%$' ORDER BY 1;
Access Method INDEX:
CREATE INDEX ix_customers_gender
ON customers(gender);
set autotrace traceonly explain
SELECT * FROM customers WHERE gender = 'M';
SELECT /*+ INDEX(customers ix_customers_gender) */ *
FROM customers WHERE gender = 'M';
SELECT /*+ INDEX_ASC(customers ix_customers_gender) */ *
FROM customers WHERE gender = 'M';
SELECT /*+ INDEX_DESC(customers ix_customers_gender) */ *
FROM customers WHERE gender = 'M';
We can use FIRST_ROWS and INDEX together:
/*+FIRST_ROWS(20) INDEX(table_alias index_name) */
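A complete statement combining both, as a sketch reusing the customers example above:
SELECT /*+ FIRST_ROWS(20) INDEX(c ix_customers_gender) */ *
FROM customers c
WHERE gender = 'M';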
Links:
http://psoug.org/reference/hints.html
http://www.adp-gmbh.ch/ora/sql/hints/index.html
9.27.2009
Partition Index rebuild - PL/SQL
DECLARE
tab_name VARCHAR2(30);
stmt VARCHAR2(500);
sqlstr VARCHAR2(800);
TYPE TabCurTyp IS REF CURSOR;
ind_cursor TabCurTyp;
indx_name VARCHAR2(35);
part_name VARCHAR2(35);
BEGIN
tab_name := UPPER('&tab_name');
sqlstr:='SELECT distinct uip.index_name, uip.partition_name
FROM USER_IND_PARTITIONS uip, USER_TAB_PARTITIONS utp, USER_INDEXES ui
WHERE uip.partition_name= utp.partition_name
AND ui.index_name = uip.index_name
AND ui.table_name = utp.table_name
AND utp.table_name LIKE '''||tab_name||'%''';
OPEN ind_cursor FOR sqlstr;
LOOP
FETCH ind_cursor INTO indx_name,part_name;
EXIT WHEN ind_cursor%NOTFOUND;
stmt := 'ALTER INDEX ' || indx_name || ' REBUILD PARTITION ' ||part_name ||' ONLINE';
DBMS_OUTPUT.PUT_LINE(stmt);
EXECUTE IMMEDIATE stmt;
END LOOP;
END;
/
Here you only have to supply the table name when prompted (&tab_name).
Office work- 10 September, 2009.
Single Syntax:
ALTER INDEX index_nm REBUILD PARTITION partition_name;
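To rebuild only what is actually broken, first list the unusable index partitions (standard dictionary view):
SELECT index_name, partition_name, status
FROM user_ind_partitions
WHERE status <> 'USABLE';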
8.17.2009
MyISAM & InnoDB Compared
If there are many modifications of the data, it's said that InnoDB works faster because it uses row locking instead of table locking, like MyISAM. However, if there are mainly SELECT statements, a MyISAM table might be faster.
http://dev.mysql.com/doc/refman/5.0/en/converting-tables-to-innodb.html
Make sure that you do not fill up the tablespace: InnoDB tables require a lot more disk space than MyISAM tables. If an ALTER TABLE operation runs out of space, it starts a rollback, and that can take hours if it is disk-bound.
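The conversion itself is a single statement (table name is an example):
ALTER TABLE my_table ENGINE=InnoDB;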
6.29.2009
DDL - DML - DCL - TCL
Data Definition Language (DDL) statements are used to define the database structure or schema. Examples:
- CREATE - to create objects in the database
- ALTER - alters the structure of the database
- DROP - delete objects from the database
- TRUNCATE - remove all records from a table, including all spaces allocated for the records
- COMMENT - add comments to the data dictionary
- RENAME - rename an object
DML
Data Manipulation Language (DML) statements are used for managing data within schema objects. Some examples:
- SELECT - retrieve data from a database
- INSERT - insert data into a table
- UPDATE - updates existing data within a table
- DELETE - deletes all records from a table, the space for the records remain
- MERGE - UPSERT operation (insert or update)
- CALL - call a PL/SQL or Java subprogram
- EXPLAIN PLAN - explain access path to data
- LOCK TABLE - control concurrency
DCL
Data Control Language (DCL) statements. Some examples:
- GRANT - gives users access privileges to the database
- REVOKE - withdraw access privileges given with the GRANT command
TCL
Transaction Control (TCL) statements are used to manage the changes made by DML statements. It allows statements to be grouped together into logical transactions.
- COMMIT - save work done
- SAVEPOINT - identify a point in a transaction to which you can later roll back
- ROLLBACK - restore the database to the state at the last COMMIT
- SET TRANSACTION - Change transaction options like isolation level and what rollback segment to use
DML commands cannot be rolled back once a DDL command is executed immediately after them; DDL after DML causes an implicit ("auto") commit.
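A small TCL sketch showing SAVEPOINT and partial rollback (the accounts table and values are made up):
INSERT INTO accounts VALUES (1, 100);
SAVEPOINT before_update;
UPDATE accounts SET balance = 0;
ROLLBACK TO before_update; -- undoes the UPDATE, keeps the INSERT
COMMIT; -- makes the INSERT permanent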
6.08.2009
MySQL - Overview
- Managing the Database
- Understanding MySQL Table Type
- Working with tables
- Creating and removing Index
- Querying data from MySQL
- INSERT-UPDATE-DELETE
- Database Table Maintenance
Table Types
- ISAM
- MyISAM
- InnoDB
- BerkeleyDB (BDB)
- MERGE
- HEAP
- Only InnoDB and BDB tables are transaction safe and
- Only MyISAM tables support full-text indexing and searching feature.
- MyISAM is also the default table type.
ISAM
ISAM was deprecated and removed in version 5.x; all of its functionality is entirely replaced by MyISAM. An ISAM table has a hard size limit of 4 GB and is not portable.
MyISAM
- This is the default type when you create a table.
- MyISAM tables are very fast but not transaction-safe.
- The maximum table size depends on the OS, and the data files are portable.
- Hard limits: 64 keys per table and a maximum key length of 1024 bytes.
InnoDB
- InnoDB tables are transaction-safe.
- Supports row-level locking.
- Foreign keys are supported in InnoDB tables.
- The data of an InnoDB table can be stored in more than one file, so
- the table size is limited only by available disk space.
- Like the MyISAM table type, InnoDB data files are portable.
Disadvantage - in comparison with MyISAM, it takes more disk space.
BDB
- BDB is similar to InnoDB in being transaction-safe.
- It supports page-level locking, but its data files are not portable.
MERGE
The MERGE table type was added to treat multiple MyISAM tables as a single table, so it removes the size limitation of MyISAM tables.
HEAP
- A HEAP table is stored in memory, so it is the fastest one.
- Because of this storage mechanism, the data is lost on power failure, and it can sometimes cause the server to run out of memory.
Disadvantage: HEAP tables do not support AUTO_INCREMENT, BLOB, or TEXT columns.
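To see which of these engines a given server actually supports, and which engine a table uses (standard MySQL commands):
SHOW ENGINES;
SHOW TABLE STATUS LIKE 'table_name';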
DDL
Statement Pattern
CREATE TABLE [IF NOT EXISTS] table_name(
column_list
) type=table_type
Example-1
CREATE TABLE employees (
employeeNumber int(11) NOT NULL,
lastName varchar(50) NOT NULL,
officeCode varchar(10) NOT NULL,
reportsTo int(11) default NULL,
PRIMARY KEY (employeeNumber)
);
Example-2: Defining a Composite Primary Key
CREATE TABLE payments (
customerNumber int(11) NOT NULL,
checkNumber varchar(50) NOT NULL,
paymentDate datetime NOT NULL,
amount double NOT NULL,
PRIMARY KEY (customerNumber,checkNumber)
);
Example-3: Defining Storage Engine
CREATE TABLE database_name.table_name(
column1 INT NOT NULL AUTO_INCREMENT ,
column2 VARCHAR( 20 ) NOT NULL ,
column3 VARCHAR( 20 ) CHARACTER SET utf8 COLLATE utf8_unicode_ci NULL ,
PRIMARY KEY ( column1)
) ENGINE = MYISAM ;
Example-4: Create table from another table
CREATE TABLE new_table_name
AS (select * from old_table);
CREATE TABLE new_table_name
AS (select col1, col2 from old_table where cond1);
DESCRIBE TABLE: DESCRIBE table_name;
SHOW TABLES: SHOW TABLES;
This lists all tables in the currently selected database.
Internal Locking Methods
MySQL uses -
- Table-level locking for MyISAM, MEMORY, and MERGE tables, and
- Row-level locking for InnoDB tables.
If you want to use a storage engine with row-level locking, you should look at what your application does and what mix of select and update statements it uses.
For example, most Web applications perform many selects, relatively few deletes, updates based mainly on key values, and inserts into a few specific tables. The base MySQL MyISAM setup is very well tuned for this.
Table locking (in MySQL) is deadlock-free for storage engines that use table-level locking. Deadlock avoidance is managed by always requesting all needed locks at once at the beginning of a query and always locking the tables in the same order.
MySQL grants table write locks as follows:
1. If there are no locks on the table, put a write lock on it.
2. Otherwise, put the lock request in the write lock queue.
MySQL grants table read locks as follows:
1. If there are no write locks on the table, put a read lock on it.
2. Otherwise, put the lock request in the read lock queue.
Updates normally get higher priority than SELECT activity for the table. However, if you have many updates for a table, SELECT statements wait until there are no more updates.
The MyISAM storage engine supports concurrent inserts to reduce contention between readers and writers for a given table: if a MyISAM table has no free blocks in the middle of the data file, rows are always inserted at the end of the data file. In this case, you can freely mix concurrent INSERT and SELECT statements for a MyISAM table without locks; that is, you can insert rows into a MyISAM table at the same time other clients are reading from it. Holes can result from rows having been deleted from, or updated in, the middle of the table. If there are holes, concurrent inserts are disabled, but are enabled again automatically when all holes have been filled with new data. This behavior is altered by the concurrent_insert system variable.
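The variable mentioned above can be inspected and changed at runtime (standard syntax; the value 2 requires MySQL 5.0.6+):
SHOW VARIABLES LIKE 'concurrent_insert';
SET GLOBAL concurrent_insert = 1; -- 0 = off, 1 = only when there are no holes, 2 = always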
Advantages of row-level locking:
- Fewer lock conflicts when different sessions access different rows
- Fewer changes for rollbacks
- Possible to lock a single row for a long time
Disadvantages of row-level locking:
- Requires more memory than table-level locks
- Slower than table-level locks when used on a large part of the table because you must acquire many more locks
- Slower than other locks if you often do GROUP BY operations on a large part of the data or if you must scan the entire table frequently
Table locking is preferable when:
- Most statements for the table are reads
- Statements for the table are a mix of reads and writes
- SELECT combined with concurrent INSERT statements, and very few UPDATE or DELETE statements
- Many scans or GROUP BY operations on the entire table without any writers
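For completeness, explicit table locks look like this (t1 is an example table):
LOCK TABLES t1 READ; -- others may read t1, nobody may write it
-- ...run your SELECTs...
UNLOCK TABLES;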
6.07.2009
Managing the Database
To create a database in MySQL, you use the CREATE DATABASE statement:
Command: CREATE DATABASE [IF NOT EXISTS] database_name;
OR: CREATE DATABASE database_name;
SHOW DB:
The SHOW DATABASES statement lists all databases on your server.
Command: SHOW DATABASES;
SELECT DB:
To select the database you will work with, use this statement.
Command: USE database_name;
REMOVE DB:
Removing a database permanently deletes the database and all data and related objects inside it; this cannot be undone.
Command: DROP DATABASE [IF EXISTS] database_name;
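A minimal end-to-end session with these statements (test_db is an example name):
CREATE DATABASE IF NOT EXISTS test_db;
SHOW DATABASES;
USE test_db;
DROP DATABASE IF EXISTS test_db;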
Link -1 : www.mysqltutorial.org
2.23.2009
Oracle DBA scripts - Performance related
1. Identify heavy SQL (Get the SQL with heavy BUFFER_GETS)
2. Identify heavy SQL (Get the SQL with heavy DISK_READS)
3. Last 30 minutes: which resources are in high demand on your system?
4. What user is waiting the most?
5. What SQL is currently using the most resources?
6. What object is currently causing the highest resource waits?
7. Wait related.
8. Top SQL elapsed time and CPU time in a given time range.
9. How many times was a query executed?
1. Identify heavy SQL (Get the SQL with heavy BUFFER_GETS)
select sql_text, executions, buffer_gets
from v$sqlarea
where decode(executions,0,buffer_gets,buffer_gets/executions)
> (select avg(decode(executions,0,buffer_gets,buffer_gets/executions))
+ stddev(decode(executions,0,buffer_gets,buffer_gets/executions))
from v$sqlarea)
and parsing_user_id != 0; --0 = SYS (assumed; excludes data-dictionary SQL)
2. Identify heavy SQL (Get the SQL with heavy DISK_READS)
select sql_text, executions, disk_reads
from v$sqlarea
where decode(executions ,0,disk_reads,disk_reads/executions)
> (select avg(decode(executions,0,disk_reads,disk_reads/executions))
+ stddev(decode(executions,0,disk_reads,disk_reads/executions))
from v$sqlarea)
and parsing_user_id != 0; --0 = SYS (assumed; excludes data-dictionary SQL)
3. Last 30 minutes: which resources are in high demand on your system?
select active_session_history.event,
sum(active_session_history.wait_time +
active_session_history.time_waited) total_wait_time
from v$active_session_history active_session_history
where active_session_history.sample_time between sysdate - 60/2880 and sysdate
group by active_session_history.event
order by 2;
4. What user is waiting the most?
select sesion.sid,
sesion.username,
sum(active_session_history.wait_time +
active_session_history.time_waited) total_wait_time
from v$active_session_history active_session_history,
v$session sesion
where active_session_history.sample_time between sysdate - 60/2880 and sysdate
and active_session_history.session_id = sesion.sid
group by sesion.sid, sesion.username
order by 3;
5. What SQL is currently using the most resources?
select active_session_history.user_id,
dba_users.username,
sqlarea.sql_text,
sum(active_session_history.wait_time +
active_session_history.time_waited) total_wait_time
from v$active_session_history active_session_history,
v$sqlarea sqlarea,
dba_users
where active_session_history.sample_time between sysdate - 60/2880 and sysdate
and active_session_history.sql_id = sqlarea.sql_id
and active_session_history.user_id = dba_users.user_id
group by active_session_history.user_id,sqlarea.sql_text, dba_users.username
order by 4 desc;
6. What object is currently causing the highest resource waits?
column OBJECT_NAME format a30
column EVENT format a30
select dba_objects.object_name,
dba_objects.object_type,
active_session_history.event,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
dba_objects
where active_session_history.sample_time between sysdate - 60/2880 and sysdate
and active_session_history.current_obj# = dba_objects.object_id
group by dba_objects.object_name, dba_objects.object_type, active_session_history.event
order by 4 desc;
7. Wait related
SELECT distinct wait_class#, wait_class
FROM v$event_name ORDER BY wait_class#;
SELECT wait_class_id, wait_class#, wait_class, total_waits, time_waited
FROM v$system_wait_class
Order by time_waited desc;
8. Top SQL elapsed time and CPU time in a given time range
SELECT SQL_TEXT,X.CPU_TIME
FROM DBA_HIST_SQLTEXT DHST,
(SELECT DHSS.SQL_ID SQL_ID,SUM(DHSS.CPU_TIME_DELTA) CPU_TIME
FROM DBA_HIST_SQLSTAT DHSS
WHERE DHSS.SNAP_ID IN(SELECT SNAP_ID FROM DBA_HIST_SNAPSHOT
WHERE BEGIN_INTERVAL_TIME>=TO_DATE('10/16/2008','MM/DD/YYYY')
AND END_INTERVAL_TIME<=TO_DATE('10/18/2008','MM/DD/YYYY'))
GROUP BY DHSS.SQL_ID) X
WHERE X.SQL_ID=DHST.SQL_ID
ORDER BY X.CPU_TIME DESC;
--X.ELAPSED_TIME/1000000 => from microseconds to seconds
--X.ELAPSED_TIME/1000000/X.EXECUTIONS_DELTA => average elapsed seconds per execution
SELECT SQL_TEXT
,ROUND(X.ELAPSED_TIME/1000000/X.EXECUTIONS_DELTA,3) ELAPSED_TIME_SEC
,ROUND(X.CPU_TIME /1000000/X.EXECUTIONS_DELTA,3) CPU_TIME_SEC
, EXECUTIONS_DELTA
,X.ELAPSED_TIME
,X.CPU_TIME
,X.EXECUTIONS_DELTA
FROM DBA_HIST_SQLTEXT DHST,
(SELECT DHSS.SQL_ID SQL_ID,SUM(DHSS.CPU_TIME_DELTA) CPU_TIME,
SUM (DHSS.ELAPSED_TIME_DELTA) ELAPSED_TIME
, SUM(DHSS.EXECUTIONS_DELTA) EXECUTIONS_DELTA
FROM DBA_HIST_SQLSTAT DHSS
WHERE DHSS.SNAP_ID IN(SELECT SNAP_ID FROM DBA_HIST_SNAPSHOT
WHERE BEGIN_INTERVAL_TIME >= TO_DATE('17-feb-2009 08:00', 'dd-mon-yyyy hh24:mi')
AND END_INTERVAL_TIME <= TO_DATE('17-feb-2009 16:00', 'dd-mon-yyyy hh24:mi'))
GROUP BY DHSS.SQL_ID) X
WHERE X.SQL_ID=DHST.SQL_ID
ORDER BY ELAPSED_TIME_SEC DESC;
For a specific owner:
SELECT SQL_TEXT
,ROUND(X.ELAPSED_TIME/1000000,3) ELAPSED_TIME_SEC
,ROUND(X.CPU_TIME /1000000,3) CPU_TIME_SEC
FROM DBA_HIST_SQLTEXT DHST,
(SELECT DHSS.SQL_ID SQL_ID, SUM(DHSS.CPU_TIME_DELTA) CPU_TIME
, SUM (DHSS.ELAPSED_TIME_DELTA) ELAPSED_TIME
, SUM(DHSS.EXECUTIONS_DELTA) EXECUTIONS_DELTA
FROM DBA_HIST_SQLSTAT DHSS
WHERE DHSS.SNAP_ID IN
(SELECT SNAP_ID FROM DBA_HIST_SNAPSHOT
WHERE BEGIN_INTERVAL_TIME >= TO_DATE('01-feb-2009 08:00', 'dd-mon-yyyy hh24:mi')
AND END_INTERVAL_TIME <= TO_DATE('17-feb-2009 16:00', 'dd-mon-yyyy hh24:mi'))
AND DHSS.parsing_schema_name='PROD8'
GROUP BY DHSS.SQL_ID ) X
WHERE X.SQL_ID = DHST.SQL_ID
ORDER BY ELAPSED_TIME_SEC DESC;
--Latest
SELECT dbms_lob.substr(SQL_TEXT,4000,1) as SQL
,ROUND(X.ELAPSED_TIME/1000000,2) "ELAPSED TIME (sec)" --From Micro second to second
,ROUND(X.CPU_TIME /1000000,2) "CPU TIME (sec)"
,X.EXECUTIONS_DELTA as "TOTAL NO OF EXECUTIONS"
,ROUND(((X.ELAPSED_TIME/1000000) /X.EXECUTIONS_DELTA),2) as "Execution Time Per Query (sec)"
FROM DBA_HIST_SQLTEXT DHST,
(
SELECT DHSS.SQL_ID SQL_ID, SUM(DHSS.CPU_TIME_DELTA) CPU_TIME
, SUM (DHSS.ELAPSED_TIME_DELTA) ELAPSED_TIME
, SUM (DHSS.EXECUTIONS_DELTA) EXECUTIONS_DELTA --DHSS.EXECUTIONS_DELTA = No of queries execution (per hour)
FROM DBA_HIST_SQLSTAT DHSS
WHERE DHSS.SNAP_ID IN
(
SELECT SNAP_ID FROM DBA_HIST_SNAPSHOT
WHERE BEGIN_INTERVAL_TIME > TO_DATE('20-dec-10 23:59', 'dd-mon-yy hh24:mi')
AND END_INTERVAL_TIME < TO_DATE('28-dec-10 00:01', 'dd-mon-yy hh24:mi')
)
AND DHSS.parsing_schema_name='PROD7' GROUP BY DHSS.SQL_ID
) X
WHERE X.SQL_ID = DHST.SQL_ID
ORDER BY "ELAPSED TIME (sec)" DESC;
9. How many times was a query executed?
Per-hour SQL execution growth:
begin_interval_time is the snapshot begin time.
As the snapshot interval is fixed at 60 minutes, you can see the execution growth per hour.
executions_delta shows the number of executions of the query per interval (here, per hour).
select s.begin_interval_time, sql.sql_id as sql_id, sql.executions_delta as exe_delta, sql.EXECUTIONS_TOTAL
from dba_hist_sqlstat sql, dba_hist_snapshot s
where sql_id='b4q5gbua0dzy3'
and s.snap_id = SQL.snap_id
and s.begin_interval_time> TO_date('18-oct-2008 14:00', 'dd-mon-yyyy hh24:mi')
and s.begin_interval_time< TO_date('21-oct-2008 18:30', 'dd-mon-yyyy hh24:mi')
order by s.begin_interval_time;
2.22.2009
Oracle DBA scripts - Bind variable related
1. Find the value of Bind variable
select NAME,POSITION,DATATYPE_STRING,VALUE_STRING from v$sql_bind_capture where sql_id='d9kf91muzy2wq';
v$sql_bind_capture Details: http://youngcow.net/doc/oracle10g/server.102/b14237/dynviews_2114.htm
To get data from history:
DBA_HIST_SQLBIND
DBA_HIST_SQL_BIND_METADATA
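A sketch against the history view, reusing the sql_id above:
select snap_id, name, position, datatype_string, value_string
from dba_hist_sqlbind
where sql_id='d9kf91muzy2wq'
order by snap_id, position;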
Oracle DBA scripts - Session related
1. Find the instance name (ORACLE_SID)
echo $ORACLE_SID
or, less .profile
2. Find my SID
SELECT sys_context('USERENV','SID') from dual;
3. Find others SID
SELECT a.sid,a.serial#,b.spid,b.pid,a.username,a.osuser,a.machine
FROM v$session a,v$process b
WHERE a.username IS NOT NULL
AND a.paddr=b.addr
Order by a.sid,a.username;
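With the sid and serial# from this query, a runaway session can be killed if necessary (values are examples):
alter system kill session '123,4567';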
4. Find connected session,login_time in a given time
SELECT a.sid,a.serial#,a.username,a.osuser,a.machine,a.service_name
,to_char(a.logon_time, 'hh24:mi dd/mm/yy') login_time
FROM v$session a,v$process b
WHERE a.username IS NOT NULL
AND a.paddr=b.addr AND a.type='USER'
--AND a.logon_time> = sysdate- 1/48 --last 30 min
AND a.logon_time >= to_date ('2008-10-18 15:00:00','YYYY-MM-DD HH24:MI:SS') --lower bound
AND a.logon_time <= to_date ('2008-10-18 15:00:00','YYYY-MM-DD HH24:MI:SS') --upper bound; set to the end of the range
Order by a.sid,a.username;