This blog is a note for self-learning. Some writings are my own and some are collected, just to keep things in an organized way.
Example 1: Finding All Modifications in the Last Archived Redo Log File
You need the SELECT ANY TRANSACTION privilege to query V$LOGMNR_CONTENTS; a minimal grant sketch follows.
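A sketch of that grant (the mining user nahar is borrowed from later in this post):
SQL> GRANT SELECT ANY TRANSACTION TO nahar;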
Step 1:
This example assumes that you want to mine the redo log file that was most recently archived. Determine which file that is:
SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
-------------------------------------------
/oracle/DATABASE_NAME/archivelog/2008_08_31/o1_mf_1_8554_4cngdomq_.arc
Step 2:
Specify the list of redo log files to be analyzed.
Specify the redo log file that was returned by the query in Step 1. The list will consist of one redo log file.
BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8554_4cngdomq_.arc');
END;
/
Alternatively, using EXECUTE with named parameters:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', -
OPTIONS => DBMS_LOGMNR.NEW);
Step 3:
Start LogMiner.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.
Step 4:
Query the V$LOGMNR_CONTENTS view.
Note that there are four transactions (two of them were committed within the redo log file being analyzed, and two were not).
The output shows the DML statements in the order in which they were executed; thus transactions interleave among themselves.
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
USR XID SQL_REDO SQL_UNDO
---- --------- --------------------- -------------------------------
HR 1.11.1476 set transaction read write;
HR 1.11.1476 insert into "HR"."EMPLOYEES"( delete from "HR"."EMPLOYEES"
"EMPLOYEE_ID","FIRST_NAME", where "EMPLOYEE_ID" = '306'
"LAST_NAME","EMAIL", and "FIRST_NAME" = 'Nandini'
"PHONE_NUMBER","HIRE_DATE", and "LAST_NAME" = 'Shastry'
"JOB_ID","SALARY", and "EMAIL" = 'NSHASTRY'
"COMMISSION_PCT","MANAGER_ID", and "PHONE_NUMBER" = '1234567890'
"DEPARTMENT_ID") values and "HIRE_DATE" = TO_DATE('10-JAN-2003
('306','Nandini','Shastry', 13:34:43', 'dd-mon-yyyy hh24:mi:ss')
'NSHASTRY', '1234567890', and "JOB_ID" = 'HR_REP' and
TO_DATE('10-jan-2003 13:34:43', "SALARY" = '120000' and
'dd-mon-yyyy hh24:mi:ss'), "COMMISSION_PCT" = '.05' and
'HR_REP','120000', '.05', "DEPARTMENT_ID" = '10' and
'105','10'); ROWID = 'AAAHSkAABAAAY6rAAO';
------------------------------------------------------------------------------------
SQL> col SQL_REDO format a30
SQL> col SQL_UNDO format a30
SQL> column USERNAME format a15
SELECT username, session# SID, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
SQL_REDO, SQL_UNDO, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp
FROM V$LOGMNR_CONTENTS WHERE username IN ('PROD7');
Step 5:
End the LogMiner session.
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();
-------------------------------------------------
Example 2: Grouping DML Statements into Committed Transactions
Step 1:
Determine which redo log file was most recently archived.
SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
-------------------------------------------
/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8554_4cngdomq_.arc
Step 2:
Specify the redo log file to be analyzed.
BEGIN DBMS_LOGMNR.ADD_LOGFILE(
LOGFILENAME => '/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8554_4cngdomq_.arc',
OPTIONS => DBMS_LOGMNR.NEW);
END;
/
Step 3:
Start LogMiner by specifying the dictionary to use and the COMMITTED_DATA_ONLY
option.
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY);
Step 4:
Query the V$LOGMNR_CONTENTS view.
SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO,
SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
Example 3: Using the LogMiner Dictionary in the Redo Log Files
This example shows how to use a dictionary that has been extracted to the redo log files.
When you use the dictionary in the online catalog, you must mine the redo log files in the same database that generated them.
Using the dictionary contained in the redo log files enables you to mine redo log files in a different database.
This example assumes that you know that you want to mine the redo log file that was most recently archived.
SELECT NAME, SEQUENCE# FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME SEQUENCE#
------------------------------------------------------------------------------------------------------ ----------------
/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8568_4cpdcgcw_.arc 8568
SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end
FROM V$ARCHIVED_LOG
WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG
WHERE DICTIONARY_END = 'YES' and SEQUENCE# <= 8568);
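If the dictionary spans the files between DICTIONARY_BEGIN and DICTIONARY_END, add those files to the LogMiner list first; a sketch with a hypothetical file name:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => '/oracle/database_name/archivelog/2008_08_31/o1_mf_1_8550_dict_.arc', -
OPTIONS => DBMS_LOGMNR.NEW);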
The redo log files of interest must also be added before the DBMS_LOGMNR.START_LOGMNR procedure call. To see the archived logs generated on a given day:
SELECT NAME, to_char(FIRST_TIME,'mm/dd/yy hh24:mi:ss') Time FROM V$ARCHIVED_LOG
WHERE FIRST_TIME like ('01-SEP-08%');
Step 1: Create a list of redo log files to mine. The following procedure adds all archived logs generated after a specified start time; the EXECUTE statement below calls it with the starting date 01-SEP-2008.
-- my_add_logfiles
-- Add all archived logs generated after a specified start_time.
CREATE OR REPLACE PROCEDURE my_add_logfiles (in_start_time IN DATE) AS
CURSOR c_log IS
SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME >= in_start_time;
my_option pls_integer := DBMS_LOGMNR.NEW;
BEGIN
FOR c_log_rec IN c_log
LOOP
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => c_log_rec.name,
OPTIONS => my_option);
my_option := DBMS_LOGMNR.ADDFILE;
DBMS_OUTPUT.PUT_LINE('Added logfile ' || c_log_rec.name);
END LOOP;
END;
/
EXECUTE my_add_logfiles(in_start_time => '01-sep-2008');
Step 2:
Query the V$LOGMNR_LOGS to see the list of redo log files.
SQL> col NAME format a70
SQL> SELECT FILENAME name, LOW_TIME start_time, FILESIZE bytes FROM V$LOGMNR_LOGS;
Step 3:
Start LogMiner, in either of two ways:
i.
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
DBMS_LOGMNR.PRINT_PRETTY_SQL);
Or
ii.
SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
STARTTIME => '01-sep-2008 15:00:00', -
ENDTIME => '02-sep-2008 16:00:00', -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
DBMS_LOGMNR.PRINT_PRETTY_SQL);
To avoid the need to specify the date format in the call to the PL/SQL DBMS_LOGMNR.START_LOGMNR procedure, you can issue the SQL ALTER SESSION SET NLS_DATE_FORMAT statement first, as shown in option ii above.
----------------------
select username, timestamp, scn_to_timestamp(SCN),
seg_type_name, seg_name, table_space, session# SID, serial#, operation
from v$logmnr_contents
where table_name = 'LOGIN' and
scn_to_timestamp(SCN) > TO_DATE('27-AUG-2008 00:22:39', 'dd-mon-yyyy hh24:mi:ss'); --run saturn
select username, scn_to_timestamp(SCN) as DataTime,
seg_type_name, seg_name, session# SID, serial#, operation
from v$logmnr_contents
where table_name = 'LOGIN' and
scn_to_timestamp(SCN) > TO_DATE('27-AUG-2008 00:22:39', 'dd-mon-yyyy hh24:mi:ss');
-----------------------------------
SQL> set lines 200
SQL> column USERNAME format a20
SQL> column SEG_NAME format a15
SQL> column SEG_TYPE_NAME format a15
SQL> column TABLE_SPACE format a15
select username, scn_to_timestamp(SCN) as DataTime,
seg_type_name, seg_name, session# SID, operation
from v$logmnr_contents
where table_name = 'LOGIN'
and scn_to_timestamp(SCN) > TO_DATE('27-AUG-2008 01:00:00', 'dd-mon-yyyy hh24:mi:ss')
and scn_to_timestamp(SCN) < TO_DATE('...', 'dd-mon-yyyy hh24:mi:ss'); -- upper-bound value lost in the original post
select username, scn_to_timestamp(SCN) as DataTime, timestamp,
seg_type_name, seg_name, session# SID, operation
from v$logmnr_contents
where username in ('PROD7');
select username,timestamp,seg_type_name, seg_name, session# SID, operation
from v$logmnr_contents
where table_name = 'LOGIN';
----test
select username,timestamp,seg_type_name, seg_name, session# SID, operation
from v$logmnr_contents
where table_name = 'LOGIN'
and timestamp > TO_DATE('01-SEP-2008 20:00:00', 'dd-mon-yyyy hh24:mi:ss')
and timestamp < TO_DATE('...', 'dd-mon-yyyy hh24:mi:ss'); -- upper-bound value lost in the original post
2. Extracting a LogMiner Dictionary to the Redo Log Files: (Mine in a Different Database)
To extract a LogMiner dictionary to the redo log files, the database must be open, in ARCHIVELOG mode, and archiving must be enabled.
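The extraction itself is the documented DBMS_LOGMNR_D.BUILD call with the STORE_IN_REDO_LOGS option:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);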
Specify the names of the start and end redo log files, and possibly other logs in between them, with the ADD_LOGFILE
procedure when you are preparing to begin a LogMiner session.
Transfer the LogMiner dictionary and the logs to be analyzed to the mining database.
SQL> !scp /oracle/database_name/archivelog/2008_09_03/o1_mf_1_8601_4cw3q3t8_.arc oracle@neptune:
Password:
o1_mf_1_8601_4cw3q3t 100% |*********************************************************************| 10836 KB 00:01
Then transfer the redo logs as well. Depending on your requirement, you can transfer archived logs or online redo logs. To see the archived logs for a defined time, query: select NAME, FIRST_TIME from v$archived_log where completion_time > SYSDATE-1;
In this example I will analyze the online redo log files.
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;
LOGFILENAME
--------------------------------------------------------------------------------
/oradata1/database_name/datafiles/database_name/redo03.log
/oradata1/database_name/datafiles/database_name/redo01.log
/oradata1/database_name/datafiles/database_name/redo02.log
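If you mine on the other database instead (continuing the transfer shown above), a sketch of the remaining steps there, assuming the archived log containing the dictionary was copied to /home/oracle on neptune; DICT_FROM_REDO_LOGS tells LogMiner to read the dictionary out of the added logs:
BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/home/oracle/o1_mf_1_8601_4cw3q3t8_.arc');
END;
/
EXECUTE DBMS_LOGMNR.START_LOGMNR(-
OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + -
DBMS_LOGMNR.COMMITTED_DATA_ONLY);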
1. Using the Online Catalog: (Set LogMiner in Source Database)
Supplemental logging places additional column data into the redo log file whenever an UPDATE operation is performed.
Step-1:
Ensure that supplemental logging is enabled, at least at the minimal level.
To work with LogMiner, supplemental logging must be turned on for the source database at a minimum level. By default, Oracle Database does not provide any supplemental logging, which means that, by default, LogMiner is not usable. You can check whether supplemental logging is on or off with the following command:
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
--------
NO
To turn it on at the minimal level:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.
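Re-running the check should now report that minimal supplemental logging is on:
SQL> select SUPPLEMENTAL_LOG_DATA_MIN from v$database;
SUPPLEME
--------
YES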
Step-2:
Install the DBMS_LOGMNR package.
The next step is to install the DBMS_LOGMNR package. If you have already installed it, skip this step. You can install the package by running the $ORACLE_HOME/rdbms/admin/dbmslm.sql script. If you created your database with DBCA, this script was run automatically, so you can skip this step. However, if you created the database manually with a CREATE DATABASE ... statement, you must run the script before using LogMiner. That is,
SQL>@$ORACLE_HOME/rdbms/admin/dbmslm.sql
Step-3:
Grant the EXECUTE_CATALOG_ROLE role.
Give the EXECUTE_CATALOG_ROLE role to the user who will do the mining. Here the user is nahar.
SQL>GRANT EXECUTE_CATALOG_ROLE TO nahar;
Step-4:
Create the synonym. nahar creates a public synonym:
CREATE PUBLIC SYNONYM dbms_logminer_nahar FOR SYS.DBMS_LOGMNR;
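nahar can then call the package through the synonym; a sketch, using a redo log path that appears later in this post:
SQL> BEGIN
dbms_logminer_nahar.ADD_LOGFILE('/oracle/app/oradata/FolderName/redo01.log');
END;
/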
-- All four steps above are needed only once.
**************************************************************
(Every once in a while, start again from here.)
Step-5:
Specify the scope of the mining.
Now decide which files you will analyze. Depending on your scenario, you may be interested in archived redo log files or in online redo log files.
i. Analyzing online redo logs:
Suppose you have a recent problem in your database, so you are interested in your online redo log files. You can see the current online redo logs with:
SQL> SELECT distinct member LOGFILENAME FROM V$LOGFILE;
LOGFILENAME
--------------------------------------------------------------------------------
/oracle/app/oradata/FolderName/redo02.log
/oracle/app/oradata/FolderName/redo03.log
/oracle/app/oradata/FolderName/redo01.log
SQL>BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/app/oradata/FolderName/redo01.log');
END;
/
PL/SQL procedure successfully completed.
BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/app/oradata/FolderName/redo02.log');
END;
/
PL/SQL procedure successfully completed.
BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/app/oradata/FolderName/redo03.log');
END;
/
PL/SQL procedure successfully completed.
Practice:
To add multiple log files in one PL/SQL block:
SQL>BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo01.log');
DBMS_LOGMNR.ADD_LOGFILE
('/oradata2/data1/dbase/redo03.log');
END;
/
ii. Analyzing archived logs:
Suppose you have a past problem in your database, so you are interested in your archived log files. Often you want to mine the redo log file that was most recently archived.
To check the most recent archived log:
SQL> SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
NAME
--------------------------------------------------------------------------------
/oracle/database_name/archivelog/2008_08_27/o1_mf_1_8516_4cb52pvt_.arc
SQL> BEGIN DBMS_LOGMNR.ADD_LOGFILE
('/oracle/database_name/archivelog/2008_08_27/o1_mf_1_8516_4cb52pvt_.arc');
END;
/
Practice:
SQL>column NAME format a70
SELECT NAME,to_char(FIRST_TIME,'mm/dd/yy hh24:mi:ss') Time FROM V$ARCHIVED_LOG
WHERE FIRST_TIME like ('27-AUG-08%'); --to get the 27 August archive logs
Step-6:
Start the LogMiner session and specify a dictionary.
To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as your dictionary source when you start the session:
BEGIN
DBMS_LOGMNR.START_LOGMNR
(options =>
dbms_logmnr.dict_from_online_catalog);
END;
/
PL/SQL procedure successfully completed.
With the OPTIONS parameter we specify that, while starting LogMiner, Oracle Database should read the dictionary information from the online catalog to convert the object names.
Step-7:
Request the redo data.
Check the contents of the V$LOGMNR_CONTENTS view.
To find out what DML or DDL happened to table STUDENT, and when, we can issue:
SQL> set lines 200
SQL> set pages 0 -- to suppress the column headers
sys@DatabaseName>column USERNAME format a20
sys@DatabaseName>column SEG_NAME format a15
sys@DatabaseName>column SEG_TYPE_NAME format a15
sys@DatabaseName>column TABLE_SPACE format a15
SQL> select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'STUDENT';
We can get the SQL_UNDO and SQL_REDO information with:
SQL>col SQL_REDO format a30
SQL> col SQL_UNDO format a30
SQL> select sql_undo, sql_redo
from v$logmnr_contents
where table_name = 'STUDENT' and OPERATION='UPDATE';
Request the archived data:
select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'LOGIN';
select username, to_char(timestamp,'mm/dd/yy hh24:mi:ss') timestamp,
seg_type_name, seg_name, table_space, session# SID, serial# , operation
from v$logmnr_contents
where table_name = 'LOGIN'
and timestamp like ('27-AUG-08%'); --to get the info of a specific date
Step-8:
End the LogMiner session.
Use the DBMS_LOGMNR.END_LOGMNR
procedure.
SQL>
BEGIN
DBMS_LOGMNR.END_LOGMNR;
END;
/
PL/SQL procedure successfully completed.
For each change, V$LOGMNR_CONTENTS shows, among other things:
- The type of change made to the database: INSERT, UPDATE, DELETE, or DDL (OPERATION column).
- The SCN at which the change was made (SCN column).
- The SCN at which the change was committed (COMMIT_SCN column).
- The transaction to which the change belongs (XIDUSN, XIDSLT, and XIDSQN columns).
- The table and schema name of the modified object (SEG_NAME and SEG_OWNER columns).
- The name of the user who issued the DDL or DML statement to make the change (USERNAME column).
- If the change was due to a SQL DML statement, the reconstructed SQL statements showing SQL DML that is equivalent (but not necessarily identical) to the SQL DML used to generate the redo records (SQL_REDO column). If a password is part of a statement in the SQL_REDO column, the password is encrypted. SQL_REDO column values that correspond to DDL statements are always identical to the SQL DDL used to generate the redo records.
- If the change was due to a SQL DML change, the reconstructed SQL statements showing the SQL DML statements needed to undo the change (SQL_UNDO column).
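As an illustrative sketch, a query pulling several of these columns at once (table STUDENT reused from the example above):
select operation, scn, commit_scn, username,
(xidusn || '.' || xidslt || '.' || xidsqn) as xid,
seg_owner, seg_name, sql_redo, sql_undo
from v$logmnr_contents
where table_name = 'STUDENT';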
In simple terms, an undo entry provides the values of data stored before a change, and a redo entry provides the values of data stored after a change. Changes appear first in the online redo logs and later in the archived logs.
So from the online redo logs and the archived redo logs we can get database redo and undo information. But online and archived logs have an unpublished format and are not human-readable. With the DBMS_LOGMNR package we can analyze redo log files and get back the undo and redo information in a human-readable format.
Another use of LogMiner is to investigate the database's past. With Flashback Query we can get prior values of records in a table at some point in the past, but this is limited by the UNDO_RETENTION parameter (which is often as short as 30 minutes for an OLTP database). So, to analyze past activity on the database, LogMiner is a good choice.
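For comparison, a minimal Flashback Query sketch (table STUDENT assumed, interval chosen arbitrarily):
select * from student
as of timestamp (systimestamp - interval '20' minute);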
Every change made to an Oracle database by default generates undo and redo information, which accumulates in Oracle redo log files. LogMiner is an integrated feature of the Oracle Database that provides DBAs and auditors with the infrastructure required for relational access to Oracle's redo stream.
The Oracle LogMiner utility enables you to query redo logs through a SQL interface. Redo logs contain information about the history of activity on a database.
Why LogMiner:
determine the steps needed to recover from inadvertent (unintentional) changes to data
assemble data on actual usage for use in performance tuning and capacity planning
audit the operation of any commands run against the database
Note that LogMiner uses Oracle logs to reconstruct exactly how data changed, whereas the complementary utility Oracle Flashback addresses, reconstructs and presents the finished results of such changes, giving a view of the database at some point in time.
Do not leave the current session, because the LogMiner data will not be available from other sessions.
LogMiner Restrictions:
The following are not supported: BFILE columns, simple and nested abstract datatypes (ADTs), collections (nested tables and VARRAYs), and object REFs (the exact list varies by release).
The redo log files contain the changes made to the database or database dictionary.
The source database is the database that produces all the redo log files that you want LogMiner to analyze.
The mining database is the database that LogMiner uses when it performs the analysis.
The LogMiner dictionary allows LogMiner to provide table and column names, instead of internal object IDs, when it presents the redo log data that you request. LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as binary data.
Without the dictionary, LogMiner will display something like:
insert into "UNKNOWN"."OBJ#45522"("COL 1","COL 2","COL 3","COL 4") values(HEXTORAW('45465f4748'),HEXTORAW('546563686e6963616c20577269746572'),HEXTORAW('c229'),HEXTORAW('c3020b'))
LogMiner requires a dictionary to translate object IDs into object names when it returns redo data to you. LogMiner gives you three options for
supplying the dictionary.
1. Using the Online Catalog:
2. Extracting a LogMiner Dictionary to the Redo Log Files:
3. Extracting the LogMiner Dictionary to a Flat File:
The database is the cornerstone of pretty much every business project. If you don't take the time to map out the needs of the project and how the database is going to meet them, then the chances are that the whole project will veer off course and lose direction. Furthermore, if you don't take the time at the start to get the database design right, any substantial changes to the database structures that you need to make further down the line could have a huge impact on the whole project, and greatly increase the likelihood of the project timeline slipping.
Admittedly it is impossible to predict every need that your design will have to fulfill and every issue that is likely to arise, but it is important to mitigate potential problems as much as possible through careful planning.
Normalization defines a set of methods to break down tables to their constituent parts until each table represents one and only one "thing", and its columns serve to fully describe only the one "thing" that the table represents.
The concept of normalization has been around for 30 years and is the basis on which SQL and relational databases are implemented. In other words, SQL was created to work with normalized data structures.
Normalizing your data is essential to good performance and ease of development, but the question always comes up: "How normalized is normalized enough?" Generally, Third Normal Form is essential, but the Fourth and Fifth Normal Forms are really useful and, once you get a handle on them, quite easy to follow and well worth the time required to implement. In reality, however, it is quite common that not even First Normal Form is implemented correctly.
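As a small sketch of the idea (hypothetical tables), a repeating-group design that violates First Normal Form versus its normalized form:
-- Repeating groups: phone numbers crammed into the customer table
CREATE TABLE CustomerDenormalized
(
CustomerID int PRIMARY KEY,
Name varchar(100) NOT NULL,
Phone1 varchar(20),
Phone2 varchar(20),
Phone3 varchar(20)
);
-- Normalized: one "thing" per table, one row per phone number
CREATE TABLE Customer
(
CustomerID int PRIMARY KEY,
Name varchar(100) NOT NULL
);
CREATE TABLE CustomerPhone
(
CustomerID int NOT NULL REFERENCES Customer(CustomerID),
PhoneNumber varchar(20) NOT NULL,
PRIMARY KEY (CustomerID, PhoneNumber)
);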
Names are the first and most important line of documentation for your application. The names you choose are not just to enable you to identify the purpose of an object, but to allow all future programmers, users, and so on to quickly and easily understand how a component part of your database was intended to be used, and what data it stores. No future user of your design should need to wade through a 500-page document to determine the meaning of some wacky name.
A practice I strongly advise against is the use of spaces and quoted identifiers in object names. You should avoid column names such as "Part Number" or, in Microsoft style, [Part Number], which require your users to include these spaces and identifiers in their code. It is annoying and simply unnecessary.
Acceptable alternatives would be PART_NUMBER, part_number, partNumber or PartNumber. Again, consistency is key. If you choose PartNumber then that's fine – as long as the column containing invoice numbers is called InvoiceNumber, and not one of the other possible variations.
By carefully naming your objects, columns, and so on, you can make it clear to anyone what it is that your database is modeling.
Documentation should contain definitions of tables, columns, relationships, and even default and check constraints, so that it is clear to everyone how they are intended to be used. In many cases, you may want to include sample values, the reason the object was needed, and anything else that you may want to know in a year or two when "future you" has to go back and make changes to the code.
Where this documentation is stored is largely a matter of corporate standards and/or convenience to the developer and end users. It could be stored in the database itself, using extended properties. Alternatively, it might be maintained in the data modeling tools. It could even be in a separate data store, such as Excel or another relational database.
Your goal should be to provide enough information that when you turn the database over to a support programmer, they can figure out your minor bugs and fix them.
I know there is an old joke that poorly documented code is a synonym for "job security." While there is a hint of truth to this, it is also a way to be hated by your coworkers and never get a raise. And no good programmer I know of wants to go back and rework their own code years later. It is best if the bugs in the code can be managed by a junior support programmer while you create the next new thing. Job security along with raises is achieved by being the go-to person for new challenges.
Relational databases are based on the fundamental idea that every object represents one and only one thing. There should never be any doubt as to what a piece of data refers to. By tracing through the relationships, from column name, to table name, to primary key, it should be easy to examine the relationships and know exactly what a piece of data means.
The big myth is that the more tables there are, the more complex the design will be. So, conversely, shouldn't condensing multiple tables into a single "catch-all" table simplify the design? It sounds like a good idea, but the idea is wrong for large applications. A catch-all table may seem a very clean and natural way to design, but the problem is that it is just not natural to work with in SQL, and in this situation queries take a long time and performance issues arise.
The point of this tip is simply that it is better to do the work upfront, making structures solid and maintainable, rather than trying to do the least amount of work to start out a project. By keeping each table down to representing one "thing", most changes will affect only one table, from which it follows that there will be less rework for you down the road.
First Normal Form dictates that all rows in a table must be uniquely identifiable. Hence, every table should have a primary key.
SQL Server allows you to define a numeric column as an IDENTITY column, and then automatically generates a unique value for each row.
Alternatively, you can use NEWID() (or NEWSEQUENTIALID()) to generate a random, 16 byte unique value for each row. These types of values, when used as keys, are what are known as surrogate keys. The word surrogate means "something that substitutes for" and in this case, a surrogate key should be the stand-in for a natural key.
The problem is that too many designers use a surrogate key column as the only key column on a given table. The surrogate key values have no actual meaning in the real world; they are just there to uniquely identify each row.
Now, consider the following Part table, whereby PartID is an IDENTITY column and is the primary key for the table:
PartID | PartNumber | Description
------ | ---------- | -----------
1      | XXXXXXXX   | The X part
2      | XXXXXXXX   | The X part
3      | YYYYYYYY   | The Y part
How many rows are there in this table? Well, there seem to be three, but are rows with PartIDs 1 and 2 actually the same row, duplicated? Or are they two different rows that should be unique but were keyed in incorrectly?
The rule of thumb I use is simple. If a human being could not pick which row they want from a table without knowledge of the surrogate key, then you need to reconsider your design. This is why there should be a key of some sort on the table to guarantee uniqueness, in this case likely on PartNumber.
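A sketch of that fix in SQL Server syntax (column sizes are hypothetical): keep the surrogate primary key, but also declare the natural key unique:
CREATE TABLE Part
(
PartID int IDENTITY(1,1) CONSTRAINT PK_Part PRIMARY KEY,       -- surrogate key
PartNumber varchar(20) NOT NULL
    CONSTRAINT AK_Part_PartNumber UNIQUE,                      -- natural key, duplicates now impossible
Description varchar(100) NOT NULL
);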
In summary: as a rule, each of your tables should have a natural key that means something to the user and can uniquely identify each row in your table. In the very rare event that you cannot find a natural key (perhaps, for example, a table that provides a log of events), then use an artificial/surrogate key.
All fundamental, non-changing business rules should be implemented by the relational engine. The base rules of nullability, string length, assignment of foreign keys, and so on, should all be defined in the database.
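A sketch of pushing such rules into the DDL (hypothetical tables and sizes; assumes a Department table exists):
CREATE TABLE Employee
(
EmployeeID int IDENTITY(1,1) CONSTRAINT PK_Employee PRIMARY KEY,
Email varchar(50) NOT NULL,                                     -- nullability rule
Salary decimal(10,2) NOT NULL
    CONSTRAINT CK_Employee_Salary CHECK (Salary > 0),           -- domain rule
DepartmentID int NOT NULL
    CONSTRAINT FK_Employee_Department
    REFERENCES Department(DepartmentID)                         -- foreign key rule
);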
Stored procedures are your friend. Use them whenever possible as a method to protect the database layer from the users of the data. Stored procedures make database development much cleaner, and encourage collaborative development between your database and functional programmers. A few of the other interesting reasons that stored procedures are important include the following.
Stored procedures provide a known interface to the data; this is probably the largest draw. Stored procedures give the database professional the power to change characteristics of the database code without additional resource involvement, making small changes, or large upgrades (for example, changes to SQL syntax), easier to do.
Stored procedures allow you to "encapsulate" any structural changes that you need to make to the database so that the knock-on effect on user interfaces is minimized. For example, say you originally modeled one phone number, but now want an unlimited number of phone numbers. You could leave the single phone number in the procedure call, but store it in a different table as a stopgap measure, or even permanently if you have a "primary" number of some sort that you always want to display. Then a stored procedure could be built to handle the other phone numbers. In this manner the impact on the user interfaces could be quite small, while the code of the stored procedures might change greatly.
Stored procedures can provide specific and granular access to the system. For example, you may have 10 stored procedures that all update table X in some way. If a user needs to be able to update a particular column in a table, and you want to make sure they never update any others, then you can simply grant that user permission to execute just the one procedure out of the ten that allows them to perform the required update.
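A sketch of that granular grant (procedure and user names are hypothetical):
-- One narrow entry point that updates a single column...
CREATE PROCEDURE UpdatePartDescription
@PartNumber varchar(20),
@Description varchar(100)
AS
UPDATE Part
SET Description = @Description
WHERE PartNumber = @PartNumber;
GO
-- ...and the user gets only this, with no direct table permissions:
GRANT EXECUTE ON UpdatePartDescription TO clerk_user;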
As database professionals know, the first thing to get blamed when a business system is running slow is the database. Why? First, because it is the central piece of almost any business system, and second, because the accusation is all too often true. We defend against this by gaining deep knowledge of the system we have created and by understanding its limits through testing.
Testing is the first thing to go in a project plan when time slips a bit. And what suffers the most from the lack of testing? Functionality? Maybe a little, but users will notice and complain if the "Save" button doesn't actually work and they cannot save changes to a row they spent 10 minutes editing. What really gets the shaft in this whole process is deep system testing to make sure that the design you (presumably) worked so hard on at the beginning of the project is actually implemented correctly.
Initially, major bugs come in thick and fast, especially performance-related ones. If the first time you have tried a full production load (users, background processes, workflow processes, system maintenance routines, ETL, and so on) is on your system launch day, you are extremely likely to discover that you have not anticipated all of the locking issues that might be caused by users creating data while others are reading it, or hardware issues caused by poorly set up hardware.
Once the major bugs are squashed, the fringe cases (pretty rare cases, like a user entering a negative amount for hours worked) start to rear their ugly heads. What you end up with at this point is software that fails irregularly in what seem like weird places (since large quantities of fringe bugs show up in ways that aren't very obvious and are really hard to find).
Now it is far harder to diagnose and correct, because you have to deal with the fact that users are working with live data and trying to get work done. Plus, you probably have a manager or two sitting on your back asking "when will it be done?" every 30 seconds, even though it can take days or weeks to discover the kinds of bugs that result in minor (yet important) data aberrations. Had proper testing been done, it would never have taken weeks to find these bugs, because a proper test plan takes into consideration all possible types of failures, codes them into automated tests, and tries them over and over. Good testing won't find all of the bugs, but it will get you to the point where most of the issues that correspond to the original design are ironed out.
If everyone insisted on a strict testing plan as an integral and immutable part of the database development process, then maybe someday the database won't be the first thing to be fingered when there is a system slowdown.
Database design and implementation is the cornerstone of any data-centric project (99.9% of business applications) and should be treated as such when you are developing. Some of these tips, like planning properly, using proper normalization, using strong naming standards, and documenting your work, are things that even the best DBAs and data architects have to fight to make happen.