Tuesday, April 27, 2010

How to use DBMS_PROFILER package?

DBMS_PROFILER is a PL/SQL code tuning tool. It lets you examine the run-time behavior of your PL/SQL code and helps you identify where the performance problems are. The output of the DBMS_PROFILER package is easy to read, as it gives the execution time for each line of code, from which you can quickly
identify the bottleneck.

Profiling helps the developer understand where the PL/SQL code spends most of its time, so the hot spots can be detected and optimized. DBMS_PROFILER is to PL/SQL what tkprof and EXPLAIN PLAN are to SQL.

The DBMS_PROFILER package has the following subprograms:

  1. FLUSH_DATA Function and Procedure: Flushes profiler data collected in the user's session.
  2. GET_VERSION Procedure: Gets the version of this API.
  3. INTERNAL_VERSION_CHECK Function: Verifies that this version of the DBMS_PROFILER package can work with the implementation in the database.
  4. PAUSE_PROFILER Function and Procedure: Pauses profiler data collection.
  5. RESUME_PROFILER Function and Procedure: Resumes profiler data collection.
  6. START_PROFILER Functions and Procedures: Starts profiler data collection in the user's session.
  7. STOP_PROFILER Function and Procedure: Stops profiler data collection in the user's session.
To use the DBMS_PROFILER package we first need to set up our Oracle environment. Two scripts create the environment for DBMS_PROFILER:

  • proftab.sql : This script creates three tables (PLSQL_PROFILER_RUNS, PLSQL_PROFILER_UNITS, PLSQL_PROFILER_DATA) and a sequence. It must be executed first, as the Oracle user that will do the profiling, and before profload.sql.
  • profload.sql : This script creates the specification and the body of the DBMS_PROFILER package. It needs to be executed as the SYS user only. Some public synonyms also need to be created on the tables for the other Oracle user (the one from which the profiling is to be done).
Following are the steps to do the settings and the execution of the DBMS_PROFILER (using a UNIX platform):

(We will consider an example: the Oracle user "TRACETEST" needs profiling done on a sample procedure "TEST_PROFILER")

Step 1: Go to the admin directory under ORACLE_HOME/rdbms using the command "cd $ORACLE_HOME/rdbms/admin".

Step 2: Connect as tracetest
(please make sure the user has the CREATE SEQUENCE, CREATE TABLE and CREATE PROCEDURE privileges) using the following commands:
sqlplus /nolog
connect tracetest/tracetest

Step 3: Run proftab.sql file using the command
@$ORACLE_HOME/rdbms/admin/proftab.sql

Step 4: Grant some privileges to PUBLIC for the tables created using the following commands:
GRANT SELECT ON plsql_profiler_runnumber TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_data TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_units TO PUBLIC;
GRANT SELECT, INSERT, UPDATE, DELETE ON plsql_profiler_runs TO PUBLIC;

Step 5: Create a test_profiler procedure:
CREATE OR REPLACE PROCEDURE test_profiler AS
  l_dummy NUMBER;
BEGIN
  FOR i IN 1 .. 50 LOOP
    SELECT l_dummy + 1
      INTO l_dummy
      FROM dual;
  END LOOP;
END;
/

Step 6: Connect as SYSDBA to run the profload.sql file, using the command "connect / as sysdba".

Step 7: Run profload.sql using the command
@$ORACLE_HOME/rdbms/admin/profload.sql

Step 8: Create PUBLIC SYNONYM using the following commands for "tracetest user":
CREATE PUBLIC SYNONYM plsql_profiler_runs FOR tracetest.plsql_profiler_runs;
CREATE PUBLIC SYNONYM plsql_profiler_units FOR tracetest.plsql_profiler_units;
CREATE PUBLIC SYNONYM plsql_profiler_data FOR tracetest.plsql_profiler_data;
CREATE PUBLIC SYNONYM plsql_profiler_runnumber FOR tracetest.plsql_profiler_runnumber;

Now, we are all set to run the DBMS_PROFILER package to check the runtime execution of the procedure TEST_PROFILER.

Step 9: To run the profiler:

DECLARE
  l_result BINARY_INTEGER;
BEGIN
  l_result := DBMS_PROFILER.start_profiler(run_comment => 'test_profiler_execution: ' || SYSDATE);
  test_profiler;
  l_result := DBMS_PROFILER.stop_profiler;
END;
/

Step 10: Query to see which runs have taken place (RUNID is the unique identifier associated with each run):

SET LINESIZE 200
SET TRIMOUT ON

COLUMN runid FORMAT 99999
COLUMN run_comment FORMAT A50
SELECT runid,
       run_date,
       run_comment,
       run_total_time
  FROM plsql_profiler_runs
 ORDER BY runid;



Step 11: Query to see the details of the runid you got from the above query:

COLUMN runid FORMAT 99999
COLUMN unit_number FORMAT 99999
COLUMN unit_type FORMAT A20
COLUMN unit_owner FORMAT A20

SELECT u.runid,
       u.unit_number,
       u.unit_type,
       u.unit_owner,
       u.unit_name,
       d.line#,
       d.total_occur,
       d.total_time,
       d.min_time,
       d.max_time
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d ON u.runid = d.runid AND u.unit_number = d.unit_number
 WHERE u.runid = 1
 ORDER BY u.unit_number, d.line#;





Here we can see that line no. 5 executed fewer times than line no. 4 but took much more time.

Step 12: To check the line numbers of the source code:

SELECT line || ' : ' || text
  FROM all_source
 WHERE owner = 'TRACETEST'
   AND type = 'PROCEDURE'
   AND name = 'TEST_PROFILER'
 ORDER BY line;

LINE||' : '||TEXT
---------------------------------------------------
1 : CREATE OR REPLACE PROCEDURE test_profiler AS
2 : l_dummy NUMBER;
3 : BEGIN
4 : FOR i IN 1 .. 50 LOOP
5 : SELECT l_dummy + 1
6 : INTO l_dummy
7 : FROM dual;
8 : END LOOP;
9 : END;

We can easily see that the loop itself didn't take much time to execute, while the SQL query
took most of the time. So we can conclude that SQL_TRACE or other facilities can be used to tune the SQL further.
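Steps 11 and 12 can also be combined into one query, so that each source line appears next to its profiler timings. The following is only a sketch (it assumes the same runid 1 used above), not part of the original scripts:

SELECT d.line#,
       s.text,
       d.total_occur,
       d.total_time
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d
    ON u.runid = d.runid
   AND u.unit_number = d.unit_number
  JOIN all_source s
    ON s.owner = u.unit_owner
   AND s.name = u.unit_name
   AND s.type = u.unit_type
   AND s.line = d.line#
 WHERE u.runid = 1
 ORDER BY d.line#;

This avoids switching back and forth between the profiler output and the source listing.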

PS: The syntax for the other DBMS_PROFILER options is:

DBMS_PROFILER.FLUSH_DATA;
DBMS_PROFILER.PAUSE_PROFILER;
DBMS_PROFILER.RESUME_PROFILER;
DBMS_PROFILER.INTERNAL_VERSION_CHECK
  RETURN BINARY_INTEGER;

DBMS_PROFILER.GET_VERSION (
  major OUT BINARY_INTEGER,
  minor OUT BINARY_INTEGER);
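As an illustrative sketch (not from the original setup), PAUSE_PROFILER and RESUME_PROFILER can be combined to keep an uninteresting section of code out of the measurements:

DECLARE
  l_result BINARY_INTEGER;
BEGIN
  l_result := DBMS_PROFILER.start_profiler(run_comment => 'partial run: ' || SYSDATE);

  -- exclude setup work from the profile
  l_result := DBMS_PROFILER.pause_profiler;
  test_profiler;                               -- not measured
  l_result := DBMS_PROFILER.resume_profiler;

  test_profiler;                               -- measured

  l_result := DBMS_PROFILER.stop_profiler;
END;
/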

Friday, April 23, 2010

Difference Between Invoker Rights and Definer Rights

Invoker Rights Vs Definer Rights

In an Oracle database, stored PL/SQL procedures execute by default with the owner's privileges. This means that these subprograms resolve object references against the schema in which they were created.

Let me explain this concept clearly using a scenario:

Let's say there are two schemas, schema1 and schema2. Schema1 has a procedure proc1 and a table emp_tab, and Schema2 has a table emp_tab only.

The structure of the proc1 procedure in Schema1 is as follows:

CREATE OR REPLACE PROCEDURE proc1 (p_empid NUMBER, p_ename VARCHAR2, p_salary NUMBER)
AS
BEGIN
  INSERT INTO emp_tab (eid, emp_name, emp_sal)
  VALUES (p_empid, p_ename, p_salary);
  COMMIT;
END;
/

Now, user Schema1 grants EXECUTE permission on this procedure to user Schema2. If user Schema2 wants to insert data into his own emp_tab table using this procedure, he cannot, because when user Schema2 executes the procedure, it runs with the privileges of user Schema1.

Even though user Schema2 has an identical table in his own schema and no direct privileges on Schema1's emp_tab, the procedure still inserts the new values into Schema1's table instead of Schema2's.

Before the Oracle 8i release, there were two ways to work around this problem:

First, copy the procedure into Schema2 as well, which leads to code replication and hampers maintenance.

Second, use schema-qualified references to the objects used in the procedure (under schema1), such as INSERT INTO schema2.emp_tab, which hampers code portability. To mitigate this, the schema name can be passed as a parameter to the procedure and used in dynamic SQL.

To overcome this problem, Oracle 8i and higher releases introduced the AUTHID CURRENT_USER clause. This invoker-rights clause makes the procedure execute with the privileges of the current (invoking) user.

The syntax of the procedure in Schema1 will be like this:


CREATE OR REPLACE PROCEDURE proc1
  (p_empid NUMBER, p_ename VARCHAR2, p_salary NUMBER)
AUTHID CURRENT_USER
AS
BEGIN
  INSERT INTO emp_tab (eid, emp_name, emp_sal)
  VALUES (p_empid, p_ename, p_salary);
  COMMIT;
END;
/

Now, if user Schema2 executes the procedure residing in Schema1's schema, the procedure will insert into Schema2's emp_tab table only. If the table does not exist in Schema2's schema, an error is thrown.
By default, Oracle assumes AUTHID DEFINER (definer rights) if you don't use the clause.
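To check which rights model a stored subprogram uses, you can query the AUTHID column in the data dictionary. A small sketch, using the procedure name from the example above:

SELECT object_name, authid
  FROM user_procedures
 WHERE object_name = 'PROC1';

AUTHID shows DEFINER for definer rights and CURRENT_USER for invoker rights.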

Thursday, April 22, 2010

Difference Between Number Datatypes

The number data types in the Oracle database are used to store numeric values.

The BINARY_INTEGER, NUMBER and PLS_INTEGER data types have small differences from a PL/SQL code performance point of view.

Let's have a look at all three and find out which one is better to use and why.

NUMBER data type: This is the most common data type, used to store numeric data (fixed-point and floating-point). Its magnitude range is 1E-130 .. 10E125. Oracle throws an error if a value goes above or below the specified range.

The syntax is NUMBER(precision, scale).

Precision: This is the value equal to the total no. of digits.
Scale: This is the value equal to the digits after the decimal point.

e.g. if you want to store the value 1234.56, you need to specify NUMBER(6,2).

BINARY_INTEGER data type: This data type is used to store signed integers. Its magnitude range is -2**31 .. 2**31. BINARY_INTEGER values require less storage space than NUMBER values. It uses library arithmetic, so BINARY_INTEGER operations are slower than PLS_INTEGER operations. If a BINARY_INTEGER calculation overflows, no error/exception is raised.

PLS_INTEGER data type: This data type is also used to store signed integers, and its magnitude range is the same as BINARY_INTEGER's. If a PLS_INTEGER calculation overflows, an exception is raised. PLS_INTEGER uses machine arithmetic, so its operations are faster than BINARY_INTEGER's.

NOTE: In new applications, always prefer PLS_INTEGER, as it is faster.
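A quick way to see the difference is to time the same loop with each data type. The following is an illustrative sketch (the loop count is arbitrary and the timings will vary by system); DBMS_UTILITY.GET_TIME returns elapsed time in hundredths of a second:

SET SERVEROUTPUT ON

DECLARE
  l_num   NUMBER      := 0;
  l_pls   PLS_INTEGER := 0;
  l_start PLS_INTEGER;
BEGIN
  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. 10000000 LOOP
    l_num := l_num + 1;
  END LOOP;
  DBMS_OUTPUT.put_line('NUMBER      : ' || (DBMS_UTILITY.get_time - l_start) || ' hsecs');

  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. 10000000 LOOP
    l_pls := l_pls + 1;
  END LOOP;
  DBMS_OUTPUT.put_line('PLS_INTEGER : ' || (DBMS_UTILITY.get_time - l_start) || ' hsecs');
END;
/

On most systems the PLS_INTEGER loop finishes noticeably faster than the NUMBER loop.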

Friday, February 26, 2010

How does MERGE Statement works in Oracle ?

MERGE was introduced in Oracle 9i. The MERGE statement is used when you want to update or insert values in a table based on values selected from another table. It allows you to combine the two operations, via a join, into one statement instead of issuing multiple insert/update statements.

MERGE is a deterministic statement: a row that is updated cannot be updated again in the same MERGE statement. The prerequisites are that the user should have the INSERT and UPDATE privileges on the target table/view and the SELECT privilege on the source table.

INTO Clause:
The INTO clause specifies the target table that the user wants to update and insert into.

USING Clause:
The USING clause specifies the source of the data that will be updated or inserted into the target table. The source can be a table, a view, or the result of a sub-query.

ON Clause:
The ON clause specifies the condition on which the MERGE statement decides whether to
update or insert records in the target table. For each row in the target table for which the search condition is true, the row is updated with the corresponding source data.
If the condition is not true for any target row, the corresponding source data is inserted into the target table.

WHEN MATCHED | NOT MATCHED:
These clauses tell Oracle how to respond to the results of the join condition in the ON clause. Oracle performs the update on the target table if the condition of the ON clause is true. When the update clause executes on the target table, all update triggers defined on that table are also fired.

Points to ponder while updating a View:
  • You cannot specify DEFAULT when updating a view.
  • You cannot update a column that is referenced in the ON clause.
Oracle performs the insert on the target table if the condition of the ON clause is false. When the insert clause executes on the target table, all insert triggers defined on that table are also fired.

The following example will make it clearer:

We have an employee table with a few records:

Select * from emp;

Emp_id  Salary
******  ******
     1     100
     2     200
     3     300
     4     400
     5     500

We will create our new target table and populate it with the emp table data.

Create table t1(emp_id number, salary number);

Insert into t1 select * from emp;

Commit;

Let's assume that after some time a few more records are added to the emp table.

Select * from emp;

Emp_id  Salary
******  ******
     1     100
     2     200
     3     300
     4     400
     5     500
     6     600
     7     700
     8     800

Before Oracle 9i, if we needed to give the existing employees a 20% salary hike and also bring the newly joined employees from the emp table into our target table t1, two separate statements were required:

UPDATE (SELECT a.emp_id empl_id, a.salary sal
          FROM t1 a, emp b
         WHERE a.emp_id = b.emp_id) x
   SET x.sal = x.sal * 1.20;

INSERT INTO t1 (emp_id, salary)
SELECT emp_id, salary
  FROM emp
 WHERE emp_id NOT IN (SELECT emp_id FROM t1);

Using the MERGE statement, this can be done in a single statement as follows:

MERGE INTO t1 a
USING emp b
ON (a.emp_id = b.emp_id)
WHEN MATCHED THEN
  UPDATE SET a.salary = b.salary * 1.20
WHEN NOT MATCHED THEN
  INSERT (a.emp_id, a.salary)
  VALUES (b.emp_id, b.salary);

In addition, in Oracle 10g we can also use an optional DELETE clause in the MERGE statement. There are, however, a few constraints:
  • The DELETE clause cannot be used independently in a MERGE statement. It has to be embedded in the WHEN MATCHED ... UPDATE clause.
  • The DELETE clause works only on the rows that are filtered by the join condition in the ON clause.
  • The DELETE clause affects only those rows that are updated by the MERGE statement.
For example, if the user wants to delete the records where the salary is still less than 2000 after the update:

MERGE INTO t1 a
USING emp b
ON (a.emp_id = b.emp_id)
WHEN MATCHED THEN
  UPDATE SET a.salary = b.salary * 1.20
  DELETE WHERE (a.salary < 2000)
WHEN NOT MATCHED THEN
  INSERT (a.emp_id, a.salary)
  VALUES (b.emp_id, b.salary);

Here, the UPDATE clause first updates the matched rows with a 20% salary increase, and then the updated employees whose salary is still less than 2000 are deleted from the target table.

Sunday, January 17, 2010

What is High Water Mark in Oracle database?

This topic comes under the segment management concept, which describes how to manage the storage of data in segments effectively and how to deal with wasted space. Since the Oracle 10g release this has become very easy and somewhat automated.

The high water mark applies to the segments (or, at granule level, the DB blocks) attached to a database table. It indicates the highest level up to which the blocks have ever been occupied by table data.

This can be illustrated with a simple example from our day-to-day life.

You might have seen a glass half filled with milk. The level the milk reaches in the glass is the high water mark. Even if you pour some milk out of the glass, the mark will still be there, indicating that at one time the milk was filled up to that level.

Similarly, in the Oracle database the high water mark is the level that indicates the last block that ever held data.

Let's say that when a table is created, a set of DB blocks gets associated with it. Initially there is no data in the table, so the high water mark is set at the first block. When data is populated into the table, the HWM moves to the last DB block in which data is stored. When a few rows are later deleted from the table, the HWM still remains at that last level, because the HWM is not reset by the deletion of rows. This can result in two major problems:

1) During a full table scan, Oracle always scans the segment up to the level of the
HWM. If those blocks hold no data, the time spent scanning them is wasted.

2) These empty blocks will not appear in the freelists of the database, so when new rows are inserted with a direct path load (the APPEND hint or SQL*Loader direct path), the data is stored in free blocks above the HWM only. Therefore the empty blocks below the HWM are wasted.
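One simple way to observe this from the data dictionary is to compare the block counts after gathering statistics (a sketch; the table name t1 is illustrative). BLOCKS roughly corresponds to the blocks below the HWM, while EMPTY_BLOCKS are the allocated blocks above it:

ANALYZE TABLE t1 COMPUTE STATISTICS;

SELECT blocks, empty_blocks, num_rows
  FROM user_tables
 WHERE table_name = 'T1';

After a large DELETE (and re-gathering statistics), NUM_ROWS drops but BLOCKS stays the same, showing that the HWM has not moved.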

How to reset High Water Mark?

If we execute the TRUNCATE TABLE command, the HWM gets reset automatically.

In Oracle 9i and below, you can use the "ALTER TABLE ... MOVE ..." command to reset the HWM and reuse the empty blocks. In the Oracle 10g release this became more effective: you can now "shrink" tables, segments and indexes to reclaim the space wasted below an unreset HWM and make it available to the database for other use. Your tablespace must have Automatic Segment Space Management (ASSM) enabled before you shrink them. We will look into both options one by one.

In Oracle 9i and below, the space resetting is done with the "ALTER TABLE ... MOVE ..." command in the following way:

Syntax:
ALTER TABLE <table_name> MOVE STORAGE (<storage_clause>) TABLESPACE <tablespace_name>;

ALTER TABLE t1 MOVE STORAGE (INITIAL 10K NEXT 10K) TABLESPACE new_tablespace;

In case you don't want to move the table to a different tablespace, you can use the command as:

Syntax:
ALTER TABLE <table_name> MOVE STORAGE (<storage_clause>);

ALTER TABLE t1 MOVE STORAGE (INITIAL 10K NEXT 10K);

There are certain restrictions on doing so, which are as follows:

1) The table should not have LOB, LONG or LONG RAW columns.

2) An entire partitioned table cannot be moved; each partition/sub-partition has to be moved separately.

3) Indexes associated with the table become invalid after the move, so rebuild them using "ALTER INDEX ... REBUILD".

4) This cannot be done online. The table has to be made unavailable while it is being moved.

In Oracle 10g Release 1, a new feature was added for this process, called segment shrinking. Segment shrinking is allowed only for segments that use Automatic Segment Space Management (ASSM); that is, the tablespace must have the ASSM feature enabled. Following are the steps for resetting the HWM:

1) Before we start shrinking segments we need to tell Oracle that the ROWIDs of the rows may change, by issuing the following command:

ALTER TABLE <table_name> ENABLE ROW MOVEMENT;

2) Now we are ready to shrink the segment using:

ALTER TABLE <table_name> SHRINK SPACE;           (if you want to shrink space for the table only)
ALTER TABLE <table_name> SHRINK SPACE CASCADE;   (if you want to shrink space for all dependent objects also)

In this case:
1) We don't need to take the table offline.
2) Indexes do not become invalid, and we do not need to re-create them.
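Putting the 10g steps together, a minimal sketch of the whole shrink sequence (the table name t1 is illustrative, and its tablespace is assumed to be ASSM-managed):

ALTER TABLE t1 ENABLE ROW MOVEMENT;
ALTER TABLE t1 SHRINK SPACE COMPACT;   -- optional: compact the rows first, without moving the HWM
ALTER TABLE t1 SHRINK SPACE;           -- move the HWM down and release the space
ALTER TABLE t1 DISABLE ROW MOVEMENT;   -- optional clean-up

The intermediate SHRINK SPACE COMPACT step is useful on busy systems, because it does the row movement without the short exclusive lock needed to reposition the HWM.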


About Me

With rich experience of close to a decade in databases, I am still open to learning new concepts of the Oracle database.