DBA Tips Archive for Oracle

  


Understanding LogMiner

by Jeff Hunter, Sr. Database Administrator

Introduction

Every instance of an Oracle database (clustered or non-clustered) records changes made to database blocks in redo log files. The Oracle instance writes these changes in the form of redo records (also known as redo entries) to its associated online redo logs. Online redo log files are written to in a circular manner. Once an online redo log fills, Oracle moves to the next one. If the database is in archived redo log mode, Oracle will make a copy of the online redo log to what is known as an archived redo log before the online version gets reused and overwritten by the circular nature of the logging process.

Redo log files are commonly used to recover a database after some type of failure or to refresh a physical standby database. But what if you wanted to read the actual redo records directly from the redo log files? For example, an organization may need to capture all changes made to the database for auditing purposes or data analysis. Prior to Oracle8i, this proved to be a painful exercise because the redo log files use an unpublished format that is not human-readable. The only solution then was for the DBA to send a dump file of the redo log files to Oracle support.
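For reference, such a dump can be produced with the ALTER SYSTEM DUMP LOGFILE command, which writes the raw redo records to a trace file in the diagnostic destination. This is only a sketch; the log file path below is hypothetical:


SQL> ALTER SYSTEM DUMP LOGFILE '/u02/oradata/racdb/redo01.log';

The resulting trace file is still far from readable, which is precisely the gap LogMiner was built to fill.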

In Oracle8i, LogMiner was introduced as a tool capable of reading redo records found in the redo log files using a relational interface.

While LogMiner can be used for the aforementioned needs of auditing or data analysis, it can also be used to recover from user errors, sometimes referred to as logical corruption. Unlike physical database corruption from media failure or block corruption, logical corruption occurs as the result of human or application mistakes with the data rather than hardware or software malfunction. For example, an application updates or deletes from a table with an incorrect WHERE clause or, even worse, a critical object like a table gets dropped from the database. In situations like this, LogMiner can be used not only to pinpoint when the logical corruption occurred, but also to construct the SQL needed to reverse or undo the changes. This can drastically reduce the Mean Time To Recovery (MTTR) compared to previous methods such as point-in-time recovery or restoring from an export.

This article provides insight into the LogMiner utility and the different components that make it up.

To see a collection of examples using LogMiner, consult the following tutorial:

Introduction to LogMiner

Oracle LogMiner is a utility introduced in Oracle8i that provides a relational interface to redo records found in redo log files. LogMiner enables users to query online or archived redo log files through standard SQL statements or using a GUI provided through Oracle Enterprise Manager (OEM).

The benefits of LogMiner include identifying the time of a particular database event, generating the SQL statements needed to reverse incorrect changes made to the database, gathering data for capacity planning, and auditing certain operations run against the database.

The LogMiner utility can be accessed through SQL statements (command-line) or through the Oracle LogMiner Viewer graphical user interface which is part of Oracle Enterprise Manager.

 

Oracle LogMiner vs Oracle Flashback

Do not confuse LogMiner with another popular feature introduced in Oracle Database 10g called Oracle Flashback. LogMiner and Oracle Flashback both use the redo log files but for different purposes. While LogMiner uses the redo log files to read data changes and reconstruct how data was changed, Oracle Flashback actually performs the reconstruction and presents the finished results of such changes, giving a view of the database at some point in time.

LogMiner Configuration

A LogMiner configuration consists of four basic components: the source database, the mining database, the redo log files containing the data changes of interest, and the LogMiner data dictionary.

Source Database

The database that produces the redo log files you want LogMiner to analyze.

 

Mining Database

The database that LogMiner uses to perform its analysis. This can be the same database as the source database or a separate, remote database.

 

Redo Log Files

The redo log files contain all changes made to the database which includes user data and the data dictionary.

 

LogMiner Data Dictionary

A LogMiner data dictionary consists of the memory data structures and the database tables that are used to store and retrieve information about objects in the database and their versions. This information is used by the LogMiner tool to translate internal object identifiers, data types, and binary data values found in the redo log files into readable object names, external data formats, and data values.

The LogMiner data dictionary is also referred to as simply the LogMiner dictionary.

LogMiner can obtain the dictionary from three different sources during the analysis phase: an external flat file, the redo log files, or the online catalog.

  • Extracting the LogMiner Dictionary to a Flat File

    Prior to Oracle9i, this was the only method available to create the dictionary. This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.

  • Extracting a LogMiner Dictionary to the Redo Log Files

    Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.

  • Using the Online Catalog

    Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.

Without a dictionary, LogMiner returns internal object IDs and presents data as binary values. For example, consider the following SQL statement that was written to the redo log file:


insert into hr.departments(department_id, department_name, manager_id, location_id) values(hr.departments_seq.nextval, 'Engineering', null, 1700);

Without a dictionary, LogMiner will return the following cryptic redo statement:


insert into "UNKNOWN"."OBJ# 68924"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('c2035b'),HEXTORAW('456e67696e656572696e67'),NULL,HEXTORAW('c212'));

With a dictionary available, LogMiner is able to translate the internal object identifiers and binary data values from the redo log files into something more readable:


insert into "HR"."DEPARTMENTS"("DEPARTMENT_ID","DEPARTMENT_NAME","MANAGER_ID","LOCATION_ID") values ('290','Engineering',NULL,'1700');

Figure 1 below shows an example LogMiner configuration. In this configuration, the source database named racdb generates redo log files that are archived and shipped to another database named testdb. A LogMiner dictionary has been extracted to these redo log files. The mining database, testdb, is where LogMiner will perform the work of analyzing the redo log files.


Figure 1: Example LogMiner Configuration

The configuration described above is just one of many valid LogMiner configurations. For example, it is possible to use the same database as both the source and mining database, or to provide the LogMiner dictionary by another method.

Constraints

Oracle LogMiner has a number of constraints related to the source and mining database, the data dictionary, and the redo log files that LogMiner will analyze.

LogMiner does not allow you to mix redo log files from different databases or to use a dictionary from a different database than the one that generated the redo log files to be analyzed.

Requirements

Archivelog Mode

Archivelog mode must be enabled in order to generate usable redo log files.


SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1653518336 bytes
Fixed Size                  2228904 bytes
Variable Size            1291849048 bytes
Database Buffers          352321536 bytes
Redo Buffers                7118848 bytes
Database mounted.

SQL> alter database archivelog;

Database altered.

SQL> alter database open;

Database altered.

SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

Security

Users need to be aware of the system and object level privileges required to use LogMiner. In most cases, the DBA will create a special user and role to perform LogMiner operations.

In this example, a new role named LOGMNR_ADMIN will be created and assigned the appropriate privileges for LogMiner. This role will be assigned to a new user named MINER that will be used to perform all LogMiner analysis.

Create LogMiner Role


SQL> create role logmnr_admin;

Role created.

SQL> grant create session to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$logmnr_contents to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$logmnr_dictionary to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$logmnr_parameters to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$logmnr_logs to logmnr_admin;

Grant succeeded.

SQL> grant select on v_$archived_log to logmnr_admin;

Grant succeeded.

SQL> grant execute_catalog_role
  2  , select any dictionary
  3  , select any transaction
  4  , select any table
  5  , create tablespace
  6  , drop tablespace to logmnr_admin;

Grant succeeded.

Create LogMiner User


SQL> create user miner identified by miner;

User created.

SQL> grant logmnr_admin to miner;

Grant succeeded.

SQL> alter user miner quota unlimited on users;

User altered.

Supplemental Logging

LogMiner is a redo-based application and as such, requires at least minimal supplemental logging be enabled on the source database. Oracle does not enable any supplemental logging by default.

Supplemental logging must be enabled on the source database before generating redo log files that will be analyzed by LogMiner.

When supplemental logging is enabled, information about selected columns or all columns (in addition to the columns actually being modified) is added to the redo stream and written to the redo log files to uniquely identify a row. This additional information is needed by LogMiner to identify, group, and merge the redo operations associated with DML changes. With minimal supplemental logging enabled, LogMiner is able to reconstruct DML statements that uniquely identify a row without having to rely on a ROWID, and it has the information necessary to support chained rows and various storage arrangements (for example, clustered tables).

Starting with Oracle Database 11g, Oracle LogMiner Viewer has been incorporated into the Oracle Enterprise Manager web-based interface and also integrated with the new Flashback Transaction feature, making it simple to recover transactions that may have been modified by mistake. Flashback Transaction allows the changes made by a transaction to be undone, optionally including changes made by dependent transactions. Supplemental logging for primary key columns must be enabled for Flashback Transaction to work.

At the very least, enable minimal supplemental logging on the source database. In order for the integrated Flashback Transaction feature to work with LogMiner, supplemental logging for primary key columns must also be enabled.


SQL> alter database add supplemental log data;

Database altered.

SQL> alter database add supplemental log data (primary key) columns;

Database altered.

Verify that supplemental logging is enabled. If the query below returns a value of YES or IMPLICIT, supplemental logging is enabled.


SQL> select supplemental_log_data_min, supplemental_log_data_pk from v$database;

SUPPLEMENTAL_LOG_DATA_MIN  SUPPLEMENTAL_LOG_DATA_PK
-------------------------- -------------------------
YES                        YES

In order for the new supplemental log configuration to begin archiving the additional column data to the redo logs, force a log switch.


SQL> alter system switch logfile;

System altered.

LogMiner Components

LogMiner is a set of two PL/SQL packages and several views. The LogMiner packages are used to prepare a LogMiner session and to perform analysis on the redo log files of interest. The results of the LogMiner analysis, known as the LogMiner contents, are available through the private view V$LOGMNR_CONTENTS.

LogMiner PL/SQL Packages

DBMS_LOGMNR_D

Contains two procedures used to prepare a LogMiner session.


SET_TABLESPACE


The SET_TABLESPACE procedure is an optional procedure that allows you to re-create all LogMiner tables in an alternate tablespace other than the default SYSAUX tablespace.


DBMS_LOGMNR_D.SET_TABLESPACE ( new_tablespace IN VARCHAR2 );

For example:


CREATE TABLESPACE logmnrts DATAFILE SIZE 25M AUTOEXTEND ON NEXT 25M MAXSIZE UNLIMITED;

Tablespace created.

BEGIN
  DBMS_LOGMNR_D.SET_TABLESPACE (
    new_tablespace => 'logmnrts' );
END;
/

PL/SQL procedure successfully completed.


BUILD


The BUILD procedure extracts the LogMiner data dictionary to either a flat file or one or more redo log files. The BUILD procedure is not used if the LogMiner session will be using the online catalog.


DBMS_LOGMNR_D.BUILD (
  dictionary_filename IN VARCHAR2,
  dictionary_location IN VARCHAR2,
  options             IN NUMBER );

Figure 2 shows a decision tree to help you select a LogMiner dictionary extraction method depending on the type of LogMiner analysis being performed.


Figure 2: Decision Tree for Data Dictionary Extraction

Extract LogMiner Dictionary to a Flat File

To extract the dictionary to a flat file, set the dictionary_filename parameter to the name of the file, set the dictionary_location parameter to a directory on the database machine listed in UTL_FILE_DIR, and optionally set the options parameter to the constant DBMS_LOGMNR_D.STORE_IN_FLAT_FILE. Using these options, the BUILD procedure queries the data dictionary tables of the current database and creates a text-based file containing their contents.

The directory specified in dictionary_location must exist as a value in the UTL_FILE_DIR initialization parameter. After setting this parameter, the instance must be restarted for it to take effect.
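A sketch of the required steps follows, assuming an spfile is in use and reusing the dictionary directory from the example later in this section:


SQL> ALTER SYSTEM SET utl_file_dir='/u04/app/oracle/logmnr' SCOPE=SPFILE;

System altered.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP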

Only extract the dictionary to a flat file after all DDL statements have been run and prior to the creation of any redo log files that will be analyzed. Do not run DBMS_LOGMNR_D.BUILD while DDL operations are occurring in the database, else the snapshot of the dictionary will not be consistent.

The DDL_DICT_TRACKING option to the DBMS_LOGMNR.START_LOGMNR procedure (explained later) is only supported on flat file dictionaries created with Oracle9i and later.


BEGIN
  DBMS_LOGMNR_D.BUILD (
    dictionary_filename => 'dictionary.ora',
    dictionary_location => '/u04/app/oracle/logmnr',
    options             => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE );
END;
/

PL/SQL procedure successfully completed.

SQL> !ls -l /u04/app/oracle/logmnr/dictionary.ora
-rw-r--r-- 1 oracle asmadmin 39250624 Sep  4 18:58 /u04/app/oracle/logmnr/dictionary.ora

Extract LogMiner Dictionary to Redo Log Files

To extract the dictionary to the redo log files, specify only the options parameter with the value set to the constant DBMS_LOGMNR_D.STORE_IN_REDO_LOGS. Do not specify the dictionary_filename or dictionary_location parameters when extracting the dictionary to the redo log files.

The size of the LogMiner dictionary may cause it to be contained in multiple redo log files.

Extracting the dictionary to the redo log files requires the following:

  • Oracle9i database or later
  • The COMPATIBLE initialization parameter must be set to 9.2.0 or higher
  • Archivelog mode enabled in order to generate usable redo log files
  • Supplemental logging (at least the minimum level) should be enabled in order to take advantage of all LogMiner features


BEGIN
  DBMS_LOGMNR_D.BUILD (
    options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS );
END;
/

PL/SQL procedure successfully completed.

Determine the redo log files containing the LogMiner dictionary and any other log files to be analyzed by LogMiner.


ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

COLUMN name FORMAT A65 HEAD "Log File Name"
COLUMN first_time FORMAT A20 HEAD "First Time"
COLUMN dictionary_begin FORMAT A5 HEAD "Dict|Begin"
COLUMN dictionary_end FORMAT A5 HEAD "Dict|End"

SELECT name
     , first_time
     , dictionary_begin
     , dictionary_end
FROM   v$archived_log
WHERE  name IS NOT NULL
  AND  first_time BETWEEN '06-SEP-2012 00:00:00' and '06-SEP-2012 00:30:00'
ORDER BY sequence#;

                                                                                       Dict  Dict
Log File Name                                                     First Time           Begin End
----------------------------------------------------------------- -------------------- ----- -----
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2865.1042.793238473 06-SEP-2012 00:00:57 NO    NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2866.1097.793238657 06-SEP-2012 00:01:12 NO    NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2867.1111.793238657 06-SEP-2012 00:04:16 NO    NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2868.266.793238661  06-SEP-2012 00:04:17 YES   NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2869.1015.793238663 06-SEP-2012 00:04:21 NO    YES
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2870.330.793238781  06-SEP-2012 00:04:23 NO    NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2871.968.793238969  06-SEP-2012 00:06:20 NO    NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2872.1075.793239475 06-SEP-2012 00:09:29 NO    NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2873.338.793239707  06-SEP-2012 00:17:55 NO    NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2874.1058.793240141 06-SEP-2012 00:21:47 NO    NO
+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2875.983.793241495  06-SEP-2012 00:29:00 NO    NO

 

DBMS_LOGMNR

Contains the procedures used to initialize the LogMiner tool and to begin and end a LogMiner session. The procedures used most often are described below.


ADD_LOGFILE


The ADD_LOGFILE procedure is used to add a redo log file to an existing or newly created list of log files for LogMiner to analyze.


DBMS_LOGMNR.ADD_LOGFILE (
  LogFileName IN VARCHAR2,
  options     IN BINARY_INTEGER default ADDFILE );

Add First Redo Log File

When adding the first redo log file to the list, specify the name and location of the log file and set the options parameter to the constant DBMS_LOGMNR.NEW. This will start a new LogMiner session and a new list of redo log files for analysis.

 

The NEW option implicitly calls the DBMS_LOGMNR.END_LOGMNR procedure to end the current LogMiner session and then creates a new session. The new LogMiner session starts a new list of redo log files to be analyzed, beginning with the specified redo log file.

The following example adds the first archived redo log file that contains the start of the LogMiner dictionary.


BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2868.266.793238661',
    options     => DBMS_LOGMNR.NEW );
END;
/

PL/SQL procedure successfully completed.

Add Additional Redo Log Files

When adding additional redo log files to the LogMiner session, specify the name and location of the log file and set the options parameter to DBMS_LOGMNR.ADDFILE (or leave unspecified to accept the default value of ADDFILE).


BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2869.1015.793238663',
    options     => DBMS_LOGMNR.ADDFILE );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2870.330.793238781',
    options     => DBMS_LOGMNR.ADDFILE );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2871.968.793238969',
    options     => DBMS_LOGMNR.ADDFILE );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2872.1075.793239475',
    options     => DBMS_LOGMNR.ADDFILE );
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2873.338.793239707',
    options     => DBMS_LOGMNR.ADDFILE );
END;
/

PL/SQL procedure successfully completed.


REMOVE_LOGFILE


The REMOVE_LOGFILE procedure removes a redo log file (for example, one added by mistake) from the existing list of redo log files for LogMiner to process.


DBMS_LOGMNR.REMOVE_LOGFILE ( LogFileName IN VARCHAR2 );

The following example removes an unnecessary redo log file from the current LogMiner session.


BEGIN
  DBMS_LOGMNR.REMOVE_LOGFILE (
    LogFileName => '+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2873.338.793239707' );
END;
/

PL/SQL procedure successfully completed.


START_LOGMNR


The START_LOGMNR procedure starts LogMiner by loading the dictionary that LogMiner will use to translate internal object identifiers to human-readable names. This procedure must be called before reading the log entries from the V$LOGMNR_CONTENTS view (described below).


DBMS_LOGMNR.START_LOGMNR (
  startScn     IN NUMBER default 0,
  endScn       IN NUMBER default 0,
  startTime    IN DATE default '01-jan-1988',
  endTime      IN DATE default '31-dec-2110',
  DictFileName IN VARCHAR2 default '',
  Options      IN BINARY_INTEGER default 0 );

Dictionary from Redo Log Files

If reading the dictionary from the redo log files, specify the options parameter with the value set to the constant DBMS_LOGMNR.DICT_FROM_REDO_LOGS. The redo log file(s) containing the LogMiner dictionary created by the DBMS_LOGMNR_D.BUILD procedure must have been explicitly added to the current LogMiner session through the DBMS_LOGMNR.ADD_LOGFILE procedure. LogMiner expects to find the LogMiner dictionary in the redo log files specified for the current LogMiner session.


BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS );
END;
/

PL/SQL procedure successfully completed.

Dictionary from Online Catalog

If reading the dictionary from the current online catalog, specify the options parameter with the value set to the constant DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG. In this case, it is not necessary to have previously extracted the LogMiner dictionary to a flat file or to the redo log files being analyzed through the DBMS_LOGMNR_D.BUILD procedure.


BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG );
END;
/

PL/SQL procedure successfully completed.

In addition to using the online catalog to analyze online redo log files, you can use it to analyze archived redo log files, if you are on the same system that generated the archived redo log files.

The online catalog contains the latest information about the database and may be the fastest way to start your analysis. Because DDL operations that change important tables are somewhat rare, the online catalog generally contains the information you need for your analysis.

Remember, however, that the online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as a table is altered, the online catalog is updated and no longer reflects the previous version of the table. This means that LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table. Instead, LogMiner generates non-executable SQL (including hexadecimal-to-raw formatting of binary values) in the SQL_REDO column of the V$LOGMNR_CONTENTS view similar to the following example:


insert into HR.EMPLOYEES("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('c2035b'),HEXTORAW('456e67696e656572696e67'),NULL,HEXTORAW('c212'));

Expect to see a value of 2 in the STATUS column of the V$LOGMNR_CONTENTS view if the table definition in the database does not match the table definition in the redo log file.
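A quick way to spot rows affected by this mismatch is to filter on the STATUS column while mining. This query is only a sketch:


SELECT status
     , sql_redo
FROM   v$logmnr_contents
WHERE  status = 2;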

The online catalog option requires that the database be open.

The online catalog option is not valid with the DDL_DICT_TRACKING option of DBMS_LOGMNR.START_LOGMNR.

Dictionary from Flat File

If reading the dictionary from a flat file located on the database machine, specify the name and location of the file using the DictFileName parameter. This file must have been created previously through the DBMS_LOGMNR_D.BUILD procedure. The DictFileName parameter is only valid when reading the dictionary from a flat file and should not be set when reading the dictionary from the online catalog or the redo log files.


BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    DictFileName => '/u04/app/oracle/logmnr/dictionary.ora' );
END;
/

PL/SQL procedure successfully completed.

Specifying Search Limit Options

You can limit the search to a specific date and time range using the startTime and endTime parameters. This can be useful to isolate, for example, only those changes that occurred during a certain time frame. Note that the startScn and endScn parameters behave in a similar fashion but filter records based on the System Change Number (SCN).


ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    startTime => '06-SEP-2012 00:04:17',
    endTime   => '06-SEP-2012 00:06:20',
    options   => DBMS_LOGMNR.DICT_FROM_REDO_LOGS );
END;
/

PL/SQL procedure successfully completed.

Specifying Multiple Options

The DBMS_LOGMNR.START_LOGMNR procedure can be executed with several other options (specified through the options parameter) that control the behavior of the LogMiner session. Click here for a list of all available constants that can be specified for the options parameter of the DBMS_LOGMNR.START_LOGMNR procedure.

To call multiple options at one time using the DBMS_LOGMNR.START_LOGMNR procedure, separate each option using the + character as demonstrated below:


BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS
             + DBMS_LOGMNR.DDL_DICT_TRACKING );
END;
/

PL/SQL procedure successfully completed.

 

About DDL dictionary tracking

If the LogMiner dictionary in use is a flat file or in the redo log files, use the DBMS_LOGMNR.DDL_DICT_TRACKING option to the DBMS_LOGMNR.START_LOGMNR procedure so LogMiner can update its internal dictionary if a DDL event occurs. This ensures that correct SQL_REDO and SQL_UNDO information is maintained for objects that are modified after the LogMiner internal dictionary is built. The database to which LogMiner is connected must be open.

Be aware that the DDL_DICT_TRACKING option to the DBMS_LOGMNR.START_LOGMNR procedure is not supported for flat file dictionaries created prior to Oracle9i. If you attempt to use the DDL_DICT_TRACKING option with a LogMiner database extracted to a flat file prior to Oracle9i, the ORA-01330 error (problem loading a required build table) is returned.

The following list describes the requirements for specifying the DDL_DICT_TRACKING option with the DBMS_LOGMNR.START_LOGMNR procedure.

  • The DDL_DICT_TRACKING option is not valid with the DICT_FROM_ONLINE_CATALOG option.
  • The DDL_DICT_TRACKING option requires that the database be open.
  • Supplemental logging must be enabled database-wide, or log groups must have been created for the tables of interest.

To prevent the dictionary from being reloaded at the beginning of each SELECT operation on the V$LOGMNR_CONTENTS view, also include the DBMS_LOGMNR.NO_DICT_RESET_ONSELECT option to the START_LOGMNR procedure.
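Putting these options together, a session that mines a redo-log-based dictionary with DDL tracking and no dictionary reset on SELECT might be started as follows (a sketch only):


BEGIN
  DBMS_LOGMNR.START_LOGMNR (
    options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS
             + DBMS_LOGMNR.DDL_DICT_TRACKING
             + DBMS_LOGMNR.NO_DICT_RESET_ONSELECT );
END;
/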


END_LOGMNR


The END_LOGMNR procedure finishes a LogMiner session. Because this procedure performs cleanup operations that may not otherwise be done, you must use it to properly end a LogMiner session. This procedure is called automatically when you log out of a database session or when you call DBMS_LOGMNR.ADD_LOGFILE and specify the NEW option.


DBMS_LOGMNR.END_LOGMNR;

The following example ends a LogMiner session and performs all cleanup operations.


BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/

PL/SQL procedure successfully completed.

LogMiner Views

V$LOGMNR_CONTENTS

The redo entries mined from the redo log files by LogMiner are made available through the V$LOGMNR_CONTENTS view for the current LogMiner session. Out of all of the LogMiner views, this is the one that you will use most often.

LogMiner uses the redo records embedded in the redo log files and the LogMiner dictionary to regenerate equivalent SQL statements executed on the database. The results of the LogMiner analysis, known as the LogMiner contents, are made available through a relational interface provided by the V$LOGMNR_CONTENTS private view. This SQL interface can be used to query SQL statements executed on the database for any point in time for which you have redo log files. To query this view, you must have the SELECT ANY TRANSACTION privilege.

When a SELECT statement is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially. Translated records from the redo log files are returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the filter criteria specified at startup (endTime or endScn) are met or the end of the redo log file is reached.

It is important to note that LogMiner operates at the session level, so no one can view the LogMiner contents through another session. The results in V$LOGMNR_CONTENTS are only visible during the life of the session that runs the DBMS_LOGMNR.START_LOGMNR procedure. This is because all the LogMiner memory is PGA memory, so it is neither visible to other sessions nor persistent after the session disconnects.
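Because the contents disappear when the session ends, a common practice is to copy them to a permanent table before ending the LogMiner session so the results can be queried later or shared with other sessions. The table name below is hypothetical:


CREATE TABLE miner.logmnr_results AS
  SELECT * FROM v$logmnr_contents;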

Users will often combine the SQL_REDO, COMMIT_TIMESTAMP, and SCN columns to identify the activity associated with a particular transaction that occurred in the database. The SQL_UNDO column can then be used to generate a SQL statement to undo the effect of the original statement that made the change. In addition, the V$LOGMNR_CONTENTS view contains the segment name and owner, which is useful in further identifying the objects being altered.

Click the following link for a description of the columns available with the V$LOGMNR_CONTENTS view (version 11.2).

  V$LOGMNR_CONTENTS

An example query against V$LOGMNR_CONTENTS may look like:


ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

COLUMN username FORMAT A8
COLUMN operation FORMAT A9
COLUMN sql_redo FORMAT A25 WORD_WRAPPED
COLUMN sql_undo FORMAT A25 WORD_WRAPPED
COLUMN timestamp FORMAT A20

SELECT username
     , operation
     , sql_redo
     , sql_undo
     , timestamp
     , scn
FROM   v$logmnr_contents
WHERE  username = 'SCOTT'
  AND  operation = 'INSERT'
  AND  seg_owner = 'HR';

USERNAME OPERATION SQL_REDO                  SQL_UNDO                  TIMESTAMP                  SCN
-------- --------- ------------------------- ------------------------- -------------------- ---------
SCOTT    INSERT    insert into               delete from               06-SEP-2012 00:07:38  86033925
                   "HR"."DEPARTMENTS"("DEPAR "HR"."DEPARTMENTS" where
                   TMENT_ID","DEPARTMENT_NAM "DEPARTMENT_ID" = '310'
                   E","MANAGER_ID","LOCATION and "DEPARTMENT_NAME" =
                   _ID") values              'Engineering' and
                   ('310','Engineering',NULL "MANAGER_ID" IS NULL and
                   ,'1700');                 "LOCATION_ID" = '1700'
                                             and ROWID =
                                             'AAAQ08AAEAAAACvAAc';

 

V$LOGMNR_DICTIONARY

The V$LOGMNR_DICTIONARY operational view shows information about a LogMiner dictionary that was created using the STORE_IN_FLAT_FILE option to DBMS_LOGMNR_D.BUILD, including information about the database from which the LogMiner dictionary was created.

Click the following link for a description of the columns available with the V$LOGMNR_DICTIONARY view (version 11.2).

  V$LOGMNR_DICTIONARY

An example query against V$LOGMNR_DICTIONARY to obtain information about a dictionary created using the STORE_IN_FLAT_FILE option to DBMS_LOGMNR_D.BUILD may look like:


ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

COLUMN timestamp FORMAT A20
COLUMN filename FORMAT A38

SELECT timestamp
     , filename
     , dictionary_scn
     , NVL(info, 'VALID') as DICTIONARY_STATUS
FROM   v$logmnr_dictionary;

TIMESTAMP            FILENAME                               DICTIONARY_SCN DICTIONARY_STATUS
-------------------- -------------------------------------- -------------- ------------------
28-AUG-2012 20:14:54 /u04/app/oracle/logmnr/dictionary.ora        79935187 VALID

 

V$LOGMNR_PARAMETERS

The V$LOGMNR_PARAMETERS operational view contains information about the current LogMiner session as specified in the call to the DBMS_LOGMNR.START_LOGMNR procedure.

Click the following link for a description of the columns available with the V$LOGMNR_PARAMETERS view (version 11.2).

  V$LOGMNR_PARAMETERS


An example query against V$LOGMNR_PARAMETERS may look like:


ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

COLUMN start_date FORMAT A20
COLUMN end_date   FORMAT A20

SELECT start_date
     , end_date
     , start_scn
     , end_scn
     , options
FROM   v$logmnr_parameters;

START_DATE           END_DATE              START_SCN    END_SCN    OPTIONS
-------------------- -------------------- ---------- ---------- ----------
                     01-JAN-2111 00:59:59   86029483          0         40

 

V$LOGMNR_LOGS

Use the V$LOGMNR_LOGS operational view to determine which redo log files are being analyzed in the current LogMiner session. This view contains one row for each redo log file that was loaded using the DBMS_LOGMNR.ADD_LOGFILE procedure.

Click the following link for a description of the columns available with the V$LOGMNR_LOGS view (version 11.2).

  V$LOGMNR_LOGS

An example query against V$LOGMNR_LOGS may look like:


ALTER SESSION SET nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

COLUMN filename         FORMAT A65
COLUMN low_time         FORMAT A20
COLUMN high_time        FORMAT A20
COLUMN dictionary_begin FORMAT A10 HEAD 'DICT_BEGIN'
COLUMN dictionary_end   FORMAT A8  HEAD 'DICT_END'

SELECT log_id
     , filename
     , low_time
     , high_time
     , dictionary_begin
     , dictionary_end
FROM   v$logmnr_logs;

LOG_ID FILENAME                                                          LOW_TIME             HIGH_TIME            DICT_BEGIN DICT_END
------ ----------------------------------------------------------------- -------------------- -------------------- ---------- --------
  2868 +FRA/racdb/archivelog/2012_09_06/thread_1_seq_2868.266.793238661  06-SEP-2012 00:04:17 06-SEP-2012 00:04:21 YES        NO
  2869 +FRA/racdb/archivelog/2012_09_06/thread_1_seq_2869.1015.793238663 06-SEP-2012 00:04:21 06-SEP-2012 00:04:23 NO         YES
  2870 +FRA/racdb/archivelog/2012_09_06/thread_1_seq_2870.330.793238781  06-SEP-2012 00:04:23 06-SEP-2012 00:06:20 NO         NO
  2871 +FRA/racdb/archivelog/2012_09_06/thread_1_seq_2871.968.793238969  06-SEP-2012 00:06:20 06-SEP-2012 00:09:29 NO         NO
  2872 +FRA/racdb/archivelog/2012_09_06/thread_1_seq_2872.1075.793239475 06-SEP-2012 00:09:29 06-SEP-2012 00:17:55 NO         NO

How LogMiner Works

The examples included in this guide should have given you a glimpse of how LogMiner operates. Analyzing redo log files with Oracle LogMiner will commonly include the steps outlined in this section. Depending on the LogMiner configuration and the dictionary options being used, some of these steps are optional.

All LogMiner operations are performed using procedures contained in the DBMS_LOGMNR_D and DBMS_LOGMNR PL/SQL packages, or through the Oracle LogMiner Viewer graphical user interface, which is part of Oracle Enterprise Manager. This section describes how to analyze redo log files with LogMiner using the LogMiner PL/SQL packages.

  1. Verify that at least minimal supplemental logging is enabled on the source database before generating redo log files that will be analyzed by LogMiner.

  2. Specify a LogMiner dictionary on the source database. Use the DBMS_LOGMNR_D.BUILD procedure when extracting the dictionary to the redo log files or to a flat file. When using an online catalog, the dictionary option will be specified when you start the LogMiner process.

  3. Identify the redo log files that contain the data of interest that will be analyzed by LogMiner.

  4. Start a LogMiner session, either by explicitly adding the previously identified redo log files with the DBMS_LOGMNR.ADD_LOGFILE procedure, or by directing LogMiner to build the list of log files for analysis automatically when you start it with the DBMS_LOGMNR.START_LOGMNR procedure.

    A LogMiner session begins with a call to the DBMS_LOGMNR.ADD_LOGFILE procedure (if you plan to explicitly specify log files) or the DBMS_LOGMNR.START_LOGMNR procedure (if you plan to use continuous mining).

  5. Start LogMiner by calling the DBMS_LOGMNR.START_LOGMNR procedure which loads the dictionary that LogMiner will use to translate internal object identifiers to human-readable names. You can optionally filter the search to a specific date and time range using the startTime and endTime parameters or by SCN using the startScn and endScn parameters of the DBMS_LOGMNR.START_LOGMNR procedure.

    The DBMS_LOGMNR.START_LOGMNR procedure must be called before reading the log entries from the V$LOGMNR_CONTENTS view (described next).

  6. Query the results of the LogMiner analysis process through the V$LOGMNR_CONTENTS view.

  7. A LogMiner session ends with a call to the DBMS_LOGMNR.END_LOGMNR procedure.
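The seven steps above can be sketched end to end in SQL*Plus. This is a minimal sketch rather than a production procedure: the flat-file dictionary location (/u04/app/oracle/logmnr), the archived log name, and the SCN range are assumptions taken from the earlier examples; substitute values from your own environment.

```sql
-- Step 1 (once, on the source database): enable minimal supplemental logging.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Step 2: extract the LogMiner dictionary to a flat file.
-- Assumes the database is permitted to write to /u04/app/oracle/logmnr.
BEGIN
  DBMS_LOGMNR_D.BUILD(
      dictionary_filename => 'dictionary.ora',
      dictionary_location => '/u04/app/oracle/logmnr');
END;
/

-- Steps 3 and 4: add the redo log file(s) of interest to the session.
-- DBMS_LOGMNR.NEW starts a new list; use DBMS_LOGMNR.ADDFILE for subsequent files.
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
      logfilename => '+FRA/racdb/archivelog/2012_09_06/thread_1_seq_2872.1075.793239475',
      options     => DBMS_LOGMNR.NEW);
END;
/

-- Step 5: start LogMiner, pointing it at the flat-file dictionary and
-- (optionally) restricting the analysis to an SCN range.
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
      dictfilename => '/u04/app/oracle/logmnr/dictionary.ora',
      startscn     => 86029483,
      endscn       => 86033925);
END;
/

-- Step 6: query the results of the analysis.
SELECT username, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'HR';

-- Step 7: end the session and release its resources.
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```

Note that V$LOGMNR_CONTENTS is only populated for the duration of the LogMiner session; once DBMS_LOGMNR.END_LOGMNR is called, the results are no longer available to query.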

To see a collection of examples using LogMiner, consult the following tutorial:

Further Reading

While this guide provides an overview to become familiar with Oracle LogMiner, it is by no means a substitute for the official Oracle documentation (see list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options and additional constraints with Oracle LogMiner. Oracle's official documentation site is docs.oracle.com.

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.



Copyright (c) 1998-2017 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on
Friday, 19-Oct-2012 13:02:10 EDT