DBA Tips Archive for Oracle
by Michael New, MichaelNew@earthlink.net, Gradation LLC
Every instance of an Oracle database (clustered or non-clustered) records changes made to database blocks in redo log files. The Oracle instance writes these changes in the form of redo records (also known as redo entries) to its associated online redo logs. Online redo log files are written to in a circular manner. Once an online redo log fills, Oracle moves to the next one. If the database is in archived redo log mode, Oracle will make a copy of the online redo log to what is known as an archived redo log before the online version gets reused and overwritten by the circular nature of the logging process.
Redo log files are commonly used to recover a database after some type of failure or to refresh a physical standby database. But what if you wanted to read the actual redo records directly from the redo log files? For example, an organization may need to capture all changes made to the database for auditing purposes or data analysis. Prior to Oracle8i, this proved to be a painful exercise because the redo log files use an unpublished format that is not human-readable. The only solution then was for the DBA to send a dump file of the redo log files to Oracle Support.
In Oracle8i, LogMiner was introduced as a tool capable of reading redo records found in the redo log files using a relational interface.
While LogMiner can be used for the aforementioned needs of auditing or data analysis, it can also be used to recover from user errors, sometimes referred to as logical corruption. Unlike physical database corruption from media failure or block corruption, logical corruption occurs as the result of human or application mistakes with the data rather than hardware or software malfunction. For example, an application updates or deletes from a table with an incorrect WHERE clause or, even worse, a critical object like a table gets dropped from the database. In situations like this, LogMiner can be used not only to pinpoint when the logical corruption to the database occurred, but also to construct the SQL needed to reverse or UNDO the changes. This can drastically reduce the Mean Time To Recovery (MTTR) over previous methods such as point-in-time recovery or restoring from an export.
This article provides insight into the LogMiner utility and the different components that make it up.
To see a collection of examples using LogMiner, consult the following tutorial:
Oracle LogMiner is a utility introduced in Oracle8i that provides a relational interface to redo records found in redo log files. LogMiner enables users to query online or archived redo log files through standard SQL statements or using a GUI provided through Oracle Enterprise Manager (OEM).
The benefits to LogMiner include identifying the time of a particular database event, generating the SQL statements needed to reverse incorrect changes made to the database, gathering data for capacity planning, or auditing certain operations run against the database.
The LogMiner utility can be accessed through SQL statements (command-line) or through the Oracle LogMiner Viewer graphical user interface which is part of Oracle Enterprise Manager.
A LogMiner configuration consists of four basic components: the source database, the mining database, the redo log files containing the data changes of interest, and the LogMiner data dictionary.
The database that produces the redo log files you want LogMiner to analyze.
The database that LogMiner uses to perform its analysis on. This can be the source database or a remote database.
Redo Log Files
The redo log files contain all changes made to the database which includes user data and the data dictionary.
LogMiner Data Dictionary
A LogMiner data dictionary consists of the memory data structures and the database tables that are used to store and retrieve information about objects in the database and their versions. This information is used by the LogMiner tool to translate internal object identifiers, data types, and binary data values found in the redo log files into readable object names, external data formats, and data values.
The LogMiner data dictionary is also referred to as simply the LogMiner dictionary.
LogMiner can extract the dictionary from three different sources during the analysis phase: an external flat file, the redo log files, or the online catalog.
Extracting the LogMiner Dictionary to a Flat File
Prior to Oracle9i, this was the only method available to create the dictionary. This option is maintained for backward compatibility with previous releases. This option does not guarantee transactional consistency. Oracle recommends that you use either the online catalog or extract the dictionary from redo log files instead.
Extracting a LogMiner Dictionary to the Redo Log Files
Oracle recommends that you use this option when you do not expect to have access to the source database from which the redo log files were created, or if you anticipate that changes will be made to the column definitions in the tables of interest.
Using the Online Catalog
Oracle recommends that you use this option when you will have access to the source database from which the redo log files were created and when no changes to the column definitions in the tables of interest are anticipated. This is the most efficient and easy-to-use option.
Without a dictionary, LogMiner returns internal object IDs and presents data as binary data. For example, consider the following SQL statement that got written to the redo log file:
insert into hr.departments(department_id, department_name, manager_id, location_id) values(hr.departments_seq.nextval, 'Engineering', null, 1700);
Without a dictionary, LogMiner will return the following cryptic redo statement:
insert into "UNKNOWN"."OBJ# 68924"("COL 1","COL 2","COL 3","COL 4") values (HEXTORAW('c2035b'),HEXTORAW('456e67696e656572696e67'),NULL,HEXTORAW('c212'));
With a dictionary available, LogMiner is able to translate the internal object identifiers and binary data values from the redo log files into something more readable:
insert into "HR"."DEPARTMENTS"("DEPARTMENT_ID","DEPARTMENT_NAME","MANAGER_ID","LOCATION_ID") values ('290','Engineering',NULL,'1700');
Figure 1 below shows an example LogMiner configuration. In this configuration, the source database named racdb generates redo log files that are archived and shipped to another database named testdb. A LogMiner dictionary has been extracted to these redo log files. The mining database, testdb, is where LogMiner will perform the work of analyzing the redo log files.
Figure 1: Example LogMiner Configuration
The configuration described above is just one of many valid LogMiner configurations. For example, it is possible to use the same database for the source and mining database, or to use another method to provide the LogMiner dictionary.
Oracle LogMiner has a number of constraints related to the source and mining database, the data dictionary, and the redo log files that LogMiner will analyze.
LogMiner does not allow you to mix redo log files from different databases or to use a dictionary from a different database than the one that generated the redo log files to be analyzed.
Archivelog mode must be enabled in order to generate usable redo log files.
Users need to be aware of the system and object level privileges required to use LogMiner. In most cases, the DBA will create a special user and role to perform LogMiner operations.
In this example, a new role named LOGMNR_ADMIN will be created and assigned the appropriate privileges for LogMiner. This role will be assigned to a new user named MINER that will be used to perform all LogMiner analysis.
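A sketch of what that setup might look like follows. The role and user names come from the text above; the password and the exact privilege list are assumptions for illustration, so grant only what your environment and Oracle release actually require.

```sql
-- Run as a DBA user. Privilege list is an illustrative assumption.
CREATE ROLE logmnr_admin;
GRANT SELECT ANY TRANSACTION TO logmnr_admin;
GRANT EXECUTE ON dbms_logmnr   TO logmnr_admin;
GRANT EXECUTE ON dbms_logmnr_d TO logmnr_admin;
GRANT SELECT ON v_$logmnr_contents TO logmnr_admin;
GRANT SELECT ON v_$logmnr_logs     TO logmnr_admin;
GRANT SELECT ON v_$archived_log    TO logmnr_admin;

CREATE USER miner IDENTIFIED BY miner_pwd   -- hypothetical password
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, logmnr_admin TO miner;
```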
LogMiner is a redo-based application and as such, requires at least minimal supplemental logging be enabled on the source database. Oracle does not enable any supplemental logging by default.
Supplemental logging must be enabled on the source database before generating redo log files that will be analyzed by LogMiner.
Minimal supplemental logging ensures that the redo stream contains the minimum additional information LogMiner needs to identify, group, and merge the redo operations associated with DML changes, and provides LogMiner with the information necessary to support chained rows and various storage arrangements (for example, clustered tables). Identification key logging goes a step further: information about selected columns or all columns (in addition to the columns being modified) is added to the redo stream and written to the redo log files to uniquely identify a row. With this additional information, LogMiner is able to reconstruct DML statements that uniquely identify a row without having to rely on a ROWID.
Starting with Oracle Database 11g, Oracle LogMiner Viewer has been incorporated into the Oracle Enterprise Manager web-based interface and also integrated with the new Flashback Transaction feature, making it simple to recover transactions that may have been modified by mistake. Flashback Transaction allows the changes made by a transaction to be undone, optionally including changes made by dependent transactions. Supplemental logging for primary key columns must be enabled for Flashback Transaction to work.
At the very least, enable minimal supplemental logging on the source database. In order for the integrated Flashback Transaction feature to work with LogMiner, supplemental logging for primary key columns must also be enabled.
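Both levels can be enabled with ALTER DATABASE statements on the source database, for example:

```sql
-- Enable minimal supplemental logging (required by LogMiner)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Additionally log primary key columns (required for Flashback Transaction)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```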
Verify that supplemental logging is enabled. If the query below returns a value of YES or IMPLICIT, supplemental logging is enabled.
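For example:

```sql
-- YES or IMPLICIT indicates supplemental logging is enabled
SELECT supplemental_log_data_min, supplemental_log_data_pk
FROM   v$database;
```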
In order for the new supplemental log configuration to begin archiving the additional column data to the redo logs, force a log switch.
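For example:

```sql
ALTER SYSTEM SWITCH LOGFILE;

-- or, to also force the current online redo log to be archived:
ALTER SYSTEM ARCHIVE LOG CURRENT;
```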
LogMiner is a set of two PL/SQL packages and several views. The LogMiner packages are used to prepare a LogMiner session and to perform analysis on the redo log files of interest. The results of the LogMiner analysis, known as the LogMiner contents, are available through the private view V$LOGMNR_CONTENTS.
Contains two procedures used to prepare a LogMiner session.
The SET_TABLESPACE procedure is an optional procedure that allows you to re-create all LogMiner tables in a tablespace other than the default SYSAUX tablespace.
The BUILD procedure extracts the LogMiner data dictionary to either a flat file or one or more redo log files. The BUILD procedure is not used if the LogMiner session will be using the online catalog.
Figure 2 shows a decision tree to help you select a LogMiner dictionary extraction method depending on the type of LogMiner analysis being performed.
Extract LogMiner Dictionary to a Flat File
To extract the dictionary to a flat file, set the dictionary_filename parameter to the name of the file, set the dictionary_location parameter to a directory on the database machine specified in UTL_FILE_DIR, and optionally set the options parameter to the constant DBMS_LOGMNR_D.STORE_IN_FLAT_FILE. Using these options, the BUILD procedure queries the data dictionary tables of the current database and creates a text-based file containing their contents.
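A sketch of the call might look like the following; the file name and directory are assumptions, and the directory must appear in the UTL_FILE_DIR initialization parameter:

```sql
-- Assumes /u01/app/oracle/logmnr is listed in UTL_FILE_DIR
BEGIN
  DBMS_LOGMNR_D.BUILD(
    dictionary_filename => 'dictionary.ora',
    dictionary_location => '/u01/app/oracle/logmnr',
    options             => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
END;
/
```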
The directory specified in dictionary_location must exist as a value in the UTL_FILE_DIR initialization parameter. After setting this parameter, the instance will need to be restarted in order to take effect.
Only extract the dictionary to a flat file after all DDL statements have been run and prior to the creation of any redo log files that will be analyzed. Do not run DBMS_LOGMNR_D.BUILD while DDL operations are occurring in the database, else the snapshot of the dictionary will not be consistent.
The DDL_DICT_TRACKING option to the DBMS_LOGMNR.START_LOGMNR procedure (explained later) is only supported on flat file dictionaries created with Oracle9i and later.
Extract LogMiner Dictionary to Redo Log Files
To extract the dictionary to the redo log files, specify only the options parameter with the value set to the constant DBMS_LOGMNR_D.STORE_IN_REDO_LOGS. Do not specify the dictionary_filename or dictionary_location parameters when extracting the dictionary to the redo log files.
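In that case the call reduces to:

```sql
-- Writes the LogMiner dictionary into the redo stream;
-- the database must be open and in ARCHIVELOG mode.
BEGIN
  DBMS_LOGMNR_D.BUILD(
    options => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
END;
/
```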
The size of the LogMiner dictionary may cause it to be contained in multiple redo log files.
Extracting the dictionary to the redo log files requires the following:
Determine the redo log files containing the LogMiner dictionary and any other log files to be analyzed by LogMiner.
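When the dictionary was extracted to the redo log files, a query against V$ARCHIVED_LOG such as the following identifies the archived logs that contain the beginning and end of the dictionary:

```sql
SELECT name, sequence#, dictionary_begin, dictionary_end
FROM   v$archived_log
WHERE  dictionary_begin = 'YES'
   OR  dictionary_end   = 'YES';
```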
Contains the procedures used to initialize the LogMiner tool and to begin and end a LogMiner session. The procedures used most often are described below.
The ADD_LOGFILE procedure is used to add a redo log file to an existing or newly created list of log files for LogMiner to analyze.
Add First Redo Log File
When adding the first redo log file to the list, specify the name and location of the log file and set the options parameter to the constant DBMS_LOGMNR.NEW. This will start a new LogMiner session and a new list of redo log files for analysis.
The following example adds the first archived redo log file that contains the start of the LogMiner dictionary.
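The file name below is a hypothetical archived redo log path; substitute one from your own archive destination:

```sql
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/app/oracle/arch/arch_1_458.arc',  -- hypothetical path
    options     => DBMS_LOGMNR.NEW);
END;
/
```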
Add Additional Redo Log Files
When adding additional redo log files to the LogMiner session, specify the name and location of the log file and set the options parameter to DBMS_LOGMNR.ADDFILE (or leave unspecified to accept the default value of ADDFILE).
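For example (again, a hypothetical file name):

```sql
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/app/oracle/arch/arch_1_459.arc',  -- hypothetical path
    options     => DBMS_LOGMNR.ADDFILE);
END;
/
```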
The REMOVE_LOGFILE procedure removes a redo log file (for example, one that was added by mistake) from the existing list of redo log files for LogMiner to process.
The following example removes an unnecessary redo log file from the current LogMiner session.
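A sketch of the call, using a hypothetical file name:

```sql
BEGIN
  DBMS_LOGMNR.REMOVE_LOGFILE(
    logfilename => '/u01/app/oracle/arch/arch_1_460.arc');  -- hypothetical path
END;
/
```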
The START_LOGMNR procedure starts LogMiner by loading the dictionary that LogMiner will use to translate internal object identifiers to human-readable names. This procedure must be called before reading the log entries from the V$LOGMNR_CONTENTS view (described below).
Dictionary from Redo Log Files
If reading the dictionary from the redo log files, specify the options parameter with the value set to the constant DBMS_LOGMNR.DICT_FROM_REDO_LOGS. The redo log file(s) including the LogMiner dictionary from the DBMS_LOGMNR_D.BUILD procedure need to have been explicitly loaded for the current LogMiner session through the DBMS_LOGMNR.ADD_LOGFILE procedure. LogMiner will expect to find the LogMiner dictionary in the redo log files specified for the current LogMiner session.
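For example:

```sql
-- The log files containing the dictionary must already have been
-- added with DBMS_LOGMNR.ADD_LOGFILE.
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_REDO_LOGS);
END;
/
```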
Dictionary from Online Catalog
If reading the dictionary from the current online catalog, specify the options parameter with the value set to the constant DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG. In this case, it is not necessary to have previously extracted the LogMiner dictionary to a flat file or in the redo log files being analyzed for the current LogMiner sessions through the DBMS_LOGMNR_D.BUILD procedure.
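For example:

```sql
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
```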
In addition to using the online catalog to analyze online redo log files, you can use it to analyze archived redo log files, if you are on the same system that generated the archived redo log files.
The online catalog contains the latest information about the database and may be the fastest way to start your analysis. Because DDL operations that change important tables are somewhat rare, the online catalog generally contains the information you need for your analysis.
Remember, however, that the online catalog can only reconstruct SQL statements that are executed on the latest version of a table. As soon as a table is altered, the online catalog is updated and no longer reflects the previous version of the table. This means that LogMiner will not be able to reconstruct any SQL statements that were executed on the previous version of the table. Instead, LogMiner generates non-executable SQL (including hexadecimal-to-raw formatting of binary values) in the SQL_REDO column of the V$LOGMNR_CONTENTS view similar to the following example:
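Such non-executable output resembles the earlier no-dictionary example (the object number and hex values here are hypothetical):

```sql
insert into "UNKNOWN"."OBJ# 68924"("COL 1","COL 2","COL 3","COL 4")
values (HEXTORAW('c2035b'),HEXTORAW('456e67696e656572696e67'),
        NULL,HEXTORAW('c212'));
```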
Expect to see a value of 2 in the STATUS column of the V$LOGMNR_CONTENTS view if the table definition in the database does not match the table definition in the redo log file.
The online catalog option requires that the database be open.
The online catalog option is not valid with the DDL_DICT_TRACKING option of DBMS_LOGMNR.START_LOGMNR.
Dictionary from Flat File
If reading the dictionary from a flat file located on the database machine, specify the name and location of the file using the dictfilename parameter. This file must have been created previously through the DBMS_LOGMNR_D.BUILD procedure. The dictfilename parameter is only valid when reading the dictionary from a flat file and should not be set when reading the dictionary from the online catalog or the redo log files.
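For example, assuming the dictionary was previously extracted to a hypothetical file /u01/app/oracle/logmnr/dictionary.ora:

```sql
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    dictfilename => '/u01/app/oracle/logmnr/dictionary.ora');
END;
/
```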
Specifying Search Limit Options
You can limit the search to a specific date and time range using the startTime and endTime parameters. This is useful, for example, to filter only those changes that occurred during a certain time frame. The startScn and endScn parameters behave in a similar fashion but filter records based on the System Change Number (SCN).
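A time-bounded session might be started like this (the dates are placeholders; the dictionary option is one reasonable choice among those described above):

```sql
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    starttime => TO_DATE('2012-10-18 09:00:00', 'YYYY-MM-DD HH24:MI:SS'),
    endtime   => TO_DATE('2012-10-18 17:00:00', 'YYYY-MM-DD HH24:MI:SS'),
    options   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
```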
Specifying Multiple Options
The DBMS_LOGMNR.START_LOGMNR procedure can be executed with several other options (specified through the options parameter) that control the behavior of the LogMiner session. Consult the Oracle documentation for the DBMS_LOGMNR package for a list of all available constants that can be specified for the options parameter.
To call multiple options at one time using the DBMS_LOGMNR.START_LOGMNR procedure, separate each option using the + character as demonstrated below:
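For example (the particular combination of options shown is illustrative):

```sql
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
               DBMS_LOGMNR.COMMITTED_DATA_ONLY +
               DBMS_LOGMNR.PRINT_PRETTY_SQL);
END;
/
```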
The END_LOGMNR procedure finishes a LogMiner session. Because this procedure performs cleanup operations that may not otherwise be done, you must use it to properly end a LogMiner session. This procedure is called automatically when you log out of a database session or when you call DBMS_LOGMNR.ADD_LOGFILE and specify the NEW option.
The following example ends a LogMiner session and performs all cleanup operations.
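```sql
BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/
```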
The redo entries mined from the redo log files by LogMiner are made available through the V$LOGMNR_CONTENTS view for the current LogMiner session. Out of all of the LogMiner views, this is the one that you will use most often.
LogMiner uses the redo records embedded in the redo log files and the LogMiner dictionary to re-generate equivalent SQL statements executed on the database. The results of the LogMiner analysis, known as the LogMiner contents, are made available through a relational interface provided by the V$LOGMNR_CONTENTS private view. This SQL interface can be used to query SQL statements executed on the database for any point in time for which redo log files are available. To query this view, you must have the SELECT ANY TRANSACTION privilege.
When a SELECT statement is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially. Translated records from the redo log files are returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the filter criteria specified at startup (endTime or endScn) are met or the end of the redo log file is reached.
It is important to note that LogMiner is applied at the session level, so no one can view the LogMiner contents through another session. The results in V$LOGMNR_CONTENTS are only visible during the life of the session that runs the procedure DBMS_LOGMNR.START_LOGMNR. This is because all LogMiner memory is in PGA memory, so it is neither visible to other sessions nor persistent after the session disconnects.
Users will often combine the SQL_REDO, COMMIT_TIMESTAMP, and SCN columns to identify the activity associated with a particular transaction that occurred in the database. The SQL_UNDO column can then be used to generate a SQL statement that undoes the effect of the original statement that made the change. In addition, the V$LOGMNR_CONTENTS view contains the segment name and owner, which is useful in further identifying the objects being altered.
See the Oracle Database Reference (release 11.2) for a description of the columns available in the V$LOGMNR_CONTENTS view.
An example query against V$LOGMNR_CONTENTS may look like:
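The schema and table names below (from the earlier HR example) are illustrative:

```sql
SELECT scn, timestamp, username, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'HR'
AND    seg_name  = 'DEPARTMENTS';
```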
The V$LOGMNR_DICTIONARY operational view shows information about a LogMiner dictionary that was created using the STORE_IN_FLAT_FILE option to DBMS_LOGMNR_D.BUILD. The information shown includes information about the database from which the LogMiner dictionary was created.
See the Oracle Database Reference (release 11.2) for a description of the columns available in the V$LOGMNR_DICTIONARY view.
An example query against V$LOGMNR_DICTIONARY to obtain information about a dictionary created using the STORE_IN_FLAT_FILE option to DBMS_LOGMNR_D.BUILD may look like:
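A minimal sketch, selecting the source-database identification columns:

```sql
SELECT db_name, db_id
FROM   v$logmnr_dictionary;
```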
The V$LOGMNR_PARAMETERS operational view contains information about the current LogMiner session as specified in the call to the DBMS_LOGMNR.START_LOGMNR procedure.
See the Oracle Database Reference (release 11.2) for a description of the columns available in the V$LOGMNR_PARAMETERS view.
Use the V$LOGMNR_LOGS operational view to determine which redo log files are being analyzed in the current LogMiner session. This view contains one row for each redo log file that was loaded using the DBMS_LOGMNR.ADD_LOGFILE procedure.
See the Oracle Database Reference (release 11.2) for a description of the columns available in the V$LOGMNR_LOGS view.
An example query against V$LOGMNR_LOGS may look like:
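A minimal sketch listing each loaded log file and its SCN range:

```sql
SELECT log_id, filename, low_scn, next_scn
FROM   v$logmnr_logs;
```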
The examples included in this guide should have given you a glimpse of how LogMiner operates. Analyzing redo log files using Oracle LogMiner will commonly include the steps outlined in this section. Depending on the LogMiner configuration and the dictionary options being used, some of these steps are optional.
All LogMiner operations are performed using procedures contained in the DBMS_LOGMNR_D and DBMS_LOGMNR PL/SQL packages or through the Oracle LogMiner Viewer graphical user interface which is part of Oracle Enterprise Manager. This section describes how to analyze redo log files with LogMiner using the LogMiner PL/SQL packages.
Verify that at least minimal supplemental logging is enabled on the source database before generating redo log files that will be analyzed by LogMiner.
Specify a LogMiner dictionary on the source database. Use the DBMS_LOGMNR_D.BUILD procedure when extracting the dictionary to the redo log files or to a flat file. When using an online catalog, the dictionary option will be specified when you start the LogMiner process.
Identify the redo log files that contain the data of interest that will be analyzed by LogMiner.
Start a LogMiner session by explicitly adding the previously identified redo log files with the DBMS_LOGMNR.ADD_LOGFILE procedure or direct LogMiner to create a list of log files for analysis automatically when you start LogMiner using the DBMS_LOGMNR.START_LOGMNR procedure.
A LogMiner session begins with a call to the DBMS_LOGMNR.ADD_LOGFILE procedure (if you plan to explicitly specify log files) or the DBMS_LOGMNR.START_LOGMNR procedure (if you plan to use continuous mining).
Start LogMiner by calling the DBMS_LOGMNR.START_LOGMNR procedure which loads the dictionary that LogMiner will use to translate internal object identifiers to human-readable names. You can optionally filter the search to a specific date and time range using the startTime and endTime parameters or by SCN using the startScn and endScn parameters of the DBMS_LOGMNR.START_LOGMNR procedure.
The DBMS_LOGMNR.START_LOGMNR procedure must be called before reading the log entries from the V$LOGMNR_CONTENTS view (described next).
Query the results of the LogMiner analysis process through the V$LOGMNR_CONTENTS view.
A LogMiner session ends with a call to the DBMS_LOGMNR.END_LOGMNR procedure.
To see a collection of examples using LogMiner, consult the following tutorial:
While this guide provides an overview to help you become familiar with Oracle LogMiner, it is by no means a substitute for the official Oracle documentation (see list below). In addition to this guide, users should also consult the following Oracle documents to gain a full understanding of alternative configuration options and additional constraints with Oracle LogMiner. Oracle's official documentation site is docs.oracle.com.
Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website site at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.
Copyright (c) 1998-2018 Jeffrey M. Hunter. All rights reserved.
All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at email@example.com.
I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.
Last modified on
Friday, 19-Oct-2012 13:02:10 EDT