DBA Tips Archive for Oracle

  


Data Guard Configuration Example - (10g, Logical Standby)

by Jeff Hunter, Sr. Database Administrator


Introduction

Oracle Data Guard (known as Oracle Standby Database prior to Oracle9i), forms an extension to the Oracle RDBMS and provides organizations with high availability, data protection, and disaster recovery for enterprise databases. Oracle Data Guard provides the DBA with services for creating, maintaining, managing, and monitoring one or more standby databases. The functionality included with Oracle Data Guard enables enterprise data systems to survive both data corruption as well as major disasters.

This article provides instructions for creating and configuring a logical standby database from a primary database using Oracle Database 10g Release 2 (10.2) operating in maximum performance protection mode. It should be noted that several different methods exist to create a logical standby database configuration and that this is just one of those ways. The methods outlined in this guide present a simple approach that should be easy to implement in most situations. In fact, if you break down the essential tasks required to build a logical standby database, you will see that it is essentially nothing more than creating an initial physical standby database, building a dictionary in the redo data for LogMiner, verifying supplemental logging is enabled on both the primary and standby, converting the physical standby database to a logical standby database, putting the logical standby database in managed recovery mode (SQL Apply), and starting remote archiving from the primary database (Redo Transport). Obviously there are a number of smaller steps I am leaving out which will all be discussed in more depth throughout this guide.

All configuration parameters related to the Oracle instance and networking will be discussed as well as how to place the standby database in Managed Recovery Mode.

Introduction to Oracle Data Guard

The standby database feature of Oracle was first introduced with the release of Oracle 7 in the early 1990s. The design was fairly simple. Oracle used media recovery to apply archive logs to a remote standby database; however, none of the automation we now take for granted was present in this release of the product. DBAs were required to write custom scripts that shipped and applied archive logs to the remote standby database. It wasn't until Oracle8i that some form of automation was introduced that relied on Oracle Net Services to transfer and apply archive redo logs. DBAs were still required to supply scripts that handled gap resolution and resynchronized the primary and standby database when they lost connectivity with one another. Also included in Oracle8i was a set of pre-written scripts that simplified the switchover and failover process.

With the introduction of Oracle9i, the standby database feature was renamed to Oracle Data Guard. In addition to the re-branding of the product, Oracle delivered a comprehensive automated solution for disaster recovery that was fully integrated with the database kernel. Finally, a fully integrated disaster recovery solution without the need to maintain custom written scripts! Oracle9i also provided a vast array of new features which included automatic gap resolution, enhanced redo transport methods (synchronous and asynchronous redo transport), the ability to configure zero data loss, and the concept of protection modes.

Until Oracle9i Release 2, the only standby database type available was the physical standby database. A physical standby database is an identical, block-for-block copy of the primary database and is kept in sync with the primary using media recovery (also referred to as Redo Apply). Oracle introduced a new type of standby database with Oracle9i Release 2 named Logical Standby Database. This new type of standby database keeps in sync with the primary database using SQL Apply (versus Redo Apply used with a physical standby database). A logical standby database remains open for user access while logical records are being received and applied from the primary database which makes this a great candidate for a reporting database.

When the standby database site is hosted in a different geographical location than the primary site, it provides for an excellent High Availability (HA) solution. When creating a standby database configuration, the DBA should always attempt to keep the primary and standby database sites identical as well as keeping the physical location of the production database transparent to the end user. This allows for an easy role transition scenario for both planned and unplanned outages. When the secondary (standby) site is identical to the primary site, it allows predictable performance and response time after failing over (or switching over) from the primary site.

Oracle Database Enterprise Edition Requirement

Oracle Data Guard is only available as a bundled feature included with the Enterprise Edition release of the Oracle Database software. It is not available with Oracle Database Standard Edition. With the exception of performing a rolling database upgrade using a logical standby database, it is mandatory that the same release of Oracle Database Enterprise Edition be installed on the primary database and all standby databases!

While it remains possible to simulate a standby database environment running Oracle Database Standard Edition, it requires the DBA to develop custom scripts that manually transfer archived redo log files and then manually apply them to the standby database. This is similar to the methods used to maintain a standby database with Oracle 7. The consequence of this type of configuration is that it does not provide the ease-of-use, manageability, performance, and disaster-recovery capabilities available with Data Guard.
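To illustrate what such a configuration involves, the apply side of a manual Standard Edition "standby" is little more than mounted-database media recovery. The following is only a rough sketch (the host name and the scripts that ship the logs, detect gaps, and handle errors are left entirely to the DBA):


[oracle@standby ~]$ sqlplus / as sysdba

SQL> startup mount
SQL> -- After copying the latest archived redo logs from the primary
SQL> -- (e.g. with scp or rsync), apply them with media recovery.
SQL> -- AUTOMATIC locates the logs using the local archive destination
SQL> -- and log format settings without prompting for each file.
SQL> recover automatic standby database;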

Standby Database Types

There are two types of standby databases that can be created with Oracle Data Guard — physical or logical. Deciding which of the two types of standby databases to create is critical and depends on the nature of the business needs the organization is trying to satisfy.

A physical standby database is an identical, block-for-block copy of the primary database and is kept in sync with the primary using media recovery. As redo gets generated on the primary database, it gets transferred to the standby database where an RFS process receives the primary redo and applies the change vectors directly to the standby database. A physical standby database is an excellent choice for disaster recovery.

A logical standby database works in a different manner: it keeps in sync with the primary by transforming the redo data received from the primary database into logical SQL statements and then executing those SQL statements against the standby database. With a logical standby database, the standby remains open for user access in read/write mode while still receiving and applying logical records from the primary. While a physical standby database is an exact physical replica of the primary, a logical standby database is not. Because Oracle is applying SQL statements to the standby database and not performing media recovery (as is done with a physical standby database), it is possible for the logical standby database to contain the same logical data, but at the same time have a different physical structure. A logical standby database is an excellent solution for a reporting database while at the same time retaining the attributes of a disaster recovery solution. Not only does a logical standby database contain the same logical information as the primary, it can also support the creation of additional objects to support improved reporting requirements.

Data Protection Modes

After deciding between a physical or logical standby database, the next major decision is which data protection mode should be used to operate the Data Guard configuration. At the heart of this decision lies the answer to one important question — how much data loss is your organization willing to endure in the event of a failover? The obvious answer to expect from management is none. Configuring Data Guard with guaranteed no data loss, however, requires a significant investment in equipment and other resources necessary to provide support for this type of environment.

An Oracle Database 10g Data Guard configuration will always run in one of three data protection modes:

  Maximum Protection - guarantees zero data loss; redo must be written synchronously to at least one standby before a transaction can commit, and the primary shuts down if it cannot do so.
  Maximum Availability - provides zero data loss under normal operation, but availability of the primary takes precedence; if redo cannot be written synchronously to a standby, the primary continues processing until the standby is resynchronized.
  Maximum Performance - (the default) ships redo asynchronously so that primary database performance is not affected; a small amount of data may be lost in a failover.

Each of the three modes provides a high degree of data protection; however, they differ with regard to data availability and performance of the primary database.

When selecting a protection mode, always consider the one that best meets the needs of your business. Carefully take into account the need to protect the data against any loss vs. availability and performance expectations of the primary database. An in-depth discussion on the three available data protection modes and how redo transport works to support them is beyond the scope of this guide. To keep the article simple, I will be using the default protection mode of Maximum Performance.
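You can confirm which mode a database is actually running in by querying V$DATABASE. On the configuration built in this guide, both columns should report the default (the output below is what a default configuration would show):


SQL> select protection_mode, protection_level from v$database;

PROTECTION_MODE      PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE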

 

For a detailed discussion on the various Oracle Data Guard protection modes, see my article entitled 'Data Protection Modes'.

Logical Standby Database Enhancements in Oracle Database 10g

Logical standby database, first released with Oracle9i Release 2, has been enhanced in Oracle Database 10g Release 1 to allow rolling upgrades, improve overall ease-of-use and manageability, expand disaster recovery capabilities, and simplify the steps required to create a logical standby database.

Hardware and O/S Configuration

Let's take a look at the Oracle configuration and a summary of the parameters that will be used to create the logical standby database configuration described in this guide.

Primary Database

    Oracle Release: Oracle 10g Release 2 — (10.2.0.5)
    Host Name: vmlinux1.idevelopment.info — (192.168.1.160)
    Operating System: Red Hat Linux 5 — (CentOS 5.5)
    Database Name - (db_name): modesto
    Database Domain - (db_domain): idevelopment.info
    Oracle SID: modesto
    Database Unique Name - (db_unique_name): modesto
    TNS Alias: modesto.idevelopment.info
    Service Names: modesto.idevelopment.info, modesto
    Database Files - (db_create_file_dest): /u02/oradata
    Flash Recovery Area - (db_recovery_file_dest): /u03/flash_recovery_area
    Local Online Redo Log Files - (log_archive_dest_1): location=use_db_recovery_file_dest — (online_logfiles,all_roles)
    Remote Archive Destination - (log_archive_dest_2): service=turlock — (online_logfiles,primary_role)
    Redo Log Files Received From Primary - (log_archive_dest_3): location=/u04/oracle/oraarch/MODESTO — (standby_logfiles,standby_role)

Logical Standby Database

    Oracle Release: Oracle 10g Release 2 — (10.2.0.5)
    Host Name: vmlinux2.idevelopment.info — (192.168.1.162)
    Operating System: Red Hat Linux 5 — (CentOS 5.5)
    Database Name - (db_name): turlock
    Database Domain - (db_domain): idevelopment.info
    Oracle SID: turlock
    Database Unique Name - (db_unique_name): turlock
    TNS Alias: turlock.idevelopment.info
    Service Names: turlock.idevelopment.info, turlock
    Database Files - (db_create_file_dest): /u02/oradata
    Flash Recovery Area - (db_recovery_file_dest): /u03/flash_recovery_area
    Local Online Redo Log Files - (log_archive_dest_1): location=use_db_recovery_file_dest — (online_logfiles,all_roles)
    Remote Archive Destination - (log_archive_dest_2): service=modesto — (online_logfiles,primary_role)
    Redo Log Files Received From Primary - (log_archive_dest_3): location=/u04/oracle/oraarch/TURLOCK — (standby_logfiles,standby_role)

It is assumed that Oracle Database 10g Release 2 and all patchsets have been installed on both nodes in the Oracle Data Guard configuration. A separate guide on this site covers installing Oracle Database 10g R2 on the Red Hat Linux 5 platform.

Prerequisite Conditions for Creating a Logical Standby Database

Before implementing a logical standby database, you should first examine your application to determine whether logical standby can maintain all of its data types and tables. When a change is performed on the primary database to an object that is not supported, that change will not be propagated to the logical standby. When SQL Apply encounters unsupported data types, tables, sequences, or views sent from the primary database, it automatically skips them on the logical standby database. Moreover, no error message is returned to the user. This is why it is important to identify unsupported database objects in your application on the primary database before deciding to implement a logical standby database.

 

If you determine that the critical tables in your primary database will not be supported on a logical standby database, then you might want to consider using a physical standby database. Physical standby databases do not have any such data type restrictions.

Internal Schema Support

Some schemas that ship with the Oracle database are automatically skipped by SQL Apply. To determine exactly which schemas will be skipped, query the DBA_LOGSTDBY_SKIP view.


SQL> select owner from dba_logstdby_skip where statement_opt = 'INTERNAL SCHEMA' order by owner;

OWNER
------------------------------
ANONYMOUS
BI
CTXSYS
DBSNMP
DIP
DMSYS
EXFSYS
LBACSYS
MDDATA
MDSYS
MGMT_VIEW
OLAPSYS
ORACLE_OCM
ORDPLUGINS
ORDSYS
OUTLN
SI_INFORMTN_SCHEMA
SYS
SYSMAN
SYSTEM
WKPROXY
WKSYS
WK_TEST
WMSYS
XDB

25 rows selected.

Determine Support for Data Types and Storage Attributes for Tables

When setting up a logical standby database, you must ensure the logical standby database can maintain the data types and tables in your primary database. Oracle Database 10g provides a simple method to determine exactly which database objects are supported and which are not supported. Run the following query on the primary database to list all tables that contain unsupported data types:


SQL> select distinct owner, table_name
  2  from dba_logstdby_unsupported
  3  order by owner, table_name;

OWNER                          TABLE_NAME
------------------------------ ------------------------------
IX                             AQ$_ORDERS_QUEUETABLE_G
IX                             AQ$_ORDERS_QUEUETABLE_H
IX                             AQ$_ORDERS_QUEUETABLE_I
IX                             AQ$_ORDERS_QUEUETABLE_S
IX                             AQ$_ORDERS_QUEUETABLE_T
IX                             AQ$_STREAMS_QUEUE_TABLE_C
IX                             AQ$_STREAMS_QUEUE_TABLE_G
IX                             AQ$_STREAMS_QUEUE_TABLE_H
IX                             AQ$_STREAMS_QUEUE_TABLE_I
IX                             AQ$_STREAMS_QUEUE_TABLE_S
IX                             AQ$_STREAMS_QUEUE_TABLE_T
IX                             ORDERS_QUEUETABLE
IX                             STREAMS_QUEUE_TABLE
OE                             CATEGORIES_TAB
OE                             CUSTOMERS
OE                             WAREHOUSES
PM                             ONLINE_MEDIA
PM                             PRINT_MEDIA
SCOTT                          EMPLOYEE
SCOTT                          PARTS
SH                             MVIEW$_EXCEPTIONS

21 rows selected.

Any tables returned from the above query are considered unsupported by logical standby database for one of several reasons which include unsupported data types, use of compression, or belonging to a default Oracle schema that is not maintained. To determine which column/data type is unsupported, the above query can be modified to include the column information for a specified table along with why it is unsupported. The ATTRIBUTES column of DBA_LOGSTDBY_UNSUPPORTED displays the reason why the table is not supported by SQL Apply. If the structure of the table is unsupported (not just the data type of some columns), this view displays all columns in the table, with one of the following values displayed for each column in the table:

IOT with Overflow
IOT with LOB
Mapping table for physical rowid of IOT
IOT with row movement
Table Compression
Object Table
Table with encrypted column

If the structure of the table is supported but certain columns in the table have unsupported data types, the ATTRIBUTES column will be NULL.


SQL> select owner, table_name, column_name, data_type, attributes
  2  from dba_logstdby_unsupported
  3  where owner = 'SCOTT'
  4  order by owner, table_name, column_name;

OWNER      TABLE_NAME   COLUMN_NAME  DATA_TYPE  ATTRIBUTES
---------- ------------ ------------ ---------- ------------------
SCOTT      EMPLOYEE     EMPLOYEE     PERSON_VA
SCOTT      PARTS        PART_ID      NUMBER     Table Compression
SCOTT      PARTS        PART_NAME    VARCHAR2   Table Compression

The following lists the various database objects, storage types, and PL/SQL supplied packages that are supported and unsupported by logical standby databases with Oracle Database 10g Release 2.

Data Type Considerations

Storage Type Considerations

PL/SQL Supplied Packages Considerations

Skipped SQL Statements on a Logical Standby Database

By default, the following SQL statements are automatically skipped by SQL Apply:


ALTER DATABASE
ALTER MATERIALIZED VIEW
ALTER MATERIALIZED VIEW LOG
ALTER SESSION
ALTER SYSTEM
CREATE CONTROL FILE
CREATE DATABASE
CREATE DATABASE LINK
CREATE PFILE FROM SPFILE
CREATE MATERIALIZED VIEW
CREATE MATERIALIZED VIEW LOG
CREATE SCHEMA AUTHORIZATION
CREATE SPFILE FROM PFILE
DROP DATABASE LINK
DROP MATERIALIZED VIEW
DROP MATERIALIZED VIEW LOG
EXPLAIN
LOCK TABLE
SET CONSTRAINTS
SET ROLE
SET TRANSACTION

All other SQL statements executed on the primary database are applied to the logical standby database.

Ensure Table Rows in the Primary Database Can Be Uniquely Identified

When a row gets updated on the primary database, a redo record is created that can uniquely identify the exact row that was changed. If a table does not contain a primary key or non-null unique key index, the ROWID can be used to locate the row. Using a ROWID is an extremely fast and efficient way to uniquely locate a row on the primary database, however, it does not work in the case of a logical standby database. Even though the logical standby database is created from a backup copy of the primary database, the physical organization in a logical standby database is different from that of the primary database. Thus, ROWIDs contained in the redo records generated by the primary database cannot be used to identify the corresponding row in the logical standby database. To resolve this issue and to improve overall performance, add a primary key or non-null unique key index to as many of your application tables as possible. Doing so will allow SQL Apply to efficiently apply updates to those tables on the logical standby database.

Oracle uses primary-key or unique-constraint/index supplemental logging to logically identify a modified row in the logical standby database. When database-wide primary-key and unique-constraint/index supplemental logging is enabled, each UPDATE statement also writes the column values necessary in the redo log to uniquely identify the modified row in the logical standby database.

Oracle recommends that you add a primary key or a non-null unique index to tables in the primary database, whenever possible, to ensure that SQL Apply can efficiently apply redo data updates to the logical standby database.

Perform the following steps to ensure SQL Apply can uniquely identify rows of each table being replicated in the logical standby database.

Step 1 — Find tables without a unique logical identifier in the primary database

Query the DBA_LOGSTDBY_NOT_UNIQUE view to display a list of tables that SQL Apply may not be able to uniquely identify. For example:


SQL> select owner, table_name
  2  from dba_logstdby_not_unique
  3  where (owner, table_name) not in (
  4    select distinct owner, table_name
  5    from dba_logstdby_unsupported)
  6  order by owner, table_name;

OWNER          TABLE_NAME
-------------- ------------------------------
SCOTT          LOCATIONS
SH             COSTS
SH             SALES
SH             SUPPLEMENTARY_DEMOGRAPHICS
TSMSYS         SRS$

Note that tables returned in the above query can still be supported by SQL Apply because supplemental logging (when enabled) will place additional information in the redo stream that will be used by SQL Apply to assist in uniquely identifying which rows to modify on the logical standby. The supplemental logging information ensures each update contains enough information to logically identify each row that is modified by the statement.

Step 2 — Add a disabled primary-key RELY constraint

If your application ensures the rows in a table are unique, you can create a disabled primary key RELY constraint on the table. This avoids the overhead of maintaining a primary key on the primary database.

To create a disabled RELY constraint on a primary database table, use the ALTER TABLE statement with a RELY DISABLE clause. The following example creates a disabled RELY constraint on a table named MYTAB, for which rows can be uniquely identified using the ID and NAME columns:


SQL> ALTER TABLE mytab ADD PRIMARY KEY (id, name) RELY DISABLE;

When you specify the RELY constraint, the system will assume that rows are unique. Because you are telling the system to rely on the information, but are not validating it on every modification done to the table, you must be careful to select columns for the disabled RELY constraint that will uniquely identify each row in the table. If such uniqueness is not present, then SQL Apply will not correctly maintain the table.

To improve the performance of SQL Apply, add a unique-constraint/index to the columns to identify the row on the logical standby database. Failure to do so results in full table scans during UPDATE or DELETE statements carried out on the table by SQL Apply.
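For example, assuming the MYTAB table from the RELY example above, such an index can be created directly on the logical standby by temporarily lifting the database guard for the current session (the index name mytab_uk is arbitrary; the guard itself is discussed in the Managing a Logical Standby Database section):


SQL> alter session disable guard;

Session altered.

SQL> create unique index mytab_uk on mytab (id, name);

Index created.

SQL> alter session enable guard;

Session altered.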

Create the Logical Standby Database

After verifying that the logical standby database can be used to support your given application, use the steps outlined in this section to create the logical standby.

  1. Create a Physical Standby Database

    The first step in creating a logical standby using Oracle Database 10g or higher is to create an initial physical standby database. This physical database will be converted to a logical standby using the steps outlined in this section. Once the physical standby is created, start the Redo Apply process and allow managed recovery (Redo Apply) to bring the standby database up to a consistent state with the primary database.

    A companion article on this site walks through creating the initial physical standby database and enabling managed recovery mode.

    The remaining steps in this section will document how to convert the new physical standby database to a logical standby database.

  2. Stop Redo Apply on the Physical Standby Database

    It is possible to have Redo Apply running on the new physical standby database for any length of time before converting it to a logical standby database. However, before converting to a logical standby database, stop Redo Apply on the physical standby database. Stopping Redo Apply is necessary to avoid applying changes past the redo that contains the LogMiner dictionary (described in the next section).

    To stop Redo Apply, issue the following statement on the physical standby database. If the database is a RAC database comprised of multiple instances, then you must first stop all RAC instances except one before issuing this statement:


    SQL> alter database recover managed standby database cancel;

    Database altered.

  3. Build a Dictionary in the Redo Data

    A LogMiner dictionary must be built into the redo data so that the LogMiner component of SQL Apply can properly interpret changes it sees in the redo.

    To build the LogMiner dictionary, issue the following statement from the primary database:


    SQL> execute dbms_logstdby.build;

    PL/SQL procedure successfully completed.

    The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete. Long-running transactions executed on the primary database will affect the timeliness of this command.

    The DBMS_LOGSTDBY.BUILD procedure uses Flashback Query to obtain a consistent snapshot of the data dictionary that is then logged in the redo stream. Oracle recommends setting the UNDO_RETENTION initialization parameter to 3600 on both the primary and logical standby databases.
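    For example, to honor that recommendation on either database (the value of 3600 seconds is Oracle's suggested starting point, not a hard requirement):


    SQL> alter system set undo_retention=3600 scope=both;

    System altered.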

    As part of building the LogMiner Multiversioned Data Dictionary, supplemental logging is automatically set up to log primary key and unique-constraint/index columns (see next section).

  4. Verify Supplemental Logging is Enabled on the Primary and Standby

    Supplemental logging must be enabled on both the primary and standby databases in order for a logical standby to function and support role transitions. When building the LogMiner Multiversioned Data Dictionary in the previous section, supplemental logging was automatically set up to log primary key and unique-constraint/index columns on the primary database. In order to support role transition, supplemental logging should be enabled on both the primary and new physical standby database.

    When supplemental logging is enabled, additional information is placed in the redo stream that will be used by SQL Apply to assist in uniquely identifying which rows to modify on the logical standby. The supplemental logging information ensures each update contains enough information to logically identify each row that is modified by the statement.

    Run the following query on both the primary and standby database to determine if supplemental logging is enabled:

    Primary Database

    Supplemental logging should already be enabled on the primary database as a result of building the LogMiner Multiversioned Data Dictionary in the previous section.


    SQL> select
      2    supplemental_log_data_pk as "PK Logging"
      3  , supplemental_log_data_ui as "UI Logging"
      4  from v$database;

    PK Logging  UI Logging
    ----------- -----------
    YES         YES

    Standby Database

    In order to support role transition, supplemental logging should be enabled on the new physical standby database.


    SQL> select
      2    supplemental_log_data_pk as "PK Logging"
      3  , supplemental_log_data_ui as "UI Logging"
      4  from v$database;

    PK Logging  UI Logging
    ----------- -----------
    NO          NO

    If either column in the above query reports NO, supplemental logging must be enabled by issuing the following statement on both the primary and standby database:


    SQL> alter database add supplemental log data (primary key, unique index) columns;

    Database altered.

    After executing the statement above, re-query V$DATABASE to ensure that SUPPLEMENTAL_LOG_DATA_PK and SUPPLEMENTAL_LOG_DATA_UI report a value of YES.


    SQL> select
      2    supplemental_log_data_pk as "PK Logging"
      3  , supplemental_log_data_ui as "UI Logging"
      4  from v$database;

    PK Logging  UI Logging
    ----------- -----------
    YES         YES

  5. Convert Physical Standby Database to a Logical Standby Database

    The following sections describe how to prepare the physical standby database to transition to a logical standby database.

    At this point, the redo logs contain all of the information necessary to convert the physical standby database to a logical standby database. To continue applying redo data to the physical standby database until it is ready to convert to a logical standby database, issue the following SQL statement on the standby:


    ALTER DATABASE RECOVER TO LOGICAL STANDBY db_name;

    For db_name, specify a database name to uniquely identify the new logical standby database. For the purpose of this guide, I will use the obvious choice of naming the database the same as the instance name (turlock). For example:


    SQL> alter database recover to logical standby turlock;

    Database altered.

    The SQL statement above runs the nid (DBNEWID) command-line utility behind the scenes to change the name of the logical standby database to the new name (DB_NAME). If you are using a server parameter file (spfile) at the time you issue this statement, then the database will update the file with appropriate information about the new logical standby database. If you are not using an spfile, then the database issues a message reminding you to set the DB_NAME parameter after shutting down the database. The following is an excerpt from the alert.log file of the new logical standby database after running the ALTER DATABASE statement, showing the tracing information from the nid command:


    *** DBNEWID utility started ***
    DBID will be changed from 2026707242 to new DBID of 1406699693 for database MODESTO
    DBNAME will be changed from MODESTO to new DBNAME of TURLOCK
    Starting datafile conversion
    kcv_lh_or_upgrade: 10.2 upgrading 1 incarnations
    Setting recovery target incarnation to 1
    Datafile conversion complete
    Failed to find temporary file: /u02/oradata/TURLOCK/datafile/o1_mf_temp_%u_.tmp
    Database name changed to TURLOCK.
    Modify parameter file and generate a new password file before restarting.
    Database ID for database TURLOCK changed to 1406699693.
    All previous backups and archived redo logs for this database are unusable.
    Database has been shutdown, open with RESETLOGS option.
    Succesfully changed database name and ID.
    *** DBNEWID utility finished succesfully ***

    The ALTER DATABASE statement above waits, applying redo data to the standby, until the LogMiner dictionary is found in the log files. This may take several minutes, depending on how long it takes for the redo generated in the previous section to be transmitted to the standby database and how much redo data needs to be applied. I have always been uncertain why the Oracle documentation has us cancel managed recovery before building the LogMiner dictionary, but I've never had the ALTER DATABASE statement hang while waiting for the redo log containing the LogMiner dictionary to arrive at the standby, so I don't give it much thought anymore. If the ALTER DATABASE statement does hang while waiting for the archived redo log containing the LogMiner dictionary, you may have to manually register the missing logs on the standby using:


    SQL> alter database register logfile '<name_of_log_file>';

    Continue registering any missing logs on the standby until you receive the error "ORA-01507: database not mounted". This indicates that the standby has received and applied the LogMiner dictionary and the ALTER DATABASE statement that was waiting should now be complete.

    If a dictionary build was not successfully performed on the primary database, this command will never complete. You can cancel the SQL statement by issuing the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL statement from another SQL session.

  6. Create a New Password File for Logical Standby

    Because the conversion process changes the database name for the logical standby database (that was originally set with the DB_NAME initialization parameter), you must re-create the password file.


    [oracle@vmlinux2 ~]$ cd $ORACLE_HOME/dbs
    [oracle@vmlinux2 dbs]$ rm orapwturlock
    [oracle@vmlinux2 dbs]$ orapwd file=orapwturlock password=MySysPassword

  7. Adjust Initialization Parameters for the Logical Standby Database

    During the creation of the physical standby database, several initialization parameters were configured on both the primary and standby database for their role in the Data Guard configuration. The Data Guard configuration described in this guide will not be maintaining support for an existing physical standby database. Only the new logical standby database will be maintained and therefore will only require minimal changes to the initialization parameters on the primary and standby. In fact, the only initialization parameters that you need to be concerned about are the log archive destination and DB_FILE_NAME_CONVERT parameters.

    Log Archive Destination Parameters

    You may need to modify the LOG_ARCHIVE_DEST_n parameters because, unlike physical standby databases, logical standby databases are open databases that generate redo data and have multiple log files (online redo log files, archived redo log files, and standby redo log files). It is good practice to specify separate local destinations for:

      archived redo log files that store redo data generated by the logical standby database itself (LOG_ARCHIVE_DEST_1 in the example below)
      archived redo log files that store redo data received from the primary database (LOG_ARCHIVE_DEST_3 in the example below)

    Modify the log archive destination initialization parameters for the logical standby database as follows:


    # ---[ Log Archive Destination Parameters (Logical Standby) ] --- #

    LOG_ARCHIVE_DEST_1=
      'location=use_db_recovery_file_dest
       valid_for=(online_logfiles,all_roles)
       db_unique_name=turlock'
    LOG_ARCHIVE_DEST_2=
      'service=modesto.idevelopment.info
       valid_for=(online_logfiles,primary_role)
       db_unique_name=modesto'
    LOG_ARCHIVE_DEST_3=
      'location=/u04/oracle/oraarch/TURLOCK
       valid_for=(standby_logfiles,standby_role)
       db_unique_name=turlock'
    LOG_ARCHIVE_DEST_STATE_1='enable'
    LOG_ARCHIVE_DEST_STATE_2='enable'
    LOG_ARCHIVE_DEST_STATE_3='enable'

     

    To configure the local archive destination to use the Flash Recovery Area, you must explicitly define the destination parameter (i.e. LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST') and include the VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) option. See the Managing a Logical Standby Database section for more information.

    The following table describes the archival processing defined by the initialization parameters shown in this example for the new turlock logical standby database.

    LOG_ARCHIVE_DEST_1
      When turlock is running in the primary role: directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in the Flash Recovery Area (USE_DB_RECOVERY_FILE_DEST).
      When turlock is running in the logical standby role: directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in the Flash Recovery Area (USE_DB_RECOVERY_FILE_DEST).

    LOG_ARCHIVE_DEST_2
      When turlock is running in the primary role: directs transmission of redo data to the remote logical standby database modesto.
      When turlock is running in the logical standby role: is ignored; LOG_ARCHIVE_DEST_2 is valid only when turlock is running in the primary role.

    LOG_ARCHIVE_DEST_3
      When turlock is running in the primary role: is ignored; LOG_ARCHIVE_DEST_3 is valid only when turlock is running in the standby role.
      When turlock is running in the logical standby role: directs archival of redo data received from the primary database to the local archived redo log files in /u04/oracle/oraarch/TURLOCK.

    To support role transition, modify the log archive destination initialization parameters for the primary database as well:


    # ---[ Log Archive Destination Parameters (Primary) ] --- #

    LOG_ARCHIVE_DEST_1=
      'location=use_db_recovery_file_dest
       valid_for=(online_logfiles,all_roles)
       db_unique_name=modesto'
    LOG_ARCHIVE_DEST_2=
      'service=turlock.idevelopment.info
       valid_for=(online_logfiles,primary_role)
       db_unique_name=turlock'
    LOG_ARCHIVE_DEST_3=
      'location=/u04/oracle/oraarch/MODESTO
       valid_for=(standby_logfiles,standby_role)
       db_unique_name=modesto'
    LOG_ARCHIVE_DEST_STATE_1='enable'
    LOG_ARCHIVE_DEST_STATE_2='enable'
    LOG_ARCHIVE_DEST_STATE_3='enable'

    The DB_FILE_NAME_CONVERT Initialization Parameter

    The DB_FILE_NAME_CONVERT initialization parameter is not honored once a physical standby database is converted to a logical standby database. This can be a problem, for example, when adding a non-OMF datafile to the primary database and the datafile paths are different between the primary and standby.

    If necessary, you should register a skip handler and provide SQL Apply with a replacement DDL string to execute by converting the path names of the primary database datafiles to the standby datafile path names.

    See the Managing a Logical Standby Database section for more information on how to set up a skip handler for DDL statements.

  8. Mount the Logical Standby Database

    On the logical standby database, shut down the instance and issue the STARTUP MOUNT statement to start and mount the database. Do not open the database; it should remain closed to user access until later in the creation process.


    SQL> shutdown;
    SQL> startup mount;

  9. Open the Logical Standby Database

    The new standby database is logically the same as the primary database, but it is transactionally inconsistent with the primary database, and thus incompatible for recovery operations.

    To open the new logical standby database, you must open it with the RESETLOGS option by issuing the following statement:


    SQL> alter database open resetlogs;

    Because this is the first time the database is being opened, the database's global name is adjusted automatically to match the new DB_NAME initialization parameter.


    SQL> select * from global_name;

    GLOBAL_NAME
    ---------------------------
    TURLOCK.IDEVELOPMENT.INFO

  10. Start SQL Apply

    Issue the following statement to start real-time apply on the logical standby database to immediately apply redo data from the standby redo log files to the logical standby database:


    SQL> alter database start logical standby apply immediate;

    After starting SQL Apply for the first time, certain actions are performed to prepare the LogMiner dictionary that will be used for all future operations. The time it takes for this one-time operation to complete will vary depending on the size of the data dictionary and the capacity of the standby machine.

Start Remote Archiving

With the SQL Apply engine started on the logical standby database, the next and final phase is to start shipping redo data to the standby from the primary.

If the remote archive destination on the primary database (the destination defined to transfer redo data to the standby database) is set to DEFER, enable redo transport services by issuing the following statement:


SQL> alter system set log_archive_dest_state_2=enable scope=both;

System altered.

With the protection mode used in this guide (maximum performance), archiving of redo data to the remote standby does not occur until after a log switch. By default, a log switch occurs when an online redo log becomes full which means the standby database does not get updated until then.

To force the current redo logs to be archived immediately, use the following statement on the primary database:


SQL> alter system archive log current;

System altered.

At this point, the standby database will continue to apply changes from archive redo logs being transferred from the primary.

Verifying the Logical Standby Database

With the standby and primary databases now in operation, the next step is to verify the Data Guard configuration. This will ensure that Redo Transport on the primary and SQL Apply on the logical standby are working correctly.

Given this Data Guard configuration is running in maximum performance mode, the validation tasks will involve switching redo log files from the primary and verifying those log files are being shipped and applied to the logical standby database.

Redo Transport

From the primary database, perform a log switch and then verify the transmission of the archived redo log file was successful:


SQL> alter system switch logfile;

System altered.

SQL> select status, error from v$archive_dest where dest_id = 2;

STATUS    ERROR
--------- ---------------------------------------------------------
VALID

If the transmission was successful, the status of the destination will be VALID as shown above. If for any reason the transmission was unsuccessful, the status will be INVALID and the full text of the error message will be populated in the ERROR column which can be used to investigate and correct the issue.

SQL Apply Progress

To verify SQL Apply, identify the current registered log for the logical standby database, archive a log or two from the primary, and then check the standby database again. This test will ensure that redo data was shipped from the primary and then successfully received, archived, and applied to the standby.

First, identify the current registered log for the logical standby database:


[oracle@vmlinux2 ~]$ sqlplus sys/MySysPassword@turlock as sysdba

SQL> alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

Session altered.

SQL> select sequence#, first_time, next_time, applied from dba_logstdby_log order by sequence#;

 SEQUENCE# FIRST_TIME           NEXT_TIME            APPLIED
---------- -------------------- -------------------- --------
      3958 17-JAN-2011 19:24:50 17-JAN-2011 19:28:08 CURRENT

From the primary database, archive the current log using the following SQL statement:


SQL> alter system archive log current;

System altered.

Go back to the standby database and re-query the DBA_LOGSTDBY_LOG view to verify redo data was shipped, received, and applied:


SQL> select sequence#, first_time, next_time, applied from dba_logstdby_log order by sequence#;

 SEQUENCE# FIRST_TIME           NEXT_TIME            APPLIED
---------- -------------------- -------------------- --------
      3959 17-JAN-2011 19:28:08 17-JAN-2011 19:36:25 CURRENT

Another good method used to determine the progress of SQL Apply is to query the DBA_LOGSTDBY_PROGRESS view.


SQL> select applied_scn, newest_scn from dba_logstdby_progress;

APPLIED_SCN NEWEST_SCN
----------- ----------
    2166247    2166247

The APPLIED_SCN column lists the highest SCN that has been applied to the logical standby database while the NEWEST_SCN column lists the highest SCN that has been received from the primary. When these two SCN values are equal, we know that all available redo that has arrived from the primary has been applied to the standby. If the APPLIED_SCN is below the NEWEST_SCN and is increasing, SQL Apply is currently applying changes. The READ_SCN column (not shown above), reports the SCN that the SQL Apply engine will start at, should it be restarted.
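For example, to watch all three SCN values together (READ_SCN is part of the same DBA_LOGSTDBY_PROGRESS view; output omitted here):


SQL> select applied_scn, read_scn, newest_scn from dba_logstdby_progress;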

Monitoring the alert.log of the Standby Database

Querying the DBA_LOGSTDBY_LOG view from the standby database is a good way to ensure Redo Transport and SQL Apply are doing their jobs correctly. In addition, I also like to tail the alert.log file of the standby database as a double check.

From the standby database, perform a tail -f against the alert.log while issuing the "alter system archive log current" statement from the primary:


[oracle@vmlinux2 ~]$ cd /u01/app/oracle/admin/turlock/bdump
[oracle@vmlinux2 bdump]$ tail -f alert_turlock.log

Mon Jan 17 19:28:54 EST 2011
RFS[2]: Successfully opened standby log 4: '/u02/oradata/TURLOCK/onlinelog/o1_mf_4_6lt4dty1_.log'
Mon Jan 17 19:28:54 EST 2011
RFS LogMiner: Client enabled and ready for notification
Mon Jan 17 19:28:54 EST 2011
LOGMINER: Begin mining logfile for session 1 thread 1 sequence 3958, /u02/oradata/TURLOCK/onlinelog/o1_mf_4_6lt4dty1_.log
Mon Jan 17 19:28:54 EST 2011
LOGMINER: End mining logfile: /u02/oradata/TURLOCK/onlinelog/o1_mf_4_6lt4dty1_.log
Mon Jan 17 19:28:54 EST 2011
RFS LogMiner: Registered logfile [/u04/oracle/oraarch/TURLOCK/1_3958_736458474.dbf] to LogMiner session id [1]
Mon Jan 17 19:28:54 EST 2011
LOGMINER: Log Auto Delete - deleting: /u04/oracle/oraarch/TURLOCK/1_3957_736458474.dbf
Deleted file /u04/oracle/oraarch/TURLOCK/1_3957_736458474.dbf

Verify SQL Apply Engine

To view the SQL Apply engine on the logical standby and the status of each process associated with SQL Apply, use the V$LOGSTDBY view. For example, the following query will show the name of the SQL Apply process, the SCN it is currently working on, and its current status:


SQL> select type, high_scn, status, pid
  2  from v$logstdby
  3  order by type;

TYPE          HIGH_SCN   STATUS                                                     PID
------------- ---------- ---------------------------------------------------------- --------
ANALYZER      2165785    ORA-16117: processing                                      29087
APPLIER       2164857    ORA-16116: no work available                               29089
APPLIER       2111208    ORA-16116: no work available                               29095
APPLIER       2111204    ORA-16116: no work available                               29093
APPLIER       2164689    ORA-16116: no work available                               29091
APPLIER                  ORA-16116: no work available                               29097
BUILDER       2165786    ORA-16116: no work available                               29083
COORDINATOR   2165848    ORA-16116: no work available                               29143
PREPARER      2165785    ORA-16116: no work available                               29085
READER        2165848    ORA-16240: Waiting for logfile (thread# 1, sequence# 3950) 29081

It is useful to track the value of the HIGH_SCN column of V$LOGSTDBY view. As long as this value is increasing on subsequent queries of the view, it is an indicator that progress is being made and that changes are being applied by the SQL Apply engine.

When any of the SQL Apply processes shows a status of ORA-16116 (no work available), it is idle and waiting on information from the primary database.

Another useful resource for gathering information about the SQL Apply engine is the V$LOGSTDBY_STATS view, which provides state and status information. Oracle Data Guard provides the DBMS_LOGSTDBY.APPLY_SET procedure which allows the DBA to set the values of various parameters that configure and maintain SQL Apply. All of these options and their values can be gathered from the V$LOGSTDBY_STATS view (both default and set values).


SQL> select * from v$logstdby_stats order by name;

NAME                             VALUE
-------------------------------- ----------
CTAS txns delivered              84
DDL txns delivered               433
DML txns delivered               1667
LCRs delivered to client         69484
Log Miner session ID             1
Recursive txns delivered         16394
Rolled back txns seen            29
apply delay                      0
bytes checkpointed               4050284
bytes of redo processed          178564904
bytes paged out                  0
bytes rolled back                0
coordinator state                IDLE
coordinator uptime               123418
maximum SGA for LCR cache        30
maximum events recorded          100
number of appliers               5
number of preparers              1
parallel servers in use          9
preserve commit order            TRUE
realtime logmining               Y
record applied DDL               N
record skip DDL                  Y
record skip errors               Y
record unsupported operations    N
seconds spent in checkpoint      0
seconds spent in pageout         0
seconds spent in rollback        0
seconds system is idle           119103
transaction consistency          FULL
transactions applied             246
transactions ready               246
txns delivered to client         18494

33 rows selected.

You can also use the V$LOGSTDBY_STATS view to count the number of transactions applied or transactions read by SQL Apply to determine if transactions are being applied as fast as they are being read.
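As a sketch of how DBMS_LOGSTDBY.APPLY_SET is used to tune SQL Apply (the APPLY_SERVERS value of 10 below is arbitrary; note that most SQL Apply parameters can only be changed while SQL Apply is stopped):


SQL> alter database stop logical standby apply;

Database altered.

SQL> execute dbms_logstdby.apply_set('APPLY_SERVERS', 10);

PL/SQL procedure successfully completed.

SQL> alter database start logical standby apply immediate;

Database altered.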

Post-Creation Steps

At this point, the logical standby database is running and can provide the maximum performance level of data protection (the default). The remainder of this article describes additional tasks for managing and verifying the new logical standby database.

Managing a Logical Standby Database

SQL Apply Architecture

As you have seen in this article, redo transport (the method of delivering change data to the standby) is pretty much the same for a physical and a logical standby database. The same, however, cannot be said about the method used to apply changes to the standby database, as the two are very different from one another. A logical standby applies changes by reading redo from either archived redo logs or standby redo logs and converting that redo into SQL statements. These SQL statements are then applied to the database tables on the standby database while it is open in read/write mode. This section examines the six key processes that make up the SQL Apply engine.

SQL Apply uses a collection of parallel execution servers and background processes to apply changes from the primary database to the logical standby database. The purpose of these background processes is to read the redo, transform it into logical change records (LCRs), construct transactions from those records, and apply the transactions to the database tables.

Figure 1: Overview of the SQL Apply Architecture

The different processes involved and their functions during log mining and apply processing are as follows.

During Log Mining:

  READER - reads redo records from the archived redo log files (or from the standby redo log files when real-time apply is enabled).
  PREPARER - converts the block changes contained in the redo records into logical change records (LCRs); multiple PREPARER processes can be active for a given redo log file.
  BUILDER - groups the LCRs into complete transactions.

During Apply Processing:

  ANALYZER - examines the assembled transactions, identifying dependencies between the different transactions.
  COORDINATOR (LSP) - assigns transactions to the APPLIER processes and monitors dependencies between transactions.
  APPLIER - applies the transactions to the database tables on the logical standby under the direction of the COORDINATOR.

You can query the V$LOGSTDBY_PROCESS view to examine the activity of the SQL Apply processes. Another view that provides information about current activity is the V$LOGSTDBY_STATS view that displays statistics, current state, and status information for the logical standby database during SQL Apply activities.
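For example, a quick health check of the individual SQL Apply processes through V$LOGSTDBY_PROCESS might look like the following (the column list is based on the 10.2 definition of the view; output omitted):


SQL> select type, high_scn, status from v$logstdby_process order by type;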

Starting and Stopping SQL Apply

The first step in starting SQL Apply is to bring the logical standby database to an open state. After the standby database is open, issue the following statement as SYSDBA to begin applying redo data to the logical standby database:


SQL> alter database start logical standby apply;

After running the above statement, Oracle will spawn all six SQL Apply processes and will read redo data from the archived redo logs as they are registered with the logical standby database.

By default, Data Guard waits for the full archived redo log file to arrive on the standby database before applying it to the standby database. However, if you have configured standby redo log files on the standby database, you can optionally enable real-time apply. With real-time apply enabled, SQL Apply applies redo data from standby redo log files at the same time the log files are being written to, as opposed to applying from an archived redo log file after a log switch occurs. Immediately applying standby redo log files in this manner keeps the logical standby database closely caught up with the primary database, without requiring the standby redo log files to be archived at the standby site. This can result in quicker switchovers and failovers.

To start real-time apply on the logical standby database, include the IMMEDIATE keyword as shown in the following statement:


SQL> alter database start logical standby apply immediate;

Before shutting down the logical standby database or before attempting to change attributes of the SQL Apply engine, you will need to first stop logical apply. Use the following statement to stop SQL Apply on the logical standby database:


SQL> alter database stop logical standby apply;

Automatic Deletion of Log Files

By default, SQL Apply automatically deletes archived redo log files from the file system and the data dictionary of the logical standby when they are no longer needed.

This behavior can be overridden by executing the following PL/SQL procedure on the logical standby:


SQL> execute dbms_logstdby.apply_set('LOG_AUTO_DELETE', 'FALSE');

PL/SQL procedure successfully completed.

 

By default, SQL Apply will delete archived redo log files that it no longer needs. If you flash back the logical standby database, it may bring the logical standby database to a state where an archived redo log file is present in SQL Apply metadata (as reflected in the DBA_LOGSTDBY_LOG view) but absent from the file system. An attempt to restart SQL Apply following a Flashback Database operation may then fail with the following error in the alert log:

Errors in file /u01/app/oracle/admin/turlock/bdump/turlock_lsp0_28465.trc:
ORA-00308: cannot open archived log '/u04/oracle/oraarch/TURLOCK/1_5197_736458474.dbf'
ORA-27037: unable to obtain file status

To recover from this situation, restore the archived redo log files that were removed by the automatic deletion policy to the appropriate directory and restart SQL Apply.

Although SQL Apply automatically deletes archived redo log files when they are no longer needed on the logical standby database, there may be times when you want to manually remove them (for example, to reclaim disk space).

If you are overriding the default automatic log deletion capability, perform the following steps to identify and delete archived redo log files that are no longer needed by SQL Apply:

  1. To purge the logical standby session of metadata that is no longer needed, enter the following PL/SQL statement:


    SQL> execute dbms_logstdby.purge_session;

    PL/SQL procedure successfully completed.

    This statement also updates the DBA_LOGMNR_PURGED_LOG view that displays the archived redo log files that are no longer needed.

  2. Query the DBA_LOGMNR_PURGED_LOG view to list the archived redo log files that can be removed:


    SQL> select * from dba_logmnr_purged_log;

    FILE_NAME
    -------------------------------------------------
    /u04/oracle/oraarch/TURLOCK/1_5198_736458474.dbf
    /u04/oracle/oraarch/TURLOCK/1_5199_736458474.dbf

  3. Use an operating system-specific command to delete the archived redo log files listed by the query above.

Protecting Replicated Tables on a Logical Standby from User Modifications

A common question after building a logical standby database is, "Now that the logical standby database is open for read/write user access and tables are being replicated from the primary, what is preventing users from modifying those replicated tables?" When working with a physical standby database, we never had to worry about this type of scenario. For example, if a user mistakenly issues a DML or DDL statement against it, the statement will fail because the physical standby database is either mounted or open in read-only mode. A logical standby database, however, is open for read/write access. To address the concern of protecting replicated tables in a logical standby database, Oracle Data Guard provides the GUARD feature.

A database GUARD can be configured with one of the following three values:

  ALL - prevents all users other than SYS from making changes to any data in the database.
  STANDBY - prevents all users other than SYS from making changes to any database object being maintained by SQL Apply.
  NONE - normal database security; users can modify any data for which they have the necessary privileges.

Use the ALTER DATABASE statement to set the GUARD for a database.


alter database guard (all | standby | none)

For example, to set the GUARD to STANDBY on a logical standby database, use:


SQL> alter database guard standby;

Database altered.

 

In order to preserve the integrity of a logical standby database, many organizations choose to GUARD the logical standby using the ALL value, which prevents any type of user modification at the database level. There are times, however, when a small modification needs to occur on the logical standby database. For example, a table or index needs to be created on the logical standby to support the requirements of an application working against the standby. Although the DBA can quickly set the GUARD to STANDBY or NONE, make the necessary modifications, and then set the GUARD back to ALL, this can present a dangerous scenario. Even though the entire process may require a very small amount of time, these actions are performed at the database level and can allow users temporary access to make other modifications outside of the ones planned.

Fortunately, Oracle Data Guard provides a better solution that allows the DBA to temporarily disable the GUARD for the current session only. To do this, issue the following statement:

SQL> alter session disable guard;

Session altered.

Once all modifications have been made to the logical standby database, re-enable the GUARD for your session:

SQL> alter session enable guard;

Session altered.

By default, the GUARD is set to NONE for a primary database which means users and applications are free to modify any tables for which they have the privileges necessary to perform modifications.


# ---[ Default GUARD Value for a Primary Database ] --- #

SQL> select guard_status from v$database;

GUARD_STATUS
------------
NONE

By default, the GUARD is set to ALL for a logical standby database which is the most stringent setting as it prevents user modifications to all tables in a database, replicated by SQL Apply or not.


# ---[ Default GUARD Value for a Logical Standby Database ] --- #

SQL> select guard_status from v$database;

GUARD_STATUS
------------
ALL

Note that the NONE and ALL settings are available to all databases (primary or standby) while the STANDBY setting is only meaningful on a logical standby database.

 

Be careful not to set the logical standby GUARD on the primary database as this is one sure way to quickly bring the production database to a screeching halt.

SQL> connect sys/manager@modesto as sysdba    -- Primary database
Connected.

SQL> alter database guard standby;

Database altered.

SQL> connect scott/tiger    -- Connect as non-sys user
Connected.

SQL> update scott.emp set monthly_salary = 235000 where emp_id = 30;
update scott.emp set monthly_salary = 235000 where emp_id = 30
             *
ERROR at line 1:
ORA-16224: Database Guard is enabled

SQL> connect sys/manager@modesto as sysdba
Connected.

SQL> alter database guard none;    -- Let's go ahead and disable the GUARD!

Database altered.

You could have used the ALL setting on the primary database to achieve the same results. In fact, using the ALL setting is a quick and efficient method for making your production database a read-only database without the need for a shutdown.
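For example (run as SYS on the primary; remember to set the guard back when finished):


SQL> alter database guard all;

Database altered.

-- All non-SYS sessions now receive "ORA-16224: Database Guard is enabled" on DML/DDL.

SQL> alter database guard none;

Database altered.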

Errors When Attempting to Use the Flash Recovery Area for Local Archiving

When configuring the local archive log destination for a primary or physical standby database, I use the Flash Recovery Area (FRA). Doing the same for a logical standby database proved to be a little more challenging.

When attempting to configure the logical standby database in this guide to use the FRA for its local archive log destination, I explicitly set the destination for the FRA by using the USE_DB_RECOVERY_FILE_DEST parameter as follows:


log_archive_dest_1='location=use_db_recovery_file_dest'

The logical standby database, however, didn't like that:


SQL> startup
ORACLE instance started.

Total System Global Area 1241513984 bytes
Fixed Size                  1273420 bytes
Variable Size             318767540 bytes
Database Buffers          905969664 bytes
Redo Buffers               15503360 bytes
ORA-16026: parameter location=USE_DB_RECOVERY_FILE_DEST contains an invalid attribute value

To configure the local archive destination to use the FRA, I found that you must explicitly define the destination parameter to include the VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) option.


LOG_ARCHIVE_DEST_1=
  'location=use_db_recovery_file_dest
   valid_for=(online_logfiles,all_roles)'

Setting up a Skip Handler for a DDL Statement

The DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT initialization parameters are not honored once a physical standby database is converted to a logical standby database. This can be a problem when, for example, a datafile is added to the primary database and the datafile paths differ between the primary and standby. This section describes the steps necessary to register a skip handler that provides SQL Apply with a replacement DDL string to execute, converting the primary database datafile path names to the standby datafile path names.

This is not a problem in every configuration. For example, if you are using Oracle Managed Files (OMF), SQL Apply will successfully execute the CREATE and ALTER TABLESPACE DDL generated on the primary, using system-generated path names appropriate to the logical standby.


# ---[ Primary ] --- #

SQL> select name, value from v$spparameter
  2  where name in ('db_create_file_dest', 'db_unique_name');

NAME                      VALUE
------------------------- ------------------
db_create_file_dest       /u02/oradata
db_unique_name            modesto

SQL> create tablespace data2 datafile size 5m;

Tablespace created.

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  where tablespace_name = 'DATA2';

TABLESPACE_NAME    FILE_NAME
------------------ ---------------------------------------------------------
DATA2              /u02/oradata/MODESTO/datafile/o1_mf_data2_6lwh7wgv_.dbf


# ---[ Logical Standby ] --- #

SQL> select name, value from v$spparameter
  2  where name in ('db_create_file_dest', 'db_unique_name');

NAME                      VALUE
------------------------- ------------------
db_create_file_dest       /u02/oradata
db_unique_name            turlock

----------------------- alert.log -----------------------
Wed Jan 12 18:45:28 EST 2011
Completed: create tablespace data2 datafile size 5m
---------------------------------------------------------

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  where tablespace_name = 'DATA2';

TABLESPACE_NAME    FILE_NAME
------------------ ---------------------------------------------------------
DATA2              /u02/oradata/TURLOCK/datafile/o1_mf_data2_6lwh8q9d_.dbf

If, on the other hand, you specify a physical path name in the CREATE/ALTER TABLESPACE statement that does not exist on the logical standby, SQL Apply will fail to process the statement. Whenever SQL Apply encounters an error while applying a SQL statement, it stops and provides the DBA with an opportunity to correct the statement and restart SQL Apply.


# ---[ Primary ] --- #

SQL> alter tablespace data2 add datafile '/u05/oradata/MODESTO/data02.dbf' size 5m;

Tablespace altered.


# ---[ Logical Standby ] --- #

----------------------- alert.log -----------------------
Wed Jan 12 19:59:36 EST 2011
alter tablespace data2 add datafile '/u05/oradata/MODESTO/data02.dbf' size 5m
Wed Jan 12 19:59:36 EST 2011
ORA-1119 signalled during: alter tablespace data2 add datafile '/u05/oradata/MODESTO/data02.dbf' size 5m...
LOGSTDBY status: ORA-01119: error in creating database file '/u05/oradata/MODESTO/data02.dbf'
ORA-27040: file create error, unable to create file
Linux Error: 2: No such file or directory
LOGSTDBY Apply process P004 pid=31 OS id=28497 stopped
Wed Jan 12 19:59:36 EST 2011
Errors in file /u01/app/oracle/admin/turlock/bdump/turlock_lsp0_28465.trc:
ORA-12801: error signaled in parallel query server P004
ORA-01119: error in creating database file '/u05/oradata/MODESTO/data02.dbf'
ORA-27040: file create error, unable to create file
Linux Error: 2: No such file or directory
LOGSTDBY Analyzer process P003 pid=30 OS id=28495 stopped
LOGSTDBY Apply process P005 pid=32 OS id=28499 stopped
LOGSTDBY Apply process P006 pid=33 OS id=28501 stopped
LOGSTDBY Apply process P007 pid=34 OS id=28503 stopped
LOGSTDBY Apply process P008 pid=35 OS id=28505 stopped
---------------------------------------------------------

SQL> select event_timestamp, event, status
  2  from dba_logstdby_events;

EVENT_TIMESTAMP               EVENT                          STATUS
----------------------------- ------------------------------ ------------------------------
12-JAN-11 07.59.36.134349 PM  alter tablespace data2 add dat ORA-01119: error in creating d
                              afile '/u05/oradata/MODESTO/da atabase file '/u05/oradata/MOD
                              ta02.dbf' size 5m              ESTO/data02.dbf'
                                                             ORA-27040: file create error,
                                                             unable to create file
                                                             Linux Error: 2: No such file o
                                                             r directory

-- ----------------------------------------------
-- Disable the database guard for this session so
-- we can modify the logical standby.
-- ----------------------------------------------
SQL> alter session disable guard;

Session altered.

-- ----------------------------------------------
-- Issue a compensating transaction or statement
-- on the logical standby. For example, issue the
-- CREATE TABLESPACE or ALTER TABLESPACE command
-- that failed, but use the correct file path
-- name.
-- ----------------------------------------------
SQL> alter tablespace data2 add datafile '/u05/oradata/TURLOCK/data02.dbf' size 5m;

Tablespace altered.

-- ----------------------------------------------
-- Re-enable the database guard for this session.
-- ----------------------------------------------
SQL> alter session enable guard;

Session altered.

-- ----------------------------------------------
-- Finally, restart logical apply with a clause
-- that will cause the failed transaction to be
-- automatically skipped.
-- ----------------------------------------------
SQL> alter database start logical standby apply immediate skip failed transaction;

Database altered.

-- ----------------------------------------------
-- Verify results.
-- ----------------------------------------------
SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  where tablespace_name = 'DATA2';

TABLESPACE_NAME    FILE_NAME
------------------ ---------------------------------------------------------
DATA2              /u02/oradata/TURLOCK/datafile/o1_mf_data2_6lwh8q9d_.dbf
DATA2              /u05/oradata/TURLOCK/data02.dbf

As stated at the beginning of this section, it is possible to avoid errors of this nature on the logical standby database by registering a skip handler that provides SQL Apply with a replacement DDL string to execute, converting the primary database datafile path names to the standby datafile path names. The steps to do so are presented below and should be run on the logical standby database.

  1. First, create the PL/SQL 'skip procedure' to handle tablespace DDL transactions.


create or replace procedure sys.handle_tbs_ddl (
    old_stmt in  varchar2
  , stmt_typ in  varchar2
  , schema   in  varchar2
  , name     in  varchar2
  , xidusn   in  number
  , xidslt   in  number
  , xidsqn   in  number
  , action   out number
  , new_stmt out varchar2
) as
begin
  -- --------------------------------------------------------
  -- All primary file specifications that contain the
  -- directory '/u05/oradata/MODESTO' should be changed to
  -- the '/u05/oradata/TURLOCK' directory specification.
  -- --------------------------------------------------------
  new_stmt := replace(old_stmt, '/u05/oradata/MODESTO', '/u05/oradata/TURLOCK');
  action   := dbms_logstdby.skip_action_replace;
exception
  when others then
    action   := dbms_logstdby.skip_action_error;
    new_stmt := null;
end handle_tbs_ddl;
/

  2. Stop SQL Apply.


SQL> alter database stop logical standby apply;

Database altered.

  3. Register the skip procedure with SQL Apply.


SQL> execute dbms_logstdby.skip(stmt => 'TABLESPACE', proc_name => 'sys.handle_tbs_ddl');

PL/SQL procedure successfully completed.

  4. Start SQL Apply.


SQL> alter database start logical standby apply immediate;

Database altered.

  5. Perform a test.


# ---[ Primary ] --- #

SQL> alter tablespace data2 add datafile '/u05/oradata/MODESTO/data03.dbf' size 5m;

Tablespace altered.


# ---[ Logical Standby ] --- #

----------------------- alert.log -----------------------
Wed Jan 12 20:51:58 EST 2011
LOGSTDBY status: ORA-16110: user procedure processing of logical standby apply DDL
LOGSTDBY status: ORA-16202: Skip procedure requested to replace statement
Wed Jan 12 20:51:58 EST 2011
alter tablespace data2 add datafile '/u05/oradata/TURLOCK/data03.dbf' size 5m
Completed: alter tablespace data2 add datafile '/u05/oradata/TURLOCK/data03.dbf' size 5m
---------------------------------------------------------

SQL> select tablespace_name, file_name
  2  from dba_data_files
  3  where tablespace_name = 'DATA2';

TABLESPACE_NAME    FILE_NAME
------------------ ---------------------------------------------------------
DATA2              /u02/oradata/TURLOCK/datafile/o1_mf_data2_6lwh8q9d_.dbf
DATA2              /u05/oradata/TURLOCK/data02.dbf
DATA2              /u05/oradata/TURLOCK/data03.dbf

Recovering from a Failed DDL Transaction

The previous section provided an example of how to recover from a failed DDL transaction.

Recovering from a Failed DML Transaction

When SQL Apply receives an error during a DDL transaction, it is often safe for the user to manually issue a compensating DDL statement, skip the failed transaction, and restart SQL Apply. A failed DML transaction, however, can be problematic and should be carefully reviewed before simply being skipped. For example, suppose a transaction consists of 100 inserts and 2 updates. If one of the inserts fails and the user decides to simply skip that single insert, the entire transaction is considered logically corrupt, creating data integrity issues on the standby database.

In most cases, a failed DML transaction is the result of a table associated with that transaction being out of synch with its corresponding table on the primary (for example, because a user modified the table directly on the standby while the guard was disabled or relaxed).

The best way to resolve this type of error is to instantiate the table so that it is once again in synch with the primary. Fortunately, Oracle provides the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure as an easy and efficient method for bringing a table on a logical standby back into sync with its version on the primary database.

 

About the INSTANTIATE_TABLE Procedure

This procedure creates and populates a table in the standby database from a corresponding table in the primary database. It requires the name of a database link (dblink) to the primary as an input parameter. If the table already exists in the logical standby database, it will be dropped and re-created based on the table definition at the primary database. Note that this procedure brings over only the data associated with the table, not the associated indexes and constraints.

Use the INSTANTIATE_TABLE procedure to:

  • Add a table to a standby database.
  • Re-create a table in a standby database.

Syntax

DBMS_LOGSTDBY.INSTANTIATE_TABLE (
     schema_name         IN VARCHAR2,
     table_name          IN VARCHAR2,
     dblink              IN VARCHAR2);

Parameters

Parameter     Description
------------- -----------------------------------------------------------
schema_name   Name of the schema
table_name    Name of the table to be created or re-created in the
              standby database
dblink        Name of the database link that connects to the primary
              database; the account it connects to must have privileges
              to read and lock the table in the primary database, as
              well as the SELECT_CATALOG_ROLE on the primary database

Usage Notes

  • Use this procedure to create and populate a table in a way that keeps the data on the standby database transactionally consistent with the primary database.

  • This table will not be synchronized with the rest of the tables being maintained by SQL Apply, and SQL Apply will not begin to maintain it until it encounters redo that occurred after the table was instantiated from the primary. The SCN at which the table was instantiated from the primary database is available in the DBA_LOGSTDBY_EVENTS view.

  • The specified table must be a table that is supported by logical standby (that is, it does not appear in the DBA_LOGSTDBY_UNSUPPORTED view on the primary database).

  • If there are any skip rules that specifically name this table (without any wild cards), those skip rules will be dropped as part of INSTANTIATE_TABLE so that the table will be properly maintained by SQL Apply in the future. If there are skip rules that indirectly reference this table (that is, they match a skip rule with a wild card in the schema_name or table_name and have a TABLE, DML, or SCHEMA_DDL statement type), INSTANTIATE_TABLE will fail with an ORA-16278 error. Any multi-object skip rules that pertain to the table must be dropped or changed before re-attempting the INSTANTIATE_TABLE call (see the query sketch following this list).
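The following sketch shows how you might review existing skip rules before the call and locate the instantiation SCN afterwards. Both views exist in 10g, although the exact columns reported can vary by release, so a SELECT * is used for the skip rules:

-- Review any existing skip rules that name or match the table.
SQL> select * from dba_logstdby_skip;

-- After the call, locate the SCN at which the table was instantiated.
SQL> select event_time, status
  2  from dba_logstdby_events
  3  where status like 'Instantiation SCN%';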

The first step is to create a database link on the logical standby that points to the primary.


SQL> connect sys/manager@turlock as sysdba
Connected.

SQL> create database link modesto
  2  connect to scott identified by tiger
  3  using 'modesto.idevelopment.info';

Database link created.

Next, let's modify the structure of the SCOTT.EMP table on the logical standby, which will cause a failed DML transaction on the logical standby when updates for this table arrive from the primary.


SQL> connect sys/manager@turlock as sysdba
Connected.

SQL> alter database guard none;

Database altered.

SQL> connect scott/tiger@turlock
Connected.

SQL> alter table emp add (new_column varchar2(10));

Table altered.

From the primary database, modify a record or two in the SCOTT.EMP table. After modifying the table records, perform a log switch on the primary to send its redo to the logical standby.


SQL> connect scott/tiger@modesto
Connected.

SQL> update emp set monthly_salary=22083 where emp_id=50;

1 row updated.

SQL> commit;

Commit complete.

SQL> connect sys/manager@modesto as sysdba
Connected.

SQL> alter system switch logfile;

System altered.

Query the DBA_LOGSTDBY_EVENTS view on the logical standby to look for the failed DML transaction.


SQL> connect sys/manager@turlock as sysdba
Connected.

SQL> select a.event_time, a.status
  2  from (select b.event_time, b.status
  3        from dba_logstdby_events b
  4        order by b.event_time desc
  5       ) a
  6  where rownum < 5;

EVENT_TIME           STATUS
-------------------- ---------------------------------------------------------
03-FEB-2011 16:11:39 ORA-26676: Table 'SCOTT.EMP' has 9 columns in the LCR and
                     10 columns in the replicated site
03-FEB-2011 16:11:38 ORA-16222: automatic Logical Standby retry of last action
03-FEB-2011 16:11:38 ORA-16111: log mining and apply setting up
03-FEB-2011 16:11:38 ORA-26676: Table 'SCOTT.EMP' has 9 columns in the LCR and
                     10 columns in the replicated site

To start the recovery process for the failed DML transaction, we first need to stop logical standby apply.


SQL> alter database stop logical standby apply;

Database altered.

Next, use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure on the logical standby to put the SCOTT.EMP table back in synch with the primary.


SQL> exec dbms_logstdby.instantiate_table('SCOTT', 'EMP', 'MODESTO.IDEVELOPMENT.INFO');

PL/SQL procedure successfully completed.

Restart logical standby apply.


SQL> alter database start logical standby apply immediate;

Database altered.

Verify the integrity of the logical standby and the new SCOTT.EMP table.


SQL> select a.event_time, a.status
  2  from (select b.event_time, b.status
  3        from dba_logstdby_events b
  4        order by b.event_time desc
  5       ) a
  6  where rownum < 5;

EVENT_TIME           STATUS
-------------------- ---------------------------------------------------------
03-FEB-2011 16:26:04 ORA-16111: log mining and apply setting up
03-FEB-2011 16:23:49 Instantiation SCN: 2704695
03-FEB-2011 16:11:39 ORA-26676: Table 'SCOTT.EMP' has 9 columns in the LCR and
                     10 columns in the replicated site
03-FEB-2011 16:11:38 ORA-16222: automatic Logical Standby retry of last action

SQL> select monthly_salary from scott.emp where emp_id = 50;

MONTHLY_SALARY
--------------
         22083

Set the GUARD on the logical standby back to its original setting.


SQL> alter database guard standby;

Database altered.

Activating a Logical Standby Database (Role Transition)

A database in an Oracle Data Guard configuration operates in one of two mutually exclusive roles: primary or standby. Changing a database from one role to the other is referred to as a role transition. Role transitions play an important part in Data Guard by providing an interface that allows DBA's to activate a standby database to take over as the primary production database, or vice versa. Oracle 10g Data Guard supports two types of role transitions: switchover and failover. Knowing the difference between the two is crucial.
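As a preview, a switchover in a logical standby configuration is driven by the PREPARE TO SWITCHOVER and COMMIT TO SWITCHOVER statements. The following is a minimal sketch of the command sequence only; the output and the several verification steps a real switchover requires are omitted here:

-- On the primary:
SQL> alter database prepare to switchover to logical standby;

-- On the logical standby:
SQL> alter database prepare to switchover to primary;

-- Back on the primary, once the standby has acknowledged:
SQL> alter database commit to switchover to logical standby;

-- Finally, on the logical standby:
SQL> alter database commit to switchover to primary;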

A companion guide describes how to perform role transitions and activate a logical standby database.

Further Reading

Additional information on Oracle 10g Release 2 Data Guard can be found in the Data Guard Concepts and Administration guide, available from the Oracle Documentation Library at http://download.oracle.com/docs/cd/B19306_01/server.102/b14239.pdf.

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own website at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.



Copyright (c) 1998-2017 Jeffrey M. Hunter. All rights reserved.

All articles, scripts, and material located at the Internet address http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under the copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on Thursday, 17-Apr-2014 02:51:09 EDT