DBA Tips Archive for Oracle

  


Using the Oracle ASM Cluster File System (Oracle ACFS) on Linux - (11gR2)

by Jeff Hunter, Sr. Database Administrator

Contents

Introduction

Introduced with Oracle ASM 11g Release 2, Oracle ASM Cluster File System (Oracle ACFS) is a general purpose, POSIX-compliant cluster file system implemented as part of Oracle Automatic Storage Management (Oracle ASM). Because it is POSIX compliant, the operating system utilities we use with ext3 and other file systems can also be used with Oracle ACFS. Oracle ACFS extends the Oracle ASM architecture and is used to support many types of files which are typically maintained outside of the Oracle database. For example, Oracle ACFS can be used to store BFILEs, database trace files, executables, report files, and even general purpose files like image, text, video, and audio files. In addition, Oracle ACFS can be used as a shared file system for Oracle home binaries.

The features included with Oracle ACFS allow users to create, mount, and manage ACFS using familiar Linux commands. Oracle ACFS provides support for snapshots and the ability to dynamically resize existing file systems online using Oracle ASM Dynamic Volume Manager (ADVM).

Oracle ACFS leverages Oracle ASM functionality that enables:

While Oracle ACFS is useful for storing general purpose files, there are certain files it is not meant for. For example, Oracle ASM (traditional disk groups) is still the preferred storage manager for all database files because Oracle ACFS does not support direct I/O for file read and write operations in 11g Release 2 (11.2). Oracle ASM was specifically designed and optimized to provide the best performance for database file types. In addition to Oracle database files, Oracle ACFS does not support files for the Oracle Grid Infrastructure home. Finally, Oracle ACFS does not support any Oracle files that can be directly stored in Oracle ASM. For example, the SPFILE, flashback log files, control files, archived redo log files, and the Grid Infrastructure OCR and voting disk should be stored in Oracle ASM disk groups. The key point to remember is that Oracle ACFS is the preferred file manager for non-database files and is optimized for general purpose / customer files which are maintained outside of the Oracle database.

This article describes three ways to create an Oracle ASM Cluster File System in an Oracle 11g Release 2 RAC database on the Linux operating environment:

There is actually a fourth method that can be employed to create an Oracle ASM Cluster File System which is to use the ASMCMD command line interface. Throughout this guide, I'll demonstrate how to use the ASMCMD command line interface in place of SQL where appropriate.

The Linux distribution used in this guide is CentOS 5.5. CentOS is a free Enterprise-class Linux distribution derived from the Red Hat Enterprise Linux (RHEL) sources and aims to be 100% binary compatible. Although CentOS 5 is equivalent to RHEL 5, CentOS is not a supported operating system for Oracle ASM Cluster File System. Refer to the workaround documented in the prerequisites section of this article if you are using CentOS or a similar Red Hat clone.

 

It is assumed that an Oracle RAC database is already installed, configured, and running. Refer to this guide for instructions on how to build an inexpensive two-node Oracle RAC 11g Release 2 database on Linux.

ACFS Components

Before diving into the details on how to create and manage Oracle ASM Cluster File System, it may be helpful to first discuss the major components.

Figure 1 shows the various components that make up Oracle ACFS and provides an illustration of the example configuration that will be created using this guide.

Figure 1: Oracle ASM Cluster File System Components

Everything starts with an Oracle ASM disk group. An Oracle ASM disk group is made up of one or more disks and is shown in Figure 1 as DOCSDG1. The next component is an Oracle ASM volume, which is created within an Oracle ASM disk group. The example configuration illustrated above shows that we will be creating three volumes named docsvol1, docsvol2, and docsvol3 on the new disk group named DOCSDG1. Finally, we will be creating a cluster file system for each volume whose mount points will be /documents1, /documents2, and /documents3 respectively.

With Oracle ACFS, as long as there exists free space within the ASM disk group, any of the volumes can be dynamically expanded which means the file system gets expanded as a result. As I will demonstrate later in this article, expanding a volume / file system is an effortless process and can be performed online without the need to take any type of outage!
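As a preview of how painless this is, the following is a minimal sketch of growing a mounted Oracle ACFS while users continue to read and write files. It uses the /documents3 file system and the acfsutil size command that appear later in this guide; the 5 GB increment is arbitrary and simply assumes the DOCSDG1 disk group still has that much free space:

# Grow the mounted /documents3 file system by 5 GB (needs free space in the DOCSDG1 disk group).
[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3

# Verify the new size; no dismount or outage is required.
[root@racnode1 ~]# df -h /documents3

The underlying ADVM volume is extended automatically as part of the file system resize.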

Oracle ASM Dynamic Volume Manager (ADVM)

Besides an Oracle ASM disk group, another key component of Oracle ACFS is the new Oracle ASM Dynamic Volume Manager (ADVM). ADVM provides volume management services and a standard driver interface to its clients (ACFS, ext3, ext4, reiserfs, OCFS2, etc.). The ADVM services include functionality to create, resize, delete, enable, and disable dynamic volumes. An ASM dynamic volume is constructed out of an ASM file with an 'ASMVOL' type attribute that distinguishes it from other ASM file types (i.e. DATAFILE, TEMPFILE, ONLINELOG, etc.):


ASM File Name / Volume Name / Device Name                                     Bytes  File Type
---------------------------------------------------------------  ------------------  ------------------
+CRS/racnode-cluster/ASMPARAMETERFILE/REGISTRY.253.734544679                  1,536  ASMPARAMETERFILE
+CRS/racnode-cluster/OCRFILE/REGISTRY.255.734544681                     272,756,736  OCRFILE
                                                                  ------------------
                                                                         272,758,272

+DOCSDG1 [DOCSVOL1] /dev/asm/docsvol1-300                            34,359,738,368  ASMVOL
+DOCSDG1 [DOCSVOL2] /dev/asm/docsvol2-300                            34,359,738,368  ASMVOL
+DOCSDG1 [DOCSVOL3] /dev/asm/docsvol3-300                            26,843,545,600  ASMVOL
                                                                  ------------------
                                                                      95,563,022,336

+FRA/RACDB/ARCHIVELOG/2010_11_08/thread_1_seq_69.264.734565029           42,991,616  ARCHIVELOG
+FRA/RACDB/ARCHIVELOG/2010_11_08/thread_2_seq_2.266.734565685            41,260,544  ARCHIVELOG
< SNIP >
+FRA/RACDB/ONLINELOG/group_3.259.734554873                               52,429,312  ONLINELOG
+FRA/RACDB/ONLINELOG/group_4.260.734554877                               52,429,312  ONLINELOG
                                                                  ------------------
                                                                      12,227,537,408

+RACDB_DATA/RACDB/CONTROLFILE/Current.256.734552525                      18,890,752  CONTROLFILE
+RACDB_DATA/RACDB/DATAFILE/EXAMPLE.263.734552611                        157,294,592  DATAFILE
+RACDB_DATA/RACDB/DATAFILE/SYSAUX.260.734552569                       1,121,984,512  DATAFILE
+RACDB_DATA/RACDB/DATAFILE/SYSTEM.259.734552539                         744,497,152  DATAFILE
+RACDB_DATA/RACDB/DATAFILE/UNDOTBS1.261.734552595                       791,683,072  DATAFILE
+RACDB_DATA/RACDB/DATAFILE/UNDOTBS2.264.734552619                       209,723,392  DATAFILE
+RACDB_DATA/RACDB/DATAFILE/USERS.265.734552627                            5,251,072  DATAFILE
+RACDB_DATA/RACDB/ONLINELOG/group_1.257.734552529                        52,429,312  ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_2.258.734552533                        52,429,312  ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_3.266.734554871                        52,429,312  ONLINELOG
+RACDB_DATA/RACDB/ONLINELOG/group_4.267.734554875                        52,429,312  ONLINELOG
+RACDB_DATA/RACDB/PARAMETERFILE/spfile.268.734554879                          4,608  PARAMETERFILE
+RACDB_DATA/RACDB/TEMPFILE/TEMP.262.734552605                            93,331,456  TEMPFILE
+RACDB_DATA/RACDB/spfileracdb.ora                                             4,608  PARAMETERFILE
                                                                  ------------------
                                                                       3,352,382,464
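The listing above was produced by querying the Oracle ASM dictionary views. The exact script is not reproduced here, but a simplified sketch along the following lines (joining V$ASM_FILE, V$ASM_ALIAS, and V$ASM_DISKGROUP, with column formatting omitted) returns one row per ASM file together with its size and type; the dynamic volumes show up with a file type of ASMVOL:

SQL> -- Sketch: list ASM files with their size and type (dynamic volumes appear as ASMVOL)
SQL> select dg.name as disk_group, a.name as file_name, f.bytes, f.type
  2  from v$asm_file f
  3       join v$asm_alias a
  4         on a.group_number = f.group_number and a.file_number = f.file_number
  5       join v$asm_diskgroup dg
  6         on dg.group_number = f.group_number
  7  order by dg.name, f.type, a.name;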

Oracle ACFS and other supported 3rd party file systems can use Oracle ADVM as a volume management platform to create and manage file systems while leveraging the full power and functionality of Oracle ASM features. A volume may be created in its own Oracle ASM disk group or can share space in an already existing disk group. Any number of volumes can be created in an ASM disk group. Creating a new volume in an Oracle ASM disk group can be performed using the ASM Configuration Assistant (ASMCA), Oracle Enterprise Manager (OEM), SQL, or ASMCMD. For example:


asmcmd volcreate -G docsdg1 -s 20G docsvol3

Once a new volume is created on Linux, the ADVM device driver automatically creates a volume device on the OS that clients use to access the volume. These volumes may be used as raw block devices, or may contain a file system such as ext3, ext4, reiserfs, OCFS2, or Oracle ACFS (as described in this guide), in which case the oracleacfs driver is also used for I/O to the file system.

 

On the Linux platform, Oracle ADVM volume devices are created as block devices regardless of the configuration of the underlying storage in the Oracle ASM disk group. Do not use raw (8) to map Oracle ADVM volume block devices into raw volume devices.

Under Linux, all volume devices are externalized to the OS and appear dynamically as special files in the /dev/asm directory. In this guide, we will use this OS volume device to create an Oracle ACFS.


$ ls -l /dev/asm/
total 0
brwxrwx--- 1 root asmadmin 252, 153601 Nov 28 13:49 docsvol1-300
brwxrwx--- 1 root asmadmin 252, 153602 Nov 28 13:49 docsvol2-300
brwxrwx--- 1 root asmadmin 252, 153603 Nov 28 13:56 docsvol3-300

$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"

Oracle ADVM implements its own extent and striping algorithm to ensure the highest performance for general purpose files. By default, an ADVM volume consists of four columns of 64MB extents with a 128KB stripe width. ADVM writes data in 128KB stripes in round-robin fashion to each column before moving on to the next set of four extent columns. ADVM uses Dirty Region Logging (DRL) for mirror recovery after a node or instance failure. This DRL scheme requires a DRL file in the ASM disk group to be associated with each ASM dynamic volume.
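If you want to confirm these defaults for a particular volume, the stripe geometry is externalized in the V$ASM_VOLUME view (a sketch only; the same values also appear in the asmcmd volinfo output shown later in this guide):

SQL> -- Sketch: show the column count and stripe width for each dynamic volume
SQL> select volume_name, stripe_columns, stripe_width_k, redundancy
  2  from v$asm_volume
  3  order by volume_name;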

ACFS Prerequisites

Install Oracle Grid Infrastructure

Oracle Grid Infrastructure 11g Release 2 (11.2) or higher is required for Oracle ACFS. Oracle Grid Infrastructure includes Oracle Clusterware, Oracle ASM, Oracle ACFS, Oracle ADVM, and driver resources software components, which are installed into the Grid Infrastructure home using Oracle Universal Installer (OUI). Refer to this guide for instructions on how to configure Oracle Grid Infrastructure as part of an Oracle RAC 11g Release 2 database install on Linux.

Log In as the Grid Infrastructure User

To perform the examples demonstrated in this guide, it is assumed that the Oracle Grid Infrastructure owner is 'grid'. If the Oracle Grid Infrastructure owner is 'oracle', then log in as the oracle account.

Log in as the Oracle Grid Infrastructure owner and switch to the Oracle ASM environment on node 1 of the RAC when performing non-root ACFS tasks:


[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid

[grid@racnode1 ~]$ dbhome
/u01/app/11.2.0/grid

[grid@racnode1 ~]$ echo $ORACLE_SID
+ASM1

Verify / Create ASM Disk Group

After validating the Oracle Grid Infrastructure installation and logging in as the Oracle Grid Infrastructure owner (grid), the next step is to decide which Oracle ASM disk group should be used to create the Oracle ASM dynamic volume(s). The following SQL demonstrates how to search the available ASM disk groups:


break on inst_id skip 1
column inst_id  format 9999999      heading "Instance ID" justify left
column name     format a15          heading "Disk Group"  justify left
column total_mb format 999,999,999  heading "Total (MB)"  justify right
column free_mb  format 999,999,999  heading "Free (MB)"   justify right
column pct_free format 999.99       heading "% Free"      justify right

======================================================================

SQL> select inst_id, name, total_mb, free_mb, round((free_mb/total_mb)*100,2) pct_free
  2  from gv$asm_diskgroup
  3  where total_mb != 0
  4  order by inst_id, name;

Instance ID  Disk Group        Total (MB)    Free (MB)  % Free
-----------  ---------------  -----------  -----------  ------
          1  CRS                    2,205        1,809   82.04
             FRA                   33,887       24,802   73.19
             RACDB_DATA            33,887       30,623   90.37

          2  CRS                    2,205        1,809   82.04
             FRA                   33,887       24,802   73.19
             RACDB_DATA            33,887       30,623   90.37

 

The same task can be accomplished using the ASMCMD command-line utility:

[grid@racnode1 ~]$ asmcmd lsdg

If you find an existing Oracle ASM disk group that has adequate space, the Oracle ASM dynamic volume(s) can be created on that free space or a new ASM disk group can be created.

For the purpose of this guide, I will be creating a dedicated Oracle ASM disk group named DOCSDG1 which will be used for all three Oracle ASM dynamic volumes. I already set up a shared iSCSI volume and provisioned it using ASMLib. The ASMLib shared volume that will be used to create the new disk group is named ORCL:ASMDOCSVOL1.


[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> select path, name, header_status, os_mb from v$asm_disk;

PATH               NAME            HEADER_STATUS      OS_MB
------------------ --------------- ------------- ----------
ORCL:ASMDOCSVOL1                   PROVISIONED       98,303
ORCL:CRSVOL1       CRSVOL1         MEMBER             2,205
ORCL:DATAVOL1      DATAVOL1        MEMBER            33,887
ORCL:FRAVOL1       FRAVOL1         MEMBER            33,887

After identifying the ASMLib volume and verifying it is accessible from all Oracle RAC nodes, log in to the Oracle ASM instance and create the new disk group from one of the Oracle RAC nodes. After verifying the disk group was created, log in to the Oracle ASM instance on all other RAC nodes and mount the new disk group:


[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> CREATE DISKGROUP docsdg1 EXTERNAL REDUNDANCY DISK 'ORCL:ASMDOCSVOL1' SIZE 98303 M;

Diskgroup created.

SQL> @asm_diskgroups

Disk Group  Sector  Block   Allocation
Name        Size    Size    Unit Size    State     Type    Total Size (MB)  Used Size (MB)  Pct. Used
----------  ------  ------  -----------  --------  ------  ---------------  --------------  ---------
CRS            512   4,096    1,048,576  MOUNTED   EXTERN            2,205             396      17.96
DOCSDG1        512   4,096    1,048,576  MOUNTED   EXTERN           98,303              50        .05
FRA            512   4,096    1,048,576  MOUNTED   EXTERN           33,887           9,085      26.81
RACDB_DATA     512   4,096    1,048,576  MOUNTED   EXTERN           33,887           3,264       9.63
                                                           ---------------  --------------
Grand Total:                                                       168,282          12,795

===============================================================================================

[grid@racnode2 ~]$ sqlplus / as sysasm

SQL> ALTER DISKGROUP docsdg1 MOUNT;

Diskgroup altered.

SQL> @asm_diskgroups

Disk Group  Sector  Block   Allocation
Name        Size    Size    Unit Size    State     Type    Total Size (MB)  Used Size (MB)  Pct. Used
----------  ------  ------  -----------  --------  ------  ---------------  --------------  ---------
CRS            512   4,096    1,048,576  MOUNTED   EXTERN            2,205             396      17.96
DOCSDG1        512   4,096    1,048,576  MOUNTED   EXTERN           98,303              50        .05
FRA            512   4,096    1,048,576  MOUNTED   EXTERN           33,887           9,085      26.81
RACDB_DATA     512   4,096    1,048,576  MOUNTED   EXTERN           33,887           3,264       9.63
                                                           ---------------  --------------
Grand Total:                                                       168,282          12,795

Verify Oracle ASM Volume Driver

The operating environment used in this guide is CentOS 5.5 x86_64:


[root@racnode1 ~]# uname -a
Linux racnode1 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

On supported operating systems, the Oracle ACFS modules will be configured and the Oracle ASM volume driver started by default after installing Oracle Grid Infrastructure. With CentOS and other unsupported operating systems, a workaround is required to enable Oracle ACFS. One of the first tasks is to manually start the Oracle ASM volume driver:


[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
ADVM/ACFS is not supported on centos-release-5-5.el5.centos

The failed output from the above command should come as no surprise given Oracle ACFS is not supported on CentOS.

By default, the Oracle ACFS modules do not get installed on CentOS because it is not a supported operating environment. This section provides a simple, but unsupported, workaround to get Oracle ACFS working on CentOS. This workaround includes some of the manual steps that are required to launch the Oracle ASM volume driver when installing Oracle ACFS on a non-clustered system.

 

The steps documented in this section serve as a workaround to set up Oracle ACFS on CentOS and are by no means supported by Oracle Corporation. Do not attempt these steps on a critical production environment. You have been warned.

The following steps will need to be run from all nodes in an Oracle RAC database cluster as root.

First, make a copy of the following Perl module:


[root@racnode1 ~]# cd /u01/app/11.2.0/grid/lib
[root@racnode1 lib]# cp -p osds_acfslib.pm osds_acfslib.pm.orig

[root@racnode2 ~]# cd /u01/app/11.2.0/grid/lib
[root@racnode2 lib]# cp -p osds_acfslib.pm osds_acfslib.pm.orig

Next, edit the osds_acfslib.pm Perl module. Search for the string 'support this release' (which was line 278 in my case).

Replace


if (($release =~ /enterprise-release-5/) || ($release =~ /redhat-release-5/))

with


if (($release =~ /enterprise-release-5/) || ($release =~ /redhat-release-5/) || ($release =~ /centos-release-5/))

This will get you past the supported version check; however, if you attempt to load the Oracle ASM volume driver from either Oracle RAC node, you get the following error:


[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
acfsload: ACFS-9129: ADVM/ACFS not installed

To install ADVM/ACFS, copy the following kernel modules from the Oracle Grid Infrastructure home to the expected location:


[root@racnode1 ~]# mkdir /lib/modules/2.6.18-194.el5/extra/usm
[root@racnode1 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode1 bin]# cp *ko /lib/modules/2.6.18-194.el5/extra/usm/

[root@racnode2 ~]# mkdir /lib/modules/2.6.18-194.el5/extra/usm
[root@racnode2 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode2 bin]# cp *ko /lib/modules/2.6.18-194.el5/extra/usm/

Once the kernel modules have been copied, we can verify the ADVM/ACFS installation by running the following from all Oracle RAC nodes:


[root@racnode1 ~]# cd /u01/app/11.2.0/grid/bin
[root@racnode1 bin]# ./acfsdriverstate -orahome /u01/app/11.2.0/grid version
ACFS-9205: OS/ADVM,ACFS installed version = 2.6.18-8.el5(x86_64)/090715.1

[root@racnode2 ~]# cd /u01/app/11.2.0/grid/bin
[root@racnode2 bin]# ./acfsdriverstate -orahome /u01/app/11.2.0/grid version
ACFS-9205: OS/ADVM,ACFS installed version = 2.6.18-8.el5(x86_64)/090715.1

The next step is to record dependencies for the new kernel modules:


[root@racnode1 ~]# depmod
[root@racnode2 ~]# depmod

Next, copy the Oracle ACFS executables to /sbin and set the appropriate permissions. The Oracle ACFS executables are located in the GRID_HOME/install/usm/EL5/<ARCHITECTURE>/<KERNEL_VERSION>/<FULL_KERNEL_VERSION>/bin directory or in the /u01/app/11.2.0/grid/install/usm/cmds/bin directory (12 files) and include any file without the *.ko extension:


[root@racnode1 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode1 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode1 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode1 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode1 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode1 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*

[root@racnode1 ~]# cd /u01/app/11.2.0/grid/install/usm/cmds/bin
[root@racnode1 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode1 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode1 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode1 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode1 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*

------------------------------------------------------------------------------------------------------

[root@racnode2 ~]# cd /u01/app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5-x86_64/bin
[root@racnode2 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode2 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode2 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode2 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode2 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*

[root@racnode2 ~]# cd /u01/app/11.2.0/grid/install/usm/cmds/bin
[root@racnode2 bin]# cp acfs* /sbin; chmod 755 /sbin/acfs*
[root@racnode2 bin]# cp advmutil* /sbin; chmod 755 /sbin/advmutil*
[root@racnode2 bin]# cp fsck.acfs* /sbin; chmod 755 /sbin/fsck.acfs*
[root@racnode2 bin]# cp mkfs.acfs* /sbin; chmod 755 /sbin/mkfs.acfs*
[root@racnode2 bin]# cp mount.acfs* /sbin; chmod 755 /sbin/mount.acfs*

Now, running acfsload start -s will complete without any further messages:


[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s
[root@racnode2 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s

Check that the modules were successfully loaded on all Oracle RAC nodes:


[root@racnode1 ~]# lsmod | grep oracle
oracleacfs            877320  4
oracleadvm            221760  8
oracleoks             276880  2 oracleacfs,oracleadvm
oracleasm              84136  1

[root@racnode2 ~]# lsmod | grep oracle
oracleacfs            877320  4
oracleadvm            221760  8
oracleoks             276880  2 oracleacfs,oracleadvm
oracleasm              84136  1

Configure the Oracle ASM volume driver to load automatically on system startup on all Oracle RAC nodes. You will need to create an initialization script (/etc/init.d/acfsload) that contains the runlevel configuration and the acfsload command. Change the permissions on the /etc/init.d/acfsload script to allow it to be executed by root and then create links in the rc2.d, rc3.d, rc4.d, and rc5.d runlevel directories using 'chkconfig --add':


[root@racnode1 ~]# chkconfig --list | grep acfsload
[root@racnode2 ~]# chkconfig --list | grep acfsload

=======================================================

[root@racnode1 ~]# cat > /etc/init.d/acfsload <<EOF
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
\$ORACLE_HOME/bin/acfsload start -s
EOF

[root@racnode2 ~]# cat > /etc/init.d/acfsload <<EOF
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ASM volume driver on system startup
ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME
\$ORACLE_HOME/bin/acfsload start -s
EOF

=======================================================

[root@racnode1 ~]# chmod 755 /etc/init.d/acfsload
[root@racnode2 ~]# chmod 755 /etc/init.d/acfsload

=======================================================

[root@racnode1 ~]# chkconfig --add acfsload
[root@racnode2 ~]# chkconfig --add acfsload

=======================================================

[root@racnode1 ~]# chkconfig --list | grep acfsload
acfsload        0:off   1:off   2:on    3:on    4:on    5:on    6:off

[root@racnode2 ~]# chkconfig --list | grep acfsload
acfsload        0:off   1:off   2:on    3:on    4:on    5:on    6:off

If the Oracle Grid Infrastructure 'ora.registry.acfs' resource does not exist, create it. This only needs to be performed from one of the Oracle RAC nodes:


[root@racnode1 ~]# su - grid -c crs_stat | grep acfs
[root@racnode2 ~]# su - grid -c crs_stat | grep acfs

=======================================================

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl add type ora.registry.acfs.type \
    -basetype ora.local_resource.type \
    -file /u01/app/11.2.0/grid/crs/template/registry.acfs.type

[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl add resource ora.registry.acfs \
    -attr ACL=\'owner:root:rwx,pgrp:oinstall:r-x,other::r--\' \
    -type ora.registry.acfs.type -f

=======================================================

[root@racnode1 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type

[root@racnode2 ~]# su - grid -c crs_stat | grep acfs
NAME=ora.registry.acfs
TYPE=ora.registry.acfs.type

As a final step, modify any of the Oracle ACFS shell scripts copied to the /sbin directory (above) to include the ORACLE_HOME for Grid Infrastructure. The successful execution of these scripts requires access to certain Oracle shared libraries that are found in the Grid Infrastructure Oracle home. Since many of the Oracle ACFS shell scripts will be executed as the root user account, the ORACLE_HOME environment variable will typically not be set in the shell, which will cause the executable to fail. For example:


[root@racnode1 ~]# /sbin/acfsutil registry
/sbin/acfsutil.bin: error while loading shared libraries: libhasgen11.so: cannot open shared object file: No such file or directory

An easy workaround to get past this error is to set the ORACLE_HOME environment variable for the Oracle Grid Infrastructure home in the Oracle ACFS shell scripts on all Oracle RAC nodes. The ORACLE_HOME should be set at the beginning of the file after the header comments as shown in the following example:


#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#

ORACLE_HOME=/u01/app/11.2.0/grid

ORA_CRS_HOME=%ORA_CRS_HOME%
if [ ! -d $ORA_CRS_HOME ]; then
  ORA_CRS_HOME=$ORACLE_HOME
fi
...

Add the ORACLE_HOME environment variable for the Oracle Grid Infrastructure home as noted above to the following Oracle ACFS shell scripts on all Oracle RAC nodes:

Verify ASM Disk Group Compatibility Level

The compatibility level for the Oracle ASM disk group must be at least 11.2 in order to create an Oracle ASM volume. From the Oracle ASM instance, perform the following checks:


SQL> SELECT compatibility, database_compatibility
  2  FROM v$asm_diskgroup
  3  WHERE name = 'DOCSDG1';

COMPATIBILITY    DATABASE_COMPATIBILITY
---------------- -----------------------
10.1.0.0.0       10.1.0.0.0

If the results show something other than 11.2 or higher (as the above example shows), we need to set the compatibility to at least 11.2 by issuing the following series of SQL statements from the Oracle ASM instance:


[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.asm' = '11.2';

Diskgroup altered.

SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.rdbms' = '11.2';

Diskgroup altered.

SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2';

Diskgroup altered.

 

If you receive an error while attempting to set the 'compatible.advm' attribute, verify that the Oracle ASM volume driver is running:

SQL> ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2';
ALTER DISKGROUP docsdg1 SET ATTRIBUTE 'compatible.advm' = '11.2'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15242: could not set attribute compatible.advm
ORA-15238: 11.2 is not a valid value for attribute compatible.advm
ORA-15477: cannot communicate with the volume driver

Verify the changes to the compatibility level:


SQL> SELECT compatibility, database_compatibility
  2  FROM v$asm_diskgroup
  3  WHERE name = 'DOCSDG1';

COMPATIBILITY    DATABASE_COMPATIBILITY
---------------- -----------------------
11.2.0.0.0       11.2.0.0.0

ASM Configuration Assistant (ASMCA)

This section includes step-by-step instructions on how to create an Oracle ASM cluster file system using the Oracle ASM Configuration Assistant (ASMCA). Note that at the time of this writing, ASMCA only supports the creation of volumes and file systems. Deleting an Oracle ASM volume or file system requires the command-line.
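For reference, removal from the command line would look roughly like the following sketch (using the /documents1 file system and DOCSVOL1 volume created in this section; run the unmounts on every node and the remaining commands from one node only):

# Unmount the cluster file system on every Oracle RAC node.
[root@racnode1 ~]# /bin/umount /documents1
[root@racnode2 ~]# /bin/umount /documents1

# Remove the mount point from the Oracle ACFS mount registry (one node only).
[root@racnode1 ~]# /sbin/acfsutil registry -d /documents1

# Drop the underlying Oracle ASM dynamic volume (one node only, as the grid owner).
[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> ALTER DISKGROUP docsdg1 DROP VOLUME docsvol1;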

Create Mount Point

From each Oracle RAC node, create a directory that will be used to mount the new Oracle ACFS:


[root@racnode1 ~]# mkdir /documents1
[root@racnode2 ~]# mkdir /documents1

Create ASM Cluster File System

As the Oracle Grid Infrastructure owner, run the ASM Configuration Assistant (asmca) from only one node in the cluster (racnode1 for example):


[grid@racnode1 ~]$ asmca

Screen Name Response Screen Shot
Disk Groups When the Oracle ASM configuration assistant starts you are presented with the 'Disk Groups' tab.
Volumes Click on the 'Volumes' tab then click the [Create] button.
Create ASM Volume Create a new ASM volume by supplying a "Volume Name", "Disk Group Name", and "Size". For the purpose of this example, I will be creating a 32GB volume named "docsvol1" on the "DOCSDG1" ASM disk group.

After verifying all values in this dialog are correct, click the [OK] button.
Volume Created After the volume is created, acknowledge the 'Volume: Creation' dialog.

When returned to the "Volumes" tab, the "State" for the new ASM volume should be ENABLED for all Oracle RAC nodes (i.e. 'ENABLED(2 of 2)').
ASM Cluster File Systems Click on the 'ASM Cluster File Systems' tab then click the [Create] button.
Create ASM Cluster File System Verify that the newly created volume (DOCSVOL1) is selected in the 'Volume' list.

Select the 'General Purpose File System' option.

Enter the previously created mount point directory (/documents1) or leave the suggested mount point.

Select the 'Yes' option for 'Register MountPoint'.

After verifying all values in this dialog are correct, click the [OK] button.
ASM Cluster File System Created After the ASM Cluster File System is created, acknowledge the 'ASM Cluster File System: Creation' dialog.
ASM Cluster File Systems The newly created Oracle ASM cluster file system is now listed under the 'ASM Cluster File Systems' tab.

Note that the new clustered file system is not mounted. That will need to be performed manually on all Oracle RAC nodes as a privileged user (root) after exiting from the ASMCA.

Exit the ASM Configuration Assistant by clicking the [Exit] button.

Mount the New Cluster File System

Now that the new Oracle ASM cluster file system has been created and registered in the Oracle ACFS mount registry, log in to all Oracle RAC nodes as root and run the following mount command:


[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /documents1
/sbin/mount.acfs.bin: error while loading shared libraries: libhasgen11.so: cannot open shared object file: No such file or directory

[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /documents1
/sbin/mount.acfs.bin: error while loading shared libraries: libhasgen11.so: cannot open shared object file: No such file or directory

If you don't have the ORACLE_HOME environment variable set to the Oracle Grid Infrastructure home as explained in the prerequisites section of this guide, the mount command will fail as shown above. In order to mount the new cluster file system, the Oracle ACFS binaries need access to certain shared libraries in the ORACLE_HOME for Grid Infrastructure. An easy workaround to get past this error is to set the ORACLE_HOME environment variable for Grid Infrastructure in the file /sbin/mount.acfs on all Oracle RAC nodes. The ORACLE_HOME should be set at the beginning of the file after the header comments as follows:


#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#

ORACLE_HOME=/u01/app/11.2.0/grid

ORA_CRS_HOME=%ORA_CRS_HOME%
if [ ! -d $ORA_CRS_HOME ]; then
  ORA_CRS_HOME=$ORACLE_HOME
fi
...

You should now be able to successfully mount the volume:


[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /documents1
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol1-300 /documents1

Verify Mounted Cluster File System

To verify that the new cluster file system mounted properly, run the following mount command from all Oracle RAC nodes:


[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)

[root@racnode2 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:Public on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)

Set Permissions

With the new cluster file system now mounted on all Oracle RAC nodes, change the permissions to allow user access. For the purpose of this example, I want to grant the oracle user account and dba group read/write permissions. Run the following as root from only one node in the Oracle RAC:


[root@racnode1 ~]# chown oracle.dba /documents1
[root@racnode1 ~]# chmod 775 /documents1

Test

Now let's perform a test to see if all of our hard work paid off.

Node 1

Log in to the first Oracle RAC node as the oracle user account and create a test file on the new cluster file system:


[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode1 ~]$ echo "The Hunter Family: Jeff, Melody, and Alex" > /documents1/test.txt

[oracle@racnode1 ~]$ ls -l /documents1
total 72
drwxr-xr-x 5 root   root      4096 Nov 23 21:17 .ACFS/
drwx------ 2 root   root     65536 Nov 23 21:17 lost+found/
-rw-r--r-- 1 oracle oinstall    42 Nov 23 21:25 test.txt

Node 2

Log in to the second Oracle RAC node as the oracle user account and verify the presence and content of the test file:


[oracle@racnode2 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode2 ~]$ ls -l /documents1
total 72
drwxr-xr-x 5 root   root      4096 Nov 23 21:17 .ACFS/
drwx------ 2 root   root     65536 Nov 23 21:17 lost+found/
-rw-r--r-- 1 oracle oinstall    42 Nov 23 21:25 test.txt

[oracle@racnode2 ~]$ cat /documents1/test.txt
The Hunter Family: Jeff, Melody, and Alex

Oracle Enterprise Manager (OEM)

This section presents a second method that can be used to create an Oracle ASM cluster file system; namely, Oracle Enterprise Manager (OEM). Similar to the ASM Configuration Assistant (ASMCA), OEM provides a convenient graphical user interface for creating and maintaining ASM cluster file systems.

Create Mount Point

From each Oracle RAC node, create a directory that will be used to mount the new Oracle ACFS:


[root@racnode1 ~]# mkdir /documents2
[root@racnode2 ~]# mkdir /documents2

Create ASM Cluster File System

Log in to Oracle Enterprise Manager (OEM) as a privileged database user.


[oracle@racnode1 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://racnode1.idevelopment.info:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode1_racdb/sysman/log

[oracle@racnode2 ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://racnode1.idevelopment.info:1158/em/console/aboutApplication
EM Daemon is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/racnode2_racdb/sysman/log

Screen Name Response Screen Shot
OEM Home From the OEM home screen, click the 'Cluster' tab. Scroll to the bottom of the OEM home page, then click on one of the ASM instances listed.
Automatic Storage Management On the resulting ASM home screen, click on the 'ASM Cluster File System' tab. Next, click the [Create] button.
Create ASM Cluster File System Click the [Create ASM Volume] button.
Create ASM Volume Create a new ASM volume by supplying a "Volume Name", "Disk Group Name", and "Size". For the purpose of this example, I will be creating a 32GB volume named "docsvol2" on the "DOCSDG1" ASM disk group.

After verifying all values in this dialog are correct, click the [OK] button.
Create ASM Cluster File System The newly created ASM volume (/dev/asm/docsvol2-300) will now be entered in the 'Volume Device' field.

Enter a 'Volume Label'. For example, DOCSVOL2.

Select the option for 'Register ASM Cluster File System Mount Point'.

Enter the previously created mount point directory (/documents2).

After verifying all values in this dialog are correct, click the [OK] button.
Automatic Storage Management OEM will create the new cluster file system, register it in the Oracle Grid Infrastructure registry, and leave it in the 'Dismounted' state. Select the new ASM volume and click the [Mount] button.
Cluster Select Hosts Accept the default node selection (all Oracle RAC nodes) and click the [Continue] button.
Mount ASM Cluster File System Enter the 'Mount Point' and click the [Generate Command] button.
Show Command Run the suggested command as the root user on all Oracle RAC nodes:
  [root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol2-300 /documents2
  [root@racnode2 /]# /bin/mount -t acfs /dev/asm/docsvol2-300 /documents2
Note: If you receive an error loading Oracle shared libraries, refer to the following workaround.

After mounting the ASM volume, click the [Return] button on this and the previous screen.
Automatic Storage Management The new Oracle ASM Cluster File System is ready for use!

Verify Mounted Cluster File System

To verify that the new cluster file system mounted properly, run the following mount command from all Oracle RAC nodes:


[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)

[root@racnode2 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:Public on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)

Set Permissions

With the new cluster file system now mounted on all Oracle RAC nodes, change the permissions to allow user access. For the purpose of this example, I want to grant the oracle user account and dba group read/write permissions. Run the following as root from only one node in the Oracle RAC:


[root@racnode1 ~]# chown oracle.dba /documents2
[root@racnode1 ~]# chmod 775 /documents2

Test

Now let's perform a test to see if all of our hard work paid off.

Node 1

Log in to the first Oracle RAC node as the oracle user account and create a test file on the new cluster file system:


[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode1 ~]$ echo "The Hunter Family: Jeff, Melody, and Alex" > /documents2/test.txt

[oracle@racnode1 ~]$ ls -l /documents2
total 72
drwxr-xr-x 5 root   root      4096 Nov 24 13:32 .ACFS/
drwx------ 2 root   root     65536 Nov 24 13:32 lost+found/
-rw-r--r-- 1 oracle oinstall    42 Nov 24 14:10 test.txt

Node 2

Log in to the second Oracle RAC node as the oracle user account and verify the presence and content of the test file:


[oracle@racnode2 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode2 ~]$ ls -l /documents2
total 72
drwxr-xr-x 5 root   root      4096 Nov 24 13:32 .ACFS/
drwx------ 2 root   root     65536 Nov 24 13:32 lost+found/
-rw-r--r-- 1 oracle oinstall    42 Nov 24 14:10 test.txt

[oracle@racnode2 ~]$ cat /documents2/test.txt
The Hunter Family: Jeff, Melody, and Alex

Command Line / SQL

This section presents the third and final method described in this guide that can be used to create an Oracle ASM cluster file system; namely, the command line interface and SQL. Unlike the ASM Configuration Assistant (ASMCA) or Oracle Enterprise Manager (OEM), the command line tools do not require a graphical user interface and are the preferred method when working remotely over a slow network.

Create Oracle ASM Dynamic Volume

Log in as the Oracle Grid Infrastructure owner and switch to the Oracle ASM environment on node 1 of the RAC:


[grid@racnode1 ~]$ id
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[grid@racnode1 ~]$ . oraenv
ORACLE_SID = [+ASM1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/grid

[grid@racnode1 ~]$ dbhome
/u01/app/11.2.0/grid

[grid@racnode1 ~]$ echo $ORACLE_SID
+ASM1

As the Oracle Grid Infrastructure owner, log in to the Oracle ASM instance using SQL*Plus and issue the following SQL:


[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> ALTER DISKGROUP docsdg1 ADD VOLUME docsvol3 SIZE 20G;

Diskgroup altered.

 

The same task can be accomplished using the ASMCMD command-line utility:

[grid@racnode1 ~]$ asmcmd volcreate -G docsdg1 -s 20G --redundancy unprotected docsvol3

To verify that the new Oracle ASM dynamic volume was successfully created, query the view V$ASM_VOLUME (or since we are using Oracle RAC, GV$ASM_VOLUME). Make certain that the STATE column for each Oracle RAC instance is ENABLED:


break on inst_id skip 1
column inst_id       format 9999999      heading "Instance ID"   justify left
column volume_name   format a13          heading "Volume Name"   justify left
column volume_device format a23          heading "Volume Device" justify left
column size_mb       format 999,999,999  heading "Size (MB)"     justify right
column usage         format a5           heading "Usage"         justify right
column state         format a7           heading "State"         justify right

============================================================================

SQL> select inst_id, volume_name, volume_device, size_mb, usage, state
  2  from gv$asm_volume
  3  order by inst_id, volume_name;

Instance ID  Volume Name    Volume Device               Size (MB)  Usage    State
-----------  -------------  -----------------------  ------------  -----  -------
          1  DOCSVOL1       /dev/asm/docsvol1-300          32,768  ACFS   ENABLED
             DOCSVOL2       /dev/asm/docsvol2-300          32,768  ACFS   ENABLED
             DOCSVOL3       /dev/asm/docsvol3-300          20,480         ENABLED

          2  DOCSVOL1       /dev/asm/docsvol1-300          32,768  ACFS   ENABLED
             DOCSVOL2       /dev/asm/docsvol2-300          32,768  ACFS   ENABLED
             DOCSVOL3       /dev/asm/docsvol3-300          20,480         ENABLED

 

The same task can be accomplished using the ASMCMD command-line utility:

[grid@racnode1 ~]$ asmcmd volinfo -G docsdg1 -a
Diskgroup Name: DOCSDG1

         Volume Name: DOCSVOL1
         Volume Device: /dev/asm/docsvol1-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents1

         Volume Name: DOCSVOL2
         Volume Device: /dev/asm/docsvol2-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents2

         Volume Name: DOCSVOL3
         Volume Device: /dev/asm/docsvol3-300
         State: ENABLED
         Size (MB): 20480
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage:
         Mountpath:

Additional checks can be performed on the new ASM volume using the /sbin/advmutil command:


[root@racnode1 ~]# /sbin/advmutil volinfo /dev/asm/docsvol3-300
Interface Version: 1
Size (MB): 20480
Resize Increment (MB): 256
Redundancy: unprotected
Stripe Columns: 4
Stripe Width (KB): 128
Disk Group: DOCSDG1
Volume: DOCSVOL3

As a final check, list the newly created device file on the file system:


[root@racnode1 ~]# ls -l /dev/asm/*
brwxrwx--- 1 root asmadmin 252, 153601 Nov 26 16:55 /dev/asm/docsvol1-300
brwxrwx--- 1 root asmadmin 252, 153602 Nov 26 16:55 /dev/asm/docsvol2-300
brwxrwx--- 1 root asmadmin 252, 153603 Nov 26 17:20 /dev/asm/docsvol3-300

Create Oracle ASM Cluster File System

The next step is to create the Oracle ASM cluster file system on the new Oracle ASM volume created in the previous section. This is performed using the mkfs OS command from only one of the Oracle RAC nodes:


[grid@racnode1 ~]$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"
mkfs.acfs: version         = 11.2.0.1.0.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume          = /dev/asm/docsvol3-300
mkfs.acfs: volume size     = 21474836480
mkfs.acfs: Format complete.

In the above mkfs command, the -t flag indicates that the new file system should be of type ACFS. The block size is set to 4K (-b). Finally, specify the Linux dynamic volume device (/dev/asm/docsvol3-300) and the volume label (DOCSVOL3).
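Before mounting, the new (still unmounted) file system can optionally be sanity checked with the ACFS-aware fsck front end. This is only a sketch; fsck.acfs is one of the executables copied to /sbin in the prerequisites section and should never be run against a mounted file system:

# Optional check of the newly formatted, not-yet-mounted ACFS volume.
[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300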

Mount the New Cluster File System

The mkfs command in the previous section only prepares the volume to be mounted as a file system; it does not actually mount it. To mount the new cluster file system, first create a directory on each Oracle RAC node as the root user account that will be used to mount the new Oracle ACFS:


[root@racnode1 ~]# mkdir /documents3
[root@racnode2 ~]# mkdir /documents3

Mount the cluster file system on each Oracle RAC node using the regular Linux mount command as follows:


[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
/sbin/mount.acfs.bin: error while loading shared libraries: libhasgen11.so: cannot open shared object file: No such file or directory

[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
/sbin/mount.acfs.bin: error while loading shared libraries: libhasgen11.so: cannot open shared object file: No such file or directory

If you don't have the ORACLE_HOME environment variable set to the Oracle Grid Infrastructure home as explained in the prerequisites section of this guide, the mount command will fail as shown above. In order to mount the new cluster file system, the Oracle ACFS binaries need access to certain shared libraries in the ORACLE_HOME for Grid Infrastructure. An easy workaround to get past this error is to set the ORACLE_HOME environment variable for Grid Infrastructure in the file /sbin/mount.acfs on all Oracle RAC nodes. The ORACLE_HOME should be set at the beginning of the file after the header comments as follows:


#!/bin/sh
#
# Copyright (c) 2001, 2009, Oracle and/or its affiliates. All rights reserved.
#

ORACLE_HOME=/u01/app/11.2.0/grid

ORA_CRS_HOME=%ORA_CRS_HOME%
if [ ! -d $ORA_CRS_HOME ]; then
  ORA_CRS_HOME=$ORACLE_HOME
fi
...

You should now be able to successfully mount the volume:


[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3

Verify Mounted Cluster File System

To verify that the new cluster file system mounted properly, run the following mount command from all Oracle RAC nodes:


[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
/dev/asm/docsvol3-300 on /documents3 type acfs (rw)

[root@racnode2 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
domo:Public on /domo type nfs (rw,addr=192.168.1.121)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
/dev/asm/docsvol3-300 on /documents3 type acfs (rw)

Set Permissions

With the new cluster file system now mounted on all Oracle RAC nodes, change the permissions to allow user access. For the purpose of this example, I want to grant the oracle user account and dba group read/write permissions. Run the following as root from only one node in the Oracle RAC:


[root@racnode1 ~]# chown oracle.dba /documents3
[root@racnode1 ~]# chmod 775 /documents3

Register New Volume

When creating the Oracle ACFS using the ASM Configuration Assistant (ASMCA) and Oracle Enterprise Manager (OEM), I glossed over this notion of registering the new volume in the Oracle ACFS mount registry. When a node configured with Oracle ACFS reboots, the newly created file systems do not remount by default. The Oracle ACFS mount registry acts as a global file system reference, much like the /etc/fstab file does in a UNIX/Linux environment. When mount points are registered in the Oracle ACFS mount registry, Oracle Grid Infrastructure will mount and unmount volumes on startup and shutdown respectively.

Use the /sbin/acfsutil utility on only one of Oracle RAC nodes to register the new mount point in the Oracle ACFS mount registry:


[root@racnode1 ~]# /sbin/acfsutil registry -f -a /dev/asm/docsvol3-300 /documents3
acfsutil registry: mount point /documents3 successfully added to Oracle Registry

Query the Oracle ACFS mount registry from all Oracle RAC nodes to verify the volume and mount point were successfully registered:


[root@racnode1 ~]# /sbin/acfsutil registry
Mount Object:
  Device: /dev/asm/docsvol1-300
  Mount Point: /documents1
  Disk Group: DOCSDG1
  Volume: DOCSVOL1
  Options: none
  Nodes: all
Mount Object:
  Device: /dev/asm/docsvol2-300
  Mount Point: /documents2
  Disk Group: DOCSDG1
  Volume: DOCSVOL2
  Options: none
  Nodes: all
Mount Object:
  Device: /dev/asm/docsvol3-300
  Mount Point: /documents3
  Disk Group: DOCSDG1
  Volume: DOCSVOL3
  Options: none
  Nodes: all

Test

Now let's perform a test to see if all of our hard work paid off.

Node 1

Log in to the first Oracle RAC node as the oracle user account and create a test file on the new cluster file system:


[oracle@racnode1 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode1 ~]$ echo "The Hunter Family: Jeff, Melody, and Alex" > /documents3/test.txt

[oracle@racnode1 ~]$ ls -l /documents3
total 72
drwxr-xr-x 5 root   root      4096 Nov 24 18:44 .ACFS/
drwx------ 2 root   root     65536 Nov 24 18:44 lost+found/
-rw-r--r-- 1 oracle oinstall    42 Nov 24 18:56 test.txt

Node 2

Log in to the second Oracle RAC node as the oracle user account and verify the presence and content of the test file:


[oracle@racnode2 ~]$ id
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)

[oracle@racnode2 ~]$ ls -l /documents3
total 72
drwxr-xr-x 5 root   root      4096 Nov 24 18:44 .ACFS/
drwx------ 2 root   root     65536 Nov 24 18:44 lost+found/
-rw-r--r-- 1 oracle oinstall    42 Nov 24 18:56 test.txt

[oracle@racnode2 ~]$ cat /documents3/test.txt
The Hunter Family: Jeff, Melody, and Alex

ACFS Snapshots

Oracle ASM Cluster File System includes a feature called snapshots. An Oracle ACFS snapshot is an online, read-only, point in time copy of an Oracle ACFS file system. The snapshot process uses Copy-On-Write functionality which makes efficient use of disk space. Note that snapshots work at the block level instead of the file level. Before an Oracle ACFS file extent is modified or deleted, its current value is copied to the snapshot to maintain the point-in-time view of the file system.

 

When a file is modified, only the changed blocks are copied to the snapshot location which helps conserve disk space.

Once an Oracle ACFS snapshot is created, all snapshot files are immediately available for use. Snapshots are always available as long as the file system is mounted. This provides support for online recovery of files inadvertently modified or deleted from a file system. Up to 63 snapshot views are supported for each file system. This provides for a flexible online file recovery solution which can span multiple views. You can also use an Oracle ACFS snapshot as the source of a file system backup, as it can be created on demand to deliver a current, consistent, online view of an active file system. Once the Oracle ACFS snapshot is created, simply back up the snapshot to another disk or tape location to create a consistent backup set of the files.
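For example, backing up a snapshot could be as simple as archiving its directory tree. This is only a sketch, assuming a snapshot named snap1 on /documents3 (created later in this section); the /backups destination directory is hypothetical:

# Archive the read-only, point-in-time view exposed by the snap1 snapshot.
[root@racnode1 ~]# tar czf /backups/documents3_snap1.tar.gz -C /documents3/.ACFS/snaps/snap1 .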

 

Oracle ACFS snapshots can be created and deleted on demand without the need to take the file system offline. ACFS snapshots provide a point-in-time consistent view of the entire file system which can be used to restore deleted or modified files and to perform backups.

All storage for Oracle ACFS snapshots is maintained within the file system, which eliminates the need for separate storage pools for file systems and snapshots. As noted earlier, Oracle ACFS file systems can be dynamically resized to accommodate additional file and snapshot storage requirements.

Oracle ACFS snapshots are administered with the acfsutil snap command. This section will provide an overview on how to create and retrieve Oracle ACFS snapshots.

Oracle ACFS Snapshot Location

Whenever you create an Oracle ACFS file system, a hidden directory is created as a sub-directory to the Oracle ACFS file system named .ACFS. (Note that hidden files and directories in Linux start with leading period.)


[oracle@racnode1 ~]$ ls -lFA /documents3
total 2851148
drwxr-xr-x 5 root   root         4096 Nov 26 17:57 .ACFS/
-rw-r--r-- 1 oracle oinstall 1239269270 Nov 27 16:02 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Nov 27 16:03 linux.x64_11gR2_database_2of2.zip
-rw-r--r-- 1 oracle oinstall  555366950 Nov 27 16:03 linux.x64_11gR2_examples.zip
drwx------ 2 root   root        65536 Nov 26 17:57 lost+found/

Found in the .ACFS are two directories named repl and snaps. All Oracle ACFS snapshots are stored in the snaps directory.


[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS
total 12
drwx------ 2 root root 4096 Nov 26 17:57 .fileid/
drwx------ 6 root root 4096 Nov 26 17:57 repl/
drwxr-xr-x 2 root root 4096 Nov 27 15:53 snaps/

Since no Oracle ACFS snapshots exist, the snaps directory is empty.


[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS/snaps
total 0

Create Oracle ACFS Snapshot

Let's start by creating an Oracle ACFS snapshot named snap1 for the Oracle ACFS mounted on /documents3. This operation should be performed as root or the Oracle Grid Infrastructure owner:


[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents3
acfsutil snap create: Snapshot operation is complete.

The data for the new snap1 snapshot will be stored in /documents3/.ACFS/snaps/snap1. Once the snapshot is created, any existing files and/or directories in the file system are automatically accessible from the snapshot directory. For example, when I created the snap1 snapshot, the three Oracle ZIP files were made available from the snapshot /documents3/.ACFS/snaps/snap1:


[oracle@racnode1 ~]$ ls -lFA /documents3/.ACFS/snaps/snap1
total 2851084
drwxr-xr-x 5 root   root         4096 Nov 26 17:57 .ACFS/
-rw-r--r-- 1 oracle oinstall 1239269270 Nov 27 16:02 linux.x64_11gR2_database_1of2.zip
-rw-r--r-- 1 oracle oinstall 1111416131 Nov 27 16:03 linux.x64_11gR2_database_2of2.zip
-rw-r--r-- 1 oracle oinstall  555366950 Nov 27 16:03 linux.x64_11gR2_examples.zip
?--------- ? ?      ?               ?            ? lost+found

It is important to note that when the snapshot gets created, nothing is actually stored in the snapshot directory, so there is no additional space consumption. The snapshot directory will only contain modified file blocks when a file is updated or deleted.
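A quick way to see this for yourself is to check the space used under the snapshot directory right after creating it (a simple sketch; the number grows as files in /documents3 are subsequently modified or deleted):

# The freshly created snapshot holds essentially no data of its own yet.
[root@racnode1 ~]# du -sh /documents3/.ACFS/snaps/snap1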

Restore Files From an Oracle ACFS Snapshot

When a file is deleted (or modified), this triggers an automatic backup of all modified file blocks to the snapshot. For example, if I delete the file /documents3/linux.x64_11gR2_examples.zip, the previous images of the file blocks are copied to the snap1 snapshot where it can be restored from at a later time if necessary:


[oracle@racnode1 ~]$ rm /documents3/linux.x64_11gR2_examples.zip

If you were looking for functionality in Oracle ACFS to perform a rollback of the current file system to a snapshot, then I have bad news; one doesn't exist. Hopefully this will be a feature introduced in future versions!

In the case where you accidentally deleted a file from the current file system, it can be restored by copying it from the snapshot back to the current file system:


[oracle@racnode1 ~]$ cp /documents3/.ACFS/snaps/snap1/linux.x64_11gR2_examples.zip /documents3
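As a quick sanity check (the command below is just a suggestion and not part of the original walkthrough), you can compare checksums of the restored file and the copy still preserved in the snapshot; the two values should match:

[oracle@racnode1 ~]$ md5sum /documents3/linux.x64_11gR2_examples.zip \
                            /documents3/.ACFS/snaps/snap1/linux.x64_11gR2_examples.zip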

Display Oracle ACFS Snapshot Information

The '/sbin/acfsutil info fs' command can provide file system information as well as limited information on any Oracle ACFS snapshots:


[oracle@racnode1 ~]$ /sbin/acfsutil info fs /documents3
/documents3
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Sat Nov 27 03:07:50 2010
    volumes:      1
    total size:   26843545600
    total free:   23191826432
    primary volume: /dev/asm/docsvol3-300
        label:                 DOCSVOL3
        flags:                 Primary,Available
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153603
        size:                  26843545600
        free:                  23191826432
    number of snapshots:  1
    snapshot space usage: 560463872

From the example above, you can see that I have only one active snapshot and that it is consuming approximately 560MB of disk space. This coincides with the size of the file I removed earlier (/documents3/linux.x64_11gR2_examples.zip), which triggered a backup of all modified file blocks.

To query all snapshots, simply list the directories under '<ACFS_MOUNT_POINT>/.ACFS/snaps'. Each directory under the snaps directory is an Oracle ACFS snapshot.
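For example, with the snap1 snapshot from this guide in place, a simple listing is all that is required (a minimal illustration):

[oracle@racnode1 ~]$ ls /documents3/.ACFS/snaps
snap1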

Another useful technique for obtaining information about Oracle ACFS snapshots is to query the V$ASM_ACFSSNAPSHOTS view from the Oracle ASM instance:


column snap_name   format a15 heading "Snapshot Name"
column fs_name     format a15 heading "File System"
column vol_device  format a25 heading "Volume Device"
column create_time format a20 heading "Create Time"
======================================================================

SQL> select snap_name, fs_name, vol_device,
  2         to_char(create_time, 'DD-MON-YYYY HH24:MI:SS') as create_time
  3  from   v$asm_acfssnapshots
  4  order by snap_name;

Snapshot Name   File System     Volume Device             Create Time
--------------- --------------- ------------------------- --------------------
snap1           /documents3     /dev/asm/docsvol3-300     27-NOV-2010 16:11:29

Delete Oracle ACFS Snapshot

Use the 'acfsutil snap delete' command to delete an existing Oracle ACFS snapshot:


[root@racnode1 ~]# /sbin/acfsutil snap delete snap1 /documents3
acfsutil snap delete: Snapshot operation is complete.

Managing ACFS

Oracle ACFS and Dismount or Shutdown Operations

If you take anything away from this article, know and understand the importance of dismounting any active file system configured with an Oracle ASM Dynamic Volume Manager (ADVM) volume device, BEFORE shutting down an Oracle ASM instance or dismounting a disk group! Failure to do so will result in I/O failures and very angry users!

After the file system(s) have been dismounted, all open references to Oracle ASM files are removed and associated disk groups can then be dismounted or the Oracle ASM instance shut down.
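As a rough sketch of a safe sequence on a node whose Oracle ASM instance is about to be stopped (using the umount syntax shown later in this guide; adjust mount points and your environment to suit):

[root@racnode1 ~]# /bin/umount -t acfs -a
[grid@racnode1 ~]$ sqlplus / as sysasm
SQL> shutdown immediate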

If the Oracle ASM instance or disk group is forcibly shut down or fails while an associated Oracle ACFS is active, the file system is placed into an offline error state. When the file system is placed in an offline error state, applications will start to encounter I/O failures, and any Oracle ACFS user data and metadata being written at the time of the termination may not be flushed to ASM storage before it is fenced. If a SHUTDOWN ABORT of the Oracle ASM instance is required and you are not able to dismount the file system, issue the sync command twice to flush any cached file system data and metadata to persistent storage:


[root@racnode1 ~]# sync
[root@racnode1 ~]# sync

Using a two-node Oracle RAC, I forced an Oracle ASM instance shutdown on node 1 to simulate a failure:

 

This should go without saying, but I'll say it anyway. DO NOT attempt the following on a production environment.


SQL> shutdown abort
ASM instance shutdown

Any subsequent attempt to access an offline file system on that node will result in an I/O error:


[oracle@racnode1 ~]$ ls -l /documents3
ls: /documents3: Input/output error

[oracle@racnode1 ~]$ df -k
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      145344992   22459396  115383364  17% /
/dev/sdb1             151351424     192072  143346948   1% /local
/dev/sda1                101086      12632      83235  14% /boot
tmpfs                   2019256          0    2019256   0% /dev/shm
df: `/documents1': Input/output error
df: `/documents2': Input/output error
df: `/documents3': Input/output error
domo:PUBLIC          4799457152 1901758592 2897698560  40% /domo

Recovering a file system from an offline error state requires dismounting and remounting the Oracle ACFS file system. Dismounting an active file system, even one that is offline, requires stopping all applications using the file system, including any shell references. For example, I had a shell session that previously changed directory (cd) into the /documents3 file system before the forced shutdown:


[root@racnode1 ~]# umount /documents1
[root@racnode1 ~]# umount /documents2
[root@racnode1 ~]# umount /documents3
umount: /documents3: device is busy
umount: /documents3: device is busy

Use the Linux fuser or lsof command to identify the processes holding the file system open and kill them if necessary:


[root@racnode1 ~]# fuser /documents3
/documents3:         16263c

[root@racnode1 ~]# kill -9 16263

[root@racnode1 ~]# umount /documents3
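If you prefer lsof, an equivalent check is simply 'lsof /documents3'. Another common shortcut (use with care, since it kills every process touching the mount point) is to let fuser do the killing itself:

[root@racnode1 ~]# fuser -km /documents3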

Restart the Oracle ASM instance (in my case, all Oracle Grid Infrastructure services had stopped as a result of terminating the Oracle ASM instance, so I restarted the entire cluster stack):


[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster

All of my Oracle ACFS volumes were added to the Oracle ACFS mount registry and are therefore mounted automatically when Oracle Grid Infrastructure starts. If you need to mount a file system manually, verify that the volume is enabled before attempting the mount (a short sketch follows the mount listing below):


[root@racnode1 ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sdb1 on /local type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
domo:PUBLIC on /domo type nfs (rw,addr=192.168.1.121)
/dev/asm/docsvol1-300 on /documents1 type acfs (rw)
/dev/asm/docsvol2-300 on /documents2 type acfs (rw)
/dev/asm/docsvol3-300 on /documents3 type acfs (rw)
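If one of the file systems did not come back automatically, a reasonable manual sequence (sketched here with the volume and mount point used throughout this guide) is to confirm the volume is ENABLED, enable it if necessary, and then mount it:

[grid@racnode1 ~]$ asmcmd volinfo -G docsdg1 docsvol3
[grid@racnode1 ~]$ asmcmd volenable -G docsdg1 docsvol3
[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3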

Resize File System

With Oracle ACFS, as long as free space exists within the ASM disk group, any of the ASM volumes can be dynamically expanded, which expands the file system as a result. If you are running a file system other than Oracle ACFS on an ADVM volume, it too can be dynamically re-sized provided it supports online resizing. The one caveat with third-party file systems is online shrinking; ext3, for example, supports online resizing but does not support online shrinking.

Use the following syntax to add space to an Oracle ACFS on the fly without the need to take any type of outage.

First, verify there is enough space in the current Oracle ASM disk group to extend the volume:


SQL> select name, total_mb, free_mb, round((free_mb/total_mb)*100,2) pct_free
  2  from   v$asm_diskgroup
  3  where  total_mb != 0
  4  order by name;

Disk Group        Total (MB)    Free (MB)  % Free
--------------- ------------ ------------ -------
CRS                    2,205        1,809   82.04
DOCSDG1               98,303       12,187   12.40
FRA                   33,887       22,795   67.27
RACDB_DATA            33,887       30,584   90.25

 

The same task can be accomplished using the ASMCMD command-line utility:

[grid@racnode1 ~]$ asmcmd lsdg

From the 12GB of free space in the DOCSDG1 ASM disk group, let's extend the file system (volume) by another 5GB. Note that this can be performed while the file system is online and accessible by clients — no outage is required:


[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
acfsutil size: new file system size: 26843545600 (25600MB)

Verify the new size of the file system from all Oracle RAC nodes:


[root@racnode1 ~]# df -k
Filesystem             1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       145344992   21952712  115890048  16% /
/dev/sdb1              151351424     192072  143346948   1% /local
/dev/sda1                 101086      12632      83235  14% /boot
tmpfs                    2019256    1135852     883404  57% /dev/shm
domo:PUBLIC           4799457152 1901103872 2898353280  40% /domo
/dev/asm/docsvol1-300   33554432     197668   33356764   1% /documents1
/dev/asm/docsvol2-300   33554432     197668   33356764   1% /documents2
/dev/asm/docsvol3-300   26214400     183108   26031292   1% /documents3

[root@racnode2 ~]# df -k
Filesystem             1K-blocks       Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       145344992   13803084  124039676  11% /
/dev/sdb1              151351424     192072  143346948   1% /local
/dev/sda1                 101086      12632      83235  14% /boot
tmpfs                    2019256    1135852     883404  57% /dev/shm
domo:Public           4799457152 1901103872 2898353280  40% /domo
/dev/asm/docsvol1-300   33554432     197668   33356764   1% /documents1
/dev/asm/docsvol2-300   33554432     197668   33356764   1% /documents2
/dev/asm/docsvol3-300   26214400     183108   26031292   1% /documents3

Useful ACFS Commands

This section contains several useful commands that can be used to administer Oracle ACFS. Note that many of the commands described in this section have already been discussed throughout this guide.

ASM Volume Driver

Load the Oracle ASM volume driver:


[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload start -s

Unload the Oracle ASM volume driver:


[root@racnode1 ~]# /u01/app/11.2.0/grid/bin/acfsload stop

Check if Oracle ASM volume driver is loaded:


[root@racnode1 ~]# lsmod | grep oracle
oracleacfs            877320  4
oracleadvm            221760  8
oracleoks             276880  2 oracleacfs,oracleadvm
oracleasm              84136  1

ASM Volume Management

Create new Oracle ASM volume using ASMCMD:


[grid@racnode1 ~]$ asmcmd volcreate -G docsdg1 -s 20G --redundancy unprotected docsvol3

Resize Oracle ACFS file system (add 5GB):


[root@racnode1 ~]# /sbin/acfsutil size +5G /documents3
acfsutil size: new file system size: 26843545600 (25600MB)

Delete Oracle ASM volume using ASMCMD:


[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol3

Disk Group / File System / Volume Information

Get detailed Oracle ASM disk group information:


[grid@racnode1 ~]$ asmcmd lsdg

Format an Oracle ASM cluster file system:


[grid@racnode1 ~]$ /sbin/mkfs -t acfs -b 4k /dev/asm/docsvol3-300 -n "DOCSVOL3"
mkfs.acfs: version                   = 11.2.0.1.0.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/docsvol3-300
mkfs.acfs: volume size               = 21474836480
mkfs.acfs: Format complete.

Get detailed file system information:


[root@racnode1 ~]# /sbin/acfsutil info fs
/documents1
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   34359738368
    total free:   34157326336
    primary volume: /dev/asm/docsvol1-300
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153601
        size:                  34359738368
        free:                  34157326336
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
/documents2
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   34359738368
    total free:   34157326336
    primary volume: /dev/asm/docsvol2-300
        label:
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153602
        size:                  34359738368
        free:                  34157326336
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0
/documents3
    ACFS Version: 11.2.0.1.0.0
    flags:        MountPoint,Available
    mount time:   Fri Nov 26 18:38:48 2010
    volumes:      1
    total size:   26843545600
    total free:   26656043008
    primary volume: /dev/asm/docsvol3-300
        label:                 DOCSVOL3
        flags:                 Primary,Available,ADVM
        on-disk version:       39.0
        allocation unit:       4096
        major, minor:          252, 153603
        size:                  26843545600
        free:                  26656043008
        ADVM diskgroup         DOCSDG1
        ADVM resize increment: 268435456
        ADVM redundancy:       unprotected
        ADVM stripe columns:   4
        ADVM stripe width:     131072
    number of snapshots:  0
    snapshot space usage: 0

Get ASM volume information:


[grid@racnode1 ~]$ asmcmd volinfo -a
Diskgroup Name: DOCSDG1

         Volume Name: DOCSVOL1
         Volume Device: /dev/asm/docsvol1-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents1

         Volume Name: DOCSVOL2
         Volume Device: /dev/asm/docsvol2-300
         State: ENABLED
         Size (MB): 32768
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents2

         Volume Name: DOCSVOL3
         Volume Device: /dev/asm/docsvol3-300
         State: ENABLED
         Size (MB): 25600
         Resize Unit (MB): 256
         Redundancy: UNPROT
         Stripe Columns: 4
         Stripe Width (K): 128
         Usage: ACFS
         Mountpath: /documents3

Get volume status using ASMCMD command:


[grid@racnode1 ~]$ asmcmd volstat

DISKGROUP NUMBER / NAME: 2 / DOCSDG1
---------------------------------------
VOLUME_NAME   READS  BYTES_READ  READ_TIME  READ_ERRS  WRITES  BYTES_WRITTEN  WRITE_TIME  WRITE_ERRS
-------------------------------------------------------------
DOCSVOL1        517      408576       1618          0   17007       69280768       63456           0
DOCSVOL2        512      406016       2547          0   17007       69280768       66147           0
DOCSVOL3      13961    54525952     172007          0   10956       54410240       41749           0

Enable a volume using the ASMCMD command:


[grid@racnode1 ~]$ asmcmd volenable -G docsdg1 docsvol3

Disable a volume using the ASMCMD command:


[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3

[grid@racnode1 ~]$ asmcmd voldisable -G docsdg1 docsvol3

Mount Commands

Mount single Oracle ACFS volume on the local node:


[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3

Unmount single Oracle ACFS volume on the local node:


[root@racnode1 ~]# umount /documents3

Mount all Oracle ACFS volumes on the local node using the metadata found in the Oracle ACFS mount registry:


[root@racnode1 ~]# /sbin/mount.acfs -o all

Unmount all Oracle ACFS volumes on the local node using the metadata found in the Oracle ACFS mount registry:


[root@racnode1 ~]# /bin/umount -t acfs -a

Oracle ACFS Mount Registry

Register new mount point in the Oracle ACFS mount registry:


[root@racnode1 ~]# /sbin/acfsutil registry -f -a /dev/asm/docsvol3-300 /documents3
acfsutil registry: mount point /documents3 successfully added to Oracle Registry

Query the Oracle ACFS mount registry:


[root@racnode1 ~]# /sbin/acfsutil registry
Mount Object:
  Device: /dev/asm/docsvol1-300
  Mount Point: /documents1
  Disk Group: DOCSDG1
  Volume: DOCSVOL1
  Options: none
  Nodes: all
Mount Object:
  Device: /dev/asm/docsvol2-300
  Mount Point: /documents2
  Disk Group: DOCSDG1
  Volume: DOCSVOL2
  Options: none
  Nodes: all
Mount Object:
  Device: /dev/asm/docsvol3-300
  Mount Point: /documents3
  Disk Group: DOCSDG1
  Volume: DOCSVOL3
  Options: none
  Nodes: all

Unregister volume and mount point from the Oracle ACFS mount registry:


[root@racnode1 ~]# acfsutil registry -d /documents3
acfsutil registry: successfully removed ACFS mount point /documents3 from Oracle Registry

Oracle ACFS Snapshots

Use the 'acfsutil snap create' command to create an Oracle ACFS snapshot named snap1 for an Oracle ACFS mounted on /documents3:


[root@racnode1 ~]# /sbin/acfsutil snap create snap1 /documents3
acfsutil snap create: Snapshot operation is complete.

Use the 'acfsutil snap delete' command to delete an existing Oracle ACFS snapshot:


[root@racnode1 ~]# /sbin/acfsutil snap delete snap1 /documents3
acfsutil snap delete: Snapshot operation is complete.

Oracle ASM / ACFS Dynamic Views

This section contains information about using dynamic views to display Oracle Automatic Storage Management (Oracle ASM), Oracle Automatic Storage Management Cluster File System (Oracle ACFS), and Oracle ASM Dynamic Volume Manager (Oracle ADVM) information. These views are accessible from the Oracle ASM instance.

Oracle Automatic Storage Management (Oracle ASM)
View Name Description
V$ASM_ALIAS Contains one row for every alias present in every disk group mounted by the Oracle ASM instance.
V$ASM_ATTRIBUTE Displays one row for each attribute defined. In addition to attributes specified by CREATE DISKGROUP and ALTER DISKGROUP statements, the view may show other attributes that are created automatically. Attributes are only displayed for disk groups where COMPATIBLE.ASM is set to 11.1 or higher.
V$ASM_CLIENT In an Oracle ASM instance, identifies databases using disk groups managed by the Oracle ASM instance.

In a DB instance, contains information about the Oracle ASM instance if the database has any open Oracle ASM files.
V$ASM_DISK Contains one row for every disk discovered by the Oracle ASM instance, including disks that are not part of any disk group.

This view performs disk discovery every time it is queried.
V$ASM_DISK_IOSTAT Displays information about disk I/O statistics for each Oracle ASM client.

In a DB instance, only the rows for that instance are shown.
V$ASM_DISK_STAT Contains the same columns as V$ASM_DISK, but to reduce overhead, does not perform a discovery when it is queried. It only returns information about any disks that are part of mounted disk groups in the storage system. To see all disks, use V$ASM_DISK instead.
V$ASM_DISKGROUP Describes a disk group (number, name, size related info, state, and redundancy type).

This view performs disk discovery every time it is queried.
V$ASM_DISKGROUP_STAT Contains the same columns as V$ASM_DISKGROUP, but to reduce overhead, does not perform a discovery when it is queried. It only returns information about mounted disk groups in the storage system. To see all disk groups, use V$ASM_DISKGROUP instead.
V$ASM_FILE Contains one row for every Oracle ASM file in every disk group mounted by the Oracle ASM instance.
V$ASM_OPERATION In an Oracle ASM instance, contains one row for every active Oracle ASM long running operation executing in the Oracle ASM instance.

In a DB instance, contains no rows.
V$ASM_TEMPLATE Contains one row for every template present in every disk group mounted by the Oracle ASM instance.
V$ASM_USER Contains the effective operating system user names of connected database instances and names of file owners.
V$ASM_USERGROUP Contains the creator for each Oracle ASM File Access Control group.
V$ASM_USERGROUP_MEMBER Contains the members for each Oracle ASM File Access Control group.

Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
View Name Description
V$ASM_ACFSSNAPSHOTS Contains snapshot information for every mounted Oracle ACFS file system.
V$ASM_ACFSVOLUMES Contains information about mounted Oracle ACFS volumes, correlated with V$ASM_FILESYSTEM.
V$ASM_FILESYSTEM Contains columns that display information for every mounted Oracle ACFS file system.
V$ASM_VOLUME Contains information about each Oracle ADVM volume that is a member of an Oracle ASM instance.
V$ASM_VOLUME_STAT Contains information about statistics for each Oracle ADVM volume.
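For example, the following query run from the Oracle ASM instance summarizes each Oracle ADVM volume and where it is mounted (a simple illustration; the column names are taken from the 11.2 documentation for V$ASM_VOLUME, so verify them against your release):

SQL> select volume_name, volume_device, size_mb, state, usage, mountpath
  2  from   v$asm_volume
  3  order by volume_name;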

Use fsck to Check and Repair the Cluster File System

Use the regular Linux fsck command to check and repair the Oracle ACFS. This only needs to be performed from one of the Oracle RAC nodes:


[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300
fsck 1.39 (29-May-2006)
fsck.acfs: version = 11.2.0.1.0.0
fsck.acfs: ACFS-00511: /dev/asm/docsvol3-300 is mounted on at least one node of the cluster.
fsck.acfs: ACFS-07656: Unable to continue

The fsck operation cannot be performed while the file system is online. Unmount the cluster file system from all Oracle RAC nodes:


[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3

Now check the cluster file system with the file system unmounted:


[root@racnode1 ~]# /sbin/fsck -t acfs /dev/asm/docsvol3-300
fsck 1.39 (29-May-2006)
fsck.acfs: version = 11.2.0.1.0.0
Oracle ASM Cluster File System (ACFS) On-Disk Structure Version: 39.0
*****************************
********** Pass 1: **********
*****************************
The ACFS volume was created at Fri Nov 26 17:20:27 2010
Checking primary file system...
Files checked in primary file system: 100%
Checking if any files are orphaned...
0 orphans found
fsck.acfs: Checker completed with no errors.

Remount the cluster file system on all Oracle RAC nodes:


[root@racnode1 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3
[root@racnode2 ~]# /bin/mount -t acfs /dev/asm/docsvol3-300 /documents3

Drop ACFS / ASM Volume

Unmount the cluster file system from all Oracle RAC nodes:


[root@racnode1 ~]# umount /documents3
[root@racnode2 ~]# umount /documents3

Log in to the ASM instance and drop the ASM dynamic volume from one of the Oracle RAC nodes:


[grid@racnode1 ~]$ sqlplus / as sysasm

SQL> ALTER DISKGROUP docsdg1 DROP VOLUME docsvol3;

Diskgroup altered.

 

The same task can be accomplished using the ASMCMD command-line utility:

[grid@racnode1 ~]$ asmcmd voldelete -G docsdg1 docsvol3

Unregister the volume and mount point from the Oracle ACFS mount registry from one of the Oracle RAC nodes:


[root@racnode1 ~]# acfsutil registry -d /documents3
acfsutil registry: successfully removed ACFS mount point /documents3 from Oracle Registry

Finally, remove the mount point directory from all Oracle RAC nodes (if necessary):


[root@racnode1 ~]# rmdir /documents3
[root@racnode2 ~]# rmdir /documents3

About the Author

Jeffrey Hunter is an Oracle Certified Professional, Java Development Certified Professional, Author, and an Oracle ACE. Jeff currently works as a Senior Database Administrator for The DBA Zone, Inc. located in Pittsburgh, Pennsylvania. His work includes advanced performance tuning, Java and PL/SQL programming, developing high availability solutions, capacity planning, database security, and physical / logical database design in a UNIX / Linux server environment. Jeff's other interests include mathematical encryption theory, tutoring advanced mathematics, programming language processors (compilers and interpreters) in Java and C, LDAP, writing web-based database administration tools, and of course Linux. He has been a Sr. Database Administrator and Software Engineer for over 20 years and maintains his own web site at: http://www.iDevelopment.info. Jeff graduated from Stanislaus State University in Turlock, California, with a Bachelor's degree in Computer Science and Mathematics.



Copyright (c) 1998-2017 Jeffrey M. Hunter. All rights reserved.

All articles, scripts and material located at the Internet address of http://www.idevelopment.info are the copyright of Jeffrey M. Hunter and are protected under copyright laws of the United States. This document may not be hosted on any other site without my express, prior, written permission. Application to host any of the material elsewhere can be made by contacting me at jhunter@idevelopment.info.

I have made every effort and taken great care in making sure that the material included on my web site is technically accurate, but I disclaim any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on it. I will in no case be liable for any monetary damages arising from such loss, damage or destruction.

Last modified on
Monday, 30-Apr-2012 19:54:14 EDT