Monday, September 19, 2016

RAC DB Background Processes

DIAG: Diagnosability Daemon – Monitors the health of the instance and captures the data for instance process failures.
LCKx - This process manages the global enqueue requests and the cross-instance broadcast. Workload is automatically shared and balanced when there are multiple Global Cache Service Processes (LMSx).
LMON - The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage the global enqueues and the resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as cluster group services (CGS).

LMDx - The Global Enqueue Service Daemon (LMD) is the lock agent process that manages enqueue manager service requests for Global Cache Service enqueues to control access to global enqueues and resources. The LMD process also handles deadlock detection and remote enqueue requests. Remote resource requests are the requests originating from another instance.

LMSx - The Global Cache Service Processes (LMSx) handle Global Cache Service (GCS) messages. Real Application Clusters software provides for up to 10 Global Cache Service Processes. The number of LMSx varies depending on the amount of messaging traffic among nodes in the cluster. The LMSx handles the acquisition interrupt and blocking interrupt requests from the remote instances for Global Cache Service resources. For cross-instance consistent read requests, the LMSx will create a consistent read version of the block and send it to the requesting instance. The LMSx also controls the flow of messages to remote instances. The LMSn processes handle the blocking interrupts from the remote instance for the Global Cache Service resources by:
- Managing the resource requests and cross-instance call operations for the shared resources
- Building a list of invalid lock elements and validating the lock elements during recovery
- Handling the global lock deadlock detection and monitoring for the lock conversion timeouts
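
A quick way to see which of these background processes are actually running on a given instance is to query V$BGPROCESS (a minimal check from SQL*Plus; the PADDR filter simply excludes processes that have not been started):

SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00' ORDER BY name;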

How to enable Write-Back Flash Cache in EXADATA?

Two methods are available:
1. Rolling Method - Assuming that RDBMS & ASM instances are UP and enabling Write-Back Flash Cache in One Cell Server at a time
2. Non-Rolling Method - Assuming that RDBMS & ASM instances are DOWN while enabling Write-Back Flash Cache

Note: Before performing the steps below, perform the following checks as root from one of the compute nodes:

Check all griddisk “asmdeactivationoutcome” and “asmmodestatus” to ensure that all griddisks on all cells are “Yes” and “ONLINE” respectively.

# dcli -g cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus

Check that all of the flash cache is in the "normal" state and that no flash disks are in a degraded or critical state:

# dcli -g cell_group -l root cellcli -e list flashcache detail

Also confirm the current flash cache mode on each cell; it should still show WriteThrough before the change:

# dcli -g cell_group -l root cellcli -e "list cell attributes flashcachemode"
exadata01cell01: WriteThrough
exadata01cell02: WriteThrough
exadata01cell03: WriteThrough

1.     Rolling Method:

(Assuming that RDBMS & ASM instances are UP and enabling Write-Back Flash Cache in One Cell Server at a time)

Login to Cell Server:

Step 1. Drop the flash cache on that cell

# cellcli -e drop flashcache
Flash cache exadata01cell01_FLASHCACHE successfully dropped

Step 2. Check the ASM status of the grid disks. The following command should return 'Yes' (asmdeactivationoutcome) for all grid disks listed; once the grid disks are inactivated in the next step, asmmodestatus will show OFFLINE, as in the sample output below:

# cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
         DATAC1_CD_00_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_01_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_02_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_03_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_04_exadata01cell01   OFFLINE  Yes
         DATAC1_CD_05_exadata01cell01   OFFLINE  Yes
         DBFS_DG_CD_02_exadata01cell01  OFFLINE  Yes
         DBFS_DG_CD_03_exadata01cell01  OFFLINE  Yes
         DBFS_DG_CD_04_exadata01cell01  OFFLINE  Yes
         DBFS_DG_CD_05_exadata01cell01  OFFLINE  Yes
         RECOC1_CD_00_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_01_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_02_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_03_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_04_exadata01cell01   OFFLINE  Yes
         RECOC1_CD_05_exadata01cell01   OFFLINE  Yes

Step 3. Inactivate the grid disks on the cell
# cellcli -e alter griddisk all inactive

Step 4. Shut down cellsrv service
# cellcli -e alter cell shutdown services cellsrv 

Stopping CELLSRV services...
The SHUTDOWN of CELLSRV services was successful.

Step 5. Set the cell flashcache mode to writeback 
# cellcli -e "alter cell flashCacheMode=writeback"

Cell exadata01cell01 successfully altered

Step 6. Restart the cellsrv service 
# cellcli -e alter cell startup services cellsrv 
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
  
Step 7. Reactivate the griddisks on the cell
# cellcli -e alter griddisk all active
GridDisk DATAC1_CD_00_exadata01cell03 successfully altered
GridDisk DATAC1_CD_01_exadata01cell03 successfully altered
GridDisk DATAC1_CD_02_exadata01cell03 successfully altered
GridDisk DATAC1_CD_03_exadata01cell03 successfully altered
GridDisk DATAC1_CD_04_exadata01cell03 successfully altered
GridDisk DATAC1_CD_05_exadata01cell03 successfully altered
GridDisk DBFS_DG_CD_02_exadata01cell03 successfully altered
GridDisk DBFS_DG_CD_03_exadata01cell03 successfully altered
GridDisk DBFS_DG_CD_04_exadata01cell03 successfully altered
GridDisk DBFS_DG_CD_05_exadata01cell03 successfully altered
GridDisk RECOC1_CD_00_exadata01cell03 successfully altered
GridDisk RECOC1_CD_01_exadata01cell03 successfully altered
GridDisk RECOC1_CD_02_exadata01cell03 successfully altered
GridDisk RECOC1_CD_03_exadata01cell03 successfully altered
GridDisk RECOC1_CD_04_exadata01cell03 successfully altered
GridDisk RECOC1_CD_05_exadata01cell03 successfully altered

Step 8. Verify that all grid disks have been successfully put online. Each grid disk moves from SYNCING to ONLINE; wait until asmmodestatus shows ONLINE for every grid disk before continuing:
# cellcli -e list griddisk attributes name, asmmodestatus, asmdeactivationoutcome

         DATAC1_CD_00_exadata01cell02   ONLINE         Yes
         DATAC1_CD_01_exadata01cell02   ONLINE         Yes
         DATAC1_CD_02_exadata01cell02   ONLINE         Yes
         DATAC1_CD_03_exadata01cell02   ONLINE         Yes
         DATAC1_CD_04_exadata01cell02   ONLINE         Yes
         DATAC1_CD_05_exadata01cell02   ONLINE         Yes
         DBFS_DG_CD_02_exadata01cell02  ONLINE         Yes
         DBFS_DG_CD_03_exadata01cell02  ONLINE         Yes
         DBFS_DG_CD_04_exadata01cell02  ONLINE         Yes
         DBFS_DG_CD_05_exadata01cell02  ONLINE         Yes
         RECOC1_CD_00_exadata01cell02   ONLINE         Yes
         RECOC1_CD_01_exadata01cell02   ONLINE         Yes
         RECOC1_CD_02_exadata01cell02   ONLINE         Yes
         RECOC1_CD_03_exadata01cell02   ONLINE         Yes
         RECOC1_CD_04_exadata01cell02   ONLINE         Yes
         RECOC1_CD_05_exadata01cell02   ONLINE         Yes

Step 9. Recreate the flash cache 
# cellcli -e create flashcache all 

Flash cache exadata01cell01_FLASHCACHE successfully created

If the flash disk is used for flash cache, then the effective cache size increases. If the flash disk is used for grid disks, then the grid disks are re-created on the new flash disk. If those grid disks were part of an Oracle ASM disk group, then they are added back to the disk group and the data is rebalanced onto them, based on the disk group redundancy and the ASM_POWER_LIMIT parameter.

Step 10. Check the status of the cell to confirm that it's now in WriteBack mode:

# cellcli -e list cell detail | grep flashCacheMode 
flashCacheMode:         WriteBack                            


Step 11. Repeat the same steps on each remaining cell, one at a time, through the final cell. Before taking the next storage server offline, run the following and confirm that 'asmdeactivationoutcome' displays Yes for all grid disks:

# cellcli -e list griddisk attributes name,asmmodestatus, asmdeactivationoutcome
         DATAC1_CD_00_exadata01cell01   ONLINE  Yes
         DATAC1_CD_01_exadata01cell01   ONLINE  Yes
         DATAC1_CD_02_exadata01cell01   ONLINE  Yes
         DATAC1_CD_03_exadata01cell01   ONLINE  Yes
         DATAC1_CD_04_exadata01cell01   ONLINE  Yes
         DATAC1_CD_05_exadata01cell01   ONLINE  Yes
         DBFS_DG_CD_02_exadata01cell01  ONLINE  Yes
         DBFS_DG_CD_03_exadata01cell01  ONLINE  Yes
         DBFS_DG_CD_04_exadata01cell01  ONLINE  Yes
         DBFS_DG_CD_05_exadata01cell01  ONLINE  Yes
         RECOC1_CD_00_exadata01cell01   ONLINE  Yes
         RECOC1_CD_01_exadata01cell01   ONLINE  Yes
         RECOC1_CD_02_exadata01cell01   ONLINE  Yes
         RECOC1_CD_03_exadata01cell01   ONLINE  Yes
         RECOC1_CD_04_exadata01cell01   ONLINE  Yes
         RECOC1_CD_05_exadata01cell01   ONLINE  Yes

After changing the flash cache mode on all cells, confirm that it has changed to WriteBack everywhere:
# dcli -g ~/cell_group -l root cellcli -e "list cell attributes flashcachemode"
exadata01cell01: WriteBack
exadata01cell02: WriteBack
exadata01cell03: WriteBack
  
2.     Non-Rolling Method:

(Assuming that RDBMS & ASM instances are DOWN while enabling Write-Back Flash Cache)

Step 1. Drop the flash cache on that cell
# cellcli -e drop flashcache 

Step 2. Shut down cellsrv service

# cellcli -e alter cell shutdown services cellsrv 

Step 3. Set the cell flashcache mode to writeback 

# cellcli -e "alter cell flashCacheMode=writeback" 

Step 4. Restart the cellsrv service 

# cellcli -e alter cell startup services cellsrv 

Step 5. Recreate the flash cache 

# cellcli -e create flashcache all


Write-Back Flash Cache Not Required for DiskGroup:

Note: We can disable Write-Back Flash Cache for disk groups, such as RECO, that do not require this feature. This can save space in the flash cache.
The grid disk CACHINGPOLICY attribute is used to change the flash cache policy of the grid disk.

Before changing the cache policy from default to none, ensure there is no cached data in flash cache for the grid disk:

CellCLI> create griddisk all harddisk prefix=RECO, size=1006, cachingPolicy="none"

OR

CELLCLI>ALTER GRIDDISK grid_disk_name FLUSH;
CELLCLI>ALTER GRIDDISK grid_disk_name CACHINGPOLICY="none";
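
To confirm that the new policy has taken effect, the grid disk cachingPolicy attribute can be listed (an illustrative check; the RECO prefix matches the example above):

CELLCLI> LIST GRIDDISK ATTRIBUTES name, cachingPolicy WHERE name LIKE 'RECO.*'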


Flushing the data from Flash Cache to Disk – Manual Method:

Data in the flash cache that has not yet been synchronized to the grid disk can be synchronized (destaged) using the FLUSH option.
CELLCLI>ALTER GRIDDISK grid_disk_name FLUSH

Use the following command to check the progress of this activity:

CELLCLI>LIST GRIDDISK ATTRIBUTES name, flushstatus, flusherr


Reinstating WriteThrough FlashCache:

1.   To reinstate Writethrough caching, FlashCache must first be flushed
2.   FlashCache must then be dropped and cellsrv stopped.

Step 1. CELLCLI> alter flashcache all flush
Step 2. CELLCLI> drop flashcache
Step 3. CELLCLI> alter cell shutdown services cellsrv
Step 4. CELLCLI> alter cell flashCacheMode = WriteThrough
Step 5. CELLCLI> alter cell startup services cellsrv
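
After cellsrv comes back up, the flash cache still has to be re-created, and the mode can then be verified with the same commands used earlier in this post (a brief follow-up sketch):

CELLCLI> create flashcache all
CELLCLI> list cell attributes flashcachemode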

Monitoring Flash Cache Usage:

CELLCLI> list metricdefinition attributes name, description where name like '.*_DIRTY'




CD_BY_FC_DIRTY    - Number of unflushed bytes cached in FLASHCACHE on a cell disk
FC_BY_DIRTY       - Number of unflushed bytes in FlashCache
FC_BY_STALE_DIRTY - Number of unflushed bytes in FlashCache that cannot be flushed because the cached disks are not accessible
GD_BY_FC_DIRTY    - Number of unflushed bytes cached in FLASHCACHE for a grid disk
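
The current values of these metrics can be viewed through the METRICCURRENT object (an illustrative check; the attribute names follow the CellCLI conventions above):

CELLCLI> list metriccurrent attributes name, metricObjectName, metricValue where name like '.*_DIRTY'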

Sunday, September 18, 2016

Data/query Processing in EXADATA


EXADATA - Importance of Cellinit.ora and Cellip.ora files

Cellinit.ora and Cellip.ora

After Oracle Exadata Storage Server is configured, the database server host must be configured with the cellinit.ora and the cellip.ora files to use the cell. 

The files are located in the /etc/oracle/cell/network-config directory of the database server host. These configuration files contain IP addresses, not host names.

cellinit.ora - This file contains the database server (compute node) IP addresses.
cellip.ora - This file contains the storage cell IP addresses.

Example:

A Quarter Rack Exadata machine contains 2 compute (DB) nodes and 3 cell (storage) servers. The example below shows the IP configuration.

192.168.50.23 and 192.168.50.24 belong to the compute nodes.
192.168.51.27, 192.168.51.28 and 192.168.51.29 belong to the cell servers.
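
For illustration, with the addresses above the two files would look roughly as follows (a sketch only; the /24 network mask is an assumption and depends on the actual storage network configuration):

/etc/oracle/cell/network-config/cellinit.ora on compute node 1:
ipaddress1=192.168.50.23/24

/etc/oracle/cell/network-config/cellip.ora (identical on every compute node):
cell="192.168.51.27"
cell="192.168.51.28"
cell="192.168.51.29"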

How to change system account password in EXADATA?

Run the commands below from the first node of the respective servers, as the 'root' user, so that the change takes effect on all nodes of the corresponding servers.

For “root” user:
=================
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "echo <random pwd>| passwd root --stdin"

Verify the root account password change across all nodes using the below command:

dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "chage -l root"

For “oracle” user:
===================
dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "echo <random pwd>| passwd oracle --stdin"

Verify the oracle account password change across all nodes using the below command:

dcli -g /opt/oracle.SupportTools/onecommand/dbs_group -l root "chage -l oracle"

Note: The <random pwd> string will get substituted with the actual password that will be set.


EXADATA Flash Disk Replacement - No downtime for databases.

The technical steps below are used to replace a flash disk in an Exadata storage server. This activity requires a restart of the cell server, but there is no downtime for the databases.


Log on to the cell server as root:

# cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
The above command must show 'Yes' (asmdeactivationoutcome) for all grid disks.

# cellcli -e alter griddisk all inactive
# cellcli -e list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
The above command should now show 'OFFLINE' (asmmodestatus) for all grid disks.


#cellcli -e list griddisk
The above command should show 'inactive' for all grid disks.


Power off the cell server:
# shutdown -h now
The above command shuts down the cell server.


REPLACE FLASH DISK - Hardware replacement
Oracle Engineer slides the cell out from the rack
Oracle engineer removes the Flash card and inserts the new one


The cell server is powered back on.
At the ILOM prompt, run the command below to start the cell server:
start /SYS


Disk sync is carried out


# cellcli -e alter griddisk all active
# cellcli -e list griddisk
The above command should show 'active' for all grid disks.


# cellcli -e list griddisk attributes name, asmmodestatus
The above command shows each grid disk go from OFFLINE to SYNCING and finally to ONLINE.


Full resilience is restored and there is no interruption to service during the replacement.
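
Optionally, the health of the new flash modules can be confirmed from the cell (an illustrative CellCLI check; all flash disks should report a status of 'normal'):

# cellcli -e list physicaldisk where diskType=FlashDisk attributes name, status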

HBA Card Replacement in EXADATA CELL Server

The replacement process is as follows. The cell server is taken out of service for the swap, but the databases remain available throughout:

1 – Power off the cell server
2 – Oracle Engineer slides the cell out from the rack
3 – Oracle engineer removes the HBA card and inserts the new one
4 – Cell server is powered back on
5 – Disk sync is carried out
6 – Full resilience is restored


There is no interruption to service during the replacement

Oracle 12c - Online Datafile Movement and Actions Possible During an Online Datafile Move Operation

Online Datafile movement
The text description of the syntax is shown below.

ALTER DATABASE MOVE DATAFILE ( 'filename' | 'ASM_filename' | file_number )
 [ TO ( 'filename' | 'ASM_filename' ) ]
 [ REUSE ] [ KEEP ]
The source file can be specified using the file number or name, while the destination file must be specified by the file name. The REUSE keyword indicates the new file should be created even if it already exists. The KEEP keyword indicates the original copy of the datafile should be retained.

The file number can be queried from the V$DATAFILE and DBA_DATA_FILES views.

SQL> CONN / AS SYSDBA

SQL> SET LINESIZE 100
SQL> COLUMN name FORMAT A70
SQL> SELECT file#, name FROM v$datafile WHERE con_id = 1 ORDER BY file#;

     FILE# NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf
  3 /u01/app/oracle/oradata/cdb1/sysaux01.dbf
  4 /u01/app/oracle/oradata/cdb1/undotbs01.dbf
  6 /u01/app/oracle/oradata/cdb1/users01.dbf

SQL> COLUMN file_name FORMAT A70
SQL> SELECT file_id, file_name FROM dba_data_files ORDER BY file_id;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf
  3 /u01/app/oracle/oradata/cdb1/sysaux01.dbf
  4 /u01/app/oracle/oradata/cdb1/undotbs01.dbf
  6 /u01/app/oracle/oradata/cdb1/users01.dbf


Examples
The following example shows a basic file move, specifying both source and destination by name. Notice the original file is no longer present.


SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/system01.dbf' TO '/tmp/system01.dbf';

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 1;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /tmp/system01.dbf

SQL> HOST ls -al /u01/app/oracle/oradata/cdb1/system01.dbf
ls: cannot access /u01/app/oracle/oradata/cdb1/system01.dbf: No such file or directory

SQL> HOST ls -al /tmp/system01.dbf
-rw-r-----. 1 oracle oinstall 838868992 Oct  8 22:48 /tmp/system01.dbf

The next example uses the file number for the source file and keeps the original file.

SQL> ALTER DATABASE MOVE DATAFILE 1 TO '/u01/app/oracle/oradata/cdb1/system01.dbf' KEEP;

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 1;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf

SQL> HOST ls -al /u01/app/oracle/oradata/cdb1/system01.dbf
-rw-r-----. 1 oracle oinstall 838868992 Oct  8 22:48 /u01/app/oracle/oradata/cdb1/system01.dbf

SQL> HOST ls -al /tmp/system01.dbf
-rw-r-----. 1 oracle oinstall 838868992 Oct  8 22:49 /tmp/system01.dbf

The next example shows the use of Oracle Managed Files (OMF).

SQL> ALTER SYSTEM SET db_create_file_dest='/u01/app/oracle/oradata/cdb1';

System altered.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/system01.dbf';

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 1;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/CDB1/datafile/o1_mf_system_958zo3ll_.dbf

The final example attempts to use the KEEP option, where the source file is an OMF file. Notice how the KEEP option is ignored.

SQL> ALTER DATABASE MOVE DATAFILE 1 TO '/u01/app/oracle/oradata/cdb1/system01.dbf' KEEP;

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 1;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf

SQL> host ls -al /u01/app/oracle/oradata/cdb1/CDB1/datafile/o1_mf_system_958zo3ll_.dbf
ls: cannot access /u01/app/oracle/oradata/cdb1/CDB1/datafile/o1_mf_system_958zo3ll_.dbf: No such file or directory

Pluggable Database (PDB)

The container database (CDB) cannot move files that belong to a pluggable database. The following query displays all the datafiles for the CDB and the PDBs.
SQL> SELECT file#, name FROM v$datafile ORDER BY file#;

     FILE# NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf
  3 /u01/app/oracle/oradata/cdb1/sysaux01.dbf
  4 /u01/app/oracle/oradata/cdb1/undotbs01.dbf
  5 /u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf
  6 /u01/app/oracle/oradata/cdb1/users01.dbf
  7 /u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
  8 /u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
  9 /u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
 10 /u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf
 29 /u01/app/oracle/oradata/pdb2/system01.dbf
 30 /u01/app/oracle/oradata/pdb2/sysaux01.dbf
 31 /u01/app/oracle/oradata/pdb2/pdb2_users01.dbf

If we try to move a datafile belonging to a PDB an error is returned.
SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/pdb2/system01.dbf' TO '/tmp/system01.dbf' REUSE;
ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/pdb2/system01.dbf' TO '/tmp/system01.dbf' REUSE
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "29"

If we switch to the PDB container, the datafile can be moved as normal.
SQL> ALTER SESSION SET container=pdb2;

Session altered.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/pdb2/system01.dbf' TO '/tmp/system01.dbf' REUSE;

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 29;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
 29 /tmp/system01.dbf

SQL> ALTER DATABASE MOVE DATAFILE 29 TO '/u01/app/oracle/oradata/pdb2/system01.dbf' REUSE;

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 29;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
 29 /u01/app/oracle/oradata/pdb2/system01.dbf


SQL> ALTER SESSION SET container=CDB$ROOT;

Session altered.

Tempfiles

Not surprisingly, the ALTER DATABASE MOVE DATAFILE syntax does not work for temporary files.


SQL> SELECT file_id, file_name FROM dba_temp_files;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/temp01.dbf


SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/temp01.dbf' TO '/tmp/temp01.dbf' REUSE;
ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/temp01.dbf' TO '/tmp/temp01.dbf' REUSE
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file
"/u01/app/oracle/oradata/cdb1/temp01.dbf"


Actions that are possible during an Online Datafile Move operation


1. Creating and dropping tables in the data file being moved
2. Querying tables in the data file being moved
3. Performing Block Media Recovery for a data block in the data file being moved
4. Executing DML statements on objects stored in the data file being moved
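
For example, a move can be started in one session while DML continues against objects stored in that datafile in another session (a minimal sketch; the table name and file paths are illustrative):

-- Session 1: start the online move (progress is visible in V$SESSION_LONGOPS)
SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/users01.dbf' TO '/u02/oradata/cdb1/users01.dbf';

-- Session 2: DML on a table stored in that datafile keeps working while the move runs
SQL> INSERT INTO test_user.t1 (id) VALUES (1);
SQL> COMMIT;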

Oracle 12c - Three ways you can re-create a lost ASM disk group and restore the data.

You use RMAN to back up the database and the MD_BACKUP command to back up the ASM metadata regularly.

md_backup backup_file [-G 'diskgroup [,diskgroup,...]']

The first example shows the use of the backup command when run without the disk group option. This example backs up all the mounted disk groups and creates the backup image in the /backup/allDGs_bkp file. The second example creates a backup of just the data disk group; the metadata backup that this example creates is saved in the /backup/allDGs_bkp/data20100422 file.
ASMCMD [+] > md_backup /backup/allDGs_bkp
Disk group metadata to be backed up: DATA
Disk group metadata to be backed up: FRA
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ORCL
Current alias directory path: ASM
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ORCL/TEMPFILE
Current alias directory path: ORCL/ARCHIVELOG/2010_04_20
Current alias directory path: ORCL
Current alias directory path: ORCL/BACKUPSET/2010_04_21
Current alias directory path: ORCL/ARCHIVELOG/2010_04_19
Current alias directory path: ORCL/BACKUPSET/2010_04_22
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL/BACKUPSET/2010_04_20
Current alias directory path: ORCL/ARCHIVELOG
Current alias directory path: ORCL/BACKUPSET
Current alias directory path: ORCL/ARCHIVELOG/2010_04_22
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/ARCHIVELOG/2010_04_21

ASMCMD [+] > md_backup /backup/allDGs_bkp/data20100422 -G data
Disk group metadata to be backed up: DATA
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ASM
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ORCL
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/TEMPFILE

You lost an ASM disk group, DG1, due to hardware failure. You can use one of the following methods to restore and recover the ASM disk group:

1. Use the MD_RESTORE command to restore the disk group with the changed disk group specification, failure group specification, name, and other attributes
and use RMAN to restore the data.

md_restore  backup_file [--silent]
     [--full|--nodg|--newdg -o 'old_diskgroup:new_diskgroup [,...]']
     [-S sql_script_file] [-G 'diskgroup [,diskgroup...]']
The first example restores the disk group data from the backup script and creates a copy.
ASMCMD [+] > md_restore --full -G data --silent /backup/allDGs_bkp


The second example takes an existing disk group data and restores its metadata.
ASMCMD [+] > md_restore --nodg -G data --silent /backup/allDGs_bkp

The third example restores disk group data completely but the new disk group that is created is named data2.
ASMCMD [+] > md_restore --newdg -o 'data:data2' --silent /backup/data20100422

The fourth example restores from the backup file after applying the overrides defined in the override.sql script file.
ASMCMD [+] > md_restore -S override.sql --silent /backup/data20100422


2. Use the MKDG command to restore the disk group with the same configuration as the backed-up disk group (same name, same set of disks, and same failure group configuration), and use RMAN to restore the data.

3. Use the MKDG command to add a new disk group DG1 with the same or different specifications for failure group and other attributes and use RMAN to restore the data.

mkdg { config_file.xml | 'contents_of_xml_file' }

config_file - Name of the XML file that contains the configuration for the new disk group. mkdg searches for the XML file in the directory where ASMCMD was started unless a path is specified.

contents_of_xml_file - The XML script enclosed in single quotation marks.

The example below shows the basic structure and the valid tags, with their respective attributes, for the mkdg XML configuration file.
<dg>  disk group
      name         disk group name
      redundancy   normal, external, high
 
<fg>  failure group
      name         failure group name
</fg>

<dsk> disk
      name         disk name
      string       disk path
      size         size of the disk to add
      force        true specifies to use the force option
</dsk>

<a>   attribute
      name         attribute name
      value        attribute value
</a>

</dg>


The following is an example of an XML configuration file for mkdg. The configuration file creates a disk group named data with normal redundancy. Two failure groups, fg1 and fg2, are created, each with two disks identified by associated disk strings. The disk group compatibility attributes are all set to 11.2.
Example mkdg sample XML configuration file
<dg name="data" redundancy="normal">
  <fg name="fg1">
    <dsk string="/dev/disk1"/>
    <dsk string="/dev/disk2"/>
  </fg>
  <fg name="fg2">
    <dsk string="/dev/disk3"/>
    <dsk string="/dev/disk4"/>
  </fg>
  <a name="compatible.asm" value="11.2"/>
  <a name="compatible.rdbms" value="11.2"/>
  <a name="compatible.advm" value="11.2"/>
</dg>
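
Assuming the XML above is saved as data_config.xml (an illustrative file name), the disk group can then be created from it, after which RMAN is used to restore the data as described above:

ASMCMD [+] > mkdg data_config.xml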