
Wednesday, November 2, 2016

Oracle 12c - In Database Archiving

In-Database Archiving enables us to archive rows within a table by marking them as inactive (logically deleted, i.e. marked for delete).

This allows the database to distinguish active data from older, inactive data while storing everything in the same database.

The inactive rows remain in the database and can be optimized using compression, but they are not visible to the application. If the data is needed for compliance purposes, it can still be accessed by setting a session parameter.

Below is an example of using In-Database Archiving.

SQL> select count(*) from Audit;

  COUNT(*)
----------
    80001

SQL> alter table Audit row archival;

Table altered.

SQL> select distinct ora_archive_state from Audit;

ORA_ARCHIVE_STATE
--------------------------------------------------------------------------------
0

Please Note – a new hidden column, ORA_ARCHIVE_STATE, has now been added to the table Audit.

We now want to designate all rows in the Audit table which belong to the years 2001 and 2002 as old and historical data.

All data after 01-JAN-2003 should be treated as current and active data.

SQL> update Audit
  2  set ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1)
  3  where time_id < '01-JAN-03';

40000 rows updated.

SQL> Commit;

If we now issue a select * from Audit command, we see that only about half the actual number of rows are returned by the query, because Oracle does not return the rows where the value of the ORA_ARCHIVE_STATE column is 1.

SQL> select distinct ora_archive_state from Audit;

ORA_ARCHIVE_STATE
--------------------------------------------------
0

SQL> select count(*) from Audit;

  COUNT(*)
----------
    40001

Now let us assume there is a requirement to view the historical and inactive data as well.
At the session level we can set the parameter ROW ARCHIVAL VISIBILITY to ALL.

SQL> alter session set row archival visibility=ALL;

Session altered.

SQL> select count(*) from Audit;

  COUNT(*)
----------
    80001

SQL> select distinct ora_archive_state from Audit;

ORA_ARCHIVE_STATE
--------------------------------------------------------------------------------
1
0
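
To switch back to the default behaviour, where only the active rows are visible, we set the parameter back to ACTIVE. A row can also be "unarchived" by resetting ORA_ARCHIVE_STATE to 0 while visibility is ALL. A sketch (the predicate below is illustrative, not from the example above):

SQL> alter session set row archival visibility=ACTIVE;

SQL> update Audit
  2  set ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(0)
  3  where time_id < '01-JAN-02';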

Partitioning the Audit table on the ORA_ARCHIVE_STATE column is a good option, as we can then compress the partitions containing the archived (old) data.
The current/active data is left uncompressed, since it is frequently accessed by the application and we do not want to impact performance.


We can also make those partitions containing the older data read only and exclude them from our regular daily database backups.
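
As a sketch of that approach (the table, column and compression settings below are hypothetical, not the actual Audit table, and assume list partitioning on the hidden ORA_ARCHIVE_STATE column is permitted):

SQL> CREATE TABLE audit_new
  2  (time_id DATE, details VARCHAR2(200))
  3  ROW ARCHIVAL
  4  PARTITION BY LIST (ORA_ARCHIVE_STATE)
  5  (PARTITION p_active   VALUES (0),
  6   PARTITION p_archived VALUES (1) ROW STORE COMPRESS ADVANCED);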

Wednesday, October 19, 2016

Oracle 12c - How to plug a non-CDB database into a Container Database (CDB)?

The container database cdb12c originally has one pluggable database, testpdb (as well as the seed PDB). We will see how we can plug the 12c non-CDB database testdb12c into the container database cdb12c.

Note that to plugin a non-CDB database into a CDB, the non-CDB database needs to be of version 12c as well. So existing 11g databases will need to be upgraded to 12c before they can be part of a 12c CDB.

Steps involved:

The first thing we need to do is connect to the non-CDB and run the DBMS_PDB.DESCRIBE procedure to construct an XML file containing the metadata that describes the non-CDB.
This procedure must be run while the database is in read-only mode.
[oracle@server- testdb12c]$ sqlplus sys as sysdba

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
Total System Global Area  801701888 bytes
Fixed Size                  2293496 bytes
Variable Size             314573064 bytes
Database Buffers          478150656 bytes
Redo Buffers                6684672 bytes
Database mounted.

SQL> alter database open read only;
Database altered.

SQL> BEGIN
DBMS_PDB.DESCRIBE(
pdb_descr_file => '/home/oracle/testdb12c.xml');
END;
/
PL/SQL procedure successfully completed.

Now shut down and open the non-CDB in read-write mode. We then need to check if the non-CDB is ready for conversion into a Pluggable Database (PDB) which will be contained in the Container Database.

Connect to the CDB and run the DBMS_PDB.CHECK_PLUG_COMPATIBILITY package where we provide the location of the XML metadata file which was generated in the first step we executed.

  SQL> SET SERVEROUTPUT ON
  DECLARE
  compatible CONSTANT VARCHAR2(3) :=
  CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
  pdb_descr_file => '/home/oracle/testdb12c.xml',
  pdb_name       => 'testdb12c')
  WHEN TRUE THEN 'YES'
  ELSE 'NO'
  END;
  BEGIN
  DBMS_OUTPUT.PUT_LINE(compatible);
  END;
  /

NO

PL/SQL procedure successfully completed.

Next we shutdown the non-CDB and plug it in to the CDB by converting it to a PLUGGABLE DATABASE.

SQL> CREATE PLUGGABLE DATABASE testdb12c using '/home/oracle/testdb12c.xml'
nocopy
tempfile reuse;

Pluggable database created.

The XML file accurately describes the current locations of the files of the PDB. Since we are using the existing 12c non-CDB datafiles to create the pluggable database, we use the NOCOPY option.

If we want, we can instead leave the existing 12c non-CDB database in place and clone it to create the pluggable database.
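
A sketch of that alternative (the target PDB name and the FILE_NAME_CONVERT paths here are hypothetical):

SQL> CREATE PLUGGABLE DATABASE testdb12c_copy using '/home/oracle/testdb12c.xml'
  2  copy
  3  file_name_convert = ('/u01/oradata/testdb12c/', '/u01/oradata/cdb12c/testdb12c/');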

We can see that the testdb12c database is now a PDB and is in a mounted state.

SQL> select CON_ID, NAME, OPEN_MODE from V$PDBS;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 testpdb                        READ WRITE
         4 testdb12c                      MOUNTED

SQL> select  PDB_NAME, DBID , CON_ID, STATUS  from CDB_PDBS;

PDB_NAME                   DBID     CON_ID STATUS
-------------------- ---------- ---------- -------------
testpdb               571254526          1 NORMAL
PDB$SEED             4061785545          1 NORMAL
testdb12c            1141508778          1 NEW

SQL> ALTER SESSION SET CONTAINER =testdb12c;
Session altered.

SQL> show con_name

CON_NAME
------------------------------
testdb12c

Now run the $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql script.
This script must be run before the PDB can be opened for the first time.

After the script has been run open the PDB testdb12c in READ WRITE mode.
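The sequence might look like this (a sketch):

SQL> alter session set container = testdb12c;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
SQL> alter pluggable database testdb12c open;
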
Now check the status.

SQL> alter session SET CONTAINER =CDB$ROOT;
Session altered.

SQL>  show con_name
CON_NAME
------------------------------
CDB$ROOT

SQL> select  PDB_NAME, DBID , CON_ID, STATUS  from CDB_PDBS;
PDB_NAME                   DBID     CON_ID STATUS
-------------------- ---------- ---------- -------------
testpdb               571254526          1 NORMAL
PDB$SEED             4061785545          1 NORMAL
testdb12c            1141508778          1 NORMAL

3 rows selected.

Connect to the newly plugged-in PDB testdb12c

[oracle@server- testdb12c]$ sqlplus sh/sh@localhost:1525/testdb12c

SQL> select count(*) from sales;

  COUNT(*)
----------
    918843

Sunday, September 18, 2016

Oracle 12c - Online Datafile Movement and Actions Possible During an Online Datafile Move Operation

Online Datafile movement
The text description of the syntax is shown below.

ALTER DATABASE MOVE DATAFILE ( 'filename' | 'ASM_filename' | file_number )
 [ TO ( 'filename' | 'ASM_filename' ) ]
 [ REUSE ] [ KEEP ]

The source file can be specified using the file number or the file name, while the destination file must be specified by file name. The REUSE keyword indicates the new file should be created even if it already exists. The KEEP keyword indicates the original copy of the datafile should be retained.

The file number can be queried from the V$DATAFILE and DBA_DATA_FILES views.

SQL> CONN / AS SYSDBA

SQL> SET LINESIZE 100
SQL> COLUMN name FORMAT A70
SQL> SELECT file#, name FROM v$datafile WHERE con_id = 1 ORDER BY file#;

     FILE# NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf
  3 /u01/app/oracle/oradata/cdb1/sysaux01.dbf
  4 /u01/app/oracle/oradata/cdb1/undotbs01.dbf
  6 /u01/app/oracle/oradata/cdb1/users01.dbf

SQL> COLUMN file_name FORMAT A70
SQL> SELECT file_id, file_name FROM dba_data_files ORDER BY file_id;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf
  3 /u01/app/oracle/oradata/cdb1/sysaux01.dbf
  4 /u01/app/oracle/oradata/cdb1/undotbs01.dbf
  6 /u01/app/oracle/oradata/cdb1/users01.dbf


Examples
The following example shows a basic file move, specifying both source and destination by name. Notice the original file is no longer present.


SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/system01.dbf' TO '/tmp/system01.dbf';

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 1;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /tmp/system01.dbf

SQL> HOST ls -al /u01/app/oracle/oradata/cdb1/system01.dbf
ls: cannot access /u01/app/oracle/oradata/cdb1/system01.dbf: No such file or directory

SQL> HOST ls -al /tmp/system01.dbf
-rw-r-----. 1 oracle oinstall 838868992 Oct  8 22:48 /tmp/system01.dbf

The next example uses the file number for the source file and keeps the original file.

SQL> ALTER DATABASE MOVE DATAFILE 1 TO '/u01/app/oracle/oradata/cdb1/system01.dbf' KEEP;

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 1;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf

SQL> HOST ls -al /u01/app/oracle/oradata/cdb1/system01.dbf
-rw-r-----. 1 oracle oinstall 838868992 Oct  8 22:48 /u01/app/oracle/oradata/cdb1/system01.dbf

SQL> HOST ls -al /tmp/system01.dbf
-rw-r-----. 1 oracle oinstall 838868992 Oct  8 22:49 /tmp/system01.dbf

The next example shows the use of OMF.

SQL> ALTER SYSTEM SET db_create_file_dest='/u01/app/oracle/oradata/cdb1';

System altered.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/system01.dbf';

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 1;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/CDB1/datafile/o1_mf_system_958zo3ll_.dbf

The final example attempts to use the KEEP option where the source file is an OMF file. Notice how the KEEP option is ignored.

SQL> ALTER DATABASE MOVE DATAFILE 1 To '/u01/app/oracle/oradata/cdb1/system01.dbf' KEEP;

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 1;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf

SQL> host ls -al /u01/app/oracle/oradata/cdb1/CDB1/datafile/o1_mf_system_958zo3ll_.dbf
ls: cannot access /u01/app/oracle/oradata/cdb1/CDB1/datafile/o1_mf_system_958zo3ll_.dbf: No such file or directory

Pluggable Database (PDB)

The container database (CDB) cannot move files that belong to a pluggable database. The following query displays all the datafiles for the CDB and the PDBs.

SQL> SELECT file#, name FROM v$datafile ORDER BY file#;

     FILE# NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/system01.dbf
  3 /u01/app/oracle/oradata/cdb1/sysaux01.dbf
  4 /u01/app/oracle/oradata/cdb1/undotbs01.dbf
  5 /u01/app/oracle/oradata/cdb1/pdbseed/system01.dbf
  6 /u01/app/oracle/oradata/cdb1/users01.dbf
  7 /u01/app/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
  8 /u01/app/oracle/oradata/cdb1/pdb1/system01.dbf
  9 /u01/app/oracle/oradata/cdb1/pdb1/sysaux01.dbf
 10 /u01/app/oracle/oradata/cdb1/pdb1/pdb1_users01.dbf
 29 /u01/app/oracle/oradata/pdb2/system01.dbf
 30 /u01/app/oracle/oradata/pdb2/sysaux01.dbf
 31 /u01/app/oracle/oradata/pdb2/pdb2_users01.dbf

If we try to move a datafile belonging to a PDB, an error is returned.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/pdb2/system01.dbf' TO '/tmp/system01.dbf' REUSE;
ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/pdb2/system01.dbf' TO '/tmp/system01.dbf' REUSE
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "29"

If we switch to the PDB container, the datafile can be moved as normal.

SQL> ALTER SESSION SET container=pdb2;

Session altered.

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/pdb2/system01.dbf' TO '/tmp/system01.dbf' REUSE;

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 29;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
 29 /tmp/system01.dbf

SQL> ALTER DATABASE MOVE DATAFILE 29 TO '/u01/app/oracle/oradata/pdb2/system01.dbf' REUSE;

Database altered.

SQL> SELECT file_id, file_name FROM dba_data_files WHERE file_id = 29;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
 29 /u01/app/oracle/oradata/pdb2/system01.dbf

SQL> ALTER SESSION SET container=CDB$ROOT;

Session altered.

Tempfiles

Not surprisingly, the ALTER DATABASE MOVE DATAFILE syntax does not work for temporary files.


SQL> SELECT file_id, file_name FROM dba_temp_files;

   FILE_ID FILE_NAME
---------- ----------------------------------------------------------------------
  1 /u01/app/oracle/oradata/cdb1/temp01.dbf


SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/temp01.dbf' TO '/tmp/temp01.dbf' REUSE;
ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/cdb1/temp01.dbf' TO '/tmp/temp01.dbf' REUSE
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file
"/u01/app/oracle/oradata/cdb1/temp01.dbf"


Actions are possible during an Online Data file Move operation


1. Creating and dropping tables in the data file being moved
2. Querying tables in the data file being moved
3. Performing Block Media Recovery for a data block in the data file being moved
4. Executing DML statements on objects stored in the data file being moved

Oracle 12c - Three ways to re-create a lost ASM disk group and restore the data

You use RMAN to back up the database and the MD_BACKUP command to back up the ASM metadata regularly.

md_backup backup_file [-G 'diskgroup [,diskgroup,...]']

The first example shows the use of the backup command when run without the disk group option. This example backs up all the mounted disk groups and creates the backup image in the /backup/allDGs_bkp file. The second example creates a backup of the data disk group only; the metadata backup it creates is saved in the /backup/allDGs_bkp/data20100422 file.
ASMCMD [+] > md_backup /backup/allDGs_bkp
Disk group metadata to be backed up: DATA
Disk group metadata to be backed up: FRA
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ORCL
Current alias directory path: ASM
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ORCL/TEMPFILE
Current alias directory path: ORCL/ARCHIVELOG/2010_04_20
Current alias directory path: ORCL
Current alias directory path: ORCL/BACKUPSET/2010_04_21
Current alias directory path: ORCL/ARCHIVELOG/2010_04_19
Current alias directory path: ORCL/BACKUPSET/2010_04_22
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ORCL/BACKUPSET/2010_04_20
Current alias directory path: ORCL/ARCHIVELOG
Current alias directory path: ORCL/BACKUPSET
Current alias directory path: ORCL/ARCHIVELOG/2010_04_22
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ORCL/ARCHIVELOG/2010_04_21

ASMCMD [+] > md_backup /backup/allDGs_bkp/data20100422 -G data
Disk group metadata to be backed up: DATA
Current alias directory path: ORCL/ONLINELOG
Current alias directory path: ASM
Current alias directory path: ORCL/CONTROLFILE
Current alias directory path: ASM/ASMPARAMETERFILE
Current alias directory path: ORCL/PARAMETERFILE
Current alias directory path: ORCL
Current alias directory path: ORCL/DATAFILE
Current alias directory path: ORCL/TEMPFILE

You lost ASM disk group DG1 due to a hardware failure. You can use one of the methods below to restore and recover the ASM disk group.

1. Use the MD_RESTORE command to restore the disk group with a changed disk group specification, failure group specification, name, or other attributes, and use RMAN to restore the data.

md_restore  backup_file [--silent]
     [--full|--nodg|--newdg -o 'old_diskgroup:new_diskgroup [,...]']
     [-S sql_script_file] [-G 'diskgroup [,diskgroup...]']
The first example restores the disk group named data from the backup file and creates a copy.
ASMCMD [+] > md_restore --full -G data --silent /backup/allDGs_bkp


The second example takes an existing disk group named data and restores its metadata.
ASMCMD [+] > md_restore --nodg -G data --silent /backup/allDGs_bkp

The third example restores the disk group data completely, but the new disk group that is created is named data2.
ASMCMD [+] > md_restore --newdg -o 'data:data2' --silent /backup/data20100422

The fourth example restores from the backup file after applying the overrides defined in the override.sql script file.
ASMCMD [+] > md_restore -S override.sql --silent /backup/data20100422


2. Use the MKDG command to re-create the disk group with the same configuration as the backed-up disk group (same name, same set of disks and failure group configuration), and use RMAN to restore the data.

3. Use the MKDG command to add a new disk group DG1 with the same or different specifications for failure group and other attributes and use RMAN to restore the data.

mkdg { config_file.xml | 'contents_of_xml_file' }

Option                  Description
----------------------  ------------------------------------------------------------
config_file             Name of the XML file that contains the configuration for the
                        new disk group. mkdg searches for the XML file in the
                        directory where ASMCMD was started unless a path is specified.
'contents_of_xml_file'  The XML script enclosed in single quotation marks.

The example below shows the basic structure and the valid tags, with their respective attributes, for the mkdg XML configuration file.
<dg>  disk group
      name         disk group name
      redundancy   normal, external, high
 
<fg>  failure group
      name         failure group name
</fg>

<dsk> disk
      name         disk name
      string       disk path
      size         size of the disk to add
      force        true specifies to use the force option
</dsk>

<a>   attribute
      name         attribute name
      value        attribute value
</a>

</dg>


The following is an example of an XML configuration file for mkdg. The configuration file creates a disk group named data with normal redundancy. Two failure groups, fg1 and fg2, are created, each with two disks identified by associated disk strings. The disk group compatibility attributes are all set to 11.2.
Example mkdg sample XML configuration file
<dg name="data" redundancy="normal">
  <fg name="fg1">
    <dsk string="/dev/disk1"/>
    <dsk string="/dev/disk2"/>
  </fg>
  <fg name="fg2">
    <dsk string="/dev/disk3"/>
    <dsk string="/dev/disk4"/>
  </fg>
  <a name="compatible.asm" value="11.2"/>
  <a name="compatible.rdbms" value="11.2"/>
  <a name="compatible.advm" value="11.2"/>
</dg>
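
With that configuration saved to a file, the disk group could then be created like this (the file name is hypothetical):

ASMCMD [+] > mkdg data_config.xml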










Migrate an 11g database as a pluggable database (PDB) to a 12c multitenant container database (CDB).

The following are the possible steps to accomplish this task:

1. Place all the user-defined tablespaces in read-only mode on the source database.

2. Perform a full transportable export on the source database with the VERSION parameter set to 12 using the expdp utility.

3. Create a new PDB in the target container database.

4. Copy the associated data files and the export dump file to the desired location for the target database.

5. Invoke the Data Pump import utility on the new PDB database as a user with the DATAPUMP_IMP_FULL_DATABASE role and specify the full transportable import options.

6. Synchronize the PDB on the target container database by using the DBMS_PDB.SYNC_PDB function.
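
A sketch of the Data Pump commands for steps 2 and 5 (the directory object, dump file name, connect strings and datafile path here are hypothetical):

$ expdp system full=y transportable=always version=12 directory=dp_dir dumpfile=testdb11g.dmp

$ impdp system@newpdb full=y directory=dp_dir dumpfile=testdb11g.dmp transport_datafiles='/u01/oradata/newpdb/users01.dbf'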

Wednesday, September 14, 2016

Snapshot Standby (New in 11g)

                      
1. Can be created from an existing physical standby database, which should be in sync with the primary database.

2. It is a fully updatable database and can be opened in read-write mode as well as read-only mode.

3. When it is open in read-write mode, we can create additional objects such as indexes, views and tables on the snapshot standby database.

4. A snapshot standby can receive and archive redo from the primary but cannot apply it; once it is converted back to a physical standby, it starts applying the redo and can be brought back in sync with the primary.

   Note: In 10g we could open the physical standby database in read-write mode and convert it back to physical standby mode using the Flashback option.

5. In 11g there is no need to enable Flashback Database explicitly to use a snapshot standby database, as long as the physical standby is not open in read-only mode.

6. When the physical standby database is converted, an implicit guaranteed restore point is created and Flashback Database is enabled.

7. We can use DGMGRL or SQL commands to convert the standby back and forth.

8. The snapshot standby can later be flashed back to a physical standby, to the guaranteed restore point, and automatically reapplies the archived log files received from the primary.

9. Before a snapshot standby can be converted back to a physical standby, the database should have been opened in read-write mode.

====================================================================
Converting a physical standby to a snapshot standby using a Data Guard Broker command (our standby database name is 'stand'):

DGMGRL> convert database 'stand' to snapshot standby;

We can verify the snapshot standby database with:

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
SNAPSHOT STANDBY

DGMGRL> show configuration
Name: DGConfig1
Enabled: yes
Protection Mode: MaxPerformance
stand - Snapshot standby database
prim  - Primary database
=====================================================================

Converting using the SQL command:
1. Stop the redo apply if it is active.
2. On an Oracle RAC standby database, shut down all but one instance.
3. Ensure the standby database is mounted, but not open.
4. Ensure the flashback (fast recovery) area is configured.
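
The preparation steps above might look like this (a sketch):

SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate
SQL> startup mount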

SQL> alter database convert to snapshot standby;

=====================================================================
 Restrictions on snapshot standby
1. A snapshot standby database cannot be the only standby database in a maximum protection configuration.
2. A snapshot standby database cannot be the target of a switchover or failover.
3. A snapshot standby database cannot be a fast-start failover target.

======================================================================
Converting the snapshot standby to a physical standby using the Data Guard Broker.

A snapshot standby database must be opened at least once in read-write mode before it can be converted back to a physical standby database.

DGMGRL> convert database 'standdb' to physical standby;

Cross-check the conversion using the following command:
DGMGRL> show configuration
=================================================================

Converting the Snapshot standby to Physical using SQL command.

If RAC database is enabled shutdown all the instance accept once instance.
Ensure that the database is mounted but not open.

SQL> alter database convert to physical standby;

restart the database physical standby
restart the redo apply process oracle will appply redos.
=================================================================

Active Data Guard (New in 11g)

1. Oracle Active Data Guard enables a physical standby database to be open read-only while Redo Apply is active.
2. In 10g, if the standby database was opened in read-only mode, redo could not be applied at the same time.
3. It is a feature of 11g Enterprise Edition and requires an additional license.
4. The COMPATIBLE parameter should be set to at least 11.0.0 to enable this feature.

Using the real-time query feature

1. Using the Active Data Guard feature, the standby database can be opened in read-only mode while redo apply is still running on the standby database. In 10g, redo apply could not run on the standby while it was open in read-only mode.
2. With this option no new objects can be created on the standby database.
3. The physical standby database can be opened in read-only mode only if all files were recovered up to the same SCN.
===================================================
How to enable real-time query via SQL commands?

1. Stop redo apply:
   SQL> alter database recover managed standby database cancel;

2. Open the database read-only:
   SQL> alter database open read only;

3. Restart redo apply:
   SQL> alter database recover managed standby database using current logfile disconnect;
===================================================
Enable real-time query via DGMGRL

1. Stop redo apply:
   DGMGRL> edit database 'standdb' set state='APPLY-OFF';
2. Using SQL*Plus, open the database read-only:
   SQL> alter database open read only;
3. Restart redo apply:
   DGMGRL> edit database 'standdb' set state='APPLY-ON';
===================================================
Verification

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

===================================================
How to disable real-time query?

1. Shut down the standby database.
2. Start the standby database in mount stage.
3. Restart the media recovery process.
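
A sketch of those steps:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database using current logfile disconnect;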
===================================================

Oracle 12c - Architecture Diagram


Oracle Database 12C -Multitenant Architecture Diagram


Saturday, September 3, 2016

Online Redefinition - Converting a Regular (Heap) Table into a Partitioned Table

Partitioning the "SCOTT"."CREDIT_CARD" heap table

Given that this table is frequently written to, it is envisaged that the partitioning will be done with Oracle 11g online redefinition.
A rough first plan of how to prepare and carry out this process is detailed below.

Create new Partition Tablespace to hold Interim Table

  • Ensure that new Tablespace has enough space for interim table and indexes
  • Extend datafiles or add new datafiles if required
  • Double check that space is free and available

Determine and retain Object Status

  • Table row count
    • Run query to get total count per month
  Select trunc(call_date,'MM'), COUNT(*) from SCOTT.CREDIT_CARD group by trunc(call_date,'MM') order by trunc(call_date,'MM');

    • Group by month and use it as a check later on
  • Table column listing
  • Table sum bytes
    • MB
  • Table Primary Keys
    • Names
    • Columns
    • Order
  • Table Indexes
    • Names
    • Status
    • Columns
    • Order
    • Creation date
  • Invalid Object check
    • No Invalid Objects in schema
  • Table constraints
    • Names
    • Types of constraints
    • Columns 
  • Table Statistics
    • Gather statistics 
  • AWR report
    • Take AWR Report prior to start 

Query Timings 

  • Obtain timings from queries
    • Full table scan, no where clause
    • Select within 1 month only
      • April 2013 (start of inserts)
      • August 2012 (average month increase)
      • October 2012 (significant jump)
    • Select crossing 4 months above
    • Select with columns other than datelogged

Backup table and indexes 

  • Export Table and Indexes, constraints and statistics
    • EXPDP 

Start Partitioning 

  • Confirm table can be redefined using Primary Key 
      BEGIN
         DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT','CREDIT_CARD',
                DBMS_REDEFINITION.CONS_USE_PK);
      END;
      /

  • Create New Interim Interval composite Partition Table (Range+Hash) 
CREATE TABLE "SCOTT"."CREDIT_CARD_INTERIM"
    (    "USERNAME" VARCHAR2(256) NOT NULL ENABLE,
        "CALL_DATE" TIMESTAMP (6) NOT NULL ENABLE,
        "METHOD_NAME" VARCHAR2(256) NOT NULL ENABLE,
        "PARAMETERS" VARCHAR2(4000),
        "DURATION" NUMBER(*,0) NOT NULL ENABLE,
        "STATUS" NUMBER(*,0),
        "BLS_PROPID" NUMBER,
        "STATUS_MSG" VARCHAR2(4000),
        "RESPONSE_VALUES" VARCHAR2(4000),
        "REMOTE_ADDRESS" VARCHAR2(100),
        "ID" NUMBER NOT NULL ENABLE,
        "TRANSACTION_ID" NUMBER,
         CONSTRAINT "CREDIT_CARD_PRIMARY_KEY" PRIMARY KEY ("ID") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)TABLESPACE "DEV_INDEX"  ENABLE)
PARTITION BY RANGE(CALL_DATE)
INTERVAL(NUMTOYMINTERVAL(1, 'MONTH')) STORE IN (FLOW_1)
subpartition by hash(ID)
subpartition template(
subpartition clock_sp1,
subpartition clock_sp2,    
subpartition clock_sp3,
subpartition clock_sp4,
subpartition clock_sp5,
subpartition clock_sp6,
subpartition clock_sp7,
subpartition clock_sp8)
(PARTITION "CREDIT_CARD_CALL_P1"  VALUES LESS THAN (TO_DATE(' 2010-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED   PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING) ENABLE ROW MOVEMENT;

  • Start redefinition – using Primary Key 
SQL>  exec  DBMS_REDEFINITION.START_REDEF_TABLE (uname =>'SCOTT',orig_table =>'CREDIT_CARD',int_table => 'CREDIT_CARD_INTERIM',col_mapping => '', options_flag =>  DBMS_REDEFINITION.CONS_USE_PK);

  • Perform a count to check status
After this, both tables should have about the same number of records.
SQL>select count(1) from SCOTT.CREDIT_CARD_INTERIM;
  COUNT(1)
----------
     1000
 SQL>select count(1) from SCOTT.CREDIT_CARD;
  COUNT(1)
----------
     1030

Note: There will likely be continuous inserts into the original table, so there might be a few more records in the original table than in the interim table. This sync can be done frequently without any side effects.

Sync the new table on a regular basis until the cut-over time
BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(
    uname      => 'SCOTT',
    orig_table => 'CREDIT_CARD',
    int_table  => 'CREDIT_CARD_INTERIM');
END;
/
  • Copy Dependent Objects
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname            => 'SCOTT',
    orig_table       => 'CREDIT_CARD',
    int_table        => 'CREDIT_CARD_INTERIM',
    copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    copy_privileges  => TRUE,
    ignore_errors    => TRUE,
    num_errors       => num_errors);
END;
/
  • Confirm that Interim table is Partitioned before completing
SQL>select table_name, partitioned from user_tables where table_name in ('CREDIT_CARD','CREDIT_CARD_INTERIM');

TABLE_NAME                     PAR
------------------------------ ---
CREDIT_CARD                    NO
CREDIT_CARD_INTERIM            YES

  • Run the same query as before, but against the new table
Select trunc(call_date,'MM'), COUNT(*) from SCOTT.CREDIT_CARD_INTERIM group by trunc(call_date,'MM') order by trunc(call_date,'MM');

Check monthly counts match original query.

  • Complete the redefinition (transactions may be interrupted here due to the exclusive lock taken on the table while ending the redefinition; it is better to arrange an outage of a few minutes)
BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'CREDIT_CARD', 'CREDIT_CARD_INTERIM');
END;
/
  • Confirm that the original table is now partitioned and the interim table is not
SQL>select table_name, partitioned from user_tables where table_name in ('CREDIT_CARD','CREDIT_CARD_INTERIM');

TABLE_NAME                     PAR
------------------------------ ---
CREDIT_CARD                    YES
CREDIT_CARD_INTERIM            NO

  • Now drop the interim table
Drop table CREDIT_CARD_INTERIM;