Wednesday, July 31, 2013

RMAN Backup for Multitenant Architecture

In this demo I will show how to handle different types of RMAN backups in a Multitenant environment.

In a Multitenant environment supporting cloud data centers, we will have container databases on many servers, each with one to many pluggable databases. These container and pluggable databases will have different database administrators and different SLAs.

I will show how we can back up the container database as a whole, or just one or many pluggable databases. I will also show how the new SYSBACKUP database system privilege lets us separate backup duties: we can grant someone fewer privileges while still giving them the capability to perform a full or incremental backup.

I will have a separate blog post on how to recover databases in a Multitenant environment.

CDB = Container Database
PDB = Pluggable Database

Environment

CDB- TESTDBS
PDB- DEV1DB, DEV2DB and TSTDB

Demo


I want to grant a mid-level DBA privileges limited to backing up the CDB as a whole or just one or many PDBs.

The DBA's name is Jane Doe and she will have access to the database server. She will log in to the server with her own account.




Create OS Group and User

$groupadd backupdba
$useradd -g oinstall -G backupdba jane_doe
$passwd jane_doe



Create Common User

Common users are database accounts that only exist in a Multitenant environment: they exist in the root CDB and in every PDB plugged into the CDB. Common user names need to start with C## or c##.

I will create a common user called C##ORADBMAINT, which our DBA Jane Doe will use to log into the CDB and PDBs.
In database 12c there is a new database system privilege called SYSBACKUP. Reviewing the SYSBACKUP database system privilege shows it has the same system privileges as the SYSOPER database system privilege.

Below you will see CONTAINER=ALL meaning that the statement executes in the root CDB and all PDBs plugged into the CDB.

$sqlplus / as sysdba
SQL>CREATE USER C##ORADBMAINT PROFILE DEFAULT IDENTIFIED BY temp1234 DEFAULT TABLESPACE USERS TEMPORARY TABLESPACE TEMP ACCOUNT UNLOCK CONTAINER=ALL;
SQL>GRANT CONNECT TO C##ORADBMAINT CONTAINER=ALL;
SQL>GRANT SYSBACKUP TO C##ORADBMAINT CONTAINER=ALL;


In the CDB or PDB we can review who has the sysdba, sysoper or sysbackup privileges granted.
  • $sqlplus / as sysdba
  • SQL>select * from v$pwfile_users; 
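In 12c, V$PWFILE_USERS includes a SYSBACKUP column, so you can narrow the query to just the accounts holding that privilege. A sketch:

```sql
-- list password-file users that hold the SYSBACKUP privilege
SELECT username, sysdba, sysoper, sysbackup
  FROM v$pwfile_users
 WHERE sysbackup = 'TRUE';
```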


Backup CDB

We want to back up the container database as a whole. When you back up the CDB, all PDBs plugged into the CDB are backed up as well.

Our DBA Jane Doe will connect to the database server with her OS account and use the database account C##ORADBMAINT to log into the RMAN client and create the full CDB backup.

Connect to RMAN

$rman
RMAN>connect target "c##oradbmaint/temp1234 as sysbackup"

RMAN Configuration 

I made a few RMAN configuration changes for my environment.
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/orabkup/testdbs/rman/%F';
RMAN>CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 2;
RMAN>CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
RMAN>CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/orabkup/testdbs/rman/%U' MAXPIECESIZE 5G;

Start Backup

RMAN>backup database plus archivelog;

Below you can see that datafiles for CDB testdbs and PDBs dev1db, dev2db and tstdb are backed up.
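To confirm what was just written, RMAN can summarize the backup sets. A sketch:

```sql
RMAN> list backup summary;
RMAN> list backup of database summary;
```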


Backup CDB Root

We want to back up only the root of the CDB. The root of the CDB contains metadata about the container database, so a best practice is to back up the root frequently. When you back up the root, only the datafiles belonging to the root are backed up.

Our DBA Jane Doe will connect to the database server with her OS account and use the database account C##ORADBMAINT to log into the RMAN client and create the root CDB backup.

Connect to RMAN

$rman
RMAN>connect target "c##oradbmaint/temp1234 as sysbackup"

Start Backup 

RMAN>backup database root;

Below you can see that datafiles for only the root CDB are backed up.



Backup PDB

We want to back up only the PDB databases. When you back up a PDB, only the datafiles for that PDB are backed up.

Our DBA Jane Doe will connect to the database server with her OS account and use the database account C##ORADBMAINT to log into the RMAN client and create the full PDB backup.

Connect to RMAN

$rman
RMAN>connect target "c##oradbmaint/temp1234 as sysbackup"


Start Backup

RMAN>backup pluggable database dev1db;

Below you can see that datafiles for only the PDB dev1db are backed up.

Start Backup for More Than One PDB

$rman
RMAN>backup pluggable database dev2db,tstdb;


Below you can see that only the datafiles for PDBs dev2db and tstdb are backed up.


Note: If you connect directly to a pluggable database and try to issue BACKUP PLUGGABLE DATABASE, you will get the following error.
ORA-07538 Pluggable Database qualifier not allowed when connected to a Pluggable Database
To back up the pluggable database while connected directly to it, just issue the following.
$rman
RMAN>connect target "c##oradbmaint/temp1234@dev2db as sysbackup"
RMAN> backup database;




Backup PDB Tablespaces and Datafiles
We want to back up PDB tablespaces or datafiles. To back up PDB tablespaces or datafiles you need to connect directly to the PDB.

Our DBA Jane Doe will connect to the database server with her OS account and use the database account C##ORADBMAINT to log into the RMAN client and create the tablespace and datafile backups for PDB dev2db.

Connect to RMAN

$rman
RMAN>connect target "c##oradbmaint/temp1234@dev2db as sysbackup"

Review Connection and Report Schema

A new feature in Database 12c RMAN is the ability to execute SQL commands directly, without wrapping them in SQL '<statement>'.
RMAN>select con_id,name from v$pdbs;
RMAN>report schema;

Start Backup for Tablespace

RMAN>backup tablespace soe_data,users;

Below you can see that datafiles for only the tablespace SOE_DATA and USERS are backed up.

Start Backup for Datafiles

RMAN>backup datafile 15,17;

Below you can see that only datafiles 15 and 17 were backed up.
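RMAN can also check these backups without restoring anything. A sketch of the idea:

```sql
RMAN> backup validate datafile 15,17;
RMAN> list backup of datafile 15,17;
```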

Monday, July 29, 2013

Use Database 12c Net Manager to Configure TNS for Multitenant Environment

In this demo I will show how to use the well-known Net Manager utility to configure the local TNS file in a Multitenant environment. Net Manager has been around since 8i and is useful for configuring multiple TNS entries in one place.


1. Set your environment to your Oracle 12c database home.


2. Currently no tnsnames files exist in the <oracle_home>/network/admin directory
3. Execute netmgr

  • $<oracle_home>/bin/netmgr

4. Net Manager will open. Click on Service Naming, then click the plus sign.

5. Enter the net service name you want and click Next.

6. Select the communication protocol and click Next.

7. Enter the host name and port and click Next.

8. Enter the service name for the pluggable database and click Next.

9. Click the Test button.

10. The test defaults to the scott/tiger credentials; we can see how long this utility has been around. Click the Change Login button to change the credentials.

11. Set the username and password credentials for the pluggable database and click OK.

12. Now click Test, and this time the connection will be successful. Click Close.

13. Click Finish to close the net service name wizard.

14. Under Service Naming you will see your service configured. If you need to add more service names, repeat the steps above; if not, continue to step 15.

15. Click File > Save Network Configuration.

16. Then click File > Exit to close Net Manager.

17. We can now see under $ORACLE_HOME/network/admin that the tnsnames.ora has been created.

18. When we view the tnsnames.ora file we can see that the three service names I created have been written to it.

19. Now we can connect to the pluggable database using the net service name.
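As a sketch, a Net Manager-generated tnsnames.ora entry for a PDB service typically looks like the following; the host, port and service name here are taken from this demo environment, and the credentials are illustrative:

```sql
DEV1DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = alpddbs002)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = dev1db)
    )
  )
```

With the entry in place, the connection is just `$sqlplus <user>/<password>@DEV1DB`.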







Friday, July 26, 2013

Provision Cross Platform non-CDB 11gr2 to Pluggable Database 12c with Data Pump Transportable Export and Import

In this demo I will show how to provision PDB on RedHat Linux from a non-CDB 11gr2 on Solaris.

You may ask why we are moving from Solaris to RedHat. The move is not driven by the platform; it is driven by moving to 12c without impacting the current development environment.

Development team 1 has been on the 12c PDB DEV1DB for the last week and loves the new database features, and they told development team 2 about them. The development manager came and asked to get team 2 on 12c, but does not want to impact the current 11gR2 Solaris development environment. The manager also requested that team 2 not share the same 12c environment as team 1.

I began to investigate what would be the best option for completing this in the least amount of time.

I have database DBTEST1 11.2.0.3 on Solaris and I want to create a new PDB called DEV2DB and transport the tablespace and import data from DBTEST1.


Here are the steps I completed to give development team 2 their own PDB 12c database inside container database TESTDBS.

Source Database:
dbtest1 on Solaris 64bit

Target Database:
Pluggable database dev2db on RedHat Linux 64bit

Steps:

1. Checks
  • The source database version needs to be at least 11.2.0.3.
  • The source database needs the compatible parameter set to at least 11.2.0.3.0.
  • The source and target database character sets should match.
  • Set your environment to the source database on the source server.
2. Create directory on source server to hold dmp and rman files.

$mkdir -p /orabkup/dbtest1/dpdump

3. Create Oracle directory on source database for export job.

SQL>create directory "EXP_DATA_PUMP_DIR"  as '/orabkup/dbtest1/dpdump/';

4. On the source database, place the user (non-Oracle-supplied) tablespaces in read-only mode.

SQL> select tablespace_name from dba_tablespaces;


In my case it is just the USERS and SOE tablespace

SQL> alter tablespace USERS read only;
SQL> alter tablespace SOE read only;
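Before the export, it is worth confirming that the tablespace set is self-contained; Oracle provides DBMS_TTS for this. A sketch using the tablespaces from this demo:

```sql
-- check that USERS and SOE form a self-contained transportable set
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS,SOE', TRUE);

-- any rows returned here are violations that must be resolved first
SELECT * FROM transport_set_violations;
```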

5. Execute full export from source system with version set to 12 and transportable set to always

$expdp \'sys/temp1234 AS SYSDBA\' full=y job_name=EXPORT_DEV2DB dumpfile=datapump_dev2db.dmp DIRECTORY=EXP_DATA_PUMP_DIR LOGFILE=export_datapump_dev2db.log VERSION=12 transportable=always

Once the export is complete, note down the datafiles required for the transportable tablespaces.
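Cross-platform transport requires an RMAN datafile conversion when the endian formats of the two platforms differ. You can compare both platforms with the V$TRANSPORTABLE_PLATFORM view; a sketch:

```sql
-- compare endian formats of the source and target platforms
SELECT platform_name, endian_format
  FROM v$transportable_platform
 WHERE platform_name LIKE '%Solaris%'
    OR platform_name LIKE '%Linux%';
```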

6. Execute RMAN on source system to backup the datafiles needed for transportable tablespaces.

$rman target /
RMAN>run
{
CONVERT DATAFILE '+DATADG/dbtest1/datafile/soe.262.814263143' DB_FILE_NAME_CONVERT="+DATADG/dbtest1/datafile/soe.262.814263143","/orabkup/dbtest1/dpdump/soe.262.814263143" FORMAT='/orabkup/dbtest1/dpdump/soe.262.814263143';
CONVERT DATAFILE '+DATADG/dbtest1/datafile/users.260.814258995' DB_FILE_NAME_CONVERT="+DATADG/dbtest1/datafile/users.260.814258995","/orabkup/dbtest1/dpdump/users.260.814258995" FORMAT='/orabkup/dbtest1/dpdump/users.260.814258995';
}

7. On source database place tablespaces back into read write mode.

SQL> alter tablespace USERS read write;
SQL> alter tablespace SOE read write;

8. Create directory on target server

$mkdir -p /orabkup/dbtest1

9. Transfer files to target server from source server.

$scp -r dpdump oracle@alpddbs002:/orabkup/dbtest1/

Files now on target system


10. On target system connect to container (CDB) database and check current PDBs.


11. On target system connect to container (CDB) database and create the new pluggable (PDB) database.

Since we are going to plug in the tablespaces from the source system, I will not create the PDB with a default tablespace.

SQL>CREATE PLUGGABLE DATABASE dev2db
ADMIN USER pdbadm IDENTIFIED BY temp1234
ROLES = (connect);


  • Observation of the alert log during and after creation of new PDB dev2db



12. Check the PDBs again. The new PDB dev2db has been created with con_id 5, but it is in mounted mode.

13. Open new PDB dev2db
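The command behind this step is a one-liner issued from the root; shown here as a sketch:

```sql
SQL> alter pluggable database dev2db open;

-- or open every PDB in the CDB at once
SQL> alter pluggable database all open;
```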


  • Now the dev2db PDB is in read write mode.

  • Review datafiles in CDB for all PDBs

  • Review tablespaces for all PDBs

14. Create an ASM directory to store the datafiles.
$asmcmd -p
ASMCMD>cd dg01
ASMCMD>mkdir DEV2DB




15. Use RMAN to convert the datafiles from the source system into ASM. I am setting a temporary alias for the datafiles, which I will rename in the next step.
$rman
RMAN>CONNECT TARGET sys/temp1234@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=alpddbs002)(PORT=1521))(CONNECT_DATA=(SID=testdbs)))
RMAN>run
{
CONVERT DATAFILE '/orabkup/dbtest1/dpdump/soe.262.814263143'  DB_FILE_NAME_CONVERT="/orabkup/dbtest1/dpdump/soe.262.814263143","+DG01/dev2db/soe_data_1" FORMAT='+DG01/dev2db/soe_data_1';
CONVERT DATAFILE '/orabkup/dbtest1/dpdump/users.260.814258995'  DB_FILE_NAME_CONVERT="/orabkup/dbtest1/dpdump/users.260.814258995","+DG01/dev2db/users_data_1" FORMAT='+DG01/dev2db/users_data_1';
}



16. Connect to the PDB to check whether the SOE user exists (we know it will not yet). I use the full connection string for this demo, but you can add the entry to the Oracle home's tnsnames.ora and connect with just the identifier.
$sqlplus /nolog
SQL>connect sys/temp1234@"(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = alpddbs002)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dev2db)(INSTANCE_NAME=testdbs)))" AS SYSDBA






17. Create Oracle directory in PDB database.
$sqlplus /nolog
SQL>CREATE DIRECTORY "IMP_DATA_PUMP_DIR" AS '/orabkup/dbtest1/dpdump/';
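You can verify the directory object from inside the PDB before running the import; a sketch:

```sql
-- confirm the directory object and its OS path
SELECT directory_name, directory_path
  FROM dba_directories
 WHERE directory_name = 'IMP_DATA_PUMP_DIR';
```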





18. Execute the import process.

Note: I remapped the tablespace SOE to SOE_DATA to conform to my tablespace naming standard.

$/orabase/product/12.1.0/db_1/bin/impdp  \'sys/temp1234@"(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = alpddbs002)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dev2db)(INSTANCE_NAME=testdbs)))" AS SYSDBA\'  full=y dumpfile=datapump_dev2db.dmp DIRECTORY=IMP_DATA_PUMP_DIR LOGFILE=import_datapump_dev2db.log VERSION=12 TRANSPORT_DATAFILES=+DG01/dev2db/soe_data_1,+DG01/dev2db/users_data_1 job_name=imp_dev2db parallel=2 REMAP_TABLESPACE='SOE':'SOE_DATA'



  • A first observation of the alert log shows that when the Data Pump job started, the tablespaces SOE_DATA and USERS were plugged into PDB DEV2DB.

  • A second observation of the alert log shows that when the Data Pump job completed, the tablespaces SOE_DATA and USERS were altered to read write.



19. Connect to the PDB to check that the SOE user exists now, after the import. I use the full connection string for this demo, but you can add the entry to the Oracle home's tnsnames.ora and connect with just the identifier.
$sqlplus /nolog
SQL>connect sys/temp1234@"(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = alpddbs002)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dev2db)(INSTANCE_NAME=testdbs)))" AS SYSDBA


  • We now see that the SOE schema exists, as well as an application user account.


20. Check tablespaces at PDB level.
$sqlplus /nolog
SQL>connect sys/temp1234@"(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = alpddbs002)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dev2db)(INSTANCE_NAME=testdbs)))" AS SYSDBA


  • Review of tablespaces at PDB level shows the USERS and SOE_DATA tablespaces now exist.



21. Check tablespaces at CDB level.


  • We can see that the SOE_DATA tablespace exists in the new PDB DEV2DB. We also see that an SOE_DATA tablespace exists in PDB DEV1DB.

22. Set DEV2DB tablespace default and storage limits.
$sqlplus /nolog
SQL>connect sys/temp1234@"(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = alpddbs002)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dev2db)(INSTANCE_NAME=testdbs)))" AS SYSDBA

  • Set default tablespace

SQL>ALTER PLUGGABLE DATABASE DEFAULT TABLESPACE SOE_DATA;


  • Set storage limit for total storage for all tablespaces in PDB DEV2DB to 8G. 
SQL>ALTER PLUGGABLE DATABASE STORAGE(MAXSIZE 8G);
  • Set temp usage limit to 1G.
SQL>ALTER PLUGGABLE DATABASE STORAGE(MAX_SHARED_TEMP_SIZE 1G);

23. Use SQL Developer to compare schema SOE between source DBTEST1 and target DEV2DB. SQL Developer did not show any differences.


  • Everything compares as equal.

24. Run the SOE application against both PDB databases.


  • Here you can see two SwingBench Order Entry applications running. One is connected to PDB DEV1DB and the other to PDB DEV2DB.

25. Enterprise Manager 12c top activity page at CDB level shows top activity for PDBs.

26. A quick look at an AWR report at the CDB level shows the new PDB Name column. In a different blog post I will break down the multitenant AWR report.


  • You can see the IO breakdown by PDB.


  •  Top SQL statement shows which PDB is executing the top query.


  • We can see which PDB has the top physical segment reads.


Conclusion:
Development team 2 now has their own PDB in the CDB. To the development team it does not feel like a shared environment, because to the end user a PDB behaves the same as a non-CDB database.

Below are the scripts I used in this demo.

check_pdbs.sql
set linesize 100
col open_time format a25
--pdbs
select con_id,name,open_mode,open_time,ceil(total_size)/1024/1024 total_size_in_mb from v$pdbs
order by con_id asc;

--active services
col name format a20
col network_name format a20
select con_id,con_name,name,network_name from v$active_services
order by con_id asc;

--show container name
show con_name

check_pdbs_datafiles.sql
set linesize 500
set pagesize 100
col name format a20
col db_file_name format a80
select b.con_id,b.name,a.name as db_file_name from v$datafile a, v$pdbs b
where a.con_id = b.con_id;

check_pdbs_tablespaces.sql
set linesize 500
set pagesize 100
SELECT d.con_id,d.status "Status", d.tablespace_name "Name", d.CONTENTS "Type",
       d.extent_management "Extent Management",
       TO_CHAR (NVL (a.BYTES / 1024 / 1024, 0), '99,999,990.900') "Size (M)",
          TO_CHAR (NVL (a.BYTES - NVL (f.BYTES, 0), 0) / 1024 / 1024,
                   '99999999.999'
                  )
       || '/'
       || TO_CHAR (NVL (a.BYTES / 1024 / 1024, 0), '99999999.999') "Used (M)",
       TO_CHAR (NVL ((a.BYTES - NVL (f.BYTES, 0)) / a.BYTES * 100, 0),
                '990.00'
               ) "Used %"
  FROM SYS.cdb_tablespaces d,
       (SELECT con_id,tablespace_name, SUM (BYTES) BYTES
            FROM cdb_data_files
        GROUP BY con_id,tablespace_name) a,
       (SELECT   con_id,tablespace_name, SUM (BYTES) BYTES
            FROM cdb_free_space
        GROUP BY con_id,tablespace_name) f
 WHERE d.con_id=a.con_id
 and d.con_id=f.con_id
 and d.tablespace_name = a.tablespace_name(+)
   AND d.tablespace_name = f.tablespace_name(+)
UNION ALL
SELECT d.con_id,d.status "Status", d.tablespace_name "Name", d.CONTENTS "Type",
       d.extent_management "Extent Management",
       TO_CHAR (NVL (a.BYTES / 1024 / 1024, 0), '99,999,990.900') "Size (M)",
          TO_CHAR (NVL (t.BYTES, 0) / 1024 / 1024, '99999999.999')
       || '/'
       || TO_CHAR (NVL (a.BYTES / 1024 / 1024, 0), '99999999.999') "Used (M)",
       TO_CHAR (NVL (t.BYTES / a.BYTES * 100, 0), '990.00') "Used %"
  FROM SYS.cdb_tablespaces d,
       (SELECT   con_id,tablespace_name, SUM (BYTES) BYTES
            FROM cdb_temp_files
        GROUP BY con_id,tablespace_name) a,
       (SELECT   con_id,tablespace_name, SUM (bytes_cached) BYTES
            FROM v$temp_extent_pool
        GROUP BY con_id,tablespace_name) t
 WHERE d.con_id=a.con_id
   AND d.con_id=t.con_id
   AND d.tablespace_name = a.tablespace_name(+)
   AND d.tablespace_name = t.tablespace_name(+)
   order by 1 asc;