
Jaffar's (Mr RAC) Oracle blog

Whatever topic is discussed on this blog reflects my own findings and views, which may not necessarily match those of others. I strongly recommend that you test any advice given on this blog before you implement it.

Updated: 2018-02-15T21:52:54.126-08:00


Oracle Cloud Infrastructure services and third-party backup tools (CloudBerry)


As part of its Cloud Infrastructure services, Oracle offers the following persistent, I/O-intensive block storage and high-throughput storage options, all manageable through the console and via the CLI:

  • Oracle Cloud Infrastructure Block Volumes
  • Oracle Cloud Infrastructure Object Storage
  • Oracle Cloud Infrastructure Archive Storage
  • Oracle Cloud Infrastructure File Storage
  • Oracle Cloud Infrastructure Data Transfer Service
For details on features, pricing, and more, visit the URL below:

One of our customers was struggling to delete backup data from the Oracle Cloud to avoid charges. Oracle Support suggested using the CloudBerry tools for easier management. Below is an excerpt from the CloudBerry website:

"With CloudBerry Backup you can use Oracle Storage Cloud Service as a cost-effective, remote backup solution for your enterprise data and applications. By backing up your data and applications to Oracle Storage Cloud Service, you can avoid large capital and operating expenditures in acquiring and maintaining storage hardware. By automating your backup routine to run at scheduled intervals, you can further reduce the operating cost of running your backup process. In the event of a disaster at your site, the data is safe in a remote location, and you can restore it quickly to your production systems"

CloudBerry offers the following tools for Oracle storage:
  • CloudBerry Explorer
  • CloudBerry Backup
  • CloudBerry Managed Backup
  • CloudBerry Drive
Visit their website to explore these tools further:

We then used the CloudBerry Backup tool (15-day free trial) to move the backup files out of Oracle cloud storage and remove them from the cloud.

You can download the 15-day free trial and give it a try.

Oracle Cloud - Database Services


This blog post highlights some of the essentials of the Oracle Cloud Database service offerings and their advantages. It also discusses the benefits of Database deployment (DBaaS) and the tools that can be used to automate backup/recovery operations and maintenance tasks.

No doubt, most software houses are now pushing their clients toward cloud services. Though the cloud provides several benefits, one should really understand the benefits, threats, and offerings from the different cloud vendors. Here I am going to discuss the Oracle Cloud Database Services offering.

Database service options:
  • Oracle Database Exadata Cloud at Customer (full Oracle databases hosted on an Oracle Exadata Database Machine inside the customer's DC)
  • DB Service on Bare Metal (dedicated database instances with full administrative control)
  • Exadata Express Service
  • Oracle Database Exadata Cloud Service (full Oracle databases hosted on an Oracle Exadata Database Machine inside the Oracle Cloud)
  • Database Schema Service (a dedicated schema with a complete development and deployment platform managed by Oracle)

These services provide:
  • Rapid provisioning, ready to use in minutes
  • The ability to grow as your business grows
  • Tight security to protect the data
  • Off-loading of your day-to-day maintenance work

Database deployment (earlier known as DBaaS) is a compute environment which provides:
  • A Linux VM
  • Oracle software
  • A pre-created database
  • Cloud tools for automated and on-demand backup and recovery, automated patching and upgrades, a web monitoring tool, etc.

Patching a deployment database: use the UI from the Oracle Cloud DB Service console, or the command-line utility dbaascli.

Backup and recovery of a deployment database:
  • bkup_api — automated backup at the service level
  • dbaascli — automated recovery at the service level

For more information about the features, overview, and pricing of Oracle Cloud Database services, visit the URL below. I will be blogging about each topic separately in the coming posts. Stay tuned.

DBMCLI Utility


Exadata comes with two types of servers: cell (storage) and compute (DB). It is important for Database Machine Administrators (DMAs) to manage and configure both kinds of servers for various purposes. For cell server administration and server settings, we use the cellcli utility. With X6 and higher, the DBMCLI utility can be used to administer and configure server settings on the compute nodes.

The DBMCLI utility is the command-line administration tool for configuring database servers and managing the server environment. Like cellcli, DBMCLI runs on each compute server, enabling you to configure an individual database server. DBMCLI can be used to start and stop the server, to manage server configuration information, and to enable or disable servers. The utility is preconfigured when Oracle Exadata Database Machine is shipped.

Note: If you have enabled Capacity on Demand (CoD) during the Exadata implementation, you can use this utility to increase the CPU count on demand.

To invoke the utility, execute the dbmcli command at the $ prompt:

$ dbmcli [port_number] [-n] [-m] [-xml] [-v | -vv | -vvv] [-x] [-e command]

With the dbmcli utility, you can retrieve server configuration details and metric information, configure SMTP server details, start/stop the server, and so on. For more details, read the Oracle documentation.

Wrong Cell Server name on X6-2 Elastic Rack - Bug 25317550


Two X6-2 Elastic full-capacity Exadata systems were deployed recently. Due to the following bug, the cell names were not properly updated with the client-provided names:

Bug 25317550 : OEDA FAILS TO SET CELL NAME RESULTING IN GRID DISK NAMES NOT HAVING RIGHT SUFFIX

Though this doesn't impact operations, it will certainly create confusion when multiple Exadata systems are deployed in the same data center, because of the exact names of the cells, cell disks, and grid disks.

Note: It is highly recommended to validate the cell names before running onecommand. If you encounter a similar problem, simply change the cell name with alter cell name=[correctname] and proceed with the onecommand execution to avoid the bug.

The default names look like the below:

# dcli -g cell_group -l root 'cellcli -e list cell attributes name'
celadm1: ru02
celadm2: ru04
celadm3: ru06
celadm4: ru08
celadm5: ru10
celadm6: ru12

To change the cell name and have it reflected in the cell disk and grid disk names, follow the procedure below. It must be performed on each cell separately and sequentially (to avoid full downtime):

1) Change the cell name:
    CellCLI> alter cell name=celadm5

2) Confirm the grid disks can be taken offline:
    CellCLI> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus
    (ASMDeactivationOutcome should be YES for all grid disks)

3) Inactivate the grid disks on that cell:
    CellCLI> alter griddisk all inactive
    (Observation: if any voting disks are on this storage server, they will relocate to a surviving storage server.)

4) Change the cell disk names:
    alter celldisk CD_00_ru10 name=CD_00_celadm5;
    alter celldisk CD_01_ru10 name=CD_01_celadm5;
    alter celldisk CD_02_ru10 name=CD_02_celadm5;
    alter celldisk CD_03_ru10 name=CD_03_celadm5;
    alter celldisk CD_04_ru10 name=CD_04_celadm5;
    alter celldisk CD_05_ru10 name=CD_05_celadm5;
    alter celldisk CD_06_ru10 name=CD_06_celadm5;
    alter celldisk CD_07_ru10 name=CD_07_celadm5;
    alter celldisk CD_08_ru10 name=CD_08_celadm5;
    alter celldisk CD_09_ru10 name=CD_09_celadm5;
    alter celldisk CD_10_ru10 name=CD_10_celadm5;
    alter celldisk CD_11_ru10 name=CD_11_celadm5;
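Typing the twelve alter celldisk statements by hand is error-prone, so a small shell loop can generate them instead. This is a sketch using the old suffix ru10 and new name celadm5 from the example above; it only prints the statements, so you can review them before pasting them into a CellCLI session.

```shell
# Generate the celldisk rename statements for one cell.
# OLD_SUFFIX / NEW_NAME follow the example above; adjust per cell.
OLD_SUFFIX=ru10
NEW_NAME=celadm5

for i in $(seq -w 0 11); do
  echo "alter celldisk CD_${i}_${OLD_SUFFIX} name=CD_${i}_${NEW_NAME};"
done
```

Review the generated statements, then run them in a CellCLI session on the cell being renamed.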

OBIEE 12c post-upgrade challenges - unable to save HTML-based dashboard reports


Very recently, an OBIEE v12.2.1.1 installation was upgraded to v12.2.1.3 on an Exalytics machine. Though it wasn't a major upgrade, we had to do it to fix some bugs encountered in v12.2.1.1. For sure, it wasn't an easy walk in the park, during the upgrade or afterwards. I am here to discuss a particular issue we faced post-upgrade; this should be interesting.

After the upgrade to v12.2.1.3, we heard from the application team that all reports were working perfectly fine, except that dashboard reports containing HTML tags could not be saved. The error below was thrown when trying to save the existing HTML reports:

"You do not currently have sufficient privileges to save a report or dashboard page that contains HTML markup"

It appeared to be a very generic error, and we tried all the workarounds we found on various websites and in the Oracle docs. Unfortunately, none of the solutions fixed the issue. One popular piece of advice is to check the HTML markup privileges in Administration:

1. Log in with the OBIEE admin user.
2. Go to the Administration link - Manage Privileges.
3. Search for "Save Content with HTML Markup" and see if it has the necessary privileges; otherwise, assign the necessary roles and re-check the issue.

After granting the additional privilege we could save the existing HTML reports. However, the issue persisted when we created a new dashboard report with an HTML tag; this time a different error message appeared. While doing the research we also opened an SR with Oracle Support. Luckily (I mean it), the engineer assigned to this SR was good enough to catch the issue and provide the solution. He tried to simulate the issue in his own environment and, surprisingly, faced a similar problem. He then contacted the OBIEE development team and raised the above concerns. A few hours later, he provided the workaround below, which seems to be a generic solution to both the first and the second problem we faced.
I believe this is a common issue on this version; follow the steps below:

1. Take a backup of the instanceconfig.xml file, which is located at: /user_projects/domains/bi/config/fmwconfig/biconfig/OBIPS/instanceconfig.xml
2. Add the following entries under the Security section:
   <CheckUrlFreshness>false</CheckUrlFreshness>
   <EnableSavingContentWithHTML>true</EnableSavingContentWithHTML>
3. Restart the presentation services with the commands below:
   cd /refresh/home/oracle/12c/Oracle_Home/Oracle_Config/bi/bitools/bin
   ./stop.sh -i obips1
   ./start.sh -i obips1
4. Now you will see the "Contains HTML Markup" option in Answers; check it and try to save the report.

The solution worked perfectly, and all HTML dashboard reports could be saved. If you are on OBIEE, I strongly recommend that you test the HTML reports, and if you encounter an issue similar to ours, apply the workaround.

Database direct upgrade from 11g to 12c, migrating from X5-2 to Exadata X6-2 - RMAN DUPLICATE DATABASE WITH NOOPEN


At one of our customers recently, the migration of a few production databases from X5-2 to X6-2, together with an upgrade from Oracle 11g to Oracle 12cR1, was successfully delivered. As we all know, there are a handful of methods available to migrate and upgrade Oracle databases. Our objective was simple: migrate an Oracle 11g database from X5-2 to Oracle 12c on X6-2. Since the OS on the source and target remained the same, no conversion was required. Downtime was not a concern, so we didn't need to worry about the various options. Considering the above, we decided to go for a direct RMAN duplicate with the NOOPEN option to upgrade the database. You can also use the same procedure with the RMAN restore/recover method, but RMAN duplicate with NOOPEN simplifies the procedure.

Below is the high-level procedure to upgrade an Oracle 11g database to 12cR1 using the RMAN DUPLICATE command:

  • Copy the preupgrd.sql & utluppkg.sql scripts from the Oracle 12c ?/rdbms/admin home to /tmp (or any other location) on the 11g host;
  • Run the preupgrd.sql script on the source database (Oracle 11gR2 in our case);
  • Review preupgrade.log and apply any recommendations; you can also execute the preupgrade_fixups script to fix the issues raised in preupgrade.log;
  • Execute an RMAN backup (database and archivelog) on the source database;
  • scp the backup sets to the remote host;
  • Create a simple init file on the target with just the db_name parameter;
  • Create a password file on the target;
  • Startup nomount the database on the target;
  • Create TNS entries for the auxiliary (target) and primary (source) databases on the target host;
  • Run DUPLICATE DATABASE with NOOPEN, with all required/adjusted parameters;
  • After the restore and recovery, open the database with the RESETLOGS UPGRADE option;
  • Exit from the SQL prompt and run catupgrd (the new 12c upgrade driver) with the parallel option on the target host:
      nohup $ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql &
  • After completing the catupgrade, get the postupgrade_fixups script and execute it on the target database;
  • Perform the timezone upgrade;
  • Gather dictionary, fixed-objects, database, and system stats accordingly;
  • Run utlrp.sql to ensure all invalid objects are compiled;
  • Review dba_registry to ensure no INVALID components remain.

Above are the high-level steps. If you are looking for a step-by-step procedure, review the URL below. We must appreciate the work done at the blogspot for a very comprehensive procedure and demonstration.
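The core of the method is the DUPLICATE ... NOOPEN run block. Below is a minimal sketch of such an RMAN script, written to a file for review; the database name, backup location, and SPFILE overrides are all hypothetical placeholders, and you should verify the exact DUPLICATE syntax for your connection mode (targetless vs. connected) before running it.

```shell
# Stage a sketch of the RMAN duplicate-with-NOOPEN script (hypothetical names throughout).
cat > duplicate_noopen.rman <<'EOF'
# Run from the 12c home, connected to the nomounted auxiliary instance, e.g.:
#   rman auxiliary sys/***@aux12c cmdfile=duplicate_noopen.rman
run {
  DUPLICATE DATABASE TO aux12c
    NOOPEN                            # leave the clone mounted, not opened
    BACKUP LOCATION '/backup/mig'     # where the scp'ed backup sets live
    SPFILE
      SET control_files='+DATA','+RECO'
      SET db_create_file_dest='+DATA';
}
EOF
```

After the duplicate completes, open the clone with ALTER DATABASE OPEN RESETLOGS UPGRADE and continue with catctl.pl as listed above.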

Oracle EBS Suite blank Login page - Post Exadata migration


As part of EBS database migration on Exadata, we recently deployed a brand new Exadata X5-2L (eighth rack), and migrated an Oracle EBS Database from the typical storage/server technologies.

Below are the environment details:

EBS Suite 12.1.3 (running on two application servers with hardware LOAD BALANCER)
Database: a RAC database with 2 instances

After the database migration, the buildxml and autoconfig procedures went well on both the application and database tiers. However, when the EBS login page was launched, it came up as just a blank page; moreover, the apps passwords could not be changed through the typical procedure. We wondered what had gone wrong, as none of the procedures gave any significant failure indication: everything ran fine and we could see the successful completion messages.

After a quick initial investigation, we found there was an issue with the GUEST user, and also that the profile was not loaded when autoconfig was run on the application server. In the autoconfig log file, we could see that the process had failed to update the password (ORACLE). We then tried all the workarounds recommended on Oracle Support and other websites. Unfortunately, none of them helped.

After spending almost a whole day investigating and analyzing the root cause, we looked at the DB components and their status in the dba_registry view, and found the JServer JAVA Virtual Machine component INVALID. During the Exadata software deployment, at the RDBMS patch installation phase, there had been an error while applying a patch due to a conflict between patches.

Without wasting another second, we ran @catbundle, followed by utlrp.sql.

Guess what: all the issues disappeared. We ran autoconfig on the application servers, after which we could change the apps user password, and the login page appeared too.

When you face an EBS blank login page and none of the usual workarounds fixes the issue, it's a good idea to have a look at dba_registry for any INVALID components.
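The dba_registry check that cracked this case is a one-liner. A sketch, staged here as a script you could run from sqlplus as SYSDBA:

```shell
# Write a small SQL script that flags non-VALID database components.
cat > check_registry.sql <<'EOF'
-- Any row returned here deserves attention before blaming autoconfig
SELECT comp_id, comp_name, version, status
FROM   dba_registry
WHERE  status <> 'VALID';
EOF
```

In our case the JVM component showed INVALID; running @catbundle followed by utlrp.sql cleared it.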

It was quite a nice experience.


Exadata X5-2L deployment & Challenges


A brand new Exadata X5-2L eighth rack (I know the latest is X7 now, but this was for a POC, so no worries) was recently deployed at a customer for an Oracle EBS Exadata migration POC. It wasn't the easy walk in the park I initially presumed. Some challenges (network, configuration) were thrown up during the migration, but we happily overcame them, got the machine installed, and completed the EBS database migration. So, I am going to share yet another Exadata bare-metal deployment story, explaining the challenges I faced and how they were fixed.

Issue 1) DB network cable issues:

After successful execution of elasticConfig, all the Exadata factory IP addresses were set to the client IPs. Though the management network was accessible from outside, the client network was not. When we checked with the network team about enabling the ports on the corporate switch, they confirmed that the ports were enabled; however, the connection showed as not active, and they asked us to investigate the network cables connected to the DB nodes. When we inspected the network cable ports, we didn't find any lights flashing, and after an extensive investigation (switch ports, SFPs on the Exadata and corporate switches, cable status), it was found that the cable pin direction was not properly connected. We also found that the network bonding interfaces (eth4 and eth5) were not up, confirmed with the ethtool eth4 command.

After fixing the cables and bringing up the interfaces (ifup eth4 & eth5), we could see that the cables were connected properly, and we could also see the lights on the ports.

$ ethtool eth4   (output while the interface was not connected)
Settings for eth4:
        Supported ports: [ TP ]
        Supported link modes:   100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  100baseT/Full
                                1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Speed: Unknown!
        Duplex: Unknown! (255)
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: external
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000007 (7)
                               drv probe link
        Link detected: no

Issue 2) Wrong netmask [...]

Oracle R Enterprise Server Configuration


Oracle R Enterprise Server Overview

Oracle R Enterprise includes several components on the server. Together, these components enable an Oracle R Enterprise client to interact with Oracle R Enterprise Server. The server-side components of Oracle R Enterprise are:
  • Oracle Database Enterprise Edition (64-bit)
  • Oracle R Distribution or open source R
  • Oracle R Enterprise Server

Oracle R Enterprise Server consists of the following:
  • The rqsys schema
  • Metadata and executable code in SYS
  • Oracle R Enterprise Server libraries in $ORACLE_HOME/lib (Linux and UNIX) or %ORACLE_HOME%\bin (Windows)
  • Oracle R Enterprise R packages in $ORACLE_HOME/R/library (%ORACLE_HOME%\R\library on Windows)

Oracle R Enterprise Server Requirements

Before installing Oracle R Enterprise Server, verify your system environment and ensure that your user ID has the proper permissions. Oracle R Enterprise runs on 64-bit platforms only:
  • Linux x86-64
  • Oracle Solaris
  • IBM AIX

Oracle Database must be installed and configured as described in the documentation; Oracle R Enterprise requires the 64-bit version of Oracle Database Enterprise Edition.

Installing Oracle R Enterprise Server

Verify that the ORACLE_HOME, ORACLE_SID, R_HOME, PATH, and LD_LIBRARY_PATH environment variables are properly set. For example, you could specify values like the following in a .bashrc file:

export ORACLE_HOME=/u01/app/oracle/product/
export ORACLE_SID=ORCL
export R_HOME=/usr/lib64/R
export PATH=$PATH:$R_HOME/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib:$R_HOME/lib

Download Oracle R Enterprise Server

Go to the Oracle R Enterprise home page on the Oracle Technology Network and select Oracle R Enterprise Downloads. On the Downloads page, select Oracle R Enterprise Server and the Supporting Packages for Linux. The following files are downloaded for Oracle R Enterprise 1.4.1:

ore-server-linux-x86-64-1.4.1.zip
ore-supporting-linux-x86-64-1.4.1.zip

Log in as root, and copy the installers for Oracle R Enterprise Server and the supporting packages across the nodes. For example:

$ dcli -g nodes -l oracle mkdir -p /home/oracle/ORE
$ dcli -g nodes -l oracle -f ore-server-linux-x86-64-1.4.1.zip -d /home/oracle/ORE/
$ dcli -g nodes -l oracle -f ore-supporting-linux-x86-64-1.4.1.zip -d /home/oracle/ORE/

Unzip the supporting packages on each node:

$ dcli -t -g nodes -l oracle unzip /home/oracle/ORE/ore-supporting-linux-x86-64-1.4.1.zip -d /my_destination_directory/

Install the Oracle R Enterprise Server components:

$ dcli -t -g nodes -l oracle "cd /my_destination_directory; ./install.sh -y --admin --sys syspassword --perm permtablespace --temp temptablespace --rqsys rqsyspassword --user-perm usertablespace --user-temp usertemptablespace --pass rquserpassword --user RQUSER"

Installing Oracle R Enterprise Server interactively:

$ ./install.sh -i

Oracle R Enterprise 1.4.1 Server.
Copyright (c) 2012, 2014 Oracle and/or its affiliates. All rights reserved.
Checking platform .................. Pass
Checking R ......................... Pass
Checking R libraries ............... Pass
Checking ORACLE_HOME ............... Pass
Checking ORACLE_SID ................ Pass
Checking sqlplus ................... Pass
Checking ORACLE instance ........... Pass
Checking CDB [...]

How to install Oracle R Enterprise (ORE) on Exadata


Very recently, we deployed the ORE (R Distribution and R Enterprise) 3.1.1 packages on a 4-node Exadata environment. This post discusses the prerequisites and the procedure to deploy Oracle R Distribution v3.1.1.

Note: Ensure you have a recent system (root and /u01) backup before you deploy the packages on the DB server.

What are R and Oracle R Enterprise?

R is third-party, open source software. Open source R is governed by the GNU General Public License (GPL), not by Oracle licensing. Oracle R Enterprise requires an installation of R on the server computer and on each client computer that interacts with the server.

Why Oracle R Distribution?

Oracle R Distribution simplifies the installation of R for Oracle R Enterprise. Oracle R Distribution is supported by Oracle for customers of Oracle Advanced Analytics, Oracle Linux, and Oracle Big Data Appliance.

What is needed for an R Distribution deployment on Oracle Linux 6?

The Oracle R Distribution RPMs for Oracle Linux 6 are:
  • R-3.1.1-2.el6.x86_64.rpm
  • R-core-3.1.1-2.el6.x86_64.rpm
  • R-devel-3.1.1-2.el6.x86_64.rpm
  • libRmath-3.1.1-2.el6.x86_64.rpm
  • libRmath-devel-3.1.1-2.el6.x86_64.rpm
  • libRmath-static-3.1.1-2.el6.x86_64.rpm

If the following dependent RPM is not automatically included, then download and install it explicitly: texinfo-tex-4.13a-8.el6.x86_64.rpm

Installing Oracle R Distribution on Oracle Linux using RPMs

Oracle recommends that you use yum to install Oracle R Distribution, because yum automatically resolves RPM dependencies. However, if yum is not available, you can install the RPMs directly and resolve the dependencies manually.

Download the required RPMs and their dependencies from the link below. To know more about the RPMs and their dependencies, visit the following Oracle website.

You can install the RPMs in the following order:

yum localinstall libRmath-3.1.1-2.el6.x86_64.rpm
yum localinstall libRmath-devel-3.1.1-2.el6.x86_64.rpm
yum localinstall libRmath-static-3.1.1-2.el6.x86_64.rpm
yum localinstall R-core-3.1.1-2.el6.x86_64.rpm
yum localinstall R-devel-3.1.1-2.el6.x86_64.rpm
yum localinstall R-3.1.1-2.el6.x86_64.rpm

Once the RPMs are installed, you can validate the installation using the procedure below: go to the /usr/lib64/R directory on the database server and, as the oracle user, type R. You should see the R startup banner; type q() to exit from the R interface. Repeat on the rest of the DB nodes if you are on RAC.

To uninstall the R Distribution, use the procedure below:

rpm -e R-Rversion
rpm -e R-devel
rpm -e R-core
rpm -e libRmath-devel
rpm -e libRmath

In the next blog post, I will demonstrate how to configure Oracle R Enterprise.
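The dependency order above matters: the libRmath packages must go in before R-core, and the R meta-package last. A small shell loop can drive yum through the list in order; as a sketch, the loop below only echoes the commands so you can review them first (drop the echo to actually install, assuming the RPMs sit in the current directory).

```shell
# Dependency-ordered install list: math libraries first, R meta-package last.
rpms="libRmath-3.1.1-2.el6.x86_64.rpm
libRmath-devel-3.1.1-2.el6.x86_64.rpm
libRmath-static-3.1.1-2.el6.x86_64.rpm
R-core-3.1.1-2.el6.x86_64.rpm
R-devel-3.1.1-2.el6.x86_64.rpm
R-3.1.1-2.el6.x86_64.rpm"

for rpm in $rpms; do
  echo yum -y localinstall "$rpm"   # drop 'echo' to actually install
done
```

On RAC, run the same loop on each DB node (or wrap it in dcli).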

How to expand Exadata Database Storage capacity on demand


Exadata storage expansion

Most of us know the capabilities that Exadata Database Machine delivers. It is a known fact that Exadata comes in fixed rack-size capacities: 1/8 rack (2 DB nodes, 3 cells), quarter rack (2 DB nodes, 3 cells), half rack (4 DB nodes, 7 cells), and full rack (8 DB nodes, 14 cells). Traditionally, when you wanted to expand capacity, it had to be in fixed sizes as well: 1/8 to quarter, quarter to half, and half to full. With the Exadata X5 Elastic configuration, one can also have customized sizing, extending the capacity of the rack by adding any number of DB servers or storage servers, or a combination of both, up to the maximum allowed capacity of the rack.

In this blog post, I will summarize and walk through a procedure for extending Exadata storage capacity, i.e., adding a new cell to an existing Exadata Database Machine.

Preparing to Extend Exadata Database Machine

  • Ensure the hardware is placed in the rack, and that all necessary network and cabling requirements are completed (2 IPs from the management network are required for the new cell).
  • Re-image or upgrade the image:
    o Extract the imageinfo from one of the existing cell servers.
    o Log in to the new cell through ILOM, connect to the console as the root user, and get the imageinfo.
    o If the image version on the new cell doesn't match the existing image version, either download the exact image version and re-image the new cell, or upgrade the image on the existing servers. Review "Reimaging Exadata Cell Node Guidance (Doc ID 2151671.1)" if you want to reimage the new cell.
  • Add the IP addresses acquired for the new cell to the /etc/oracle/cell/network-config/cellip.ora file on each DB node.

To do this, perform the steps below from the first DB server in the cluster:

cd /etc/oracle/cell/network-config
cp cellip.ora cellip.ora.orig
cp cellip.ora cellip.ora-bak

Add the new entries to /etc/oracle/cell/network-config/cellip.ora-bak, then distribute the file to all database nodes:

/usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora

If ASR alerting was set up on the existing storage cells, configure cell ASR alerting for the cell being added. List the cell attributes required for configuring cell ASR alerting by running the following command from any existing storage grid cell:

CellCLI> list cell attributes snmpsubscriber

Apply the same SNMP values to the new cell by running the command below as the celladmin user, as shown in the example:

CellCLI> alter cell snmpSubscriber=((host='',port=162,community=public))

Configure cell alerting for the cell being added. List the cell attributes required for configuring cell alerting by running the following command from any existing storage grid cell:

CellCLI> list cell attributes notificationMethod,notificationPolicy,smtpToAddr,smtpFrom,smtpFromAddr,smtpServer,smtpUseSSL,smtpPort

Apply the same values to the new cell by running the command below as the celladmin user, as shown in the example below:

CellCLI> alter cell notificationmethod='mail,snmp',notificationpolicy='critical,warning,clear',smtptoaddr='',smtpfrom='Exadata',smtpfromaddr='',smtpserver='',smtpusessl=FALSE,smtpport=25

Create cell disks on the cell being added. Log in to the cell as celladmin and run the following command:

CellCLI> create celldisk all

Check that the flash log was created by default:

CellCLI> list flashlog

You should see the name of [...]
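The cellip.ora edit above can be staged safely before dcli pushes it out. A sketch on a local working copy, using a hypothetical new-cell IP of and example existing entries:

```shell
# Stage the cellip.ora change locally before distributing it with dcli.
# is a placeholder for the new cell's IP; the existing entries are examples.
conf=cellip.ora
printf 'cell=""\ncell=""\ncell=""\n' > "$conf"

cp "$conf" "$conf.orig"                    # keep a pristine backup
cp "$conf" "$conf-bak"                     # working copy
echo 'cell=""' >> "$conf-bak"    # append the new cell
diff "$conf.orig" "$conf-bak" || true      # review the change before pushing
# then: /usr/local/bin/dcli -g database_nodes -l root -f cellip.ora-bak -d /etc/oracle/cell/network-config/cellip.ora
```

Keeping the .orig copy means a bad edit can be rolled back with a single cp before anything reaches the other nodes.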

K21 Technologies - Upcoming Oracle trainings


Atul's K21 Technologies offers some quality Oracle training. Below are the upcoming Oracle trainings:

Apps DBA : Install | Patch | Clone | Maintain :

Apps DBA Webinar:…/appsdbawebinar/062687

Oracle GoldenGate Training

What's new in Exadata X7


Oracle announced its new Exadata Database Machine X7 during OOW 2017. Let's walk quickly through the key features of X7.

Key features:
  • Up to 912 CPU cores and 28.5TB memory per rack
  • 2 to 19 DB servers per rack
  • 3 to 18 storage servers per rack
  • Maximum of 920TB flash capacity
  • 2.1PB of disk capacity
  • Delivers 20% faster throughput than earlier models
  • 50% more memory capacity than earlier models
  • 10TB disks (10TB x 12 = 120TB raw per storage server) — the only system on the market today with 10TB disk capacity
  • Increased OLTP performance: about 4.8 million reads and about 4.3 million writes per second
  • Intel Skylake processors with 24 cores
  • Enhanced Ethernet connectivity: supports 25GbE
  • Delivers in-memory performance from shared storage
  • OEDA CLI interface
  • New Exadata smart software: Exadata 18c

Switchover and Switchback simplified in Oracle 12c


Business continuity (disaster recovery) has become a very critical factor for every business, especially in the financial sector. Most banks tend to run regular DR tests to meet central bank regulations on DR testing capabilities. Very recently, there was a request from one of our clients to perform reverse replication and rollback (i.e., switchover and switchback) between HO and DR for one of the business-critical databases. Similar activities were performed with ease on pre-12c databases; however, this was my first experience with Oracle 12c. After spending a bit of time exploring what's new in 12c switchover, it was amazing to learn how 12c simplified the procedure. So, I decided to write a post on my experience.

This post demonstrates how the switchover and switchback procedure is simplified in Oracle 12c. The following is used in the scenario:
  • A 2-instance Oracle 12c RAC primary database (IMMPRD)
  • A single-instance Oracle 12c RAC standby database (IMMSDB)

Look at the current status of both databases:

-- Primary
IMMPRD> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         IMMPRD1          PRIMARY

-- Standby
IMMSDB> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         IMMSDB1          PHYSICAL STANDBY

Before getting into the real action, validate the following to avoid any failures during the role transition:
  • Ensure log_archive_dest_2 is configured on the PRIMARY and STANDBY databases
  • The Media Recovery Process (MRP) is active on the STANDBY and in sync with the PRIMARY database
  • Create STANDBY REDO logs on the PRIMARY, if they don't exist
  • The FAL_CLIENT & FAL_SERVER parameters are set on both databases
  • Verify the TEMP tablespaces on the STANDBY, and add tempfiles if required, as tempfiles created after STANDBY creation won't be propagated to the STANDBY site

Pre-Switchover in 12c

For a smooth role transition, it is important to have everything in place and in sync. Before Oracle 12c, a set of commands had to be run on the PRIMARY and STANDBY to validate the readiness of the systems. With Oracle 12c, this is simplified with the ALTER DATABASE SWITCHOVER ... VERIFY command, which performs the following actions:
  • Verifies the minimum Oracle version, i.e., Oracle 12.1
  • Verifies PRIMARY DB redo shipping
  • Verifies the MRP status on the standby database

Let's run the command on the primary database to validate whether the environments are ready for the role transition:

IMMPRD> alter database switchover to IMMSDB verify;
alter database switchover to IMMSDB verify
*
ERROR at line 1:
ORA-16475: succeeded with warnings, check alert log for more details

When the command was executed, an ORA-16475 error was encountered. For more details, let's walk through the PRIMARY and STANDBY database alert.log files, paying attention to the SWITCHOVER VERIFY WARNING entries.

-- primary database alert.log
Fri Oct [...]
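For reference, the whole 12c role transition reduces to very few statements. Below is a sketch of the sequence, staged as a script using the IMMPRD/IMMSDB names from this post; run each step manually from the right instance and check the alert logs in between, rather than executing the file blindly.

```shell
# Stage a sketch of the 12c switchover statement sequence (IMMPRD -> IMMSDB).
cat > switchover_12c.sql <<'EOF'
-- 1) On the primary: dry-run readiness check
ALTER DATABASE SWITCHOVER TO IMMSDB VERIFY;

-- 2) On the primary: perform the switchover
ALTER DATABASE SWITCHOVER TO IMMSDB;

-- 3) On the new primary (old standby): open it
ALTER DATABASE OPEN;

-- 4) On the new standby (old primary): mount it and restart redo apply
-- STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF
```

Switchback is simply the same sequence run in the opposite direction once the roles have been reversed.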

A few useful Oracle 12cR2 MOS Docs


A few useful MOS Docs are listed below, in case a 12cR2 upgrade is around the corner.

  • How to Upgrade to/Downgrade from Grid Infrastructure 12.2 and Known Issues (Doc ID 2240959.1)
  • Complete Checklist for Upgrading to Oracle Database 12c Release 2 (12.2) using DBUA (Doc ID 2189854.1)
  • 12.2 Grid Infrastructure Installation: What's New (Doc ID 2024946.1)
  • Patches to apply before upgrading Oracle GI and DB to (Doc ID 2180188.1)
  • Differences Between Enterprise, Standard Edition 2 on Oracle 12.2 (Doc ID 2243031.1)
  • 12.2 Does Not List Disks Unless the Discovery String is Provided (Doc ID 2244960.1)

Oracle Clusterware 12cR2 - deprecated and desupported features


Having a clear understanding of the deprecated and desupported features in a new release is as important as knowing the new features of the release. In this short blog post, I would like to highlight the following features that are either deprecated or desupported in 12cR2.

Deprecated:

  • config.sh will no longer be used for the Grid configuration wizard; a replacement script is used instead in 12cR2
  • Placement of OCR and voting files directly on a shared filesystem is now deprecated
  • The older diagnostics collection utility is deprecated in favor of Oracle Trace File Analyzer

Desupported:

  • You are no longer able to use Oracle Clusterware commands that are prefixed with crs_

In my next blog post, I will go over some of the important Oracle Clusterware features in 12cR2. Stay tuned.

References: [...]

SQL Tuning Advisor against sql_id's in AWR


Very recently we were in a situation where we needed to run SQL Tuning Advisor against a bunch of SQL statements that appeared in AWR's ADDM recommendations report. The initial attempt to launch SQL Tuning Advisor against the SQL_ID couldn't go through, as the SQL no longer existed in the shared pool.

Since the sql_id was present in the AWR report, we thought of running the advisor against the AWR data, and found the procedure very nicely and precisely explained at the following blog:

-- Example: how to run SQL Tuning Advisor against a sql_id in AWR

SQL> variable stmt_task VARCHAR2(64);

SQL> exec :stmt_task := DBMS_SQLTUNE.CREATE_TUNING_TASK (begin_snap => 4118, end_snap => 4119, sql_id => 'caxcavmq6zkv9', scope => 'COMPREHENSIVE', time_limit => 60, task_name => 'sql_tuning_task01');

SQL> exec DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'sql_tuning_task01');

SQL> SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'sql_tuning_task01';

-- Format the output, then pull the advisor report
SQL> set long 50000
SQL> set longchunksize 500000
SQL> set pagesize 5000

SQL> SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('sql_tuning_task01') FROM dual;

-- Drop the task once done
SQL> exec DBMS_SQLTUNE.drop_tuning_task(task_name => 'sql_tuning_task01');


Happy reading/learning.

Migrating data from on-premises to cloud


No doubt everyone talks about cloud technologies, and the cloud certainly holds the future for various reasons. Oracle doesn't want to be left behind in the competition and has shifted into top gear with its cloud offerings. This blog explores various Oracle options for migrating on-premises data to the cloud.

Typically, when a database is created in the cloud, the next challenge is loading the data into it. The good thing about data migration is that the methods and procedures remain the same as before. All the usual data migration constraints still apply, such as:

  • OS versions of the on-premises and cloud machines
  • DB versions
  • Character set
  • DB size
  • Data types
  • Network bandwidth

The well-known, DBA-friendly Oracle methods are still valid for cloud data migration too:

  • Logical method (conventional Data Pump)
  • TTS (transportable tablespaces)
  • Cross-platform TTS
  • Unplugging/plugging/cloning/remote cloning of PDBs
  • SQL Developer and SQL*Loader
  • Golden Gate

Usually, you take the data backup, choosing the method that suits your requirements, and upload the backup files to the cloud machine where the database is hosted. Ensure good network and internet speed to expedite the data migration process.

In the example below, a Data Pump dump file is copied from the on-premises machine to the cloud host machine. Once the backup files are transferred to the cloud host, you use the typical method to do the data restore. For more options, read the URL below. For examples using SQL Developer, SQL*Loader and Golden Gate, read the URLs below. [...]
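As a rough sketch of that dump-file transfer flow, the commands below show one way it could look. The database names, schema, paths, and hostname are hypothetical placeholders, not from the original post:

```shell
# Export the schema on the on-premises host
# (assumes the DATA_PUMP_DIR directory object points to the path below)
expdp system@ONPREMDB schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr_export.dmp logfile=hr_export.log

# Copy the dump file to the cloud host over SSH
scp /u01/app/oracle/admin/ONPREMDB/dpdump/hr_export.dmp \
    opc@cloud-db-host:/u01/app/oracle/admin/CLOUDDB/dpdump/

# Import into the cloud database
impdp system@CLOUDDB schemas=HR directory=DATA_PUMP_DIR \
      dumpfile=hr_export.dmp logfile=hr_import.log
```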

Transforming a heap table to a partitioned table - how and what's new in 12c R1 & R2


As part of the daily operational job, one of the typical requests we DBAs get is to convert a regular (heap) table into a partitioned table. This can be achieved either offline or online. This blog demonstrates some of the pre-12c methods and the enhancements in Oracle 12c R1 and R2.

When I had such requests in the past, I used the following offline/online methods, whichever best fit my application's needs.

The offline method involves the following sequence of actions:

  • Create an empty interim partitioned table, indexes, etc.
  • Stop the application services if the non-partitioned table is involved in any operations
  • Migrate the data from the non-partitioned table to the partitioned table
  • Swap the table names
  • Drop the non-partitioned table
  • Compile any invalid packages/procedures/functions/triggers
  • Gather table stats

Note: If any integrity references or dependencies exist, the above procedure differs slightly, with a couple of additional actions. The downside of this workaround is the service interruption during the transformation.

To avoid any service interruption, Oracle provides the online redefinition feature to perform the action online, without impacting ongoing DML operations on the table. The redefinition option involves the following sequence of actions:

  • Validate whether the table can use the redefinition feature (DBMS_REDEFINITION.CAN_REDEF_TABLE procedure)
  • Create the interim partitioned table and all indexes
  • Start the online redefinition process (DBMS_REDEFINITION.START_REDEF_TABLE procedure)
  • Copy dependent objects (DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS procedure)
  • Perform data synchronization (DBMS_REDEFINITION.SYNC_INTERIM_TABLE procedure)
  • Stop the online redefinition process (DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure)
  • Swap the table names
  • Compile any invalid packages/procedures/functions/triggers
  • Gather table stats

However, this sort of action is simplified in Oracle 12c R1 and made even easier in R2.
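The online redefinition steps above can be sketched as follows. The schema and table names (SCOTT.SALES, SCOTT.SALES_INTERIM) are hypothetical examples; SALES_INTERIM is assumed to be a pre-created, empty partitioned table with a matching column layout:

```sql
BEGIN
  -- 1. Check that the table is a candidate for online redefinition
  DBMS_REDEFINITION.CAN_REDEF_TABLE(uname => 'SCOTT', tname => 'SALES');

  -- 2. Start redefinition (copies the current rows into the interim table)
  DBMS_REDEFINITION.START_REDEF_TABLE(uname      => 'SCOTT',
                                      orig_table => 'SALES',
                                      int_table  => 'SALES_INTERIM');
END;
/

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- 3. Clone indexes, constraints, triggers and grants onto the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(uname      => 'SCOTT',
                                          orig_table => 'SALES',
                                          int_table  => 'SALES_INTERIM',
                                          num_errors => l_errors);

  -- 4. Apply DML captured since START_REDEF_TABLE
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'SALES', 'SALES_INTERIM');

  -- 5. Finish: the tables are swapped and SALES is now partitioned
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'SALES', 'SALES_INTERIM');
END;
/
```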
The following demonstrates the 12c R1 and R2 methods.

12cR1 EXCHANGE PARTITION

With the EXCHANGE PARTITION feature, data can be quickly loaded from a non-partitioned table into a partitioned table. Once you have the partitioned table, use the following example to exchange the data of the heap table into the partitioned table. In this example, the existing data is moved into a single partition of the partitioned table:

ALTER TABLE sales EXCHANGE PARTITION p1 WITH TABLE non_sales_part;

12cR2 MODIFY

With the 12cR2 ALTER TABLE ... MODIFY option, a non-partitioned table can be easily transformed into a partitioned table, either offline or online. The examples below demonstrate creating a daily interval partition.

Offline procedure:

ALTER TABLE sales MODIFY
  PARTITION BY RANGE (column_name) INTERVAL (1)
  (partition p1 values less than (100),
   partition p2 values less than (1000));

Online procedure:

ALTER TABLE sales MODIFY
  PARTITION BY RANGE (column_name) INTERVAL (1)
  (partition p1 values less than (100),
   partition p2 values less than (1000))
  ONLINE UPDATE INDEXES (index1, index2 LOCAL);

References: [...]

Oracle Private Cloud Appliance (PCA) - when and why?


What has become critical in today's competitive business is the ability to fulfill sudden and unpredictable demands as they arise. This requires data center agility, rapid deployments and cloud-ready solutions. To succeed in today's modern business, companies must be ready to deploy innovative applications and quickly adapt to changes in the market.

Oracle Private Cloud Appliance (PCA) is an integrated, 'wire once' converged system designed for fast cloud and rapid application deployments in the data center. PCA is a one-stop system for all your applications, where mixed operating system (Linux, Solaris, RHEL and Windows) workloads can be consolidated onto a single machine.

It has been observed of late, especially here in the GCC, that more and more organizations are moving towards PCA adoption. Hence, I thought of writing a blog explaining the prime features and functionality of PCA. Once I get some hands-on experience (in the very near future), I would love to write about some advanced PCA concepts and how organizations really benefit from it.

Here are the key features of PCA:

  • Engineered system that comes fully prebuilt and preconfigured
  • Cost-effective solution for most Oracle and non-Oracle workloads
  • Automated installation and configuration software controller
  • Prebuilt OVM to speed up Oracle deployments
  • Single-button DR solutions through OEM
  • Pay-only-for-what-you-use policy
  • Flexibility to use Oracle storage or any pre-existing storage
  • PCA certifies all Oracle software that is certified to run on OVM

Deployment of PCA at the data center is very straightforward and simple:

  • The system will be ready within minutes
  • You can add virtual machines (OVM) either with some basic configuration or using the standard OVM templates
  • No additional software licenses are required on PCA
  • Greatly reduces the time required for deployments: a new deployment can be achieved in hours rather than days, in contrast to traditional infrastructure
  • Easy integration into existing data center models
  • OVM included at no additional cost

The picture below depicts the typical architecture and what PCA comprises and supports. A pair of management servers is installed in an active/standby configuration for HA. The master management node runs the full set of services, whereas the standby node runs only a subset of services. The compute nodes (Oracle Server X series) constitute the virtualization platform and provide the processing power and memory capacity for the servers they host. The entire functionality is orchestrated by the (master) management node.

References: [...]

Stay tuned for more updates on this.

How to stabilize, improve and tune Oracle EBS Payroll and Retropay process


I visited a few customers of late to review performance issues pertaining to their Oracle EBS Payroll and RetroPay processes. I am not sure how many are aware of the tools Oracle provides to analyze and improve any Oracle EBS module, including Payroll and RetroPay. To get proactive with Oracle EBS, refer to the following note:

Get Proactive with Oracle E-Business Suite - Product Support Analyzer Index (Doc ID 1545562.1)

I must say, after running through the analyzers (RetroPay and Payroll) and implementing their suggestions, significant performance gains were achieved without making any change to the queries. I would strongly recommend running the analyzers against the different Oracle EBS modules to get proactive and achieve performance improvements and stability.

Below is a screenshot of the Payroll analyzer report, which explains the findings and recommendations:

A few good MOS notes to stabilize, improve and tune the RetroPay and Payroll processes in an Oracle EBS environment:

  • EBS Payroll RetroPay Analyzer (Doc ID 1512437.1)
  • EBS Database Parameter Settings Analyzer (Doc ID 1953468.1)
  • EBS Payroll Analyzer (Doc ID 1631780.1)
  • EBS HRMS Payroll - RetroPay Advisor (Doc ID 1482827.1)
  • RetroPay Analyzer Tool FAQ (Doc ID 1568129.1)

[...]

12cR1 RMAN Restore/Duplicate from ASM to Non ASM takes a longer time waiting for the ASMB process


Yet another exciting journey with Oracle bugs and challenges. Here is the story.

One of our recent successful migrations was a single-instance Oracle EBS 12cR1 database to an Oracle SuperCluster M7 as a two-instance RAC database on the same DB version. Subsequently, the customer wanted to run through EBS cloning and set up an Oracle Active Data Guard configuration.

The target systems are not SuperCluster. The requirement for the clone and the data guard setup was to configure a single-instance database on a filesystem (non-ASM). After initiating the cloning procedure using the RMAN DUPLICATE TARGET DATABASE method, we noticed that RMAN was taking significant time to restore (ship) the data files to the remote server. Also, the following warning messages appeared in the alert.log:

ASMB started with pid=63, OS id=18085
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB
Sat Mar 11 13:53:24 2017
ASMB started with pid=63, OS id=18087
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB
Sat Mar 11 13:53:27 2017
ASMB started with pid=63, OS id=18089
WARNING: failed to start ASMB (connection failed) state=0x1 sid=''
WARNING: ASMB exiting with error
Starting background process ASMB

The situation raised a couple of concerns in our minds:

  • Why is the restore so slow in RMAN? (There is no network latency, and the DB files are not particularly large.)
  • Why is Oracle looking for an ASM instance on a non-cluster home? (Not even a standard Grid home.)

After some initial investigation, we came across the following MOS Docs:

  • 12c RMAN Operations from ASM To Non-ASM Slow (Doc ID 2081537.1)
  • WARNING: failed to start ASMB after RAC Database on ASM converted to Single Instance Non-ASM Database (Doc ID 2138520.1)

According to the above MOS Docs, this is expected behavior due to unpublished BUG 19503821: RMAN CATALOG EXTREMELY SLOW WHEN MIGRATING DATABASE FROM ASM TO FILE SYSTEM. You need to apply patch 19503821 to overcome the bug. If you have a similar requirement, make sure you apply the patch in your environment before you proceed with the restore/duplicate procedure.

-- Excerpt from the above notes:

APPLIES TO:
Oracle Database - Enterprise Edition - Version to [Release 12.1]
Information in this document applies to any platform.

SYMPTOMS:
1. RAC Database with ASM has been converted or restored to a standalone single-instance non-ASM database.
2. The RDBMS alert.log shows the continuous messages above.
3. RMAN Restore/Duplicate from ASM to non-ASM in 12.1 takes a longer time waiting for the ASMB process.
4. Any RMAN command at the mount state which involves a non-ASM location can take more time.

SOLUTION:
Apply the patch 19503821; if it is not available for your version/OS, please log an SR with support to get the patch for your version.

[...]
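For reference, a duplicate-to-filesystem run of the kind described above looks roughly like this. The database names, connection strings, disk group and paths are hypothetical placeholders, and the exact clause set depends on your environment:

```
-- Run from the auxiliary (non-ASM) host; connections are illustrative
RMAN> CONNECT TARGET sys@IMMPRD AUXILIARY sys@IMMCLONE

-- Active duplication from an ASM-based source (+DATA) to a filesystem target
RMAN> DUPLICATE TARGET DATABASE TO IMMCLONE
        FROM ACTIVE DATABASE
        SPFILE
          SET db_file_name_convert  '+DATA','/u02/oradata/IMMCLONE'
          SET log_file_name_convert '+DATA','/u02/oradata/IMMCLONE'
        NOFILENAMECHECK;
```

Without patch 19503821, it is during this restore phase that the auxiliary instance repeatedly tries to spawn ASMB and the data file shipping crawls.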

12cR2 new features for Developers and DBAs - Here is my pick (Part 2)


In Part 1, I outlined a few (my pick) of the 12cR2 new features useful for Developers and DBAs. In Part 2, I am going to discuss a few more new features.

Read/Write and Read-Only Instances: Read-write and read-only database instances of the same primary database can coexist in an Oracle Flex Cluster.

Advanced Index Compression: Prior to this release, the only form of advanced index compression was low compression. Now you can also specify high compression, which provides even more space savings than low compression.

PDB Enhancements:

  • I/O rate limits for PDBs
  • Different character sets for PDBs in a CDB
  • PDB refresh, to periodically propagate changes from a source PDB to its cloned copy
  • CONTAINERS hint: when a CONTAINERS() query is submitted, recursive SQL statements are generated and executed in each PDB. Hints can be passed to these recursive SQL statements by using the CONTAINERS statement-level hint.
  • Cloning a PDB no longer requires R/W mode: cloning of a pluggable database (PDB) resolves the issue of having to set the source system to read-only mode before creating a full or snapshot clone of a PDB.
  • Near-Zero Downtime PDB Relocation: this new feature significantly reduces downtime by leveraging the clone functionality to relocate a pluggable database (PDB) from one multitenant container database (CDB) to another. The source PDB is still open and fully functional while the actual cloning operation is taking place.
  • Proxy PDB: a proxy pluggable database (PDB) provides fully functional access to another PDB in a remote multitenant container database (CDB). This feature enables you to build location-transparent applications that can aggregate data from multiple sources that are in the same data center or distributed across data centers.

Oracle Data Pump Parallel Export of Metadata: The PARALLEL parameter for Oracle Data Pump, which previously applied only to data, is extended to include metadata export operations. The performance of Oracle Data Pump export jobs is improved by enabling the use of multiple processes working in parallel to export metadata.

Renaming Data Files During Import

Oracle RAC:

  • Server Weight-Based Node Eviction: server weight-based node eviction acts as a tie-breaker mechanism in situations where Oracle Clusterware needs to evict a particular node or a group of nodes from a cluster, in which all nodes represent an equal choice for eviction. In such cases, the server weight-based node eviction mechanism helps to identify the node or the group of nodes to be evicted based on additional information about the load on those servers. Two principal mechanisms, a system-inherent automatic mechanism and a user input-based mechanism, exist to provide the respective guidance.
  • Load-Aware Resource Placement: load-aware resource placement prevents overloading a server with more applications than the server is capable of running. The metrics used to determine whether an application can be started on a given server, either as part of the startup or as a result of a failover, are based on the anticipated resource consumption of the application as well as the capacity of the server in terms of CPU and memory.
  • Enhanced Rapid Home Provisioning and Patch Management

TDE Tablespace Live Conversion: You can now encrypt, decrypt, and rekey existing tablespaces with Transparent Data Encryption (TDE) tablespace live conversion. A TDE tablespace can be easily deployed, performing the initial encryption [...]
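As a small illustration of the CONTAINERS clause and statement-level hint mentioned above, here is a sketch; the table name (scott.emp) and the inner hint text are hypothetical examples:

```sql
-- Run from the CDB root: aggregate SCOTT.EMP rows across all open PDBs
SELECT con_id, COUNT(*)
FROM   CONTAINERS(scott.emp)
GROUP  BY con_id;

-- Pass a hint down to the recursive SQL executed in each PDB
SELECT /*+ CONTAINERS(DEFAULT_PDB_HINT='FULL(emp)') */ con_id, COUNT(*)
FROM   CONTAINERS(scott.emp)
GROUP  BY con_id;
```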

12cR2 new features for Developers and DBAs - Here is my pick (Part 1)


Since the announcement of 12cR2 on-premises availability, the Oracle community has become energetic, busy tweeting and blogging about the new features and demonstrating installations and upgrades. Hence, I have decided to pick my favorite list of 12cR2 new features for Developers and DBAs. Here is the high-level summary, until I write a detailed post for each feature (excerpts from the Oracle 12cR2 new features document).

Command history for SQL*Plus: Pre-12cR2, this could only be achieved through a workaround; now, the history command does the magic for you.

Materialized Views: Real-Time Materialized Views: Materialized views can be used for query rewrite even if they are not fully synchronized with the base tables and are considered stale. Using materialized view logs for delta computation together with the stale materialized view, the database can compute the query and return correct results in real time. For materialized views that can be used for query rewrite all of the time, with the accurate result being computed in real time, the result is optimized and fast query processing for best performance. This alleviates the stringent requirement of always having fresh materialized views for the best performance.

Materialized Views: Statement-Level Refresh: In addition to ON COMMIT and ON DEMAND refresh, materialized join views can be refreshed when a DML operation takes place, without the need to commit the transaction. This is predominantly relevant for star schema deployments. The new ON STATEMENT refresh capability provides more flexibility to application developers to take advantage of materialized view rewrite, especially for complex transactions involving multiple DML statements. It offers built-in refresh capabilities that can replace customer-written trigger-based solutions, simplifying an application while offering higher performance.

Oracle Data Guard Database Compare: This new tool compares data blocks stored in an Oracle Data Guard primary database and its physical standby databases. Use this tool to find disk errors (such as lost writes) that cannot be detected by other tools like the DBVERIFY utility.

Subset Standby: A subset standby enables users of Oracle Multitenant to designate a subset of the pluggable databases (PDBs) in a multitenant container database (CDB) for replication to a standby database.

Automatically Synchronize Password Files in Oracle Data Guard Configurations: This feature automatically synchronizes password files across Oracle Data Guard configurations. When the passwords of SYS, SYSDG, and so on are changed, the password file at the primary database is updated and the changes are then propagated to all standby databases in the configuration.

Preserving Application Connections to an Active Data Guard Standby During Role Changes: Currently, when a role change occurs and an Active Data Guard standby becomes the primary, all read-only user connections are disconnected and must reconnect, losing their state information. This feature enables a role change to occur without disconnecting the read-only user connections. Instead, the read-only user connections experience a pause while the state of the standby database is changed to primary. Read-only user connections that use a service designed to run in both the primary and physical standby roles are maintained. Users connected through a physical standby only role con[...]
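To illustrate the real-time materialized view feature described above, here is a minimal sketch. The table and MV names are hypothetical; ENABLE ON QUERY COMPUTATION is the 12cR2 clause that allows the stale MV to be used for rewrite with on-the-fly delta computation:

```sql
-- Materialized view log, needed for delta computation against the base table
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (prod_id, amount)
  INCLUDING NEW VALUES;

-- A real-time materialized view: usable for query rewrite even when stale
CREATE MATERIALIZED VIEW sales_mv
  REFRESH FAST ON DEMAND
  ENABLE QUERY REWRITE
  ENABLE ON QUERY COMPUTATION
AS
  SELECT prod_id,
         SUM(amount) AS total_amount,
         COUNT(*)    AS cnt
  FROM   sales
  GROUP  BY prod_id;
```

With this in place, queries matching the MV's defining aggregation can be rewritten against sales_mv plus the log deltas, returning correct results without waiting for a refresh.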

Oracle Enterprise Manager 13c R2 configuration on Win 2008 R2 server stops at 78%


The Oracle Enterprise Manager (OEM) 13cR2 configuration on a Windows 2008 R2 server was stopping at 78% completion while performing the BI Publisher configuration. Apparently, the problem exists in releases before 13cR2 as well.

All OMS components (WebTier, Oracle Management Server, JVMD Engine) are stopped and started during the BI Publisher configuration operations. Unfortunately, the Windows service for the OMS was taking too long to complete the start operation of the WebTier (HTTP), and the installation stopped at 78% and didn't move forward. Initially, I started looking at WebTier startup issues; in the process, I tried disabling the firewall and also excluded the installation directory from the antivirus on the Windows server, but the result remained the same.

I cleaned up the previous installation and started the OEM 13cR2 installation over on the server, but this time I didn't check the BI Publisher configuration option, as I wanted to exclude the BI Publisher configuration and ensure the OEM installation completed without any issues. Despite the fact that I didn't check the option, OEM started configuring BI and stopped at exactly 78%; the issue remained. The error messages in the sysman and other OMS logs didn't provide any useful hints; in fact, they were misleading and took me in the wrong direction.

I then came across MOS note 1943820.1, and after applying the solution, the OEM configuration completed successfully. Here is the excerpt from the MOS note:

On some occasions, httpd.exe will fail to start if you are missing, or have a damaged, Microsoft Visual C++ Redistributable 64-bit package. It may report the error above, or fail with 0 bytes of details in the OHS log. Install the Microsoft Visual C++ Redistributable Package (x64) anyhow.

1. You can obtain this file at:
2. Download the Microsoft Visual C++ Redistributable Package (x64)
3. You should have a file called vcredist_x64.exe. Run the installation.
4. Try starting OHS again.

Note: I understood why Oracle still does the BI Publisher configuration even though I didn't select the option. When you don't select the option, BI Publisher is configured but disabled, so that you can easily enable it in the future.

References
OHS 12c Fails To Start On Windows Server 2008 X64, with no detailed errors. (Doc ID 1943820.1)

[...]