Oracle Expert Blog

Updated: 2017-10-30T07:43:57.561-04:00




EXTREME PERFORMANCE WITH ORACLE DATABASE APPLIANCE: PERFORMANCE, RELIABILITY AND HIGH AVAILABILITY ON THE CLOUD AND IN THE DATA CENTER

Extreme Performance with Oracle Database Appliance (ODA)

Whether your company is rather small or part of a major corporation, Oracle Database Appliance (ODA) can accelerate, enhance, and improve your database services at any desired level. ODA supports RAC, RAC One Node, and Enterprise Edition standalone databases. With ODA, DBAs can work faster and more efficiently, providing the agility that developers and software engineers need for every major project to be delivered on time. Database service leads and DBAs can attain further efficiency from the key elements of the ODA framework: simplicity, pre-built optimization, and affordability at any SLA. Whether used in a remote or branch office, for development or testing, or simply as a solution in a box, ODA meets high expectations for performance, reliability, and availability at all times. It further provides capabilities for full-stack hardware refresh, database consolidation, and backup and recovery. In addition, ODA supports other Oracle technologies such as Transparent Data Encryption (TDE), integrates with BI tools such as Hyperion, and hosts application deployments such as PeopleSoft and Oracle E-Business Suite.

ODA ARCHITECTURAL ESSENTIALS

Storage

In principle, the Oracle Database Appliance uses twenty hard drives for storing user data. These disks are 600 GB SAS hard drives, allowing for a total of 12 TB of raw storage. They are hot-pluggable, front mounted, and accessible to each of the two servers in the Oracle Database Appliance. The appliance is engineered to tolerate hardware component failures, and the onboard storage subsystem is designed for maximum availability.
Whenever one server loses access to the disks, the other server still has access. When a new Oracle Database Appliance is configured for use, Oracle Automatic Storage Management (ASM) is used to create and manage the underlying disk groups. The ASM +DATA and +RECO disk groups are created during the installation of the Oracle Database Appliance. When configuring the appliance, DBAs can choose between "External Backups" and "Internal Backups"; the +RECO disk group is sized larger if the "Internal Backup" option is selected during the corresponding installation process.

Performance Architecture

The testing for this white paper was performed on an Oracle Database Appliance X6-2 system, which surpasses the capabilities of the Oracle Database Appliance X4-2 system. The X4-2 consists of two x86 servers, each with two 12-core Intel E5-2697 CPUs and 256 GB of memory. The two nodes use direct-attached storage comprised of twenty 900 GB, 10,000 rpm SAS hard disk drives and four 200 GB SAS solid state disks (SSDs). Thus the Oracle Database Appliance X4-2 provides a total of 512 GB of memory, 48 CPU cores, 18 TB of raw HDD storage, and 800 GB of SSD storage. With the addition of the Expansion Storage Shelf, the available storage can be doubled. More specifically, the Oracle Database Appliance X6-2S and X6-2M are engineered as single rack-mountable systems that provide the performance benefits of the latest-generation Intel® Xeon® processors and NVM Express (NVMe) flash storage. The X6-2S is powered by one 10-core Intel® Xeon® processor E5-2630 v4 and 128 GB of main memory, expandable to 384 GB. The X6-2M doubles the processor and memory resources, offering two 10-core Intel® Xeon® E5-2630 v4 processors and 256 GB of main memory, expandable up to 768 GB.
Both systems come configured with 6.4 TB of high-bandwidth NVM Express (NVMe) flash for database storage and offer the option to double the storage capacity to 12.8 TB.

Networking

Indeed, the Oracle Database Appliance pro[...]
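Once the appliance is installed, the sizing of the +DATA and +RECO disk groups described above can be checked from any database or ASM instance. The following is a minimal sketch, assuming the disk group names created by the ODA installer; V$ASM_DISKGROUP reports space in megabytes:

```sql
-- Inspect the ODA-created ASM disk groups and their remaining free space.
-- TOTAL_MB and FREE_MB are reported in megabytes by V$ASM_DISKGROUP.
SELECT name,
       state,
       type                            AS redundancy,
       ROUND(total_mb / 1024)          AS total_gb,
       ROUND(free_mb  / 1024)          AS free_gb,
       ROUND(100 * free_mb / total_mb) AS pct_free
FROM   v$asm_diskgroup
WHERE  name IN ('DATA', 'RECO')
ORDER  BY name;
```

Watching PCT_FREE on +RECO is particularly useful when the "Internal Backup" option was chosen, since backups then compete with flashback and archived logs for that disk group.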

My Collaborate16 Presentation Slides on Oracle Preventive Data Protection Strategies



Oracle Virtual Clusterware for the Cloud Box


My Presentation (Executive Summary and Slides) at the NYOUG 2015 General Meeting at BMCC in New York, NY

EXECUTIVE OVERVIEW

In a world of containers, virtual boxes (VMs), logical partitions (LPARs) and physical partitions (PPARs), Software-Defined Networks (SDN), and other virtualization approaches, a real implementation of virtual platforms has not been accomplished for a specific purpose. Therefore, tuning a virtual machine to handle a production database is quite a specialized drill for Architects, DBAs, and even Developers. Likewise, attaining the goal of simulating an environment such as an engineered system in order to demonstrate application performance could be less expensive, but somewhat unrealistic when attempting to measure its true workload. This presentation discusses best practices for optimal Oracle12c Clusterware availability, performance, scalability, and reliability. It also describes how to coordinate strategies for Oracle12c stack management and cloud integration from a comprehensive and clear ITIL perspective, and discusses methods and techniques to overcome installation and maintenance issues and prevent downtime. While one can install and utilize Oracle Virtual Clusterware in one potentially unbounded virtual box, and create there a cost-effective cluster database to further implement technologies such as ASM, ADVM, ACFS, and RAT, among others, the Oracle12c Virtual Clusterware framework can also hold 2, 4, 2^n, or thousands of nodes successfully for optimal cloud production. [...]

ACE Series: My IOUG Collaborate15 Paper


Case Studies Moving ASM Files
Anthony D. Noriega, ADN Research
NJIT Alumnus, Montclair State University Alumnus

Abstract

When moving ASM database files in any scenario, it is important to analyze the convenience of taking such an action before proceeding. The database administrators should first ensure that this is the last option, and the most convenient one, for resolving the scenario. For a successful move, it is important to look at the array of methods and techniques available to accomplish this task successfully and appropriately in each scenario, including tools and utilities, e.g., RMAN, the SQL and PL/SQL APIs, and task manageability via Oracle Enterprise Manager Cloud Control. Each option offers a constrained domain and scope; therefore, this task should be carefully planned and executed with the required regression testing.

Target Audience

This paper is addressed primarily to Oracle DBAs, Architects, IT Managers, and Storage Administrators (e.g., SAN Administrators) who wish to be up to date with ASM storage management technology and participate hands-on in these tasks, their planning, and their step-by-step execution.

Executive Summary

Upon completion of this study, the reader should be able to differentiate among strategies for accomplishing an ASM file move through basic inspection of file type, disk capacity, disk group redundancy, available space, granularity, and various other factors concerning storage networking and future capacity planning.
Therefore, the following become specific target objectives, namely:
• Attaining a thorough understanding of the possible scenarios and options to move or rename ASM files of any kind.
• Establishing a set of best practices to determine the best applicable method to use and the best approach to its execution.
• Handling errors and recovering from disaster while performing an ASM file move.

Background: Some A Priori Considerations

When moving ASM files from one location to another, the DBA must carefully consider the key reason to move the file. In general, planning storage and disk capacity accordingly should make file moves avoidable. However, many reasons, such as requirements for file replication or multiplexing, low ASM storage redundancy, a physical device being full, previous disk group or volume mismanagement, and the general inability to increase the storage space, can lead to a file move, usually of a data file. The type of file to move and its role via its logical container (e.g., a tablespace) can also introduce unforeseen constraints, which could jeopardize the consistency and integrity of the database and clusterware storage involved directly or indirectly.
Besides, the following considerations are important: Oracle ASM metadata resides within the disk group and contains information that Oracle ASM uses to control a disk group, including:
• The disks that belong to a disk group
• The filenames of the files in a disk group
• The location of disk group data file extents
• The amount of space that is available in a disk group
• A redo log recording information about atomically changing metadata blocks
• Oracle ADVM volume information

Moving Considerations by File Type

Control Files

When copying or moving control files, ensure that each properly renamed copy of the control file is consistent with all other ASM files, and that the target disk has the same properties, such as, for instance, being created using the same ASM template or at least preserving the appropriate type of striping. Likewise, the copied file should be reachable by the instance, so the disk should not be seen as foreign to the containing database and the supporting ASM instances. The DBA can use ASMCMD or, preferably, PL/SQL supplied packages such as DBMS_FILE_TRANSFER to accomplish this task. Indeed, moving a control file will justify a reconfiguration of the initialization parameter files, i.e., [...]
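To make the DBMS_FILE_TRANSFER option named above concrete, here is a hedged sketch of moving a data file between two disk groups. The directory objects, disk group names, and file names are all illustrative, not taken from the paper, and the exact OMF file name on the target would normally be confirmed afterwards from V$DATAFILE:

```sql
-- Hypothetical example: copy a data file from +DATA to +RECO2
-- with DBMS_FILE_TRANSFER.COPY_FILE. All names are illustrative.
CREATE DIRECTORY src_dir AS '+DATA/orcl/datafile';
CREATE DIRECTORY dst_dir AS '+RECO2/orcl/datafile';

-- Take the file offline before the copy to keep it consistent.
ALTER DATABASE DATAFILE '+DATA/orcl/datafile/users.259.123456789' OFFLINE;

BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'SRC_DIR',
    source_file_name             => 'users.259.123456789',
    destination_directory_object => 'DST_DIR',
    destination_file_name        => 'users_copy.dbf');
END;
/

-- Repoint the control file, then recover and bring the file online.
ALTER DATABASE RENAME FILE '+DATA/orcl/datafile/users.259.123456789'
                        TO '+RECO2/orcl/datafile/users_copy.dbf';
RECOVER DATAFILE '+RECO2/orcl/datafile/users_copy.dbf';
ALTER DATABASE DATAFILE '+RECO2/orcl/datafile/users_copy.dbf' ONLINE;
```

RMAN's BACKUP AS COPY / SWITCH DATAFILE approach accomplishes the same move without the manual RENAME step; which method applies depends on the scenario, as the paper argues.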

Slides from my IOUG Collaborate15 Presentation



Oracle Transparent Data Encryption for the Wise DBA


Introducing an Oracle TDE Simplified Encryption Method

Oracle Transparent Data Encryption technology utilizes a variety of methods and techniques to encrypt a database at both the logical and physical object levels, and provides support for a variety of options such as encryption domain instantiation (SALT), wallet-driven encryption, encryption methods and models, and a variety of encryption algorithms; thus, Oracle TDE attains an outstanding level of data protection and data privacy. This session presents a quick and efficient way to get the best from Oracle TDE without major obfuscation, introducing an agile model for optimal encryption with Oracle TDE, including control over network processes under TCP/IP, HTTP, and multi-VPN, operating system authentication, operating system level encryption, and complex network topology environments. The core purpose of this presentation is to demystify Oracle Transparent Data Encryption as too complex, getting the best benefits out of it rather than fearing the worst from encrypting your database.

Exhibit 1. Oracle TDE Conceptual View
Exhibit 2. Oracle12c provides support with native pre-built encryption

Main Configuration Method

With the Oracle instance down, execute the following step-by-step process:

1. Customize the sqlnet.ora file as provided below:

# sqlnet.ora Network Configuration File: C:\app\oracle\product\12.1.0\dbhome_1\network\admin\sqlnet.ora
# Generated by Oracle configuration tools.
# This file is actually generated by netca. But if customers choose to
# install "Software Only", this file won't exist and without the native
# authentication, they will not be able to connect to the database on NT.
SQLNET.AUTHENTICATION_SERVICES = (NTS)
NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
SSL_CLIENT_VERSION = 0
SSL_VERSION = 1.0
SSL_CLIENT_AUTHENTICATION = FALSE
SQLNET.INBOUND_CONNECT_TIMEOUT = 0
ADR_BASE = C:\app\oracle\product\12.1.0\dbhome_1\log
# SQLNET.WALLET_OVERRIDE = FALSE
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = C:\app\oracle\product\12.1.0\dbhome_1\NETWORK\ADMIN)
    )
  )

Exhibit 3. Successfully customized pluggable sqlnet.ora file.

2. Rename the sqlnet.ora file to sqlnet.ora.tmp.
3. Start the listener without a sqlnet.ora file.
4. Once the listener is successfully started, plug the sqlnet.ora file back in.
5. Start the database instance and other Oracle database services or processes.
6. Create the Oracle database instance TDE wallet's master key with the command:

   ALTER SYSTEM SET KEY IDENTIFIED BY "Password";

   where "Password" is your password as entered.

   Exhibit 4. Setting the Encryption Master Key

   In Oracle12c, this command will automatically create the wallet with the master key. In previous releases, including Oracle11g, the DBA will need to use either mkstore or orapki to create the wallet accordingly.

7. Create an Oracle wallet and an auto-login local wallet using the mkstore or orapki utilities. mkstore does this with one single command; orapki requires two commands for the equivalent outcome. It is recommended to use the TNS_ADMIN directory as a DBA-friendly reference to create the wallet. This directory should be referenced in the WALLET_LOCATION parameter of the just-plugged sqlnet.ora file.

7.1 Using mkstore (skip if using orapki)

Navigate to the wallet_location directory, e.g., TNS_ADMIN, and enter the command below. For Oracle11g Release 2 and earlier releases, including Oracle10g:

   mkstore -wrl . -create

For Oracle12c, the DBA must specify the encryption wallet location, regardless of the current directory:

   mkstore -wrl "C:\app\oracle\product\12.1.0\dbhome_1\NETWORK\ADMIN" -c[...]
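With the wallet open and the master key set as in step 6, column- and tablespace-level encryption can then be exercised. The following is a minimal sketch under that assumption; the table, column, and tablespace names are illustrative, not from the post:

```sql
-- Assumes the wallet is open and the master key has been set
-- (ALTER SYSTEM SET KEY IDENTIFIED BY "Password"; as in Exhibit 4).

-- Column-level TDE, with and without SALT (SALT is the default):
CREATE TABLE customers (
  id      NUMBER PRIMARY KEY,
  card_no VARCHAR2(19) ENCRYPT USING 'AES256',         -- salted
  ssn     VARCHAR2(11) ENCRYPT USING 'AES256' NO SALT  -- NO SALT allows indexing
);

-- Tablespace-level TDE: every segment created here is encrypted on disk.
CREATE TABLESPACE secure_ts
  DATAFILE SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);
```

The NO SALT variant matters when an encrypted column must be indexed, which is one of the "encryption domain instantiation (SALT)" trade-offs mentioned in the overview.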

The Engineering of the Intelligent Backup (III)


Estimating the Optimal Backup Parallel Degree

The postulates presented here are in the public domain. However, the hypotheses and theorems on which the postulates are based are not in the public domain and will be copyrighted under the ADN Research logo.

Derived from postulate 2, there is a need to establish a mechanism to enhance performance. This mechanism is to identify the optimal degree of parallelism to be used. In RMAN technology, this is specified when configuring the default settings to be used by an RMAN operation, or by specifying it in an RMAN script. A rather informal study suggests that the minimum production parallel degree to be used should be at least 4, simply because there is significant improvement over a lower degree such as 2 or 1. However, attempting to identify the optimal parallel degree in a scenario where there are several huge files (outliers by size) requires a statistical study or a heuristic mathematical formulation. A statistical study should produce a confidence interval with 95% confidence, using a sample model where the mean and the variance are known. Through my experience, a mathematical formulation is possible based on the variance (or standard deviation) and the average size of all files being backed up. A mathematical formulation proposed for this range should consider the following formula, where σ is the standard deviation of the data file sizes and k = 1, 2, 3…, i.e., a positive integer. This means that an appropriate parallel degree should be established on the basis of the variability of the data file sizes rather than on the number of files being backed up. I can summarize this new postulate as follows:

Postulate 3: Derived from postulate 2, there exists a parallel degree such that the backup performance is optimized on the basis of duration. The parallel degree can be established within a closed interval determined by the variability of all (N) data files' sizes.
Furthermore, an additional adjustment could take place when the ratio of the largest file to the largest small file is significant, e.g., larger than 1000; such an adjustment could be based on that ratio. In Oracle technology, this is applicable regardless of whether a data file belongs to a SMALLFILE or BIGFILE tablespace, i.e., whether one or more data files are allowed in the tablespace. Similarly, a parallel degree can be established through a statistical model derived from a sampling model utilizing a 95% confidence interval, where data file sizes are used as input, and specific transformations, such as logarithmic transformations, are applied to the model in order to attain reasonable values for the expected parallel degree range, where k is a positive integer greater than or equal to 2, and σ is the standard deviation of the population of all (N) data files' sizes.

Exhibit. An example of estimating the optimal degree of parallelism (as in an optimal economic model of returns) for a database with 100 files of size 1 GB, 100 files of size 2 GB, 100 files of size 4 GB, 100 files of size 8 GB, 50 files of size 16 GB, 10 files of size 32 TB, 4 files of size 64 TB, and 2 files of size 128 TB.

From postulate 3, it is possible to suggest that the optimal value for the FILESPERSET RMAN parameter is either 1 or 2, in order to minimize the seek time for restore operations as well. However, MAXSETSIZE should probably either be left alone (UNLIMITED) or controlled by the mean size of all data files involved, excluding the outlier files. A different approach, with future capacity planning in mind, could use double the size of the largest file among the small files.[...]
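The author's exact formula is elided from this excerpt. Purely as an illustrative sketch of the stated idea, that the degree should track the variability of file sizes rather than the file count, the heuristic below (my own construction, not the author's formula) scales the degree with the coefficient of variation σ/mean and never drops below the minimum production degree of 4 noted above:

```python
import statistics

def suggested_parallel_degree(sizes_gb, k=2, floor=4, ceiling=64):
    """Illustrative heuristic only: scale the RMAN parallel degree with
    the coefficient of variation (sigma / mean) of the data file sizes,
    with a floor of 4 (the minimum production degree noted in the post)
    and a ceiling to keep channel counts sane."""
    sigma = statistics.pstdev(sizes_gb)  # population std dev of all N files
    mean = statistics.mean(sizes_gb)
    degree = max(floor, round(k * sigma / mean))
    return min(degree, ceiling)

# Uniform file sizes -> zero variability -> the floor of 4 applies.
print(suggested_parallel_degree([1.0] * 100))  # 4

# A few huge outliers inflate sigma, suggesting more channels.
mixed = [1.0] * 100 + [2.0] * 100 + [128.0] * 4
print(suggested_parallel_degree(mixed))
```

The point the sketch makes concrete is the postulate's claim: two databases with the same number of files can warrant very different degrees once outlier-sized files enter the mix.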

The Engineering of the Intelligent Backup (II)


An Open Call to Study and Research "the Backup Fatigue Syndrome"

During my recent NYOUG presentation at St. John's University, I referred to the relevance of data file size sorting in relation to backup performance, and to the backup performance degradation that occurs when significantly larger files are left to the end of the backup, which I will hereby refer to as "the backup fatigue syndrome." This brief article is a summary of informal statistics on this problem, which I have never formalized into rigorous statistical research. It appears that this syndrome is associated with various factors such as, but not limited to, logic inherent to OS-level I/O, logic associated with generic backupset-driven technology, and the inefficiency of large storage devices in general. The combination of such factors, among others, is particularly critical to the appearance of the backup fatigue syndrome. For the past twenty years or so, I have observed various scenarios where a backup operation took a bit longer, or to be more precise, much longer than expected. These scenarios involved not only various instances of Oracle database backups but also OS-level backups, involving Windows, Linux, MacOS, and Solaris, and network-driven backups, on different media types, essentially tape or drive. These observations led me to systematic testing to verify that the order by size in which files or backup pieces are used to create the corresponding backup sets has a significant impact on the backup duration. This could simply mean that a backup operation is not necessarily commutative by content (data file) size, and that the order (by size) in which data files are backed up does result in different durations; in particular, when significantly large files (huge in comparison to the rest of the data files in the backup) are placed at the end of the backup set, or last in the order in which files are backed up.
This means that, as a mathematical analogy, such an abstraction would not allow me to define an Abelian group (or, more exactly, a class or abstract object that defines such a group), since the order by size does have an impact. Comparing mathematical concepts is quite practical here, although it may appear inappropriate to most people. The purpose of this note is to entice comprehensive research on this topic. Explicitly, it is important to determine the following:

1. The proportion, if any, at which backup performance latency occurs and results in exponential degradation when only a serial channel is used, and at which nearly exponential degradation occurs when parallelism is used.
2. A method to determine such a proportion of small files to large files, and the proportion between the number of small files and the number of larger files; for instance, in Oracle, the number of SMALLFILE tablespace data files in comparison to the number of BIGFILE tablespace data files. The proportion could be established via an actual mathematical ratio, a statistical index or regression model, or a stochastic model with controlled probability via a Bayes, Markov, or Lévy process.
3. The smoothing of the degradation impact as various levels of parallelism are used or increased. In Oracle RMAN technology, this explicitly relates to the actual number of channels used out of the number established.
4. The average ratio of the (rather small) number of large files to the (large) number of small files that does not cause the backup fatigue syndrome to occur; otherwise said, whether there is a specific generalized proportion that one can use without the degradation occurring.
5. A (formula-based) method to determine a factor or index to properly estimate the appropriate o[...]

The Engineering of the Intelligent Backup


The following is an excerpt of the paper accompanying my recent NYOUG presentation on the same topic. You are welcome to download the pdf version of this paper.

Encrypting, and Encoding with RMAN
Anthony D. Noriega, MSCS, MBA, BSSE, OCP
President and Lead Database Consultant at ADN Research

1. Introduction

This document introduces a general dynamic model for backup and recovery (BR) and disaster recovery (DR) operations applicable to Oracle standalone and high-availability cluster environments, involving full, incremental, and cumulative incremental backups, with extension to database cloning for high availability, backup tagging, and encoding for further granularity, such as, but not limited to, block-level recovery, backup and restore scripting, and backup validation, among others. In principle, the focus is solely on establishing a dynamic backup capable of dynamically implementing and executing full database backups, incremental level 1, and cumulative incremental level 1 backups, with custom autodrop job instantiation; coverage then extends to cloning for high availability, disaster recovery topics such as RMAN database backup tagging, and various other topics of high value for a complete corporate backup and recovery paradigm. In most corporate environments today, the current backup policy uses fundamental features, but in many instances it does not provide a unified model to fully utilize resources and capabilities. This model will reveal the significance of private encoding, tagging, and encryption with RMAN, not only to ensure the confidentiality of the backup, but also to commit to the highest level of data privacy and security.
2. Justification: Weaknesses of Existing Backup Policies

The following is a good-will analysis of the weaknesses of current backup policies, based on a general SWOT perspective.

2.1 Security

Current backup policies lack the fundamentals of database security and privacy, and were not implemented to attain regulatory compliance in their respective industries.

2.2 Guaranteed Restore Point

In at least a few scenarios, backup restores can fail due to the lack of proper synchronization and database activity at backup time, so recovery procedures may be needed ipso facto.

2.3 Integrity and Unity

Many scenarios show a lack of backup integrity, as the backup depends on the availability of the entire set of archived logs in the file system for a complete or point-in-time recovery. If archived redo logs were not available on disk or tape, due to a disk crash or a tape loss or destruction, then full or point-in-time recovery of any nature would not be possible. This is more significant when a corporation's current policy keeps the recovery catalog in the same database instance, a model historically not recommended by Oracle, but still practiced.

2.4 Lags in Backup Timeframe

Corporate statistics and random qualitative samples could show that backup models have either potential or neglected lagging timeframes due to imperfect integrity, including scenarios such as, in the commonest cases, tape loss or destruction and disk drive or volume damage.

2.5 Timestamps on Backup Sets and Backup Pieces

The lack of proper timestamps, in principle, prevents identifying whether a backup piece belongs to a specific backup set, and if so, whether it was actually created at a specific time. Therefore, searching for the appropriate file can only be accomplished by looking at a directory in the file system, being restricted to folder-based component control. However, if a file is misplaced for any reason, it would not be simple to identify such a file and restore it easi[...]

Oracle Flashback Technology (Part V)


Part V: Flashback Database Restore Points

Introducing Flashback Database, Restore Points, and Guaranteed Restore Points

Oracle Flashback requires a combination of factors and options, such as control of undo retention, Oracle Flashback retention, relevant undo tablespace utilization, block change tracking, and, in many scenarios, supplemental log management. Moreover, Oracle Flashback Database and restore points are related data protection features that enable DBAs to rewind data back in time to fix problems caused by logical data corruption or user errors within a designated time window, providing a better approach than conventional database point-in-time recovery (DBPITR). The following examples return the database to a specified restore point or SCN:

FLASHBACK DATABASE TO RESTORE POINT 'before_OrclFullBkp';
FLASHBACK DATABASE TO SCN ;

Flashback Database

In general, Flashback Database is accessible through the RMAN command and SQL statement FLASHBACK DATABASE. The DBA can use either interface to promptly recover the database from logical data corruptions or user errors. Flashback Database is similar to conventional point-in-time recovery in its effects, but it uses its own logging mechanism, creating flashback logs and storing them in the fast recovery area. The DBA can only use Flashback Database when flashback logs are available; to take advantage of this feature, the database must be set up in advance to create flashback logs. Thus, to enable Flashback Database, the DBA configures a fast recovery area and sets a flashback retention target. This retention target specifies how far back it is possible to rewind the database with Flashback Database. From that time onwards, at regular intervals, the database copies images of each altered block in every data file into the flashback logs, such that these block images can later be reused to reconstruct the data file contents for any moment at which logs were captured.
Once Oracle has determined which blocks changed after the target time, the database restores, from the flashback logs, the version of each block that is immediately before the target time. Likewise, redo logs on disk or tape must be available for the entire time period spanned by the flashback logs. In real-world practice, redo logs are usually needed much longer than the flashback retention target to support point-in-time recovery.

Flashback Database Window

The range of SCNs for which there is currently enough flashback log data to support the FLASHBACK DATABASE command is called the flashback database window. The flashback database window cannot extend further back than the earliest SCN in the available flashback logs. In particular, some database operations, such as dropping a tablespace or shrinking a data file, cannot be reversed with Flashback Database.

Limitations of Flashback Database

Because Flashback Database works by reversing changes to the data files that exist at the moment when the DBA runs the command, it has the following limitations:
• Flashback Database can only reverse changes to a data file made by Oracle Database. It cannot be used to repair media failures, or to recover from accidental deletion of data files.
• The DBA cannot use Flashback Database to undo a shrink data file operation. Nevertheless, the DBA can take the shrunk file offline, flash back the rest of the database, and later restore and recover the shrunk data file.
• The DBA cannot use Flashback Database alone to retrieve a dropped data file. When the DBA flashes back a database to a time when a dropped data file existed, only the data file entry is added to the control file. Thus, the DBA must recover the dropped data file by using RMAN to fully restore and[...]
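The setup and rewind flow described above can be sketched end to end. This is a hedged outline, assuming ARCHIVELOG mode; the destination, size, and restore point name (reusing the post's 'before_OrclFullBkp') are illustrative:

```sql
-- Enable Flashback Database: configure a fast recovery area and
-- a retention target, then turn flashback logging on.
ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
ALTER SYSTEM SET db_recovery_file_dest = '+RECO';       -- illustrative FRA
ALTER SYSTEM SET db_flashback_retention_target = 1440;  -- minutes (24 hours)
ALTER DATABASE FLASHBACK ON;

-- A guaranteed restore point pins the flashback logs needed to
-- return to this point, even beyond the retention target.
CREATE RESTORE POINT before_OrclFullBkp GUARANTEE FLASHBACK DATABASE;

-- Rewinding later (database mounted, not open):
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_OrclFullBkp;
ALTER DATABASE OPEN RESETLOGS;
```

V$FLASHBACK_DATABASE_LOG can then be queried to confirm the oldest SCN and time reachable within the flashback database window.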

Oracle Flashback Technology (Part IV)


PART IV: Database Point-in-Time Recovery Using Oracle Flashback

With RMAN DBPITR, the DBA can restore the database from backups taken before the target time for recovery, and then use incremental backups and redo to roll the database forward to a specific target time; it is possible to recover to an SCN, a time, a log sequence number, or a restore point. Oracle recommends that DBAs perform Flashback Database rather than database point-in-time recovery whenever possible; media recovery with backups should be the last option, in lieu of flashback technologies, to undo the most recent changes. I have personally tested that creating guaranteed restore points at the beginning of periodic full backups is a good policy to adopt.

Prerequisites to Database Point-in-Time Recovery

The prerequisites for database point-in-time recovery are as follows:
• The database must be running in ARCHIVELOG mode.
• All backups of all data files from before the target SCN for DBPITR must be available, and so must the archived logs for the period between the SCN of the backups and the target SCN.

Performing Database Point-in-Time Recovery

The basic steps of DBPITR are now listed, with the following assumptions, namely:
• The DBPITR is performed in the current database incarnation alone.
• The current control file is being used.
• The database is using the current server parameter file.

Otherwise, it may be necessary to set the database back to the appropriate incarnation, or to restore the control file or spfile. Likewise, the DBA can avoid errors by using the SET UNTIL command to set the target time at the beginning of the procedure, rather than specifying the UNTIL clause on the RESTORE and RECOVER commands individually. To perform DBPITR:
1. The DBA must determine the time, SCN, restore point, or log sequence that should end recovery. The DBA can use the Flashback Query features to identify when the logical corruption occurred, and use the alert log to attempt to determine the time of the event from which to recover. Likewise, the DBA can use a SQL query to determine the log sequence number that contains the target SCN and then recover using this log. For example, the DBA can run:

SELECT RECID,
       STAMP,
       THREAD#,
       SEQUENCE#,
       FIRST_CHANGE#,
       FIRST_TIME,
       NEXT_CHANGE#
FROM   V$ARCHIVED_LOG
WHERE  RESETLOGS_CHANGE# =
       ( SELECT RESETLOGS_CHANGE#
         FROM   V$DATABASE_INCARNATION
         WHERE  UPPER(STATUS) = 'CURRENT');

RECID      STAMP      THREAD#    SEQUENCE#  FIRST_CHAN FIRST_TIM NEXT_CHANG
---------- ---------- ---------- ---------- ---------- --------- ----------
         1  347455711          1          1      31025 10-AUG-07      31031
         2  347455716          1          2      31031 10-AUG-07      31033
         3  347455720          1          3      3[...]
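Once the target is chosen, the DBPITR steps described above come together in a short RMAN run. This is a hedged sketch; the target time is illustrative, and the database is assumed to be mounted with the SET UNTIL approach recommended earlier:

```sql
-- RMAN script (connect as: rman target /). Target time is illustrative.
RUN
{
  -- Set the target once, instead of UNTIL clauses on each command.
  SET UNTIL TIME "TO_DATE('2017-10-15 14:30:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;   -- restores data files from backups before the target
  RECOVER DATABASE;   -- applies incrementals and archived redo to the target
}

-- After successful recovery, open the database with a new incarnation:
ALTER DATABASE OPEN RESETLOGS;
```

SET UNTIL SCN or SET UNTIL SEQUENCE can be substituted when the target was identified by SCN or by the log sequence query shown above.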

Oracle Flashback Technology (Part III)


Retaking Oracle Flashback Technology (Part III)

It has been consistently demonstrated that Oracle Flashback features are more efficient than media recovery in most circumstances where they are available.

Physical Flashback Features Useful to Backup and Recovery
Oracle Flashback Database is the most efficient alternative to database point-in-time recovery (DBPITR). Unlike the other flashback features, it operates at a physical level and reverts the current data files to their contents at a past time. The result is like that of a DBPITR, including the OPEN RESETLOGS, but faster. A fast recovery area is required for Flashback Database, and the DBA must both set the DB_FLASHBACK_RETENTION_TARGET initialization parameter and issue the ALTER DATABASE FLASHBACK ON statement. During normal operation, the database periodically writes old images of data file blocks to the flashback logs. Flashback logs are written sequentially, often in bulk, and are not archived.

Logical Flashback Features Useful to Backup and Recovery
The remaining flashback features operate at the logical level. The logical features documented here are:
- Flashback Table: the DBA can recover a table or set of tables to a specified point in time in the past without taking any part of the database offline.
- Flashback Drop: the DBA can reverse the effects of a DROP TABLE statement.
All logical flashback features except Flashback Drop rely on undo data.

Rewinding a Table Using Flashback Table
Flashback Table uses information in the undo tablespace, rather than restored backups, to retrieve the table. Therefore, when a Flashback Table operation occurs, new rows are deleted and old rows are reinserted.
The rest of the database remains available while the flashback of the table takes place.

Prerequisites to Flashback Table
To use the Flashback Table feature on one or more tables, issue the FLASHBACK TABLE SQL statement with a target time or SCN. The DBA must have the following privileges to use the Flashback Table feature:
- The FLASHBACK ANY TABLE system privilege, or the FLASHBACK object privilege on the table.
- SELECT, INSERT, DELETE, and ALTER privileges on the table.
- The SELECT ANY DICTIONARY or FLASHBACK ANY TABLE system privilege, or the SELECT_CATALOG_ROLE role, in order to flash back a table to a restore point.
For an object to be eligible to be flashed back, the following is necessary:
- The object cannot belong to any of the following categories: tables that are part of a cluster, materialized views, static data dictionary tables, system tables, remote tables, object tables, nested tables, individual table partitions or subpartitions, or Advanced Queuing (AQ) tables.
- The structure of the table must not have been changed between the current time and the target flashback time. The following DDL operations change the structure of a table: upgrading, moving, or truncating a table; adding a constraint to a table; adding a table to a cluster; modifying or dropping a column; adding, dropping, merging, splitting, coalescing, or truncating a partition or subpartition (except adding a range partition).
- Row movement must be enabled on the table, which indicates that rowids will change after the flashback occurs. If the application depends on rowids, then Flashback Table cannot be used.
- The undo data in the undo tablespace must extend far enough back in time to satisfy the flashback target time or SCN.
To provide a minimum flashback guarantee for Flashback Table operations, Oracle suggests setting the UNDO_RETENTION parameter to 86400 seconds (24 hours) or greater for the undo tablespace.

Performing a Flashback Table Operation
In this example, the DBA plans [...]
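The prerequisites and the operation itself can be sketched as follows. The table name personnel is borrowed from the Part II examples, and the SCN is an illustrative assumption:

```sql
-- Row movement must be enabled before the table can be flashed back
ALTER TABLE personnel ENABLE ROW MOVEMENT;

-- Rewind the table to a past SCN (TO TIMESTAMP may be used instead)
FLASHBACK TABLE personnel TO SCN 210093460;
```

If the undo data no longer reaches back to the target SCN, the statement fails and the table is left unchanged.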

My Presentation Slides on Modernizing Workflow and Data Integration


Before I publish my third and possibly final part on Oracle Flashback technology, I would like to upload the set of slides presented at the recent NYOUG Oracle Users Group meeting at St. John's University on September 12, 2012. I dedicated this presentation to Carl Petri, the inventor of Petri nets and the core topic of my doctoral dissertation, who died on July 2, 2010 at age 83.[...]

Oracle 11g Flashback Technology (Part II)


Oracle 11g Flashback (Part II)

Executive Summary
The second part of this Oracle Flashback Technology white paper focuses on advanced query options, with case scenarios and various capabilities useful primarily to system DBAs, without neglecting their usefulness for developers and development DBAs.

Use of Oracle Flashback Transaction Query with Oracle Flashback Version Query
In this scenario, the DBA performs the following actions via SQL statements, as shown:

DROP TABLE personnel;
CREATE TABLE personnel
  ( empid    NUMBER PRIMARY KEY,
    empname  VARCHAR2(16),
    salary   NUMBER,
    comments VARCHAR2(4000)
  );
INSERT INTO personnel (empid, empname, salary, comments)
VALUES (119, 'Smith', 80000, 'New employee');
COMMIT;

DROP TABLE dept;
CREATE TABLE dept ( deptid NUMBER, deptname VARCHAR2(32) );
INSERT INTO dept (deptid, deptname) VALUES (50, 'Finance');
COMMIT;

Now personnel and dept have one row each. In terms of row versions, each table has one version of one row. Suppose that an erroneous transaction updates and then deletes empid 119 from table personnel:

UPDATE personnel
   SET salary = salary + 15000
 WHERE empid = 119;
INSERT INTO dept (deptid, deptname) VALUES (70, 'IT');
DELETE FROM personnel WHERE empid = 119;
COMMIT;

Later, a transaction reinserts empid 119 into the personnel table with a new employee name:

INSERT INTO personnel (empid, empname, salary)
VALUES (119, 'Noriega', 100000);
UPDATE personnel
   SET salary = salary + 20000
 WHERE empid = 119;
COMMIT;

After careful analysis, the database administrator detects the application error and must diagnose the problem. The database administrator issues this query to retrieve the versions of the rows in the personnel table that correspond to empid 119.
The query uses the Oracle Flashback Version Query pseudocolumns, as shown below:

SELECT versions_xid XID,
       versions_startscn START_SCN,
       versions_endscn END_SCN,
       versions_operation OPERATION,
       empname,
       salary
FROM   personnel
       VERSIONS BETWEEN SCN MINVALUE AND MAXVALUE
WHERE  empid = 119;

Results are similar to:

XID              START_SCN  END_SCN    O EMPNAME          SALARY
---------------- ---------- ---------- - ---------------- ----------
08001100B2300000  210093467  210093487 I Noriega              120000
040002002B220000  210093453  210093457 D Smith                 80000
0900120096800000  210093379  210093389 I Smith                 80000

3 rows selected.

The result rows are in descending chronological order. The third row corresponds to the version of the row inserted into personnel when the table was created. The second row corresponds to the row in personnel that the erroneous transaction deleted. The first row corresponds to the version of the row in personnel that was reinserted with a new employee name. The database administrator identifies tra[...]



Oracle 11g Flashback

Executive Summary
Oracle Flashback technology is best suited to attaining readiness and preparedness in scenarios such as human error and point-in-time restoration of database state without the need for conventional point-in-time recovery. Most importantly, it can inform the choice among planned backup and recovery strategies, including real-time mirroring, through which a full database snapshot can be restored at any time with nearly no downtime. There is also the convenience that this is a feature provided with Oracle Enterprise Edition, rather than an expensive and more elaborate mirroring technology with both full-snapshot and thin-provisioning capabilities; in general, it is more customizable and granular than those technologies. Moreover, the fact that Oracle Flashback relies significantly on Automatic Undo Management (AUM), as discussed, implies understanding that concept as a balance between the space provided in the UNDO tablespace and the time provided through the UNDO_RETENTION setting, weighed against the actual demand for undo resources determined by the transactional workload and the durations involved. This is quite different from the traditional MANUAL undo management model, which uses a round-robin algorithm instead.

Introduction
Oracle Flashback Technology is a set of features (resources and capabilities) that allows DBAs to view past states of database objects, and to return database objects to a previous state, without using point-in-time media recovery.
Flashback recovery is quite cost-effective and convenient in comparison to conventional backup and recovery procedures and operations. With flashback features, it is possible to:
- Perform queries that return metadata exhibiting a detailed history of changes to the database
- Perform queries that return past data
- Recover tables or rows to a previous point in time
- Automatically track and archive transactional data changes
- Roll back a transaction and its dependent transactions while the database remains online
Flashback features utilize Automatic Undo Management (AUM) to obtain metadata and historical data for transactions. For instance, if a user runs an UPDATE statement to change a retail_price from 2000 to 5000, then Oracle Database stores the value 2000 as undo data, which persists in the database beyond shutdown. In addition, Oracle Database uses undo data to perform these actions:
- Roll back active transactions
- Recover terminated transactions by using database or process recovery
- Provide read consistency for SQL queries

Application Development Features
The key application development features include:

Oracle Flashback Query
A developer or DBA can use this feature to retrieve data as of an earlier time, specified with the AS OF clause of the SELECT statement.

Oracle Flashback Version Query
A DBA or developer can use this feature to retrieve metadata and historical data for a specific time interval (for example, to view the set of rows in a table that ever existed during a given period of time). Metadata for each row version includes the start and end time, the type of change operation, and the identity of the transaction that created the row version. To create an Oracle Flashback Version Query, use the VERSIONS BETWEEN clause of the SELECT statement.
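A minimal Flashback Query sketch, reusing the personnel table from the examples in Part II; the one-hour interval is an illustrative assumption:

```sql
-- View the contents of personnel as they were one hour ago
SELECT empid, empname, salary
FROM   personnel AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
WHERE  empid = 119;
```

The AS OF clause can equally take an SCN, e.g. AS OF SCN 210093460, when the exact change number is known.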
Oracle Flashback Transaction Query
A developer or DBA can use this feature to retrieve metadata and historical data for a given transaction, or for all transactions in a given time interval. To perform an Oracle Flashback Transaction Query, select from the static data dictionary view FLASHBACK_TRANSACTION_QUER[...]
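Such a query might be sketched as follows; the XID value is an illustrative assumption modeled on the shape of the Part II output, not a real transaction identifier:

```sql
-- Retrieve the operations and compensating (undo) SQL for one transaction
SELECT operation, table_name, undo_sql
FROM   flashback_transaction_query
WHERE  xid = HEXTORAW('040002002B220000');
```

The UNDO_SQL column returns statements that, when executed, reverse the corresponding change, which is how a DBA can back out the erroneous transaction identified with Flashback Version Query.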

On Oracle Engineered Systems and Appliances


A Dossier on Customizing New Oracle Systems
The following is an excerpt from an ongoing independent review of Oracle engineered systems and appliances. Providing an expert, custom DBA guide on how to match and optimize computing, storage networking, and networking resources in the Oracle Sun product line requires product knowledge, extensive experience with the Oracle and Sun product lines, and an objective, independent perspective. Whether you use comprehensive middleware, extensive virtualization, emerging file system technologies such as Hadoop HDFS, or underlying technologies such as Hybrid Columnar Compression (HCC) and partitioning, you have the opportunity to analyze, weigh, and further research an expert's opinion from a different viewpoint, with comparable potential for a successful infrastructure implementation and deployment delivering optimal performance, reliability, and availability. Certainly, it is time for a major information technology investment to walk on the cloud. It is time for a change! Hence this custom, independent guide to matching and optimizing the use of Oracle Sun systems, including appliances, high-availability solutions, and storage networking solutions, for OLTP or for data warehousing and business intelligence, with a forward-looking view of technological sustainability and predictable economic trends. If your information technology infrastructure calls for a focus on data warehousing, there are various solutions, all of which could meet the DBA's desire for the best storage, storage networking, and computing resources, such as CPU power, sufficient RAM, and high-quality flash and solid state storage (SSS), with convenient capabilities for attaining the optimal custom solution.
Whether you are looking at Exadata, the Oracle Database Appliance, the NoSQL Appliance, or the Oracle ZFS Storage Appliance as a potential solution, your investment will pay back in reliability and performance over the coming years, opening the door to easy upgrades to new versions, editions, and the next generations of database technologies. Exalogic is appropriate for attaining middleware goals, and Exalytics can provide the built-in resources needed for the best outcome in business intelligence, data mining, data discovery, and analytics in general. The implementation of in-memory database machines, with Oracle SQL or NoSQL, enhances the ability to integrate with new file systems, such as Hadoop HDFS, to implement associated mapsets, and to derive intelligent reports. These systems offer the ability to customize high-end hardware resources at any level and to leverage those resources for the best return on investment (ROI) and extended return on assets (ROA) at the lowest possible total cost of ownership (TCO) in today's market. Because these appliances and engineered systems have been designed to work together, both CAPEX (capital expenditure, i.e., investment) and OPEX (operational expenditure) are close to optimal in today's information technology market. The same remarks apply to online transaction processing (OLTP) systems, where the Exadata Database Machine and the Oracle Database Appliance can provide a significant level of customization for a dedicated or hybrid purpose. In general, companies regardless of their size can successfully consider an important investment in IT at the moment, which could bring this sector to its best moment in ten years, and they can have more confidence than ever in their ROI. In essence, what is important is t[...]

About Oracle Advisors


The Oracle Advisory Framework
Oracle's advisory framework encompasses a set of performance tuning advisors that provide an enhanced view of the cause of a problem, the problem itself, and the solution, with the most appropriate steps to pursue at each stage. The framework is widely integrated with the Automatic Database Diagnostic Monitor (ADDM) and Automatic Workload Repository (AWR) reporting capabilities. I will carefully discuss at a later time how to approach the most typical scenarios in a production environment. Among the most important components of the advisory framework, a DBA can look at the following:

Memory Advisor
Provides advice on SGA memory structures and PGA memory areas, and is very practical when using Automatic Memory Management. The SGA memory structures involved include the shared pool, the database buffer cache, the large pool, the Java pool, and the Streams pool.

SQL Tuning Advisor
This advisory component provides SQL profiling, SQL access path analysis, statistics analysis, and structure analysis. The DBA can use the DBMS_SQLTUNE package to create, execute, and report on a given tuning task.

SQL Access Advisor
This advisor provides an analysis of overall system performance through workload specifications, focusing on segment structures, in particular indexes and materialized views in critical scenarios. The specified workload could be one of the following:
- A single SQL statement
- A SQL tuning set
- The current contents of the SQL cache
- A hypothetical workload obtained through the DDL of certain object sets

Oracle Segment Advisor
This advisory framework component allows the DBA to plan the capacity of a specific segment as needed and to forecast its future growth.

Undo Advisor
This advisor is useful for controlling undo management and the custom requirements necessary to run specific processes.
MTTR Advisor
This advisory component establishes the appropriate settings for an optimal mean time to recovery window, minimizing downtime in general.

These advisors can be used either through the Oracle Enterprise Manager interface or through the DBMS_ADVISOR package, as convenient in each scenario. The advisors add even more value when combined with the Resource Manager and specific plan directives that can effectively produce the desired custom outcomes.[...]
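As a brief sketch of the DBMS_SQLTUNE workflow mentioned above; the sql_id and task name are illustrative assumptions:

```sql
-- Create and run a SQL Tuning Advisor task for one cached statement
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id    => '5v48w6zj2n1tq',     -- hypothetical cursor in the shared pool
              task_name => 'adn_tune_task_1');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
-- Review the findings and recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('adn_tune_task_1') FROM dual;
```

The same create/execute/report pattern applies to DBMS_ADVISOR tasks for the other advisors.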

Oracle Unveils Solaris 11 in New York


Oracle Solaris 11, Simply Extraordinary
Yesterday, November 9, I attended the launch of Oracle Solaris 11. The presentations were led by keynote speakers Oracle President Mark Hurd and Executive Vice President John Fowler, who introduced the new features of what Oracle now calls the first operating system fully intended for cloud computing. Among the key features, I can highlight the following:
- Co-engineered with the Oracle stack
- Native support for SuperCluster, T4, and Exadata/Exalogic servers and the ZFS Storage Appliance, for optimal cloud elasticity
- ZFS virtualized pool storage
- Built for cloud infrastructures
- Leading transactional capability, demonstrated by Oracle Sun server benchmarks
- Fault-tolerant automatic troubleshooting and monitoring
- Enhanced built-in security for cloud authoring and authentication
- Native virtualization for the cloud
- Stack-driven fault tolerance
- High-availability capability
- Unified, transparent upgrade/patch method and strategy
- Built-in support for Solaris 10 environments
- Built-in multi-tenancy support
- Zone/container resource management
- Unlimited boot environments
After the keynote presentations, a panel of Oracle Sun engineers followed, with an interesting discussion of the several years Oracle Solaris 11 took to complete. Finally, a customer panel followed the engineers' discussion, in which customers recounted their previous experience.
Customers also described how they expect to upgrade to and work with Solaris 11. Over 700 customers worldwide had already adopted Solaris 11 for their production environments by the time its official Oracle launch took place at Gotham Hall in New York City on November 9, 2011.

Excerpts from the Oracle Solaris 11 launch event:
- John Fowler makes his keynote speech on Solaris 11.
- An Oracle Sun engineer discusses a new feature in Solaris 11.
- Steve Wilson, Oracle VP of Engineering, discusses a new feature in Solaris 11.
- An Oracle Sun engineer discusses a new feature in Solaris 11.
For a complete view of the New York launch presentations, you are welcome to visit the Oracle website at: [...]

The Present and Future of Datawarehousing


About Oracle Data Integrator
While companies continue to hire novices in the data warehousing field, i.e., IT professionals with three years of experience or less, the set of compliance regulations has significantly increased, giving the field an additional dimension of complexity. This is confirmed by a recent Gartner Research report. Conventional data warehousing is about extracting, transforming, and loading data from one or several data sources to attain integration through a variety of staging business processes, and possibly into a data mart. The ETL (Extract-Transform-Load) process is quite traditional and carries a set of constraints: in most instances the transformation is performed row by row so that the load can take place. E-LT (Extract-Load-Transform) is a modern approach in which customized bulk loads can occur without being constrained by individual row-by-row transformations. Oracle Data Integrator provides support for up to 10,000 different concurrent data sources independently of their type, including a variety of structured and unstructured data sources such as relational and object-relational databases, XML, flat files (including CSV and tab-delimited, among others), various multi-dimensional data sources, message queuing, and other data source types. Oracle Data Integrator is an ideal data warehousing tool to use with technologies such as Exadata and the Oracle Solaris SuperCluster T4-4 (just released), as it provides seamless design capabilities and transformations through bulk E-LT. Besides, Oracle Data Integrator is quite versatile in interaction with other tools such as Oracle GoldenGate, allowing for fast advanced replication, including both full replication and thin provisioning. Other important capabilities available with Oracle Data Integrator include the possibility of performing both complex traditional ETL (Extract-Transform-Load) and fast bulk E-LT (Extract-Load-Transform).
While the former allows row-by-row transformation, the latter allows bulk on-the-fly transformation, so that this stage can essentially occur at any time in the data warehousing timeline. This capability, in conjunction with Oracle's demonstrated data warehousing leadership by quadrant, positions ODI as the data warehousing tool of the future, not only for conventional data warehousing environments but also for the Big Data landscape. It also enhances ODI's unique ability to reach optimal data quality while attaining outstanding performance and throughput and minimizing latency. In addition, as part of Oracle Fusion Middleware, ODI is extremely hot-pluggable: it interacts smoothly with Oracle WebLogic, BPEL Process Manager, TopLink, JDeveloper, and Oracle Identity Management.

Concluding Remarks
Oracle Data Integrator is a comprehensive, solid data warehousing tool that allows the fast design, implementation, and development of data warehouses and reporting data marts through the appropriate usage of bulk E-LT processing, which is highly relevant to Big Data and general data warehouse performance. Among the top benefits of adopting ODI, it is possible to mention reduced data warehouse cost with the best ROI, great flexibility, outstanding data quality, and data integrity and accuracy with increased productivity. [...]

Solid State/Flash and Optical Storage Technologies


Solid State Disk Single-Level Cell (SLC) Caching and the Future of Database Technology
The implementation of faster, more resilient database architectures on the basis of flash drives using single-level cell (SLC) caching is key to the engineering of new database technologies. At the present time, database languages like SQL (Oracle, SQL Server, DB2) and QUEL (Ingres) can analyze, query, mine, dredge, project, and forecast based on the behavior of data, i.e., information that is static or persistent by nature, even when a query takes only a few seconds or a fraction of a second. SLC caching will at some point be able to track the more dynamic behavior of some data, e.g., the Brownian motion of particles in some kind of physical chaos, such as the agitation of a fluid, or the movement of electrons and other subatomic particles; the real-time measurement of heartbeats and other vital signs could likewise be tracked with a more consistent level of granularity. However, the most important contribution of SLC, which currently provides the best performance among solid state drive technologies, is the possibility of interconnecting with active or dynamic live automata in general, which can mimic the exact behavior of Brownian motion, vital-sign measurements, stock market behavioral simulators, and other live simulation scenarios. Because of this, I believe that iSCSI technology will complete its dominance of the market, as it can provide better interaction with "live dynamic data" stored in the new flash storage technology, in interaction with other small computer interfaces and devices. Multipathing, in conjunction with Oracle technologies like Automatic Storage Management (ASM), will provide further capability and support, such that Fibre Channel, and FICON in general, can also support these new technologies. Perhaps optical storage will add another component for managing "real-time live data"; the LG Hybrid drive released by Hitachi in the last quarter of 2010 is probably the most recent development in that respect.

While there are some Oracle cartridges capable of interacting with SQL, such as Text, Multimedia, and Spatial technology, or an XML-driven implementation such as XQuery, there will be a need for an API through which SQL can interact with the real-time nature of the dynamic behavior described for such "live data," which could probably be stored in real time as a tuple describing the state and behavior of a live automaton object with greater precision than today's object-relational models.

Thanks to Oracle Professionals at Collaborate11


I would like to break this blog's protocol to thank all attendees of my presentations at Collaborate11, in particular those who attended my April 12 presentation on Engineering Oracle ASM Optimal Storage I/O Performance with Intelligent Data Placement, a presentation on intelligent storage for high-performance computing, for such great feedback. I look forward to seeing you again at the next great Oracle event.
Once more thank you!

Presenting on Forecasting Analytics at Collaborate11, Orlando, Florida


I will be presenting Perspectives on Forecasting Analytics at Collaborate11 on Thursday, April 14, from 11:00 AM until noon. This business intelligence session will take place in Room West 303C.
Everybody is welcome to attend.


You are invited to my Intelligent Storage Presentation at Collaborate11


Please join Anthony D. Noriega at Collaborate11, at the Orange County Convention Center in Orlando, Florida, on Tuesday, April 12, from 4:30 PM to 5:30 PM in Room West 308C, for his presentation on intelligent storage for high-performance computing: "Engineering Oracle ASM Optimal I/O Performance with IDP."

Everyone is welcome!

Performance Tuning for the Storage Layer


Intelligent Storage for High-Performance Computing (HPC)
The following is an excerpt from my intelligent storage paper on ASM technologies, involving ASM Cluster File System (ACFS), ASM Dynamic Volume Manager (ADVM), and Intelligent Data Placement (IDP). The article proposes an array of recommendations and practical strategies to optimize server performance from the storage layer up. The discussion focuses on the usage of traditional hard disk drives (HDD) and solid state drives (SSD).

Implementation on Traditional HDD Disk Arrays
This is done by creating a custom template, applying it to the appropriate ASM data file, and subsequently determining its relevance to logical objects. The SQL API is shown in the example below:

ALTER DISKGROUP adndata1 ADD TEMPLATE datafile_hot
  ATTRIBUTE ( HOT MIRRORHOT );

ALTER DISKGROUP adndata1 MODIFY FILE '+DATA/ADN3/DATAFILE/tools.256.765689507'
  ATTRIBUTE ( HOT MIRRORHOT );

Dual Pairing Drive (Hybrid) Solution
This solution seeks the following goals:

SSD for performance
Whenever SSDs become part of the storage performance tuning strategy, it is necessary to plan for more conservative space management, meaning that overprovisioning is a critical requirement to attain optimal storage elasticity. This is primarily due to the read/write speed asymmetry of the technology. Depending on the disk controller used by its vendor, an SSD can show a 10:1 normalized ratio in experimental tests conducted by various vendors, where read tasks represent about 65% of the entire transactional workload. Overprovisioning, in general, can also reduce the need for manual rebalancing tasks; moving an ASM file from one disk group to another, whether via PL/SQL packages or RMAN, could disrupt production tasks.
Similarly, overprovisioning can prevent having to move logical objects across the database to attain better performance. Both DRAM SSDs and NAND SSDs have a much better cost per IOPS than high-performance hard disk drives (HDDs). Besides, flash SSDs are very well suited to read-only and mobility applications, and they are highly cost-effective in industries such as media, where the UFS (Universal File System) storage format is widely utilized. ASM can reciprocally help SSD technology by ensuring that writes are spread out evenly across the storage space over time. To facilitate wear leveling, SSD controllers implement a virtual-to-physical mapping from Logical Block Addressing (LBA) to Physical Block Addressing (PBA) on the flash media, along with sophisticated error correction codes.

HDD for capacity
Traditional hard disk drives (HDDs) can be used to implement low-cost, high-capacity volumes, since hard disks have a significantly better cost per gigabyte than solid state.

Hybrid Pools: Implementation on HHDD
Here, disk groups are transparently built on a mix of these technologies. In this scenario, it is important to establish the following considerations:

Establish the hybrid ASM preferred read failure groups:

SQL> ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'HYBRID.HHDD';

Decide how to best use the SSD/flash technology for the speed and performance of the database online redo logs.

1. Set the logs on pure SSD drives. In doing so, this could imply that the Oracle DBA architect might need at least to create a disk group or [...]
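A minimal sketch of placing the online redo logs on a dedicated SSD disk group; the disk group name, device paths, and log size are all illustrative assumptions:

```sql
-- Create a disk group backed exclusively by SSD devices (hypothetical paths)
CREATE DISKGROUP redossd EXTERNAL REDUNDANCY
  DISK '/dev/ssd1', '/dev/ssd2';

-- Place a new online redo log group on the SSD-only disk group
ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDOSSD') SIZE 512M;
```

Because redo is written sequentially and synchronously at commit time, isolating it on the fastest media is the usual rationale for this layout.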