Oracle 12c Multitenant Option – A Database Architect’s Perspective

The Future

Do you wonder what the future holds for the principal Oracle Database platform in terms of its RDBMS Architecture? The Oracle Multitenant Architecture is the platform for the future.

Putting this plainly, the current non-Multitenant architecture (aka the legacy architecture) will ultimately be de-supported, so if you're serious as an Oracle DBA, you had best get on board! But rest easy for now; the 12cR1 Oracle Database release still supports both the Multitenant and legacy configurations.

Key Features

Just in case you were still wondering what it's all about and have not dug in and tried to decipher the multitude and sprawl of information a quick Google search turns up, here is a quick heads-up on the key elements of the Multitenant architecture you need to be aware of.

The Multitenant option is only available on Oracle 12c with a database COMPATIBLE parameter of 12.0.0.0 or greater. The same value also applies to each Pluggable Database (PDB) planned for deployment within it.
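
You can verify this quickly from SQL*Plus; a minimal check might look like this:

-- Multitenant requires COMPATIBLE of 12.0.0.0 or higher
select name, value from v$parameter where name = 'compatible';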

You can run Oracle Container Databases (CDBs) and non-CDBs on the same server, and they can even share the same ORACLE_HOME.

A Multitenant database provides the capability for a database to act as a CDB. There are two container types to be aware of:

  • One type of container acts as a single, self-contained database that can be plugged into a Container Database. This is called a Pluggable Database (PDB), and it contains all the usual object types you would find in a standard legacy database.
  • The other type of container can hold zero, one or more unique databases (PDBs) and is called the CDB. A CDB consists of a root container as well as a seed PDB (PDB$SEED), which is READ ONLY by default; a quick way to see these containers is sketched below.
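
As a quick illustration (assuming a privileged common user connected to the root container of a CDB), you can list the containers from SQL*Plus:

show con_name                  -- SQL*Plus command displaying your current container

-- the seed (PDB$SEED) reports OPEN_MODE = READ ONLY by default
select con_id, name, open_mode from v$pdbs;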

Then again, you can find this information anywhere. The real question is: what are the key architectural considerations you need to focus on as a DBA?

Structural Differences

Essentially, we need to first look at the structural differences between the Legacy and Multitenant architectures.

A container database can exist on its own with or without additional PDBs. Putting aside RAC Architectures, a legacy Oracle database is by design associated with one Oracle instance. In the same context, an Oracle CDB (multitenant Container Database) is associated with one instance. Therefore you have a 1:1 relationship between a database and instance.

However, a key variant for the Container Database architecture is that the CDB and all its associated PDBs share the same Oracle instance. Hence you have a many-to-one (n:1) database-to-instance relationship, where:

  • n = CDB and all its associated PDBs

In the context of Oracle high-availability RAC architectures, the same statement translates to an Oracle database being associated with one or more Oracle instances. Hence an Oracle CDB (and its associated PDBs) is associated with one or more RAC instances in an n:m relationship, where:

  • n = CDB and all its associated PDBs
  • m = Oracle instance(s) belonging to the same RAC infrastructure

Hence in a Multitenant environment, for one CDB with one or more PDBs, we have the following common/shared entities residing in the root container:

  • SGA Memory Structures
  • Control files (at least one)
  • Background processes
  • Online Redo logs
  • SPFILE
  • Oracle Wallet
  • Alert Log
  • Flashback logs
  • Archived Redo logs
  • Undo tablespace and its associated datafiles
  • Default database temporary tablespace and files (however, each PDB can also have its own separate temp tablespace)

The Oracle data dictionary metadata is principally stored in the root container at the CDB level; each PDB's dictionary holds internal links that point back to the corresponding definitions in the root.
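
A practical upshot of this design is the new CDB_* family of dictionary views, which let a common user query across every open container from the root. For example, an illustrative count of objects per container:

-- run in the root container (CDB$ROOT) as a privileged common user
select con_id, count(*) as objects
from   cdb_objects
group  by con_id
order  by con_id;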

Deployment Architectural considerations and benefits

If you need to deploy a CDB with multiple PDBs and are wondering what you need to consider and how to optimize the result, some items for consideration (while not a complete list of review points) are:

  • Adequate, unique PDB names, which translate into the service names used for access
  • Resource management and delineation of resource consumption for each PDB (see the sketch after this list)
  • Sizing and impact of each PDB on the shared Redo Logs, Temp Tablespace and Undo tablespaces
  • Understanding that all Oracle upgrades are performed at the CDB level and impact all associated PDBs
  • Strategy to leverage new security features related to separation of users and responsibilities between and across the CDBs and PDBs
  • Strategy for consolidated performance tuning
  • Communication to the business of the benefits of reduced cost of platform and ease of platform management
  • Plan for consolidation, especially for same version databases with small storage and memory footprints
  • Consideration of Oracle Data Guard configuration for related PDBs
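
On the resource management point, 12c extends Database Resource Manager with CDB-level plans that apportion CPU between PDBs. Below is a minimal sketch under assumed names (a plan called cdb_share_plan and two hypothetical PDBs, pdb1 and pdb2):

begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_cdb_plan(
    plan    => 'cdb_share_plan',
    comment => 'Apportion CPU between PDBs');
  -- pdb1 gets three shares and may use all available CPU
  dbms_resource_manager.create_cdb_plan_directive(
    plan               => 'cdb_share_plan',
    pluggable_database => 'pdb1',
    shares             => 3,
    utilization_limit  => 100);
  -- pdb2 gets one share and is capped at half the CPU
  dbms_resource_manager.create_cdb_plan_directive(
    plan               => 'cdb_share_plan',
    pluggable_database => 'pdb2',
    shares             => 1,
    utilization_limit  => 50);
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/

-- activate the plan at the CDB level
alter system set resource_manager_plan = 'cdb_share_plan';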

Limitations and Restrictions

In spite of all the benefits, you know what they say: with every good thing comes … Below are a few restrictions (relative to the known operations of legacy databases) for RMAN on PDBs, followed by a sketch of what does still work at the PDB level:

RMAN Restrictions for PDBs

  • Tablespace point-in-time recovery
  • Duplicate database
  • Point-in-time recovery
  • Flashback operations from RMAN
  • Table recovery
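
The bread-and-butter RMAN operations do work at PDB granularity, however. A minimal sketch, run from RMAN connected to the root container and assuming a PDB named pdb1:

# rman target /   (connected to the root container)
BACKUP PLUGGABLE DATABASE pdb1;         # back up just this PDB
ALTER PLUGGABLE DATABASE pdb1 CLOSE;    # PDB must be closed before restore
RESTORE PLUGGABLE DATABASE pdb1;
RECOVER PLUGGABLE DATABASE pdb1;
ALTER PLUGGABLE DATABASE pdb1 OPEN;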

Summary

So now you have the basics of what you need to be aware of and explore from an architectural standpoint. You can dig into each section to obtain more information and see how to optimize further within the context of your requirements, resource availability and deployments, to meet your planned application needs.

At Cintra, as Database Architects, these are just a subset of the kind of considerations we delve into when looking at deploying customer databases. Get in touch if you would like to discuss how the Multitenant Option might be of value to your business.

Written by Joseph Akpovi, Lead DBA, Cintra NY – May 2017

Cintra Data Masking and Subsetting Solution

Most of us know that Oracle has an option that can be added to Enterprise Edition databases called the Data Masking and Subsetting Pack. This is a powerful solution, typically used to mask data when it is exported from a Production database and loaded into a Non-Production environment, so that our Developers do not have access to sensitive data.

However… what happens if we're running an environment with 120 CPU cores, and the cost of adopting this software option is prohibitive? But our auditors still require it? And what if we need a similar solution for SQL Server? Are we out of luck?

Fortunately not! Here at Cintra, we have developed a comprehensive alternative which is almost as feature-rich as the licensed option, but does not require any additional licensing at all! Here's a short summary of the solution and how we deliver it.

Overview
Data Masking and Subsetting is one of the architected solutions in Cintra's array of blueprinted solutions. At Cintra, we pride ourselves on partnered customer relationships, wherein we work closely with our many satisfied clients to jointly identify and implement the best solution for their unique business needs.

The Cintra Data Masking and Subsetting tool is a customizable, rule-based solution for de-sensitizing column data at source as it is copied from a production database to non-production databases. The software intelligently hides sensitive column values, such as social security numbers, credit card numbers, names, addresses and phone numbers, from unauthorized access, thereby preventing data breaches. The masked values are repeatable and realistic, so that the application can function as if the data were not masked.

The data is masked using methods that ensure repeatability from one run to another, providing stable development and test environments. The cardinality and distribution of masked values are maintained, to help ensure that the application functions and performs as if the data were not masked. Masking is applied carefully to preserve significant characters in intelligent values, such as the first three characters of social security numbers.
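
As a simple illustration of the principle (not the actual Cintra implementation), a deterministic PL/SQL function might mask a social security number while preserving its first three characters, returning the same masked value for the same input on every run:

create or replace function mask_ssn (p_ssn in varchar2)
  return varchar2 deterministic
is
  l_hash number;
begin
  -- derive a repeatable number from the input value
  l_hash := dbms_utility.get_hash_value(p_ssn, 1, 1000000);
  -- preserve the significant leading characters, mask the remainder
  return substr(p_ssn, 1, 3) || '-' ||
         lpad(to_char(mod(l_hash, 100)), 2, '0') || '-' ||
         lpad(to_char(mod(trunc(l_hash / 100), 10000)), 4, '0');
end mask_ssn;
/

Because the function is deterministic, relationships that depend on the masked column survive from one refresh to the next, which is part of what keeps development and test environments stable.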

Data Masking and Subsetting Service and Solution
Cintra's Security Solution Architects will work jointly with you to understand, define and document your unique business security needs, optimally matching those needs to the capabilities of the Cintra Data Masking and Subsetting tool. The following high-level interactions will occur between the Cintra Security Solution Architect and your team:

  • Business Security Requirements Workshops
    • Full Cintra team of Security Solution Architects
    • Facilitated sessions with Business experts to identify data masking requirements
    • Focused session with Security Managers to understand business direction / growth
  • Identify current processes to participate in Data Masking
    • Perform an initial inventory and assessment of each production data copy
    • Analysis of the level of customization required
    • Analysis of migration options to each non-production environment
  • Business case and cost savings of the new architecture
    • Provide a fully costed approach for the appropriate selected solution
    • Analysis of potential cost savings and reduction on alternatives

Proven Success with Cintra World Class Security Architecture Services
Cintra Security Solution Architects have delivered security architecture services for 20 years, resulting in many successful deployments.  Cintra is at the forefront of evolving and refreshing security architectures to ensure that they remain secure, agile and cost-effective.

Written by Jack Augustin, Database Architect, Cintra TOLA – May 2017

Modern Data Security: Protecting Your Most Valuable Assets

Information security has always been a hot topic in the IT industry; however, in recent months the focus on this area has further increased. Whether due to national events such as the alleged hacking of the US election by foreign parties, international crises such as the WannaCry ransomware outbreak, or personal information attacks such as the latest PopcornTime ransomware attack (which offers compromised individuals the chance to get their data back if they infect other parties with the virus), we've never had more data at greater risk than we do at the time of writing.

In general, we're seeing the number of security breaches increase year-on-year, with a clear up-tick in breaches motivated by financial gain or espionage. While traditional hacking and malware attacks are rising steadily, so are more recent phenomena such as social media attacks and breaches achieved by exploiting senior individuals in organizations through their personal information. The variety of devices accessing sensitive data also broadens this threat landscape, with mobile devices becoming responsible for more and more breaches over time.

Due to these factors, the average cost of a data breach has risen to $4 million (a 29% increase over the last few years) and the cost per compromised data record has risen to a shocking $158. Clearly, it is now not a question of whether organizations can afford enterprise-class security, and more a question of whether they can afford not to.

Oracle 12c Multitenancy: Pluggable Database Concepts

A few years ago, Oracle released a multitenant database architecture that featured a concept called Pluggable Databases. The basic idea behind this architecture is a set of containers. A multitenant database is comprised of:

  • One root container which acts as the main repository for metadata and common users
  • One seed pluggable database that acts as a template
  • One or more pluggable databases, created as needed from the seed template

In the past, a whole new database with all its base resource allocation requirements was created when there was a demand for it. Pluggable databases can be used to remove this complexity while still giving end users the experience of having a separate environment.

Another advantage of the pluggable database is its portability. Moving a pluggable database is as simple as issuing an “ALTER PLUGGABLE DATABASE” command to create an XML file with the necessary metadata for the physical structure. Data files can then be moved to another container database on that server or a different server. Once the data files and XML file are in place, issuing a “CREATE PLUGGABLE DATABASE” command with the appropriate CREATE_FILE_DEST or SOURCE_FILE_NAME_CONVERT clauses “plugs” the database in for immediate use.
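
A minimal sketch of the unplug/plug cycle (assuming a PDB named pdb1 and illustrative file paths):

-- on the source CDB
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/u01/stage/pdb1.xml';

-- copy the XML file and the PDB's datafiles to the target, then on the target CDB:
create pluggable database pdb1
  using '/u01/stage/pdb1.xml'
  source_file_name_convert = ('/u01/oradata/cdb1/pdb1/', '/u02/oradata/cdb2/pdb1/')
  nocopy;   -- the datafiles are already in their final location
alter pluggable database pdb1 open;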

Data migration using this concept eliminates several of the pitfalls of more traditional methods. For instance:

  • All metadata is contained within the database so no users, objects or code will be lost
  • Plugging a database into a container on the same server is nearly instantaneous, eliminating the time previously needed for RMAN backup/recovery or Data Pump export/import
  • Pluggable databases are generally smaller so moving the data files across the network to the new location can be very quick

This architecture also makes it possible to create and manage databases according to logical boundaries. Pluggable databases can be created to separate data into specific areas for:

  • Applications
  • User communities
  • Client specific environments
  • Life cycle stages
  • Regulatory compliance requirements

It is important to demystify the multitenant architecture by viewing it as a tool for secure and successful data separation and migration. Experimenting with this feature will no doubt convince anyone of its value and necessity to meet today’s fast paced demands for more databases.

Written by Michael Paddock, Principal Oracle DBA, Cintra Texas – May 2017

Oracle Public Cloud Bursting: Benefits and Considerations

Some months ago, I migrated a customer's environment to the Oracle Public Cloud. It was a very straightforward migration, nothing complex: a single-instance database hosting many applications, with over 300 users accessing those applications at any point in time. The Oracle Cloud subscription chosen by the customer was a non-metered, hosted Oracle Platform as a Service environment, which includes Oracle Database Cloud Service, Oracle Database Backup Cloud Service and a host of other cloud tools, such as the DBaaS monitor, which gives you a view of what is going on in your environment (CPU and memory utilization, for example) as well as Real-Time SQL Monitoring.

In addition, you can also use the cloud console to perform a variety of self-service functions. Backups were configured to both cloud storage and local storage using Oracle Database Backup Cloud Service. As mentioned earlier, this is a non-metered service, and to set the context for this post, a brief description of the types of service subscription offered by Oracle Cloud is appropriate.

Basically, Oracle offers two Cloud service subscriptions, namely Metered and Non-Metered service offerings. With a metered cloud service you are charged based on actual usage of the service resources, on an hourly or monthly basis, while a non-metered service is essentially a monthly or annual subscription for a fixed service configuration, which you typically cannot change.

The customer wanted the flexibility to provision additional resources when required, such as during their seasonal peak periods, which can last one to two months, twice a year. During these periods many more users work the system heavily, and being able to add capacity for that window gives the customer the needed flexibility. The non-metered service is more cost-effective for this customer and addresses most of their resource requirements, but it is not flexible, as the configuration cannot be changed.

When Oracle announced the bursting feature for non-metered services in June 2016, it was just what my customer had been waiting for, and we put the feature to the test straight away. The feature is also straightforward: log in to the cloud console, select a new compute shape and click “Yes, Scale Up/Down Service” to apply the changes.

It worked quite well; in fact, in a matter of a few minutes the system was back up and running with the additional capacity, with no need to resize database components such as the SGA and PGA; these were already sized appropriately based on the new compute shape, though if you have specific sizing requirements they can be adjusted as needed. All looked good, and this bursting feature seemed great and cost-effective, giving the flexibility to spend on extra capacity only when required.

However, a few minutes after scaling up I received the notification below via email:

“Your services are suspended due to exceeding resource quota. New instances can’t be created and existing instances will not be able to consume more of the resources that have exceeded the quota”.

Suspended? Really? What for? As I wasn't quite sure what was going on, I opened a service request with Oracle Support detailing what had been done, and they came back saying:

“There is a breach in your quota services. Please do free up resources to resume the suspension”.

I then referred them back to their own documentation for the June 2016 update (https://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbn/index.html#CSDBN-GUID-4696D271-7B1A-43B5-9EF8-8C8179CAC1C9), from which I have extracted the key information below:

===========================================================

Changes to Oracle Database Cloud Service non-metered subscriptions

If you have a non-metered subscription to Oracle Database Cloud Service, you can now use additional capacity above your non-metered subscription rate (also referred to as “bursting”). You will be charged per hour and billed monthly in arrears for this increased capacity, using the “Pay as You Go” model. Pricing for this increased capacity will be based on the current Per Hour list price as shown on the Pricing tab at https://cloud.oracle.com/database .

It is clear, therefore, that there is a need to closely monitor usage of even non-metered Cloud solutions (whether on Oracle's Cloud or any other), to ensure that bursting, while useful from a business point of view, is budgeted and accounted for from a financial point of view as well.

Here at Cintra, we have developed a comprehensive set of monitoring and alerting scripts for all Public Cloud environments, to ensure that you can keep a finger on the pulse of your usage, and react accordingly if it needs to change. Contact us now to find out more.

Written by Hakeem Ambali, Oracle DBA, Cintra UK – May 2017

Oracle Compute Cloud Service (IaaS) New Features

The latest versions of Oracle's Infrastructure as a Service (IaaS) Compute Cloud Service have added a slew of features and functionality that keep improving the capabilities and performance of the service.

The most notable additions in the latest releases are:

High I/O shapes featuring NVMe SSD disks (February 2017)

It is now possible to use NVMe SSD disks as non-persistent data disks attached to IaaS instances.

The NVMe disks are initially available only in some sites. You can check whether they are available by listing the possible shapes in the Create Instance wizard of your IaaS Cloud Console, or by using the GET /shape/ method of the Compute API. For more information, see REST API for Oracle Compute Cloud Service (IaaS).

The size of the disk is determined by the shape you select:

Shape     OCPUs   vCPUs   Memory (GB)   Size of SSD Disk (GB)
OCIO1M        1       2            15                     400
OCIO2M        2       4            30                     800
OCIO3M        4       8            60                    1600
OCIO4M        8      16           120                    3200
OCIO5M       16      32           240                    6400


Building your own Windows images (April 2017)

You can now use your own license to build private Windows images and add them to your Oracle Compute Cloud Service account. You can use these images to create new Windows instances in Oracle Compute Cloud Service.

Using the REST API to set up a VPN connection using VPN as a Service (VPNaaS) (April 2017)

You can set up a VPN connection between your data center and IP networks in your Oracle Compute Cloud Service site using VPN as a Service (VPNaaS). VPNaaS uses IPSec-based tunnels to carry encrypted traffic between your data center and your instances in Oracle Compute Cloud Service. You can set up a VPNaaS connection using the REST API.

Shutting down and restarting instances

Instance life cycle management operations have been enhanced to allow you to shut down an instance that uses a persistent bootable storage volume. Earlier, you could delete an instance by stopping the instance orchestration. This would stop all the instances and other resources defined in the orchestration. Now you can shut down an instance without changing the state of other objects in the orchestration. You can restart the instance later. The instance resumes with the same configuration and data.

Orchestrations v2 REST API  (April 2017)

With this release, Oracle introduced an improved version of the orchestration framework, which provides a modular and flexible approach to creating and managing multiple resources through a single JSON file.

The new framework improves on the previous version by adding these features:

  • The ability to create references across interdependent objects, so that entire hierarchies can be created or restored without manually ensuring that dependencies are satisfied.
  • The ability to update some attributes of objects while the orchestration is running
  • The ability to manage the state and lifecycle of each instance in an orchestration, without disturbing the state of other resources

A full comparison between the previous version of the orchestration framework and the new one can be found here: About Orchestrations v2

Persistent SSD block storage (May 2017)

In some sites, SSD block storage is now available. You can attach these persistent SSD block storage volumes to instances as either boot disks or data disks.

You can check the detailed description of the new features here: Oracle Cloud What’s New for Oracle Compute Cloud Service (IaaS)

For more information on how the Oracle Public Cloud might benefit your business, contact Cintra today.

Written by Mattia Rossi, Cloud Architect, Cintra Italy – May 2017

ODA X6-2 Small / Medium: Important New Tools in Our Solution Architecture Toolbox

We at Cintra have been fans of the Oracle Database Appliance (ODA) platform since it was released in late 2011, seeing it as a great fit for some common use cases among our customers, namely:

  • Modernization, Upgrades and Cloud Roadmaps
    • Customers looking to refresh, upgrade and adopt a modern platform
    • Customers looking to adopt a portable, agile platform as part of an overall journey towards Cloud adoption
  • Oracle SE to EE Upgrades and 12c Adoption
    • SE customers wanting to upgrade to EE without the higher cost of traditional platforms, by leveraging the ODA’s capacity on demand CPU model
    • Customers looking to upgrade to Database 12c on an optimized, templated appliance
  • Cost Saving Measures
    • Customers looking to reduce hardware and software costs and do more with less
  • Simplified RAC Database Clustering
    • Customers who need RAC clustering redundancy but don’t have the time or skills to integrate
  • Windows Customers
    • Customers seeking greater resilience with Linux and clustering, rolled out in a templated, wizard-driven fashion
  • Storage Performance
    • Customers looking for a low cost fix for storage performance issues
  • Apps in-a-box
    • Applications customers looking to consolidate the full apps stack onto a single platform

While the “traditional” ODA (made up of two database servers and at least one storage tray) is still available, rebranded as the “ODA X5-2 HA” for its high availability capabilities, Oracle recently released two new models named the ODA X6-2 Small and the ODA X6-2 Medium.

These new models add some interesting capabilities to the Oracle Engineered Systems family, in particular:

  • Support for Oracle Standard Edition
  • Lowest cost of entry into the Engineered Systems platforms
  • All flash high performance storage
  • Small form factor (1 rack unit)

In a nutshell, the new models are single-node database servers (virtualization is currently not supported on them) designed for small to medium database workloads where high availability is not required (or where a more limited recovery time using Data Guard is an option.) The technical specs are as follows:

  • ODA X6-2 Small
    • Single socket with 10 CPU cores
    • 128GB RAM, upgradeable to 384GB
    • 4TB NVMe Flash, upgradeable to 12.8TB
  • ODA X6-2 Medium
    • Two sockets with a total of 20 CPU cores
    • 256GB RAM, upgradeable to 768GB
    • 4TB NVMe Flash, upgradeable to 12.8TB

These appliances are deployed in a similar way to the traditional ODA, using a new streamlined web console, and benefit from the same automated monitoring, faster provisioning and patching of the entire technology stack.

While one size doesn’t fit all in solution architecture design, these new ODAs do add some interesting tools to our toolbox. If we team these up with some of Oracle’s new Cloud offerings, we can get extremely creative in building new database architectures that offer extreme agility and business value, for very modest investments.

For more information on the ODA family, and how they might form part of a solution architecture to benefit your business, contact Cintra today!

Written by Simon Rice, VP Enterprise Services, August 2016

Oracle 12c: Real-Time Database Operation Monitoring

In the latest release of Database 12c, Oracle has come up with another brilliant feature for performance monitoring: Real-Time Database Operation Monitoring.

Real time SQL monitoring was introduced in 11g and helped with monitoring SQL query performance in real time.

However, only individual SQL statements could be monitored in 11g. Now, imagine there is a batch operation or a PL/SQL block with multiple calls inside it which might not trigger real-time monitoring by default, but could still consume a lot of database time and CPU time.

This is where the new feature, real time database operation monitoring, comes in.

Execution of Real-Time Operation Monitoring:

The concept and the way to use this feature are straightforward, as follows:

Execute Start of Operation
-- Do Action ----
End of Operation.

Below is the script I will be using in this context; the start and end of the operation are marked by the dbms_sql_monitor calls.

I will be using the infamous scott schema and a Cartesian join on dba_objects to produce a long-running query.

var db_opid number;
-- mark the start of the composite database operation
EXEC :db_opid := dbms_sql_monitor.begin_operation('Scott.Batch.Select', forced_tracking => 'Y');
select * from emp a, dept d;
select * from emp;
-- the MONITOR hint also tracks this statement individually in real time
select /*+ MONITOR */ * from dba_objects a, dba_objects b;
-- mark the end of the operation
EXEC dbms_sql_monitor.end_operation('Scott.Batch.Select', :db_opid);

Note:

I purposely inserted the MONITOR hint because I wanted to monitor the SQL separately in real time as well. If you skip the hint, the database decides whether the SQL should be monitored based on its default rules (broadly, statements that consume more than five seconds of CPU or I/O time, or that run in parallel).

I have used forced_tracking as well to force monitoring of the operation.

Scott.Batch.Select is the name given to the operation, which we will see in the performance page.
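
You can also confirm that the operation was tracked straight from the dictionary; a quick sketch (assuming the 12c DBOP columns of V$SQL_MONITOR and the dbop_name parameter of DBMS_SQLTUNE.REPORT_SQL_MONITOR):

-- was the operation captured?
select dbop_name, dbop_exec_id, status
from   v$sql_monitor
where  dbop_name = 'Scott.Batch.Select';

-- generate the full text report for the operation
select dbms_sqltune.report_sql_monitor(
         dbop_name => 'Scott.Batch.Select',
         type      => 'TEXT')
from   dual;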

Real Time Operation Monitoring and Oracle Enterprise Manager

I am using EM Express (new in 12c), but similar (and more advanced) monitoring is available in Cloud Control as well.

From Image 1 below, you can see two elements being monitored: one is the SQL statement (because of the MONITOR hint) and the other is the database operation; the * in the Type column denotes a Database Operation.

From Image 2 you can see the new options added for filtering the type of operation in the Performance page.

There can be many use cases for operation monitoring, ranging from monitoring a PL/SQL block or a batch operation to comparing one batch operation against previous runs, and so on. Here at Cintra, this is another tool in our toolbox for advanced performance troubleshooting and tuning. Get in contact now if you would like more information.

Written by Vineet Sachdeva, Senior Oracle DBA, Cintra India – March 2016

Oracle Database Appliance X5-2: Providing 64TB of usable storage on 8TB drives at no additional cost

If you're not familiar with the Oracle Database Appliance (aka the "ODA") then you need to be; the ODA is the most successful Oracle Engineered Systems platform in terms of the number of units deployed globally. The ODA is now in its fourth incarnation, and as of October 2015 Oracle announced support for 8TB high-capacity drives, which takes the raw storage in a single drive tray to 128TB and the usable capacity to 64TB. What's really interesting is that this comes at no additional cost, with the list price remaining at an attractive $68K.

As a two node RAC cluster in a box, the benefits of a simplified RAC deployment based on an optimized Engineered System are compelling, from rapid deployments achieved in a week or less to out of the box performance based on large CPU, memory and storage resources.

So let's talk about the latest storage upgrade. Storage capacity has always been a concern for any appliance-based, pre-configured solution; early ODA versions were limited to 6TB of usable storage. This saw Oracle evolve the storage configuration from high-performance to high-capacity drives, complemented by an increased focus on SSD storage, now supplied by 4 x 400GB SSDs for frequently accessed data and 4 x 200GB SSDs to improve the performance of database redo logs. This addressed the need to optimize the typical I/O bottlenecks around database redo log files and temp and undo areas.

Now the Oracle Database Appliance X5-2 is delivered with 8TB drives, replacing the original 4TB drives. The new High Capacity storage shelf supports 128TB raw storage capacity that will provide 64TB of usable space with Normal redundancy (double-mirrored) and 42.7TB with High redundancy (triple-mirrored).

The powerful hardware specs and the capacity-on-demand database software licensing make the Oracle Database Appliance a valuable investment, with a fully integrated system of software, servers, storage and networking that can deliver high-availability database services for a wide range of application workloads.

Having the option of deploying a virtualized platform based on Oracle VM, the Oracle Database Appliance is an ideal platform to build a solution-in-a-box that extends capacity-on-demand licensing to both database and application workloads thanks to Oracle VM hard partitioning.

Below is a quick summary of the Database Appliance hardware specifications:

System Architecture

  • Two servers and one storage shelf per system
  • Optional second storage shelf may be added to double the storage capacity

Processor

  • Two 18-core Intel® Xeon® processors E5-2699 v3 @2.30GHz per server

Main Memory

  • 256 GB (8 x 32 GB) per server
  • Optional memory expansion to 512 GB or 768 GB

Storage

  • Capacity per storage shelf:
    • Sixteen 3.5-inch 8 TB 7.2K rpm HDDs
    • 128 TB raw, 64 TB (double-mirrored) or 42.7 TB (triple-mirrored) usable capacity
  • SSD layer
    • Four 2.5-inch (3.5-inch bracket) 400 GB ME SSDs for frequently accessed data
    • Four 2.5-inch (3.5-inch bracket) 200 GB HE SSDs for database redo logs
  • Server internal drives
    • Two 2.5-inch 600 GB 10K rpm HDDs (mirrored) per server for OS

Network

  • Four onboard auto-sensing 100/1000/10000 Base-T Ethernet ports per server
  • Four PCIe 3.0 slots per server:
    • PCIe internal slot: dual-port internal SAS HBA
    • PCIe slot 3: dual-port external SAS HBA
    • PCIe slot 2: dual-port external SAS HBA
    • PCIe slot 1: dual-port InfiniBand HCA
  • Optional 10GbE SFP+ external networking connectivity requires replacement of the InfiniBand HCA

To learn more about how this architecture may benefit your business, contact Cintra today!

Written by Abdul Sheikh, CTO, and Paolo Desalvo, Enterprise Architect – October 2015

Oracle GoldenGate 12c: Integrated Configurations

Setting up a GoldenGate environment entails making several decisions on how to structure the environment. Choices like trail file location, naming conventions for processes, etc. are usually pretty simple. With the release of 12c, there is a decision that merits very close attention: whether to use an INTEGRATED configuration or a CLASSIC one. The INTEGRATED approach has been around since 11gR2 but there are some significant and important enhancements in the 12c implementation that make the INTEGRATED set up more desirable.

Here are some advances and improvements that comprise the new version of the INTEGRATED option.

1. INTEGRATED APPLY

Administration of the APPLY process is simplified by using the special INTEGRATEDPARAMS parameter. In the past, a parallel-like APPLY process was achieved by creating parameter files that split a schema into multiple threads, each applying only a subset of the tables to be replicated. Now the APPLY takes care of this and boosts performance even more. True parallel processes that are aware of their counterpart APPLY threads can be started every time the process runs. This provides better scalability and load balancing. The way this is set up is amazingly simple, as the sketch below shows. It is well worth exploring this capability more closely.
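
As a rough sketch (the process name, credential alias and schema mapping are illustrative), an integrated Replicat with parallel apply servers needs only a couple of lines in its parameter file:

-- rep1.prm: illustrative integrated Replicat parameter file
REPLICAT rep1
USERIDALIAS ggadmin
-- request integrated apply with four parallel apply servers
DBOPTIONS INTEGRATEDPARAMS (PARALLELISM 4)
MAP scott.*, TARGET scott.*;

-- created in GGSCI with: ADD REPLICAT rep1, INTEGRATED, EXTTRAIL ./dirdat/rt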

2. INTEGRATED CAPTURE

CAPTURE can now be closely coupled with the database. This means that CAPTURE can access the data dictionary and even use the database's UNDO tablespace. Because of this, CAPTURE can support more advanced features and data types, and is fully compatible with Oracle's various compression and encryption algorithms.
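
Enabling integrated capture is equally compact; here is a sketch as a GGSCI obey script (the Extract name, credential alias and trail are illustrative):

-- add_exta.oby: illustrative GGSCI obey script
DBLOGIN USERIDALIAS ggadmin
-- register the Extract with the database log mining server
REGISTER EXTRACT exta DATABASE
-- create the Extract in integrated capture mode
ADD EXTRACT exta, INTEGRATED TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/ea, EXTRACT exta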

3. DOWNSTREAM CAPTURE

This is an adaptation of the STREAMS design, which allows in-memory CAPTURE and APPLY using the Data Guard log transport. The integrated setup is an option in this configuration as well. The performance of this method is invaluable where low latency is required, even in the case of very heavy data loads. Changes in the database are written to standby redo logs, which makes real-time mining possible. If the APPLY process lags too far behind, it will mine the archived redo logs. One of the great things about this is that, once the APPLY catches up, the real-time mining process is automatically started back up again, with no manual intervention needed.

Using an integrated setup for replication has taken a big step forward in the latest versions of GoldenGate 12c. In addition to providing access to the data dictionary, the parallel capabilities really make this option the most desirable way to create a modern GoldenGate environment.

For assistance in designing and deploying your new GoldenGate environment, contact Cintra today!

Written by Michael Paddock, Principal DBA, Cintra Texas – October 2015