ODA X6-2 Small / Medium: Important New Tools in Our Solution Architecture Toolbox

We at Cintra have been fans of the Oracle Database Appliance (ODA) platform since it was released in late 2011, seeing it as a great fit for some common use cases among our customers, namely:

  • Modernization, Upgrades and Cloud Roadmaps
    • Customers looking to refresh, upgrade and adopt a modern platform
    • Customers looking to adopt a portable, agile platform as part of an overall journey towards Cloud adoption
  • Oracle SE to EE Upgrades and 12c Adoption
    • SE customers wanting to upgrade to EE without the higher cost of traditional platforms, by leveraging the ODA’s capacity on demand CPU model
    • Customers looking to upgrade to Database 12c on an optimized, templated appliance
  • Cost Saving Measures
    • Customers looking to reduce hardware and software costs and do more with less
  • Simplified RAC Database Clustering
    • Customers who need RAC clustering redundancy but don’t have the time or skills to integrate
  • Windows Customers
    • Customers seeking greater resilience with Linux and clustering, rolled out in a templated, wizard-driven fashion
  • Storage Performance
    • Customers looking for a low cost fix for storage performance issues
  • Apps in-a-box
    • Applications customers looking to consolidate the full apps stack onto a single platform

While the “traditional” ODA (made up of two database servers and at least one storage tray) is still available, rebranded as the “ODA X5-2 HA” for its high availability capabilities, Oracle recently released two new models named the ODA X6-2 Small and the ODA X6-2 Medium.

These new models add some interesting capabilities to the Oracle Engineered Systems family, in particular:

  • Support for Oracle Standard Edition
  • Lowest cost of entry into the Engineered Systems platforms
  • All flash high performance storage
  • Small form factor (1 rack unit)

In a nutshell, the new models are single-node database servers (virtualization is currently not supported on them) designed for small to medium database workloads where high availability is not required (or where a more limited recovery time using Data Guard is an option). The technical specs are as follows:

  • ODA X6-2 Small
    • Single socket with 10 CPU cores
    • 128GB RAM, upgradeable to 384GB
    • 4TB NVMe Flash, upgradeable to 12.8TB
  • ODA X6-2 Medium
    • Two sockets with a total of 20 CPU cores
    • 256GB RAM, upgradeable to 768GB
    • 4TB NVMe Flash, upgradeable to 12.8TB

These appliances are deployed in a similar way to the traditional ODA, using a new streamlined web console, and they benefit from the same automated monitoring and the same fast provisioning and patching of the entire technology stack.

While one size doesn’t fit all in solution architecture design, these new ODAs do add some interesting tools to our toolbox. If we team these up with some of Oracle’s new Cloud offerings, we can get extremely creative in building new database architectures that offer extreme agility and business value, for very modest investments.

For more information on the ODA family, and how they might form part of a solution architecture to benefit your business, contact Cintra today!

Written by Simon Rice, VP Enterprise Services, August 2016

Oracle 12c: Real-Time Database Operation Monitoring

In the latest release of Database 12c, Oracle has come up with another brilliant feature for performance monitoring: Real-Time Database Operation Monitoring.

Real-Time SQL Monitoring was introduced in 11g and helps track the performance of SQL queries as they execute.

However, only individual SQL statements could be monitored in 11g. Now, imagine a batch operation or a PL/SQL block containing multiple calls, none of which may trigger real-time monitoring by default, but which together consume a lot of database and CPU time.

This is where the new feature, Real-Time Database Operation Monitoring, comes in.

Execution of Real-Time Operation Monitoring:

The concept and the way to use this feature are straightforward:

Execute the start of the operation
-- do the work --
Execute the end of the operation

Below is the script I will be using; the first and last lines mark the start and end of the operation.

I will be using the well-known SCOTT schema, plus a Cartesian join on DBA_OBJECTS to produce a long-running query.

var db_opid number;
EXEC :db_opid := dbms_sql_monitor.begin_operation('Scott.Batch.Select', forced_tracking => 'Y');
select * from emp a, dept d;
select * from emp;
select /*+ MONITOR */ * from dba_objects a, dba_objects b;
EXEC dbms_sql_monitor.end_operation('Scott.Batch.Select', :db_opid);


I purposefully inserted the MONITOR hint because I wanted to monitor that SQL separately in real time as well. If you skip the hint, the database decides for itself whether the SQL should be monitored, based on its default rules (statements that run in parallel, or that consume at least five seconds of combined CPU and I/O time).

I have used forced_tracking as well to force monitoring of the operation.

Scott.Batch.Select is the name given to the operation, which is what we will see in the performance page.
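
While the operation runs, it can also be tracked from SQL*Plus; here is a sketch against the 12c V$SQL_MONITOR view, which tags each monitored statement with the operation name:

```sql
-- Each monitored statement in the operation appears in V$SQL_MONITOR,
-- tagged with the operation name and an execution ID.
SELECT dbop_name,
       dbop_exec_id,
       status,
       elapsed_time,
       cpu_time
FROM   v$sql_monitor
WHERE  dbop_name = 'Scott.Batch.Select';
```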

Real Time Operation Monitoring and Oracle Enterprise Manager

I am using EM Express (new in 12c) but similar (and more advanced) monitoring is available in Cloud Control as well.

From Image 1 below you can see two elements being monitored: one is the SQL statement (because of the MONITOR hint) and the other is the database operation; the * in the Type column denotes a Database Operation.

From Image 2 you can see new options added for filtering the type of operation in the Performance Page.

There can be many use cases for operation monitoring, ranging from monitoring a PL/SQL block or a batch operation to comparing one batch run against previous runs. Here at Cintra, this is another tool in our toolbox for advanced performance troubleshooting and tuning. Get in contact now if you would like more information.
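
A text report for the complete operation can also be produced outside the GUI; here is a sketch using the 12c reporting API, which accepts the operation name directly:

```sql
-- Generate a text-format report for the named database operation.
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         dbop_name => 'Scott.Batch.Select',
         type      => 'TEXT')
FROM   dual;
```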

Written by Vineet Sachdeva, Senior Oracle DBA, Cintra India – March 2016

Oracle Database Appliance X5-2: Providing 64TB of usable storage on 8TB drives at no additional cost

If you’re not familiar with the Oracle Database Appliance (aka the “ODA”) then you need to be; the ODA is the most successful Oracle Engineered Systems platform in terms of the number of units deployed globally. The ODA is now in its fourth incarnation, and as of October 2015 Oracle announced support for 8TB high capacity drives, which takes the raw storage in a single drive tray to 128TB and the usable capacity to 64TB. What’s really interesting is that this comes at no additional cost, with the list price remaining at an attractive $68K.

As a two node RAC cluster in a box, the benefits of a simplified RAC deployment based on an optimized Engineered System are compelling, from rapid deployments achieved in a week or less to out of the box performance based on large CPU, memory and storage resources.

So let’s talk about the latest storage upgrade. Storage capacity has always been a concern for any appliance-based pre-configured solution; early ODA versions were limited to 6TB of usable storage, which saw Oracle evolve the storage configuration from high performance to high capacity drives, complemented by an increased focus on SSD storage: now 4 x 400GB SSDs for frequently accessed data and 4 x 200GB SSDs to improve the performance of database redo logs. This addressed the need to optimize the typical I/O bottlenecks around database redo log files and temp and undo areas.

Now the Oracle Database Appliance X5-2 is delivered with 8TB drives, replacing the original 4TB drives. The new High Capacity storage shelf supports 128TB raw storage capacity that will provide 64TB of usable space with Normal redundancy (double-mirrored) and 42.7TB with High redundancy (triple-mirrored).

The powerful hardware specs and the capacity-on-demand database software licensing makes the Oracle Database Appliance a valuable investment, with a fully integrated system of software, servers, storage and networking that can deliver high availability database services for a wide range of application workloads.

With the option of deploying a virtualized platform based on Oracle VM, the Oracle Database Appliance is an ideal platform on which to build a solution-in-a-box, extending capacity-on-demand licensing to both database and application workloads thanks to Oracle VM hard partitioning.

Below is a quick summary of the Database Appliance hardware specifications:

System Architecture

  • Two servers and one storage shelf per system
  • Optional second storage shelf may be added to double the storage capacity

Processors

  • Two 18-core Intel® Xeon® processors E5-2699 v3 @2.30GHz per server

Main Memory

  • 256 GB (8 x 32 GB) per server
  • Optional memory expansion to 512 GB or 768 GB

Storage

  • Capacity per storage shelf:
    • Sixteen 3.5-inch 8 TB 7.2K rpm HDDs
    • 128 TB raw, 64 TB (double-mirrored) or 42.7 TB (triple-mirrored) usable capacity
  • SSD layer
    • Four 2.5-inch (3.5-inch bracket) 400 GB ME SSDs for frequently accessed data
    • Four 2.5-inch (3.5-inch bracket) 200 GB HE SSDs for database redo logs
  • Server internal drives
    • Two 2.5-inch 600 GB 10K rpm HDDs (mirrored) per server for OS

Networking and Expansion

  • Four onboard auto-sensing 100/1000/10000 Base-T Ethernet ports per server
  • Four PCIe 3.0 slots per server:
    • PCIe internal slot: dual-port internal SAS HBA
    • PCIe slot 3: dual-port external SAS HBA
    • PCIe slot 2: dual-port external SAS HBA
    • PCIe slot 1: dual-port InfiniBand HCA
  • Optional 10GbE SFP+ external networking connectivity requires replacement of the InfiniBand HCA

To learn more about how this architecture may benefit your business, contact Cintra today!

Written by Abdul Sheikh, CTO, and Paolo Desalvo, Enterprise Architect – October 2015

Oracle GoldenGate 12c: Integrated Configurations

Setting up a GoldenGate environment entails making several decisions on how to structure the environment. Choices like trail file location, naming conventions for processes, etc. are usually pretty simple. With the release of 12c, there is a decision that merits very close attention: whether to use an INTEGRATED configuration or a CLASSIC one. The INTEGRATED approach has been around since 11gR2, but there are some significant and important enhancements in the 12c implementation that make the INTEGRATED setup more desirable.

Here are some advances and improvements that comprise the new version of the INTEGRATED option.


Integrated Apply

Administration of the APPLY process is simplified by using the special INTEGRATEDPARAMS parameter. In the past, a parallel-like APPLY was achieved by creating parameter files that split a schema into multiple threads, each applying only a subset of the tables being replicated. Now the integrated APPLY takes care of this and boosts performance even more: true parallel apply processes, aware of their counterpart APPLY threads, are started whenever the process runs. This provides better scalability and load balancing, and the way it is set up is amazingly simple. It is well worth exploring this capability more closely.
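
As a sketch, a minimal integrated APPLY (Replicat) parameter file might look like this (the process, credential alias and schema names are hypothetical; INTEGRATEDPARAMS options are passed straight through to the database apply engine):

```
-- Hypothetical integrated Replicat parameter file (repa.prm)
REPLICAT repa
USERIDALIAS ggadmin
-- Tune the integrated apply engine: allow up to 4 parallel apply servers
DBOPTIONS INTEGRATEDPARAMS(PARALLELISM 4)
MAP scott.*, TARGET scott.*;
```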


Integrated Capture

CAPTURE can now be closely coupled with the database. This means that CAPTURE can access the data dictionary and even use the database’s UNDO tablespace. Because of this, CAPTURE can support more advanced features and data types, and is fully compatible with Oracle’s various compression and encryption algorithms.
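
Before an integrated CAPTURE can mine the logs it must be registered with the database; here is a sketch from GGSCI (the process and credential alias names are hypothetical):

```
-- In GGSCI: log in, register the Extract with the database,
-- then add it as an integrated (TRANLOG) capture process
DBLOGIN USERIDALIAS ggadmin
REGISTER EXTRACT exta DATABASE
ADD EXTRACT exta, INTEGRATED TRANLOG, BEGIN NOW
```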


Downstream Capture

This is an adaptation of the Streams design that allows in-memory CAPTURE and APPLY using the Data Guard log transport; the integrated setup is an option in this configuration as well. The performance of this method is invaluable where low latency is required, even under very heavy data loads. Changes in the source database are written to standby redo logs, which makes real-time mining possible. If the CAPTURE process lags too far behind, it mines the archived redo logs instead. One of the great things about this is that, once it catches up, real-time mining is automatically resumed, with no manual intervention needed.

Using an integrated setup for replication has taken a big step forward in the latest versions of GoldenGate 12c. In addition to providing access to the data dictionary, the parallel apply capabilities really make this option the most desirable way to build a modern GoldenGate environment.

For assistance in designing and deploying your new GoldenGate environment, contact Cintra today!

Written by Michael Paddock, Principal DBA, Cintra Texas – October 2015

The Oracle Optimizer: Mode Considerations

Oracle’s Cost-Based Optimizer has been with us now for a long time. Choosing the right mode setting can improve the average performance of your database queries and is often overlooked.

The Optimizer tries to determine the best execution plan for a database query based on many factors including table statistics, query predicates, joins, indexes, partitions and the Optimizer mode.

When setting the initialization parameter OPTIMIZER_MODE the typical workload of the database should be considered. By default it is set to ALL_ROWS.

Let’s consider the options:

  • CHOOSE: Still allowed but obsolete. Uses cost-based optimization where statistics are available; otherwise falls back to rule-based optimization.
  • RULE: Still allowed for backwards compatibility. Uses a fixed set of rules to determine the best query plan.
  • ALL_ROWS: This is the default setting. Causes the optimizer to determine the best query plan to return the complete result set. This generally favours scans over index lookups.
  • FIRST_ROWS: Causes the optimizer to determine the best query plan to return the first row of the query. This generally favours index lookups over scans.
  • FIRST_ROWS_n (where n=1, 10, 100 or 1000): Causes the optimizer to determine the best query plan to return the first n rows of the query. This generally favours index lookups over scans.

Generally, the default setting of ALL_ROWS will be the best option, but for some OLTP applications where only the first n rows are displayed on the screen it may be worth considering using FIRST_ROWS_n.

It is possible to test this out by setting the OPTIMIZER_MODE at the session level

ALTER SESSION SET optimizer_mode = FIRST_ROWS_100;

or by using the hint

SELECT /*+ opt_param('optimizer_mode','first_rows_100') */ ….

No matter what mode is decided upon, it is important to keep statistics up to date to ensure the optimizer has the latest information to base its calculations on, giving it the best chance of generating an optimal query plan. It should be noted that the default Oracle statistics gathering jobs are often not appropriate for larger or more complex data sets, and may lead to incomplete or inconsistent statistics.
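
As an illustration, statistics for a schema can be refreshed with DBMS_STATS; a minimal sketch, with a hypothetical schema name and the AUTO settings letting Oracle choose sample sizes and histogram columns:

```sql
-- Gather statistics for every table in the SCOTT schema,
-- letting Oracle choose the sample size and histogram columns.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SCOTT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO');
END;
/
```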

As the Optimizer evolves, the same query plans may not be generated between releases and the plans generated may not always be an improvement. Tools such as RAT (Real Application Testing) can be used to check performance prior to a production upgrade to prevent unpleasant surprises, by allowing the capture and replay of Production workloads against representative test environments, including “what if” analysis of different Optimizer modes.

By analyzing your workload, indexing, partitioning and other variables Cintra are able to make recommendations to improve your database query performance and advise on the use of Oracle tools like RAT.

Written by Ian Fergusson, Principal DBA, Cintra UK – October 2015

Private Cloud Design: Pick the Right Brick

I guess many of you still remember the good old days when your company had about three core applications, fewer than twenty users, a client layer installed on a few workstations, and all of it fitting comfortably on two single-instance databases.

And then the company grows, the business diversifies, and suddenly IT becomes a service, one that has to satisfy requirements for quality, efficiency, scalability and flexibility.

This is where the idea of the Private Cloud is born: one central infrastructure that can provide services to the business, such as:

  • IaaS – Infrastructure as a Service
  • PaaS – Platform as a Service
  • DBaaS – Database as a Service

If you think about the infrastructure you should build to fulfill the business requests, you can think of it as a construction made of primary bricks.

At the beginning, as the company is small and the business requirements are fairly rudimentary, a few small bricks are enough, but as requirements grow, you need bigger and more complex bricks, in order to deliver solid, reliable and flexible solutions.

During the last few years, Oracle has further refined its own hardware and software catalog towards cloud computing. Let’s have a look at what bricks it can provide, the features we see as their sweet spots for cloud enablement and how you could use them:

Oracle Database 12c

  • The best-performing Oracle database engine yet
  • Multitenancy
  • Ease of deployment
  • Rapid provisioning/cloning
  • Security
  • Improved availability

Oracle VM

  • Rapid environment deployment
  • VM templates
  • Extreme Scalability
  • Logical partitioning to limit software license footprint

Oracle Database Appliance

  • 12c GI/RDBMS
  • Oracle VM Manager
  • Fine grained resource management
  • High availability
  • Capacity-on-demand licensing
  • Ability to consolidate application and database workloads onto the same platform

Oracle Exadata  

  • Pre-Installed 12c GI/RDBMS
  • Allows isolated virtual clusters
  • Extreme scalability
  • Extreme performance
  • Fine grained resource management

Oracle Supercluster

  • All the performance benefits of Exadata storage cells
  • Proven robustness of SPARC hardware
  • Ability to consolidate application and database workloads onto the same platform
  • Software-in-silicon benefits for Oracle database workloads

Oracle Private Cloud Appliance

  • Implement Oracle VM in a streamlined, wizard-driven fashion
  • Allows consolidation of heterogeneous environments (database servers, app servers, VDI, etc.)
  • Trusted partitioning
  • Extreme scalability
  • Ease of operational manageability
  • Fine-tuning in resource allocations

ZFS Storage Appliance

  • Quick and easy storage provisioning and management
  • Zero-space cloning of Production databases in seconds
  • Hybrid columnar compression for Oracle database
  • Ideal platform for Information Lifecycle Management for Oracle databases
  • Scalability to Exabytes of supported storage
  • Intelligent storage optimization
  • High speed interconnection with the other Oracle appliances

At this point, once you have acquired all your bricks, the main challenge is to understand which brick you should use for each business requirement, in order to define the most appropriate architecture for your organization.

The first tool to visit to address this challenge is Oracle Enterprise Manager, which allows you to:

  • Monitor and manage all available resources in your architecture
  • Design and deploy resource pools
  • Measure usage of resources
  • Automate deployment of services
  • Define and manage service templates
  • Define and manage the cloud service catalog

Below you can find a few examples of services built and deployed using one or more bricks from the above list. This is not meant to be an exhaustive list, but should give you an idea of the power, flexibility and agility which can be gained using Oracle Engineered Systems.

Sandbox database

  • Steps
    • Creation of a new NFS share on ZFS storage appliance
    • Mount share on an existing database server
    • Creation of a clone or 12c pluggable database on the new share
  • Bricks to use
    • 12c Database
    • ZFS Storage Appliance
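
The final cloning step above can be sketched in SQL, assuming a 12c container database with the new ZFS share mounted under /u02/sandbox (all names and paths are hypothetical; SNAPSHOT COPY uses the storage layer's thin-cloning capability where available):

```sql
-- Thin-clone an existing PDB onto the newly mounted ZFS share.
CREATE PLUGGABLE DATABASE sandbox_pdb FROM prod_pdb
  SNAPSHOT COPY
  FILE_NAME_CONVERT = ('/u01/oradata/prod_pdb/', '/u02/sandbox/');
```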

New multi-database critical application

  • Steps
    • Creation of one 12c cluster database on an Exadata virtual cluster
    • Deploy pluggable databases and distribution on separate nodes
    • Deploy application on Virtual Machine guests
  • Bricks to use
    • Exadata
    • 12c Database
    • OVM

Increase Weblogic server pool

  • Steps
    • Provision new LUNs on ZFS
    • Discover new storage on ZFS using VM manager
    • Clone existing VMs onto provisioned storage
  • Bricks to use
    • Oracle VM
    • ZFS Storage Appliance

Segregated App-in-a-Box

  • Steps
    • Deployment of a new ORACLE_HOME on Oracle Database Appliance (ODA)
    • Creation of a new cluster database
    • Deployment of App VM guest using a stored template onto the same ODA
    • Enable App VM guests for automated failover using built-in ODA / OVM functionality
    • Provisioning of storage for external backup/cloning
  • Bricks to use
    • Database Appliance
    • ZFS Storage Appliance

High performance analytical application environment

  • Steps
    • Creation of a DSS cluster database
    • Deployment of App VM guests
    • Deployment of Oracle BI software
    • Provisioning of storage for cold data/backup
  • Bricks to use
    • Exadata
    • Oracle VM
    • ZFS Storage Appliance

P2V consolidation of legacy Windows systems

  • Steps
    • Provision new LUNs on ZFS
    • Deployment of new VMs on Cloud appliance, using provisioned storage
  • Bricks to use
    • Private Cloud Appliance
    • ZFS Storage Appliance

As you can see, it’s primarily a matter of choosing the right bricks and following proven best practices, in order to quickly build enterprise-class, reliable and optimized architectures for every IT service need.

Do you want to know more? Contact us and we can help you to design and deploy a cloud-enabled architecture tailored to your needs.

Written by Luca Giannone, Database Architect, Cintra UK – October 2015

Tales from Oracle OpenWorld 2015: Day 2 (Monday)

Oracle OpenWorld kicked off in earnest today, with over 65,000 people descending on the Moscone Center for their dose of Oracle knowledge! If you couldn’t make it in person, here are the Cintra team’s hottest topics of the day:

Monday Morning Keynote:
This morning’s keynote was largely in line with Larry Ellison’s presentation the previous evening, however there were some interesting predictions cited, as follows:

  • By 2025 (predictions)
    • 80% of Production workload in the Cloud
    • 100% of Dev / Test in the Cloud
    • 100% of Enterprise Data in the Cloud

Whether you believe this or not, it certainly directs our focus towards the Cloud as an architecture component that can no longer be ignored. Cintra will be presenting on exactly this topic on Thursday afternoon.

ZFS Storage Appliance Updates:
We attended a technical deep dive on some new ZFS capabilities and support best practices, and thought it would be worth sharing a couple of the more interesting points:

Firstly, a new software release is now available, which provides significant benefits in terms of encryption, storage quotas, replication and migrations. This should be considered by anyone running a ZFS Storage Appliance, regardless of make or model.

Oracle also stressed the benefits of combining a ZFS Storage Appliance with Database 12c new features, particularly the Oracle Intelligent Storage Protocol (OISP) and the combination of Hybrid Columnar Compression (HCC) and Automatic Data Optimization (ADO). These three features combine to provide the ultimate Information Lifecycle Management policy, moving your data automatically from your prime real estate storage to lower tiers, with the ZFS performing its own internal tiering between storage devices based on the Oracle database’s requirements.

Of course, Cintra have a blueprint for these configurations, so get in touch if you’re interested in setting it up and we can help you implement these features smoothly!

Exadata Updates:
A number of “new” Exadata features and capabilities were mentioned today, some of which have been available for a few months and which Oracle were essentially just reminding people of, and some of which made us sit up and pay attention! Here’s a quick synopsis:

  • New Exadata features
    • Smart Analytics
      • Analytics queries run on storage cells
      • Approx 100x faster analytics than traditional commodity hardware platforms
      • Columnar flash cache (5x faster analytics)
      • JSON / XML storage offload (3x faster analytics)
    • OLTP Benefits
      • New InfiniBand protocol reduces I/O latency to 250µs
        • The database talks directly to InfiniBand, bypassing the OS
      • Sub-second IO latency capping to remove the risk of performance degradation in the event of a drive failure
    • Consolidation Benefits
      • Workload-aware CPU resource allocation
      • Zero overhead Xen VMs on Exa
      • IB partitioning within VM for network resource management
  • Coming soon on Exadata:
    • New version of Exadata coming in 2016, engineered specifically to leverage features of Database 12cR2
    • Analytics Benefits
      • In-memory columnar within storage cell flash
      • Aggregation queries executed within storage cells
    • Smart OLTP
      • Smart cache-to-cache block transfer
      • 2x faster disk recovery
    • Consolidation
      • Hierarchical snapshots
      • 2x number of application connections
    • Smart availability
      • Extended distance clusters (stretch clusters) will soon be possible with Exadata
      • 2x faster software upgrades

Other nuggets from the show floor:
Of course, OpenWorld isn’t just about the sessions; there’s a whole exhibition floor out there with many vendors vying to tell you about their cool new wares. Here’s a few that caught our eye today:

  • Qlogic NPARs
    • New line of NICs with built-in QoS functionality
    • OS sees many virtual NICs set up on single physical card
    • Compatible with multiple VLANs and isolated for security
    • Great for OVM / VMWare – providing separation to allow running Dev/Test on same server as Prod
  • Huawei Servers
    • Low-cost Intel servers with very low heat/power requirements
    • Can run in very hot or unstable environments
  • SGI UV3 series servers
    • Many nodes in the same chassis combine to form a single physical server
    • Removes the need for clustering complexity, allowing one Oracle instance to span many physical nodes
    • Designed for high performance – 7GB/s throughput
    • Designed for In-Memory DB

As with everything at OpenWorld, there’s not enough time to write about every cool thing we heard today, but we hope the above gives you an idea of what’s coming on some of Oracle’s major platforms.

We’ll be back tomorrow with more news and updates!
Written by the Cintra Architect Team – October 2015

Tales from Oracle OpenWorld 2015: Day 1 (Sunday) – The Oracle Cloud Matures

Larry Ellison kicked off Oracle Openworld on Sunday with a typically engaging presentation, which focused largely, as we all expected, on Oracle’s cloud offerings.

The key messages from Ellison were that the IT industry is in the middle of a paradigm shift where more and more companies will be putting more and more services in the Cloud, and that Oracle has evolved its offerings from the original SaaS catalog, to PaaS offerings, and now a full IaaS capability.

Ellison outlined the following key design goals of the Oracle Public Cloud:

  • Oracle Cloud Design Goals
    • Lowest acquisition price
      • Oracle plan to match or beat AWS
      • Automation / elimination of human error
      • Productivity increases and management costs decrease
    • Fault tolerance
      • n+1 architecture across the board
    • Fastest Database / Middleware / Applications / Analytics
      • Evolution of storing more of your data in a high speed format on a high speed device with every release
      • Row-format data evolves to Columnar
      • Disk storage evolves to PCIe-attached flash
    • Standards baked in
      • Open standards (SQL, Hadoop, Java, Ruby, etc) provide portability between various cloud vendors
      • This is good for the customer, but means Oracle need to focus on customer satisfaction to avoid churn!
    • Compatibility / portability
      • Manage private and public clouds as a single estate
      • Portability between on-premise and cloud
    • Security – always-on continuous defense
      • Security in silicon
      • Latest patches are always applied
      • Data encryption always-on in the cloud

In addition to the general Cloud goals, Ellison covered many new application-related cloud services, but for those of us from a database background there were a few interesting (and long-awaited) nuggets to add:

  • Oracle Database 12cR2
    • Due in 2016
    • Includes many new capabilities, including expanded multitenancy
  • Oracle Exadata Cloud Services
    • Exadata in the Cloud, with all software options enabled for one metered price
    • Includes In-memory database and in-flash database
  • Big Data offerings
    • Oracle Big Data Preparation Cloud Service
    • Oracle Big Data Discovery Cloud Service

These items were only touched on at a very high level, so we look forward to learning more as the week progresses!

Look for our daily updates with all of the hottest news from Oracle OpenWorld 2015!
Written by the Cintra Architect Team – October 2015

Oracle Exadata: A Few Features You May Not Be Aware Of

Oracle Exadata represents the most capable and powerful converged system on the planet for customer businesses. While most people are familiar with flash cache and storage cells, Cintra occasionally runs into a few features with which even the most experienced Database Machine Administrators are not familiar. Here is a list of some that we typically address as part of our optimization services.

Appliance Mode

“appliance.mode” is an ASM disk group attribute which improves disk rebalancing times so that redundancy is restored much faster after a disk drop or addition operation, or after a drive failure.

Database Machine Administrators who have upgraded to a recent Exadata release (yes, it’s been available for a while!) can use this attribute. It needs to be set at the ASM level, using a traditional “alter diskgroup” SQL statement.

As with all Oracle features, there are restrictions when setting this. Two key considerations are that compatible.asm must be at a sufficiently recent version on the disk group, and cell.smart_scan_capable must be set to TRUE.
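
As a sketch (the disk group name is hypothetical; the new setting takes effect at the next rebalance):

```sql
-- Enable appliance mode on the DATA disk group...
ALTER DISKGROUP data SET ATTRIBUTE 'appliance.mode' = 'TRUE';
-- ...then trigger a rebalance so the new extent placement takes effect.
ALTER DISKGROUP data REBALANCE POWER 11;
```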

Content Type

In a later release of Oracle Grid Infrastructure, Oracle introduced a new ASM disk group attribute, “content.type”, which takes three different values: data, recovery and system. Each setting modifies the adjacency measure used by Exadata’s secondary extent placement algorithm.

Using these settings, the likelihood of a double failure (in a “normal” redundancy ASM disk group) resulting in data loss is reduced. Enabling the attribute requires a rebalance of the disk group, so it makes sense to perform this change outside of busy times.
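
As a sketch (the disk group name is hypothetical; the change requires a rebalance to redistribute existing extents):

```sql
-- Mark the RECO disk group as holding recovery data...
ALTER DISKGROUP reco SET ATTRIBUTE 'content.type' = 'recovery';
-- ...then rebalance so secondary extents are redistributed accordingly.
ALTER DISKGROUP reco REBALANCE POWER 4;
```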

The explanation of these features would require a dedicated blog, so if you would like to learn more please get in contact with us.


Automatic Hard Disk Scrub and Repair

Also in a recent Exadata release, Oracle introduced an automatic hard disk scrub and repair function, where disks are scanned for latent corruptions.

This I/O scrub operation only happens when the disks are idle, avoiding any impact on database workloads. Even so, it is a good idea to schedule the scrub for the time when the system is least busy.

Better Security for Cell Servers

With the latest Exadata release at the time of writing, it is possible to completely disable ssh on the storage cells. The obvious question is then: how do you access them? Oracle, as the tradition goes, provided another utility with this release, ExaCLI, which is invoked from the compute nodes and makes it possible to run cellcli commands without direct access to the storage servers.

This utility can also be used to create windows during which ssh to the storage servers is temporarily enabled, for times when it may be needed for patching or maintenance operations.
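
A sketch of how this works, assuming ExaCLI's documented login options (the cell host and user names below are hypothetical):

```
# From a compute node, connect to a storage cell without ssh
exacli -l celladministrator -c exacell01
# At the exacli prompt, run familiar cellcli-style commands, e.g.:
LIST CELL DETAIL
LIST GRIDDISK
```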

For more information on these and many more Exadata features, contact Cintra today!

Written by Vineet Sachdeva, Oracle DBA, Cintra India – October 2015

Oracle Solaris: Engineered For The Cloud

Whether you are building your own private cloud or want to make use of a public cloud service, the de facto standard has always been Linux. But what do you do if your applications are only certified on SPARC platforms, or your in-house skill set revolves primarily around Solaris?

With the latest Oracle Solaris release, we now have a valid alternative to Linux on both SPARC and x86 systems.

All the following features are available in Oracle Solaris 11.2 to maximize your hardware usage and build an ultra-dense environment:

  • Virtualization:
    • Oracle Solaris Zones
      • Provide native low overhead OS virtualization, with high application isolation and resource management.
    • Kernel Zones
      • Provide zones with independent kernel versions and patch levels, secure live migration, and live reconfiguration of CPU and memory resources.
  • OVM for SPARC
Formerly known as LDoms (Sun Logical Domains), OVM for SPARC is a virtualization feature that allows you to define, install and run a Solaris VM using the SPARC hypervisor embedded in the system firmware. Unlike Oracle Solaris Zones, an OVM for SPARC guest runs its own operating system, with a different Oracle Solaris release (10 or 11) or kernel patch level.
  • Dynamic Domains
    • Available on SPARC Enterprise M-Series systems, this technology provides a hardware partitioning of the system. CPU cores, RAM and I/O resources are allocated by a system controller (not by a hypervisor like in OVM for SPARC), and a separate instance of the OS is placed on the newly created domain.
  • OpenStack distribution
A complete distribution of OpenStack, the well-known software for controlling a cloud infrastructure, is now incorporated into Oracle Solaris.
  • Software Defined Network
The integrated software-defined networking technology has been expanded to provide application-driven, multi-tenant cloud virtual networking, with the introduction of the Elastic Virtual Switch and VXLANs.
  • ZFS Storage Pools
Even though ZFS has been part of Solaris since Solaris 10 update 2, it is worth noting that it is tightly integrated with the other OS features mentioned above and provides:
Effectively unlimited capacity (256 zebibytes, i.e. 2^78 bytes)
      • Encryption
      • Compression
      • Replication
      • Snapshots
      • Cloning
      • Excellent storage performance through flash-aware, tiered storage pools
  • IPS (Image Packaging System)
In previous Oracle Solaris releases, SVR4 packages were used to install software onto a system, and a different set of commands was used to apply patches and update the system.
    • IPS is an integrated solution that helps to automate and ease the complexity of managing the currently-installed and new software on a Solaris server, including patching.
    • Built upon a network-centric and efficient approach with automatic software dependency-checking and validation, IPS can easily and reliably install or replicate an exact set of software package versions across many different client machines, and provide a much clearer understanding of any differences between software versions installed across systems.
  • Engineered for Oracle workloads
    • Oracle Solaris is now optimized for Java and Oracle Database, providing specific benefits for running Oracle-on-Oracle.
  • Security and compliance
    • An integrated compliance monitoring and reporting system is now available. It is standards-based (XML) and built on the SCAP ecosystem (XCCDF, OVAL, and SCE), which easily integrates with enterprise compliance management programs.
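To give a flavour of how these pieces fit together in practice, here is a minimal, hedged sketch of creating a native zone and installing software with IPS on a Solaris 11.2 host; the zone name, zonepath and package name are examples, not taken from this article:

```shell
# Configure, install and boot a native Solaris zone:
zonecfg -z webzone "create; set zonepath=/zones/webzone"
zoneadm -z webzone install
zoneadm -z webzone boot

# Install software with IPS; dependencies are resolved automatically:
pkg install web/server/apache-24

# Bring the whole installed image up to date in one step:
pkg update
```

These commands must be run with root privileges on a Solaris system; on any other OS they will simply not exist.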

Despite the fact that back in 2010 many people were concerned about the death of Solaris after the Sun acquisition and the change in Solaris licensing (moving away from open source), Oracle Solaris today can be considered one of the most innovative enterprise-class Unix platforms available; as Oracle says, it is truly an “enterprise-grade cloud platform.”

Written by Daniel Procopio, Systems Architect – October 2015

This website and its content are copyright © Cintra Software and Services 2011. All rights reserved.