Oracle Database Appliance X6-2 S/M: Optimized and Cost-effective Database Platforms

The Oracle Database Appliance (ODA) X6-2S and X6-2M are two of the latest ODA models released by Oracle. They offer greater simplicity, optimization and flexibility, and support several types of application workloads, deployment solutions and database editions.

In addition to the various customizable solutions that the ODA X6-2S and X6-2M offer, the configuration options for the ODA are simpler and more adaptive, catering to customer needs and application requirements. (more…)

WebLogic Thread Monitoring: Your Finger on the Pulse of the Middle Tier

Introduction

WebLogic thread monitoring is essential for diagnosing any slow or hanging application server issue. All WebLogic (middleware) requests are processed by a thread pool, which makes it a key place to check for problems. In Oracle DBA terms, it is very similar to checking the database's active sessions.

WebLogic server architecture

A WebLogic domain has a centralized AdminServer that allows deployment and configuration of resources such as JNDI configuration, JMS (Java Message Service) and data sources (database connection pools). A domain can have multiple managed servers. Managed servers can run independently, and they can also be clustered.

[Image: BLOG Weblogic 1]

Each WebLogic server processes different classes of work in different queues, based on priority and ordering requirements, and in a way that avoids deadlocks.

A WebLogic server uses a single self-tuning thread pool in which all types of work are executed. WebLogic prioritizes work based on rules and run-time metrics, and it automatically adjusts the number of threads: the pool monitors throughput over time and, based on that history, increases or decreases the thread count of the managed server.

We can monitor the threads of each managed server and the AdminServer, which is an effective way to monitor the workload on WebLogic servers.

The Admin Console

The most common mechanism for monitoring WebLogic is the Administration Console, which runs on the AdminServer. From the console we can administer each managed server in the domain: startup/shutdown, configuration of various properties, and so on.

We can log in using a WebLogic username and password. The administrator superuser is usually the weblogic user, created when the domain is created. Using this account we can create other accounts, either administrators or lower-privileged users for monitoring purposes only.

After logging in you will see the following screen. To check the AdminServer or a managed server, click Servers below Environment (or, on the left-hand panel, expand Environment and click Servers).

[Image: BLOG Weblogic 2]

Once you click Servers you will see the following screen, showing the AdminServer and all the managed servers of the domain. In this example the managed servers are grouped in four different clusters. Managed servers can run on a single machine or on different machines. Click on the managed server you want to monitor.

[Image: BLOG Weblogic 3]

Thread monitoring

Once you have selected a server, click the Monitoring tab and then the Threads tab, and you will see the following screen.

[Image: BLOG Weblogic 4]

On the screen above, you can page through further threads by clicking Next (in the top right-hand corner of the table: Self-Tuning Thread Pool Threads).

You can customize the table to display all of the threads on a single page by clicking Customize this table and selecting a Number of rows displayed per page value from the drop-down. You can also select or de-select the columns displayed in the table.

The console will remember these settings for the specific managed server the next time you log in.

[Image: BLOG Weblogic 5]

Thread state

To investigate slow application response or a hanging application, you should be most interested in Hogging and Stuck threads.

Let's review the thread states.

Active: The number of active execute threads in the pool (shown in the first column of the Self-Tuning Thread Pool table).

Total: Total number of threads.

Idle: Threads that are ready to pick up new work.

Standby: Threads held in reserve in the standby pool; they are activated when more threads are needed.

Hogging: A thread held by a request for longer than the normal (self-tuned) execution time. A hogging thread will either return to the pool or be declared stuck after the configured timeout.

Stuck: A thread that has been working on a request for longer than the configured stuck thread maximum time. You can think of a stuck thread as a long-running job. Once the condition causing a stuck thread is cleared, the thread returns to the pool.

The managed server's Health status will show as Warning when there are stuck threads in the thread pool.
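The behavior just described can be sketched as a couple of tiny helpers (illustrative only; the function names are ours, not WebLogic's):

```python
def health_status(stuck_count):
    """Any stuck thread flips the managed server's Health to Warning,
    mirroring the console behavior described above."""
    return "Warning" if stuck_count > 0 else "OK"

def classify_thread(elapsed_seconds, stuck_max=600):
    """Classify a single request by elapsed time. 600 seconds is the
    default StuckThreadMaxTime; a request past it is reported STUCK."""
    return "STUCK" if elapsed_seconds > stuck_max else "RUNNING"
```

A request busy for 700 seconds against the default timeout would be classified STUCK, and one such request is enough to put the server in Warning.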

To identify potential issues or to check Stuck threads, there are various methods:

In the Admin console, sort the thread pool table on the Stuck column in descending order, or check the threads whose Stuck column is True and look at the Current Request column and the thread ID. Using the thread ID, you can check what a thread was doing before it became STUCK.

Full details of a stuck thread appear in the managed server log file on the server node. In the log, search for the string 'BEA-000337' or the word 'STUCK' together with a timestamp. The log entry describes the request and the potential or current problem. The advantage of the log file is that we can also check historical STUCK thread occurrences.
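As a quick illustration, scanning a managed server log for these markers takes only a few lines of Python (the sample log lines below are made up for demonstration):

```python
import re

STUCK_MARKER = re.compile(r"BEA-000337|STUCK")

def find_stuck_entries(log_lines):
    """Return (line_number, text) pairs for log entries mentioning
    the stuck-thread message id BEA-000337 or the word STUCK."""
    return [(n, line) for n, line in enumerate(log_lines, 1)
            if STUCK_MARKER.search(line)]

sample = [
    "<May 1, 2017> <Info> <Server started>",
    "<May 1, 2017> <Error> <WebLogicServer> <BEA-000337> "
    "<[STUCK] ExecuteThread: '12' has been busy for 612 seconds>",
]
print(find_stuck_entries(sample))
```

In practice you would feed this the lines of the managed server log file and review each hit's Current Request details.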

To understand more about STUCK threads, you can take a thread dump by clicking the "Dump Thread Stacks" button on the thread monitoring page, which gives you a complete dump of all threads in the pool. You can then locate the STUCK threads in the dump and examine their stacks.

Summary

By monitoring WebLogic servers through the Admin Console or by checking log files, you can identify potential issues or ongoing problems behind a slow or hanging application.

To learn more, or for assistance with any WebLogic issues, contact Cintra today!

Written by Dilip Patel, Senior Oracle Apps DBA, Cintra UK – May 2017


ZFS Storage: All That Flash, All The Time

Back in February Oracle announced its new ZS5 Storage Appliance, designed to deliver high performance across the board for all applications.

Oracle has built the ZFS Appliance from the beginning on a powerful SMP design that lets all the CPU cores in its controllers work efficiently. The appliance runs a multithreaded OS (Solaris), and each controller has a very large DRAM cache, which gives it great performance. The new ZS5 models now add all-flash storage, enabling the Oracle ZFS Appliance to power very demanding applications with fast query responses and rapid transaction processing times.

The new ZS5 Storage Appliance comes in two models, the ZS5-2 and the ZS5-4. Both use the latest 18-core Intel processors, and both can be configured with a large amount of DRAM: the ZS5-2 scales to 1.5TB of DRAM and the ZS5-4 maxes out at 3TB, with 36 or 72 Intel CPU cores respectively to make the ZFS really scream. Both models can be configured with all-flash SSD storage. Each disk shelf holds 20 or 24 3.2TB 2.5-inch SSDs. The ZS5-2 can connect to 16 storage shelves, for a total of 1.2 petabytes of all-flash storage, and the ZS5-4 can connect to 32 shelves, for a total of 2.4 petabytes.
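Those headline capacity figures can be sanity-checked with quick arithmetic (raw capacity of fully populated shelves, before any RAID overhead or compression):

```python
def max_flash_capacity_tb(shelves, ssds_per_shelf=24, ssd_tb=3.2):
    """Raw all-flash capacity in TB for fully populated disk shelves."""
    return shelves * ssds_per_shelf * ssd_tb

# ZS5-2: 16 shelves -> ~1.2 PB; ZS5-4: 32 shelves -> ~2.4 PB
print(max_flash_capacity_tb(16), max_flash_capacity_tb(32))
```

Sixteen shelves of 24 x 3.2TB drives come to 1,228.8TB, which is the quoted ~1.2PB figure.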

The ZFS Appliance has built-in secret sauce designed to work with Oracle's infrastructure stack, also known as the "Red Stack". This speeds up applications and databases, specifically 12c databases, while delivering efficiency and storage consolidation. Oracle achieved this by having its storage engineers work with its database engineers to design the storage to take advantage of the software features built into the Oracle database. The ZFS Storage Appliance is truly an engineered system.

A great new feature is that the Oracle ZFS Storage Appliance easily integrates into the Oracle Cloud, making cloud storage available to all users at the click of a button.

From the beginning the ZFS engineers created Hybrid Storage Pools (HSPs), made up of DRAM, read SSDs, write SSDs and spinning SAS hard drives. Data moved in and out of DRAM, through the read or write SSDs, and onto the SAS drives. Today the ZFS engineers have created the Hybrid Cloud Pool (HCP), which works much the same way but extends into the Oracle Public Cloud.

[Image: BLOG ZFS Flash 2]

The best part of this Oracle Cloud integration is that it's free! Another benefit is that the new ZFS Appliance eliminates the need for external cloud gateways: everything is built into the ZFS controllers.

Truth be told, Oracle has been using the ZFS Storage Appliance in its own cloud since it purchased Sun Microsystems seven years ago.

And finally, as you evolve your storage model, you’ll want to extend what you’re doing on-premises to the public cloud.  And, ideally, you’d do this easily and seamlessly with the new ZFS ZS5 Storage Appliance.

Written by Chris Brillante, Sr. Solutions Architect, Cintra NYC – May 2017


Low Risk SPARC Refresh: A Primer on SPARC Solaris Migrations

These days every IT department is familiar with x86 server virtualization, a useful strategy to reduce hardware footprint, provisioning time and downtime, and to increase flexibility and reliability.

But few know that the same paradigm can be also applied to SPARC servers.

Taking advantage of the latest SPARC server platforms and consolidating legacy SPARC systems, without changing applications, onto a more cost-effective, scalable, agile and flexible infrastructure is quite easy and can be achieved with a zero-risk approach.

How?

By virtualizing (P2V) legacy SPARC servers into Oracle Solaris Zones or Oracle VM for SPARC LDOMs, so that all system peculiarities (system ID, configurations, software installations, etc.) are captured and available on the new servers. In this way one of the most complex and error-prone processes, a full OS and software reinstallation and reconfiguration, is skipped, and the business risk associated with the migration is dramatically reduced.

Consider that all systems deployed from February 2000 onwards can be virtualized and run on the latest SPARC systems.

Whether a system should be migrated into a Solaris Zone or an LDOM is determined by the Solaris version of the legacy system.

Basically:

  • If the legacy system is running Solaris 8, Solaris 9 or Solaris 10 9/10, it can be virtualized into a Solaris Zone
  • If the legacy system is running Solaris 10 1/13 or Solaris 11, it can be virtualized into a Solaris Zone or an LDOM
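These rules are simple enough to express as a lookup. A sketch (the release strings are illustrative shorthand):

```python
def p2v_targets(solaris_release):
    """Map a legacy Solaris release to its possible P2V targets,
    following the two rules above."""
    zone_only = {"8", "9", "10 9/10"}
    zone_or_ldom = {"10 1/13", "11"}
    if solaris_release in zone_only:
        return ["Solaris Zone"]
    if solaris_release in zone_or_ldom:
        return ["Solaris Zone", "LDOM"]
    raise ValueError(f"release {solaris_release!r} not covered here")
```

For example, a Solaris 8 host maps to a Solaris Zone only, while a Solaris 11 host can go either way.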

It is also possible, with Oracle VM for SPARC and Oracle Solaris Zones, to create anything from very simple to quite complex virtual environments spanning a wide range of Solaris versions.

For example, the picture below shows the various levels of virtualization technology available on SPARC M-series servers:

[Image: BLOG - Solaris]

From the bottom to the top, we can identify:

  • The server platform.
  • PDOMs, the first level of virtualization, available only on high-end server platforms. Formerly known as Dynamic Domains, they are electrically isolated hardware partitions that can be powered up and down without affecting any other PDOM.
  • LDOMs. Each PDOM can be further virtualized using the hypervisor-based Oracle VM for SPARC, or can natively run Oracle Solaris 11. Each LDOM runs its own Oracle Solaris kernel and manages its own physical I/O resources. Different Oracle Solaris versions, at different patch levels, can run in different LDOMs on the same PDOM.
  • The next virtualization level is Oracle Solaris Zones (formerly called Oracle Solaris Containers), available on all servers running Oracle Solaris. A zone is a software-based approach that virtualizes compute resources by enabling the creation of multiple secure, fault-isolated partitions (zones) within a single Oracle Solaris instance.

While similar virtualization technologies from other vendors cost several thousand dollars per year, PDOMs (on high-end SPARC servers), LDOMs (on every SPARC system) and Oracle Solaris Zones come at no cost with a valid support contract on SPARC servers.

In conclusion, a technology refresh of SPARC systems can:

  • significantly reduce the risk of running business critical applications on old hardware
  • increase the security – thanks to Silicon Secured Memory features
  • improve overall system performance
  • be accomplished with minimal effort and a zero risk approach

For more information on how this strategy could benefit your business, contact Cintra today!

Written by Daniel Procopio, Director of Systems, Cintra Italy – May 2017

Oracle 12c Multitenant Option – A Database Architect’s Perspective

The Future

Do you wonder what the future holds for the principal Oracle Database platform in terms of its RDBMS Architecture? The Oracle Multitenant Architecture is the platform for the future.

Putting this plainly, the current non-multitenant architecture (aka the legacy architecture) will ultimately be desupported, so if you're serious as an Oracle DBA, you had best get on board! But rest easy for now; the 12cR1 Oracle Database release still supports both the multitenant and legacy configurations.

Key Features

Just in case you were still wondering what it's all about, and have not dug in and tried to decipher the sprawl of information a quick Google search returns, here is a quick heads-up on the key elements of the multitenant architecture you need to be aware of.

The Multitenant option is only available on Oracle 12c with a database COMPATIBLE parameter of 12.0.0.0 or greater. This value also applies to each planned Pluggable Database (PDB) being deployed within it.

You can run Oracle Container Databases (CDBs) and Non-CDBs on the same Server and can even share the same ORACLE_HOME.

A Multitenant database provides the capability for a database to act as a container. Two kinds of entity are involved:

  • A Pluggable Database (PDB) is a self-contained database that can be plugged into (and unplugged from) a Container Database. It contains all the usual object types you would find in a standard legacy database.
  • A Container Database (CDB) can contain zero, one or more PDBs. A CDB consists of a root container as well as a seed PDB, which is READ ONLY by default.

Then again, you can find this information anywhere; the question is, what are the key architectural considerations you need to focus on as a DBA?

Structural Differences

Essentially, we need to first look at the structural differences between the Legacy and Multitenant architectures.

A container database can exist on its own with or without additional PDBs. Putting aside RAC Architectures, a legacy Oracle database is by design associated with one Oracle instance. In the same context, an Oracle CDB (multitenant Container Database) is associated with one instance. Therefore you have a 1:1 relationship between a database and instance.

However, a key variant of the container database architecture is that the CDB and all its associated PDBs share the same Oracle instance. Hence you have a many-to-one (n:1) database-to-instance relationship:

  • n = CDB and all its associated PDBs

In the context of Oracle High Availability RAC architectures, the same statement would translate to an Oracle database being associated with "one or more Oracle instances". Hence an Oracle CDB (and its associated PDBs) is associated with one or more RAC instances in an n:m relationship:

  • n = CDB and all its associated PDBs
  • m = Oracle instances belonging to the same RAC infrastructure

Hence in a multitenant environment, for one CDB with one or more PDBs, we have the following common/shared entities residing in the root container:

  • SGA Memory Structures
  • Control files (at least one)
  • Background processes
  • Online Redo logs
  • SPFILE
  • Oracle Wallet
  • Alert Log
  • Flashback logs
  • Archived Redo logs
  • Undo tablespace and its associated datafiles
  • Default Database Temp Tablespace and files (however, each PDB can also have its separate Temp Tablespace)

The Oracle data dictionary metadata is principally stored in the root container at the CDB level; each PDB's dictionary contains links to these root-level object definitions.

Deployment Architectural considerations and benefits

If you need to deploy a CDB with multiple PDBs and are wondering what to consider and how to optimize, some items for consideration (while not a complete list) are:

  • Appropriate unique PDB names, which translate into service names for access
  • Resource management and delineation of resource consumption for each PDB
  • Sizing and impact of each PDB on the shared redo logs, temp tablespace and undo tablespaces
  • Understanding that all Oracle upgrades are performed at the CDB level and impact all associated PDBs
  • A strategy to leverage the new security features related to separation of users and responsibilities between and across CDBs and PDBs
  • A strategy for consolidated performance tuning
  • Communication to the business of the benefits: reduced platform cost and ease of platform management
  • A plan for consolidation, especially for same-version databases with small storage and memory footprints
  • Consideration of the Oracle Data Guard configuration for the related PDBs

Limitations and Restrictions

In spite of all the benefits, you know what they say: with every good thing comes … Below are a few restrictions for RMAN on PDBs (relative to the known operations of legacy databases).

RMAN Restrictions for PDBs

  • Tablespace point-in-time recovery
  • Duplicate database
  • Point-in-time recovery
  • Flashback operations from RMAN
  • Table recovery

Summary

So now you have the basics of what you need to be aware of, and to explore from an architectural standpoint. You can now dig into each section to obtain more information and see how to further optimize within the context of your requirements, resource availability and deployments to meet with your planned application needs.

At Cintra, as Database Architects, these are just a subset of the kind of considerations we delve into when looking at deploying customer databases. Get in touch if you would like to discuss how the Multitenant Option might be of value to your business.

Written by Joseph Akpovi, Lead DBA, Cintra NY – May 2017

Cintra Data Masking and Subsetting Solution

Most of us know that Oracle has an option that can be added to Enterprise Edition databases called the Data Masking and Subsetting Pack. This is a powerful solution, typically used to mask data when it is exported from a Production database and loaded into a Non-Production environment, so that our Developers do not have access to sensitive data.

However… what happens if we’re running on an environment with 120 CPU cores, and the cost of adopting this software option is prohibitive? But our auditors still require it? And how about if we need a similar solution for SQL Server? Are we out of luck?

Fortunately not! Here at Cintra, we have developed a comprehensive alternative which is almost as feature-rich as the license options, but does not require any additional licensing at all! Here’s a short summary of the solution and how we deliver it.

Overview
Data Masking and Subsetting is one of the architected solutions in Cintra's array of blueprinted solutions. At Cintra, we pride ourselves on partnered customer relationships, working closely with our many satisfied clients to jointly identify and implement the best solution for their unique business needs.

The Cintra Data Masking and Subsetting tool is a customizable, rule-based solution for de-sensitizing column data at source as it is copied from a production database to non-production databases. The software intelligently hides sensitive column values such as social security numbers, credit card numbers, names, addresses and phone numbers from unauthorized access, thereby preventing data breaches. The masked values are repeatable and realistic, so the application can function as if the data were not masked.

The data is masked using methods that ensure repeatability from one run to another, providing stable development and test environments. The cardinality and distribution of masked values are maintained to help ensure that the application functions and performs as if the data were not masked. Masking is applied carefully to preserve significant characters in intelligent values, such as the first three characters of social security numbers.
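To make the repeatability point concrete, here is a minimal sketch of deterministic masking. This is not Cintra's actual implementation; the key name and the format rules are our assumptions for illustration:

```python
import hashlib

def mask_ssn(ssn, secret="demo-key"):
    """Mask an SSN deterministically: the same input always yields the
    same output (repeatable runs), and the first three digits are kept
    as the 'significant characters'. 'secret' is a hypothetical
    per-environment key; rotating it changes the whole mapping."""
    area = ssn[:3]                    # preserved significant prefix
    tail = ssn.replace("-", "")[3:]   # the six digits to be masked
    digest = hashlib.sha256((secret + tail).encode()).hexdigest()
    digits = "".join(str(int(c, 16) % 10) for c in digest[:len(tail)])
    return f"{area}-{digits[:2]}-{digits[2:]}"
```

Because the mask is a pure function of the input and the key, re-running the copy process yields identical masked values, keeping test environments stable across refreshes.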

Data Masking and Subsetting Service and Solution
Cintra’s Security Solution Architect will jointly work with you to understand, define, and document your unique business security needs to optimally match the business needs with that of the Cintra Data Masking and Subsetting tool. The following high level interaction will occur between the Cintra Security Solution Architect and your team:

  • Business Security Requirements Workshops
    • Full Cintra team of Security Solution Architects
    • Facilitated sessions with Business experts to identify data masking requirements
    • Focused session with Security Managers to understand business direction / growth
  • Identify current processes to participate in Data Masking
    • Perform an initial inventory and assessment of each production data copy
    • Analysis of the level of customization required
    • Analysis of migration options to each non-production environment
  • Business case and cost savings of the new architecture
    • Provide a fully costed approach for the appropriate selected solution
    • Analysis of potential cost savings and reduction on alternatives

Proven Success with Cintra World Class Security Architecture Services
Cintra Security Solution Architects have delivered security architecture services for 20 years, resulting in many successful deployments.  Cintra is at the forefront of evolving and refreshing security architectures to ensure that they remain secure, agile and cost-effective.

Written by Jack Augustin, Database Architect, Cintra TOLA – May 2017

Modern Data Security: Protecting Your Most Valuable Assets

Information security has always been a hot topic in the IT industry, however in recent months the focus on this area has further increased. Whether due to national events such as the alleged hacking of the US election by foreign parties, international crises such as the WannaCry ransomware virus, or personal information attacks such as the latest PopcornTime Ransomware attack (which offers compromised individuals the chance to get their data back if they infect other parties with the virus), we’ve never had more data at greater risk than we do at the time of writing.

In general, we're seeing the number of security breaches increase year-on-year, with a clear uptick in breaches motivated by financial gain or espionage. While traditional hacking and malware attacks rise steadily, so do more recent phenomena such as social media attacks and breaches carried out by exploiting senior individuals in organizations through their personal information. The variety of devices accessing sensitive data also broadens the threat landscape, with mobile devices becoming responsible for more and more breaches over time.

Due to these factors, the average cost of a data breach has risen to $4 million (a 29% increase over the last few years) and the cost per compromised data record has risen to a shocking $158. Clearly, it is now not a question of whether organizations can afford enterprise-class security, and more a question of whether they can afford not to. (more…)

Oracle 12c Multitenancy: Pluggable Database Concepts

A few years ago, Oracle released a multitenant database architecture that featured a concept called Pluggable Databases. The basic idea behind this architecture is a set of containers. A multitenant database is comprised of:

  • One root container which acts as the main repository for metadata and common users
  • One seed pluggable database that acts as a template
  • One or more pluggable databases based on the seed template, created as needed

In the past, a whole new database with all its base resource allocation requirements was created when there was a demand for it. Pluggable databases can be used to remove this complexity while still giving end users the experience of having a separate environment.

Another advantage of the pluggable database is its portability. Unplugging a pluggable database is as simple as issuing an "ALTER PLUGGABLE DATABASE … UNPLUG INTO" command to create an XML file with the metadata describing the physical structure. The data files can then be moved to another container database on the same server or on a different server. Once the data files and XML file are in place, issuing a "CREATE PLUGGABLE DATABASE" command with the appropriate CREATE_FILE_DEST and SOURCE_FILE_NAME_CONVERT clauses "plugs" the database in for immediate use.
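As a sketch, the two statements can be generated programmatically. The PDB name and paths below are hypothetical, and the statement shapes follow the description above:

```python
def unplug_sql(pdb, xml_path):
    """SQL to unplug a (closed) PDB, writing its metadata to XML."""
    return f"ALTER PLUGGABLE DATABASE {pdb} UNPLUG INTO '{xml_path}'"

def plug_sql(pdb, xml_path, file_dest):
    """SQL to plug a PDB into a CDB using the XML metadata file,
    placing the copied data files under file_dest."""
    return (f"CREATE PLUGGABLE DATABASE {pdb} USING '{xml_path}' "
            f"CREATE_FILE_DEST = '{file_dest}'")

print(unplug_sql("hrpdb", "/u01/xml/hrpdb.xml"))
print(plug_sql("hrpdb", "/u01/xml/hrpdb.xml", "/u02/oradata"))
```

You would run the first statement in the source CDB, move the files, then run the second in the target CDB.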

Data migration using this concept eliminates several of the pitfalls of more traditional methods. For instance:

  • All metadata is contained within the database so no users, objects or code will be lost
  • Plugging a database into a container on the same server is nearly instantaneous, eliminating the time previously needed for RMAN backup/recovery or Data Pump export/import
  • Pluggable databases are generally smaller so moving the data files across the network to the new location can be very quick

This architecture also makes it possible to create and manage databases according to logical boundaries. Pluggable databases can be created to separate data into specific areas for:

  • Applications
  • User communities
  • Client specific environments
  • Life cycle stages
  • Regulatory compliance requirements

It is important to demystify the multitenant architecture by viewing it as a tool for secure and successful data separation and migration. Experimenting with this feature will no doubt convince anyone of its value in meeting today's fast-paced demands for more databases.

Written by Michael Paddock, Principal Oracle DBA, Cintra Texas – May 2017

Oracle Public Cloud Bursting: Benefits and Considerations

Some months ago, I migrated a customer's environment to the Oracle Public Cloud. It was a very straightforward migration: a single-instance database hosting many applications, with over 300 users accessing them at any point in time. The Oracle Cloud subscription chosen by the customer was a non-metered, hosted Oracle Platform as a Service environment, which includes Oracle Database Cloud Service, Oracle Database Backup Cloud Service and a host of other cloud tools, such as the DBaaS monitor, which gives you a view of what is going on in your environment (CPU and memory utilization, Real Time SQL Monitoring and so on).

In addition, you can use the cloud console to perform a variety of self-service functions. Backups were configured to both cloud storage and local storage using Oracle Database Backup Cloud Service. As mentioned, this is a non-metered service; to set the context for this post, a brief description of the service subscription offerings available from Oracle Cloud is appropriate.

Basically, Oracle offers two cloud service subscription types: metered and non-metered. With a metered service you are charged based on actual usage of the service resources, hourly or monthly, while a non-metered service is essentially a monthly or annual subscription for a fixed service configuration which you typically cannot change.

The customer wanted the flexibility to provision additional resources when required, as during their seasonal peak periods, which can last one to two months twice a year. During these periods more users use the system heavily, and being able to add capacity gives the customer the needed flexibility. The non-metered service is more cost-effective for this customer and addresses most of their resource requirements, but it is not flexible, as the configuration cannot be changed.

When Oracle announced the bursting feature for non-metered services in June 2016, it was just what my customer had been waiting for, and we put the feature straight to the test. Using it is also straightforward: log in to the cloud console, select a new compute shape and click "Yes, Scale Up/Down Service" to apply the changes.

It worked quite well: within a few minutes the system was back up and running with the additional capacity, with no need to resize database components such as the SGA and PGA; these were already sized appropriately for the new compute shape, though if you have specific sizing requirements they can be adjusted. So far so good: the bursting feature seems great and cost-effective, as it gives the flexibility to spend on extra capacity only when required.

However, a few minutes after scaling up I received the notification below via email:

“Your services are suspended due to exceeding resource quota. New instances can’t be created and existing instances will not be able to consume more of the resources that have exceeded the quota”.

Suspended? Really? What for? As I wasn't quite sure what was going on, I opened a service request with Oracle Support detailing what had been done, and they came back with:

“There is a breach in your quota services. Please do free up resources to resume the suspension”.

I then referred them back to their own documentation for the June 2016 update (https://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbn/index.html#CSDBN-GUID-4696D271-7B1A-43B5-9EF8-8C8179CAC1C9), from which I have extracted the key information below:

===========================================================

Changes to Oracle Database Cloud Service non-metered subscriptions

If you have a non-metered subscription to Oracle Database Cloud Service, you can now use additional capacity above your non-metered subscription rate (also referred to as “bursting”). You will be charged per hour and billed monthly in arrears for this increased capacity, using the “Pay as You Go” model. Pricing for this increased capacity will be based on the current Per Hour list price as shown on the Pricing tab at https://cloud.oracle.com/database .

There is therefore a clear need to closely monitor usage even of non-metered cloud solutions (whether on Oracle's Cloud or any other), to ensure that bursting, while useful from a business point of view, is budgeted and accounted for from a financial point of view as well.
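Such a monitoring check can be as simple as comparing the running shape against the subscribed one and estimating the accrued "Pay as You Go" charge. A sketch (shape names and rates below are made up):

```python
def bursting_alert(current_shape, subscribed_shape,
                   hours_bursting, hourly_rate):
    """Return an alert string if the environment is running a shape
    larger than its non-metered subscription covers, else None."""
    if current_shape == subscribed_shape:
        return None
    overage = hours_bursting * hourly_rate
    return (f"Bursting on {current_shape}: "
            f"~${overage:.2f} accrued this billing period")
```

Feeding this from a scheduled poll of the cloud console (or API) lets finance see overage building up day by day instead of in next month's invoice.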

Here at Cintra, we have developed a comprehensive set of monitoring and alerting scripts for all Public Cloud environments, to ensure that you can keep a finger on the pulse of your usage, and react accordingly if it needs to change. Contact us now to find out more.

Written by Hakeem Ambali, Oracle DBA, Cintra UK – May 2017

Oracle Compute Cloud Service (IaaS) New Features

The latest versions of Oracle's Infrastructure as a Service (IaaS) Compute Cloud Service have added a slew of features and functionality that keep improving the capabilities and performance of the service.

The most notable additions in the latest releases are:

High I/O shapes featuring NVMe SSD disks (February 2017)

It is now possible to use NVMe SSD disks as nonpersistent data disks attached to IaaS instances.

NVMe disks are initially available only in some sites. You can check availability by listing the possible shapes in the Create Instance wizard of your IaaS Cloud Console, or by using the GET /shape/ method of the Compute API. For more information, see REST API for Oracle Compute Cloud Service (IaaS).

The size of the disk is determined by the shape you select:

Shape     OCPUs   vCPUs   Memory (GB)   Size of SSD Disk (GB)
OCIO1M    1       2       15            400
OCIO2M    2       4       30            800
OCIO3M    4       8       60            1600
OCIO4M    8       16      120           3200
OCIO5M    16      32      240           6400
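The table translates directly into a lookup; a small helper like this (illustrative only) could pick the smallest shape whose NVMe disk fits a given requirement:

```python
# Shape specs from the table above (memory and SSD sizes in GB).
HIGH_IO_SHAPES = {
    "OCIO1M": {"ocpus": 1,  "vcpus": 2,  "memory": 15,  "ssd": 400},
    "OCIO2M": {"ocpus": 2,  "vcpus": 4,  "memory": 30,  "ssd": 800},
    "OCIO3M": {"ocpus": 4,  "vcpus": 8,  "memory": 60,  "ssd": 1600},
    "OCIO4M": {"ocpus": 8,  "vcpus": 16, "memory": 120, "ssd": 3200},
    "OCIO5M": {"ocpus": 16, "vcpus": 32, "memory": 240, "ssd": 6400},
}

def smallest_shape_for(ssd_gb):
    """Smallest high-I/O shape whose NVMe disk holds ssd_gb, or None."""
    for name, spec in sorted(HIGH_IO_SHAPES.items(),
                             key=lambda kv: kv[1]["ssd"]):
        if spec["ssd"] >= ssd_gb:
            return name
    return None
```

Note the pattern in the table: each step up doubles OCPUs, memory and SSD size.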


Building your own Windows images (April 2017)

You can now use your own license to build private Windows images and add them to your Oracle Compute Cloud Service account. You can use these images to create new Windows instances in Oracle Compute Cloud Service.

Using the REST API to set up a VPN connection using VPN as a Service (VPNaaS) (April 2017)

You can set up a VPN connection between your data center and IP networks in your Oracle Compute Cloud Service site using VPN as a Service (VPNaaS). VPNaaS uses IPSec-based tunnels to carry encrypted traffic between your data center and your instances in Oracle Compute Cloud Service. You can set up a VPNaaS connection using the REST API.

Shutting down and restarting instances

Instance life cycle management operations have been enhanced to allow you to shut down an instance that uses a persistent bootable storage volume. Earlier, you could delete an instance by stopping the instance orchestration. This would stop all the instances and other resources defined in the orchestration. Now you can shut down an instance without changing the state of other objects in the orchestration. You can restart the instance later. The instance resumes with the same configuration and data.

Orchestrations v2 REST API  (April 2017)

With this release, Oracle introduced an improved version of the orchestration framework that provides a modular and flexible approach to creating and managing multiple resources through a single JSON file.

The new framework improves on the previous version by adding these features:

  • The ability to create references across interdependent objects, so that entire hierarchies can be created or restored without manually ensuring that dependencies are satisfied.
  • The ability to update some attributes of objects while the orchestration is running
  • The ability to manage the state and lifecycle of each instance in an orchestration, without disturbing the state of other resources

A full comparison between the previous version of the orchestration framework and the new one can be found here: About Orchestrations v2

Persistent SSD block storage (May 2017)

In some sites, SSD block storage is now available. You can attach these persistent SSD block storage volumes to instances as either boot disks or data disks.

You can check the detailed description of the new features here: Oracle Cloud What’s New for Oracle Compute Cloud Service (IaaS)

For more information on how the Oracle Public Cloud might benefit your business, contact Cintra today.

Written by Mattia Rossi, Cloud Architect, Cintra Italy – May 2017

© This website and its content is copyright of Cintra Software and Services 2011. All rights reserved.