Architecting comparable IaaS workloads across Oracle, AWS, Azure & Google
When arranging cost comparisons for customers across multiple cloud vendors, it became apparent that the vendors differ in the most fundamental metric of all – processing power – in ways that not all customers are aware of. Once these differences are factored into cost, the already strong Oracle Cloud offering becomes an even more compelling proposition.
Oracle measures compute in OCPUs, whilst Amazon AWS, Microsoft Azure and Google Cloud all use vCPUs – but the two measures cannot be directly aligned, because they are very different things.
Let's get right to the conclusions before we get into the detailed rationale below.
- Real cores vs virtual CPUs
  - Developers benefit from real CPU core resources in the Oracle cloud.
- Workload Comparison
  - Real cores mean we can directly compare on-premises workloads to the Oracle cloud.
  - The value of the investment in each Oracle cloud instance is 100% realized.
- Overall Cost
  - Oracle's cloud delivers this enterprise workload capability at a lower cost.
vCPUs: AWS, Azure, Google Cloud
AWS, Azure and Google Cloud charge vCPUs at the thread level: a standard Intel processor core with Hyper-Threading enabled has two threads, and each thread may be shared between VMs. Customers can pay more for dedicated cores, but these are not the norm.
OCPUs: Oracle Cloud Infrastructure (OCI)
Oracle OCPUs are charged at the core level, with no sharing of compute resources. A customer buying a single OCPU gets a dedicated core with two threads.
Dedicated cores and threads ensure guaranteed performance for workloads, with no contention.
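The core/thread accounting above can be sketched in a few lines. This is an illustrative model, assuming a standard x86 part with Hyper-Threading (two hardware threads per physical core); the function names are ours, not vendor terminology.

```python
# Assumption: x86 with Hyper-Threading, i.e. 2 hardware threads per core.
THREADS_PER_CORE = 2

def vcpus_to_cores(vcpus: int) -> float:
    """AWS/Azure/Google Cloud bill per vCPU, where 1 vCPU = 1 hardware thread."""
    return vcpus / THREADS_PER_CORE

def ocpus_to_threads(ocpus: int) -> int:
    """Oracle bills per OCPU, where 1 OCPU = 1 dedicated core = 2 threads."""
    return ocpus * THREADS_PER_CORE

print(vcpus_to_cores(4))    # 4 vCPUs span only 2 physical cores
print(ocpus_to_threads(2))  # 2 OCPUs expose 4 dedicated threads
```

The asymmetry is the whole point: 4 vCPUs and 2 OCPUs map onto the same silicon, but only the OCPUs are dedicated.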
Going beyond just thread counts, even the simple equation of 2 vCPU = 1 OCPU does not hold, due to thread contention: other tenants' VMs may be scheduled onto the same threads. This can have a considerable impact when hosting multi-threaded applications that scale to serve multiple users. Developers and architects cannot directly compare on-premises hardware to cloud-hosted vCPUs.
The exact ratio of vCPUs to OCPUs required for the same processing power depends on how oversubscribed the vCPUs are at any point in time, but it will almost always be greater than 2:1.
The only resolutions are to over-purchase, or to flex onto a larger number of vCPUs (with the associated costs) until workload demands are met – or to use known, dedicated OCPUs for guaranteed performance and pricing.
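The "greater than 2:1" claim can be made concrete with a toy model. The contention factor below – the fraction of a hardware thread actually available to your VM – is an illustrative assumption, not a published vendor figure:

```python
# Hedged sketch: how many shared vCPUs approximate one dedicated OCPU.
# contention_factor is an assumed fraction of a thread available to your
# VM on an oversubscribed host (1.0 = no contention at all).
def vcpus_needed_per_ocpu(contention_factor: float) -> float:
    """One OCPU = 2 dedicated threads; each shared vCPU delivers only
    contention_factor of a thread, so the ratio exceeds 2:1 whenever
    contention_factor < 1."""
    dedicated_threads = 2
    return dedicated_threads / contention_factor

for cf in (1.0, 0.8, 0.5):
    print(f"contention factor {cf}: {vcpus_needed_per_ocpu(cf):.1f} vCPUs per OCPU")
```

At a contention factor of 0.5 – half the thread consumed by neighbours – you would need four vCPUs to match a single OCPU, which is exactly the over-purchasing problem described above.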
The following comparison was created using public list prices from Oct 2017.
| Vendor | CPU Cores | Mem | Storage | Utilization | Monthly Cost | Shape |
|---|---|---|---|---|---|---|
| Oracle OCI Frankfurt | 2 OCPU / 2 cores / dedicated | 14 GB | 400 GB | 100% | $131.00 | VM.Standard1.2 on BMC |
| AWS Frankfurt | 4 vCPU / 2 cores / oversubscribed | 16 GB | 400 GB | 100% | $202.15 | t2.xlarge (Linux) |
| Azure Germany Central (Frankfurt) | 4 vCPU / 2 cores / oversubscribed | 14 GB | 512 GB | 100% | $217.72 | D3 v2 |
In this case, the shapes are driven by memory needs along with a minimum core count. The Oracle shape provides 2 cores/4 threads with no contention, at a lower price than AWS's and Azure's 2 cores/4 threads with contention.
Linux was used to avoid any differences in Windows licensing models.
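Normalising the table above to a cost per physical core makes the comparison explicit. The figures are the October 2017 list prices from the table; note that no contention adjustment is applied here, so the gap understates Oracle's advantage wherever the shared vCPUs are oversubscribed:

```python
# Monthly list prices and core counts taken from the comparison table
# (Oct 2017). Per-core cost = monthly price / physical cores.
shapes = {
    "Oracle VM.Standard1.2": {"monthly_usd": 131.00, "cores": 2},
    "AWS t2.xlarge":         {"monthly_usd": 202.15, "cores": 2},
    "Azure D3 v2":           {"monthly_usd": 217.72, "cores": 2},
}

for name, s in shapes.items():
    per_core = s["monthly_usd"] / s["cores"]
    print(f"{name}: ${per_core:.2f} per core per month")
```

On these list prices Oracle works out at $65.50 per dedicated core per month, against roughly $101 and $109 per (contended) core for AWS and Azure respectively.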
The primary conclusion: ensure your measurements of compute and other key performance metrics are like-for-like, so that a proper cost/performance assessment can be made across cloud vendors.
Oracle has come from the high-end enterprise down, with a starting focus on massive workloads, guaranteed performance and stability. This engineering-led viewpoint has driven its expansion into cloud, from Gen1 through to the Gen2 Bare Metal Cloud, now known as OCI. The focus is on engineering quality for the enterprise.
By comparison, AWS and Azure have come from the lowest commodity scale point of view and are slowly moving up. This is a key factor when planning the migration of enterprise workloads from on-premises to the cloud.