
Critical Capabilities for Public Cloud Infrastructure as a Service, Worldwide


Summary

Market-leading cloud IaaS providers deliver significantly greater capabilities than their competitors, which are relegated to niches. Enterprise architecture and technology innovation leaders must make both strategic and tactical use-case-specific provider selections.

Overview

Key Findings

  • Most public cloud infrastructure-as-a-service offerings deliver virtual machines, and the basic storage and networking capabilities associated with those compute resources, but the depth, breadth, quality and manageability of the capabilities vary significantly. The best providers integrate management capabilities, cloud software infrastructure services and extensive additional services that extend across the spectrum from IaaS to platform as a service, delivered as a unified whole — integrated IaaS+PaaS.
  • The most capable public cloud IaaS offerings can be successfully used for both new and existing applications; they enhance both developer productivity and infrastructure and operations efficiency, and may enable IT transformation. Other offerings may deliver significantly less value.
  • Many less-capable providers are now focusing on multicloud management, instead of trying to directly compete with the market leaders; their own cloud IaaS offerings are used to supplement one or more hyperscale cloud providers. Other providers have turned to targeting niches. Some offerings in this research are newly introduced and, thus, have a limited operational track record.

Recommendations

Enterprise architecture and technology innovation leaders responsible for cloud computing should:
  • Choose one or two strategic cloud IaaS providers that can meet most of their needs. Consider specialist providers for use cases that cannot be served well by your strategic providers. Adopt a bimodal IT sourcing strategy and ensure that you prioritize meeting the needs of developers and other technical end users who consume cloud IaaS, not just the needs of the I&O organization.
  • Test several offerings before committing to any one service; this is the only way to get a feel for the nuances of depth, breadth, quality, manageability, user experience and cost. Many public cloud IaaS offerings can be bought by the hour, with no contractual commitment.
  • Determine a provider's likely strategic future before migrating a significant percentage of applications into its cloud. Assume cloud IaaS offerings are not interchangeable and a workload will stay where you put it; think about application portability as a long-term strategic question.

What You Need to Know

Public cloud infrastructure as a service (IaaS) can be used for most workloads that can run in a virtualized x86-based server environment. Enterprise architecture and technology innovation leaders should ensure that they select offerings that meet the needs of developers and other technical end users of cloud IaaS solutions, not just the needs of the I&O organization. A focus on application needs and application life cycles will help drive agility, efficiency and the best fit for the use case. IT leaders should also keep in mind that the most effective way to adopt cloud IaaS usually requires embracing DevOps, taking advantage of the provider's proprietary capabilities, including management capabilities, cloud software infrastructure services and platform as a service (PaaS) capabilities, and moving toward an agile and transformative approach to IT. Using cloud IaaS as "rented virtualization" may be of limited benefit.
This Critical Capabilities research compares providers in the context of their ability to deliver value for five common use cases for public cloud IaaS:
  • Application development
  • Batch computing
  • Internet of Things (IoT) applications
  • Cloud-native applications
  • General business applications
This research should help you draw up a shortlist of appropriate providers for the public cloud IaaS use cases in your organization. Although it's still appropriate to choose providers in a use-case-specific way, most organizations will choose one or two providers for strategic adoption across multiple use cases.

Analysis

Critical Capabilities Use-Case Graphics

Figure 1. Vendors' Product Scores for the Application Development Use Case
Source: Gartner (October 2017)
Figure 2. Vendors' Product Scores for the Batch Computing Use Case
Source: Gartner (October 2017)
Figure 3. Vendors' Product Scores for the Cloud-Native Applications Use Case
Source: Gartner (October 2017)
Figure 4. Vendors' Product Scores for the General Business Applications Use Case
Source: Gartner (October 2017)
Figure 5. Vendors' Product Scores for the Internet of Things Use Case
Source: Gartner (October 2017)

Vendors

All of the providers evaluated in this Critical Capabilities research serve enterprise and midmarket customers, and they generally offer a high-quality service. Note that when we say "all providers," we specifically mean "all the evaluated providers included in this Critical Capabilities research," not all cloud IaaS providers in general.
Each of the public cloud IaaS offerings rated in this Critical Capabilities research is briefly summarized below.
For each service provider, we provide an overview that discusses the offering's fit to common use cases, as well as a set of notable service traits, which summarizes the offering's compute, storage and network capabilities compared with the list of expected characteristics above, along with notable management-related services (such as monitoring, autoscaling, templates or an integrated service catalog) and other services (specifically, cloud software infrastructure services, such as database as a service [DBaaS]).
Offerings that use VMware's vSphere hypervisor are described as vCloud-based or VMware-virtualized. A vCloud-based offering uses vCloud Director (vCD) software and offers access to the vCloud API; these service providers may offer their own portal, the vCD portal or both.
Throughout the offering descriptions, VM sizes are stated as "vCPUxRAM"; for instance, "8x32" refers to eight virtual CPUs (vCPUs) and 32 gigabytes (GB) of RAM. In general, a vCPU maps to a physical core on a CPU, but not always. Implementations vary between providers — and may actually vary within a particular provider's infrastructure — since many providers have multiple generations of hardware in their cloud. CPU performance varies widely, so if it is very important to you, you should carry out your own benchmarks. Bare-metal server sizes are stated as "CPUxRAM"; CPU is measured in physical cores. Maximum compute instance sizes are provided as a guideline for understanding the scope of what a provider offers.
We directly tested every offering, including conducting performance and provisioning benchmarks, and we have continuously monitored each service over a multiyear period. Provisioning times are factored into the Scaling score. Neither performance nor monitoring results are factored into the scores. Some profile text contains commentary derived from this testing. This data is available via the Gartner Cloud Decisions tool.
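As a minimal illustration of how such a provisioning benchmark can be run (not a description of how Gartner's own testing is implemented), the following Python sketch uses the AWS boto3 SDK to request 20 Linux VMs in a single call and time how long they take to reach the "running" state; the image ID, instance type and region are placeholder assumptions.

```python
# Minimal sketch: timing simultaneous provisioning of multiple VMs on AWS.
# The AMI ID, instance type and region are placeholders, not recommendations.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

start = time.monotonic()
resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder Linux image
    InstanceType="t2.micro",
    MinCount=20,                   # request 20 VMs in a single call
    MaxCount=20,
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

# Wait until every instance reports "running", then record elapsed time.
ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
print(f"20 instances running after {time.monotonic() - start:.0f} seconds")

# Clean up so the short-lived test does not keep accruing hourly charges.
ec2.terminate_instances(InstanceIds=instance_ids)
```

Comparable measurements against other providers require each provider's own SDK or API, since the calls are not interchangeable.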
Embedded in all descriptions is commentary on the quality of the portal user interface (UI) and its responsiveness to user actions, as well as an evaluation of the documentation. Most providers offer documentation in their portals, as well as helpful tips embedded in the UI, but some providers only offer a manual. Broadly, these manuals are significantly less helpful, comprehensive and clear than portal-based documentation. Awkward, nonintuitive or slow UIs may frustrate users. Low-quality documentation may be frustrating, affect customer success and increase reliance on the provider's technical support. However, neither UI usability nor documentation was factored into the scoring.
Certain capabilities are possessed by all providers, unless noted otherwise. Other capabilities are common to many providers. We note deviations from the norm in the market; these deviations are sometimes positive and sometimes negative, since the norm is not always positive. These capabilities, and the standard terminology we use for them within the Notable Service Traits section of each vendor profile, are listed below. Capabilities that are "expected" should be assumed, unless noted otherwise; capabilities that are "not expected" are listed in the profiles when they are offered.
Compute-Related Capabilities
  • Hourly per-VM pricing (expected). All the providers offer, at minimum, per-hour metering of VMs, and some can offer shorter metering increments, which can be more cost-effective for short-term batch jobs. Providers charge on a per-VM basis, unless otherwise noted. Some providers offer a shared resource pool (SRP) pricing model or are flexible about how they price the service. In the SRP model, customers contract for a certain amount of capacity (in terms of CPU and RAM) but can allocate that capacity to VMs in an arbitrary way, and may be able to oversubscribe that capacity voluntarily; additional capacity can usually be purchased on demand by the hour. (A worked pricing comparison follows this list.)
  • Slow or nonsimultaneous provisioning (not expected, and negative). Most of the providers can provision a basic Linux VM within five minutes (although this will increase with large OS images, and Windows VMs typically take somewhat longer). Those that cannot are noted as having slow provisioning. Most providers can do simultaneous provisioning of multiple VMs; for example, provisioning 20 VMs will finish about as quickly as one VM. Those that cannot are noted as such, and the degradation can be significant (some providers take over an hour to provision 20 VMs). Nonsimultaneous provisioning has a major negative impact in disaster recovery, instant high-scalability and batch-computing scenarios.
  • Fixed-size VMs (expected, but negative). Most of the providers have a catalog of fixed-size VMs — specific configurations of vCPUs, RAM, and VM storage. Some providers with fixed-size VMs have a very limited range of VM sizes, while others have a wide variety of sizes and suit a broad range of use cases. Some providers allow customers to choose arbitrary-size VMs — any combination of vCPUs, RAM and VM storage, subject to some limits, such as a maximum ratio between vCPUs and RAM.
  • Resizable VMs (expected). Most of the providers can resize an existing VM without needing to reprovision it; those that cannot are explicitly noted as offering nonresizable VMs. Some of the providers can resize an existing VM without needing to reboot it (if the customer's OS can also support it).
  • Single-tenant VMs (not expected). Some of the providers are able to offer an option for single-tenant VMs within a public cloud IaaS offering, on a fully dynamic basis, where a customer can choose to place a VM on a host that is temporarily physically dedicated to just that customer, without the customer needing to buy a VM that is so large that it consumes the whole physical host. These VMs are typically more expensive than VMs on shared hosts. Providers that have this option are noted as such.
  • Bare-metal servers (not expected). Some of the providers are able to offer "bare metal" physical servers on a dynamic basis (metered by the hour or less). Providers with a bare-metal option are noted as such, along with estimated provisioning times, as some providers may take hours to provision a bare-metal server.
  • High-performance computing (HPC) options (not expected). Some of the providers are able to offer an option for GPUs, in conjunction with VMs or bare-metal servers. Some may also offer fast network interconnects that are useful for HPC clusters.
  • Container service (not expected). Some of the providers offer a Docker-based container service. In some cases, the customer can control the underlying VMs and, thus, control the container host OS; in other cases, the provider controls the underlying VMs.
  • Colocation (expected). All the providers offer an option for colocation, unless otherwise noted. Many customers have needs that require a small amount of supplemental colocation in conjunction with their cloud — most frequently for a large-scale database, but sometimes for specialized network equipment, software that cannot be licensed on virtualized servers, or legacy equipment.
  • Autorestart (expected). Most of the providers have resilient infrastructure, achieved through redundant infrastructure in conjunction with VM clustering, or the ability to rapidly detect VM failure and immediately restart it on different hardware. They are thus able to offer very high SLAs for infrastructure availability — sometimes as high as 99.999% (sometimes expressed as a 100% SLA with a 10-minute exclusion). "No autorestart" indicates offerings without VM clustering or fast VM restart — the key features that provide higher levels of infrastructure availability than can be expected from a single physical server.
  • Maintenance windows (expected, but negative). Most of the providers have maintenance windows that result in downtime of the control plane (including the GUI and API), and they may require infrastructure downtime. Some offerings utilize a technology that allows VM-preserving host maintenance, but they may still have downtime for other types of maintenance. Some utilize live migration of VMs, largely eliminating the need for downtime to perform host or data center maintenance, but this does not eliminate maintenance windows in general.
  • Multiple data center zones (not expected). Some providers have multiple data centers ("zones") within a region — with high-speed connectivity and intra-data-center latencies that are low enough to allow for synchronous replication — and enable customers to explicitly choose zones within the region. This allows customers to more easily achieve high availability.
  • Replication (not expected). Infrastructure resources are not normally replicated automatically into multiple data centers, unless otherwise noted. Customers are responsible for their own business continuity. Some providers offer optional disaster recovery solutions.
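The following worked example, referenced in the pricing bullet above, contrasts per-VM hourly pricing with the SRP model; every price and capacity figure is a hypothetical assumption chosen only to show the mechanics, not any provider's actual rate.

```python
# Illustrative comparison of per-VM hourly pricing vs. a shared resource pool (SRP).
# All prices and sizes are hypothetical assumptions, not any provider's rates.

HOURS = 730                      # approximate hours in a month

# Per-VM model: each VM is metered individually.
vm_price_per_hour = 0.10         # hypothetical price for a 2x8 VM
vm_count = 40
per_vm_cost = vm_price_per_hour * vm_count * HOURS

# SRP model: the customer contracts for a pool of capacity (vCPU + RAM)
# and carves it into VMs in an arbitrary way.
pool_vcpus, pool_ram_gb = 80, 320          # contracted capacity (fits 40 x 2x8 VMs)
pool_price_per_hour = 3.60                 # hypothetical flat rate for the pool
srp_cost = pool_price_per_hour * HOURS

print(f"Per-VM monthly cost: ${per_vm_cost:,.0f}")
print(f"SRP monthly cost:    ${srp_cost:,.0f}")
# With the SRP, reshaping the same capacity into, say, 20 x 4x16 VMs
# (or oversubscribing it, where permitted) does not change the bill.
```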
Storage-Related Capabilities
  • Ephemeral storage (not expected, and negative). Typically, the storage associated with an individual VM is persistent. However, some providers have ephemeral storage, where the storage exists only during the life of the VM; if the VM goes away unexpectedly (for instance, due to hardware failure), all data in that storage is lost. Providers with ephemeral storage may offer persistent storage options as well; however, ephemeral storage is often less expensive. Ephemeral storage is always noted explicitly.
  • VM-independent block storage (expected). All the providers offer VM-independent block storage, unless otherwise noted. A few providers allow storage volumes to be mounted on multiple VMs simultaneously, although customers must correctly architect their solutions to ensure data integrity (just as they would with a traditional storage array).
  • File storage (not expected). Some of the providers offer file storage — shared network storage that supports file protocols such as NFS and SMB. Providers that have this option are noted as such.
  • All-SSD storage (expected). Storage performance varies considerably between providers. Most providers can offer solid-state drives (SSDs). Providers that cannot do so, or that offer only SSD-accelerated storage (where SSDs are used to cache data for higher performance), are noted as such.
  • Object storage (expected). All the providers offer object-based cloud storage, unless otherwise noted. In many cases, this service is integrated with a content delivery network (CDN).
  • Encryption (not expected). Some providers offer encryption as part of some or all of their storage services. In some cases, encryption is always applied; in others, the customer must choose to enable encryption, as sketched in the example after this list.
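As referenced in the encryption bullet above, the following minimal sketch shows a customer explicitly enabling encryption at rest on a per-object basis, using the S3 API via boto3 as one example; the bucket and object names are placeholders, and other providers' object storage APIs differ.

```python
# Minimal sketch: writing an object with server-side encryption explicitly
# enabled, using the S3 API via boto3. The bucket and key names are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-bucket",
    Key="reports/2017-10.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",   # request encryption at rest for this object
)

# Verify which encryption setting the service actually applied.
head = s3.head_object(Bucket="example-bucket", Key="reports/2017-10.csv")
print(head.get("ServerSideEncryption"))
```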
Network-Related Capabilities
  • Complex network topologies (expected). All the providers offer customers a self-service ability to create complex hierarchical network topologies with multiple network segments, and to have multiple IP addresses per VM, including a public and a private IP, unless otherwise noted. Some providers use a software-defined network (SDN), which typically allows greater API-based control over the network.
  • Load balancing (expected). All the providers offer self-service, front-end load balancing, unless otherwise noted. All also offer back-end load balancing (used to distribute loads across the middle and back-end tiers of an application), unless otherwise noted. Some also offer application-layer load balancing (content routing).
  • Private WAN (expected). All the providers have a private WAN that connects their data centers, unless otherwise noted.
  • Private network connectivity (expected). All providers offer an option for private network connectivity (usually in the form of Multiprotocol Label Switching [MPLS] or Ethernet purchased from the customer's choice of carrier), between their cloud environment and the customer's premises, unless otherwise noted.
  • Third-party connectivity via partner exchanges (not expected). Some providers allow customers to obtain private connectivity via cross-connect in the data centers of select partners, such as Equinix and Interxion; this also meets the needs of customers who require colocated equipment. Some carriers may also have special products for connecting to specific cloud providers — for example, AT&T NetBond and Verizon Secure Cloud Interconnect.
  • Private IPs (expected). All the providers allow customers to have VMs with only private IP addresses (no public internet connectivity), and also allow customers to use their own IP address ranges, unless otherwise noted. Most of the providers support the use of internet-based IPsec virtual private networks (VPNs). Some providers may enforce secure access to management consoles, restricting access to VPNs or private connectivity.
  • DNS (expected). All providers have a DNS service, unless otherwise noted.
Security-Related Capabilities
  • Compliance (expected). All the providers claim to have high security standards. The extent of the security controls provided to customers varies significantly, though. Most providers offer additional security services. All the providers evaluated can offer solutions that will meet common regulatory compliance needs, unless otherwise noted. Unless otherwise noted, all the providers have ISO 27001 assessments for their public cloud IaaS offering (see Note 1). Many will have SOC 1, 2 and 3 (see Note 2). A few can meet FedRAMP requirements (see Note 3). Some can support PCI compliance with stored cardholder data. Some can support Health Insurance Portability and Accountability Act (HIPAA) compliance and will sign a business associate agreement (BAA). Third-party assessments should not be taken as indications of security, though some, such as FedRAMP, indicate that the provider is likely to have an acceptable security posture.
  • Restricted administrative access (expected). All the providers conduct background checks on personnel, prohibit their personnel from logging into customer compute instances unless the provider is doing managed services on behalf of the customer, and log the provider's administrative access to systems, unless otherwise noted.
  • Role-based access control (RBAC; expected). All the providers offer a portal and self-service mechanism that is designed for multiple users and that offers hierarchical administration and RBAC. However, the degree of RBAC granularity varies greatly. From most to least control, RBAC can be per element, tag, group or account. Unless stated otherwise, a provider's RBAC applies across the account. Providers typically predefine some roles; the ability to have customer-defined roles offers more control, and is noted where available (an example of tag-scoped access control follows this list). We strongly recommend that customers that need these features, but that want to use a provider that does not have strong support for them, evaluate a third-party management tool, such as CliQr, RightScale or Scalr; be aware, however, that this can create a security single point of failure, because the tool is typically fully privileged, and users may be able to circumvent the tool by using the provider's portal or API directly.
  • MFA (expected). All providers offer multifactor authentication (MFA) for the portal, unless otherwise noted.
  • Logging (expected). All the providers log events, such as resource provisioning and deprovisioning, VM start and stop, and account changes, and allow customers self-service access to those logs for at least 60 days, unless otherwise noted.
  • Stateful firewall (expected). Most of the providers offer a stateful firewall (intrusion detection system/intrusion prevention system [IDS/IPS]) as part of their offering, although a few offer only access control lists (ACLs), and a few offer no self-service network security at all; this will always be explicitly noted. Providers that also offer a web application firewall (WAF) are noted as such.
  • Anti-DDoS (expected). All providers provide distributed denial of service (DDoS) attack mitigation, unless otherwise noted.
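As referenced in the RBAC bullet above, the following minimal sketch shows what tag-scoped (rather than account-wide) access control can look like, using AWS IAM as one example; the policy name and tag values are placeholder assumptions, and other providers express equivalent controls differently.

```python
# Minimal sketch of fine-grained RBAC using AWS IAM as one example:
# a policy that allows stopping or starting only EC2 instances carrying a
# specific tag (tag-scoped control), rather than any instance in the account.
# The policy name and tag values are placeholder assumptions.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/team": "analytics"}
            },
        }
    ],
}

iam.create_policy(
    PolicyName="analytics-instance-operator",
    PolicyDocument=json.dumps(policy_document),
)
```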
Management-Related Capabilities
  • Control plane is continuously available (not expected). A few providers maintain continuous availability of their control plane — they do not have any planned maintenance that results in outages in which the portal or API is unavailable.
  • No maintenance windows (not expected). Most providers have maintenance windows during which the performance or availability of customer infrastructure may be affected. Their SLAs exclude these maintenance windows.
  • Monitoring (expected). All the providers offer self-service monitoring as an option, unless otherwise noted.
  • Autoscaling (not expected). A few providers offer trigger-based autoscaling, which allows provisioning-related actions to be taken based on a monitored event; a sketch follows this list. Some providers offer schedule-based autoscaling, which allows provisioning-related actions to be executed at a particular time. Note that many providers have autoscaling that stops and starts compute instances in a preprovisioned pool ("static autoscaling"), rather than provisioning or deprovisioning them from scratch ("dynamic autoscaling"). If stopped instances continue to incur charges (storage charges may apply even if there is no compute charge), static autoscaling may offer smaller cost savings. Dynamic autoscaling is more flexible, but it may be slower than static autoscaling when the provider does not have very fast provisioning times.
  • Tagging (expected). All the providers, unless otherwise noted, offer the ability to place metadata tags on provisioned resources, and to run reports based on them, which is useful for internal showback or chargeback. Some providers also offer cost control measures, such as quotas (limits on what a user can provision) and leases (time-limited provisioning of resources).
  • Marketplace (not expected). Some of the providers offer a marketplace in which third-party commercial software can be licensed on-demand. Some marketplaces also contain open-source software.
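As referenced in the autoscaling bullet above, the following minimal sketch shows trigger-based autoscaling on one provider (AWS): a monitored metric crossing a threshold invokes a scaling policy that provisions additional instances. The group name, policy name and threshold are placeholder assumptions, and the Auto Scaling group is assumed to already exist.

```python
# Minimal sketch of trigger-based ("dynamic") autoscaling, using AWS as one
# example: add two instances to an existing Auto Scaling group whenever
# average CPU exceeds 70% for ten minutes. Names and values are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,            # provision two more instances when triggered
    Cooldown=300,
)

cloudwatch.put_metric_alarm(
    AlarmName="web-tier-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-tier"}],
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],   # the monitored event triggers the policy
)
```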
Developer-Enablement-Related Capabilities
  • API (expected). All providers offer an API — it is fundamental to the definition of cloud IaaS. Providers vary in the degree of completeness of coverage of service features that is available in their API. For all providers, we note the nature of the API, and the degree of coverage provided by the API.
  • Command line interface (CLI; expected). Most providers offer at least one CLI, although they vary in the degree of completeness of coverage of service features that can be controlled via the CLI. For all providers, we note CLI coverage and Windows PowerShell support, if available.
  • Integrated development environment (IDE) integration (not expected). Some providers have built plug-ins for IDEs, or are otherwise directly supported by IDEs. A provider is considered to have IDE integration if both Eclipse and Visual Studio support that provider's platform.
We also list cloud software infrastructure services, such as DBaaS and other middleware as a service, regardless of whether these capabilities are delivered at the IaaS or PaaS level. Some of the providers have so many services of this type that we do not list them all. We also note the existence of an application platform as a service (aPaaS), even though we do not evaluate aPaaS in this assessment, because aPaaS is useful to many cloud IaaS customers.
The capabilities listed above are by no means comprehensive, and they are considered industry norms for enterprise-class offerings; few such capabilities are differentiating. Providers need basic capabilities to be implemented in a robust fashion in order to score well on this critical capability evaluation, but they also need capabilities beyond the basics listed above.
Keep in mind that a provider that seems like a good fit for a particular category of use case might not be an ideal fit for a specific need, as individual technical and business requirements and priorities vary. Conversely, a provider with a mediocre score on the relevant use case may nevertheless have the best fit to your particular requirements, especially if you have unusual constraints.

Alibaba Cloud

Alibaba Cloud was established in 2009, and its international cloud offering, with an English-language portal, was launched in mid-2016. In this evaluation, we assess the international offering, which has capabilities that are less extensive than the Mandarin-language offering in mainland China.
Alibaba Cloud has a broad range of IaaS and PaaS capabilities, and it tends to imitate other cloud services, especially AWS, both in offering structure and branding. Caution should be used when considering Alibaba capabilities that are supposedly AWS-compatible. For example, Alibaba Cloud claims Amazon S3 API compatibility, but implements a substantially different authentication mechanism; thus, an application that integrates with the S3 API will need low-level modifications before it can potentially work with Alibaba's object storage service.
Alibaba's UI employs a clean aesthetic, heavily inspired by the simplicity of modern VPS providers, such as Digital Ocean. Though Alibaba's portal is easy to use, we encountered significant issues with responsiveness and performance when we tested the portal from outside mainland China. Alibaba's English-language documentation is thorough from the perspective of portal, product and API capabilities.
Alibaba has well-rounded batch computing and analytics offerings. Alibaba offers a managed Hadoop and Spark service, and native support for Alibaba Cloud storage has recently been added to the open-source versions of those frameworks as well. Hadoop and Spark employ a widely adopted storage interface known as S3A, which now works with Alibaba's Object Storage Service (OSS).
Alibaba's offering is not designed to target customers with traditional general business applications. Notably, Alibaba lacks support for Microsoft workloads beyond Windows Server, which can be installed from readily available, prebuilt images.
Alibaba Cloud is likely to appeal to developers building cloud-native applications operated with a DevOps philosophy, and for whom mainland China is a primary deployment target.
Notable Service Traits:
  • Compute: Xen or KVM-virtualized, fixed-size and resizable (requires reboot), with a broad range of VM sizes. Maximum VM size of 56x480. Docker container service integrated into Elastic Compute Service (ECS).
  • Storage: Ephemeral local storage (NVMe with some compute instances), VM-independent block storage (Cloud Disks), tiered object storage with integrated CDN (OSS with Cloud CDN).
  • Network: Capable SDN (VPC). DNS service (Cloud DNS) allows specific carrier selection for traffic. Third-party connectivity via partner exchanges (Express Connect).
  • Security: Can meet some common audits and compliance requirements, including SOC 1, SOC 2, SOC 3, ISO 27001 and PCI. Encryption available for some data stores. Very granular RBAC (Resource Access Management [RAM]), including RBAC across multiple accounts. Key management service. WAF and anti-DDoS.
  • Management: No maintenance windows for most services. Broad range of native capabilities, including monitoring (CloudMonitor), dynamic autoscaling, templates (Resource Orchestration), service catalog, and billing management. Significant marketplace.
  • Developer Enablement: RESTful API with extensive coverage, and a broad range of language bindings and SDKs from Alibaba and the community. There is a primary CLI, but some services require installation of additional CLIs.
  • Other Services: Database (ApsaraDB for RDS), caching (ApsaraDB for Redis), data warehouse (MaxCompute), data analytics (E-MapReduce) and many more.

Amazon Web Services

Amazon Web Services (AWS) essentially created the cloud IaaS market with the 2006 introduction of its Elastic Compute Cloud (EC2), and it still offers the richest suite of public cloud IaaS capabilities, along with deep and broad PaaS-layer capabilities.
AWS is suitable for nearly all use cases that run well in a virtualized environment. Applications should not need to be modified to run on AWS, although customers may benefit from optimizing applications for the platform. Customers also need to pay attention to best practices for resiliency, performance and security. AWS is a complex platform, due to its extensive array of capabilities and options. There is extensive high-quality documentation, and the portal tries to offer good experiences to both novice users and experts with complex large-scale needs; however, depth of functionality seems to be prioritized over ease of use.
AWS is an especially strong choice for digital business and other new applications, including customer-facing applications, big data and analytics, and back ends for mobile applications and IoT. Its extensive suite of services is useful for improving developer productivity and simplifying operations, and customers typically use a blend of AWS's IaaS and PaaS capabilities. It is best-suited to a DevOps style of operations, but traditional operations approaches are also viable.
AWS is also commonly used for legacy applications in a "lift and shift" approach, as well as transformation-oriented full data center migrations, due to its solid feature set, platform resiliency and maturity, and the ability to meet most requirements for security and regulatory compliance. AWS has introduced capabilities that facilitate migration for applications, servers and databases, and can move large amounts of data via its Snowball data-transfer appliances. It has fast provisioning times that may make it attractive as a disaster recovery platform when using an identical hypervisor is not a priority. AWS's extensive marketplace benefits customers, as well as independent software vendors (ISVs) and SaaS providers. Features such as cross-customer Virtual Private Cloud (VPC) peering help AWS customers integrate more easily with third-party providers.
AWS has many customers with a large number of users, such as developers, scientists, engineers or researchers; it has the most sophisticated capabilities for management and governance of many accounts, users and infrastructure components. Batch-computing users may value the AWS Spot Market, which uses a reverse-auction style of bidding for compute instances.
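As a minimal illustration of the Spot model described above, the following sketch uses boto3 to bid for ten Spot instances for a batch job; the bid price, image ID and instance type are placeholder assumptions, not recommendations.

```python
# Minimal sketch: bidding for low-cost Spot capacity for a batch job.
# The bid price, AMI and instance type are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.request_spot_instances(
    SpotPrice="0.05",               # maximum bid per instance-hour (hypothetical)
    InstanceCount=10,
    LaunchSpecification={
        "ImageId": "ami-xxxxxxxx",  # placeholder image with the batch worker installed
        "InstanceType": "c4.xlarge",
    },
)
request_ids = [r["SpotInstanceRequestId"] for r in resp["SpotInstanceRequests"]]
print(request_ids)
# If the Spot price rises above the bid, AWS may reclaim the instances, so
# batch workloads should checkpoint their progress to durable storage.
```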
AWS appeals most strongly to customers who value thought leadership, cutting-edge capabilities, the ability to meet most security and regulatory compliance needs, or a "safe" provider that has a well-proven reliable service and is likely to continue to be a long-term market leader.
(See "In-Depth Assessment of Amazon Web Services" for a detailed technical evaluation. "In-Depth Assessment of Amazon Web Services Application PaaS" might also be of interest.)
Notable Service Traits:
  • Compute: Xen-virtualized, fixed-size and nonresizable, but with a broad range of VM sizes. Maximum VM size of 128x1952. Single-tenant VM option (Dedicated Instances and Dedicated Hosts). HPC option that includes GPUs and high-performance network interconnects. FPGA instances. Option for burstable-CPU smaller VMs. Per-second metering. Docker container service integrated into EC2.
  • Storage: Ephemeral local storage (NVMe with some compute instances), VM-independent block storage (Elastic Block Store [EBS]), tiered object storage with integrated CDN (S3 with CloudFront), file storage (EFS) and archive storage (Glacier). Optional encryption. Optional Provisioned IOPS for EBS provides quality-of-service guarantees for storage performance.
  • Network: Highly configurable and sophisticated SDN (Amazon VPC). Load-balancing options include content routing. DNS (Route 53) includes global load-balancing service. Third-party connectivity via partner exchanges (Direct Connect).
  • Security: Can meet almost all common audits and common compliance requirements, including SOC 1, SOC 2, SOC 3, ISO 27001, FedRAMP, PCI, HIPAA and GxP (pharmaceutical industry). Encryption available for most data stores. Very granular RBAC (Identity and Access Management [IAM]), including RBAC across multiple accounts. MFA also includes API. Active Directory integration. Key management service, including hardware security modules. WAF, including integrated anti-DDoS.
  • Management: Control plane is continuously available. No maintenance windows. Multiple data center zones. Broad range of native capabilities, including monitoring (CloudWatch), dynamic autoscaling, templates (CloudFormation), configuration management (Config, OpsWorks), service catalog, and billing management. Extensive marketplace.
  • Developer Enablement: Extensive API coverage, with RESTful interfaces and a broad range of language bindings and SDKs from AWS and the community, including a secured mobile API. CLI and Windows PowerShell tool cover a wide range of functionality. IDE integration. Developer services (CodeStar and more).
  • Other Services: Database (Relational Database Service [RDS] for most common databases, DynamoDB); caching (ElastiCache); data warehouse (Redshift); data analytics (Elastic MapReduce); data ingest and event processing (Data Pipeline, Kinesis, Lambda, IoT); etc.

CenturyLink Cloud

CenturyLink has a range of public and private cloud IaaS offerings, built on different platforms. Its primary public cloud IaaS offering is CenturyLink Cloud (CLC), which was obtained via the acquisition of Tier 3 in November 2013.
CLC attempts to tread the middle ground between the needs of IT operators and developers by providing a service that appeals to traditional I&O teams, but that also offers ease of use and API-controllable capabilities to developers. However, developers typically expect far more extensive PaaS capabilities than CLC possesses.
CLC's UI is attractive and responsive; most tasks are straightforward, and the documentation is good. Although CLC can be used in an entirely self-service fashion, CenturyLink also provides many add-on managed services that are heavily automated but still have some human-guided elements. Most customers who choose CLC are likely to purchase some managed services; customers with strong security or regulatory compliance needs are likely to require managed security services to fill self-service feature gaps.
CLC's scriptable template system, Cloud Blueprints, is capable of provisioning complex, multi-data-center infrastructure configurations, and it is integrated with a marketplace. CLC provides additional Ansible-based orchestration capabilities through its multicloud automation service, Runner.
CLC's capacity pool is, however, relatively small, and provisioning throughput is relatively low, resulting in lengthy provisioning times when many VMs need to be provisioned at the same time. This limits the usefulness of the service for applications that require rapid scalability, or for on-premises disaster recovery scenarios that demand short recovery time objectives.
CLC is an acceptable platform for general business applications that run well in a virtualized environment (or that are suited to CLC's bare-metal server configurations, although these can take more than 20 minutes to provision), and thus can be used to "lift and shift" applications without change. It has solid features for user governance, including the ability to provide a restricted service catalog, which may make it appealing as a lab environment for developers. The CLC portal also allows a customer's administrator to set the price that his or her subaccounts see for CLC services, thus surfacing a price for internal chargeback.
CLC will appeal to I&O teams that are prioritizing their own operational requirements or want to use managed services, but need to appease developers who want self-service infrastructure.
Notable Service Traits:
  • Compute: VMware-virtualized, non-fixed-size, with a maximum VM size of 16x128, and nonsimultaneous provisioning. Bare-metal servers in eight configurations, up to a maximum size of 24x512.
  • Storage: Local storage, VM-specific block storage and object storage. Media disposal does not meet NIST standard.
  • Network: Limited number of VLANs per account, and they are predefined.
  • Security: SOC 2 and SOC 3 audits. Will support PCI and HIPAA with BAA. No MFA. Group-based RBAC.
  • Management: Monitoring only covers metrics for compute and storage; there is no availability monitoring. Horizontal and vertical static autoscaling. Patching. Backups. Templates (Blueprints), including a marketplace. Continuous configuration automation (Runner). Service catalog.
  • Developer Enablement: Extensive API coverage. RESTful interfaces with a wide assortment of vendor-developed language bindings. CLI covering functionality from older versions of RESTful interfaces.
  • Other Services: MySQL DBaaS. Microsoft SQL Server DBaaS in beta. Separate Cloud Foundry-based PaaS (AppFog). Disaster recovery (SafeHaven). Multicloud CMP as SaaS (ElasticBox).

Fujitsu Cloud Service K5 IaaS

Fujitsu launched Fujitsu Cloud Service K5 IaaS in 2016. It is an OpenStack-based offering with a variety of tenancy models. Fujitsu first launched a common global platform for its public and private cloud IaaS offerings in 2010; these offerings, including Fujitsu Cloud IaaS Trusted Public S5, still exist, but are not evaluated here.
Although K5 is an improvement over S5 and slightly expands the viable use cases, K5 is still missing a wide range of basic infrastructure capabilities, and has few integrated PaaS capabilities. The UI is somewhat awkward, and its performance can be sluggish. The documentation comes in the form of a manual and can be confusing, and the UI lacks integrated help.
K5 may offer an acceptable set of baseline capabilities for IT organizations that require an Asia/Pacific-based offering for one of the following four needs: as an IaaS platform in conjunction with Fujitsu managed services; for the lift-and-shift of small general business applications that run well in a virtualized environment and do not have significant security or regulatory compliance requirements; as development environments for small teams; or as basic IaaS used to supplement the use of Fujitsu's Cloud Foundry-based PaaS.
Notable Service Traits:
  • Compute: KVM-virtualized, fixed-size and resizable VMs, with a maximum VM size of 24x128. Bare-metal servers with a maximum size of 40x250, as well as SAP HANA-specific configurations.
  • Storage: Block storage has limited replication capabilities. No SSD options. Object storage. No data import from physical media.
  • Network: Does not fully support self-service complex hierarchical network topologies.
  • Security: SOC 2 and ISO 27001 audits. Whole-account, limited RBAC. No DDoS mitigation. No MFA. Limited logging of portal and API actions.
  • Management: Monitoring (based on OpenStack Monasca) provides metrics and email notification, but no availability alerts. Tagging is supported via the OpenStack APIs only. Only SAML federation.
  • Developer Enablement: RESTful OpenStack APIs, as well as K5-specific APIs, including APIs for monitoring and pricing data. No CLI, but OpenStackClient can be used.
  • Other Services: DBaaS (Postgres and Microsoft SQL Server). Separate Cloud Foundry-based PaaS.

Google Cloud Platform

Google Cloud Platform (GCP) combines IaaS and PaaS capabilities within an integrated solution portfolio. Google's VM offering, Google Compute Engine (GCE), was launched in June 2012 and became generally available in December 2013.
GCP has a rich set of well-architected, innovative capabilities, and is suitable for a broad range of use cases that run well in a virtualized environment. It has a well-designed UI that balances ease of use for both simple and complex tasks; the pleasant user experience is further facilitated by excellent documentation.
GCP's capabilities should appeal strongly to developers building new cloud-native applications, especially those that intend to adopt a broad range of Google API-enabled services beyond GCP itself. It may also appeal to those that have container-based architectures and want Kubernetes as a service. Its comprehensive API coverage facilitates a DevOps approach. Although it does not have as many cloud software infrastructure services as its leading competitors, some of its services, such as the Cloud Spanner DBaaS, offer uniquely differentiated capabilities.
GCP is also a very attractive platform for batch computing; it has exceptionally fast provisioning times and a very large pool of available capacity, along with low-cost "preemptible VMs" (which can be shut down by the platform at any time), making it especially well-suited to short-lived large-scale batch jobs. Google Genomics focuses these capabilities in a vertical-specific fashion.
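As a minimal illustration, the following sketch uses the Google API Python client to request a preemptible Compute Engine instance of the kind described above; the project, zone, machine type and image are placeholder assumptions.

```python
# Minimal sketch: creating a preemptible GCE instance for short-lived batch
# work, via the Google API Python client. Project, zone, image and machine
# type are placeholder assumptions.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")   # uses application default credentials

project, zone = "my-project", "us-central1-a"
body = {
    "name": "batch-worker-1",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    "scheduling": {"preemptible": True, "automaticRestart": False},
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

operation = compute.instances().insert(project=project, zone=zone, body=body).execute()
print(operation["name"])
# A preemptible VM can be reclaimed by the platform at any time, so the batch
# job itself must tolerate interruption and restart.
```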
GCP is also attractive for a variety of data analytics use cases, especially those that take advantage of its BigQuery platform or machine-learning APIs. GCP is also well-suited to many IoT use cases. Although it does not have a specific IoT platform, it does provide IoT-specific documentation and an IoT prototyping kit for developers.
GCP now has sufficient technical capabilities to meet the needs of many general business applications, and it is not necessary to rearchitect applications to run on GCE. During the past year, Google has been steadily introducing features that facilitate the "lift and shift" of traditional applications; consequently, many of these features are new or in beta.
GCP's strengths lie in developer enablement and new applications. It will appeal to organizations that like cutting-edge capabilities and have use cases that fit well into GCP's strengths.
(See "In-Depth Assessment of Google Cloud Platform IaaS" for a detailed technical evaluation.)
Notable Service Traits:
  • Compute: KVM-virtualized, fixed-size, with a broad range of VM sizes. Maximum VM size of 96x624. GPU instances. TensorFlow-optimized instances (Cloud TPUs) in alpha. Option for burstable-CPU smaller VMs. Per-second metering. Kubernetes-based Docker container service (Google Container Engine) integrated with GCE.
  • Storage: Ephemeral and persistent local storage (NVMe with some compute instances). VM-independent block storage. Tiered object storage with integrated CDN. All data is encrypted at rest and in motion; customers can provide their own keys.
  • Network: High-performance, configurable SDN. LAN and WAN encryption. Integrated local and global load balancing, including content routing, using AnyCast, rather than the Cloud DNS service. Third-party connectivity via partner exchanges (Cloud Interconnect).
  • Security: SOC 1, SOC 2, SOC 3 and ISO 27001 audits. Will support PCI and HIPAA with BAA. WAF. Granular RBAC.
  • Management: Control plane is continuously available. No maintenance windows. Monitoring, error reporting and debugging. Performance reporting. Templates (Cloud Deployment Manager). Marketplace (acquired Orbitera).
  • Developer Enablement: Extensive API coverage. RESTful interfaces with a broad set of language bindings. CLI and Windows PowerShell support a wide range of functionality. IDE integration.
  • Other Services: Database (Cloud SQL for MySQL and Postgres, Cloud Bigtable, Cloud Datastore, Cloud Spanner), data ingest (Cloud Dataflow, Cloud Pub/Sub), data analytics (BigQuery, Cloud Dataproc) and more. Application PaaS (Google App Engine).

IBM Bluemix Infrastructure

IBM's public cloud IaaS offering is anchored by the services offered by its SoftLayer subsidiary, which is being gradually absorbed into IBM and its broader cloud portfolio. SoftLayer, a provider of dedicated hosting and cloud IaaS, was acquired by IBM in July 2013, and its services replaced IBM's SmartCloud Enterprise offering. Currently, its infrastructure services are offered via the SoftLayer.com portal, where they continue to be branded "SoftLayer," as well as within the IBM Bluemix portal, where they are branded "IBM Bluemix Infrastructure." For clarity, the text of this profile calls these infrastructure services "SoftLayer" regardless of which portal is used to provision them (the services are the same regardless), and uses "Bluemix" to refer to the Bluemix portal and the non-SoftLayer services in that portal.
Customers use IBM IDs to sign into both the SoftLayer and Bluemix portals. Where necessary, the authentication back end translates IBM IDs into SoftLayer IDs and authenticates against the SoftLayer control plane. The SoftLayer API uses SoftLayer API keys; example code is provided for translating an IBM ID into a SoftLayer API key. There is also a SoftLayer CLI. The SoftLayer UI is reasonably usable, but the documentation is sprawling and complex.
Within the Bluemix portal, the infrastructure-related UI elements are drawn from the SoftLayer portal (the elements have SoftLayer URLs), but appear to be part of the Bluemix console. However, infrastructure provisioning actions directly create a pop-up window for the SoftLayer portal.
The Bluemix portal also contains a Cloud Foundry-based PaaS, a Kubernetes-based container service, and a variety of cloud software infrastructure services, with clean UIs and good documentation. These services use IBM IDs and Bluemix API keys, have their own APIs, and are incorporated into the Bluemix CLI. These services run in SoftLayer's data centers, but there is no self-service SDN for network integration between these services and a customer's SoftLayer infrastructure.
By Gartner's definitions, SoftLayer and the other services in the Bluemix portal are not integrated IaaS and PaaS. They do not share a fully integrated portal, API and CLI; they do not share a single low-latency network context; and they lack unified security controls. This evaluation is focused on IaaS capabilities and, thus, on SoftLayer. Consequently, our evaluation acknowledges the availability of additional capabilities via the Bluemix portal, but our scoring takes into account the lack of integration. Customers should be aware that the experience feels disjointed.
In this assessment, we evaluate only those capabilities that are available as a cloud service — those that are standardized, fully automated and metered by the hour (or less). SoftLayer's noncloud capabilities are typically provided via hardware rented by the month, rather than as abstracted services; we refer to these solutions as "hosted appliances." Regardless of the portal used, all customers have a quota of compute instances, and provisioning additional capacity requires sales order approval.
SoftLayer's compute options include bare-metal servers, as well as VMs, and VMs can be on single-tenant or multitenant hosts. Although some capabilities require VMs, SoftLayer tries to minimize the differences between VMs and bare-metal servers; its strength lies in these bare-metal capabilities. However, SoftLayer's services were originally designed for the small-business market, and its greatest weaknesses are in the capabilities that are desired by larger organizations.
SoftLayer is best-suited to use cases that require API-provisioned bare metal, but do not require API control or on-demand capacity for other solution elements (such as a load balancer). Bare-metal cloud servers may take more than 20 minutes to provision (and the customized hosting configurations may take up to four hours), and Gartner encountered repeated provisioning failures during testing. This relatively slow provisioning speed may limit their suitability for batch computing and cloud-native use cases. Furthermore, customers have reported capacity constraints in several data centers.
Organizations could potentially consider SoftLayer for "lift and shift" migrations, simply using SoftLayer bare-metal servers as a substitute for on-premises servers — renting, rather than buying, servers. SoftLayer is less-suited to other general business application use cases. Customers have reported, and Gartner monitoring confirms, multiple outages in the last year; most have been network-related. Users are notified of maintenance events, but customers need to carefully configure notifications, as SoftLayer can generate a large number of email alerts. VM import is limited to a narrow range of OS versions. SoftLayer's weak user management capabilities, without granular RBAC, make it unsuitable for most large-scale application development use cases.
Organizations that are using IBM Bluemix PaaS offerings, and that need a few IaaS resources for portions of the application not well-suited to PaaS, may find SoftLayer (Bluemix Infrastructure) to be a practical option. Organizations that are outsourcing their infrastructure to IBM, and are replatforming onto SoftLayer dedicated hosting, may find that SoftLayer's cloud services are a useful complement.
Notable Service Traits:
  • Compute: Citrix Xen-virtualized, fixed-size, multitenant or single-tenant VMs, in many possible sizes, up to a maximum VM size of 56x242. Bare-metal servers up to a maximum size of 24x256. No VM autorestart.
  • Storage: VM-specific and VM-independent block storage; no all-SSD option. SoftLayer object storage (without multi-data-center replication) and, via the Bluemix portal, IBM Cloud Object Storage (with an S3 API). Cannot snapshot VM-specific storage.
  • Network: Does not fully support complex hierarchical network topologies or customer-provided private IP addresses without use of a hosted appliance. No load balancing; customer can use a hosted appliance.
  • Security: SOC 1, SOC 2, SOC 3 and ISO 27001 audits. Will sign HIPAA BAA. IBM personnel can log into compute hosts.
  • Management: Monitoring (CA Nimsoft-based). Dynamic autoscaling. SAML federation is supported for IBM IDs. Marketplace.
  • Developer Enablement: SoftLayer has extensive API coverage, but the API is complex and designed more for partner integration than for customer self-service. SoftLayer's API uses a SOAP interface, with XML-RPC and REST alternatives; there are bindings available for popular languages. SoftLayer's CLI covers a limited set of functionality.
  • Other Services: Bluemix Cloud Foundry PaaS, Bluemix container service (Docker with Kubernetes), IBM Cloud Functions (Bluemix service based on OpenWhisk), and various Bluemix cloud software infrastructure services (such as a message bus, push notifications, Hadoop and Spark), including one-click offerings that leverage IBM Compose.

Interoute Virtual Data Centre

Interoute entered the cloud IaaS market with the 2012 launch of its VDC offering. We last evaluated this offering in 2015, when it was VDC version 1.0; the current version 2.0 represents a significant improvement in capabilities. Although it supports multiple hypervisors, hypervisor choice can affect what higher-order capabilities are available to a VM. The VMware hypervisor is preferred in VDC 2.0.
Interoute VDC offers various tenancy models, including multitenant public cloud. Although many other network service providers also have cloud IaaS offerings connected to their WAN services, Interoute has done a unique degree of integration between its network services and its cloud infrastructure. Customers can integrate and API-control their VDC LAN in conjunction with their Interoute WAN services.
Interoute VDC's UI is somewhat awkward and nonintuitive to use, and some UI operations are frustratingly slow. (Interoute intends to launch a completely revamped UI in October 2017.) The documentation is adequate for basic tasks.
Interoute VDC is best-suited to deploying production applications that require integration with the WAN, such as multisite distributed applications and applications that replicate data across multiple data centers. These can be either cloud-native applications or legacy applications.
Interoute VDC will appeal to organizations for which network integration is a high priority, especially those seeking DevOps-oriented control over an SDN WAN, or that have Pan-European cloud IaaS data center needs.
Notable Service Traits:
  • Compute: CloudStack-based and multihypervisor (VMware, Citrix Xen or KVM) VMs. Maximum VM size of 12x128 under VMware.
  • Storage: Full-featured ephemeral, persistent, block and file storage services. Object storage, with limited replication capabilities. Media disposal is compliant with the EU Data Protection Directive (disposal that meets the NIST standard is available on request).
  • Network: Customers can integrate LAN topologies with Interoute's WAN services and configure multisite networks via its portal, as well as an API. Full-featured load balancing and content routing.
  • Security: SOC 1 (ISAE 3402) and ISO 27001 audits. No MFA.
  • Management: Monitoring. Puppet integration. Snapshot management.
  • Developer Enablement: REST-based API with extensive coverage and multicloud library support.
  • Other Services: DBaaS (MySQL, Postgres, Microsoft SQL Server, Oracle).

Joyent Triton Public Cloud

Joyent entered the cloud IaaS market in 2007. It was acquired by Samsung Electronics in 2016, but continues to be managed as an independent entity.
Joyent did not respond to requests for the technical details of its service. Therefore, Gartner's analysis is based on Joyent documentation and hands-on testing of its service.
The Joyent Triton Public Cloud offers a unified model for VM and container-based compute instances; it can be used for VM-based or containerized applications, and may be especially useful for applications that blend both models.
Joyent's approach is distinctive in the market, both in its technical implementation and its service constructs. The portal experience varies in its ease of use and responsiveness. However, the documentation is not sufficiently thorough, which makes it more challenging for customers to adopt Joyent's unique offering.
Triton is fully container-native across compute, storage, and network resources. Both containers and VMs are managed via the same conceptual constructs. Customers that are building new cloud-native applications that use container-oriented, mini- or microservice architectures may find the Triton service attractive.
Triton Converged Analytics leverages containers to offer a serverless function PaaS that colocates compute capabilities with object storage, allowing it to be used for a range of ETL capabilities as well as in-place analytics. This may make Triton an attractive platform for batch computing and big data workloads.
Joyent should be looked at primarily as a specialized, container-oriented offering in the market, rather than as a general-purpose cloud IaaS offering (where the breadth of its feature set lags the market as a whole).
Notable Service Traits:
  • Compute: Triton uses Joyent's own SmartOS, an open-source Type 1 hypervisor based on Illumos (an OpenSolaris derivative). SmartOS can run Linux-targeted binaries without modification. Customers have a choice between OS virtualization in a SmartOS Container (with a Docker API) and KVM virtualization on a SmartOS Container for Linux and Windows guests. The maximum instance size is 32x256. Provisioning is exceptionally fast.
  • Storage: Local storage for containers. No block storage. Object storage service (Manta) with an integrated in-place batch compute service (Triton Converged Analytics). No data import from physical media. Media disposal does not meet NIST standard.
  • Network: Container-native SDN does not support complex network topologies. The Triton Container Name Service (CNS) provides container-native DNS and basic load-balancing capabilities. No private IPs. No private WAN. No private network connectivity.
  • Security: SOC 1 audit. Will support PCI and HIPAA with BAA. Granular RBAC (defined by Joyent's Aperture Policy Language). No DDoS mitigation. Limited MFA (HOTP-based).
  • Management: Monitoring and log management integrated with DTrace.
  • Developer Enablement: RESTful APIs with broader coverage than the portal, including API sets for managing the Triton cloud, managing Docker, and storage and analytics. The CLI manages both containers and VMs.
  • Other Services: None within the scope of this evaluation.

Microsoft Azure

Microsoft Azure combines a rich suite of IaaS and PaaS capabilities within an integrated solution portfolio. Azure was initially a PaaS launched in 2010. IaaS VMs (Azure Virtual Machines) were launched in June 2012 and became generally available in April 2013.
Microsoft Azure is suitable for a broad range of use cases that run well under virtualization. Although it is not as mature or feature-rich as AWS, it is more broadly capable than any of its other competitors, and it has its own distinct set of differentiated and innovative capabilities. Customers are likely to consider Microsoft Azure for hosting Microsoft applications such as SharePoint, as well as use cases where the application is Windows-based, is written in .NET, is developed by a team using Microsoft developer tools such as Visual Studio, or is dependent on Microsoft middleware. However, Microsoft is increasingly targeting applications that run on Linux; Linux workloads are the fastest-growing portion of the Azure portfolio, and comprise 40% of Azure VMs.
Azure has an attractive and well-designed portal that looks and feels like a unified whole. It is relatively easy to do simple things with the UI, but this comes at the cost of sometimes-frustrating difficulties when implementing more-complex architectures. Azure has introduced a new structure for documentation during the past year; however, documentation is still often difficult to navigate and insufficiently thorough. The most significant customer implementation difficulties are related to Virtual Networks, especially for customers trying to replicate enterprise LAN topologies.
Azure is a capable environment for digital business workloads and other cloud-native applications, including mobile back ends and IoT applications, aided by Microsoft's composition of multiple Azure elements into relevant suites. The Microsoft developer experience is enhanced by tight integration with Visual Studio. However, most customers choose to use the portal or CLI, and do not manage using a DevOps philosophy. Customers are highly likely to mix IaaS VMs with PaaS-level compute capabilities when using Azure for new applications. The Azure Batch service, Azure's analytics-related services and VM configurations designed for HPC have made Azure an attractive environment for batch computing. Some midmarket customers have also begun to "lift and shift" existing applications to Azure.
Azure is still in the process of enhancing its governance capabilities, which might not fully address the needs of organizations with many cloud users or large deployments, although some of these challenges can be addressed using external tools. Management can also be improved through the use of Microsoft Operations Management Suite. Microsoft has acquired Cloudyn, a cloud service expense management SaaS provider; it is not currently part of the Azure service, but is offered free to Azure customers.
Using Azure for mission-critical applications requires a thoughtful approach, because it can be challenging to deploy highly resilient architectures. Although Azure sometimes has multiple regions per extended metropolitan area, it has historically lacked multiple data center zones within its regions. In September 2017, Microsoft announced a preview of multiple data center zones in two regions, which should enable improved architectures over the long term. Azure fault domains are at the rack level within a data center. Previous Azure outages have affected multiple regions simultaneously. Although we believe Azure is broadly reliable, individual experiences can vary (most issues affect a small percentage of customers, rather than Azure as a whole), and Gartner's monitoring indicates lower reliability than that of leading competitors. Customers primarily express concern about the reliability of Virtual Networks, the API and Azure Resource Manager.
Microsoft Azure will appeal most to organizations with investments in Microsoft technologies that intend to do one or more of the following: Use Azure for cloud-native applications that are built on .NET, use Microsoft middleware or use Azure PaaS capabilities; host Windows applications (with attention paid to Azure's ability to meet availability, performance and security requirements); migrate a Microsoft-centric data center to the cloud over a multiyear period; augment Microsoft SaaS applications; or build a hybrid cloud environment with Azure Stack.
(See "In-Depth Assessment of Microsoft Azure IaaS" for a detailed technical evaluation. "In-Depth Assessment of Microsoft Azure Application PaaS" might also be of interest.)
Notable Service Traits:
  • Compute: Hyper-V-virtualized, fixed-size and nonresizable, but with a broad range of VM sizes. Maximum VM size of 128x2048. HPC options that include high-performance network interconnects. "SAP Hana on Azure Large Instances" bare-metal server option (requires special onboarding). Alternatively, use the PaaS VM-based compute service (Cloud Services Web and Worker roles) or App Service. Docker container service (Azure Container Service), with a choice of orchestrators (Docker Swarm, Kubernetes or Mesos-based DC/OS), and Azure Container Instances (beta).
  • Storage: Ephemeral local storage, VM-independent block storage and object storage (Blobs) with integrated CDN, file storage. Higher-performance and SSD-based storage (Premium Storage) is available as local storage for specific VM types, as well as Managed Disks block storage.
  • Network: SDN (Virtual Networks). Load-balancing capabilities include content routing. Global load-balancing service. Third-party connectivity via partner exchanges (ExpressRoute).
  • Security: Can meet most common audits and common compliance requirements, including SOC 1, SOC 2, ISO 27001, FedRAMP, PCI, HIPAA and GxP (pharmaceutical industry). Granular RBAC. Active Directory service and integration. Key management service.
  • Management: Control plane is continuously available. No maintenance windows. Monitoring. Scheduling service. Templates (Resource Manager). Run book service (Azure Automation). Significant marketplace.
  • Developer Enablement: Extensive API coverage. RESTful interfaces with a broad set of language bindings (primarily from Microsoft, with some community contributions). CLI covers a wide range of functionality, with strong support for Windows PowerShell. IDE integration, including developer services integrated with Visual Studio Team Services. (See the sketch after this list.)
  • Other Services: Database (SQL Database, Cosmos DB with multiple interfaces to a single data source), caching (Redis Cache), data warehouse, Hadoop (HDInsight), data ingest and event processing (Azure Functions, Data Factory, Event Hubs, Stream Analytics, IoT Hub), and many more.
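As an illustration of the breadth of programmatic access, the following is a minimal sketch of enumerating VMs with the Azure SDK for Python; it assumes the azure-mgmt-compute and azure-common packages and an existing Azure AD service principal, and the placeholder identifiers are illustrative only:
  from azure.common.credentials import ServicePrincipalCredentials
  from azure.mgmt.compute import ComputeManagementClient

  # Authenticate with an Azure AD service principal (placeholder values).
  credentials = ServicePrincipalCredentials(
      client_id='<app-id>', secret='<client-secret>', tenant='<tenant-id>')
  compute_client = ComputeManagementClient(credentials, '<subscription-id>')

  # Enumerate every VM in the subscription; the same data is visible in the portal.
  for vm in compute_client.virtual_machines.list_all():
      print(vm.name, vm.location, vm.hardware_profile.vm_size)
The same operations are also exposed through the Azure CLI and Windows PowerShell, which is why API and CLI coverage are weighted in the automation and DevOps enablement capability described later in this research.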

NTT Com Enterprise Cloud 2.0

NTT Communications launched Enterprise Cloud 2.0, an OpenStack-based offering, in March 2016. It is the successor offering to Enterprise Cloud 1.0, which is still available, but is not evaluated in this research.
NTT Enterprise Cloud 2.0 contains a basic set of core compute, storage and networking capabilities. It is similar in capability to Enterprise Cloud 1.0, though the underlying platform has changed and the options have expanded, including the addition of bare-metal servers provisioned in approximately 15 minutes. In theory, its hosting-oriented VMware or Hyper-V virtualization options are intended to appeal to Mode 1 I&O teams, and its OpenStack-with-KVM option is intended to appeal to Mode 2 developers. However, its limited IaaS capabilities constrain the potential viable use cases, and it is also hampered by poor documentation and a portal that is slow and difficult to navigate. Gartner's monitoring also indicates lower-than-expected reliability.
NTT Enterprise Cloud 2.0 has neither the depth of virtual automation features that can help a customer's Mode 1-oriented operations team improve efficiency and reduce cost, nor the developer-oriented capabilities that help Mode 2-oriented developers and DevOps teams become more agile and productive. It lacks the governance features for managing a large number of users, limiting its usefulness for application development use cases, although customers could consider using NTT Com's Cloud Management Platform (a SaaS-based multicloud CMP) to provide governance capabilities.
NTT Enterprise Cloud 2.0 may be appealing to Asia/Pacific customers who want to lift and shift workloads and are looking for an uncomplicated "rented virtualization" offering on which to host general business applications.
Notable Service Traits:
  • Compute: KVM, VMware or Hyper-V-virtualized fixed-size VMs with a maximum size of 32x128. Bare-metal servers with a maximum size of 36x512. Does not support the two most-recent versions of Windows Server on KVM.
  • Storage: Persistent local storage, VM-independent block storage, and file storage. No object storage. No import of data from physical media. Media disposal does not meet NIST standard.
  • Network: No self-service network security (requires managed services). Granular RBAC via API permissions. Direct Connect access to AWS.
  • Security: SOC 1 and ISO 27001 audits. Limited MFA (HOTP-based).
  • Management: Monitoring. No enterprise directory integration (although the separate ID Federation Services can be used for this purpose).
  • Developer Enablement: RESTful OpenStack API with incomplete coverage. API interfaces to trouble ticketing, pricing, and logging. CLI has limited functionality.
  • Other Services: Separate Cloud Foundry-based aPaaS.

Oracle Cloud Infrastructure

Oracle Cloud Infrastructure (OCI, formerly branded Oracle Bare Metal Cloud Services or "Gen 2 Cloud") was launched in November 2016. This assessment covers only this second-generation offering, and does not evaluate any elements of Oracle Cloud Infrastructure Classic (formerly branded as Oracle Cloud Compute or "Gen 1 Cloud").
Although OCI was launched solely with nonvirtualized physical servers, it has since introduced VMs as well. All compute, storage and network elements are fully software-defined and API-addressable. Physical servers can be provisioned in five minutes, and physical server configurations are large and highly performant, making them suitable for hosting the Oracle Database, scale-up Oracle applications such as the Oracle e-Business Suite, and other software that performs best on bare-metal servers with large amounts of RAM. However, OCI's smallest VM instance has one vCPU and 7GB of RAM, so workloads that are less demanding may not be cost-effective on OCI.
At its current stage of development, OCI contains a minimalistic feature set, but what is present is thoughtfully designed, well-architected, stable, performant and cost-competitive. The user interface is well designed and comprehensive. It places OCI in context with Oracle's many other cloud solutions, providing a sensible and intuitively navigable experience, and will benefit Oracle customers that consume a broader set of Oracle cloud services.
With this offering, Oracle demonstrates to its customers that it understands their need for general-purpose cloud services, as well as caters to their special needs to host Oracle-optimized workloads. The offering provides one-click deployment of many Oracle solutions to further cement this value proposition. The ability to obtain hosted Exadata appliances side-by-side with OCI infrastructure will be particularly beneficial for some customers.
OCI will appeal to three distinct groups of customers: Those seeking enterprise application solutions who will adopt OCI as part of a broader Oracle-managed application deal, those seeking cloud solutions for their Oracle Databases and associated enterprise applications, and those seeking bare-metal servers in a true software-defined cloud model for batch computing and big data needs.
In October 2017, Oracle will also begin offering the ability to deploy some of its PaaS solutions that are available on the Gen 1 infrastructure, such as the Java Cloud Service, on Gen 2. However, its lack of integrated PaaS capabilities limits its suitability for cloud-native applications, and its feature set is too limited for broad adoption for general business applications.
Notable Service Traits:
  • Compute: KVM-virtualized fixed-size VMs (maximum size 16x240) and bare-metal machines (maximum size of 36x512). Multiple data center zones. No autorestart.
  • Storage: Ephemeral local storage (NVMe with some compute instances). VM-independent block storage. Object storage.
  • Network: Sophisticated SDN. DNS is via Oracle's Dyn acquisition.
  • Security: Missing common compliance audits and documentation, although SOC 1, SOC 2 and ISO 27001 are in progress (Type/Stage 1 achieved). No MFA. Granular RBAC via "compartment" concept. Limited logging.
  • Management: Control plane is continuously available. No maintenance windows. No monitoring.
  • Developer Enablement: REST-based API with thorough coverage. Python-based CLI.
  • Other Services: Oracle DBaaS.

Rackspace Public Cloud

Rackspace began offering cloud IaaS in 2008, when it acquired Slicehost. However, in August 2012, it launched an OpenStack-based offering into general availability; this is now Rackspace's sole public cloud IaaS offering. This assessment does not cover any of Rackspace's managed services or private cloud IaaS offerings.
Rackspace has a solid set of core compute, storage and networking capabilities, presented in an easy-to-use and well-documented portal (Rackspace's own, not OpenStack Horizon). It also has some useful PaaS-layer services; some were obtained through acquisitions and are available to any customer, not just those using Rackspace Public Cloud.
Rackspace Public Cloud lacks the depth of integrated IaaS and PaaS capabilities that is typically desired by customers building new cloud-native applications. It may be suitable for batch computing and analytics use cases, and for customers that need bare-metal servers with an API. Rackspace does not have the governance features needed to manage large numbers of users, nor the enterprise integration features desired for "lift and shift" of existing business applications.
Rackspace Public Cloud will appeal to organizations that are looking for an OpenStack-based public cloud offering and that value ease of use for individual developers who are building simple applications. It may also appeal to Rackspace managed hosting customers that need some complementary cloud IaaS capabilities, but do not want to use a hyperscale cloud provider (such as AWS or Azure), or that need close proximity between their environments.
Notable Service Traits:
  • Compute: OpenStack-based, Citrix Xen-virtualized, fixed-size VMs. No VM autorestart. Maximum VM size of 32x120. Bare-metal servers (OnMetal Cloud Servers) with a maximum size of 20x128 or 12x512.
  • Storage: Local storage; most VM types use SSDs. VM-independent block storage with SSD option. Object storage (Cloud Files) with integrated CDN. No importing/exporting of data.
  • Network: SDN (Cloud Networks). Private connectivity requires the RackConnect service and a dedicated appliance.
  • Security: SOC 1, SOC 2, SOC 3 and ISO 27001 audits. MFA, including for API access. RBAC roles are per service and limited to full access, create access and read-only access. DDoS mitigation requires the use of Cloud Load Balancer.
  • Management: Monitoring. Templates (Cloud Orchestration). Logs are generated via the Cloud Feeds service.
  • Developer Enablement: Extensive API coverage. RESTful interfaces with a wide assortment of language bindings from Rackspace and the community. CLI covers a wide range of functionality.
  • Other Services: Database (Cloud Databases), Rackspace-owned separate PaaS services for NoSQL databases and caching (ObjectRocket and RedisToGo).

Skytap Cloud

Skytap launched its cloud IaaS offering, Skytap Cloud, in 2008. It has been focused on development and testing, lab, online learning, training and demo environments, though it has recently also begun to target Mode 1 production workloads. The offering is completely infrastructure-centric.
Skytap's user experience is centered on the notion of an environment, which acts as a group for all resources that are allocated to it. An environment can be managed as a single entity; all resources such as VMs within an environment can potentially be stopped, started, copied or templated in a single click. However, an entire environment becomes read-only while a VM in the environment experiences a state change, such as starting up, shutting down or suspending, which may frustrate some users. Skytap's UI is responsive, but sometimes awkward to use. That said, it has a notably excellent interface for configuring complex networking. It has thorough documentation.
Skytap offers unique collaboration capabilities for developers within its environments. Skytap also offers a differentiated set of governance and scheduling capabilities designed to address cost control for nonproduction resources. For instance, training and demo environments rarely need to run during nights and weekends. Skytap administrators can set up a schedule that determines when an environment and its resources are run, shut down, suspended and the like.
Skytap is the only offering in this Critical Capabilities evaluation that supports AIX; enterprises can lift and shift existing AIX-based applications to Skytap without rearchitecting them. The same set of tools used to manage resources at an environment level can be used with AIX workloads.
Skytap will appeal to organizations that need to do agile development on legacy workloads, especially if those workloads will continue to run in their original on-premises production environment; many such customers want their applications to remain in their own data centers, but seek more flexible development/test solutions and are willing to use public cloud IaaS for such environments. Skytap may also appeal to those who want to lift and shift legacy workloads into cloud IaaS without making any changes.
Notable Service Traits:
  • Compute: VMware-virtualized, fixed-size and resizable, with a broad range of VM sizes. Maximum VM size of 12x262. Supports VMs as Docker container hosts with the ability to manage container instances.
  • Storage: Block storage is not VM-independent. No object storage.
  • Network: Highly configurable SDN that can accurately simulate most enterprise networking configurations.
  • Security: SOC 1 and SOC 2 audits. RBAC across multiple accounts.
  • Management: No maintenance windows. Auditing and usage logs.
  • Developer Enablement: Extensive API coverage, with RESTful interfaces. Limited, infrequently updated, vendor-developed API bindings. Offers a CLI.
  • Other Services: No other services.

vCloud Air

OVH, a France-based hosting provider, acquired VMware's vCloud Air offering in May 2017. vCloud Air (previously named vCloud Hybrid Service) became generally available in September 2013. vCloud Air powered by OVH is available in two tenancy models: Virtual Private Cloud (multitenant compute and storage) and Dedicated Cloud (single-tenant compute and multitenant storage). Virtual Private Cloud is offered in two variants — a paid-by-the-month shared resource pool, and a pay-as-you-go per-VM service. vCloud Air most closely resembles a vCloud Datacenter Service.
OVH has not modified the offering since the acquisition. vCloud Air is an infrastructure-focused offering, with few capabilities for developer enablement or DevOps-style management. VMware eliminated plans to roll out additional developer services, such as the vCloud Air SQL service that was previously in beta. VMware's hybrid cloud strategy emphasizes workload portability between vCloud Air and on-premises VMware-virtualized infrastructure. Consequently, vCloud Air is best used for application development and general business application use cases, where the highest-priority requirement is the I&O team's desire to use the same VMware-based infrastructure constructs in the cloud and on-premises.
The vCloud Air graphical user interface (GUI) is actually a combination of two loosely integrated UIs: the vCloud Air UI and the vCloud Director (vCD) UI. Users are initially presented with the vCloud Air UI, through which they can accomplish vCloud Air-specific tasks and the most commonly executed vSphere-related tasks. However, the vCloud Air UI does not give complete coverage of the control plane, linking to the more complex but complete vCloud Director UI when necessary. Switching between the two UIs can lead to a disjointed user experience, although veteran vCD users may not care, because they are likely to spend most of their time in the familiar vCD UI. There is comprehensive documentation.
vCloud Air will appeal to IT organizations that desire a cloud IaaS offering that allows them to continue to use their existing investments in VMware-based IT operations skills and management tools; that need to either move existing VMware-virtualized workloads to the cloud, or be able to move cloud-developed applications back on-premises; and that are focused on traditional I&O needs, not development enablement or IT transformation.
(See "In-Depth Assessment of VMware vCloud Air" for a detailed technical evaluation. Although this is an older assessment, relatively little has changed.)
Notable Service Traits:
  • Compute: VMware-virtualized, nonfixed (flexible) VM sizes. Maximum VM size of 16x480.
  • Storage: Local storage. VM-independent block storage, in multiple tiers, with expandable volumes. Object storage powered by Google Cloud Storage.
  • Network: SDN (NSX-based). No inter-data-center private WAN. Third-party connectivity via partner exchanges. No DNS service.
  • Security: SOC 1, SOC 2, SOC 3 and ISO 27001 audits. Will sign HIPAA BAA.
  • Management: Monitoring. Templates (as vApps). Active Directory and LDAP identity federation with SAML support (via VMware Identity Manager). Service catalog.
  • Developer Enablement: Extensive API coverage. RESTful interfaces with only a few language bindings (provided by the community, but VMware support is available). CLI covers a wide range of functionality.
  • Other Services: Disaster recovery.

Virtustream Enterprise Cloud

Virtustream entered the cloud IaaS market in 2010. It was acquired by EMC in 2015, and EMC's managed services and cloud storage business was merged into Virtustream prior to EMC's subsequent acquisition by Dell Technologies in September 2016. Virtustream is now an independent entity within Dell Technologies. xStream is Virtustream's common hypervisor-neutral platform for its public and private cloud IaaS offerings. Virtustream Enterprise Cloud (VEC) is Virtustream's primary cloud IaaS offering.
Virtustream has significant capabilities in infrastructure resiliency, security and regulatory compliance, although not all such capabilities are self-service. Its Micro VM technology enables it to offer policy-based, service-level management that allows customers to pay for resources consumed rather than resources allocated. Virtustream also divides its multitenant infrastructure into physically separate hardware pools based on application type and security requirements; customer logical environments span multiple pools.
Virtustream's UI is complex and not very intuitive, and there are separate portals for account management functions (such as support) and IaaS self-service. The documentation comes in the form of a manual, rather than being integrated into the portal, and it is neither comprehensive nor sufficiently clear. Not all capabilities can be provisioned or managed via the portal or Virtustream's proprietary API.
Virtustream's capabilities are focused on bringing agility and efficiency to traditional IT, rather than developer enablement or DevOps-style management. In contrast to most other cloud IaaS providers, whose services are optimized for scale-out applications, Virtustream specializes in scale-up, mission-critical enterprise applications. Virtustream has had a strong focus on SAP applications, where it has significant specialized automation capabilities. It is also suitable for application development use cases related to SAP and similar applications. Virtustream is expanding to other verticals; for instance, it is beginning to offer Epic electronic healthcare record (EHR) solutions on its cloud.
Virtustream will appeal to I&O organizations that want to migrate mission-critical traditional enterprise applications into a cloud IaaS environment. Although managed services are not a requirement, most organizations will need Virtustream's assistance in transitioning applications onto the xStream platform.
Notable Service Traits:
  • Compute: VMware- or KVM-virtualized, multitenant or single-tenant VMs. Maximum VM size of 8x72 via self-service (up to 128x4TB on request). Large-scale provisioning is slow.
  • Storage: Local storage. VM-independent block storage with expandable, multimountable volumes (but VM independence is not configurable via the portal). Optional encryption. Object storage based on Dell EMC Atmos and ECS, but provisioned and managed in a separate web application. Snapshots cannot be used as images.
  • Network: No back-end load balancing. No DNS service.
  • Security: SOC 1, SOC 2 and SOC 3 audits. FedRAMP. Will sign HIPAA BAA. Granular RBAC. Compliance (via Viewtrust).
  • Management: Monitoring. Service catalog with approval workflows. Scheduling service.
  • Developer Enablement: Extensive API coverage. RESTful interfaces, but no language bindings provided. No CLI.
  • Other Services: None within the scope of this evaluation.

Context

In the context of this Critical Capabilities research, public cloud IaaS combines self-service on-demand elastic infrastructure resources with cloud software infrastructure services and management capabilities (see "Technology Insight for Cloud Infrastructure as a Service" for more definitional information). Public cloud IaaS is a fully mainstream technology that is adopted by businesses of all sizes, including large enterprises that have demanding requirements for availability, performance, security and regulatory compliance. It is on the Slope of Enlightenment in Gartner's "Hype Cycle for Cloud Computing, 2017." Gartner forecasts that the worldwide market for public cloud IaaS will be more than $34 billion in 2017, and growing at a compound annual growth rate (CAGR) of 37% (see "Forecast: Public Cloud Services, Worldwide, 2015-2021, 2Q17 Update" and "Forecast Analysis: IT Services, Worldwide, 2Q17 Update" for details); we estimate that it accounts for more than 15% of virtualized infrastructure.
There are two major buying centers for public cloud IaaS, representing a bimodal adoption pattern (see "Best Practices for Planning a Cloud Infrastructure-as-a-Service Strategy — Bimodal IT, Not Hybrid Infrastructure"). In the early years of this market, most public cloud IaaS was bought by Mode-2-oriented development organizations using a business budget, and most public cloud IaaS spending still comes from Mode 2 projects, especially new digital business applications. However, increasingly, public cloud IaaS is purchased by I&O organizations using the central IT budget, and is used to replace or supplement traditional internal data centers for existing or non-cloud-native workloads. Large-scale migration projects have become commonplace, especially in midsize businesses. In most cases, I&O organizations have Mode 1 priorities.
Both buying centers value a similar set of capabilities for core IaaS resources — flexible software-defined capabilities for compute, storage and networking, with security integrated throughout. However, Mode 2 projects, especially digital business initiatives, are more likely to exploit the capabilities of new cloud-native application architectures, including new patterns, such as microservices, and related technologies, such as OS containers. Digital business applications are more likely to have DevOps management and an agile life cycle. Developers highly prize self-service cloud provider capabilities that enable them to accelerate development and the entire application life cycle. I&O organizations with Mode 1 workloads, however, need capabilities that help with migrating and optimizing these workloads to deliver lower costs, while improving operational reliability and enhancing a traditional ITIL service life cycle with greater automation.
IaaS and PaaS are part of a continuum of capabilities that are increasingly delivered as a unified whole — integrated IaaS and PaaS. An increasing number of providers offer both IaaS and PaaS, but many do not meet the definition of an integrated IaaS+PaaS offering (see "Technology Insight for Integrated IaaS and PaaS" for details). Customers do not necessarily expect IaaS to be integrated with an aPaaS, but rather, for cloud software infrastructure services to be available — elements such as DBaaS. The need to offer not just infrastructure resources, but also fully automated application infrastructure elements, as well as deep and extensive management capabilities, significantly increases the engineering resources required to compete successfully in this market.
In the early years of this market, there was a clear division between cloud IaaS providers that were focused on cloud-native and other digital business use cases, and cloud IaaS providers that were focused on traditional workloads. The former providers typically built their own global-class technology, while the latter providers typically relied on VMware and other traditional enterprise-class data center technology vendors. However, the latter providers are now building, or have built in the past two years, a new generation of cloud IaaS platforms, using proprietary or open-source-based technology, such as OpenStack. Be cautious when adopting these new platforms, because these providers may have built and abandoned multiple previous generations of platforms and may face significant engineering challenges.
There is now a clear division in the market between technology leaders with IaaS+PaaS, and all other cloud IaaS providers. The technology leaders not only have greater capabilities, but they are releasing capabilities at a much faster rate — typically hundreds of new features a year — and are likely to continue to extend their lead over their competitors. Importantly, these technology leaders have superior capabilities, not just for digital business applications, but they also deliver superior environments for traditional, non-cloud-native workloads. Other cloud IaaS providers may occupy specific niches, but are now rarely competitive for a broad range of use cases. Gartner recommends that customers choose one or two IaaS+PaaS providers for strategic adoption, then consider other cloud IaaS providers for use cases that might not be a good fit for their strategic providers. Customers most commonly choose AWS and Microsoft Azure as their strategic providers, and choose the placement of each application based on the use case and requirements.
The most successful cloud IaaS providers, and especially IaaS+PaaS providers, offer highly differentiated capabilities. This is true even when the services used are limited only to so-called "commodity" infrastructure resources, such as compute, storage and network elements; configuration flexibility and options differ significantly between providers. Elements evaluated as "expected" in our list of Critical Capabilities can be considered "commodity," but, in truth, so many providers are missing such capabilities that they cannot truly be considered common enough to constitute a commodity. Customers should consider cloud portability on an application-by-application basis (see "Addressing Lock-In Concerns With Public Cloud Infrastructure as a Service" for guidance). Gartner recommends against the approach of trying to evaluate and source cloud IaaS providers as if they were easily interchangeable.
Although this research covers public cloud IaaS, many of the providers evaluated in this Critical Capabilities research offer hosted private cloud or outsourced private cloud versions of their public cloud IaaS platforms. These single-tenant offerings do not typically have all the capabilities of the provider's public cloud IaaS offerings, but this Critical Capabilities evaluation may nevertheless be helpful in developing a shortlist of viable private cloud IaaS providers.
This research is focused solely on technical capabilities. For a broader assessment that includes evaluation of contract terms and service-level agreements (SLAs), price/performance benchmarking, and operational track record, see the "Magic Quadrant for Cloud Infrastructure as a Service, Worldwide."

Product/Service Class Definition

Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using internet technologies. Cloud IaaS is a type of cloud computing service. It parallels the infrastructure and data center initiatives of IT. Cloud compute IaaS constitutes the largest segment of this market (the broader IaaS market also includes cloud storage and cloud printing). In this document, we use the term "cloud IaaS" synonymously with "cloud compute IaaS."
Cloud IaaS is a standardized, highly automated offering where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near real time and are metered by use. Self-service interfaces are exposed directly to the customer, including a web-based UI and an API. In public cloud IaaS, the resources are multitenant and hosted in the service provider's data centers.
This Critical Capabilities evaluation includes not only the cloud IaaS resources themselves, but also the automated management of those resources, management tools delivered as services, and cloud software infrastructure services. The last category includes middleware services and DBaaS, up to and including some PaaS capabilities.
Public cloud IaaS is typically used as an alternative to infrastructure that is running within a customer's own data center, or as an alternative to hosting or data center outsourcing services. Many buyers are attracted by the self-service capabilities, which require no interaction with the provider or other human intervention. Also, because the resources are metered by the hour and can usually be bought without any kind of contractual commitment, public cloud IaaS is often perceived as an inexpensive alternative to traditional IT infrastructure. Moreover, public cloud IaaS is an enabler of "infrastructure as code," allowing infrastructure capabilities to be abstracted and controlled via API.
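As a minimal sketch of what "infrastructure as code" looks like in practice, the following assumes a hypothetical provider REST endpoint and API token (neither belongs to any specific provider evaluated here); the desired VM is expressed as data and submitted programmatically rather than through a portal:
  import requests

  API = 'https://api.example-cloud.test/v1'           # hypothetical endpoint
  HEADERS = {'Authorization': 'Bearer <api-token>'}    # token tied to an API key

  # Describe the desired instance as data, then submit it via the API.
  vm_spec = {
      'name': 'web-01',
      'image': 'ubuntu-16.04',
      'vcpus': 2,
      'ram_gb': 8,
      'network': 'frontend-subnet',
  }
  resp = requests.post(API + '/instances', json=vm_spec, headers=HEADERS)
  resp.raise_for_status()
  print('provisioned instance:', resp.json()['id'])
Because the declaration is just data and code, it can be version-controlled and replayed, which is the property that distinguishes API-driven infrastructure from manual portal workflows.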
No private cloud IaaS offerings are evaluated, whether industrialized or customized.

Critical Capabilities Definition

Public cloud IaaS needs to be evaluated for its technical suitability to the needs of particular workloads, as well as the organization's governance needs. This report examines eight broad critical capability areas that IT organizations should consider when evaluating public cloud IaaS offerings.
It is important to note that these are broad categories, not granular capabilities. They are inclusive of a range of features, and we do not provide a comprehensive list of these features. Because each of the categories includes a large number of features, the scoring in each category is directional. In general, a score of 3 indicates that a provider is able to fulfill the most critical features in that category. However, it is possible that a provider may be missing some important features in that category, yet have other strengths that increase its score in that category. You will need to conduct an in-depth evaluation of your shortlisted providers to determine whether they meet your specific needs.
This Critical Capabilities research is not intended to be a granular evaluation of provider capabilities. If you are seeking an in-depth technical evaluation of providers, you should consult Gartner's "Evaluation Criteria for Cloud Infrastructure as a Service" and the associated In-Depth Assessments of individual providers against those criteria. Individual service traits within these critical capabilities are closely associated with the Evaluation Criteria; however, the Critical Capabilities research groups those traits into categories of functionality.
If you are looking for an evaluation of providers in the broader context of the entire cloud IaaS market, including private cloud IaaS services, see "Magic Quadrant for Cloud Infrastructure as a Service, Worldwide." Keep in mind, however, that a Magic Quadrant is not a product evaluation. It considers many business factors as well, and it looks at providers' recent execution and vision for the future. Furthermore, the Product/Service rating of a provider on the Magic Quadrant can be significantly different from its rating in this Critical Capabilities evaluation, since this report takes into account only one specific public cloud IaaS offering for each provider, whereas the Magic Quadrant takes into account providers' entire cloud IaaS portfolios.
Note that this Critical Capabilities report considers only those features that are available strictly within the context of the provider's public cloud IaaS offering. Importantly, "hybrid" capabilities that require the use of dedicated servers (that cannot be directly provisioned via the GUI and API for public cloud IaaS) are not counted. All capabilities must be provided as part of the standard, industrialized, fully automated public cloud IaaS offering. Capabilities that require hosted equipment, hosted software, leases or monthly rentals, managed services, provider-customized implementations, partner-provided services, or other services that are not a fully integrated element of the offering do not count for the purpose of this evaluation. A provider's cloud IaaS offering may incorporate PaaS-level capabilities, as long as such capabilities are fully integrated, with a single self-service portal and catalog, unified billing, shared identity and access management, and integrated low-latency network and security contexts.
This market is changing extremely quickly. Some providers release features as often as several times per week, and many providers release features at least once a quarter. Providers do occasionally remove existing capabilities as well. When evaluating service providers, ensure that you understand the current state of each provider's offering. The quantitative assessment is current as of August 2017 (more recent than the Magic Quadrant) for features that are in general availability; the description of each vendor is accurate as of the time of publication.
Because the market is evolving so quickly, the baseline capabilities expected from providers are increasing each year. Provider scores may change significantly with each new iteration of this Critical Capabilities research, since the expected minimum capabilities increase, and the capabilities of each provider may advance significantly.
In many cases, a provider's scores may have decreased since the 2016 iteration of this research, despite improvements in the provider's offering. This is the result of the following:
  • Customer expectations are rising; therefore, the capabilities required have increased. This has decreased the number of points assigned to capabilities that are expected by most customers.
  • More options are available in the market; therefore, the breadth of capabilities has increased. This has decreased the number of points assigned to any one capability.
  • We reweight the capabilities each year to reflect the relative degree of importance of each capability, and the differentiation that it represents.
Also note that beginning in 2016, and continuing with this 2017 research, we have strictly excluded all human-powered managed services. In the 2015 evaluation and earlier, the providers could receive credit for some human-augmented services.
The critical capability categories are as follows:
  • Compute resilience: VM availability
  • Architecture flexibility: Compute, storage and networking options
  • Security and compliance: Security controls, risk management and governance
  • User management: Governance of large numbers of users
  • Enterprise integration: Network integration, data migration and workload portability
  • Automation and DevOps enablement: IT operations management (ITOM) and developer enablement
  • Scaling: Scalability of the service, scaling applications and workloads
  • Big data enablement: Large-scale data processing and batch computing
These categories are described in detail below.

Compute Resilience

This category is focused on features that are important for VM availability.
Capabilities in this category include:
  • Autorestart (rapid detection of physical host failure and automatic restart of the VMs — on another host, if necessary)
  • VM-preserving host maintenance (the ability to perform maintenance on the host server, such as host OS and kernel updates, without rebooting guest VMs)
  • VM-preserving data center maintenance (the ability to perform data center and hardware maintenance without impacting guest VMs, usually implemented via live migration of VMs)
  • Affinity and anti-affinity in VM placements
  • Customers can make storage and network configuration changes without needing to reboot the affected VMs
While the availability of the control plane and other resource elements are considered here, the emphasis is strongly on VM availability, which is important for workloads that assume infrastructure resilience. Most non-cloud-native applications are architected with the assumption of compute resilience, and most enterprise virtualization environments take advantage of the compute resilience features of the hypervisor.
To meet basic expectations in this category, a provider should offer all of the following key features:
  • Autorestart
  • VM-preserving host maintenance
  • VM-preserving data center maintenance
  • VM restart flexibility (when host maintenance occurs, customers must have the option of choosing restart windows on a per-instance basis)
In addition, to meet basic expectations in this category, the provider should also offer the following hot-swap virtual hardware features:
  • Customers can change a VM's size without rebooting it (if the OS supports it).
  • Customers can alter storage volumes without rebooting the VM.
  • Customers can change network configurations without rebooting the VM.

Architecture Flexibility

This category encompasses features that provide a customer with a breadth of resource types and architectures.
For compute resources, flexibility means a broad range of VM sizes, along with other options such as bare-metal (nonvirtualized) servers, VMs on single-tenant hosts, multiple hypervisor choices and container-based capabilities (such as a Docker container service). Ideally, a provider should allow flexible (nonfixed) VM sizes — VMs that can have an arbitrary combination of the number of vCPUs and the amount of RAM. If the provider offers specific (fixed) VM sizes instead, the broadest possible number of combinations, representing varying ratios of vCPUs to RAM, is desirable.
For storage resources, flexibility means different types of storage and multiple performance tiers.
For network resources, this means the ability to create complex network topologies, as well as support for useful features such as static IP addresses and the ability to have multiple virtual network interfaces per VM.
To meet basic expectations in this category, a provider should offer all the following:
  • Flexible VM sizes, or a full range of vCPU-to-RAM ratios for fixed-size VMs
  • Support for the two most-recent generations of at least one free Linux, at least one paid enterprise Linux and Windows
  • VM-independent block storage, including an option for SSD-based storage
  • Self-service creation of complex hierarchical network topologies
  • A Docker-based container service

Security and Compliance

This category encompasses features that are important to security, compliance, risk management and governance.
Capabilities in this category include specific security measures, such as network ACLs, IDS/IPS, MFA and encryption. This category also includes aspects such as the availability of audits, logging and reporting, and the ability to use the service if you have regulatory compliance needs, such as those of the Payment Card Industry Data Security Standard (PCI DSS), the Federal Information Security Management Act (FISMA) and HIPAA BAA.
This category encompasses both provider-supplied capabilities that are inherent to the service and are the provider's responsibility to manage, and provider-supplied security controls that are the customer's responsibility to use appropriately. Security is a shared responsibility; customers need to enable and configure appropriate controls.
Some providers offer managed security services that cannot be consumed in an on-demand, self-service fashion. Such capabilities are not included in the scoring, but a provider may be able to provide a higher degree of security if such capabilities are used.
Because security is a major concern for most customers using public cloud IaaS, the capabilities necessary to meet basic expectations in this category are extensive and include all the following:
  • The service integrates a self-service stateful firewall.
  • DDoS attack mitigation is provided for all customers.
  • Traffic between the provider's cloud data centers is sent over a private WAN, not the internet.
  • MFA is provided as an option.
  • Customer changes and provisioning actions are logged, and the logs are retained for 90 days.
  • Administrative credentials for VMs are issued in a secure fashion.
  • The provider's personnel are subject to background checks, cannot log into customer compute instances unless the customer has purchased managed services from the provider, and have all administrative access logged.
  • Storage services include an option for encryption (or all storage is encrypted by default).
  • Previously used storage is overwritten before it is reallocated to another customer.
  • Physical media is sanitized before disposal, in accordance with the NIST SP 800-88 standard.
  • ISO 27001 (see Note 1) and SOC 1, 2 and 3 (see Note 2) or equivalent audits are available for customers to review.
  • The customer can achieve PCI DSS compliance on the platform, including holding cardholder data.
  • The customer can achieve HIPAA compliance on the platform, and the provider will sign a HIPAA BAA.

User Management

This category encompasses features that are necessary to provision and govern multiple users of the service.
User management and governance capabilities are particularly important if you have large development, engineering or research teams. This covers aspects such as RBAC, quotas, leases and integration with enterprise directory services.
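The sketch below, using entirely hypothetical field names rather than any specific provider's schema, illustrates the kind of policy document through which RBAC, quotas and leases are typically expressed when governing a large team:
  # Hypothetical RBAC policy: self-service rights for a development team,
  # scoped to one project, with quota and lease limits for governance.
  policy = {
      'role': 'dev-team-operator',
      'members': ['group:app-dev@example.com'],    # group mapped from the enterprise directory via SAML
      'allow': ['compute:Create', 'compute:Start', 'compute:Stop', 'storage:Read'],
      'deny': ['network:Delete'],
      'scope': 'project/ecommerce-dev',
      'quota': {'max_vcpus': 64, 'max_ram_gb': 256},
      'lease_days': 30,                            # resources are reclaimed after the lease expires
  }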
To meet basic expectations in this category, a provider should support all the following:
  • Multiple users per account and multiple API keys per account
  • Granular RBAC for both users and API keys
  • Integration with Active Directory and SAML-based single sign-on
  • Metadata tagging for compute and storage elements
  • Billing alerts

Enterprise Integration

This category encompasses features that are needed to operate in a hybrid IT environment.
These capabilities include secure extension of the organization's WAN, data migration features and workload portability features, along with the ability of an enterprise to license needed software, including the provider's software marketplace capabilities.
To meet basic expectations in this category, a provider should offer all the following features:
  • Import of customer-built VM images
  • Customization of VM images offered by the provider
  • Snapshot VMs and use of snapshots as images
  • Import and export of data on physical media
  • Allows customers to directly extend their enterprise WAN to the cloud infrastructure
  • Uses the customer's choice of private IP addresses from RFC 1918 address allocations
  • Open software marketplace of third-party and open-source software that can be deployed in a single click and billed through the provider

Automation and DevOps Enablement

This category encompasses ITOM features, particularly those necessary to manage infrastructure in a DevOps fashion. It also includes software infrastructure capabilities that enhance developer productivity.
Features in this category include monitoring, service catalogs, templates, configuration management, application life cycle management (ALM) and metadata tagging. Other automated capabilities, such as DBaaS, are also considered here.
Because one of the key advantages of cloud IaaS is "infrastructure as code" — the ability to have programmatic access to infrastructure — API capabilities are considered in all the categories of capabilities. However, this category also includes the quality of API access, including continuous availability of the control plane and API, responsiveness to a large number and high rate of API requests, completeness of API coverage, breadth of language bindings, and IDE integration. The completeness of CLI coverage and integration with Windows PowerShell are also considered.
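As a hedged sketch of how these capabilities combine, the following declares a small application stack as an orchestration template and submits it through a hypothetical provider API (the endpoint and template schema are illustrative, not any specific provider's):
  import requests

  # Hypothetical orchestration template: network, VM and DBaaS declared together,
  # so the same stack can be recreated from code in any environment.
  template = {
      'resources': [
          {'type': 'network', 'name': 'app-net', 'cidr': '10.0.0.0/24'},
          {'type': 'instance', 'name': 'app-01', 'image': 'centos-7',
           'size': '4x16', 'network': 'app-net'},
          {'type': 'dbaas', 'name': 'app-db', 'engine': 'mysql', 'tier': 'standard'},
      ]
  }
  resp = requests.post('https://api.example-cloud.test/v1/stacks',
                       json={'name': 'app-stack', 'template': template},
                       headers={'Authorization': 'Bearer <api-token>'})
  resp.raise_for_status()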
To meet basic expectations in this category, a provider should offer all the following:
  • Access to all functionality via the API (customers can do anything via the API that they can do via the portal)
  • No maintenance windows that result in the control plane or API being unavailable
  • Self-service monitoring, including the ability to generate alerts
  • Relational DBaaS for at least one common relational database platform
  • NoSQL DBaaS
  • In-memory caching as a service that supports at least one common caching protocol
  • Orchestration templates

Scaling

This category encompasses capabilities related to scaling applications and workloads.
Features in this category include local and global load balancing, integrated CDN, autoscaling and the resizing of existing resources, such as VMs and storage volumes. Speed of provisioning is also very important.
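Most capable providers implement autoscaling natively; the sketch below, written against a hypothetical metrics and scaling API, shows the decision logic such a feature automates (thresholds, group names and endpoints are illustrative):
  import time
  import requests

  API = 'https://api.example-cloud.test/v1'           # hypothetical endpoint
  HEADERS = {'Authorization': 'Bearer <api-token>'}

  # Poll average CPU for an instance group and scale out or in within bounds.
  while True:
      cpu = requests.get(API + '/metrics/web-group/cpu_avg', headers=HEADERS).json()['value']
      group = requests.get(API + '/groups/web-group', headers=HEADERS).json()
      count = group['instance_count']
      if cpu > 70 and count < 20:
          requests.post(API + '/groups/web-group/scale', json={'count': count + 2}, headers=HEADERS)
      elif cpu < 20 and count > 2:
          requests.post(API + '/groups/web-group/scale', json={'count': count - 1}, headers=HEADERS)
      time.sleep(60)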
To meet basic expectations in this category, a provider should offer all the following:
  • Provisioning of a single Linux VM in three minutes or less
  • Provisioning of 20 Linux VMs in five minutes or less
  • Provisioning of 1,000 Linux VMs in two hours or less
  • Resizing a VM without reprovisioning it
  • Autoscaling based on a schedule or monitoring trigger
  • Front-end and back-end load balancing
  • Global load balancing with latency-based request routing
  • Sufficient available capacity to permit customers to burst provision up to 10 times their normal baseline of consumption, in real time and without prior notice

Big Data Enablement

This category encompasses features that are typically desired for large-scale data processing.
Capabilities in this category include access to large VM sizes, large quantities of capacity on demand and GPUs. This category also covers capabilities such as object storage and services for Hadoop and Spark, unstructured data stores such as NoSQL databases, data ingest and data flow. "Big data" is used as a convenient catchall label for this criterion, rather than literally encompassing big-data-specific capabilities.
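As a hedged sketch of the typical workflow, the following stages input data in object storage and then requests a short-lived Hadoop cluster through a hypothetical provider API (endpoints, sizes and bucket names are illustrative):
  import requests

  API = 'https://api.example-cloud.test/v1'           # hypothetical endpoint
  HEADERS = {'Authorization': 'Bearer <api-token>'}

  # Stage the input data set in object storage.
  with open('events-2017-08.csv', 'rb') as f:
      requests.put(API + '/objects/analytics-input/events-2017-08.csv',
                   data=f, headers=HEADERS)

  # Request a transient Hadoop cluster that reads from, and writes back to, object storage.
  cluster_spec = {
      'name': 'nightly-etl',
      'workers': 50,                 # burst capacity, released when the job completes
      'worker_size': '16x64',        # vCPUs x GB of RAM, per the sizing notation used in this report
      'input': 'object://analytics-input/',
      'output': 'object://analytics-output/',
  }
  requests.post(API + '/hadoop/clusters', json=cluster_spec, headers=HEADERS)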
To meet basic expectations in this category, a provider should offer all the following:
  • VMs with up to 32 vCPUs and 256GB of RAM
  • Storage volume sizes of at least 2TB
  • Sufficient available capacity to provision up to 1,000 VMs, in real time and without prior notice
  • Object-based cloud storage
  • Hadoop as a service, or one-click provisioning of a curated Hadoop solution
  • Data warehouse as a service

Use Cases

All use cases have been constructed with the needs of enterprises and midsize businesses in mind — organizations that have existing IT environments, infrastructure and applications, along with security and compliance requirements — even though some of these use cases may involve "greenfield" digital business and other cloud-native applications. Other types of customers may have different needs and criteria.

Application Development

This use case is focused on the needs of large teams of developers that are building new applications.
Many organizations begin their use of public cloud IaaS with this use case, often with a single developer in ad hoc adoption. As usage grows from an individual developer to the entire development organization, however, so do the needed capabilities. In this use case, we consider an application development environment for a large team of developers that must have appropriate governance, security and interoperability with the organization's internal IT infrastructure — one that should enhance that team's productivity with self-service, automated capabilities.
This use case is also applicable to similar needs from other types of technical users, such as researchers, scientists and engineers. However, depending on the technical requirements, the batch computing use case may be more relevant for users of this type.

Batch Computing

This use case includes HPC, data analytics and other one-time (but potentially recurring), short-term, large-scale, scale-out workloads.
Batch computing is particularly well-suited to, and may be an exceptionally cost-effective use of, cloud IaaS. Big data enablement capabilities receive the majority of the weighting. Since many such workloads depend on a high degree of automation, consideration is given to those aspects as well. Enterprise integration also has some importance, because such workloads often use data that originates on-premises.

Cloud-Native Applications

This use case includes applications at any scale, which have been written with the strengths and weaknesses of public cloud IaaS in mind.
Cloud-native applications assume that resilience must reside in the application and not in the infrastructure (low "compute resilience" weighting), that the application can run well in a variety of underlying infrastructure configurations (low "architecture flexibility" weighting), that the customer's IT organization will attend to security concerns (low "security and compliance" weighting), and that there are only minimal integrations with existing on-premises infrastructure and applications (low "enterprise integration" weighting). Automation, API capabilities and scale-out capabilities are, however, extremely important. Because many such applications have big data aspects, the big data enablement capability also receives a high weighting in this use case.

General Business Applications

This use case includes all applications that were not designed with the cloud in mind, but that can run comfortably in virtualized environments.
In this use case, which can include mission-critical production, applications are designed with the expectation that the infrastructure is resilient and offers consistently good performance. An organization intending to move existing enterprise applications into the cloud typically places a strong emphasis on security, and the public cloud IaaS needs to interoperate smoothly with the existing internal IT infrastructure. To gain more benefit from moving to the cloud, the organization needs the service to deliver additional value-added automation, but the organization is unlikely to make much use of the API, except possibly via third-party tools.
This use case is suitable for all migrations (including lift-and-shift migrations, as well as those that include refactoring), as well as new applications that do not specifically exploit cloud capabilities. Organizations that intend to pursue DevOps transformation and application rewrites should consider the cloud-native applications use case instead.

Internet of Things

This use case focuses on capabilities specific to IoT-related workloads, which typically are cloud-native, large-scale, scale-out, "big data" and mobility-oriented workloads.
Many IoT applications are mission-critical. Some IoT applications may be primarily machine-to-machine, such as applications that handle data from sensors. However, many IoT applications have large numbers of end users. The ability to scale cost-effectively in response to unpredictable demand, and to handle unpredictable volumes of data, is very important. Furthermore, IoT applications frequently contain sensitive data, including personal health information requiring regulatory compliance. Most IoT applications also need to integrate with on-premises applications.
Because this evaluation focuses on cloud IaaS, we only considered the infrastructure capabilities necessary to support IoT applications. We did not score the IoT platform capabilities of any of the service providers. Organizations with IoT needs should factor such capabilities into vendor selection, since non-IaaS capabilities, such as device management, are important in many such use cases.

Vendors Added and Dropped

Added

  • Alibaba Cloud
  • Interoute Virtual Data Centre
  • Joyent
  • Oracle Cloud Infrastructure
  • Skytap

Dropped

No vendors were dropped. However, the following changes have been made:
  • Fujitsu. We now evaluate Fujitsu Cloud Service K5 IaaS, a newly introduced offering. In 2016, we evaluated Fujitsu Cloud IaaS Trusted Public S5.
  • IBM. IBM SoftLayer offerings are transitioning to a new brand, IBM Bluemix Infrastructure. Our evaluation now takes into account technical integration between these offerings and the rest of IBM Bluemix.
  • NTT Communications. We now evaluate NTT Communications Enterprise Cloud 2.0, a newly introduced offering. In 2016, we evaluated NTT Communications Enterprise Cloud 1.0. Although the branding shows merely a version change, 2.0 is actually a new platform.
  • vCloud Air. In May 2017, VMware sold vCloud Air to OVH. The offering was evaluated while vCloud Air was owned by VMware.

Inclusion Criteria

The vendor inclusion criteria for this report are identical to those for "Magic Quadrant for Cloud Infrastructure as a Service, Worldwide."
All the services in this evaluation meet the following criteria:
  • They are public cloud IaaS (by Gartner's definition of the term).
  • The service is in general availability and is offered globally.
  • The service's data centers are in at least two metropolitan areas, separated by a minimum of 250 miles, on separate power grids, with SSAE 16, ISO 27001 or equivalent audits (see Note 1 and Note 2).
  • A web services API is available to all customers.
  • There can be multiple users and API keys per account, with RBAC.
  • Provisioning occurs in real time, with the smallest available Linux VM delivered within five minutes.
  • Applications can be scaled beyond the capacity of a single physical server.
  • There is an allowable VM size of at least eight vCPUs and 64GB of RAM.
  • Customers can securely extend their network into the public cloud IaaS offering.
  • There is an SLA for compute, with a minimum of 99.9% availability (see the downtime sketch after this list).
  • Customers can receive an invoice, and multiple accounts can be consolidated under one bill.
  • Customers can negotiate a customized contract.
  • The provider offers 24/7 support, including phone support (in some cases, this is an add-on rather than being included in the base service).
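
To make the 99.9% availability floor above concrete, the following back-of-the-envelope sketch converts it into a permitted downtime budget. This is a generic illustration; each provider's SLA defines its own measurement window and exclusions:

```python
# Downtime budget implied by a 99.9% availability SLA, for two common
# illustrative measurement windows.
availability = 0.999

for label, hours in (("30-day month", 30 * 24), ("365-day year", 365 * 24)):
    allowed_minutes = (1 - availability) * hours * 60
    print(f"{label}: ~{allowed_minutes:.0f} minutes of downtime permitted")
# 30-day month: ~43 minutes; 365-day year: ~526 minutes (about 8.8 hours).
```
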
All the providers in this evaluation are among the top 15 providers by Gartner-estimated market share or mind share for the relevant segments of the overall cloud IaaS market (public and industrialized private cloud IaaS, excluding small deployments of two or fewer VMs).
If a provider has multiple offerings that meet our definition for public cloud IaaS, we have selected the offering that we expect Gartner clients to be most likely to purchase, and which was generally available for at least six months at the time of evaluation. We do not discuss any provider's other offerings in this research; for alternative offerings from a given provider, consult the Magic Quadrant.
Many additional public cloud IaaS providers may be worthy of your consideration, even though they are not included in this report. Providers that are regional or have less market share are excluded, even if their offerings are superior to those of the included providers.
Table 1.   Weighting for Critical Capabilities in Use Cases
| Critical Capabilities | Application Development | Batch Computing | Cloud-Native Applications | General Business Applications | Internet of Things |
| --- | --- | --- | --- | --- | --- |
| Compute Resilience | 1% | 1% | 5% | 15% | 5% |
| Architecture Flexibility | 10% | 6% | 8% | 15% | 8% |
| Security and Compliance | 10% | 1% | 3% | 20% | 10% |
| User Management | 25% | 1% | 2% | 5% | 2% |
| Enterprise Integration | 15% | 5% | 2% | 25% | 10% |
| Automation and DevOps Enablement | 33% | 14% | 40% | 10% | 21% |
| Scaling | 5% | 7% | 20% | 8% | 22% |
| Big Data Enablement | 1% | 65% | 20% | 2% | 22% |
| Total | 100% | 100% | 100% | 100% | 100% |
As of October 2017
Source: Gartner (October 2017)
This methodology requires analysts to identify the critical capabilities for a class of products/services. Each capability is then weighted in terms of its relative importance for specific product/service use cases.

Critical Capabilities Rating

Each of the products/services has been evaluated on the critical capabilities on a scale of 1 to 5; a score of 1 = Poor (most or all defined requirements are not achieved), while 5 = Outstanding (significantly exceeds requirements).
Table 2.   Product/Service Rating on Critical Capabilities
| Product/Service | Compute Resilience | Architecture Flexibility | Security and Compliance | User Management | Enterprise Integration | Automation and DevOps Enablement | Scaling | Big Data Enablement |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Alibaba Cloud | 3.2 | 3.7 | 3.3 | 3.8 | 3.0 | 3.4 | 3.9 | 4.0 |
| Amazon Web Services | 4.2 | 4.5 | 4.8 | 4.9 | 4.2 | 4.8 | 4.9 | 5.0 |
| CenturyLink Cloud | 3.0 | 2.0 | 2.5 | 1.6 | 2.4 | 1.9 | 1.1 | 1.1 |
| Fujitsu Cloud Service K5 IaaS | 3.4 | 3.1 | 2.1 | 1.2 | 3.0 | 1.6 | 2.6 | 1.3 |
| Google Cloud Platform | 4.0 | 3.9 | 4.3 | 2.3 | 3.9 | 3.6 | 4.6 | 4.4 |
| IBM Bluemix Infrastructure | 3.0 | 3.7 | 3.2 | 2.0 | 3.5 | 2.8 | 1.2 | 2.5 |
| Interoute Virtual Data Centre | 4.0 | 4.6 | 3.9 | 1.8 | 4.5 | 3.2 | 2.0 | 1.4 |
| Joyent Triton Public Cloud | 1.1 | 2.1 | 1.3 | 1.0 | 1.0 | 1.2 | 1.2 | 1.3 |
| Microsoft Azure | 3.4 | 3.8 | 4.8 | 3.8 | 4.1 | 3.8 | 4.1 | 4.5 |
| NTT Com Enterprise Cloud 2.0 | 4.1 | 3.9 | 2.8 | 3.0 | 2.6 | 1.8 | 3.4 | 1.0 |
| Oracle Cloud Infrastructure | 1.4 | 1.8 | 1.8 | 1.9 | 1.7 | 1.4 | 2.5 | 1.2 |
| Rackspace Public Cloud | 3.0 | 2.3 | 2.3 | 1.3 | 2.7 | 2.8 | 3.2 | 2.7 |
| Skytap Cloud | 2.0 | 2.0 | 1.8 | 1.7 | 2.0 | 1.2 | 1.0 | 1.0 |
| vCloud Air | 3.2 | 2.7 | 3.1 | 3.5 | 2.8 | 1.5 | 3.4 | 1.8 |
| Virtustream Enterprise Cloud | 3.9 | 3.8 | 4.2 | 1.2 | 3.1 | 1.6 | 1.2 | 2.3 |
As of October 2017
Source: Gartner (October 2017)
Table 3 shows the product/service scores for each use case. The scores, which are generated by multiplying the use-case weightings by the product/service ratings, summarize how well the critical capabilities are met for each use case.
Table 3.   Product Score in Use Cases
| Product/Service | Application Development | Batch Computing | Cloud-Native Applications | General Business Applications | Internet of Things |
| --- | --- | --- | --- | --- | --- |
| Alibaba Cloud | 3.49 | 3.82 | 3.63 | 3.37 | 3.61 |
| Amazon Web Services | 4.71 | 4.88 | 4.80 | 4.53 | 4.75 |
| CenturyLink Cloud | 1.93 | 1.37 | 1.67 | 2.23 | 1.72 |
| Fujitsu Cloud Service K5 IaaS | 1.98 | 1.65 | 1.99 | 2.60 | 2.15 |
| Google Cloud Platform | 3.48 | 4.22 | 4.01 | 3.95 | 4.11 |
| IBM Bluemix Infrastructure | 2.75 | 2.58 | 2.51 | 3.05 | 2.56 |
| Interoute Virtual Data Centre | 3.19 | 2.10 | 2.77 | 3.79 | 2.86 |
| Joyent Triton Public Cloud | 1.22 | 1.31 | 1.28 | 1.28 | 1.28 |
| Microsoft Azure | 3.96 | 4.30 | 4.02 | 4.05 | 4.13 |
| NTT Com Enterprise Cloud 2.0 | 2.63 | 1.60 | 2.31 | 3.03 | 2.46 |
| Oracle Cloud Infrastructure | 1.70 | 1.40 | 1.64 | 1.72 | 1.71 |
| Rackspace Public Cloud | 2.33 | 2.71 | 2.78 | 2.59 | 2.75 |
| Skytap Cloud | 1.58 | 1.16 | 1.27 | 1.77 | 1.37 |
| vCloud Air | 2.59 | 2.02 | 2.24 | 2.84 | 2.50 |
| Virtustream Enterprise Cloud | 2.22 | 2.28 | 2.05 | 3.13 | 2.36 |
As of October 2017
Source: Gartner (October 2017)
To determine an overall score for each product/service in the use cases, multiply the ratings in Table 2 by the weightings shown in Table 1.
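
As a quick cross-check of that arithmetic, the sketch below recomputes one cell of Table 3 (the Amazon Web Services score for the Application Development use case) from the weightings in Table 1 and the ratings in Table 2:

```python
# Recompute one Table 3 score: Amazon Web Services, Application Development.
weights = {  # Application Development weightings from Table 1
    "Compute Resilience": 0.01,
    "Architecture Flexibility": 0.10,
    "Security and Compliance": 0.10,
    "User Management": 0.25,
    "Enterprise Integration": 0.15,
    "Automation and DevOps Enablement": 0.33,
    "Scaling": 0.05,
    "Big Data Enablement": 0.01,
}
ratings = {  # Amazon Web Services ratings from Table 2
    "Compute Resilience": 4.2,
    "Architecture Flexibility": 4.5,
    "Security and Compliance": 4.8,
    "User Management": 4.9,
    "Enterprise Integration": 4.2,
    "Automation and DevOps Enablement": 4.8,
    "Scaling": 4.9,
    "Big Data Enablement": 5.0,
}
score = sum(weights[c] * ratings[c] for c in weights)
print(round(score, 2))  # 4.71, matching Table 3
```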

Evidence

Scoring for this Critical Capabilities report was derived from recent independent Gartner research on the cloud IaaS market. Each vendor responded in detail to an extensive primary-research questionnaire covering the business and technical aspects of its cloud IaaS offerings. Gartner analysts tested services, reviewed service documentation, corresponded with the vendors on the details of certain offerings and conducted reference checks with end users. Gartner also conducted thousands of client inquiries with prospective and current customers of public cloud IaaS during 2016 and 2017, and it currently conducts more than 1,000 such inquiries each quarter.

Note 1 
ISO 27001

International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 27001 is an international standard for information security management systems (see "Security Research Roundup for ISO 27001 Compliance").

Note 2 
SSAE 16 and SOC 1, 2 and 3

In 2011, the well-known Statement on Auditing Standards No. 70 (SAS 70) standard was replaced by the Statement on Standards for Attestation Engagements (SSAE) 16 standard (aka Service Organization Control Reports 1 [SOC 1]) (see "SOC Attestation Might Be Assurance of Security … or It Might Not").

Note 3 
FedRAMP

These providers possess a Federal Risk and Authorization Management Program (FedRAMP) Authority to Operate (ATO) for specific services within their portfolio. Some possess an agency ATO, and others a FedRAMP Joint Authorization Board Provisional ATO (JAB P-ATO). Either allows the covered services to be used for FedRAMP-compliant needs.

Critical Capabilities Methodology

This methodology requires analysts to identify the critical capabilities for a class of products or services. Each capability is then weighted in terms of its relative importance for specific product or service use cases. Next, products/services are rated in terms of how well they achieve each of the critical capabilities. A score that summarizes how well they meet the critical capabilities for each use case is then calculated for each product/service.
"Critical capabilities" are attributes that differentiate products/services in a class in terms of their quality and performance. Gartner recommends that users consider the set of critical capabilities as some of the most important criteria for acquisition decisions.
In defining the product/service category for evaluation, the analyst first identifies the leading uses for the products/services in this market. What needs are end users looking to fulfill when considering products/services in this market? Use cases should match common client deployment scenarios. These distinct client scenarios define the use cases.
The analyst then identifies the critical capabilities. These capabilities are generalized groups of features commonly required by this class of products/services. Each capability is assigned a level of importance in fulfilling that particular need; some sets of features are more important than others, depending on the use case being evaluated.
Each vendor’s product or service is evaluated in terms of how well it delivers each capability, on a five-point scale. These ratings are displayed side-by-side for all vendors, allowing easy comparisons between the different sets of features.
Ratings and summary scores range from 1.0 to 5.0:
1 = Poor or Absent: most or all defined requirements for a capability are not achieved
2 = Fair: some requirements are not achieved
3 = Good: meets requirements
4 = Excellent: meets or exceeds some requirements
5 = Outstanding: significantly exceeds requirements
To determine an overall score for each product in the use cases, the product ratings are multiplied by the weightings to come up with the product score in use cases.
The critical capabilities Gartner has selected do not represent all capabilities for any product; therefore, they may not represent those most important for a specific use situation or business objective. Clients should use a critical capabilities analysis as one of several sources of input about a product before making a product/service decision.
