If you’re new to cloud computing, this guide is for you.
Cloud computing is a way of accessing computing resources on-demand. You use what you need, when you need it.
Think about how you turn on a tap to fill a glass of water, run a bath or do the washing up. You pay for what you use via a metered service, without ever worrying about the complexity of how that water gets to your tap. With cloud computing, the same principle applies to servers, databases, storage and other computing services. They’re provided as a service, over the internet, with the cloud vendor managing the complexity of providing those services on your behalf.
So, what about ‘the cloud’ itself? When people talk about the cloud, they’re generally referring to the hyperscale cloud computing providers (such as Amazon AWS, Microsoft Azure, Google GCP and Alibaba Cloud). The term can also be used to describe other online services, such as Apple’s iCloud, Microsoft’s Office 365 or software-as-a-service (SaaS) offerings.
The management of corporate IT is much simpler with cloud computing. It’s more efficient, adaptive, measurable and reliable. It’s useful to return to the consumer utilities example here. We expect to access water and electricity without owning and running water treatment plants and power stations. With cloud computing, we can access the computing resources we need without owning and running datacentres, wide-area networks and storage arrays.
Obtaining core computing services via highly-efficient cloud-based datacentres brings economies of scale. What’s more, you only pay for the services you consume, often on a pay-as-you-go basis. Let’s look at some of the core benefits in more detail.
Accessing computing resources through the cloud relieves the burden of having to manage the underlying IT infrastructure. So, the IT team is free to spend more time focusing on customer needs and market differentiation.
Traditional IT requires considerable upfront capital investment (CapEx). However, cloud computing is based on a variable funding model which is classed as an operational expense (OpEx). With CapEx investment, there’s often lots of governance surrounding who can spend how much on what. A cloud-centric OpEx model is more directly linked to revenues or divisional P&L. This enables a shift from centralised IT to a distributed model where IT can respond to the needs of individual business units faster.
Hyperscale cloud providers support hundreds of thousands of customers all over the globe. The sheer size of their customer base results in economies of scale which translate into lower pay-as-you-go costs for customers.
In 2011, the National Institute of Standards and Technology (NIST) published five essential characteristics of cloud computing. They were revolutionary then, but today’s hyperscale cloud providers offer them as standard. The secret is to make sure your cloud computing strategy takes full advantage of them.
On-demand self-service: Computing services are provisioned without any need for human interaction. Users access what they need from the cloud provider via a control panel or an API.
Broad network access: Users can access cloud resources online at any time, from any location, irrespective of the device they’re using.
Resource pooling: A central pool of resources serves multiple customers and is dynamically assigned based on current demand.
Rapid elasticity: Cloud platforms quickly provision (scale up) and release (scale down) services according to current needs. Customers can set services to auto-scale based on demand.
Measured service: Cloud services are ‘measured’ so customers only pay for the amount of each service (e.g. network traffic, storage, compute) that they use.
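To make the measured service idea concrete, here is a minimal sketch of pay-as-you-go billing in Python. The unit prices and the billing function are hypothetical placeholders for illustration, not any provider’s actual rates.

```python
# Sketch of metered, pay-as-you-go billing. The unit prices below are
# hypothetical placeholders, not any provider's actual rates.
PRICE_PER_VCPU_HOUR = 0.04      # compute
PRICE_PER_GB_STORED = 0.02      # storage, per GB-month
PRICE_PER_GB_EGRESS = 0.09      # network traffic out

def monthly_bill(vcpu_hours: float, gb_stored: float, gb_egress: float) -> float:
    """Charge only for what was actually consumed."""
    return round(
        vcpu_hours * PRICE_PER_VCPU_HOUR
        + gb_stored * PRICE_PER_GB_STORED
        + gb_egress * PRICE_PER_GB_EGRESS,
        2,
    )

# A small workload: 2 vCPUs running for 200 hours, 50 GB stored, 10 GB egress.
print(monthly_bill(2 * 200, 50, 10))  # 400*0.04 + 50*0.02 + 10*0.09 = 17.9
```

The point is that the bill tracks consumption directly: halve the usage and the bill halves, with no fixed capacity to pay for.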
A key characteristic of cloud computing is on-demand self-service. This means teams can deploy new resources at the click of a mouse (or ideally via a line of code). So, it’s quicker and cheaper to respond to market changes or exploit new opportunities.
With traditional computing, you have to work with fixed resources. The cost and lead time associated with new hardware means there’s always a compromise. You either shoulder the expense of unused capacity, or live with the service issues caused by insufficient capacity. Cloud computing turns this around with its inherent elasticity. You can scale resources up or down as needed in line with demand.
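As a rough illustration of elasticity, here is a minimal demand-driven scaling rule in Python. The utilisation thresholds and instance bounds are made-up values, not any provider’s actual autoscaling API.

```python
# A minimal sketch of demand-driven scaling, with made-up thresholds.
def desired_instances(current: int, cpu_utilisation: float,
                      minimum: int = 2, maximum: int = 20) -> int:
    """Scale out when busy, scale in when idle, within fixed bounds."""
    if cpu_utilisation > 0.75:      # busy: add capacity
        target = current + 1
    elif cpu_utilisation < 0.25:    # quiet: release capacity (and stop paying)
        target = current - 1
    else:
        target = current            # demand is steady
    return max(minimum, min(maximum, target))

print(desired_instances(4, 0.90))  # 5 -> scale up under load
print(desired_instances(4, 0.10))  # 3 -> scale down when quiet
```

Real cloud autoscalers apply the same idea continuously, evaluating metrics and adjusting capacity without human intervention.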
All the main cloud providers enable you to deploy to multiple regions around the world. This means resources can be located close to end users, reducing latency and improving the user experience without the cost of building out your own global infrastructure.
Infrastructure as a Service (IaaS) provides users with secure, scalable infrastructure via cloud computing. Users can configure networking, servers (virtual machines) and storage, often through an API. They are responsible for everything from the operating system up.
Examples of IaaS include Amazon EC2, Azure Virtual Machines and Google Compute Engine.
The Platform-as-a-Service (PaaS) model is more abstract than IaaS. Users work with a platform to develop, run and manage applications. They are not responsible for the operating system, middleware or runtime. PaaS facilitates reduced time to market and is relatively easy to get up and running. However, it typically offers a finite feature-set and may lead to vendor lock-in.
Examples of PaaS include Azure App Service, AWS Elastic Beanstalk, Google App Engine and Heroku.
Sometimes described as a cloud service in its own right, serverless is in fact a subset of PaaS. It provides a platform for executing specific code in response to events.
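As a sketch, a serverless function is typically just a handler that the platform invokes per event. The example below follows the shape of an AWS Lambda Python handler (an `event` plus a `context` argument); the event fields themselves are invented for illustration.

```python
# A minimal event handler in the style of function-as-a-service platforms
# (e.g. an AWS Lambda Python handler receives `event` and `context`).
# The event shape here is a made-up example.
import json

def handler(event, context):
    """Run a small piece of code in response to an event; the platform
    manages the servers, scaling and per-invocation billing."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can call it directly; in the cloud the platform invokes it
# whenever the triggering event (HTTP request, queue message, timer) fires.
print(handler({"name": "cloud"}, None))
```

Because the platform only runs the function when an event arrives, you pay per invocation rather than for an always-on server.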
Examples of serverless platforms include AWS Lambda, Azure Functions and Google Cloud Functions.
Software-as-a-Service (SaaS) represents the highest level of abstraction of the cloud models. It offers access to specific applications over the internet. With this model, the vendor manages everything, including the application itself. Users are responsible only for configuration.
Typical use-cases for SaaS include email, calendars and CRM systems. Examples include Microsoft Office 365 and Salesforce.
NIST defines four different cloud deployment models: Public, Private, Hybrid and Community.
Public cloud is the model most of us think of when we talk about cloud computing. Amazon AWS, Microsoft Azure and Google GCP are all public clouds. These three are often referred to as ‘hyperscale public clouds’ to emphasise their global footprint and majority market share.
The idea of private cloud is appealing for organisations that want to benefit from a cloud model but are unwilling to migrate applications to public cloud. Traditional virtualisation vendors like VMware have rebranded and extended their virtualisation solutions as private cloud platforms. However, while they offer some benefits (e.g. better APIs to improve automation and improved self-service portals), many private cloud solutions fall well short of the five essential characteristics discussed above.
For example, you are still constrained by the capacity of the underlying physical infrastructure, so rapid elasticity is confined within hard boundaries. More importantly, private clouds rarely offer advanced PaaS and SaaS which can provide game-changing capabilities for developers.
Strictly speaking NIST defines ‘hybrid cloud’ as a ‘composition of two or more distinct cloud infrastructures (private, community or public)’. However, it’s often used as shorthand for any ‘private datacentre + public cloud’ combination (even if the private datacentre solution includes physical or non-cloud virtualised hosting, not a private cloud).
So, hybrid is generally seen as a transitional stage during the cloud adoption journey (which may last several years). Workloads may span public cloud and corporate hosting, linked by a direct network interconnection. Microsoft calls this ExpressRoute; AWS calls it Direct Connect.
A common pattern is to host dynamic services (like web or application tiers) in the public cloud, while keeping databases on-premise until you’re ready for a full migration.
Another is the ‘burst’ pattern, where a certain level of compute capacity is available on-premise but the application can burst into the cloud under periods of high load. This is useful for high-performance compute (HPC) workloads such as simulations which benefit from high CPU core counts but don’t warrant holding the entire capacity on-premise.
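The burst decision can be sketched as a simple placement rule: fill the fixed on-premise capacity first, then overflow to the cloud. The capacity and core counts below are invented for illustration.

```python
# Sketch of the 'burst' pattern: run jobs on-premise up to a fixed local
# capacity and overflow to the cloud beyond it. Figures are made up.
ON_PREM_CORES = 64

def place_jobs(core_requests: list[int]) -> dict[str, list[int]]:
    """Assign each job's core request on-premise while capacity remains,
    then burst the rest into the cloud."""
    placement = {"on_prem": [], "cloud": []}
    used = 0
    for cores in core_requests:
        if used + cores <= ON_PREM_CORES:
            placement["on_prem"].append(cores)
            used += cores
        else:
            placement["cloud"].append(cores)  # burst: pay-as-you-go capacity
    return placement

# Steady load fits locally; the 32-core simulation spike bursts to the cloud.
print(place_jobs([16, 16, 16, 32, 8]))
# -> {'on_prem': [16, 16, 16, 8], 'cloud': [32]}
```

The attraction is economic: you size the on-premise estate for the steady baseline and rent cloud capacity only during peaks.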
Another emerging pattern is management of on-premise resources as though they are cloud resources. One option is to host dedicated cloud-managed hardware on-premise e.g. Azure Stack (including Azure Stack HCI, Hub and Edge) or AWS Outposts. Another is to move your datacentre control plane to the cloud via a solution like Azure Arc. You can read more about this trend in an article from DevOpsGroup CTO Steve Thair here.
Community cloud is probably the least well-known cloud deployment model. This is partly because clouds of this type are only relevant to their specific community and don’t reach a wide audience. For example, if you are in the UK education community you might be aware of the relationship between Janet (the education network) and Microsoft, but most people will never have heard of it.
An increasingly common type of community cloud is the commercial virtual cloud. These typically run on top of a hyperscale cloud platform, offering a curated set of services that meet a specific market segment’s needs. They are often established by large, dominant players looking to extend their dominance into the digital realm by becoming a platform provider. An example is the Siemens “MindSphere” IoT platform available on Microsoft Azure.
Whilst not an official NIST definition, multi-cloud has become a common term (most often used by vendors trying to sell you multi-cloud management platforms).
True multi-cloud, where you run the same application workload across two or more public clouds, is surprisingly rare (and, technically, it would meet the NIST definition of hybrid). It usually arises when a vendor that predominantly hosts in Cloud A has a customer requirement for data residency or data sovereignty in a country or region not currently supported by Cloud A but covered by Cloud B. The solution in this case is to port the application to Cloud B and host it (and its data) in the relevant region.
Use of another cloud as a disaster recovery or business continuity strategy is sometimes seen in highly regulated industries. For instance, there may be a regulatory requirement to have DR/BCP contingency plans should cloud outsourced hosting services be offline for an extended period. However, this is fairly rare, as such requirements can normally be met via multiple availability zones or multi-region deployments with a single cloud vendor.
Pseudo multi-cloud, where you run different application workloads across two or more public clouds, is quite common in enterprise organisations. When different parts of the organisation adopt cloud at different times, they may opt for different providers. Once cloud adoption moves to the cloud-first phase, they normally select a preferred cloud vendor, often in return for large discounts. However, when moving existing services between clouds, the cost/benefit equation rarely stacks up.
A multi-cloud situation can also arise from M&A activity. The company you merge with or acquire might not use the same cloud provider as you, so the new entity inhabits a multi-cloud world.
Some organisations deliberately pursue a multi-cloud ‘best of breed’ strategy. They select the cloud vendor that best matches the requirements of each workload. This might mean having core line-of-business apps in Azure, big data analytics in Google GCP and new digital native applications in Amazon AWS.
Read this whitepaper to learn how to supercharge your cloud migration with DevOps.
This diagram shows the three common pathways to the cloud: lift & shift, evolve and “go native” (rebuild).
In this live webinar, a DevOpsGroup panel discusses the five essential characteristics of cloud computing that make you 23 times more likely to be an ‘elite’ performer.