What Are Containers? Part 2

Container technology is a key enabler of cloud adoption, but it’s inherently virtual and can be hard to visualise. This guide brings clarity to containers.

A New Approach to an Old Problem

Containers are, arguably, still the biggest ‘new’ technology in modern IT. They’ve retained that fresh, innovative feel, and touting a new Kubernetes infrastructure still carries a certain cachet.

The problem is, being inherently virtual, container technology is quite a complex paradigm to understand. To really grasp what containers are and why they’re so beneficial, it helps to consider how we got to this point in software environment management history.

In the Beginning, There Were Physical Servers…

(And they still play a vital role, but bear with me for a moment.) An operating system kept the lights on, applications were deployed directly on top of the physical server, and things were left to chug away.

This approach worked. The model didn’t really change until the (very) late 1990s, and even then the changes didn’t gain widespread adoption until several years later.

However, as any self-respecting software engineer knows, there’s a world of difference between ‘works’ and ‘works well’. The problems with running an infrastructure directly on the hardware powering it were rarely catastrophic, but they were real.

With a single operating system, dependencies could become hard to manage. What do you do if one application requires LibraryX 1.2, while another requires LibraryX 1.6?

Refactoring might get you part of the way, but it often demanded a new server with a new environment running the alternate version of said library. That meant more CapEx to buy the hardware and more OpEx to ensure it didn’t turn into a steaming pile of maintenance debt a few months down the line. It also led to wasted hardware resources, with servers running software that needed only a fraction of their horsepower.
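
To make that clash concrete, here is a minimal, hypothetical sketch in Python. The library name, version pins and application names are invented for illustration; the point is simply that a single shared operating system holds one copy of a library, so one application always loses out.

```python
# Hypothetical sketch: one shared operating system means one installed copy
# of each library. "LibraryX" and the version numbers are invented for the
# example; the point is that both applications cannot be satisfied at once.
installed = {"LibraryX": "1.2"}          # what the single shared OS provides

requirements = {
    "billing-app":   {"LibraryX": "1.2"},
    "reporting-app": {"LibraryX": "1.6"},
}

for app, deps in requirements.items():
    for lib, wanted in deps.items():
        have = installed.get(lib)
        status = "OK" if have == wanted else f"CONFLICT (installed: {have})"
        print(f"{app}: needs {lib} {wanted} -> {status}")
```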

In short, it was a functional but inefficient and wasteful paradigm.

Enter Virtualisation

In 1999, VMware introduced VMware Virtual Platform, the first x86-based virtualisation platform. While virtualisation itself had existed (and, indeed, been in use) since the 1960s, VMware made it accessible to the increasingly x86-wedded masses.

This Guest/Host model allowed a host operating system to have guest operating systems installed on top of it.

Guests had no idea that they weren’t interacting directly with the underlying hardware; they simply used the resources allocated to them. Separation of environments thus became possible via software rather than hardware, which was far more flexible. Hardware resources could be allocated to Virtual Machines (VMs) according to their needs, maximising utilisation without bursting at the seams.

This also prompted cloud providers like Amazon Web Services (AWS) and Microsoft Azure to leap into action. Because ‘servers’ could now be created through software, it was possible to provide an interface for this and simply charge users on a pay-as-you-go basis. VMs became usable within minutes, in stark contrast to the weeks (and considerable expense) required to source, ship and configure a physical server. With an enormous amount of hardware still doing the grunt work, servers were now available as software and life was good.
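
As a rough illustration of what ‘servers as software’ looks like in practice, the sketch below uses AWS’s boto3 SDK to launch a VM programmatically. The AMI ID is a placeholder and valid credentials plus a configured region are assumed; it’s illustrative rather than a production recipe.

```python
# A minimal sketch of pay-as-you-go VM provisioning with AWS's boto3 SDK.
# The AMI ID below is a placeholder; credentials and permissions are assumed
# to be configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}: a 'server' created in minutes, billed only while it runs")
```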

There’s Always Room for Improvement

As with the physical-only server era, VMs work(ed). They work so well, in fact, that plenty of infrastructures still use them – and use them well – today.

Nevertheless, the IT industry is replete with those of us who like to make things better. And at some point, somebody asked why we needed to run multiple operating systems on top of the same physical hardware.

Why, indeed? The stack on a VM is essentially thus:

Hardware > Host OS > Hypervisor > Guest OS(es) > Runtime System(s) > Application(s)

Operating systems are not lightweight pieces of software, and running one for every VM on top of the hypervisor consumes considerable resources.

Enter Containers

This need for multiple operating systems is eliminated with containers. Instead of emulating hardware in the manner of a VM, they virtualise at the operating-system level via a container management system (Docker being the most widely used today).

Containers share the host operating system’s kernel, which handles requests to and responses from the underlying hardware. You can think of a container as a vacuum-packed application: anything unnecessary is removed, leaving a tightly packed image with no bloat.

The resultant stack is:

Hardware > Host OS > Container Manager > Runtime System(s) > Application(s)

Ultimately, this ensures hardware has more capacity and availability for business value-generating software because it’s not running multiple copies of Windows Server or Linux.
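
A quick way to see that kernel sharing in action: the sketch below (assuming Docker is running locally and the docker Python SDK is installed) asks a container which kernel it is using and compares it with the host’s. On a Linux host the two match, because no guest operating system has been booted.

```python
# A small sketch showing that a container shares the host kernel rather than
# booting its own. Requires a running Docker daemon and the "docker" SDK.
import platform
import docker

client = docker.from_env()

# Ask an Alpine container which kernel it sees; no guest OS is started.
container_kernel = client.containers.run(
    "alpine:3.19", "uname -r", remove=True
).decode().strip()

print(f"Host kernel:      {platform.release()}")
print(f"Container kernel: {container_kernel}")  # identical on a Linux host
```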

We Still Need Hardware

It sounds straightforward, but there is still an element of complexity and interdependency. None of the three elements of this equation – hardware, virtual servers and containers – eliminates the need for the others.

VMs offer greater isolation than containers’ process-level isolation, and therefore stronger security in theory. Containers, in turn, offer a way to trim a considerable amount of ‘fat’ from images, meaning faster sharing, deployment and booting. However, the underlying hardware still powers the rest of the stack and always will.

With VMs and containers, the role of physical hardware is less direct but no less essential. As with so much in technology, it’s about the correct tool for the job.