12 November 2015

Containerization of Cloud Computing


Virtualization (software that separates physical infrastructures to create various dedicated resources) has swept through the data center in recent years, enabling IT transformation and serving as the secret sauce behind cloud computing. But what is next? According to one speaker at last year’s Linux Collaboration Summit, “The cloud computing world is founded on hypervisors” (a hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems to share a single hardware processor), and “containers can deliver more services using the same hardware you’re now using for virtual machines,” which spells more profits for both data centers and cloud services.

Virtual Machines vs Containers / Source: Docker

Docker is a new container technology that makes it possible to run far more applications on the same servers and makes it very easy to package and ship programs. It is an open platform for developers and system administrators to build, ship, and run distributed applications.

The right way to think about Docker is to view each container as an encapsulation of one program with all its dependencies. The container can be dropped into (almost) any host and it has everything it needs to operate. This way of using containers leads to small and modular software stacks and follows the Docker principle of one concern per container.
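
To make this concrete, here is a minimal, hypothetical Dockerfile for a small Python web service; the file names, base image, and port are illustrative assumptions, not anything prescribed by Docker itself:

  # Dockerfile: package one program and its dependencies (one concern per container)
  FROM python:3                         # base image providing the language runtime
  WORKDIR /app
  COPY requirements.txt .               # the app's dependency manifest (hypothetical)
  RUN pip install -r requirements.txt   # install dependencies as their own image layer
  COPY app.py .                         # the application code (hypothetical)
  EXPOSE 8000                           # document the port the service listens on
  CMD ["python", "app.py"]              # the single concern this container runs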

Docker’s Components

  • Docker Engine: a portable, lightweight runtime and packaging tool.
  • Docker Hub: a cloud service for sharing applications and automating workflows.

Docker creates a sandboxed runtime on the computer on which it lands. It occupies a defined memory space and has access only to specified resources. A container sets up networking for an application in a standard way and carries as discrete layers all the related software that it needs.
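
As a rough sketch of how that looks with the Docker command line (the image name, memory limit, CPU share, and port mapping below are illustrative assumptions):

  # build an image from the Dockerfile in the current directory
  docker build -t myapp:1.0 .
  # run it as a sandboxed container: cap its memory, weight its CPU access, and map one port
  docker run -d --name myapp --memory 256m --cpu-shares 512 -p 8000:8000 myapp:1.0

Here --memory caps the container’s memory, --cpu-shares weights its share of the CPU, and -p exposes its network port in the standard way described above.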

Benefits of Docker’s containers

VM hypervisors are “based on emulating virtual hardware. That means they’re fat in terms of system requirements,” while containers use shared operating systems, which makes them much more efficient than hypervisors in terms of system resources. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This in turn means you can “leave behind the useless 99.9% VM junk, leaving you with a small, neat capsule containing your application.” The following are some of the benefits of using Docker’s containers:

  • With a perfectly tuned container system, you can run as many as four to six times the number of server application instances as you can with VM hypervisors on the same hardware.
  • The key difference between containers and VMs is that while the hypervisor abstracts an entire device, containers just abstract the operating system kernel.
  • Because Docker is partnering with the other container powers, including Canonical, Google, Red Hat, and Parallels, on its key open-source component, libcontainer, it has brought much-needed standardization to containers.
  • Developers can use Docker to pack, ship, and run any application as a lightweight, portable, self-sufficient LXC (Linux container) that can run virtually anywhere.
  • Docker containers are easy to deploy in a cloud.
  • With Docker, developers can build any app in any language using any toolchain.
  • Dockerized apps are completely portable and can run anywhere.
  • Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments.
  • By using Docker’s containers, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud (a minimal sketch follows this list).
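
A minimal sketch of that portability, assuming an image built as above and a hypothetical Docker Hub repository named exampleuser/myapp:

  # tag the locally built image for a Docker Hub repository and push it (after docker login)
  docker tag myapp:1.0 exampleuser/myapp:1.0
  docker push exampleuser/myapp:1.0
  # on a laptop, a data center VM, or a cloud host, pull and run the very same image
  docker pull exampleuser/myapp:1.0
  docker run -d -p 8000:8000 exampleuser/myapp:1.0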

Docker’s Containers and Cloud Computing

Amazon Web Services and Microsoft are moving quickly to make Docker containers welcome guests on their respective cloud hosts. Containers, sometimes described as lightweight virtualization, promise to move software around more easily and level the playing field between clouds.

In the future, containers are expected to be nested. A software component that makes up a layer in one container might be called by another in a remote location. An update to the same layer might be passed on to any other containers that use the same component.

Containerization is going to have an appeal for the next generation of developers, partly because it offers advantages that even sophisticated virtualization tools and management cannot match in every way. There’s evidence from IBM that containers deploy more quickly and run more efficiently than virtual machines. They can also be more densely packed on servers. That’s a big plus in the cloud, where overall efficiency remains a litmus test of who will thrive and who will die.

Containerization “is an important way to get standardization at the sub-virtual machine level, allowing portable apps to be packaged in a lightweight fashion and be easily and reliably consumed by PaaS clouds everywhere,” wrote IDC software analyst Al Hilwa from the DockerCon 2014 event.

On the other hand, Docker workloads can be deployed in virtual machines if the user chooses. It is conceivable that containers and virtual machines will be used hand in glove in some cloud settings. In others, containers will run by themselves on bare metal for maximum efficiency.

For the foreseeable future, virtualization has several management advantages in the enterprise data center, with its mixture of legacy applications. Those applications can be made independent of the hardware they were launched on and managed with pooled resources. Workloads can be moved around while running to maximize utilization of servers — containers cannot. But the software-defined data center doesn’t necessarily rule out Linux containers. They can be fit in alongside VMs.

The next generations of applications, many of which will run in the cloud, are more likely to be built with containers in mind rather than virtualization. When applications are composed as assemblies of many moving and distributed parts, containers will be a better fit.

Google VP of Infrastructure Eric Brewer said in a keynote that containers have been critical to how Google does cloud computing: “Everything at Google, from search to Gmail, is packaged and run in a Linux container. Each week we launch more than 2 billion container instances across our global data centers, and the power of containers has enabled both more reliable services and higher, more-efficient scalability.”

As a better understanding of containerization’s attributes emerges, the tools to create and manage containers will take center stage. It’s too soon to know how flexibly containers will be managed or migrated, or the future tasks they may be able to undertake. But the giant step represented by the move to virtualization in the data center appears about to be repeated, this time with containerization in the cloud.

This text is also published on Ahmed Banafa’s LinkedIn profile.

Ahmed Banafa

Faculty | Author | Speaker | IoT Expert
