Kubernetes is a powerful orchestration system that allows you to manage containers reliably and efficiently. It has received a lot of attention in recent years thanks to its ability to help organisations scale, manage complexity, and respond to changing demands – which is why we have created this guide to Kubernetes.

During a virtual event with C2C, Tim Berry, Head of Cloud Training at Appsbroker Academy, revealed the various Kubernetes learning pathways people can take. This blog post is a brief overview of what Kubernetes is, and it demystifies containers for those who may be new to the concept.

Keep reading to explore learning pathways with Appsbroker Academy, an Authorised Training Partner for Google Cloud:

VM-based deployments

For anyone who is new to the concept of containers, it can be useful to remind ourselves of VM-based deployments. One way to deploy applications, which we’ve been doing for over a decade now, is to use virtual machines, or VMs.

VMs are similar to having your own dedicated server; you can build them however you want, with whatever operating system or software you like. However, it is important to maintain the ability to adapt your VMs, or to scale them, when demand for your applications changes.

The problem with virtual machines

Many people end up running into the same VM-based problems. It is very difficult, for example, to run multiple software packages on the same virtual machine without causing dependency problems. What happens if you need to upgrade a library for Package A and it breaks Package B?

As any guide to Kubernetes will tell you, what usually ends up happening is that we dedicate an entire VM to a single instance of an application. However, VMs have a very large minimum size, because they need an entire operating system to run.

This makes scaling them a very slow and resource-hungry process: we’re spinning up entire new VMs, each with its own operating system, and paying all of that CPU and RAM overhead every time we need to cope with additional demand.

How do containers help solve VM-based problems?

Rather than virtualise an entire server, you can think of containers as a way to virtualise the operating system only, so your workload is no longer tied to a particular machine and its bundle of installed packages.

Picture your application code and its immediate dependencies packaged into a neat little box that can run in isolation. You can run dozens (or even hundreds) of containers on the same virtual host, because they are so resource efficient – and they spin up as quickly as a single operating system process.

Using this more efficient level of abstraction also removes a lot of the systems-management burden from developers. A container can run anywhere that containers are supported: on your laptop, in the cloud, or in development and production environments, with no changes required to the actual container package.
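To make that portability concrete, here’s a minimal sketch using the Docker SDK for Python – just one of many ways to talk to a container runtime, and the package and image names are purely illustrative:

```python
# A minimal sketch, assuming the Docker SDK for Python is installed
# ("pip install docker") and a local container runtime is available.
import docker

client = docker.from_env()  # connect to whichever runtime is on this host
output = client.containers.run("hello-world", remove=True)
print(output.decode())      # the same image behaves identically on any host
```

Exactly the same call works on a laptop, a server or a cloud VM, because the container carries its dependencies with it.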

Learn about Kubernetes Deployment Patterns in this virtual lab demo with Appsbroker Academy

Packaging the application into a container

Let’s get into things a little bit more. Wondering how to package an application into a container? You do this by creating a container image, which is built up in multiple layers.

A container image comprises our application code, any dependencies it needs, and instructions on how the container should behave when it is run. These are all stored as lightweight layers inside the container image. From a development point of view, this means images are very small and easy to distribute, so it’s quick and easy to develop containers and iterate on them – especially as you’re only replacing specific layers of the image with every update.

It’s common to use the Docker set of tools to package containers, but there are several open-source, standardised and automated ways to do this that don’t involve Docker.
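As a rough illustration of that layering, here’s a sketch of building an image with the Docker SDK for Python; the `./myapp` directory, its contents and the `myapp:1.0` tag are hypothetical placeholders, not a prescribed setup:

```python
# A sketch of assembling a layered image, assuming ./myapp contains app.py
# and a Dockerfile along these (hypothetical) lines:
#
#   FROM python:3.12-slim           (base layer: a minimal OS filesystem)
#   COPY app.py /app/app.py         (a layer holding just our code)
#   CMD ["python", "/app/app.py"]   (how the container behaves when run)
#
import docker

client = docker.from_env()
image, build_log = client.images.build(path="./myapp", tag="myapp:1.0")

# Unchanged layers are cached, so the next build only rebuilds and
# redistributes the layers you actually touched.
for line in build_log:
    print(line)
```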

Deploying running containers

So how do you deploy this image as a running container?

In development, all you need is a host that supports a container runtime – typically your laptop. When you’re ready for other people to access your application, you can deploy the container to any environment that hosts a container runtime: a dedicated server, a cluster of servers running in your office or a datacenter, or the cloud. This flexibility makes the development process easier when you’re promoting applications through environments, from development to staging and production.
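Sketching that step with the same hypothetical image from above (the tag and port mapping are assumptions for illustration):

```python
# A sketch of running the packaged image on any host with a container
# runtime; "myapp:1.0" and port 8000 are hypothetical.
import docker

client = docker.from_env()
container = client.containers.run(
    "myapp:1.0",
    detach=True,               # keep it running in the background
    ports={"8000/tcp": 8000},  # map the app's port onto the host
)
print(container.name, container.status)
```

The identical image, unmodified, is what you’d later run in staging and production.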

Deploying individual containers is just the first step. Once you embrace the benefits of containers, you will quickly want to start building out applications as microservices. You’ll be managing groups of different containers, which can all scale up and down based on the demand for different parts of your application. You’ll probably want an easy way to manage the container runtimes and the underlying resources that host your containers as well – not to mention networking, storage and other supporting services that you’re going to need. This is what we call orchestrating containers.

Thankfully, there are some great solutions out there for container orchestration. Of course, the one we’ll talk about today is Kubernetes.

What is Kubernetes?

Kubernetes is an open-source project that was created at Google in 2014 and is now maintained by the Cloud Native Computing Foundation. It’s based on Google’s original internal system for managing containers at what they call planet scale, which basically means it’s designed in a way that allows it to scale to serve millions of users all over the world. It is contributed to and supported by a huge ecosystem of developers, including engineering teams from companies like Google, Amazon and Microsoft, as well as hundreds of individuals.

The purpose of Kubernetes is to provide a platform for managing containerised workloads and services, and to facilitate automation and declarative configuration.
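To give a feel for the declarative side, here’s a minimal sketch using the official Kubernetes Python client; the names, labels and image are hypothetical, and a YAML manifest applied with `kubectl apply` would express exactly the same thing:

```python
# A sketch of declarative configuration, assuming "pip install kubernetes"
# and a working kubeconfig. All names and the image are hypothetical.
from kubernetes import client, config

config.load_kube_config()

# Declare the desired state: three replicas of our container image.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="myapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="myapp", image="myapp:1.0")]
            ),
        ),
    ),
)

# From here, Kubernetes' job is to make reality match the declaration,
# restarting, rescheduling or scaling containers as needed.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```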

The basic architecture of a Kubernetes system

A Kubernetes system, like most distributed systems, is just a cluster of computers. Usually we call these computers ‘nodes’. Generally, you need more than one for it to be a cluster, unless it’s for development purposes. These computers can be physical or virtual machines (for example, it’s very common to run Kubernetes using the computing environment of a cloud provider).

Now, inside the cluster, we run a control plane. The control plane components make decisions about the cluster and keep it working; it’s the brain of the cluster. For example, one of its main jobs is to decide where a container should be scheduled to run. The worker nodes of the cluster provide the container runtime environments – this is where our containers will actually run. It’s good practice to use separate nodes for the control plane and the workers.
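You can see this architecture directly by asking a cluster about its nodes. A small sketch with the official Python client, assuming a working kubeconfig:

```python
# A sketch that lists a cluster's nodes and a rough role for each one.
# Control-plane nodes conventionally carry the
# "node-role.kubernetes.io/control-plane" label.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    labels = node.metadata.labels or {}
    role = "control-plane" if "node-role.kubernetes.io/control-plane" in labels else "worker"
    print(f"{node.metadata.name}: {role}")
```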

And that, in brief, is our guide to Kubernetes. Yes, we’ve only scratched the surface, but having given you an introduction to Kubernetes as a concept, I’m sure you’re excited to learn more.

Watch Tim Berry’s lecture in full

Want to start your learning journey and boost your career? Check out our course calendar for a mixture of free and paid events, both in-person and online.