Introduction to DevOps with Kubernetes

Introduction

Microservices are one of the most prominent recent trends in software development and architecture. In a microservice architecture, an application is designed as a set of loosely coupled services, each developed independently and focused on a small subset of the business functionality. For instance, imagine developing a banking application with a web frontend for its customers and multiple backend services. The frontend and backend services run independently, and the frontend looks up the IP addresses of the backends through a discovery service before sending queries. Each service focuses only on its own business functionality and does not directly depend on the other services. This architecture enables faster development, faster bug-fixing, and better responsiveness to customers, which makes adopting microservice architecture almost unavoidable for competitive organizations.

In the past, the common approach was to build a single, large application: the monolithic architecture. All the functionality of an application was packaged into a single process and delivered to customers as a single binary. A monolith was easy to build, deploy, and update; however, it lacked horizontal scalability. For instance, assume you have purchased a human-resources system built as a monolithic application and installed it on costly servers in your data center. Within a couple of months, you realize that everything works, but the payroll module is not responding fast enough because of the complex calculations your company requires. The most straightforward solution is to buy another high-priced server and run two instances of the complete HR system. Although you only need faster payroll operations, it will cost you more than double, since you must scale the whole system. This is the main problem with monolithic applications: without the ability to scale individual parts based on usage levels, the monolithic architecture is doomed to fail in the long run.

Microservice architecture, on the other hand, puts each business functionality into a separate service, so you can quickly increase the number of replicas of "only" the payroll service. Even better, the service can scale automatically with the usage level, since a single microservice does not consume the complete resources of its servers. The scalability of microservices makes them the natural choice for the successful applications of today and the future, compared to the monolithic architecture of yesterday.
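As a brief preview of what Kubernetes (covered throughout this book) makes possible, scaling a single service is a one-line operation. The following is a minimal sketch, assuming a hypothetical Deployment named payroll already exists in a cluster you can access:

# Scale only the hypothetical payroll service to five replicas:
kubectl scale deployment payroll --replicas=5

# Or let Kubernetes adjust the replica count automatically based on CPU usage:
kubectl autoscale deployment payroll --min=2 --max=10 --cpu-percent=80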

Using traditional methods and tools for the new architectural style of microservices is impractical. The development, build, test, and runtime environments need dramatic changes to meet the requirements of microservice architecture. Until about a decade ago, the only option was to run applications directly on physical servers. Since our applications are now "micro" services, it is possible to run multiple services on the same host.

However, this comes with its own risks, such as conflicting dependency libraries or a chaotic domino effect of failing applications on the same host. Virtualization solves this problem by creating multiple virtual servers, or virtual machines (VMs), on the same physical server. It is a well-established and popular technology, and it is the fundamental service provided by all cloud providers, such as AWS, Google Cloud, Azure, and Alibaba Cloud. However, given the scalability requirements and the high number of microservices in a complex application, a finer level of virtualization is needed. Containerization provides this finer-grained virtualization and has become the de facto runtime solution for microservices:

A lightweight runtime: VMs partition the physical server by running a complete operating system as their runtime environment. Considering the scope of microservices, using one VM per microservice results in heavy infrastructure costs; there is, in principle, no need for a completely "new" operating system just to run an application. To reach the scalability required by microservices, virtualization is moved one level closer to the application: containerization virtualizes at the operating-system level, so multiple containers share the same operating system without interfering with each other (a short demonstration follows this list). Figure 2.1 shows how VMs and containers are layered on top of the infrastructure. Each microservice running in its own container gets a separate execution environment while reducing overhead and enabling scalability. Compared to VMs, containers, with their lightweight runtime environments, are the better option for running microservice applications.

Figure 2.1: The VM and container layers on top of the infrastructure

The build and run speed: Hypervisors start VMs on physical servers, and bootstrapping a complete operating system can take a couple of minutes. To work around this, extra idle VMs can be initialized and kept ready for workloads, but at additional cost. Containers, on the other hand, start inside an already-running operating system, with far less overhead, in a couple of seconds. Today's applications are expected to react quickly to spikes in usage levels, and waiting a couple of minutes is not acceptable in most cases. Considering these performance concerns, containers are a better option than physical servers or VMs for running scalable, reliable, and robust applications.
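Both points above are easy to verify on any machine with Docker Engine installed. This is a minimal demonstration, assuming the small alpine image can be pulled from Docker Hub:

# Containers share the host's kernel; this prints the host kernel version:
docker run --rm alpine uname -r

# Container start-up is measured in seconds (the first run also pulls the image):
time docker run --rm alpine echo "container started"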

Microservice architecture prescribes how multiple services are designed and operated, but it does not dictate a runtime. Containers are the most appropriate runtime environment given the scalability, reliability, and responsiveness requirements of today's applications. It is best if you can start your microservices journey with containers from scratch; however, it is also acceptable to have a well-established system, such as Netflix, which runs an entire microservice architecture on AWS instances instead of containers. Container runtimes are standardized under the Container Runtime Interface (CRI), so container orchestrators such as Kubernetes can support different runtimes interchangeably (a quick way to check a node's runtime is shown after the following list). There are open source and licensed container runtimes available on the market, such as Docker Engine, CRI-O, and Kata Containers:

Docker Engine: Started in 2013 and currently maintained by Docker Inc., this is the most widely adopted, popular, and mature runtime, tested by a huge number of users and organizations. It is the best choice if you have recently started containerizing your applications and want a mature environment that is supported by many cloud providers and by Kubernetes.

CRI-O: Sponsored by the Cloud Native Computing Foundation (CNCF) and started in 2016, this is a lightweight, Kubernetes-specific runtime. It is the default runtime of the OpenShift Kubernetes engine; however, it lacks some security features compared to Docker.

Kata Containers: The youngest runtime environment, started in 2017 and backed by Intel. It provides many more security options, but at the cost of extra overhead that reduces overall system performance. Although it is a young environment, it is already supported by Kubernetes and is promising for enterprises because of its additional security options.
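Because all of these runtimes implement the same CRI, Kubernetes can report which one each node uses. Assuming you have kubectl access to a cluster, the wide node listing includes a CONTAINER-RUNTIME column:

# The CONTAINER-RUNTIME column shows, for example, docker://19.3.1 or cri-o://1.17.0:
kubectl get nodes -o wide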

In this chapter, we will focus on Docker Engine, since Kubernetes supports it and it is the most mature and popular container runtime environment. First, we will introduce Docker using a "Hello World" container. Second, we will explain container images and image repositories. We will then continue by presenting methods that can be used to share resources between the host system and the containers. Finally, we will perform an activity that runs a WordPress blog, with a database connection, inside Docker containers.
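As a first taste of what follows, Docker's canonical first container can be started with a single command once Docker Engine is installed; it pulls a tiny image from Docker Hub and prints a welcome message:

docker run hello-world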