Docker for Developers

Using Docker containers

Docker is generally used to create containers, which run your application as if in a headless virtual machine. In fact, on host operating systems that are not Linux-based, Docker effectively runs Linux in a virtual machine and runs your containers within that virtual machine. This is done transparently.

Note:

You don't have to install VirtualBox yourself. Docker is packaged in such a way that it will install or use any already-existing virtualization technology (for example, a hypervisor) for your operating system.

Introduction to containers

Earlier versions of Docker installed VirtualBox to create the virtual machine, but more recent virtualization technology built into the host operating systems allows Docker to use those facilities instead.

Docker for Linux containers expects the host operating system or the virtual machine to be running Linux; the containers share the Linux kernel with the host. Docker can also run native Windows containers in a similar manner, with the Windows kernel shared between the host and guests. For discussion purposes, we'll focus on Linux hosts and guests.

Docker containers are typically used to implement something like headless virtual machines. Using a separate virtual machine for each application is expensive: you must reserve a fixed amount of RAM and disk space for each one. On a MacBook Pro with 16 gigabytes of RAM, you can fit roughly three 4 gigabyte virtual machines running at the same time, since the host operating system also needs some RAM of its own. Starving the host or the guest virtual machines of RAM causes them to swap, which crushes performance:

Figure 2.2 – Docker containers illustrated

Containers are isolated from the host operating system using features of the kernel itself. Containers use the Linux kernel's namespaces feature (https://manpages.debian.org/stretch/manpages/namespaces.7.en.html) to separate the code running in containers from one another, and cgroups (see https://manpages.debian.org/stretch/manpages/cgroups.7.en.html) to limit the resources that a container may use (including RAM and CPU). Containers also use the Linux unionfs filesystem (https://manpages.debian.org/buster/unionfs-fuse/unionfs.8.en.html) to implement the layered filesystem our containers see when running under Docker.

From the point of view of the applications running within it, the container is a whole, dedicated computer; there is no direct communication with the host operating system.

Containers have several advantages over virtual machines:

- Containers do not require a fixed number of virtual CPUs or a dedicated block of RAM each; you are limited only by how much RAM the containers need and how much RAM the host has.
- Containers share the host's Linux kernel, while each virtual machine must have a whole operating system installed!
- You may choose to limit the resources used by a container instance, but this is not required.
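As a sketch of such optional resource limiting (assuming a local Docker daemon and the public nginx image from Docker Hub), flags such as --memory and --cpus translate directly into cgroup limits:

```shell
# Cap the container at 512 MB of RAM and 1.5 CPUs; Docker enforces
# these limits through the kernel's cgroups feature.
docker run -d --name capped-web --memory=512m --cpus=1.5 nginx

# Inspect the limits Docker recorded for the container.
docker inspect capped-web --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'
```

Without these flags, the container simply competes for the host's RAM and CPU like any other process.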

Host resources may be shared with guest containers. For example, the host's network stack can be shared with a container, though this is needed only for applications that require it, such as software that uses the host's Bonjour networking functionality.

The guest containers may expose ports to the host and any computers that can access the host. For example, a container running an HTTP server might expose port 80 and, when the host is accessed at port 80, the container responds.
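A minimal sketch of port publishing, assuming Docker is installed and using the official httpd (Apache) image; the host port 8080 here is an arbitrary choice:

```shell
# Map port 80 inside the container to port 8080 on the host.
docker run -d --name web -p 8080:80 httpd

# Any machine that can reach the host on port 8080 now reaches Apache.
curl http://localhost:8080/
```

The container itself only knows about port 80; the mapping to the host port is handled entirely by Docker.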

Containers have driven the concept of microservices. An application using microservice architecture implements a collection of services that communicate among themselves and the host. These services are meant to be trivial to implement – only the specific code required to support the service needs to be included in the program. It's not uncommon for microservices to be implemented in a single source code file with just a few lines of code.

Container architecture is quite scalable: you can run multiple containers of the same application (horizontal scaling), and you can dedicate more host resources to the container system (vertical scaling). For example, you might create a container running an HTTP server and build a server farm by instantiating as many of these containers as you desire.

Using Docker for development

A great reason to use Docker for development is that you don't have to install any programs, other than Docker itself, on your host to enable development. For example, you can run Apache in a container without installing it on your workstation.

You can also mix and match software versions within your containers. A microservices architecture might require one container to use Node.js version 8 and another container to use Node.js version 10. This is obviously problematic on a single host but straightforward with Docker: one container installs and runs version 8, and another installs and runs version 10.
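One way to sketch this version mixing, using the standard node image tags published on Docker Hub:

```shell
# Each container carries its own Node.js runtime, so both versions
# coexist on the same host without conflict.
docker run --rm node:8 node --version    # prints a v8.x version
docker run --rm node:10 node --version   # prints a v10.x version
```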

During development, you can share your project's development files with the container so that when you edit these files, the container sees that the files have changed.
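A bind mount makes this file sharing concrete; in this sketch, $PWD is assumed to be your project directory and index.js a hypothetical entry point:

```shell
# Mount the current directory into the container at /app and run from there.
# Edits made on the host are visible inside the container immediately.
docker run --rm -it -v "$PWD":/app -w /app node:10 node index.js
```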

Each container has its own set of global environment variables. It's typical to configure the application using environment variables, rather than in source code or configuration files within the container.
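For example (the variable names here are hypothetical), configuration can be injected per container instance at run time:

```shell
# Pass configuration through environment variables rather than baking
# it into the image; each container instance can be configured differently.
docker run --rm -e DB_HOST=db.example.com -e LOG_LEVEL=debug alpine env
```

The final `env` command simply prints the container's environment, confirming that the variables were set inside it.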

When you are ready to deploy or publish a container, you can push it to a container hosting service, such as Docker Hub. In fact, Docker Hub is a terrific source for already-existing containers that may aid you in your project development. There are pre-made container images for MongoDB, Node.js (various versions), Apache, and so on.

Container construction is effectively object-oriented: you inherit from a base container and add the functionality you need to it. You can create a Node.js application in a container by starting with a ready-made Node.js container, installing npm packages into it, and running your custom code inside it.

You can also develop your own base containers. For these, you can start with a ready-made base image for a flavor of Linux. The Alpine Linux base image is popular because it is one of the most lightweight images to start from; there are also base images for Fedora, Ubuntu, Arch Linux, and more. Whichever of these Linux images you start from, you can use that operating system's installation tools to add packages from its official repositories; that is, apt for Ubuntu, yum for Fedora, and so on.
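This inheritance is expressed in a Dockerfile. A minimal sketch (written here as a shell heredoc, with hypothetical package and tag choices) starting from the official Ubuntu base image:

```shell
# Build a small custom base image on top of Ubuntu, installing packages
# with that distribution's own tool (apt).
cat > Dockerfile <<'EOF'
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
EOF

docker build -t my-base-image .
```

The FROM line is the "inheritance"; everything after it is the functionality you layer on top.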

You can also Dockerize an existing application that wasn't designed to run in a container. You can choose a flavor and version of Linux that is compatible with the application, and you can split the application into multiple container images to allow for future scalability.

For example, you might have an older LAMP application that requires specific versions of PHP, MySQL, and Apache, as well as an older version of Ubuntu. You could break this up into a distinct MySQL container and a distinct Apache plus PHP container. Your Apache+PHP containers would use a shared volume so that they all run the same, latest PHP source code. You could set up the MySQL container to use master-slave replication, and set up a load balancer in another container that balances between as many Apache+PHP container instances as you choose.
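A rough sketch of that split, assuming the public mysql and php:5.6-apache images from Docker Hub; the volume, network, and container names here are hypothetical:

```shell
# One shared volume so every Apache+PHP instance serves the same code,
# and one private network so the web containers can reach the database.
docker volume create php-src
docker network create lamp

# A dedicated MySQL container.
docker run -d --name db --network lamp -e MYSQL_ROOT_PASSWORD=secret mysql:5.6

# Two Apache+PHP instances; a load balancer in front of them would
# spread traffic across ports 8081 and 8082.
docker run -d --name web1 --network lamp -v php-src:/var/www/html -p 8081:80 php:5.6-apache
docker run -d --name web2 --network lamp -v php-src:/var/www/html -p 8082:80 php:5.6-apache
```

Adding a third web instance is just one more `docker run`, which is exactly the horizontal scaling described earlier.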

Now it's time for a hands-on example of using Docker for development.